CN113119110A - Method for fusing intelligent action of robot and dynamic pose of target in real time


Info

Publication number
CN113119110A
CN113119110A
Authority
CN
China
Prior art keywords
target
subsystem
pose
vision
mechanical arm
Prior art date
Legal status
Pending
Application number
CN202110288635.6A
Other languages
Chinese (zh)
Inventor
王智慧
Current Assignee
Shanghai Holding Pearl Intelligent Technology Co., Ltd.
Original Assignee
Shanghai Holding Pearl Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Holding Pearl Intelligent Technology Co., Ltd.
Priority to CN202110288635.6A
Publication of CN113119110A
Current legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method for fusing the intelligent actions of a robot and the dynamic pose of a target in real time, involving a system scheduling center, a voice subsystem, a vision subsystem and a mechanical arm subsystem, and comprising the following steps: the system scheduling center polls the voice subsystem for the action type and operation target required by the user until an action type and operation target explicitly confirmed by the user are obtained; the system scheduling center transmits the target to be operated to the vision subsystem, and the vision subsystem begins searching for the target; the system scheduling center transmits the current target pose to the mechanical arm subsystem, and the mechanical arm subsystem performs path design and trajectory planning for the tool center point according to the current pose and moves toward the target. With the invention, the robot supports rich action types, the motion of the mechanical arm is dynamically sensed and recomputed, the trajectory planning is smooth and fast, and the degree of intelligence is high.

Description

Method for fusing intelligent action of robot and dynamic pose of target in real time
Technical Field
The invention relates to the technical field of robots, in particular to a method for fusing intelligent actions of a robot and dynamic poses of targets in real time.
Background
With the rapid development of computer technology, sensor technology, artificial intelligence and related fields, robotics has matured steadily, and the demands on the degree of intelligence with which a robot completes a series of actions keep rising. Robots therefore play an increasingly important role in many industries, such as home service, aerospace and manufacturing, and various robots can complete their work well in specific environments.
However, existing robots still have many shortcomings. Because the robot's working environment is in most cases unknown or uncertain, the robot cannot adjust its motion intelligently in real time according to the dynamic pose of the target. A method for fusing the intelligent actions of a robot and the dynamic pose of a target in real time is therefore provided.
Disclosure of Invention
In order to solve the above technical problems, the technical solution provided by the invention is a method for fusing the intelligent action of a robot and the dynamic pose of a target in real time, comprising the following steps:
The method for the real-time fusion of intelligent actions of a robot and dynamic poses of targets involves a system scheduling center, a voice subsystem, a vision subsystem and a mechanical arm subsystem, and specifically comprises the following steps:
Step one: the system scheduling center polls the voice subsystem for the action type and operation target required by the user until an action type and operation target explicitly confirmed by the user are obtained;
Step two: the system scheduling center transmits the target to be operated to the vision subsystem, and the vision subsystem begins searching for the target; if the search fails, the user is informed through the voice subsystem; if the user confirms termination, the action ends; otherwise the user specifies the target to be operated again and the method returns to step one. If the search succeeds, the system scheduling center obtains the current target pose.
Step three: the system scheduling center transmits the current target pose to the mechanical arm subsystem, and the mechanical arm subsystem performs path design and trajectory planning for the tool center point according to the current pose and moves toward the target.
As an improvement, the mechanical arm subsystem controls the movements of the mechanical arm.
As an improvement, during the motion, when the vision subsystem finds that the target pose has changed it reports to the system scheduling center, which transmits the newly changed target pose to the mechanical arm subsystem in real time; the mechanical arm subsystem readjusts the subsequent path design and the corresponding trajectory planning according to the target pose and the current motion state, ensuring that the motion state at the current position is joined smoothly. If the blend radius at the current joining position is smaller than the minimum blend radius of the mechanical arm, the planning fails and the motion is stopped; the system scheduling center is informed and notifies the user through the voice subsystem that the target pose is abnormal, and the motion stops. If the plan is feasible, the mechanical arm continues to move toward the target. If the target pose changes again, the above steps are repeated until the action required by the user is completed or the planning fails and stops early.
As an improvement, the trajectory planning method is as follows: plan with reference to the minimum blend radius of the mechanical arm; the motion state of the mechanical arm at the current position, which may be non-static, is taken as the initial state, and the trajectory is replanned with a seventh-order polynomial, ensuring that the newly planned acceleration of each joint of the mechanical arm is smooth at the current position; if the planning fails, the motion is stopped.
As an improvement, a vision sensor is arranged in the vision subsystem, and the vision sensor integrates planar vision and depth vision.
As an improvement, when the operation target body has obvious geometric features, the target pose is determined as follows: the planar vision detects three key points on the front face of the operation target body, and the pixel positions of the three key points are back-projected into the depth vision to obtain their real spatial positions. The three key points must not be collinear; if they are collinear, the planar vision detects a new key point to replace any one of the previous ones and back-projects it into the depth vision again. Any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body; this normal vector is the target pose of the operation target body.
As an improvement, when the corner-point geometric features of the operation target body are not obvious, the target pose is determined as follows: three non-collinear key points are manually marked on the front face of the operation target body, the planar vision detects them, and their pixel positions are back-projected into the depth vision to obtain their real spatial positions. Any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body; this normal vector is the target pose of the operation target body.
Compared with the prior art, the invention has the following advantages. The action type of the robot is not limited to a single grasp; the user dynamically specifies any action type supported by the required action library, realizing intelligent action selection. The pose of the target may change dynamically while the mechanical arm is moving; vision senses and computes it in real time, the computation is efficient, and intelligent real-time pose computation is realized. The path design and trajectory planning can change during the motion of the mechanical arm while smooth subsequent trajectories are provided, embodying intelligent real-time trajectory planning. One joint is always kept running at its maximum speed, guaranteeing the rapidity of the action.
Drawings
FIG. 1 is a block diagram of a method for real-time fusion of intelligent actions of a robot and dynamic poses of targets according to the invention.
FIG. 2 is a schematic diagram of initial path planning in the method for fusing the intelligent motion of the robot and the dynamic pose of the target in real time.
FIG. 3 is a schematic diagram of a target coordinate system in the method for real-time fusion of the intelligent motion of the robot and the dynamic pose of the target.
FIG. 4 is a schematic diagram of dynamic path planning in the method for real-time fusion of intelligent actions of the robot and dynamic poses of the targets.
In the figures: 1. system dispatching center; 2. voice subsystem; 3. vision subsystem; 4. mechanical arm subsystem.
Detailed Description
The method for fusing the intelligent action of the robot and the dynamic pose of the target in real time is further described in detail with reference to the accompanying drawings.
As shown in fig. 1, the method for fusing intelligent actions of a robot and dynamic poses of targets in real time involves a system scheduling center 1, a voice subsystem 2, a vision subsystem 3 and a mechanical arm subsystem 4, with the following specific steps:
the method comprises the following steps: the system dispatching center 1 polls the action type and the operation target needed by the user to the voice subsystem 2 until the action type and the operation target explicitly fed back by the user are obtained;
step two: the system dispatching center 1 transmits the target to be operated to the vision subsystem 3, and the vision subsystem 3 starts to search the target; if the search is failed, informing the user through the voice subsystem 2, determining the termination of the user, ending the action, appointing the target to be operated again by the user, and returning to the first step; if the search is successful, the system dispatching center 1 obtains the current target pose.
Step three: the system dispatching center 1 transmits the current target pose to the mechanical arm subsystem 4, and the mechanical arm subsystem 4 carries out path design and trajectory planning on the tool center point according to the current pose and moves to the target.
In this embodiment, as shown in fig. 1, the mechanical arm subsystem 4 controls the movements of the mechanical arm.
In this embodiment, as shown in fig. 1, during the motion, when the vision subsystem 3 finds that the target pose has changed it reports to the system scheduling center 1, which transmits the newly changed target pose to the mechanical arm subsystem 4 in real time; the mechanical arm subsystem 4 readjusts the subsequent path design and the corresponding trajectory planning according to the target pose and the current motion state, ensuring that the motion state at the current position is joined smoothly. If the blend radius at the current joining position is smaller than the minimum blend radius of the mechanical arm, the planning fails and the motion is stopped; the system dispatching center 1 is informed and notifies the user through the voice subsystem 2 that the target pose is abnormal, and the motion stops. If the plan is feasible, the mechanical arm continues to move toward the target. If the target pose changes again, the above steps are repeated until the action required by the user is completed or the planning fails and stops early.
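The reporting and replanning cycle described above can be sketched as follows; this is a minimal illustration assuming hypothetical methods (poll_pose_change, replan, current_state) and a plan object exposing the blend radius at the joining position, none of which are specified by the invention.

```python
# Hypothetical sketch of the dynamic replanning cycle; method names and
# the blend-radius test are illustrative assumptions.
def track_target(vision, arm, voice, initial_pose):
    arm.plan_and_start(initial_pose)
    while not arm.motion_finished():
        new_pose = vision.poll_pose_change()   # None while the pose is unchanged
        if new_pose is None:
            continue
        # Replan the remaining path from the arm's current motion state.
        plan = arm.replan(new_pose, arm.current_state())
        if plan.blend_radius < arm.min_blend_radius:
            arm.stop()                         # planning failed
            voice.notify("Target pose abnormal; motion stopped.")
            return False
        arm.execute(plan)                      # continue toward the target
    return True
```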
In this embodiment, as shown in fig. 1, the trajectory planning method is as follows: plan with reference to the minimum blend radius of the mechanical arm; the motion state of the mechanical arm at the current position, which may be non-static, is taken as the initial state, and the trajectory is replanned with a seventh-order polynomial, ensuring that the newly planned acceleration of each joint of the mechanical arm is smooth at the current position; if the planning fails, the motion is stopped.
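A seventh-order polynomial has eight coefficients, enough to impose position, velocity, acceleration and jerk at both ends of the segment, which is what allows replanning from a non-static initial state with smooth acceleration. A minimal per-joint sketch, assuming the boundary conditions and the segment duration T are given:

```python
import numpy as np

def septic_coeffs(T, q0, v0, a0, j0, q1, v1, a1, j1):
    """Coefficients c[0..7] of q(t) = sum(c_k * t**k) on [0, T] matching
    position, velocity, acceleration and jerk at both ends."""
    rows, rhs = [], []
    for t, bc in ((0.0, (q0, v0, a0, j0)), (T, (q1, v1, a1, j1))):
        for order, val in enumerate(bc):
            row = [0.0] * 8
            for k in range(order, 8):
                coef = 1.0
                for m in range(order):          # k*(k-1)*...*(k-order+1)
                    coef *= (k - m)
                row[k] = coef * t ** (k - order)
            rows.append(row)
            rhs.append(val)
    return np.linalg.solve(np.array(rows), np.array(rhs))

# Example: replan one joint from a moving state (q = 0.2 rad, v = 0.5 rad/s)
# to rest at q = 1.0 rad in 2 s with zero final velocity/acceleration/jerk.
c = septic_coeffs(2.0, 0.2, 0.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
q_mid = np.polynomial.polynomial.polyval(1.0, c)  # joint position at t = 1 s
```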
In this embodiment, as shown in fig. 1, a vision sensor is provided in the vision subsystem 3, and the vision sensor integrates planar vision and depth vision.
In this embodiment, as shown in fig. 1, when the operation target body has obvious geometric features, the target pose is determined as follows: three key points on the front face of the operation target body are detected by the planar vision, and the pixel positions of the three key points are back-projected into the depth vision to obtain their real spatial positions. The three key points must not be collinear; if they are collinear, the planar vision detects a new key point to replace any one of the previous ones and back-projects it into the depth vision again. Any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body; this normal vector is the target pose of the operation target body.
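The cross-product computation of the front normal, together with the collinearity check, can be sketched as follows, assuming the three key points have already been back-projected to their real spatial positions:

```python
import numpy as np

def front_normal(p1, p2, p3, eps=1e-6):
    """Unit normal of the target's front face from three non-collinear
    key points given as 3D positions."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v1, v2 = p2 - p1, p3 - p1
    n = np.cross(v1, v2)
    norm = np.linalg.norm(n)
    if norm < eps * np.linalg.norm(v1) * np.linalg.norm(v2):
        raise ValueError("key points are (nearly) collinear; re-detect one")
    return n / norm  # sign convention chosen to face the gripper approach

# Example with three points lying on the plane z = 0.5:
n = front_normal([0.1, 0.0, 0.5], [0.3, 0.0, 0.5], [0.1, 0.2, 0.5])
# n is (0, 0, 1) up to sign
```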
In this embodiment, as shown in fig. 1, when the corner-point geometric features of the operation target body are not obvious, the target pose is determined as follows: three non-collinear key points are manually marked on the front face of the operation target body, the planar vision detects them, and their pixel positions are back-projected into the depth vision to obtain their real spatial positions. Any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body; this normal vector is the target pose of the operation target body.
The working principle of the invention is as follows. The action type of the robot is not limited to a single grasp but is dynamically specified by the user; by having the system scheduling center 1 poll for the action type and operation target required by the user, any action type supported by the required action library can be realized, achieving intelligent action selection. The target pose may change dynamically during the motion of the mechanical arm; the vision sensor senses it in real time and solves the target pose with a specific algorithm, the computation is efficient, and intelligent real-time pose solving is achieved. The trajectory plan can change during the motion of the mechanical arm while a smooth subsequent trajectory is provided, embodying intelligent real-time trajectory planning. In the trajectory planning, with reference to the minimum blend radius of the mechanical arm, one joint is always kept running at its maximum speed, guaranteeing the rapidity of the action.
In the trajectory planning of the invention, taking a 6-axis mechanical arm as an example, the joints are numbered 1 to 6. The angular displacement of each joint from the initial position to the target position is found, and the joint with the maximum angular displacement is taken as the master joint; in non-transition planning the master joint is kept at its maximum motion speed. If during planning some joint exceeds its corresponding maximum speed at some moment, that joint becomes the master joint and is kept at its maximum motion speed for the subsequent planning, and so on until the trajectory planning is finished.
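Under a constant-speed model (acceleration limits and transition blending are omitted for brevity), the master-joint selection can be sketched as follows; the joint limits here are illustrative values:

```python
import numpy as np

def master_joint_time(q_start, q_goal, v_max):
    """Pick the master joint (largest angular displacement) and return the
    segment duration with that joint at its maximum speed; if another joint
    would then exceed its own limit, it becomes the master joint instead."""
    dq = np.abs(np.asarray(q_goal, float) - np.asarray(q_start, float))
    v_max = np.asarray(v_max, float)
    master = int(np.argmax(dq))
    T = dq[master] / v_max[master]           # master joint at full speed
    if np.any(dq / T > v_max):               # some joint over its limit
        master = int(np.argmax(dq / v_max))  # the binding joint takes over
        T = dq[master] / v_max[master]
    return master, T

# Example for a 6-axis arm (displacements in rad, limits in rad/s):
m, T = master_joint_time([0.0] * 6,
                         [0.8, 1.6, 0.4, 0.2, 1.0, 0.6],
                         [2.0, 1.0, 2.0, 3.0, 2.0, 2.0])
# joint 2 (index 1) is the master joint and T = 1.6 s
```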
As shown in FIG. 2, the initial path is planned as follows. The path is designed in Cartesian space; the entire path lies on a plane P determined by the point T and the line BG, with unit normal vector n̂_P. The included angle between the spatial segment BG and the normal of the front face of the target is acute, and T is the real-time position of the tool center point (TCP) at this moment. BG is parallel to the front normal n̂ of the target, and its length is an empirical value δ, so that the gripper avoids contact with the target. AB is a spatial arc with minor-arc angle θ, arc center C and radius r (an empirical value), which guarantees the smoothness of the trajectory; point A and point B are tangent points, and TA is a spatial straight segment.
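A minimal sketch of this initial path construction; the sign conventions (B offset from G along the front normal, arc center on the T side of line BG) are assumptions consistent with the description above, and the tangent point A is obtained by the standard tangent-from-an-external-point construction:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def initial_path(T, G, n_front, delta, r):
    """Points B, C, A of the T-A-B-G path of FIG. 2 under assumed sign
    conventions: B = G + delta * n_front; arc AB of radius r tangent to
    BG at B; TA tangent to the arc at A."""
    B = G + delta * unit(n_front)           # approach point offset from G
    n_P = unit(np.cross(B - T, G - T))      # unit normal of plane P (T, B, G)
    bg = unit(G - B)
    u = unit((T - B) - np.dot(T - B, bg) * bg)  # in plane P, perp. to BG, T side
    C = B + r * u                           # arc center, tangent to BG at B
    d = np.linalg.norm(T - C)               # T is outside the circle (d > r)
    e1 = unit(T - C)
    e2 = np.cross(n_P, e1)                  # in-plane unit direction, perp. e1
    alpha = np.arccos(r / d)                # angle from C->T to C->A
    cands = [C + r * (np.cos(alpha) * e1 + s * np.sin(alpha) * e2)
             for s in (+1.0, -1.0)]
    A = min(cands, key=lambda P: np.linalg.norm(P - B))  # side toward B
    return B, C, A, n_P

# Example: target at the origin with front normal +z, TCP starting at T.
B, C, A, n_P = initial_path(np.array([0.4, 0.0, 0.6]), np.zeros(3),
                            np.array([0.0, 0.0, 1.0]), delta=0.10, r=0.05)
```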
As shown in fig. 3, the target coordinate system {G} is determined as follows. Three spatial points P1, P2 and P3 are obtained by vision; the position of G is the midpoint of P1 and P3, or is given by the specific geometry of the target. The two direction vectors and the resulting normal are:

v1 = P2 - P1
v2 = P3 - P1
n̂ = (v1 × v2)/|v1 × v2|

The front normal n̂ is the normal vector of the front face; its sign is chosen so that its included angle with the approach direction of the gripper is acute. The parametric equation of a point P on the line BG is:

P(t) = B + t·(G - B), t ∈ [0, 1]
in the coordinate system { C }, basis vectors
Figure BDA0002981490030000049
On the plane P of the plane,
Figure BDA00029814900300000410
⊥BG;
Figure BDA00029814900300000411
t is P; base vector
Figure BDA00029814900300000412
∥BG。
Figure BDA0002981490030000051
Figure BDA0002981490030000052
Figure BDA0002981490030000053
The parametric equation of a certain point P on the arc is:
Figure BDA0002981490030000054
The direction vector of the segment TA is:

l_TA = (A - T)/|A - T|

and the parametric equation of a point P on it is:

P(t) = T + t·(A - T), t ∈ [0, 1]

Similarly, the parametric equation of a point P on the segment GB is:

P(t) = G + t·(B - G), t ∈ [0, 1]
as shown in fig. 4, in the process of moving to the target according to the fixed path, the gripper moves to a point T at a certain time T, and the posture of the target object is visually found to have changed, the thin line in the figure is the original path, and the thick line is the new path.
Given the trajectory update rate H, the path is modified from time n/H (n ≥ 0), i.e. it is replanned from the last planned path point S; the new path SDEABG is shown in the figure. Point S, point D, point E, point A and point B are all spatial tangent points, and the planning of the sub-path AB must be changed correspondingly together with the new sub-path SDE. The value of n is related to the computing capacity: if the new plan, or a sufficient part of the trajectory, can be completed within one trajectory update period, then n = 0 and the real-time performance is good.
The dynamic sub-path SDEAB is planned as follows.
1. Construct the plane PL0 through point S, point B and point G; let P0 denote a point in the plane PL0, and let the normal vector n0 point upward:

n0 = (B - S) × (G - S)   (sign chosen so that n0 points upward)

Let:

a0 = n0x, b0 = n0y, c0 = n0z
d0 = -(n0x·Sx + n0y·Sy + n0z·Sz)

to obtain the plane PL0:

a0·P0x + b0·P0y + c0·P0z + d0 = 0
2. Denote the original point A as A0, and construct the projection point A0′ of A0 onto the plane PL0, so that the vector from A0 to A0′ is parallel to the normal vector n0.

Let:

t0 = (a0·A0x + b0·A0y + c0·A0z + d0)/(a0² + b0² + c0²)

to obtain:

A′0x = A0x - a0·t0, A′0y = A0y - b0·t0, A′0z = A0z - c0·t0
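Steps 1 and 2 amount to fitting a plane through three points and taking the foot of the perpendicular from a point to that plane; a minimal sketch of both formulas:

```python
import numpy as np

def plane_from_points(S, B, G):
    """Plane a*x + b*y + c*z + d = 0 through S, B and G (step 1); the sign
    of the normal can be flipped afterwards so that it points upward."""
    S = np.asarray(S, float)
    n = np.cross(np.asarray(B, float) - S, np.asarray(G, float) - S)
    a, b, c = n
    d = -float(np.dot(n, S))
    return a, b, c, d

def project_to_plane(P, plane):
    """Foot of the perpendicular from P onto the plane (step 2):
    t = (a*Px + b*Py + c*Pz + d) / (a^2 + b^2 + c^2), P' = P - t*(a, b, c)."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    t = (np.dot(n, P) + d) / np.dot(n, n)
    return np.asarray(P, float) - t * n

# Example: project the original point A0 onto the plane PL0 through S, B, G.
PL0 = plane_from_points([0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0])
A0_proj = project_to_plane([0.5, 0.5, 2.0], PL0)  # -> (0.5, 0.5, 1.0)
```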
3. Construct the plane PL1 through point S, point A0 and point A0′; let P1 denote a point in the plane PL1. Proceeding as in step 1, the plane PL1 is obtained:

a1·P1x + b1·P1y + c1·P1z + d1 = 0
4. Extend the straight line GB to intersect the plane PL1 at the point G1′. Let A1 = A0′ and

t1 = -(a1·Gx + b1·Gy + c1·Gz + d1)/(a1·lGBx + b1·lGBy + c1·lGBz)

where lGB is the direction vector of GB. The intersection point G1′ is:

G1x = Gx + lGBx·t1, G1y = Gy + lGBy·t1, G1z = Gz + lGBz·t1
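Step 4 is a standard line-plane intersection; a minimal sketch under the same plane representation:

```python
import numpy as np

def line_plane_intersection(G, l_dir, plane):
    """Intersection of the line P(t) = G + t * l_dir with the plane
    a*x + b*y + c*z + d = 0, using
    t1 = -(a*Gx + b*Gy + c*Gz + d) / (a*lx + b*ly + c*lz)."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    denom = float(np.dot(n, l_dir))
    if abs(denom) < 1e-12:
        raise ValueError("line is (nearly) parallel to the plane")
    t1 = -(float(np.dot(n, G)) + d) / denom
    return np.asarray(G, float) + t1 * np.asarray(l_dir, float)

# Example: extend a line along +z from the origin to meet the plane z = 1,
# i.e. (a, b, c, d) = (0, 0, 1, -1).
G1p = line_plane_intersection([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                              (0.0, 0.0, 1.0, -1.0))  # -> (0, 0, 1)
```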
5. In the plane PL1, construct an arc of radius r with point S as a tangent point, the circle center lying on the same side of the original path as point G1′. From:

a1·C1x + b1·C1y + c1·C1z + d1 = 0   (C1 lies in PL1)
SC1 ⊥ the motion direction at S   (tangency at S)
|SC1| = r

the circle center C1 is obtained. Referring to the arc planning of the original path, the parametric equation of a point P1 on the arc is:

P1(β1) = C1 + r·cos β1·x̂1 + r·sin β1·ŷ1

where x̂1 and ŷ1 are orthonormal basis vectors in the plane PL1.
6. In the plane PL1, find the tangent point E1, where point E1 is the point of tangency between the circle and the straight line E1G1′. From:

a1·E1x + b1·E1y + c1·E1z + d1 = 0   (E1 lies in PL1)
E1C1 ⊥ E1G1′   (tangency)
|E1C1| = r

the tangent point E1 is obtained.
7. Construct the oblique cylindrical surface SE1 with the circular arc SE1 as its directrix and with generatrix direction ĝ. A point P on the oblique cylindrical surface SE1 then has the parametric equation:

P(β1, s) = P1(β1) + s·ĝ

where P1(β1) is a point of the directrix arc SE1 and s is the signed distance along the generatrix.
8. Construct the plane PL2 through point E1, point B and point G; let P2 denote a point in the plane PL2, with the normal vector n2 pointing upward. Proceeding as in step 1, the plane PL2 is obtained:

a2·P2x + b2·P2y + c2·P2z + d2 = 0
9. Construct the intersection line lE1 of the oblique cylindrical surface SE1 and the plane PL2; it passes through the tangent point E1, and its direction vector is parallel to the tangent direction E1G1′ of the arc at E1.
10. Project the end point S of the straight line TS along the n2 direction onto the plane PL2, giving the projection point S2′:

t2 = (a2·Sx + b2·Sy + c2·Sz + d2)/(a2² + b2² + c2²)
S′2x = Sx - a2·t2
S′2y = Sy - b2·t2
S′2z = Sz - c2·t2

The coordinates of T2′ are obtained in the same way. If the point S coincides with the point T, the original straight line SA is used for the projection.
11. In the plane PL2, construct two circles of radius r, tangent to the segment GB at the point B and to the segment T2′S2′ at the point S2′ respectively. The positions of the circle centers are determined according to the actual situation so as to ensure a smooth and shortest path. Referring to the arc planning above, the parametric equation of a point P2S on the arc through the tangent point S2′ is:

P2S(β2S) = C2S + r·cos β2S·x̂2S + r·sin β2S·ŷ2S

and the parametric equation of a point P2B on the arc through the tangent point B is:

P2B(β2B) = C2B + r·cos β2B·x̂2B + r·sin β2B·ŷ2B

where C2S and C2B are the respective circle centers and the hatted vectors are orthonormal basis vectors in the plane PL2.
12. In the plane PL2, construct the common tangent D2A of the two circles, where point D2 and point A are the respective tangent points, so that the path is smooth and shortest. The parameters β2S and β2B of the two tangent points are easily obtained from the tangency condition; substituting them into the parametric equations of step 11 gives the coordinates of point D2 and point A together with the direction vector, which yields the parametric equation of the straight line lD2A.
13. Compute the coordinates of the intersection point E of the straight line lD2A and the intersection line lE1.
14. Project the curve S2′E along the generatrix direction onto the oblique cylindrical surface SE1 to obtain the sub-path segment SE.
In steps 11 to 14, the projection of the tangent point D2 onto the oblique cylindrical surface SE1 may also lie in front of the point E, and this case needs to be handled accordingly. The new dynamic path plan thus keeps the smooth-and-shortest property under the existing path conditions.
The invention and its embodiments have been described above; the description is not restrictive, and the drawings show only one embodiment, the actual structure not being limited thereto. In summary, if those of ordinary skill in the art, inspired by the disclosure, design without inventive effort structural modes and embodiments similar to the technical solution without departing from the spirit of the invention, these shall all fall within the scope of protection of the invention.

Claims (7)

1. A method for fusing intelligent actions of a robot and dynamic poses of targets in real time, characterized in that the method involves a system dispatching center (1), a voice subsystem (2), a vision subsystem (3) and a mechanical arm subsystem (4), and specifically comprises the following steps:
Step one: the system scheduling center (1) polls the voice subsystem (2) for the action type and operation target required by the user until an action type and operation target explicitly confirmed by the user are obtained;
Step two: the system scheduling center (1) transmits the target to be operated to the vision subsystem (3), and the vision subsystem (3) begins searching for the target; if the search fails, the user is informed through the voice subsystem (2); if the user confirms termination, the action ends; otherwise the user specifies the target to be operated again and the method returns to step one; if the search succeeds, the system dispatching center (1) obtains the current target pose;
Step three: the system dispatching center (1) transmits the current target pose to the mechanical arm subsystem (4), and the mechanical arm subsystem (4) performs path design and trajectory planning for the tool center point according to the current pose and moves toward the target.
2. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 1, characterized in that: the mechanical arm subsystem (4) controls the movements of the mechanical arm.
3. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 2, characterized in that: during the motion, when the vision subsystem (3) finds that the target pose has changed it reports to the system scheduling center (1), which transmits the newly changed target pose to the mechanical arm subsystem (4) in real time; the mechanical arm subsystem (4) readjusts the subsequent path design and the corresponding trajectory planning according to the target pose and the current motion state, ensuring that the motion state at the current position is joined smoothly; if the blend radius at the current joining position is smaller than the minimum blend radius of the mechanical arm, the planning fails and the motion is stopped; the system scheduling center (1) is informed and notifies the user through the voice subsystem (2) that the target pose is abnormal, and the motion stops; if the plan is feasible, the mechanical arm continues to move toward the target; if the target pose changes again, the above steps are repeated until the action required by the user is completed or the planning fails and stops early.
4. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 2, characterized in that the trajectory planning method is as follows: plan with reference to the minimum blend radius of the mechanical arm; the motion state of the mechanical arm at the current position, which may be non-static, is taken as the initial state, and the trajectory is replanned with a seventh-order polynomial, ensuring that the newly planned acceleration of each joint of the mechanical arm is smooth at the current position; if the planning fails, the motion is stopped.
5. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 4, characterized in that: a vision sensor is arranged in the vision subsystem (3), and the vision sensor integrates planar vision and depth vision.
6. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 4, characterized in that, when the operation target body has obvious geometric features, the target pose is determined as follows: the planar vision detects three key points on the front face of the operation target body, and the pixel positions of the three key points are back-projected into the depth vision to obtain their real spatial positions; the three key points must not be collinear, and if they are collinear the planar vision detects a new key point to replace any one of the previous ones and back-projects it into the depth vision again; any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body, which is the target pose of the operation target body.
7. The method for the real-time fusion of the intelligent action of a robot and the dynamic pose of a target according to claim 1, characterized in that, when the corner-point geometric features of the operation target body are not obvious, the target pose is determined as follows: three non-collinear key points are manually marked on the front face of the operation target body, the planar vision detects them, and their pixel positions are back-projected into the depth vision to obtain their real spatial positions; any two vectors formed from the three key points are taken, and their cross product yields the normal vector of the front face of the operation target body, which is the target pose of the operation target body.
CN202110288635.6A 2021-03-18 2021-03-18 Method for fusing intelligent action of robot and dynamic pose of target in real time Pending CN113119110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110288635.6A CN113119110A (en) 2021-03-18 2021-03-18 Method for fusing intelligent action of robot and dynamic pose of target in real time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110288635.6A CN113119110A (en) 2021-03-18 2021-03-18 Method for fusing intelligent action of robot and dynamic pose of target in real time

Publications (1)

Publication Number Publication Date
CN113119110A true CN113119110A (en) 2021-07-16

Family

ID=76773374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110288635.6A Pending CN113119110A (en) 2021-03-18 2021-03-18 Method for fusing intelligent action of robot and dynamic pose of target in real time

Country Status (1)

Country Link
CN (1) CN113119110A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114179085A (en) * 2021-12-16 2022-03-15 Method and system for robot control, track connection and smoothing
CN114179085B (en) * 2021-12-16 2024-02-06 上海景吾智能科技有限公司 Robot control, track connection and smoothing method and system

Similar Documents

Publication Publication Date Title
JP6766186B2 (en) How to plan the trajectory of point-to-point movement in robot joint space
US6845295B2 (en) Method of controlling a robot through a singularity
US11667035B2 (en) Path-modifying control system managing robot singularities
CN109834706B (en) Method and device for avoiding kinematic singularities in robot motion planning
JP5025598B2 (en) Interference check control apparatus and interference check control method
Dahari et al. Forward and inverse kinematics model for robotic welding process using KR-16KS KUKA robot
JP2014024162A (en) Robot system, robot control device, robot control method and robot control program
CN105382835A (en) Robot path planning method for passing through wrist singular point
CN109623825B (en) Movement track planning method, device, equipment and storage medium
CN108582071A (en) A kind of method of industrial robot programming route diagnosis and speed-optimization
Corinaldi et al. Singularity-free path-planning of dexterous pointing tasks for a class of spherical parallel mechanisms
US20060255758A1 (en) Teaching data preparing method for articulated robot
Žlajpah On orientation control of functional redundant robots
CN113119110A (en) Method for fusing intelligent action of robot and dynamic pose of target in real time
US11003177B2 (en) Apparatus and method for generating robot program
CN113858205A (en) Seven-axis redundant mechanical arm obstacle avoidance algorithm based on improved RRT
CN112405527A (en) Method for processing arc track on surface of workpiece and related device
JP2013059815A (en) Positioning posture interpolation method and control device for robot
Li et al. Kinematics Modelling and Experimental Analysis of a Six-Joint Manipulator.
JP6743791B2 (en) Robot system and work manufacturing method
JP7144754B2 (en) Articulated robots and articulated robot systems
CN114378830B (en) Robot wrist joint singular avoidance method and system
Sun et al. Development of the “Quad-SCARA” platform and its collision avoidance based on Buffered Voronoi Cell
WO2021250923A1 (en) Robot system, control device, and control method
JP2009056593A (en) Gripping control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210716)