CN116652543A - Visual impedance control method and system for automatic product assembly and robot - Google Patents


Info

Publication number: CN116652543A
Application number: CN202310724803.0A
Authority: CN (China)
Prior art keywords: coordinate system, gripper, pixel, representing, product
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 张海涛, 林宇, 吴天宇, 矫海潮, 荣玉琪
Current and original assignee: Huazhong University of Science and Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Application filed by Huazhong University of Science and Technology; priority to CN202310724803.0A


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B23 - MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23P - METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P19/00 - Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P19/001 - Article feeders for assembling machines
    • B23P19/007 - Picking-up and placing mechanisms
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1607 - Calculation of inertia, jacobian matrixes and inverses
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J9/1687 - Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a visual impedance control method for automatic product assembly, together with a corresponding system and robot, belonging to the technical field of automatic product-assembly control. The control method comprises the following steps: acquire the pixel-coordinate set s_t of a plurality of feature points in the base coordinate system, the pixel-coordinate set s_o in the gripper coordinate system, and the interaction force h_o between the gripper and the environment, and determine from the impedance control law the desired velocity v_d and desired acceleration v̇_d of the gripper; adjust the angular acceleration a based on a PD algorithm so that the actual motion state of the gripper follows the desired motion state; determine the desired moment μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o; update the joint angle ξ, angular velocity ξ̇ and interaction force h_o of the gripper based on the desired moment μ, and redetermine the actual position x and actual velocity v of the gripper. Through this process the motion state and interaction force of the gripper are adjusted dynamically, ensuring that the product is gripped stably without damaging its performance.

Description

Visual impedance control method and system for automatic product assembly and robot
Technical Field
The invention belongs to the technical field of automatic product-assembly control, and in particular relates to a visual impedance control method for automatic product assembly, together with a corresponding system and robot.
Background
Product assembly generally requires applying a force to grasp the product on the production line and moving it along a suitable path to the mounting device to complete installation. In the past, assembly was done manually by workers; with the rapid development of industrial automation, more and more assembly work has been transferred to machines. Although an assembly machine can rapidly complete a prescribed assembly task, its control of the gripping interaction force is not accurate: during movement, if the gripping force is too large the product is easily deformed or damaged by squeezing, and if it is too small the product easily drops, seriously affecting the speed and quality of the whole automatic assembly line. At present the interaction force applied while the gripper moves the product is generally constant; in practice, however, the required force depends on the motion state of the product, and if the force cannot be adjusted in time according to that state, the above problems readily occur.
Therefore, it is necessary to design a suitable automatic-assembly scheme that mitigates these problems, so as to improve the speed and quality of overall product assembly and raise the level of industrial automation.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides a visual impedance control method for automatic product assembly, together with a system and robot, aiming to adjust the interaction force between gripper and product in real time while the gripper grasps and moves the product, keeping the grasp stable without damaging the product's performance, and improving assembly speed and quality.
To achieve the above object, according to one aspect of the present invention, there is provided a visual impedance control system for automatic assembly of a product, comprising:
a visual impedance control module, used to acquire the pixel-coordinate set s_t of a plurality of feature points in the base coordinate system, the pixel-coordinate set s_o in the gripper coordinate system, and the interaction force h_o between the gripper and the environment, and to determine from the impedance control law the desired velocity v_d and desired acceleration v̇_d of the gripper; the feature points lie on the target part to be reached, the base coordinate system is defined on the target part, and the gripper coordinate system is defined on the gripper; ė denotes the image-error change rate, v_c is the camera movement velocity, M_o, D_o and K are symmetric positive-definite matrices of virtual mass, damping and stiffness respectively, J_o and J_c are the Jacobian matrices of the gripper and the camera respectively, and k_s is the gain factor;
a Cartesian motion control module, used to adjust the angular acceleration a based on a PD algorithm so that the actual motion state of the gripper follows the desired motion state, the desired motion state comprising the desired velocity v_d and the desired acceleration v̇_d;
an inverse kinematics module, used to determine the desired moment μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o;
an environment interaction module, used to update the joint angle ξ, angular velocity ξ̇ and interaction force h_o of the gripper based on the desired moment μ;
a forward kinematics module, used to determine the actual position x and actual velocity v of the gripper from its updated joint angle ξ and angular velocity ξ̇.
In one embodiment, the system further comprises a vision processing module, which comprises:
a pixel coordinate unit, used to photograph the gripper and the target part with a camera, obtaining the gripper-coordinate-system origin o, the base-coordinate-system origin t and the pixel coordinates of each feature point in the same pixel coordinate system;
a pixel-coordinate-set determination unit, used to compute the pixel-coordinate set s_t = [s_t.1, s_t.2, …, s_t.n]^T of the feature points in the base coordinate system from the base-coordinate-system origin and the pixel coordinates of each feature point in the pixel coordinate system, and the pixel-coordinate set s_o = [s_o.1, s_o.2, …, s_o.n]^T in the gripper coordinate system from the gripper-coordinate-system origin and the pixel coordinates of each feature point, where s_t.i is the pixel coordinate of the i-th feature point in the base coordinate system, s_o.i is the pixel coordinate of the i-th feature point in the gripper coordinate system, and n is the number of feature points:

s_t.i = (x_i, y_i) - (x_t, y_t)
s_o.i = (x_i, y_i) - (x_o, y_o)

where (x_i, y_i) are the pixel coordinates of the i-th feature point in the pixel coordinate system, (x_t, y_t) those of the base-coordinate-system origin, and (x_o, y_o) those of the gripper-coordinate-system origin.
In one embodiment, the vision processing module further comprises:
a coordinate conversion unit, used to convert pixel coordinates into coordinates in the world coordinate system based on the coordinate conversion model:

z_c · [u, v, 1]^T = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]] · [R | t] · [X, Y, Z, 1]^T

where z_c is the distance of the camera optical centre along the z direction of the camera coordinate system, u and v are pixel coordinates in the pixel coordinate system, dx and dy are the width and height of a pixel, u_0 and v_0 are the abscissa and ordinate of the image-coordinate-system origin in the pixel coordinate system, f is the focal length, R and t are the rotation matrix and translation vector of the homogeneous coordinate transformation, and X, Y, Z are coordinates in the world coordinate system.
In one embodiment, the Cartesian motion control module is used to obtain the desired position x_d, desired velocity v_d and desired acceleration v̇_d of the gripper and, according to the PD algorithm, to adjust the actual acceleration v̇ of the gripper from the actual position x and actual velocity v:

v̇ = v̇_d + K_P (x_d - x) + K_D (v_d - v)

so that the actual position x and actual velocity v approach the desired position x_d and desired velocity v_d, and then to determine the angular acceleration a from the actual acceleration v̇; K_P is the proportional coefficient and K_D the differential coefficient.
In one embodiment, the number of the feature points is greater than or equal to 3, so that when the origin of the gripper coordinate system reaches the origin of the base coordinate system, the gripper coordinate system coincides with the base coordinate system.
In one embodiment, the gripper is fitted with a clamp by which the gripper grips the component to be assembled and moves to the target component for assembly.
In one embodiment, the product is a 3C product.
According to another aspect of the present invention, there is provided a visual impedance control method for automatic assembly of a product, comprising:
Step S1: acquire the pixel-coordinate set s_t of a plurality of feature points in the base coordinate system, the pixel-coordinate set s_o in the gripper coordinate system, and the interaction force h_o between the gripper and the environment, and determine from the impedance control law the desired velocity v_d and desired acceleration v̇_d of the gripper; the feature points lie on the target part to be reached, the base coordinate system is defined on the target part, and the gripper coordinate system is defined on the gripper; ė denotes the image-error change rate, v_c is the camera movement velocity, M_o, D_o and K are symmetric positive-definite matrices of virtual mass, damping and stiffness respectively, J_o and J_c are the Jacobian matrices of the gripper and the camera respectively, and k_s is the gain factor.
Step S2: adjust the angular acceleration a based on the PD algorithm so that the actual motion state of the gripper follows the desired motion state, which comprises the desired velocity v_d and the desired acceleration v̇_d.
Step S3: determine the desired moment μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o.
Step S4: update the joint angle ξ, angular velocity ξ̇ and interaction force h_o of the gripper based on the desired moment μ.
Step S5: determine the actual position x and actual velocity v of the gripper from its latest joint angle ξ and angular velocity ξ̇.
Step S6: judge whether the base coordinate system and the gripper coordinate system coincide; if so, end the movement, otherwise return to step S1.
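The loop of steps S1–S6 can be sketched as follows. This is a toy, self-contained skeleton on a 2-D point "gripper": `control_law` and `pd_track` are simplified stand-ins for the patent's impedance and PD modules (the actual control law is not reproduced here), and steps S3–S5 (inverse kinematics, environment interaction, forward kinematics) are collapsed into one Euler integration step.

```python
import numpy as np

K_IMP = 1.0  # illustrative impedance-style gain

def control_law(e, v):
    """Simplified stand-in for S1: the desired velocity shrinks the image
    error; the desired acceleration is the time derivative of that velocity."""
    v_d = -K_IMP * e
    a_d = -K_IMP * v          # since the error e evolves with velocity v
    return v_d, a_d

def pd_track(x, v, x_d, v_d, a_d, Kp=25.0, Kd=10.0):
    """S2: PD tracking of the desired motion state -> commanded acceleration."""
    return a_d + Kp * (x_d - x) + Kd * (v_d - v)

def assembly_loop(x0, target, dt=0.01, tol=1e-3, max_iters=10_000):
    """Drive the gripper frame onto the base frame (S6 checks coincidence)."""
    x = np.asarray(x0, dtype=float)
    target = np.asarray(target, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_iters):
        e = x - target                        # S1: image error (toy: position error)
        if np.linalg.norm(e) < tol:           # S6: frames coincide -> stop
            break
        v_d, a_d = control_law(e, v)          # S1: desired motion state
        a = pd_track(x, v, target, v_d, a_d)  # S2: commanded acceleration
        v = v + a * dt                        # S3-S4: toy joint/environment update
        x = x + v * dt                        # S5: new actual position
    return x
```

Under these illustrative gains, `assembly_loop([1.0, -0.5], [0.0, 0.0])` converges onto the target within a few hundred iterations.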
In one embodiment, acquiring in step S1 the pixel-coordinate set s_t of the plurality of feature points in the base coordinate system and the pixel-coordinate set s_o in the gripper coordinate system comprises:
photographing the gripper and the target part with a camera to obtain the gripper-coordinate-system origin, the base-coordinate-system origin and the pixel coordinates of each feature point in the same pixel coordinate system;
computing the pixel-coordinate set s_t of the feature points in the base coordinate system from the base-coordinate-system origin and the pixel coordinates of each feature point, and the pixel-coordinate set s_o in the gripper coordinate system from the gripper-coordinate-system origin and the pixel coordinates of each feature point.
An automatic product-assembly robot comprises a robot body, a gripper at the end of the robot body, and the above visual impedance control system for automatic product assembly; the visual impedance control system is used to control the gripper to grip the part to be assembled and move it to the target part for assembly.
In general, compared with the prior art, the above technical solutions conceived by the present invention achieve the following beneficial effects:
The invention dynamically adjusts the motion state of the gripper moving the product and the interaction force, which are mutually related. First, based on the control-law formula, the desired velocity and desired acceleration of the gripper for the next time step are determined from the gripper's current interaction-force state, the current camera velocity and the current distance to the target part. In this process the determination of the desired velocity and acceleration considers not only the distance and the camera velocity but also the current interaction force, adjusting velocity and acceleration within the range the current interaction force can support and avoiding the situation where an ill-chosen motion state leaves the current interaction force unable to hold the product stably, or deforms it. After the desired velocity and acceleration for the next time step are determined, the actual angular acceleration of the gripper is adjusted based on the PD algorithm, ensuring that the actual motion state of the gripper follows the desired one. The desired moment is then determined from the adjusted angular acceleration, the actual motion state of the gripper and the interaction force, and the motion state and interaction force of the gripper are adjusted based on that moment. In this process the torque of the gripper changes, so the interaction force changes as well. Finally, the updated actual position x and actual velocity v of the gripper are determined from its motion state before the next time step of control is carried out.
Through this process, the motion state and interaction force of the gripper moving the product are adjusted dynamically, the product is gripped stably without damaging its performance, and product assembly speed and quality are improved.
Drawings
FIG. 1 is a block diagram of a visual impedance control system for automated assembly of products in one embodiment;
FIG. 2 is a flow chart of steps of a visual impedance control method for automated assembly of a product in one embodiment;
FIG. 3 is a schematic structural view of an automatic product-assembly robot in one embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make its objects, technical solutions and advantages clearer. It should be understood that the specific embodiments described here serve only to illustrate the invention and are not intended to limit its scope. In addition, the technical features of the embodiments described below may be combined with one another as long as they do not conflict.
As shown in FIG. 1, in one embodiment a visual impedance control system for automatic product assembly comprises a visual impedance control module, a Cartesian motion control module, an inverse kinematics module, an environment interaction module and a forward kinematics module. Each module is described in detail below.
The visual impedance control module is used to acquire the pixel-coordinate set s_t of a plurality of feature points in the base coordinate system, the pixel-coordinate set s_o in the gripper coordinate system, and the interaction force h_o between the gripper and the environment, and to determine from the impedance control law the desired velocity v_d and desired acceleration v̇_d of the gripper.
The feature points lie on the target part to be reached. The invention defines the part grasped by the gripper as the part to be assembled, and the part waiting at a distance from it as the target part. The forms of the target part and the part to be assembled can vary: for example, the target part may be one fitting of the product and the part to be assembled another fitting that matches it; alternatively, the target part is a mounting device and the part to be assembled is the product. The feature points are then several points selected on the target part.
The base coordinate system is defined on the target part; since the target part is stationary during assembly, the base coordinate system, once established, is also stationary. The gripper coordinate system is defined on the gripper; since the gripper moves, this coordinate system moves with the gripper but is stationary relative to it.
After the base coordinate system and the gripper coordinate system are defined, the pixel coordinates of each feature point in the two coordinate systems are computed, forming the pixel-coordinate set s_t of the feature points in the base coordinate system and the pixel-coordinate set s_o in the gripper coordinate system.
In one embodiment, the number of feature points is greater than or equal to 3 such that the gripper coordinate system coincides with the base coordinate system when the gripper coordinate system origin reaches the base coordinate system origin. It can be understood that the product assembly needs to be strictly aligned, and the gripper coordinate system is overlapped with the base coordinate system, so that the grabbed product to be assembled and the target product can be ensured to be aligned according to the requirement so as to smoothly perform the assembly action.
In one embodiment, the pixel-coordinate set s_t of the feature points in the base coordinate system and the pixel-coordinate set s_o in the gripper coordinate system are determined by a vision processing module, which comprises a pixel coordinate unit and a pixel-coordinate-set determination unit.
Specifically, the pixel coordinate unit photographs the gripper and the target part with a camera to obtain the gripper-coordinate-system origin, the base-coordinate-system origin and the pixel coordinates of each feature point in the same pixel coordinate system.
Define the origin of the gripper coordinate system as point o with pixel coordinates (x_o, y_o), the origin of the base coordinate system as point t with pixel coordinates (x_t, y_t), and the pixel coordinates of the i-th feature point in the pixel coordinate system as (x_i, y_i).
The pixel-coordinate-set determination unit computes the pixel-coordinate set s_t of the feature points in the base coordinate system from the base-coordinate-system origin and the pixel coordinates of each feature point, and the pixel-coordinate set s_o in the gripper coordinate system from the gripper-coordinate-system origin and the pixel coordinates of each feature point.
Take the selection of 4 feature points as an example.

The pixel-coordinate set of the 4 feature points in the base coordinate system is s_t = [s_t.1, s_t.2, s_t.3, s_t.4]^T, and in the gripper coordinate system s_o = [s_o.1, s_o.2, s_o.3, s_o.4]^T.

The pixel coordinate of the i-th feature point in the base coordinate system is:

s_t.i = (x_i, y_i) - (x_t, y_t)

The pixel coordinate of the i-th feature point in the gripper coordinate system is:

s_o.i = (x_i, y_i) - (x_o, y_o)

At this time, the image error is:

e = s_o - s_t
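The two coordinate sets and the image error above can be computed directly from pixel coordinates; the numeric values below are made up for illustration.

```python
import numpy as np

# Pixel coordinates in one shared pixel coordinate system (illustrative values):
t = np.array([320.0, 240.0])            # base-frame origin (x_t, y_t)
o = np.array([100.0, 80.0])             # gripper-frame origin (x_o, y_o)
features = np.array([[300.0, 220.0],    # (x_i, y_i), i = 1..4
                     [340.0, 220.0],
                     [340.0, 260.0],
                     [300.0, 260.0]])

s_t = features - t    # s_t.i = (x_i, y_i) - (x_t, y_t)
s_o = features - o    # s_o.i = (x_i, y_i) - (x_o, y_o)
e = s_o - s_t         # image error; zero exactly when the two frames coincide
```

Here every row of e equals t - o = (220, 160): the error depends only on the offset between the two origins and vanishes when they meet.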
the image error represents a deviation between the gripper coordinate system and the base coordinate system, and the gripper is moved with the object that the gripper coordinate system coincides with the base coordinate system to align and assemble the component to be assembled with the target component.
The image error change speed is:
in the method, in the process of the invention,representing a set of pixel coordinates s o Speed of change of the position of each coordinate in +.>Representing a set of pixel coordinates s t The speed of change of the position of each coordinate in (a).
At the time of calculation, the image error change speed may be distorted as:
wherein J is 0 、J c Jacobian matrix of mechanical claw and camera, J t Representing v c And (3) withV of the variation relation of (v) c Is the speed of movement of the camera.
Then, the speed of change is varied based on the current image errorCurrent movement speed v of camera c Current interaction force h o Determining a desired speed v of the gripper d And desired acceleration->The control law formula is:
therefore, in the visual impedance control module, the relation among the gripper moving speed, the gripper moving acceleration, the image error changing speed, the camera moving speed and the interaction force is comprehensively considered, the gripper can be accurately controlled, the grabbing stability of the product is realized, and the performance of the product is not damaged.
In one embodiment, since data processing uses coordinate data in the world coordinate system, the vision processing module further comprises a coordinate conversion unit for converting pixel coordinates into world-coordinate-system coordinates based on the coordinate conversion model:

z_c · [u, v, 1]^T = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]] · [R | t] · [X, Y, Z, 1]^T

where z_c is the distance of the camera optical centre along the z direction of the camera coordinate system, u and v are pixel coordinates in the pixel coordinate system, dx and dy are the width and height of a pixel, u_0 and v_0 are the abscissa and ordinate of the image-coordinate-system origin in the pixel coordinate system, f is the focal length, R and t are the rotation matrix and translation vector of the homogeneous coordinate transformation, and X, Y, Z are coordinates in the world coordinate system.
For example, the pixel coordinates (x_o, y_o) of the gripper-coordinate-system origin are converted into coordinates (X_o, Y_o, Z_o), the pixel coordinates (x_t, y_t) of the base-coordinate-system origin into (X_t, Y_t, Z_t), and the pixel coordinates (x_i, y_i) of the i-th feature point into (X_i, Y_i, Z_i).
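A sketch of this conversion under the pinhole model above, assuming illustrative intrinsics and a simple axis-aligned camera pose (not the patent's calibration). Inverting the projection requires the depth z_c, which is taken as known here.

```python
import numpy as np

# Illustrative intrinsics: focal length f, pixel size dx x dy, principal point (u0, v0).
f, dx, dy, u0, v0 = 0.004, 1e-5, 1e-5, 320.0, 240.0
K_mat = np.array([[f / dx, 0.0,    u0],
                  [0.0,    f / dy, v0],
                  [0.0,    0.0,    1.0]])

# Illustrative extrinsics: camera axis-aligned with the world (base) frame.
R = np.eye(3)
t_vec = np.array([0.0, 0.0, 0.5])

def world_to_pixel(P):
    """Forward model: z_c*[u, v, 1]^T = K [R|t] [X, Y, Z, 1]^T."""
    p = K_mat @ (R @ P + t_vec)
    return p[:2] / p[2]

def pixel_to_world(u, v, z_c):
    """Inverse model: recover (X, Y, Z) from pixel (u, v) and known depth z_c."""
    p_cam = z_c * np.linalg.inv(K_mat) @ np.array([u, v, 1.0])  # camera frame
    return R.T @ (p_cam - t_vec)                                # world frame

P = np.array([0.05, -0.02, 1.0])     # a world point, e.g. a feature point
uv = world_to_pixel(P)
P_back = pixel_to_world(uv[0], uv[1], (R @ P + t_vec)[2])
```

The round trip world → pixel → world is exact when the depth is supplied, which is what makes the conversion usable for the origins and feature points above.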
In one embodiment, since the base coordinate system is fixed, the base coordinate system may be directly selected as the world coordinate system.
In one embodiment, considering decoupling requirements, M_o, D_o and K are set as diagonal matrices in specific applications.
In one embodiment, the Jacobian matrix J_t is derived as follows:

J_t = [J_t,1, J_t,2, J_t,3, …, J_t,n]

where n is the number of feature points, ṡ_t,i is the pixel-coordinate change rate of the i-th feature point in the base coordinate system, the depth term is the z-direction projection component of the coordinate of the i-th feature point in the base coordinate system relative to the camera coordinate system, X_t,i and Y_t,i are the abscissa and ordinate of the i-th feature point in the base coordinate system, X_t,i = X_i - X_t, Y_t,i = Y_i - Y_t, R_c is the rotation matrix of the camera coordinate system relative to the world coordinate system, and v_c is the camera movement velocity. Each block J_t,i establishes the relation ṡ_t,i = J_t,i v_c (the explicit expression for J_t,i appears as a display equation in the original).
In one embodiment, the gripper Jacobian matrix J_o is derived as follows:

J_o = [J_o,1, J_o,2, J_o,3, …, J_o,n]

where ṡ_o,i is the pixel-coordinate change rate of the i-th feature point in the gripper coordinate system, X_o,i and Y_o,i are the abscissa and ordinate of the i-th feature point in the gripper coordinate system, X_o,i = X_i - X_o, Y_o,i = Y_i - Y_o, I_3 is the 3×3 identity matrix, S(·) is the operation that writes a column vector as a skew-symmetric matrix, a rotation matrix relates the gripper coordinate system to the camera coordinate system, the coordinate of the i-th feature point is expressed in the gripper coordinate system, and v_o is the movement velocity of the gripper relative to the world coordinate system. Each block J_o,i establishes the relation between ṡ_o,i and v_o, v_c (the explicit expression for J_o,i appears as a display equation in the original).
The camera Jacobian matrix is J_c = J_o + J_t.
It will be appreciated that the above jacobian matrices can be derived from modeling relationships.
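The explicit entries of J_t,i and J_o,i are given in the original as display equations not reproduced here. For orientation, the textbook interaction matrix of a normalized point feature, as used in classical image-based visual servoing, can be sketched; this is a standard construction and not necessarily identical to the patent's matrices.

```python
import numpy as np

def point_interaction_matrix(x, y, z):
    """Textbook 2x6 interaction matrix L of a normalized image point (x, y) at
    depth z: s_dot = L @ [v; w] for camera linear velocity v and angular rate w."""
    return np.array([
        [-1.0 / z, 0.0,      x / z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / z, y / z, 1.0 + y * y, -x * y,         -x],
    ])

# Stacking one 2x6 block per feature point mirrors the patent's
# J = [J_1, J_2, ..., J_n] construction; three points give a 6x6 Jacobian.
points = [(0.1, 0.2, 1.0), (-0.3, 0.05, 1.2), (0.0, -0.1, 0.8)]
J = np.vstack([point_interaction_matrix(*p) for p in points])
```

With at least three non-collinear points the stacked Jacobian has full rank, which is consistent with the patent's requirement of three or more feature points.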
In one embodiment, the gain factor k_s is typically taken as 1.
The Cartesian motion control module is used to adjust the angular acceleration a based on the PD algorithm so that the actual motion state of the gripper follows the desired motion state, which comprises the desired velocity v_d and the desired acceleration v̇_d.

Specifically, the desired position x_d, desired velocity v_d and desired acceleration v̇_d of the gripper are obtained, and according to the PD algorithm the actual acceleration v̇ of the gripper is adjusted from the actual position x and actual velocity v:

v̇ = v̇_d + K_P (x_d - x) + K_D (v_d - v)

so that the actual position x and actual velocity v approach x_d and v_d; the angular acceleration a is then determined from the actual acceleration v̇. Here K_P is the proportional coefficient and K_D the differential coefficient.

It will be appreciated that the designed visual impedance control method yields the desired velocity v_d and desired acceleration v̇_d; the desired position x_d is obtained from v_d by time integration.
The Cartesian motion control module makes the actual motion state of the gripper follow the desired motion state; the PD algorithm is designed as follows: since position, velocity and acceleration are interrelated, the PD algorithm regulates the acceleration v̇. When the actual velocity v and position x are below their desired values, v̇ is raised above the desired acceleration to increase v and x so that they quickly follow the desired values; conversely, when v and x exceed the desired values, v̇ is lowered below the desired acceleration to decrease them quickly toward the desired values.
Specifically, the conversion between the acceleration v̇ and the angular acceleration a follows the differential kinematics:

v̇ = J(ξ) a + J̇(ξ) ξ̇

where ξ̇ is the joint angular velocity and J̇(ξ) is the matrix obtained by differentiating each entry of the J(ξ) matrix with respect to time.
The inverse kinematics module is used to determine the desired moment μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o.
The environment interaction module is used to update the joint angle ξ, angular velocity ξ̇ and interaction force h_o of the gripper based on the desired moment μ.
The forward kinematics module is used to determine the actual position x and actual velocity v of the gripper from its latest joint angle ξ and angular velocity ξ̇.
The inverse kinematics module, the environment interaction module and the forward kinematics module may be implemented with conventional techniques, which are not described again here.
In one embodiment, the assembled product may be a 3C product, but is not limited thereto.
In one embodiment, the gripper is fitted with a clamp; the gripper grips the part to be assembled with the clamp and moves it to the target part for assembly.
Correspondingly, the invention also relates to a visual impedance control method for automatic product assembly which, as shown in fig. 2, mainly comprises the following steps:
Step S1: acquiring the pixel coordinate set s_t of a plurality of feature points in the base coordinate system, the pixel coordinate set s_o in the gripper coordinate system and the interaction force h_o between the gripper and the environment, and determining the desired velocity v_d and the desired acceleration v̇_d of the gripper based on the impedance control law

M_o ë_s + D_o ė_s + K e_s = h_o

The feature points lie on the target part to be reached; the base coordinate system is defined on the target part and the gripper coordinate system is defined on the gripper. e_s denotes the image error between s_t and s_o and ė_s its rate of change, which depends on the camera motion velocity v_c; M_o, D_o and K are symmetric positive-definite virtual mass, damping and stiffness matrices; J_o and J_c denote the Jacobian matrices of the gripper and the camera, respectively; k_s denotes a gain factor.
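One common form of such an image-space impedance relation is M_o ë + D_o ė + K e = h_o; the sketch below solves it for the error acceleration under hypothetical diagonal matrices (an illustrative assumption, not necessarily the patent's exact control law, which also involves J_c, v_c and k_s):

```python
import numpy as np

def impedance_error_accel(e, e_dot, h_o, M_o, D_o, K):
    """Solve M_o @ e_ddot + D_o @ e_dot + K @ e = h_o for e_ddot."""
    return np.linalg.solve(M_o, h_o - D_o @ e_dot - K @ e)

# Hypothetical diagonal virtual mass / damping / stiffness matrices
M_o = np.diag([1.0, 1.0])
D_o = np.diag([2.0, 2.0])
K = np.diag([4.0, 4.0])

e = np.array([0.1, -0.05])   # illustrative image error (pixels)
e_dot = np.zeros(2)          # error currently not changing
h_o = np.zeros(2)            # no contact force yet

e_ddot = impedance_error_accel(e, e_dot, h_o, M_o, D_o, K)
# With zero velocity and force, e_ddot = -M_o^{-1} K e = (-0.4, 0.2):
# the virtual spring pulls the image error back toward zero.
print(e_ddot)
```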
Step S2: adjusting the angular acceleration a based on the PD algorithm so that the actual motion state of the gripper follows the desired motion state, which comprises the desired velocity v_d and the desired acceleration v̇_d.
Step S3: determining the desired torque μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o.
Step S4: updating the joint angle ξ, the joint angular velocity ξ̇ and the interaction force h_o of the gripper based on the desired torque μ.
Step S5: determining the actual position x and the actual velocity v of the gripper from the latest joint angle ξ and angular velocity ξ̇.
Step S6: judging whether the base coordinate system and the gripper coordinate system coincide; if so, ending the motion, otherwise returning to step S1.
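The loop formed by steps S1–S6 can be sketched with a toy one-dimensional stand-in (the PD gains and dynamics below are hypothetical and collapse the kinematics and environment modules into direct integration):

```python
def assembly_loop(x_target, x=0.0, v=0.0, K_P=4.0, K_D=3.0,
                  dt=0.01, tol=1e-3, max_steps=10_000):
    """Toy 1-D version of steps S1-S6: drive the gripper 'frame' onto the
    target 'frame', ending when the two coincide (step S6)."""
    for _ in range(max_steps):
        if abs(x_target - x) < tol and abs(v) < tol:  # S6: frames coincide
            break
        a = K_P * (x_target - x) - K_D * v  # S1-S3 collapsed into a PD command
        v += a * dt                         # S4: update the joint state
        x += v * dt                         # S5: recover the actual position
    return x, v

x_final, v_final = assembly_loop(x_target=1.0)
print(abs(x_final - 1.0) < 1e-3)  # True: the loop terminated on coincidence
```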
Specifically, acquiring the pixel coordinate set s_t of the plurality of feature points in the base coordinate system and the pixel coordinate set s_o in the gripper coordinate system in step S1 comprises:
Step S11: photographing the gripper and the target part with a camera to obtain the origin of the gripper coordinate system, the origin of the base coordinate system and the pixel coordinates of each feature point in the same pixel coordinate system.
Step S12: calculating the pixel coordinate set s_t of each feature point in the base coordinate system from the origin of the base coordinate system and the coordinates of each feature point in the pixel coordinate system, and calculating the pixel coordinate set s_o of each feature point in the gripper coordinate system from the origin of the gripper coordinate system and the coordinates of each feature point in the pixel coordinate system.
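A small sketch of step S12, assuming the feature points and both origins have already been detected in the shared pixel frame (the function and variable names are illustrative):

```python
import numpy as np

def relative_pixel_sets(features, origin_t, origin_o):
    """Express each detected feature point relative to the base-frame
    origin (s_t) and the gripper-frame origin (s_o)."""
    pts = np.asarray(features, dtype=float)
    s_t = pts - np.asarray(origin_t, dtype=float)
    s_o = pts - np.asarray(origin_o, dtype=float)
    return s_t, s_o

# Hypothetical detections in pixel coordinates
features = [(320, 240), (400, 260), (350, 300)]
s_t, s_o = relative_pixel_sets(features, origin_t=(300, 230),
                               origin_o=(310, 250))
print(s_t[0], s_o[0])  # s_t[0] = (20, 10), s_o[0] = (10, -10)
```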
The specific implementation of each step is as described above and is not repeated here.
Accordingly, the invention also relates to a robot for automatic product assembly, comprising a robot body 1, a gripper 2 at the end of the robot body, and a visual impedance control system for automatic product assembly (not shown in the figure), the visual impedance control system being used to control the gripper to grip a part to be assembled and move it to the target part for assembly.
It will be readily appreciated by those skilled in the art that the foregoing is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A visual impedance control system for automated assembly of a product, comprising:
a visual impedance control module for acquiring a pixel coordinate set s_t of a plurality of feature points in a base coordinate system, a pixel coordinate set s_o in a gripper coordinate system and the interaction force h_o between the gripper and the environment, and for determining a desired velocity v_d and a desired acceleration v̇_d of the gripper based on the impedance control law M_o ë_s + D_o ė_s + K e_s = h_o; wherein the feature points lie on a target part to be reached, the base coordinate system is defined on the target part and the gripper coordinate system is defined on the gripper; e_s denotes the image error between s_t and s_o and ė_s its rate of change, which depends on the camera motion velocity v_c; M_o, D_o and K are symmetric positive-definite virtual mass, damping and stiffness matrices; J_o and J_c denote the Jacobian matrices of the gripper and the camera, respectively; k_s denotes a gain factor;
a Cartesian motion control module for adjusting the angular acceleration a based on a PD algorithm so that the actual motion state of the gripper follows the desired motion state, which comprises the desired velocity v_d and the desired acceleration v̇_d;
an inverse kinematics module for determining the desired torque μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o;
an environment interaction module for updating the joint angle ξ, the joint angular velocity ξ̇ and the interaction force h_o of the gripper based on the desired torque μ; and
a forward kinematics module for determining the actual position x and the actual velocity v of the gripper from the latest joint angle ξ and angular velocity ξ̇.
2. The visual impedance control system for automated assembly of a product as recited in claim 1, further comprising: a vision processing module, the vision processing module comprising:
a pixel coordinate unit for photographing the gripper and the target part with a camera to obtain the origin o of the gripper coordinate system, the origin t of the base coordinate system and the pixel coordinates of each feature point in the same pixel coordinate system;
a pixel coordinate set determination unit for calculating the pixel coordinate set s_t = [s_t,1, s_t,2, …, s_t,n]^T of each feature point in the base coordinate system from the origin of the base coordinate system and the pixel coordinates of each feature point in the pixel coordinate system, and for calculating the pixel coordinate set s_o = [s_o,1, s_o,2, …, s_o,n]^T of each feature point in the gripper coordinate system from the origin of the gripper coordinate system and the pixel coordinates of each feature point in the pixel coordinate system, wherein s_t,i denotes the pixel coordinates of the i-th feature point in the base coordinate system, s_o,i denotes the pixel coordinates of the i-th feature point in the gripper coordinate system, n is the number of feature points, and

s_t,i = (x_i, y_i) − (x_t, y_t)

s_o,i = (x_i, y_i) − (x_o, y_o)

where (x_i, y_i) denotes the pixel coordinates of the i-th feature point in the pixel coordinate system, (x_t, y_t) denotes the pixel coordinates of the origin of the base coordinate system in the pixel coordinate system, and (x_o, y_o) denotes the pixel coordinates of the origin of the gripper coordinate system in the pixel coordinate system.
3. The visual impedance control system for automated product assembly of claim 2, wherein the visual processing module further comprises:
a coordinate conversion unit for converting pixel coordinates into coordinates in the world coordinate system based on the coordinate conversion model

z_c [u; v; 1] = [f/dx, 0, u_0; 0, f/dy, v_0; 0, 0, 1] · [R | t] · [X; Y; Z; 1]

wherein z_c denotes the depth along the z axis of the camera coordinate system, u and v denote the pixel coordinates in the pixel coordinate system, dx and dy denote the width and height of a pixel, u_0 and v_0 denote the abscissa and the ordinate of the origin of the image coordinate system in the pixel coordinate system, f denotes the focal length, R and t denote the rotation matrix and the translation vector of the homogeneous coordinate transformation, respectively, and X, Y, Z denote coordinates in the world coordinate system.
4. The visual impedance control system for automated product assembly of claim 1, wherein the Cartesian motion control module is used to obtain a desired position x_d, a desired velocity v_d and a desired acceleration v̇_d of the gripper, to adjust the actual acceleration v̇ of the gripper according to the PD algorithm v̇ = v̇_d + K_P(x_d − x) + K_D(v_d − v) so that the actual position x and the actual velocity v approach the desired position x_d and the desired velocity v_d, and to determine the angular acceleration a from the actual acceleration v̇; wherein K_P is a proportional coefficient and K_D is a derivative coefficient.
5. The visual impedance control system for automatic assembly of a product according to claim 1, wherein the number of feature points is greater than or equal to 3 such that when the origin of the gripper coordinate system reaches the origin of the base coordinate system, the gripper coordinate system coincides with the base coordinate system.
6. The visual impedance control system for automatic assembly of a product according to claim 1, wherein the gripper is fitted with a clamp, and the gripper grips a part to be assembled with the clamp and moves it to the target part for assembly.
7. The visual impedance control system for automated assembly of a product of claim 1, wherein the product is a 3C product.
8. A visual impedance control method for automatic assembly of a product, comprising:
step S1: acquiring a pixel coordinate set s_t of a plurality of feature points in a base coordinate system, a pixel coordinate set s_o in a gripper coordinate system and the interaction force h_o between the gripper and the environment, and determining a desired velocity v_d and a desired acceleration v̇_d of the gripper based on the impedance control law M_o ë_s + D_o ė_s + K e_s = h_o; wherein the feature points lie on a target part to be reached, the base coordinate system is defined on the target part and the gripper coordinate system is defined on the gripper; e_s denotes the image error between s_t and s_o and ė_s its rate of change, which depends on the camera motion velocity v_c; M_o, D_o and K are symmetric positive-definite virtual mass, damping and stiffness matrices; J_o and J_c denote the Jacobian matrices of the gripper and the camera, respectively; k_s denotes a gain factor;
step S2: adjusting the angular acceleration a based on a PD algorithm so that the actual motion state of the gripper follows the desired motion state, which comprises the desired velocity v_d and the desired acceleration v̇_d;
step S3: determining the desired torque μ from the joint angle ξ, the joint angular velocity ξ̇, the angular acceleration a and the interaction force h_o;
step S4: updating the joint angle ξ, the joint angular velocity ξ̇ and the interaction force h_o of the gripper based on the desired torque μ;
step S5: determining the actual position x and the actual velocity v of the gripper from the latest joint angle ξ and angular velocity ξ̇; and
step S6: judging whether the base coordinate system and the gripper coordinate system coincide; if so, ending the motion, otherwise jumping to step S1.
9. The visual impedance control method for automatic product assembly according to claim 8, wherein acquiring the pixel coordinate set s_t of the plurality of feature points in the base coordinate system and the pixel coordinate set s_o in the gripper coordinate system in said step S1 comprises:

photographing the gripper and the target part with a camera to obtain the origin of the gripper coordinate system, the origin of the base coordinate system and the pixel coordinates of each feature point in the same pixel coordinate system; and

calculating the pixel coordinate set s_t of each feature point in the base coordinate system from the origin of the base coordinate system and the coordinates of each feature point in the pixel coordinate system, and calculating the pixel coordinate set s_o of each feature point in the gripper coordinate system from the origin of the gripper coordinate system and the coordinates of each feature point in the pixel coordinate system.
10. A robot for automatic product assembly, comprising a robot body, a gripper at the end of the robot body, and the visual impedance control system for automatic product assembly according to any one of claims 1 to 7, wherein the visual impedance control system is used to control the gripper to grip a part to be assembled and move it to the target part for assembly.
CN202310724803.0A 2023-06-16 2023-06-16 Visual impedance control method and system for automatic product assembly and robot Pending CN116652543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310724803.0A CN116652543A (en) 2023-06-16 2023-06-16 Visual impedance control method and system for automatic product assembly and robot


Publications (1)

Publication Number Publication Date
CN116652543A true CN116652543A (en) 2023-08-29

Family

ID=87727856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310724803.0A Pending CN116652543A (en) 2023-06-16 2023-06-16 Visual impedance control method and system for automatic product assembly and robot

Country Status (1)

Country Link
CN (1) CN116652543A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117066843A (en) * 2023-10-08 2023-11-17 荣耀终端有限公司 Method and apparatus for assembling product components
CN117066843B (en) * 2023-10-08 2024-05-10 荣耀终端有限公司 Method and apparatus for assembling product components


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination