CN108068115B - Parallel robot position closed-loop calibration algorithm based on visual feedback

Parallel robot position closed-loop calibration algorithm based on visual feedback

Info

Publication number
CN108068115B
Authority
CN
China
Prior art keywords
calibration
parallel robot
correction
motion
visual feedback
Prior art date
Legal status
Active
Application number
CN201711488083.3A
Other languages
Chinese (zh)
Other versions
CN108068115A (en)
Inventor
陈秋强
周聪辉
蔡兆晖
卢祺斌
Current Assignee
Fujian Boge Intelligent Technology Co ltd
Original Assignee
Fujian Boge Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Fujian Boge Intelligent Technology Co ltd filed Critical Fujian Boge Intelligent Technology Co ltd
Priority to CN201711488083.3A priority Critical patent/CN108068115B/en
Publication of CN108068115A publication Critical patent/CN108068115A/en
Application granted granted Critical
Publication of CN108068115B publication Critical patent/CN108068115B/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1628: Programme controls characterised by the control loop
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A parallel robot position closed-loop calibration algorithm based on visual feedback. The invention relates to a parallel robot position closed-loop calibration algorithm based on visual feedback. The method comprises the following steps. Step one: based on a calibration and correction period T input by the user, the moving platform is required, at each correction time point, to enter calibration mode after completing its current release action and to move to a specified position C0(x0, y0, z0) in the existing coordinate system. Step two: once the moving platform crosses the field-of-view boundary, its kinematics-solved position (the theoretical position in the motion system) is recorded at fixed timestamp intervals as correction records in a theoretical-position buffer. The method is used for parallel robot position closed-loop calibration.

Description

Parallel robot position closed-loop calibration algorithm based on visual feedback
Technical Field
The invention relates to a parallel robot position closed-loop calibration algorithm based on visual feedback.
Background
In the robotics field, parallel robots of the Delta configuration offer high precision compared with serial robots, so they can be applied more effectively in fields such as food packaging, manufacturing and processing, and aerospace, which place high demands on motion speed and precision. As the industrial level rises and intelligent robots are widely adopted in industry, ever higher requirements are placed on robot working speed and precision. At the same time, parallel robots are developing toward higher speeds and lighter weights, which makes their dynamic performance and vibration characteristics increasingly complex: high-speed motion raises the excitation frequency of inertial forces, while lightweight design increases link flexibility and lowers the natural frequency. Motion errors and elastic vibration during operation therefore grow and degrade the working precision of the mechanism, so improving the pose precision of the parallel robot is an important problem for improving its practical performance.
At present there is no particularly mature solution to the pose accuracy of parallel robots, especially position deviation compensation under high-speed motion. Traditional PID control suffers from poor control performance and a heavy computational load, while approaches such as intelligent particle swarm algorithms and fuzzy PID control remain at the theoretical research stage. It is therefore necessary to develop an efficient and simple method for calibrating and correcting the pose accuracy of parallel robots.
Disclosure of Invention
The invention aims to provide a parallel robot position closed-loop calibration algorithm based on visual feedback, which takes machine vision feedback as its core to achieve error compensation during parallel robot motion control and to calibrate and correct, on a two-dimensional plane, the position of the gripping device on the parallel robot's moving platform.
The above purpose is achieved by the following technical solution:
A parallel robot position closed-loop calibration algorithm based on visual feedback, implemented by the following steps:
Step one: based on a calibration and correction period T input by the user, at each correction time point the moving platform is required to enter calibration mode after completing its current release action, and to move to a specified position C0(x0, y0, z0) in the existing coordinate system;
Step two: once the moving platform crosses the field-of-view boundary, its kinematics-solved position is recorded at fixed timestamp intervals as correction records in the theoretical-position buffer;
Step three: after receiving the first vision correction UDP packet, the motion control module reads the corresponding position data pair from the theoretical-position buffer, using the timestamp as the matching key. Assuming the first valid vision correction UDP packet parses to the position pair Ck(xk, yk, z0), the deviation between the theoretical and actual positions of the moving platform is Δ = (xk - x0, yk - y0, 0); all coordinate variables of the motion control module are traversed and corrected by Δ, the calibration mode is completed, and the system exits back to motion control mode.
In step one, the motion control module and the vision recognition module enter calibration mode synchronously and periodically, so that the calibration process is completed under real-time constraints and the correction mechanism is fast with a controllable frequency.
In step three, the vision recognition module emits several vision correction UDP packets while the motion control module records several theoretical position entries with the corresponding timestamps.
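The byte-level layout of such a vision correction UDP packet is not fixed by the text beyond its field list (a POSIX timestamp plus the X, Y and Z coordinates of the moving platform, detailed in embodiment 5 below). The following Python sketch assumes network byte order, a 4-byte unsigned timestamp and 4-byte floats; the host, port and function names are illustrative only:

    import socket
    import struct
    import time

    # Assumed encoding: 4-byte unsigned POSIX timestamp followed by X, Y, Z as
    # 4-byte floats in network byte order (16 bytes in total).
    VISION_PACKET_FMT = "!Ifff"

    def pack_vision_correction(x, y, z, timestamp=None):
        """Build the payload the vision recognition module would send."""
        ts = int(timestamp if timestamp is not None else time.time())
        return struct.pack(VISION_PACKET_FMT, ts, x, y, z)

    def unpack_vision_correction(payload):
        """Parse a payload on the motion control side into (timestamp, x, y, z)."""
        return struct.unpack(VISION_PACKET_FMT, payload)

    if __name__ == "__main__":
        payload = pack_vision_correction(12.5, -3.75, 180.0)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload, ("127.0.0.1", 9000))   # placeholder address of the motion control module
        print(unpack_vision_correction(payload))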
Beneficial effects:
Based on the machine vision recognition result, the invention uses a marker on the moving platform as the reference and forms reliable, real-time vision-based position feedback calibration during parallel robot motion, so that the motion position is calibrated and corrected efficiently and the working precision of the parallel robot under high-speed motion is improved.
Description of the drawings:
FIG. 1 is a flow chart of the position calibration and correction algorithm of the present invention.
FIG. 2 is a schematic diagram of the time distribution corresponding to FIG. 1.
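The time distribution of FIG. 2 can be illustrated with a short sketch. It assumes one reading of the schedule described in embodiment 5, namely that each cycle consists of a motion interval followed by a correction interval; the function name and the numbers are illustrative only:

    def correction_time_points(t0, motion_time, correction_time, n_cycles):
        """Correction instants t1, t2, t3, ... after entering formal motion mode at t0."""
        points = []
        tk = t0
        for _ in range(n_cycles):
            tk += motion_time        # run in formal motion mode
            points.append(tk)        # correction time point: enter calibration mode
            tk += correction_time    # time spent in calibration mode
        return points

    # Example: start at t0 = 0 s, 10 s of motion and 0.5 s of correction per cycle
    # gives correction points at 10.0 s, 20.5 s and 31.0 s.
    print(correction_time_points(0.0, 10.0, 0.5, 3))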
Detailed description of the embodiments:
example 1
A parallel robot position closed-loop calibration algorithm based on visual feedback, characterized in that it is implemented by the following steps:
Step one: based on a calibration and correction period T input by the user, at each correction time point the moving platform is required to enter calibration mode after completing its current release action, and to move to a specified position C0(x0, y0, z0) in the existing coordinate system, where C0 is the midpoint of the field of view recognizable by the machine vision linked with the parallel robot;
Step two: once the moving platform crosses the field-of-view boundary, its kinematics-solved position (the theoretical position in the motion system) is recorded at fixed timestamp intervals as correction records in the theoretical-position buffer;
Step three: after receiving the first vision correction UDP packet (UDP: User Datagram Protocol), the motion control module reads the corresponding position data pair from the theoretical-position buffer, using the timestamp as the matching key. Assuming the first valid vision correction UDP packet parses to the position pair Ck(xk, yk, z0), the deviation between the theoretical and actual positions of the moving platform is Δ = (xk - x0, yk - y0, 0); all coordinate variables of the motion control module are traversed and corrected by Δ, the calibration mode is completed, and the system exits back to motion control mode.
Example 2
In the parallel robot position closed-loop calibration algorithm based on visual feedback of embodiment 1, in step one the motion control module and the vision recognition module enter calibration mode synchronously and periodically, so that the calibration process is completed under real-time constraints and the correction mechanism is fast with a controllable frequency.
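One such synchronized calibration pass can be sketched as follows. All helper names (finish_current_release, move_to, receive_correction_packet and so on) stand for a hypothetical motion-control and vision interface and are not taken from the patent:

    import time

    def calibration_pass(motion, vision, c0, timeout_s=0.5):
        """One closed-loop calibration pass at a correction time point (steps one to three)."""
        # Step one: finish the pending release action, then move to the reference point C0.
        motion.finish_current_release()
        motion.enter_calibration_mode()
        motion.move_to(c0)

        # Step two: once the platform crosses the field-of-view boundary, record the
        # kinematics-solved (theoretical) position at fixed timestamp intervals.
        deadline = time.time() + timeout_s
        theory_buffer = {}                                   # timestamp -> (x, y, z)
        while time.time() < deadline and motion.inside_view():
            theory_buffer[motion.timestamp()] = motion.solved_position()
            time.sleep(motion.stamp_interval())

        # Step three: match the first valid vision correction packet by timestamp and
        # shift every coordinate variable by the measured deviation.
        packet = vision.receive_correction_packet(timeout_s)  # (timestamp, xk, yk, zk) or None
        if packet is not None:
            ts, xk, yk, _zk = packet
            if ts in theory_buffer:
                x0, y0, _z0 = theory_buffer[ts]               # theoretical position at that instant
                motion.apply_offset((xk - x0, yk - y0, 0.0))  # delta = (xk - x0, yk - y0, 0)

        motion.exit_calibration_mode()                        # resume the motion control mode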
Example 3
In step three, the vision recognition module emits several vision correction UDP packets while the motion control module records several theoretical position entries with the corresponding timestamps. Correction can therefore be computed as long as any pair of timestamps matches, which gives the algorithm high stability and avoids correction failures caused by timestamp mismatch.
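A minimal sketch of this matching step is given below. It assumes the theoretical-position buffer is a mapping keyed by timestamp and that c0 holds the commanded calibration point C0; the function name and data layout are illustrative:

    def find_correction_delta(theory_buffer, vision_packets, c0):
        """Traverse incoming vision correction packets until one timestamp matches a
        record in the theoretical-position buffer, then return the correction delta."""
        x0, y0, _z0 = c0
        for ts_k, xk, yk, _zk in vision_packets:     # each packet: (Timestampk, xk, yk, zk)
            if ts_k in theory_buffer:                # a record with Timestampn == Timestampk exists
                return (xk - x0, yk - y0, 0.0)       # delta = (xk - x0, yk - y0, 0)
            # no matching timestamp: discard this packet and grab the next one
        return None                                  # correction is skipped for this cycle

    # The returned delta is then added to every coordinate variable held by the
    # motion control module before calibration mode is exited.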
Example 4
In step two, visual recognition replaces the currently mainstream dynamics-based decoupling control, such as fuzzy PID controllers, particle swarm correction algorithms and other complex error compensation and damping optimization mechanisms. The parallel robot position closed-loop calibration algorithm based on visual feedback is therefore simple to implement, uses an intelligent recognition mechanism with a simplified architecture, is more stable, and greatly improves correction efficiency.
Example 5
In the parallel robot position closed-loop calibration algorithm based on visual feedback of embodiment 1, in step one, calibration and correction are carried out on a Delta-configuration parallel robot arm and comprise the following steps:
Step one: based on test and simulation data of the parallel robot, the correction time points of the manipulator are determined from the calibration and correction period T, which is either set by the user or measured automatically by an external module. Specifically, the time at which the formal motion mode is entered after start-up is t0, and each complete period comprises a motion time t and a correction time Δt, as shown in FIG. 1 (see also the timing sketch after the description of the drawings). At each correction time point t1, t2, t3, ..., the moving platform is required to finish its current release action and then enter calibration mode (refer to the schematic time distribution of the formal motion mode and the calibration mode in FIG. 2). When calibration mode starts, the motion control module drives the parallel robot from its current position Cr(xr, yr, zr) to the specified position C0(x0, y0, z0) in the existing coordinate system, where C0 is the midpoint of the field of view recognizable by the machine vision linked with the parallel robot. Once the moving platform crosses the field-of-view boundary, its kinematics-solved position (the theoretical position in the motion system) is recorded at fixed timestamp intervals as correction records in the theoretical-position buffer;
Step two: when the synchronized correction time point begins, the vision recognition module enters calibration mode and switches from the preset shape-target recognition to recognition of the specific marker on the moving platform. The vision-coordinate-system positions of the marker midpoint, C1(x1, y1, z0), C2(x2, y2, z0), ..., Cn(xn, yn, z0), are packed together with timestamps into vision correction UDP packets and sent to the motion control module for correction. The main data variables contained in a vision correction UDP packet are:
(1) Timestamp: POSIX time, 4 bytes, used for time synchronization and as the timestamp matching variable;
(2) X: float type, the X coordinate of the visual calibration object (the moving platform);
(3) Y: float type, the Y coordinate of the visual calibration object (the moving platform);
(4) Z: float type, the Z coordinate of the visual calibration object (the moving platform); with a planar camera, the height can be fixed by the hardware configuration;
Step three: after receiving the first vision correction UDP packet, the motion control module reads the corresponding position data pair from the theoretical-position buffer, using the timestamp as the matching key. Suppose the first captured valid vision correction UDP packet carries the timestamp variable Timestampk: the motion control buffer is traversed to search for a record with Timestampn = Timestampk. If such a record is found, this vision correction UDP packet is taken as the check packet; if not, the packet is discarded and the next vision correction UDP packet is grabbed, until a suitable matching packet is found;
Assume that after the matching packet k is found, it parses according to the standard UDP protocol to the position pair Ck(xk, yk, z0). The deviation between the theoretical and actual positions of the moving platform is then Δ = (xk - x0, yk - y0, 0); all coordinate variables of the motion control module are traversed and corrected by Δ, the calibration mode is completed, and the system exits back to motion control mode.
It is to be understood that the above description is not intended to limit the present invention, which is not restricted to the above examples; those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.

Claims (3)

1. A parallel robot position closed-loop calibration algorithm based on visual feedback, characterized in that it is implemented by the following steps:
Step one: based on a calibration and correction period T input by the user, at each correction time point the moving platform is required to enter calibration mode after completing its current release action, and to move to a specified position C0(x0, y0, z0) in the existing coordinate system;
Step two: once the moving platform crosses the field-of-view boundary, its kinematics-solved position is recorded at fixed timestamp intervals as correction records in the theoretical-position buffer;
Step three: after receiving the first vision correction UDP packet, the motion control module reads the corresponding position data pair from the theoretical-position buffer, using the timestamp as the matching key, and, assuming the first valid vision correction UDP packet parses to the position pair Ck(xk, yk, z0), the deviation between the theoretical and actual positions of the moving platform is Δ = (xk - x0, yk - y0, 0); all coordinate variables of the motion control module are traversed and corrected by Δ, the calibration mode is completed, and the system exits back to motion control mode.
2. The parallel robot position closed-loop calibration algorithm based on visual feedback according to claim 1, characterized in that: in step one, the motion control module and the vision recognition module enter calibration mode synchronously and periodically, so that the calibration process is completed under real-time constraints and the correction mechanism is fast with a controllable frequency.
3. The parallel robot position closed-loop calibration algorithm based on visual feedback according to claim 1, characterized in that: in step three, the vision recognition module emits several vision correction UDP packets while the motion control module records several theoretical position entries with the corresponding timestamps.
CN201711488083.3A 2017-12-30 2017-12-30 Parallel robot position closed-loop calibration algorithm based on visual feedback Active CN108068115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711488083.3A CN108068115B (en) 2017-12-30 2017-12-30 Parallel robot position closed-loop calibration algorithm based on visual feedback

Publications (2)

Publication Number Publication Date
CN108068115A CN108068115A (en) 2018-05-25
CN108068115B 2021-01-12

Family

ID=62156140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711488083.3A Active CN108068115B (en) 2017-12-30 2017-12-30 Parallel robot position closed-loop calibration algorithm based on visual feedback

Country Status (1)

Country Link
CN (1) CN108068115B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110000116B (en) * 2019-04-19 2021-04-23 福建铂格智能科技股份公司 Free-fall fruit and vegetable sorting method and system based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6548422B2 (en) * 2015-03-27 2019-07-24 キヤノン株式会社 INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102066057A (en) * 2009-01-22 2011-05-18 松下电器产业株式会社 Apparatus and method for controlling robot arm, robot, program for controlling robot arm, and integrated electronic circuit
CN104511900A (en) * 2013-09-26 2015-04-15 佳能株式会社 Robot calibrating apparatus and robot calibrating method, and robot apparatus and method of controlling robot apparatus
CN104511900B (en) * 2013-09-26 2017-12-15 佳能株式会社 Robot calibration device and calibration method, robot device and its control method
CN106524910A (en) * 2016-10-31 2017-03-22 潍坊路加精工有限公司 Execution mechanism visual calibration method
CN107030699A (en) * 2017-05-18 2017-08-11 广州视源电子科技股份有限公司 Position and attitude error modification method and device, robot and storage medium

Also Published As

Publication number Publication date
CN108068115A (en) 2018-05-25

Similar Documents

Publication Publication Date Title
CN104589357B (en) The DELTA robot control system of view-based access control model tracking and method
CN101402199B (en) Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN104010774B (en) System and method for automatically generating robot program
JP5383760B2 (en) Robot with workpiece mass measurement function
CN107433590A (en) Mechanical arm load quality and the gravitational compensation method of sensor fluctating on-line identification
CN103313921B (en) Image processing apparatus and image processing system
KR20180035172A (en) Simultaneous kinematic and hand-eye calibration
CN108890650A (en) PTP acceleration optimization method and device based on dynamic parameters identification
CN111775154A (en) Robot vision system
CN109159151A (en) A kind of mechanical arm space tracking tracking dynamic compensation method and system
JP2018176334A (en) Information processing device, measurement device, system, interference determination method and article manufacturing method
JP2012055999A (en) System and method for gripping object, program and robot system
CN104786036A (en) Automobile instrument automatic pointer pressing system
EP3988254A1 (en) Robot hand-eye calibration method and apparatus, computing device, medium and product
US20130111731A1 (en) Assembling apparatus and method, and assembling operation program
CN108068115B (en) Parallel robot position closed-loop calibration algorithm based on visual feedback
CN204639548U (en) A kind of automobile instrument automatic pressing needle system
CN106092053B (en) A kind of robot resetting system and its localization method
Koutecký et al. Method of photogrammetric measurement automation using TRITOP system and industrial robot
CN109641352A (en) The method and apparatus of Robot calibration
CN111360256A (en) Control device and control method suitable for bidirectional powder laying stable flow field
CN108145928A (en) Formation system
CN112171664B (en) Production line robot track compensation method, device and system based on visual identification
Cheng et al. Dynamic error modeling and compensation in high speed delta robot pick-and-place process
CN107443369A (en) A kind of robotic arm of the inverse identification of view-based access control model measurement model is without demarcation method of servo-controlling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant