WO2022160787A1 - Robot hand-eye calibration method and apparatus, readable storage medium, and robot - Google Patents

Robot hand-eye calibration method and apparatus, readable storage medium, and robot

Info

Publication number
WO2022160787A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
coordinate system
measurement point
iteration variable
transformation matrix
Prior art date
Application number
PCT/CN2021/124609
Other languages
English (en)
Chinese (zh)
Inventor
张硕
谢铮
刘益彰
陈金亮
熊友军
Original Assignee
深圳市优必选科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Publication of WO2022160787A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 Manipulators not otherwise provided for
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/0095 Means or methods for testing manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/021 Optical sensing devices
    • B25J 19/023 Optical sensing devices including video camera means
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/08 Programme-controlled manipulators characterised by modular constructions
    • B25J 9/16 Programme controls
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems

Definitions

  • the present application belongs to the field of robotics, and in particular, relates to a robot hand-eye calibration method, device, computer-readable storage medium, and a robot.
  • embodiments of the present application provide a robot hand-eye calibration method, device, computer-readable storage medium, and robot to solve the problem of low accuracy of calibration results obtained by the existing robot hand-eye calibration method.
  • a first aspect of the embodiments of the present application provides a robot hand-eye calibration method, which may include:
  • the first calibration result is iteratively optimized using a preset optimization algorithm to obtain an optimized second calibration result.
  • the iterative optimization of the first calibration result by using a preset optimization algorithm to obtain an optimized second calibration result may include:
  • the second calibration result is determined according to the updated value of the iteration variable.
  • the iterative calculation of the current value of the iteration variable according to the Jacobian matrix and the residual to obtain the updated value of the iteration variable may include:
  • the updated value of the iteration variable is calculated according to: x_(k+1) = x_k - (J_k^T J_k + μI)^(-1) J_k^T f(x_k), where:
  • x_k is the current value of the iteration variable;
  • J_k is the Jacobian matrix corresponding to the current value of the iteration variable;
  • f(x_k) is the residual corresponding to the current value of the iteration variable, that is, the value of the preset objective function at the current value of the iteration variable;
  • T is the transpose symbol;
  • I is the identity matrix;
  • μ is the preset optimization factor;
  • x_(k+1) is the updated value of the iteration variable.
  • the iteration variable can be set according to the following formula: x = [orient(b_R_c)_(3×1); b_p_c; e_p_o], where:
  • x is the iteration variable;
  • orient(b_R_c)_(3×1) is the attitude of the camera in the base coordinate system;
  • b_p_c is the position of the camera in the base coordinate system;
  • e_p_o is the position of the calibration object in the end effector coordinate system.
  • the objective function is set according to the following formula: f(x) = Σ_(i=1)^(m) z_i(x)^T z_i(x), where z_i(x) is the position residual of the calibration object at the i-th measurement point, and:
  • f(x) is the objective function;
  • i is the serial number of the measurement point, 1 ≤ i ≤ m;
  • m is the total number of measurement points;
  • b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system.
  • performing robot hand-eye calibration according to the measurement data to obtain a first calibration result may include:
  • the calibration equation system is solved to obtain the first calibration result.
  • the measurement data of each measurement point includes the joint angle of each joint of the robot and the pose of the calibration object measured by the camera;
  • establishing a calibration equation between every two measurement points according to the measurement data may include:
  • establishing the calibration equation between the first measurement point and the second measurement point according to the homogeneous transformation matrix from the base coordinate system to the end effector coordinate system at the first measurement point, the homogeneous transformation matrix from the camera coordinate system to the calibration object coordinate system at the first measurement point, the homogeneous transformation matrix from the end effector coordinate system to the base coordinate system at the second measurement point, and the homogeneous transformation matrix from the calibration object coordinate system to the camera coordinate system at the second measurement point.
  • establishing the calibration equation between the first measurement point and the second measurement point may include:
  • a second aspect of the embodiments of the present application provides a robot hand-eye calibration device, which may include:
  • the measurement data acquisition module is used to acquire the measurement data of three or more measurement points respectively;
  • a first calibration module configured to perform robot hand-eye calibration according to the measurement data to obtain a first calibration result
  • the second calibration module is configured to iteratively optimize the first calibration result by using a preset optimization algorithm to obtain an optimized second calibration result.
  • the second calibration module may include:
  • a Jacobian matrix calculation unit configured to calculate the Jacobian matrix corresponding to the current value of the iteration variable, wherein the initial value of the iteration variable is determined by the first calibration result
  • a residual error calculation unit configured to calculate the residual error corresponding to the current value of the iteration variable
  • an iterative calculation unit configured to iteratively calculate the current value of the iteration variable according to the Jacobian matrix and the residual to obtain an updated value of the iteration variable
  • an update unit configured to replace the current value of the iteration variable with the updated value when the preset iterative optimization termination condition is not satisfied, and continue to perform the next iterative optimization until the iterative optimization termination condition is satisfied ;
  • a calibration result determination unit configured to determine the second calibration result according to the updated value of the iteration variable when the iterative optimization termination condition is satisfied.
  • the iterative calculation unit is specifically configured to calculate the updated value of the iteration variable according to the following formula: x_(k+1) = x_k - (J_k^T J_k + μI)^(-1) J_k^T f(x_k), where:
  • x_k is the current value of the iteration variable;
  • J_k is the Jacobian matrix corresponding to the current value of the iteration variable;
  • f(x_k) is the residual corresponding to the current value of the iteration variable, that is, the value of the preset objective function at the current value of the iteration variable;
  • T is the transpose symbol;
  • I is the identity matrix;
  • μ is the preset optimization factor;
  • x_(k+1) is the updated value of the iteration variable.
  • the second calibration module may also include:
  • an iteration variable setting unit, configured to set the iteration variable according to the following formula: x = [orient(b_R_c)_(3×1); b_p_c; e_p_o], where:
  • x is the iteration variable;
  • orient(b_R_c)_(3×1) is the attitude of the camera in the base coordinate system;
  • b_p_c is the position of the camera in the base coordinate system;
  • e_p_o is the position of the calibration object in the end effector coordinate system.
  • an objective function setting unit, configured to set the objective function according to the following formula: f(x) = Σ_(i=1)^(m) z_i(x)^T z_i(x), where z_i(x) is the position residual of the calibration object at the i-th measurement point, and:
  • f(x) is the objective function;
  • i is the serial number of the measurement point, 1 ≤ i ≤ m;
  • m is the total number of measurement points;
  • b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system.
  • the first calibration module may include:
  • a calibration equation establishment unit configured to establish a calibration equation between every two measurement points according to the measurement data
  • a calibration equation system establishment unit which is used to combine the calibration equations between every two measurement points into a calibration equation system
  • a calibration equation set solving unit configured to solve the calibration equation set to obtain the first calibration result.
  • the calibration equation establishment unit may include:
  • a first calculation subunit configured to calculate a homogeneous transformation matrix from the base coordinate system of the first measurement point to the end effector coordinate system according to the joint angle in the first measurement point;
  • the second calculation subunit is used to calculate the homogeneous transformation matrix from the camera coordinate system of the first measurement point to the calibration object coordinate system according to the calibration object pose in the first measurement point;
  • a third calculation subunit configured to calculate a homogeneous transformation matrix from the end effector coordinate system of the second measurement point to the base coordinate system according to the joint angle in the second measurement point;
  • the fourth calculation subunit is used to calculate the homogeneous transformation matrix from the calibration object coordinate system of the second measurement point to the camera coordinate system according to the calibration object pose in the second measurement point;
  • a calibration equation establishment subunit, configured to establish the calibration equation between the first measurement point and the second measurement point according to the homogeneous transformation matrix from the base coordinate system to the end effector coordinate system at the first measurement point, the homogeneous transformation matrix from the camera coordinate system to the calibration object coordinate system at the first measurement point, the homogeneous transformation matrix from the end effector coordinate system to the base coordinate system at the second measurement point, and the homogeneous transformation matrix from the calibration object coordinate system to the camera coordinate system at the second measurement point.
  • the calibration equation establishment subunit is specifically configured to establish the calibration equation: b_T_e^(2) · e_T_b^(1) · b_T_c = b_T_c · c_T_o^(2) · o_T_c^(1), where b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system to be solved for, and the superscripts (1) and (2) denote the first and second measurement points.
  • a third aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any one of the foregoing robot hand-eye calibration methods are implemented.
  • a fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the foregoing robot hand-eye calibration methods when executing the computer program.
  • a fifth aspect of the embodiments of the present application provides a computer program product, which, when the computer program product runs on a robot, causes the robot to perform the steps of any one of the above-mentioned robot hand-eye calibration methods.
  • the embodiments of the present application have the following beneficial effects: measurement data of three or more measurement points are acquired respectively; robot hand-eye calibration is performed according to the measurement data to obtain a first calibration result; and the first calibration result is iteratively optimized using a preset optimization algorithm to obtain an optimized second calibration result.
  • in this way, the hand-eye calibration of the robot can be performed step by step: first, a rough calibration result, that is, the first calibration result, is obtained according to the measurement data, and then the error is gradually reduced through continuous iterative optimization, so as to obtain a higher-precision calibration result, that is, the second calibration result.
  • FIG. 1 is a schematic diagram of an eye-in-hand hand-eye calibration scene;
  • FIG. 2 is a schematic diagram of an eye-to-hand hand-eye calibration scene;
  • FIG. 3 is a flowchart of an embodiment of a robot hand-eye calibration method in an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of performing robot hand-eye calibration according to measurement data;
  • FIG. 5 is a schematic diagram of the mapping relationship between the coordinate systems;
  • FIG. 6 is a schematic diagram of the mapping relationship between the coordinate systems when only the position information of the calibration object is considered;
  • FIG. 7 is a schematic flowchart of iterative optimization of the first calibration result using a preset optimization algorithm
  • FIG. 8 is a structural diagram of an embodiment of a robot hand-eye calibration device in an embodiment of the application.
  • FIG. 9 is a schematic block diagram of a robot in an embodiment of the present application.
  • the term “if” may be contextually interpreted as “when”, “once”, “in response to determining”, or “in response to detecting”.
  • the phrases “if it is determined” or “if the [described condition or event] is detected” may be interpreted, depending on the context, to mean “once it is determined”, “in response to the determination”, “once the [described condition or event] is detected”, or “in response to detection of the [described condition or event]”.
  • hand-eye calibration can be divided into two specific scenarios: one is eye-in-hand (Eye-In-Hand), in which the camera is mounted on the end effector of the robot, as shown in FIG. 1; the other is eye-to-hand (Eye-To-Hand), in which the camera is mounted in a fixed position and the calibration object (Object) is mounted on the end effector of the robot, as shown in FIG. 2.
  • the eye-to-hand scene is used as an example below to describe the process of hand-eye calibration in detail.
  • the eye-in-hand scene is similar, and details are not repeated in this embodiment of the present application.
  • an embodiment of a robot hand-eye calibration method in the embodiment of the present application may include:
  • Step S301 respectively acquiring measurement data of three or more measurement points.
  • the end effector of the robot can be controlled to move, and several measurement points can be selected for data measurement during the movement process.
  • the measurement data of each measurement point includes the joint angle of each joint of the robot and the pose of the calibration object measured by the camera.
  • the specific number of measurement points can be set according to the actual situation, but at least three measurement points should be used to obtain the calibration result; one possible layout of the measurement data is sketched below.
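  • As an illustration only (not part of the application), one way to organize such measurement data in Python with NumPy and SciPy is sketched below; the class and helper names, and the assumption that the camera reports the calibration-object pose as a position plus a quaternion, are hypothetical.

```python
from dataclasses import dataclass

import numpy as np
from scipy.spatial.transform import Rotation


@dataclass
class MeasurementPoint:
    """One measurement point: the robot joint angles and the calibration-object
    pose measured by the camera (position + quaternion, in the camera frame)."""
    joint_angles: np.ndarray      # shape (n_joints,)
    obj_position_cam: np.ndarray  # c_p_o, shape (3,)
    obj_quat_cam: np.ndarray      # calibration-object orientation as (x, y, z, w)


def pose_to_matrix(position, quat_xyzw):
    """Build a 4x4 homogeneous transformation matrix from a position and quaternion."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = position
    return T
```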
  • Step S302 perform robot hand-eye calibration according to the measurement data, and obtain a first calibration result.
  • step S302 may specifically include the following processes:
  • Step S3021 establishing a calibration equation between every two measurement points according to the measurement data.
  • b_T_e is the homogeneous transformation matrix from the end effector coordinate system to the base coordinate system;
  • e_T_o is the homogeneous transformation matrix from the calibration object coordinate system to the end effector coordinate system, which is a fixed but unknown quantity;
  • b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system, that is, the quantity to be solved for in hand-eye calibration, which is also a fixed but unknown quantity;
  • c_T_o is the homogeneous transformation matrix from the calibration object coordinate system to the camera coordinate system, which is measured by the camera;
  • e_T_b is the homogeneous transformation matrix from the base coordinate system to the end effector coordinate system;
  • o_T_e is the homogeneous transformation matrix from the end effector coordinate system to the calibration object coordinate system;
  • c_T_b is the homogeneous transformation matrix from the base coordinate system to the camera coordinate system;
  • o_T_c is the homogeneous transformation matrix from the camera coordinate system to the calibration object coordinate system.
  • the left and right sides of the equal sign in the above formula describe the homogeneous transformation matrix from the coordinate system of the calibration object to the coordinate system of the end effector, which is a fixed quantity.
  • the two measurement points are recorded as the first measurement point and the second measurement point respectively, and the relationship shown in the following formula can be established: e_T_b^(1) · b_T_c · c_T_o^(1) = e_T_b^(2) · b_T_c · c_T_o^(2), since both sides equal the fixed transformation from the calibration object coordinate system to the end effector coordinate system.
  • the superscript denotes the measurement point: superscript (1) denotes the first measurement point, and superscript (2) denotes the second measurement point.
  • the homogeneous transformation matrix from the base coordinate system to the end effector coordinate system at the first measurement point can be calculated according to the joint angle of the first measurement point; the homogeneous transformation matrix from the camera coordinate system to the calibration object coordinate system at the first measurement point can be calculated according to the calibration object pose of the first measurement point; the homogeneous transformation matrix from the end effector coordinate system to the base coordinate system at the second measurement point can be calculated according to the joint angle of the second measurement point; and the homogeneous transformation matrix from the calibration object coordinate system to the camera coordinate system at the second measurement point can be calculated according to the calibration object pose of the second measurement point. The calibration equation between the two measurement points can then be established from these homogeneous transformation matrices, as sketched below.
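  • A minimal sketch (an assumption about one possible implementation, not the claimed method itself) of how such a pair of measurement points can be turned into one equation of the form A · X = X · B with X = b_T_c; forward_kinematics is a placeholder for the robot-specific kinematic model, and MeasurementPoint / pose_to_matrix are the helpers sketched earlier.

```python
import numpy as np


def pair_calibration_equation(p1, p2, forward_kinematics):
    """Form one equation A @ X = X @ B between two measurement points,
    where X = b_T_c (camera pose in the base frame) is the unknown.

    forward_kinematics(joint_angles) returns b_T_e, the 4x4 homogeneous
    transformation from the end effector frame to the base frame.
    """
    b_T_e_1 = forward_kinematics(p1.joint_angles)                   # point 1: end effector -> base
    b_T_e_2 = forward_kinematics(p2.joint_angles)                   # point 2: end effector -> base
    c_T_o_1 = pose_to_matrix(p1.obj_position_cam, p1.obj_quat_cam)  # point 1: calibration object -> camera
    c_T_o_2 = pose_to_matrix(p2.obj_position_cam, p2.obj_quat_cam)  # point 2: calibration object -> camera

    A = b_T_e_2 @ np.linalg.inv(b_T_e_1)   # b_T_e^(2) · e_T_b^(1)
    B = c_T_o_2 @ np.linalg.inv(c_T_o_1)   # c_T_o^(2) · o_T_c^(1)
    return A, B
```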
  • Step S3022 combining the calibration equations between every two measurement points into a calibration equation group.
  • the calibration equation shown above can be established for any pair of measurement points. If three or more measurement points are combined in pairs, multiple calibration equations can be obtained; combining the calibration equations between every two measurement points yields a system of calibration equations.
  • Step S3023 Solve the calibration equation set to obtain the first calibration result.
  • there are many mathematical methods for solving the calibration equation system, and any one of them can be selected according to the actual situation, which is not specifically limited in this embodiment of the present application.
  • the result obtained by solving the calibration equation system is denoted as the first calibration result.
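  • The application does not fix a particular solver for the calibration equation system. As one common choice (an assumption for illustration, not the method prescribed by the application), the rotation part of b_T_c can be estimated with a Park-Martin style least-squares step and the translation part from a stacked linear system:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def solve_ax_xb(pairs):
    """Least-squares solution of A_i X = X B_i for X = b_T_c.

    pairs: list of (A, B) 4x4 matrices, e.g. from pair_calibration_equation()
    for every combination of two measurement points.
    """
    # Rotation: R_A R_X = R_X R_B implies log(R_A) = R_X log(R_B); solve the
    # resulting orthogonal Procrustes problem with an SVD.
    H = np.zeros((3, 3))
    for A, B in pairs:
        alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
        beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
        H += np.outer(beta, alpha)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T

    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    C = np.vstack([A[:3, :3] - np.eye(3) for A, _ in pairs])
    d = np.hstack([R_X @ B[:3, 3] - A[:3, 3] for A, B in pairs])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)

    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```

  • The matrix X returned by this sketch plays the role of the first calibration result b_T_c described above.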
  • Step S303 using a preset optimization algorithm to iteratively optimize the first calibration result to obtain an optimized second calibration result.
  • the accuracy of the first calibration result often cannot meet the requirements, so it can be further optimized through a fine calibration process.
  • in this embodiment of the present application, iterative optimization is performed on the residuals using only the position information of the calibration object. If only the position information of the calibration object is considered, the mapping relationship between the coordinate systems can be abstracted as shown in FIG. 6.
  • e_p_o is the position of the calibration object in the end effector coordinate system, which is an unknown quantity;
  • c_p_o is the position of the calibration object in the camera coordinate system, which is a known quantity. Then there is a relationship as shown in the following formula: b_T_e^(i) · [e_p_o; 1] = b_T_c · [c_p_o^(i); 1] (in homogeneous coordinates), that is, the position of the calibration object in the base coordinate system computed through the robot kinematic chain equals the position computed through the camera.
  • i is the serial number of the measurement point, 1 ≤ i ≤ m;
  • m is the total number of measurement points;
  • the superscript (i) denotes the i-th measurement point;
  • orient(b_R_c)_(3×1) is the attitude of the camera in the base coordinate system, that is, the attitude information in b_T_c, including the pitch angle, roll angle, and yaw angle, in the form of a vector with three rows and one column;
  • b_p_c is the position of the camera in the base coordinate system, that is, the position information in b_T_c;
  • b_p_c and e_p_o are also in vector form with three rows and one column;
  • x is the iteration variable, x = [orient(b_R_c)_(3×1); b_p_c; e_p_o], in vector form with nine rows and one column.
  • z_i can be regarded as the dependent variable of x at this time, namely the difference between the calibration object position in the base coordinate system computed through the camera, b_R_c · c_p_o^(i) + b_p_c, and the one computed through the kinematic chain, b_T_e^(i) · [e_p_o; 1] (taking the first three rows);
  • f(x) is the objective function, f(x) = Σ_(i=1)^(m) z_i(x)^T z_i(x);
  • T is the transpose symbol, that is, z_i(x)^T is the transpose of z_i(x).
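  • Under this reading, the iteration variable, the residuals z_i(x), and the objective f(x) can be sketched as follows (the "xyz" Euler-angle convention for orient(b_R_c) is an assumption; the sketch reuses the MeasurementPoint layout and forward_kinematics placeholder introduced earlier):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def unpack(x):
    """Split the 9-element iteration variable into b_R_c, b_p_c and e_p_o."""
    b_R_c = Rotation.from_euler("xyz", x[0:3]).as_matrix()  # orient(b_R_c)_(3x1) as Euler angles
    b_p_c = x[3:6]
    e_p_o = x[6:9]
    return b_R_c, b_p_c, e_p_o


def residuals(x, points, forward_kinematics):
    """Stack z_i(x) for all measurement points.

    z_i is the difference between the calibration-object position in the base
    frame seen through the camera (b_R_c @ c_p_o^(i) + b_p_c) and the one seen
    through the kinematic chain (b_T_e^(i) applied to e_p_o).
    """
    b_R_c, b_p_c, e_p_o = unpack(x)
    z = []
    for p in points:
        b_T_e = forward_kinematics(p.joint_angles)
        via_camera = b_R_c @ p.obj_position_cam + b_p_c
        via_arm = b_T_e[:3, :3] @ e_p_o + b_T_e[:3, 3]
        z.append(via_camera - via_arm)
    return np.concatenate(z)


def objective(x, points, forward_kinematics):
    """f(x) = sum_i z_i(x)^T z_i(x)."""
    z = residuals(x, points, forward_kinematics)
    return float(z @ z)
```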
  • the algorithm process shown in FIG. 7 can be used for iterative optimization:
  • Step S3031 Calculate the Jacobian matrix corresponding to the current value of the iteration variable.
  • the initial value of the iteration variable is determined by the first calibration result. Specifically, after the first calibration result is obtained, the attitude information in it, that is, orient(b_R_c)_(3×1), and the position information in it, that is, b_p_c, are extracted; e_T_o can be further calculated from the first calibration result, and the position information in it, namely e_p_o, is extracted. Combining orient(b_R_c)_(3×1), b_p_c, and e_p_o into a vector with nine rows and one column gives the initial value of the iteration variable, as sketched below. In the first iterative optimization, the current value of the iteration variable is the initial value. The objective function is differentiated, and the current value of the iteration variable is substituted into the obtained derivative to obtain the corresponding Jacobian matrix.
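  • A possible sketch of assembling this initial value from the first calibration result (the Euler-angle convention and the use of a single measurement point to recover e_T_o = e_T_b · b_T_c · c_T_o are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def initial_iteration_variable(b_T_c, point, forward_kinematics):
    """Build x0 = [orient(b_R_c); b_p_c; e_p_o] from the first calibration result."""
    orient = Rotation.from_matrix(b_T_c[:3, :3]).as_euler("xyz")  # attitude of the camera in the base frame
    b_p_c = b_T_c[:3, 3]                                          # position of the camera in the base frame

    # e_T_o = e_T_b · b_T_c · c_T_o, evaluated at one measurement point.
    b_T_e = forward_kinematics(point.joint_angles)
    c_T_o = pose_to_matrix(point.obj_position_cam, point.obj_quat_cam)
    e_T_o = np.linalg.inv(b_T_e) @ b_T_c @ c_T_o
    e_p_o = e_T_o[:3, 3]                                          # position of the calibration object in the end effector frame

    return np.concatenate([orient, b_p_c, e_p_o])
```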
  • Step S3032 Calculate the residual corresponding to the current value of the iteration variable.
  • the current value of the iteration variable can be substituted into the objective function, and the obtained result is the corresponding residual.
  • Step S3033 Perform iterative calculation on the current value of the iteration variable according to the Jacobian matrix and the residual to obtain an updated value of the iteration variable.
  • the updated value of the iteration variable can be calculated according to the following formula: x_(k+1) = x_k - (J_k^T J_k + μI)^(-1) J_k^T f(x_k), where:
  • x_k is the current value of the iteration variable;
  • J_k is the Jacobian matrix corresponding to the current value of the iteration variable;
  • f(x_k) is the residual corresponding to the current value of the iteration variable, that is, the function value corresponding to the current value of the iteration variable in the objective function;
  • I is the identity matrix;
  • μ is the preset optimization factor, and its specific value can be set according to the actual situation, which is not specifically limited in this embodiment of the present application;
  • x_(k+1) is the updated value of the iteration variable.
  • Step S3034 judging whether a preset iterative optimization termination condition is satisfied.
  • the iterative optimization termination condition may be that the number of iterations is greater than a preset iteration-number threshold, or that the residual corresponding to the updated value of the iteration variable, that is, f(x_(k+1)), is less than a preset residual threshold.
  • the specific values of the iteration-number threshold and the residual threshold may be set according to actual conditions, which are not specifically limited in this embodiment of the present application.
  • when the iterative optimization termination condition is not satisfied, step S3035 is performed; when the iterative optimization termination condition is satisfied, step S3036 is performed.
  • Step S3035 Replace the current value of the iteration variable with the updated value.
  • then return to step S3031 to perform the next iterative optimization, until the iterative optimization termination condition is satisfied.
  • Step S3036 Determine the second calibration result according to the updated value of the iteration variable.
  • the 1st to 3rd rows of x_(k+1) are the optimized attitude of the camera in the base coordinate system, and the 4th to 6th rows of x_(k+1) are the optimized position of the camera in the base coordinate system. A sketch of the complete iterative optimization loop is given below.
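  • Putting steps S3031 to S3036 together, a minimal iterative-optimization loop matching the update rule above might look as follows (the finite-difference Jacobian, the fixed optimization factor mu, and the termination thresholds are illustrative assumptions):

```python
import numpy as np


def numerical_jacobian(fun, x, eps=1e-6):
    """Finite-difference Jacobian of the stacked residual vector fun(x)."""
    f0 = fun(x)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (fun(x + dx) - f0) / eps
    return J


def refine_calibration(x0, fun, mu=1e-3, max_iters=100, residual_tol=1e-10):
    """Iteratively optimize the first calibration result.

    fun(x) returns the stacked residual vector (see residuals() above). The
    update is x_(k+1) = x_k - (J_k^T J_k + mu*I)^(-1) J_k^T f(x_k); the loop
    stops when the iteration count or the residual threshold is reached.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        f = fun(x)                      # residual at the current value of the iteration variable
        J = numerical_jacobian(fun, x)  # Jacobian at the current value of the iteration variable
        x = x - np.linalg.solve(J.T @ J + mu * np.eye(x.size), J.T @ f)
        if np.dot(fun(x), fun(x)) < residual_tol:
            break
    # x[0:3]: optimized camera attitude in the base frame; x[3:6]: optimized camera position.
    return x
```

  • In this sketch, fun can be bound to the residuals() helper, for example fun = lambda v: residuals(v, points, forward_kinematics), and the returned vector yields the second calibration result b_T_c after converting the first three components back into a rotation matrix.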
  • in the embodiments of the present application, measurement data of three or more measurement points are acquired respectively; robot hand-eye calibration is performed according to the measurement data to obtain a first calibration result; and the first calibration result is iteratively optimized using a preset optimization algorithm to obtain an optimized second calibration result.
  • in this way, the hand-eye calibration of the robot can be performed step by step: first, a rough calibration result, that is, the first calibration result, is obtained according to the measurement data, and then the error is gradually reduced through continuous iterative optimization, so as to obtain a higher-precision calibration result, that is, the second calibration result.
  • FIG. 8 shows a structural diagram of an embodiment of a robot hand-eye calibration device provided by an embodiment of the present application.
  • a robot hand-eye calibration device may include:
  • a measurement data acquisition module 801 configured to acquire measurement data of three or more measurement points respectively;
  • a first calibration module 802 configured to perform robot hand-eye calibration according to the measurement data to obtain a first calibration result
  • the second calibration module 803 is configured to iteratively optimize the first calibration result by using a preset optimization algorithm to obtain an optimized second calibration result.
  • the second calibration module may include:
  • a Jacobian matrix calculation unit configured to calculate the Jacobian matrix corresponding to the current value of the iteration variable, wherein the initial value of the iteration variable is determined by the first calibration result
  • a residual error calculation unit configured to calculate the residual error corresponding to the current value of the iteration variable
  • an iterative calculation unit configured to iteratively calculate the current value of the iteration variable according to the Jacobian matrix and the residual to obtain an updated value of the iteration variable
  • an update unit configured to replace the current value of the iteration variable with the updated value when the preset iterative optimization termination condition is not satisfied, and continue to perform the next iterative optimization until the iterative optimization termination condition is satisfied ;
  • a calibration result determination unit configured to determine the second calibration result according to the updated value of the iteration variable when the iterative optimization termination condition is satisfied.
  • the iterative calculation unit is specifically configured to calculate the updated value of the iteration variable according to the following formula: x_(k+1) = x_k - (J_k^T J_k + μI)^(-1) J_k^T f(x_k), where:
  • x_k is the current value of the iteration variable;
  • J_k is the Jacobian matrix corresponding to the current value of the iteration variable;
  • f(x_k) is the residual corresponding to the current value of the iteration variable, that is, the value of the preset objective function at the current value of the iteration variable;
  • T is the transpose symbol;
  • I is the identity matrix;
  • μ is the preset optimization factor;
  • x_(k+1) is the updated value of the iteration variable.
  • the second calibration module may also include:
  • an iteration variable setting unit, configured to set the iteration variable according to the following formula: x = [orient(b_R_c)_(3×1); b_p_c; e_p_o], where:
  • x is the iteration variable;
  • orient(b_R_c)_(3×1) is the attitude of the camera in the base coordinate system;
  • b_p_c is the position of the camera in the base coordinate system;
  • e_p_o is the position of the calibration object in the end effector coordinate system.
  • an objective function setting unit, configured to set the objective function according to the following formula: f(x) = Σ_(i=1)^(m) z_i(x)^T z_i(x), where z_i(x) is the position residual of the calibration object at the i-th measurement point, and:
  • f(x) is the objective function;
  • i is the serial number of the measurement point, 1 ≤ i ≤ m;
  • m is the total number of measurement points;
  • b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system.
  • the first calibration module may include:
  • a calibration equation establishment unit configured to establish a calibration equation between every two measurement points according to the measurement data
  • a calibration equation system establishment unit which is used to combine the calibration equations between every two measurement points into a calibration equation system
  • a calibration equation set solving unit configured to solve the calibration equation set to obtain the first calibration result.
  • the calibration equation establishment unit may include:
  • a first calculation subunit configured to calculate a homogeneous transformation matrix from the base coordinate system of the first measurement point to the end effector coordinate system according to the joint angle in the first measurement point;
  • the second calculation subunit is used to calculate the homogeneous transformation matrix from the camera coordinate system of the first measurement point to the calibration object coordinate system according to the pose of the calibration object in the first measurement point;
  • a third calculation subunit configured to calculate a homogeneous transformation matrix from the end effector coordinate system of the second measurement point to the base coordinate system according to the joint angle in the second measurement point;
  • the fourth calculation subunit is used to calculate the homogeneous transformation matrix from the calibration object coordinate system of the second measurement point to the camera coordinate system according to the calibration object pose in the second measurement point;
  • a calibration equation establishment subunit, configured to establish the calibration equation between the first measurement point and the second measurement point according to the homogeneous transformation matrix from the base coordinate system to the end effector coordinate system at the first measurement point, the homogeneous transformation matrix from the camera coordinate system to the calibration object coordinate system at the first measurement point, the homogeneous transformation matrix from the end effector coordinate system to the base coordinate system at the second measurement point, and the homogeneous transformation matrix from the calibration object coordinate system to the camera coordinate system at the second measurement point.
  • the calibration equation establishment subunit is specifically configured to establish the calibration equation: b_T_e^(2) · e_T_b^(1) · b_T_c = b_T_c · c_T_o^(2) · o_T_c^(1), where b_T_c is the homogeneous transformation matrix from the camera coordinate system to the base coordinate system to be solved for, and the superscripts (1) and (2) denote the first and second measurement points.
  • FIG. 9 shows a schematic block diagram of a robot provided by an embodiment of the present application. For convenience of description, only parts related to the embodiment of the present application are shown.
  • the robot 9 of this embodiment includes a processor 90 , a memory 91 , and a computer program 92 stored in the memory 91 and executable on the processor 90 .
  • when the processor 90 executes the computer program 92, the steps in each of the above embodiments of the robot hand-eye calibration method are implemented, for example, steps S301 to S303 shown in FIG. 3.
  • alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the foregoing device embodiments, for example, the functions of the modules 801 to 803 shown in FIG. 8, are implemented.
  • the computer program 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 92 in the robot 9 .
  • FIG. 9 is only an example of the robot 9 and does not constitute a limitation on the robot 9; the robot 9 may include more or fewer components than shown in the figure, combine some components, or have different components; for example, the robot 9 may also include input and output devices, network access devices, buses, and the like.
  • the processor 90 may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the robot 9 , such as a hard disk or a memory of the robot 9 .
  • the memory 91 can also be an external storage device of the robot 9, such as a plug-in hard disk equipped on the robot 9, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, Flash card (Flash Card) and so on.
  • the memory 91 may also include both an internal storage unit of the robot 9 and an external storage device.
  • the memory 91 is used to store the computer program and other programs and data required by the robot 9 .
  • the memory 91 may also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/robot and method may be implemented in other ways.
  • the device/robot embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments, which can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, software distribution media, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media exclude electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

A robot hand-eye calibration method, comprising: respectively acquiring measurement data of at least three measurement points; performing robot hand-eye calibration according to the measurement data to obtain a first calibration result; and using a preset optimization algorithm to perform iterative optimization on the first calibration result to obtain an optimized second calibration result. By means of the method, robot hand-eye calibration can be performed step by step: first, a rough calibration result, that is, a first calibration result, is obtained according to the measurement data, and then the error is gradually reduced by continuous iterative optimization, so as to obtain a higher-precision calibration result, that is, a second calibration result. Also provided are a robot hand-eye calibration apparatus, a readable storage medium, and a robot.
PCT/CN2021/124609 2021-01-26 2021-10-19 Robot hand-eye calibration method and apparatus, readable storage medium, and robot WO2022160787A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110114619.5 2021-01-26
CN202110114619.5A CN112936301B (zh) 2021-01-26 2021-01-26 一种机器人手眼标定方法、装置、可读存储介质及机器人

Publications (1)

Publication Number Publication Date
WO2022160787A1 true WO2022160787A1 (fr) 2022-08-04

Family

ID=76238322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124609 WO2022160787A1 (fr) 2021-01-26 2021-10-19 Robot hand-eye calibration method and apparatus, readable storage medium, and robot

Country Status (2)

Country Link
CN (1) CN112936301B (fr)
WO (1) WO2022160787A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115533922A (zh) * 2022-11-29 2022-12-30 北京航空航天大学杭州创新研究院 位姿关系标定方法及装置、计算机设备和可读存储介质
CN115861445A (zh) * 2022-12-23 2023-03-28 广东工业大学 一种基于标定板三维点云的手眼标定方法
CN116038720A (zh) * 2023-04-03 2023-05-02 广东工业大学 一种基于点云配准的手眼标定方法、装置及设备
CN116673941A (zh) * 2023-03-28 2023-09-01 北京纳通医用机器人科技有限公司 基于机械臂辅助的手术控制方法和装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112936301B (zh) * 2021-01-26 2023-03-03 深圳市优必选科技股份有限公司 一种机器人手眼标定方法、装置、可读存储介质及机器人
CN113672866A (zh) * 2021-07-27 2021-11-19 深圳市未来感知科技有限公司 测量点坐标标定方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040102911A1 (en) * 2002-11-21 2004-05-27 Samsung Electronics Co., Ltd. Hand/eye calibration method using projective invariant shape descriptor of 2-dimensional image
CN107993227A (zh) * 2017-12-15 2018-05-04 深圳先进技术研究院 一种获取3d腹腔镜手眼矩阵的方法和装置
KR101964332B1 (ko) * 2017-10-13 2019-07-31 재단법인대구경북과학기술원 핸드-아이 캘리브레이션 방법, 이를 실행하기 위한 컴퓨터 프로그램 및 로봇 시스템
CN110842914A (zh) * 2019-10-15 2020-02-28 上海交通大学 基于差分进化算法的手眼标定参数辨识方法、系统及介质
CN111986271A (zh) * 2020-09-04 2020-11-24 廊坊和易生活网络科技股份有限公司 一种基于光束平差的机器人方位与手眼关系同时标定方法
CN112936301A (zh) * 2021-01-26 2021-06-11 深圳市优必选科技股份有限公司 一种机器人手眼标定方法、装置、可读存储介质及机器人

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102162738B (zh) * 2010-12-08 2012-11-21 中国科学院自动化研究所 摄像头与惯性传感器组合定位定姿系统的标定方法
CN102663767B (zh) * 2012-05-08 2014-08-06 北京信息科技大学 视觉测量系统的相机参数标定优化方法
CN104021554B (zh) * 2014-04-23 2017-03-01 北京大学深圳研究生院 基于部分传感器信息的相机‑惯性传感器标定方法
CN105014667B (zh) * 2015-08-06 2017-03-08 浙江大学 一种基于像素空间优化的相机与机器人相对位姿标定方法
JP7003462B2 (ja) * 2017-07-11 2022-01-20 セイコーエプソン株式会社 ロボットの制御装置、ロボットシステム、並びに、カメラの校正方法
CN109483516B (zh) * 2018-10-16 2020-06-05 浙江大学 一种基于空间距离和极线约束的机械臂手眼标定方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040102911A1 (en) * 2002-11-21 2004-05-27 Samsung Electronics Co., Ltd. Hand/eye calibration method using projective invariant shape descriptor of 2-dimensional image
KR101964332B1 (ko) * 2017-10-13 2019-07-31 재단법인대구경북과학기술원 핸드-아이 캘리브레이션 방법, 이를 실행하기 위한 컴퓨터 프로그램 및 로봇 시스템
CN107993227A (zh) * 2017-12-15 2018-05-04 深圳先进技术研究院 一种获取3d腹腔镜手眼矩阵的方法和装置
CN110842914A (zh) * 2019-10-15 2020-02-28 上海交通大学 基于差分进化算法的手眼标定参数辨识方法、系统及介质
CN111986271A (zh) * 2020-09-04 2020-11-24 廊坊和易生活网络科技股份有限公司 一种基于光束平差的机器人方位与手眼关系同时标定方法
CN112936301A (zh) * 2021-01-26 2021-06-11 深圳市优必选科技股份有限公司 一种机器人手眼标定方法、装置、可读存储介质及机器人

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115533922A (zh) * 2022-11-29 2022-12-30 北京航空航天大学杭州创新研究院 位姿关系标定方法及装置、计算机设备和可读存储介质
CN115861445A (zh) * 2022-12-23 2023-03-28 广东工业大学 一种基于标定板三维点云的手眼标定方法
CN116673941A (zh) * 2023-03-28 2023-09-01 北京纳通医用机器人科技有限公司 基于机械臂辅助的手术控制方法和装置
CN116673941B (zh) * 2023-03-28 2024-05-14 北京纳通医用机器人科技有限公司 基于机械臂辅助的手术控制方法和装置
CN116038720A (zh) * 2023-04-03 2023-05-02 广东工业大学 一种基于点云配准的手眼标定方法、装置及设备
CN116038720B (zh) * 2023-04-03 2023-08-11 广东工业大学 一种基于点云配准的手眼标定方法、装置及设备

Also Published As

Publication number Publication date
CN112936301A (zh) 2021-06-11
CN112936301B (zh) 2023-03-03

Similar Documents

Publication Publication Date Title
WO2022160787A1 (fr) Robot hand-eye calibration method and apparatus, readable storage medium, and robot
WO2021115331A1 (fr) Appareil, dispositif et procédé de positionnement de coordonnées basé sur une triangulation, et support d'enregistrement
WO2020207190A1 (fr) Procédé de détermination d'informations tridimensionnelles, dispositif de détermination d'informations tridimensionnelles et appareil terminal
CN111015655B (zh) 机械臂抓取方法、装置、计算机可读存储介质及机器人
WO2022198994A1 (fr) Procédé et appareil de planification de mouvement de bras robotisé, ainsi que support de stockage lisible et bras robotisé
US20220254059A1 (en) Data Processing Method and Related Device
US20220319050A1 (en) Calibration method and apparatus, processor, electronic device, and storage medium
WO2022193639A1 (fr) Bras mécanique, ainsi que procédé et appareil de planification de trajectoire associés
WO2022121003A1 (fr) Procédé et dispositif de commande de robot, support de stockage lisible par ordinateur, et robot
CN113997295B (zh) 机械臂的手眼标定方法、装置、电子设备及存储介质
WO2021115061A1 (fr) Procédé et appareil de segmentation d'image, et serveur
WO2020103220A1 (fr) Procédé et dispositif de positionnement de produit, et dispositif terminal
US20220327740A1 (en) Registration method and registration apparatus for autonomous vehicle
WO2022193640A1 (fr) Procédé et appareil d'étalonnage de robot, et robot et support de stockage
WO2022205845A1 (fr) Procédé et appareil d'étalonnage de pose, et robot et support de stockage lisible par ordinateur
CN114186189A (zh) 坐标转换矩阵的计算方法、装置、设备及可读存储介质
CN112652020B (zh) 一种基于AdaLAM算法的视觉SLAM方法
WO2022198992A1 (fr) Procédé et appareil pour planifier un mouvement de bras robotique, et support de stockage lisible et bras robotique
CN114359400A (zh) 一种外参标定方法、装置、计算机可读存储介质及机器人
CN112991463A (zh) 相机标定方法、装置、设备、存储介质和程序产品
CN116109685B (zh) 一种零件点云配准方法、装置、设备及介质
CN114708336B (zh) 多相机在线标定方法、装置、电子设备和计算机可读介质
CN115661592B (zh) 焊缝识别方法、装置、计算机设备及存储介质
CN117475399B (zh) 车道线拟合方法、电子设备及可读介质
CN114399555B (zh) 数据在线标定方法、装置、电子设备和计算机可读介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922378

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922378

Country of ref document: EP

Kind code of ref document: A1