WO2020173240A1 - Image acquisition apparatus calibration method and apparatus, computer device, and storage medium - Google Patents
- Publication number
- WO2020173240A1 (PCT/CN2020/072491)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- coordinates
- pixel coordinates
- feature point
- preset feature
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definitions
- This application relates to the field of machine vision technology, and for example, to an image acquisition device calibration method and apparatus, computer equipment, and a storage medium.

Background technique
- In a robot vision positioning device, a tool point is usually calibrated at the end of the robot.
- The robot drives the camera to move so that the product is in the camera's field of view; the coordinates of a point in the product image pixel coordinates are calculated in the robot tool coordinate system, and the robot is guided to grab the product at the specified position, ensuring consistency of the product grabbing position.
- The robot tool point is usually taught and calibrated with 3 or 4 points by human observation. Tool points taught in this way carry errors, making it difficult to ensure tool point accuracy; this in turn affects calibration accuracy and results in low accuracy of the final robot vision positioning equipment.

Summary of the invention
- In view of this, the present application provides an image acquisition device calibration method and apparatus, computer equipment, and a storage medium that can improve calibration accuracy, so as to improve robot vision positioning accuracy.
- An embodiment of the application provides a method for calibrating an image acquisition device, where the image acquisition device is applied to a robot visual positioning device, and the method includes:
- acquiring the pixel coordinates of preset feature points and the robot coordinates corresponding to those pixel coordinates when the robot moves to different positions of the area to be calibrated; obtaining, according to the pixel coordinates of the preset feature points and the corresponding robot coordinates, the translational position relationship between the pixel coordinates of the preset feature points and the robot coordinates; acquiring the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot rotates to different angles;
- obtaining the rotation center position according to the pixel coordinates of the preset feature points at the different angles and the corresponding robot coordinates; and obtaining, according to the translational position relationship and the rotation center position, the plane conversion relationship between the robot and the image acquisition device, so as to calibrate the image acquisition device.
- An embodiment of the present application also provides a calibration device for an image acquisition device, the image acquisition device is applied to a robot vision positioning device, and the device includes:
- the first information acquisition module is configured to acquire the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points when the robot moves to different positions of the area to be calibrated;
- the translational relationship acquisition module is configured to obtain the translational position relationship between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points;
- the second information acquiring module is configured to acquire the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points when the robot rotates to different angles;
- a rotation center obtaining module configured to obtain the rotation center position according to the pixel coordinates of the preset feature points at the different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature points;
- the calibration information acquisition module is configured to obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
- An embodiment of the present application further provides a computer device, which includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
- acquiring the pixel coordinates of preset feature points and the corresponding robot coordinates when the robot moves to different positions of the area to be calibrated; obtaining, according to these, the translational position relationship between the pixel coordinates of the preset feature points and the robot coordinates; acquiring the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot rotates to different angles;
- obtaining the rotation center position according to the pixel coordinates of the preset feature points at the different angles and the corresponding robot coordinates; and obtaining, according to the translational position relationship and the rotation center position, the plane conversion relationship between the robot and the image acquisition device, so as to calibrate the image acquisition device.
- the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
- Figure 1 is an application scenario diagram of an image acquisition device calibration method in an embodiment of this application
- Figure 2 is a schematic flow chart of an image acquisition device calibration method in an embodiment of this application
- Figure 3 is a schematic flow chart of the step of obtaining the translational position relationship in an embodiment of this application;
- FIG. 4 is a schematic diagram of the robot position translation in the XY axis direction in an embodiment of the application
- FIG. 5 is a schematic diagram of the area to be calibrated equally divided in an embodiment of the application;
- FIG. 6 is a schematic flow chart of the step of obtaining the position of the rotation center in an embodiment of the application
- FIG. 7 is a schematic flow chart of the step of obtaining the rotation center conversion relationship in an embodiment of the application
- FIG. 9 is a schematic diagram of a position of the robot during rotation in an embodiment of the application
- Fig. 10 is a schematic flow chart of a method for calibrating an image capture device in another embodiment of this application
- Fig. 11 is a schematic diagram of the application of the method for calibrating an image capture device in an embodiment of this application
- Fig. 12 is a block diagram of an image acquisition device calibration apparatus in an embodiment of this application;
- Fig. 13 is an internal structure diagram of a computer device in an embodiment of the application.

DETAILED DESCRIPTION OF THE EMBODIMENTS

The following describes the application with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application, and are not used to limit the application.
- the image acquisition device calibration method provided in this application can be applied to the application environment as shown in FIG. 1.
- The end of the robot 10 includes a movable tool arm 11; an image acquisition device 20 and a standard fixture 30 can be installed at the end of the robot 10 through the tool arm 11.
- A fixed feature point 40 is placed below the image acquisition device 20.
- the image acquisition device may be a camera.
- a method for calibrating an image acquisition device is provided.
- the method is applied to the image acquisition device in the robot visual positioning equipment in FIG. 1 as an example for description.
- The method provided in this embodiment includes the following steps:
- Step 202 Obtain the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points when the robot moves to different positions of the area to be calibrated.
- Step 204 Obtain the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point.
- In an embodiment, the area to be calibrated is equally divided into a preset number of sub-areas, the pixel coordinates of the centers of these sub-areas are extracted, and rough robot positions are calculated through the rough translational position conversion relationship and the extracted center pixel coordinates. The robot is moved to the calculated rough positions, and the pixel coordinates of the circular feature point at each rough position are recalculated through image recognition, thereby obtaining the precise translational position relationship between the pixel coordinates of the feature point and the robot coordinates.
- the rough translation position conversion relationship may also be referred to as the first translation position conversion relationship
- the precise translation position relationship may also be referred to as the second translation position conversion relationship
- the rough position may also be referred to as the target position.
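The translational position relationship described above maps pixel coordinates of the feature point to robot coordinates. The patent does not name a specific fitting method; a common choice, sketched below with hypothetical sample correspondences, is a least-squares affine transform solved through the normal equations:

```python
# Least-squares fit of an affine map from pixel coordinates (u, v) to robot
# coordinates (x, y):  x = a*u + b*v + c,  y = d*u + e*v + f.
# This is an illustrative sketch, not the patent's exact formulation.

def fit_affine(pixels, robots):
    """Fit (u,v) -> (x,y); needs at least 3 non-collinear correspondences."""
    su2 = sum(u * u for u, v in pixels)
    suv = sum(u * v for u, v in pixels)
    sv2 = sum(v * v for u, v in pixels)
    su = sum(u for u, v in pixels)
    sv = sum(v for u, v in pixels)
    n = float(len(pixels))
    # Normal-equation matrix M = A^T A for rows A_i = [u_i, v_i, 1].
    M = [[su2, suv, su], [suv, sv2, sv], [su, sv, n]]

    def solve3(mat, b):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        a = [mat[i][:] + [b[i]] for i in range(3)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(a[r][i]))
            a[i], a[p] = a[p], a[i]
            for r in range(i + 1, 3):
                f = a[r][i] / a[i][i]
                for c in range(i, 4):
                    a[r][c] -= f * a[i][c]
        x = [0.0] * 3
        for i in (2, 1, 0):
            x[i] = (a[i][3] - sum(a[i][j] * x[j] for j in range(i + 1, 3))) / a[i][i]
        return x

    bx = [sum(x * u for (u, v), (x, y) in zip(pixels, robots)),
          sum(x * v for (u, v), (x, y) in zip(pixels, robots)),
          sum(x for (x, y) in robots)]
    by = [sum(y * u for (u, v), (x, y) in zip(pixels, robots)),
          sum(y * v for (u, v), (x, y) in zip(pixels, robots)),
          sum(y for (x, y) in robots)]
    return solve3(M, bx), solve3(M, by)  # rows (a,b,c) and (d,e,f)

def apply_affine(params, uv):
    (a, b, c), (d, e, f) = params
    u, v = uv
    return a * u + b * v + c, d * u + e * v + f

# Hypothetical correspondences generated from a known affine map.
pixels = [(0, 0), (100, 0), (0, 100), (100, 100)]
robots = [(5.0, 3.0), (15.0, 1.0), (7.0, 13.0), (17.0, 11.0)]
params = fit_affine(pixels, robots)
pred = apply_affine(params, (50, 50))
```

With four exact correspondences the fit recovers the underlying map, so `pred` is the robot position for the image center of this hypothetical grid.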
- Step 206 Obtain the pixel coordinates of the preset feature points when the robot rotates to different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature points.
- Step 208 Obtain the rotation center position according to the pixel coordinates of the preset feature points at different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature points.
- In an embodiment, the camera is driven by the robot to rotate with a preset step length, the arc traced by the center of the circular feature point at the different angles is obtained, and the arc is fitted to obtain the rough rotation center of the robot. The difference between the rough rotation center and the initial rotation position of the robot is calculated, and the rotation center conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates during rotation is found.
- In an embodiment, the camera is then driven by the robot to rotate with another fixed step length. Each time the robot rotates, the robot coordinates corresponding to the current pixel coordinates of the circular feature point are obtained through the rotation center conversion relationship; the difference between these coordinates and the actual robot coordinates at that moment is calculated and compensated to the robot coordinates, and the robot moves to the specified position. In an embodiment, after the robot moves to the corresponding robot coordinates, the robot coordinate error is obtained according to the current pixel coordinates of the circular feature point and the pixel coordinates of the circular feature point when the robot was at the initial position, and the error is compensated to the robot coordinates.
- After the robot completes its rotation within 360 degrees, all coordinate errors are fitted, the fitted coordinate errors are compensated to the rough rotation center in robot coordinates, and the difference between the compensated rotation center coordinates and the initial position coordinates is calculated, so that the robot can accurately rotate around the initial position of the circular feature point through the calculated difference.
- Step 210 Obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
- In the above method, the translational position relationship between the pixel coordinates of the preset feature points and the robot coordinates is obtained from the pixel coordinates of the feature points and the corresponding robot coordinates as the robot translates to different positions in the area to be calibrated; the rotation center position is then obtained from the pixel coordinates of the feature points and the corresponding robot coordinates as the robot rotates to different angles; and the plane conversion relationship between the robot and the image acquisition device is obtained from the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
- No manual calibration of robot tool points is required; the feature point only needs to be placed in the field of view of the image acquisition device.
- The above solution can improve calibration accuracy, thereby improving the accuracy of the robot vision positioning equipment, without requiring additional auxiliary hardware; the setup is simple and efficient, and greatly reduces the difficulty of machine adjustment for the operator.
- Step 302: Obtain the rough translational position relationship according to the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot translates to different preset positions along the X-axis direction and the Y-axis direction in the area to be calibrated, where the X-axis and Y-axis are mutually perpendicular axes on the same horizontal plane in the three-dimensional space coordinates. Step 304: Divide the area to be calibrated into a preset number of sub-areas, and extract the pixel coordinates of the center of each sub-area. Step 306: Obtain a preset number of rough positions according to the rough translational position relationship and the pixel coordinates of the centers of the sub-areas.
- In an embodiment, obtaining the rough translational position relationship includes: acquiring the first pixel coordinates of the preset feature point and the corresponding first robot coordinates when the robot translates from the current position to a preset first position along the X-axis direction, where the preset first position is obtained by bisection; acquiring the second pixel coordinates of the preset feature point and the corresponding second robot coordinates when the robot translates from the current position to a preset second position along the Y-axis direction, where the preset second position is obtained by bisection; and obtaining the rough translational position relationship according to the first pixel coordinates and first robot coordinates and the second pixel coordinates and second robot coordinates.
- In an embodiment, the area to be calibrated is equally divided into 9 sub-areas in the camera's field of view, and the center pixel coordinates of the 9 sub-areas are extracted as P_0(u_0, v_0), P_1(u_1, v_1), ..., P_8(u_8, v_8), where 0 ≤ i ≤ 8; the corresponding rough robot positions are then calculated through the conversion matrix.
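The 3x3 sub-division of the field of view can be sketched as follows; the image resolution used here is a hypothetical example, not a value taken from the patent:

```python
def grid_centers(width, height, rows=3, cols=3):
    """Pixel coordinates of the centers of rows x cols equal sub-areas,
    listed row by row (P_0 ... P_8 for the default 3x3 split)."""
    cw, ch = width / cols, height / rows
    return [((c + 0.5) * cw, (r + 0.5) * ch)
            for r in range(rows) for c in range(cols)]

# Hypothetical camera resolution.
centers = grid_centers(1920, 1200)
```

Each of these 9 centers is then mapped through the rough translational conversion relationship to obtain a rough robot position.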
- In an embodiment, obtaining the rotation center position includes: Step 602: Obtain the rotation center conversion relationship and the rough rotation center, where the rotation center conversion relationship is used to characterize the relationship between the pixel coordinates of the preset feature point and the robot coordinates when the robot rotates. Step 604: Obtain, according to the rotation center conversion relationship, the robot coordinates corresponding to the pixel coordinates of the preset feature point when the robot rotates from the initial position to different angles with a preset first step length. Step 606: Obtain the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates. Step 608: Obtain the robot coordinate error according to the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates and the pixel coordinates of the preset feature point at the initial position of the robot. Step 610: Fit the coordinate errors within the full angle range, and compensate the fitted coordinate error to the coordinates of the rough rotation center to obtain the precise rotation center position.
- In an embodiment, obtaining the rotation center conversion relationship and the rough rotation center includes: Step 702: Acquire the pixel coordinates of the preset feature point when the robot rotates from the initial position to different angles with a preset second step length, and fit the pixel coordinates to obtain the rough rotation center. Step 704: Calculate the rotation error between the coordinates of the rough rotation center and the pixel coordinates of the preset feature point at the initial position. Step 706: Obtain the rotation center conversion relationship according to the rotation error, the pixel coordinates, and the robot coordinates corresponding to the pixel coordinates.
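Step 702 fits the feature-point pixel coordinates observed at different rotation angles to recover the rough rotation center. The patent does not name the fitting method; a common choice is the algebraic (Kasa) least-squares circle fit, sketched here with synthetic points on a known circle:

```python
import math

def fit_circle_center(points):
    """Algebraic (Kasa) least-squares circle fit: returns the center (uc, vc).
    Solves u^2 + v^2 = A*u + B*v + C in the least-squares sense; the circle
    center is then (A/2, B/2)."""
    su2 = sum(u * u for u, v in points)
    suv = sum(u * v for u, v in points)
    sv2 = sum(v * v for u, v in points)
    su = sum(u for u, v in points)
    sv = sum(v for u, v in points)
    n = float(len(points))
    r0 = sum((u * u + v * v) * u for u, v in points)
    r1 = sum((u * u + v * v) * v for u, v in points)
    r2 = sum(u * u + v * v for u, v in points)
    # Augmented 3x3 normal-equation system, solved by Gaussian elimination.
    a = [[su2, suv, su, r0], [suv, sv2, sv, r1], [su, sv, n, r2]]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 4):
                a[r][c] -= f * a[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (a[i][3] - sum(a[i][j] * x[j] for j in range(i + 1, 3))) / a[i][i]
    return x[0] / 2.0, x[1] / 2.0

# Synthetic feature-point centers on a circle of center (400, 300), radius 120,
# as would be observed while the robot rotates the camera in fixed steps.
pts = [(400 + 120 * math.cos(t), 300 + 120 * math.sin(t))
       for t in (0.0, 0.7, 1.4, 2.1, 2.8)]
uc, vc = fit_circle_center(pts)
```

On exact circular data the algebraic fit recovers the center exactly; with noisy image detections it gives the least-squares rough rotation center that steps 704–706 then refine.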
- In an embodiment, the method further includes: acquiring the maximum forward angle within which the image acquisition device can search for the preset feature point when the robot rotates around the Z axis in the positive direction in the area to be calibrated, and the maximum reverse angle within which the image acquisition device can search for the preset feature point when the robot rotates around the Z axis in the reverse direction, where the Z axis is the axis perpendicular to the horizontal plane among the three-dimensional space coordinate axes; and obtaining the preset second step length of the robot according to the maximum forward angle and the maximum reverse angle.
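The patent does not give the formula for deriving the second step length from the two limit angles. One plausible construction, shown here purely as an assumption, is to divide the total angle range in which the feature point stays visible into a fixed number of rotation steps (the step count is a hypothetical parameter):

```python
def second_step_length(max_forward_deg, max_reverse_deg, samples=12):
    """Sketch (assumption, not the patent's formula): spread a fixed number
    of rotation samples over the visible angle range."""
    total_range = max_forward_deg + abs(max_reverse_deg)
    return total_range / samples

# E.g. feature point visible from -30 deg to +30 deg around the Z axis.
step = second_step_length(30.0, -30.0)
```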
- In an embodiment, the robot returns to the initial position Q_0(x_0, y_0) and rotates around the Z axis over a relative angle range of 0–360°, with the step length dθ set to a fixed value; the theoretical positions of the corresponding circular feature point are calculated through the rotation center conversion relationship.
- The robot coordinates corresponding to P_0(u_0, v_0) are the rotation center (x_c, y_c), and the theoretical robot coordinates after m rotation steps are:
- X_m = x_c + ((x_0 − x_c)·cos(m·dθ) − (y_0 − y_c)·sin(m·dθ))
- Y_m = y_c + ((x_0 − x_c)·sin(m·dθ) + (y_0 − y_c)·cos(m·dθ))
- where dθ is the fixed step value and 0 ≤ m ≤ n.
- The robot moves to the calculated coordinate positions Q_0(X_0, Y_0), Q_1(X_1, Y_1), ..., Q_m(X_m, Y_m), and at each position the circular feature point center image coordinates P_0(u_0, v_0), P_1(u_1, v_1), ..., P_m(u_m, v_m) are recognized; the robot coordinate error is calculated between each actual circular feature point center image coordinate P_m(u_m, v_m) and the theoretical value P_0(u_0, v_0).
- The positions of the robot rotating around P_0(u_0, v_0) are thereby corrected to Q'_0(x_0, y_0), Q'_1, ...
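The rotation formula above maps the initial robot position (x_0, y_0) through a rotation of m·dθ about the rotation center (x_c, y_c). A minimal sketch (the Y_m expression is the standard rotation counterpart of the X_m expression, reconstructed where the source text is garbled):

```python
import math

def rotated_position(x0, y0, xc, yc, m, d_theta_deg):
    """Theoretical robot position after m rotation steps of d_theta degrees
    about the rotation center (xc, yc)."""
    t = math.radians(m * d_theta_deg)
    xm = xc + (x0 - xc) * math.cos(t) - (y0 - yc) * math.sin(t)
    ym = yc + (x0 - xc) * math.sin(t) + (y0 - yc) * math.cos(t)
    return xm, ym

# 9 steps of 10 degrees = a quarter turn: (10, 0) about the origin -> (0, 10).
pos = rotated_position(10.0, 0.0, 0.0, 0.0, 9, 10.0)
```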
- In an embodiment, the method further includes: calculating the difference between the coordinates of the precise rotation center position and the robot coordinates of the initial position. Correspondingly, obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position includes: obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship, the rotation center position, and the difference.
- In an embodiment, every time the robot rotates, the conversion relationship A_2 between the rotation center and the robot coordinates obtained in step (3) is used to compensate the calculated difference to the robot coordinates, and the robot moves to the specified position. After the robot moves into place, the robot coordinate error between the actual coordinates of the circular feature point and the theoretical coordinates is calculated and compensated to the robot coordinates. After the robot completes rotation within 360 degrees, the rotation center of the error-compensated robot coordinates is fitted, and the difference between the fitted rotation center coordinates and the initial position of the robot is calculated, so that the robot can accurately rotate around the initial position of the circular feature point through the calculated difference.
- In an embodiment, the coordinates of the position where the robot grabs the product under the camera are (Sqx, Sqy, Sqr), the feature point coordinates of the template product image in the camera field of view are (Mpx, Mpy, Mpr), and the position coordinates at which the robot grabs the template product are (Tqx, Tqy, Tqr); the robot takes the photo at position (Vqx, Vqy, Vqr).
- The coordinates (Rqx, Rqy, Rqr) of the grabbing position for any product can then be calculated by identifying the feature point coordinates (Npx, Npy, Npr) of that product's image in the camera field of view.
- Here (Mqx, Mqy) represents the robot coordinate value corresponding to the feature point of the template image, and (Nqx, Nqy) represents the coordinate value of the robot rotation center corresponding to any product image feature point.
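The exact grab-position formula is not fully recoverable from the source text. The sketch below shows the commonly used construction for this kind of template-based positioning: rotate the taught grab offset by the orientation difference between the current part's feature point and the template's feature point. All variable names are hypothetical, and this is an illustration of the general idea, not the patent's exact formula:

```python
import math

def grab_position(template_feat, template_grab, current_feat):
    """Transfer a taught grab pose from a template part to a new part.
    Poses are (x, y, r_deg) in robot coordinates."""
    mx, my, mr = template_feat   # template feature point pose
    tx, ty, tr = template_grab   # taught grab pose for the template
    nx, ny, nr = current_feat    # feature point pose of the new part
    d = math.radians(nr - mr)    # orientation change of the part
    ox, oy = tx - mx, ty - my    # grab offset in the template frame
    # Rotate the offset by the orientation change, then re-anchor it.
    rx = nx + ox * math.cos(d) - oy * math.sin(d)
    ry = ny + ox * math.sin(d) + oy * math.cos(d)
    return rx, ry, tr + (nr - mr)

# Template feature at the origin, grab point 10 units along X; the new part's
# feature point sits at (5, 5) and the part is rotated 90 degrees.
pose = grab_position((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 5.0, 90.0))
```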
- Although the steps in FIGS. 2-3 and 6-7 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order for the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-3 and 6-7 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time and may be executed at different times, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
- an image acquisition device calibration device including: a first information acquisition module 1202, a translation relationship acquisition module 1204, a second information acquisition module 1206, a rotation center acquisition module 1208, and Calibration information acquisition module 1210.
- The first information acquisition module is configured to obtain the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points when the robot moves to different positions of the area to be calibrated.
- the translational relationship acquisition module is configured to obtain the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point.
- the second information acquiring module is configured to acquire the pixel coordinates of the preset feature points when the robot rotates to different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature points.
- The rotation center obtaining module is configured to obtain the position of the rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates.
- The calibration information acquisition module is configured to obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
- In an embodiment, the translational relationship acquisition module is configured to obtain the translational position relationship between the pixel coordinates of the preset feature points and the robot coordinates in the following manner: obtain the first translational position relationship according to the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot translates along the X-axis direction and the Y-axis direction to preset different positions in the area to be calibrated, where the X-axis and Y-axis are mutually perpendicular axes on the same horizontal plane in the three-dimensional space coordinates; divide the area to be calibrated into a preset number of sub-areas, and extract the pixel coordinates of the center of each sub-area; obtain a corresponding preset number of rough positions according to the first translational position relationship and the pixel coordinates of the centers of the sub-areas; obtain the pixel coordinates of the preset feature points when the robot reaches the rough positions; and correct the first translational position relationship according to the pixel coordinates of the preset feature points at the rough positions and the coordinates of the corresponding rough positions, to obtain the second translational position relationship.
- In an embodiment, the translational relationship acquisition module is configured to obtain the first translational position relationship in the following manner: acquire the first pixel coordinates of the preset feature point and the corresponding first robot coordinates when the robot translates from the current position along the X-axis direction to a preset first position, where the preset first position is obtained by bisection; acquire the second pixel coordinates of the preset feature point and the corresponding second robot coordinates when the robot translates from the current position along the Y-axis direction to a preset second position, where the preset second position is obtained by bisection; and obtain the first translational position relationship according to the first pixel coordinates and the corresponding first robot coordinates and the second pixel coordinates and the corresponding second robot coordinates.
- the rotation center obtaining module is configured to obtain the rotation center conversion relationship and the first rotation center, and the rotation center conversion relationship is used to characterize the relationship between the pixel coordinates of the preset feature point and the robot coordinates when the robot rotates;
- obtain the corresponding robot coordinates according to the rotation center conversion relationship and the pixel coordinates of the preset feature point when the robot rotates from the initial position to different angles with the preset first step length; obtain the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates;
- obtain the robot coordinate error according to the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates and the pixel coordinates of the preset feature point at the initial position of the robot; fit the coordinate errors within the full angle range, and compensate the fitted coordinate error to the coordinates of the first rotation center to obtain the position of the second rotation center.
- In an embodiment, the rotation center obtaining module is configured to obtain the rotation center conversion relationship and the first rotation center in the following manner: acquire the pixel coordinates of the feature point when the robot rotates from the initial position to different angles with a preset second step length, and fit the pixel coordinates to obtain the first rotation center; calculate the rotation error between the coordinates of the first rotation center and the pixel coordinates of the preset feature point at the initial position; and obtain the rotation center conversion relationship according to the rotation error, the pixel coordinates, and the corresponding robot coordinates.
- In an embodiment, the rotation center obtaining module is further configured to: acquire the maximum forward angle within which the image acquisition device can search for the preset feature point when the robot rotates around the Z axis in the positive direction in the area to be calibrated, and the maximum reverse angle within which the image acquisition device can search for the preset feature point when the robot rotates around the Z axis in the reverse direction, where the Z axis is the axis perpendicular to the horizontal plane among the three-dimensional space coordinate axes; and obtain the preset second step length of the robot according to the maximum forward angle and the maximum reverse angle.
- In an embodiment, the rotation center obtaining module is further configured to calculate the difference between the coordinates of the precise rotation center position and the coordinates of the initial position; the calibration information obtaining module is configured to obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship, the rotation center position, and the difference.
- the various modules in the calibration device of the image acquisition device described above can be implemented in whole or in part by software, hardware, and combinations thereof.
- the foregoing multiple modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the foregoing multiple modules.
- a computer device is provided.
- the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 13.
- the computer equipment includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus.
- the processor of the computer device is used to provide calculation and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and a computer program.
- the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer program is executed by the processor to realize a calibration method of the image acquisition device.
- the display screen of the computer device may be a liquid crystal display or an electronic ink display screen
- The input device of the computer device may be a touch layer covering the display screen, a button, a trackball, or a touch pad provided on the shell of the computer device, or an external keyboard, touchpad, or mouse.
- FIG. 13 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
- the computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
- a computer device is provided, including a memory and a processor; the memory stores a computer program, and the processor implements the image acquisition device calibration method of any embodiment when executing the computer program.
- a computer-readable storage medium is provided, and a computer program is stored thereon, and the computer program is executed by a processor to implement the image acquisition device calibration method in any embodiment.
- any reference to memory, storage, database, or other media used in the multiple embodiments provided in this application may include non-volatile and/or volatile memory.
- Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory.
- RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Abstract
Disclosed in the present application are an image acquisition apparatus calibration method and apparatus, a computer device, and a storage medium. A method in an embodiment comprises: acquiring pixel coordinates of a preset feature point and corresponding robot coordinates when a robot translates to different positions in an area to be calibrated; according to the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates to different positions in the area to be calibrated, obtaining a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates; acquiring pixel coordinates of the preset feature point and corresponding robot coordinates when the robot rotates to different angles; according to the pixel coordinates of the preset feature point and the corresponding robot coordinates at the different angles, obtaining the position of the center of rotation; and according to the translation position relationship and the position of the center of rotation, obtaining a plane conversion relationship between the robot and an image acquisition apparatus so as to calibrate the image acquisition apparatus.
Description
Image acquisition device calibration method, device, computer equipment and storage medium. This application claims priority to Chinese patent application No. 201910146310.7, filed with the Chinese Patent Office on February 27, 2019, the entire content of which is incorporated herein by reference. Technical field
This application relates to the field of machine vision technology, and relates, for example, to an image acquisition device calibration method and device, computer equipment, and a storage medium. Background
In the process of a robot vision positioning device grasping products, for the hand-eye calibration arrangement in which the camera is mounted at the end of the robot, i.e. the camera moves together with the robot, a tool point is usually calibrated on a fixture at the robot end. When the robot grasps products whose positions are inconsistent, the robot drives the camera to move so that the product lies in the camera's field of view. The coordinates of points on the product image pixels in the robot tool coordinate system are calculated to guide the robot to grasp the product at a specified position, ensuring consistency of the grasping position.
In practice, the robot tool point is usually taught using 3 or 4 points by human visual observation. The taught tool points often contain errors, and it is difficult to guarantee their accuracy, which affects the accuracy of the camera calibration and results in low final accuracy of the robot vision positioning device. Summary of the invention
The present application provides a camera calibration method and device, computer equipment, and a storage medium that can improve calibration accuracy and thereby improve robot vision positioning accuracy.
An embodiment of the present application provides an image acquisition device calibration method, where the image acquisition device is applied to a robot vision positioning device, and the method includes:

acquiring pixel coordinates of a preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point when the robot translates to different positions in an area to be calibrated;

obtaining a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates; acquiring pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles;

obtaining a rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates;

obtaining a plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position, so as to calibrate the image acquisition device.
An embodiment of the present application further provides an image acquisition device calibration device, where the image acquisition device is applied to a robot vision positioning device, and the device includes:

a first information acquisition module, configured to acquire pixel coordinates of a preset feature point and the corresponding robot coordinates when the robot translates to different positions in an area to be calibrated;

a translation relationship acquisition module, configured to obtain a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates;

a second information acquisition module, configured to acquire pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles;

a rotation center acquisition module, configured to obtain a rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates;

a calibration information acquisition module, configured to obtain a plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position, so as to calibrate the image acquisition device.
An embodiment of the present application further provides a computer device including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:

acquiring pixel coordinates of a preset feature point and the corresponding robot coordinates when the robot translates to different positions in an area to be calibrated;

obtaining a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates; acquiring pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles;

obtaining a rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates;

obtaining a plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position, so as to calibrate the image acquisition device.

An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:

acquiring pixel coordinates of a preset feature point and the corresponding robot coordinates when the robot translates to different positions in an area to be calibrated;

obtaining a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates; acquiring pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles;

obtaining a rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates;

obtaining a plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position, so as to calibrate the image acquisition device. Description of the drawings
FIG. 1 is an application scenario diagram of an image acquisition device calibration method in an embodiment of this application; FIG. 2 is a schematic flowchart of an image acquisition device calibration method in an embodiment of this application; FIG. 3 is a schematic flowchart of the translation position relationship acquisition step in an embodiment of this application; FIG. 4 is a schematic diagram of the robot position translating in the X-axis and Y-axis directions in an embodiment of this application; FIG. 5 is a schematic diagram of equally dividing the area to be calibrated in an embodiment of this application;

FIG. 6 is a schematic flowchart of the rotation center position acquisition step in an embodiment of this application; FIG. 7 is a schematic flowchart of the rotation center conversion relationship acquisition step in an embodiment of this application; FIG. 8 is a schematic diagram of the circular feature point when the robot rotates to different angles in an embodiment of this application; FIG. 9 is a schematic diagram of the robot position during rotation in an embodiment of this application;

FIG. 10 is a schematic flowchart of an image acquisition device calibration method in another embodiment of this application; FIG. 11 is a schematic application diagram of an image acquisition device calibration method in an embodiment of this application; FIG. 12 is a structural block diagram of an image acquisition device calibration device in an embodiment of this application;

FIG. 13 is an internal structure diagram of a computer device in an embodiment of this application. Detailed description of the embodiments. The following describes this application with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The image acquisition device calibration method provided in this application can be applied to the application environment shown in FIG. 1. In the process of a robot vision positioning device grasping objects, the end of the robot 10 includes a movable tool arm 11; through the tool arm 11, an image acquisition device 20 and a standard fixture 30 can be mounted at the end of the robot 10, and a fixed feature point 40 is placed below the end of the robot 10. In an embodiment, the image acquisition device may be a camera.
In one embodiment, as shown in FIG. 2, an image acquisition device calibration method is provided. The method is described taking its application to the image acquisition device of the robot vision positioning device in FIG. 1 as an example. The method provided in this embodiment includes the following steps:
Step 202: acquire pixel coordinates of a preset feature point and the corresponding robot coordinates when the robot translates to different positions in the area to be calibrated.
Step 204: obtain the translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates.
The robot moves a relative distance along the X axis from its current coordinate position; the given circular feature point is searched for in the camera image, and the robot coordinates and the image coordinates of the circular feature point center at this time are recorded. In the same way, the robot moves a relative distance along the Y axis from its current coordinate position, the given circular feature point is searched for in the camera image, and the robot coordinates and the image coordinates of the circular feature point center are recorded. In this way, a rough translation position conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates is obtained.
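The rough pixel-to-robot relationship described above can be sketched as a 2D affine transform solved from the recorded point pairs. This is a minimal illustration, not the patent's exact implementation; the numeric recordings below are hypothetical:

```python
import numpy as np

def solve_affine(pixel_pts, robot_pts):
    """Solve a 2D affine transform A (3x2) mapping pixel coordinates to
    robot coordinates, so that [u, v, 1] @ A approximates [x, y].

    pixel_pts, robot_pts: (N, 2) arrays of corresponding points, N >= 3.
    """
    P = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # (N, 3)
    A, *_ = np.linalg.lstsq(P, robot_pts, rcond=None)         # (3, 2)
    return A

# Hypothetical recordings: feature-point image coordinates and robot
# coordinates at the start position, after the X move, and after the Y move.
pixel = np.array([[320.0, 240.0], [410.5, 238.7], [322.1, 151.3]])
robot = np.array([[100.0, 50.0], [110.0, 50.0], [100.0, 60.0]])

A0 = solve_affine(pixel, robot)
mapped = np.hstack([pixel, np.ones((3, 1))]) @ A0  # reproduces robot coords
```

With exactly three non-collinear correspondences the system is determined; with more points (as in the refinement step that follows) the same least-squares call averages out measurement noise.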
The area to be calibrated is divided equally into a preset number of subregions, and the pixel coordinates of each subregion center are extracted. Using the rough translation position conversion relationship and the pixel coordinates of the subregion centers, the rough positions to which the robot should move are calculated. The robot is moved to each calculated rough position in turn, and the pixel coordinates of the circular feature point when the robot reaches each rough position are recalculated by image recognition, thereby obtaining a precise translation position relationship between the pixel coordinates of the feature point and the robot coordinates.
In this embodiment, the rough translation position conversion relationship may also be called the first translation position conversion relationship, the precise translation position relationship may also be called the second translation position conversion relationship, and a rough position may also be called a target position.
Step 206: acquire pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles.
Step 208: obtain the rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates.
The robot drives the camera to rotate with a preset step length, obtaining an arc formed by the centers of the circular feature point at the different angles. The arc center coordinates are fitted to obtain the rough rotation center of the robot. The difference between the rough rotation center and the robot's initial rotation position is calculated to find the rotation center conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates during rotation.
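The arc fitting mentioned above can be sketched with an algebraic least-squares (Kåsa) circle fit over the recorded feature-point centers; the sample arc below is synthetic:

```python
import numpy as np

def fit_circle_center(points):
    """Algebraic (Kasa) least-squares circle fit; returns the center (cx, cy).

    points: (N, 2) array of feature-point centers recorded while the robot
    rotates in fixed angular steps (N >= 3, not collinear). Solves
    x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    """
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(M, b, rcond=None)
    return -D / 2.0, -E / 2.0

# Synthetic arc: points on a circle centered at (5, -3) with radius 10.
theta = np.deg2rad(np.arange(0.0, 90.0, 15.0))
arc = np.column_stack([5 + 10 * np.cos(theta), -3 + 10 * np.sin(theta)])
cx, cy = fit_circle_center(arc)  # close to (5, -3)
```

Because a rotation range under 360° only traces a partial arc, a least-squares fit over many sample points is preferable to solving from just three of them.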
The robot drives the camera to rotate with another fixed step length. Each time the robot rotates, the robot coordinates corresponding to the current pixel coordinates of the circular feature point are obtained through the rotation center conversion relationship. The difference between these coordinates and the actual robot coordinates at this time is calculated and compensated into the robot coordinates, and the robot moves to the specified position. In an embodiment, after the robot moves to the corresponding robot coordinates, the robot coordinate error is obtained according to the current pixel coordinates of the circular feature point and its pixel coordinates at the robot's initial position, and the error is compensated into the robot coordinates. After the robot completes the rotation within the 360-degree range, all coordinate errors are fitted, the fitted coordinate errors are compensated into the rough rotation center in robot coordinates, and the difference between the compensated rotation center coordinates and the initial position coordinates is calculated, so that the robot can rotate accurately about the initial position of the circular feature point using the calculated difference.
Step 210: obtain the plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position, so as to calibrate the image acquisition device.
In the image acquisition device calibration method above, the translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates is obtained from the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates to different positions in the area to be calibrated; the rotation center position is then obtained from the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot rotates to different angles; and the plane conversion relationship between the robot and the image acquisition device is obtained from the translation position relationship and the rotation center position to calibrate the image acquisition device. There is no need to manually teach the robot tool point; the feature point only needs to be placed in the field of view of the image acquisition device. This scheme can improve calibration accuracy and thus the accuracy of the robot vision positioning device, requires no additional auxiliary hardware, is simple and efficient, and can greatly reduce the difficulty of machine adjustment for operators.
In one embodiment, as shown in FIG. 3, obtaining the translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates includes: Step 302, obtaining a rough translation position relationship according to the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates in the area to be calibrated along the X-axis and Y-axis directions to preset different positions, where the X axis and Y axis are mutually perpendicular axes in the same horizontal plane of the three-dimensional coordinate system; Step 304, dividing the area to be calibrated equally into a preset number of subregions and extracting the pixel coordinates of each subregion center; Step 306, obtaining the preset number of rough positions according to the rough translation position relationship and the pixel coordinates of the subregion centers; Step 308, acquiring the pixel coordinates of the preset feature point when the robot reaches each of the rough positions; Step 310, correcting the rough translation position relationship according to the pixel coordinates of the preset feature point when the robot reaches the rough positions and the coordinates of the corresponding rough positions, so as to obtain the precise translation position relationship.
In one embodiment, obtaining the rough translation position relationship according to the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates in the area to be calibrated along the X-axis and Y-axis directions to preset different positions includes: acquiring the first pixel coordinates of the preset feature point and the corresponding first robot coordinates when the robot translates from the current position along the X-axis direction to a preset first position, where the preset first position is obtained by bisection; acquiring the second pixel coordinates of the preset feature point and the corresponding second robot coordinates when the robot translates from the current position along the Y-axis direction to a preset second position, where the preset second position is obtained by bisection; and obtaining the rough translation position relationship according to the first pixel coordinates and the corresponding first robot coordinates together with the second pixel coordinates and the corresponding second robot coordinates. As shown in FIG. 4, the robot moves a relative distance dx along the X axis from its current coordinates Q0(x0, y0); the dx value is adjusted by bisection, the given circular feature point is searched for in the camera image, and the robot coordinates Q0(x1, y1) and the circular feature point center image coordinates P0(u1, v1) are recorded. In the same way, the robot moves a relative distance dy along the Y axis from Q0(x0, y0); the dy value is adjusted by bisection, the given circular feature point is searched for in the camera image, and the robot coordinates Q0(x2, y2) and the circular feature point center image coordinates P0(u2, v2) are recorded. The conversion relationship A0 between P0 and Q0 is calculated, giving P0 * A0 = Q0. As shown in FIG. 5, the area to be calibrated is divided into 9 equal regions in the camera's field of view, and the center coordinates Pi'(ui, vi) of the 9 regions are extracted, where 0 ≤ i ≤ 8; through the transformation matrix …
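As one way to compute the nine subregion centers of FIG. 5, a simple grid subdivision can be sketched as follows; the field-of-view dimensions are illustrative assumptions:

```python
def subregion_centers(width, height, n=3):
    """Pixel coordinates of the centers of an n x n grid of equal
    subregions covering a width x height field of view, listed row by row."""
    w, h = width / n, height / n
    return [((j + 0.5) * w, (i + 0.5) * h)
            for i in range(n) for j in range(n)]

# Illustrative 900 x 900 pixel field of view divided into 9 regions.
centers = subregion_centers(900, 900)
```

Each center is then mapped through the rough transformation to obtain a target robot position, as the text describes.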
In one embodiment, as shown in FIG. 6, obtaining the rotation center position according to the pixel coordinates of the preset feature point at different angles and the corresponding robot coordinates includes: Step 602, acquiring the rotation center conversion relationship and the rough rotation center, where the rotation center conversion relationship characterizes the relationship between the pixel coordinates of the preset feature point and the robot coordinates when the robot rotates; Step 604, obtaining the corresponding robot coordinates according to the rotation center conversion relationship and the pixel coordinates of the preset feature point when the robot rotates at the initial position to different angles with a preset first step length; Step 606, acquiring the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates; Step 608, obtaining the robot coordinate error according to the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinates and the pixel coordinates of the preset feature point at the robot's initial position; Step 610, fitting the coordinate errors over the full angle range and compensating the fitted coordinate errors into the coordinates of the rough rotation center to obtain the precise rotation center position.
In one embodiment, as shown in FIG. 7, acquiring the rotation center conversion relationship and the rough rotation center includes: Step 702, fitting the pixel coordinates of the preset feature point when the robot rotates at the initial position to different angles with a preset second step length, so as to obtain the rough rotation center; Step 704, calculating the rotation error between the coordinates of the rough rotation center and the pixel coordinates of the preset feature point at the initial position; Step 706, obtaining the rotation center conversion relationship according to the rotation error, the pixel coordinates, and the corresponding robot coordinates.
In one embodiment, before fitting the pixel coordinates of the preset feature point recorded as the robot rotates from the initial position to different angles in the preset second step length to obtain the rough rotation center, the method further includes: acquiring the maximum forward angle at which the image acquisition device can still find the preset feature point when the robot in the area to be calibrated rotates about the Z axis in the positive direction, and the maximum reverse angle at which the image acquisition device can still find the preset feature point when the robot rotates about the Z axis in the reverse direction, where the Z axis is the axis of the three-dimensional coordinate system perpendicular to the horizontal plane; and obtaining the preset second step length of the robot from the maximum forward angle and the maximum reverse angle.
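The patent does not fix a formula for deriving the second step length from the two limit angles. One natural choice, shown here purely as an assumption, is to divide the usable angular window evenly so that every sampled pose keeps the feature point inside the camera's field of view.

```python
def preset_second_step(max_forward_deg, max_reverse_deg, num_samples=8):
    """Derive a rotation step (degrees) that keeps the preset feature point
    searchable by the image acquisition device.

    The usable angular window is [-max_reverse_deg, +max_forward_deg];
    dividing it evenly into num_samples steps is an illustrative choice,
    not a formula stated in the patent.
    """
    window = max_forward_deg + max_reverse_deg
    return window / num_samples

step = preset_second_step(30.0, 30.0, num_samples=12)  # 5.0 degrees
```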
P2 = P1 + (Dx, Dy)

The robot returns to the initial position Q0(x0, y0) and rotates about the Z axis over a relative angle range of 0–360°, with the step dθ set to a fixed value. Through the rotation center conversion relationship A2, the robot coordinates Q2(x0, y0), Q2(x1, y1), …, Q2(xm, ym) at which the circular feature point center image coordinate P0(u0, v0) remains unchanged are calculated, where 0 ≤ m ≤ n and n·dθ = 360°. The robot coordinates of the rotation center corresponding to P0(u0, v0) are (xc, yc):

    xc = u0·a2_11 + v0·a2_21 + a2_31
    yc = u0·a2_12 + v0·a2_22 + a2_32
    xm = xc + ((x0 − xc)·cos(m·dθ) − (y0 − yc)·sin(m·dθ))
    ym = yc + ((x0 − xc)·sin(m·dθ) + (y0 − yc)·cos(m·dθ))
The robot is moved in turn to the calculated coordinate positions Q2(x0, y0), Q2(x1, y1), …, Q2(xm, ym). After the robot moves into place, the robot coordinate error between the actual coordinates of the circular feature point and the theoretical coordinates P0(u0, v0) is calculated, and the error is compensated into the robot coordinates. In one embodiment, after the robot moves to the calculated positions Q2(x0, y0), Q2(x1, y1), …, Q2(xm, ym), the corresponding circular feature point center image coordinates P3(u0, v0), P3(u1, v1), …, P3(um, vm) are identified, and the robot coordinate error between the actual circular feature point center image coordinate values and the theoretical value P0(u0, v0) is calculated. The robot coordinate data after rotation are calculated according to the rough translation position conversion relationship, and the conversion relationship between the pixel coordinates of the circular feature point and the rotated robot coordinates is obtained. The coordinates Q0 of the robot rotating about P0(u0, v0) are transformed into Q'0(x0, y0), Q'1(x1, y1), …

Here, u0 and v0 are the coordinate values of the corresponding circular feature point center image coordinate P0, u3m and v3m are the coordinate values of P3, dθ is the fixed step value, and 0 ≤ m ≤ n. In this embodiment, P3 is the actual value of the circular feature point center image coordinate.
In one embodiment, after compensating the fitted coordinate error to the coordinates of the rough rotation center to obtain the precise rotation center position, the method further includes: calculating the difference between the coordinates of the precise rotation center position and the robot coordinates of the initial position. Correspondingly, obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position includes: obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship, the rotation center position, and the difference.
本实施例中, 粗略平移位置转换关系也可称为第一平移位置转换关系, 精 确平移位置关系也可称为第二平移位置转换关系, 粗略位置也可称为目标位置。 In this embodiment, the rough translation position conversion relationship may also be referred to as the first translation position conversion relationship, the precise translation position relationship may also be referred to as the second translation position conversion relationship, and the rough position may also be referred to as the target position.
the arc center coordinates; calculate the difference from the initial position, and find the conversion relationship A2 between the pixel coordinates of the circular feature point at the robot rotation center and the robot coordinates during the translation of step (2). (4) Solve for the precise rotation center position of the robot. Each time the robot rotates, the calculated difference is compensated into the robot coordinates through the conversion relationship A2 between the rotation center and the robot coordinates obtained in step (3), and the robot moves to the specified position. After the robot moves into place, the robot coordinate error between the actual and theoretical coordinates of the circular feature point is calculated and compensated into the robot coordinates. After the robot completes its rotation over the 360-degree range, the rotation center of all error-compensated robot coordinates is fitted, and the difference between the fitted rotation center coordinates and the initial position is calculated, so that the robot can rotate precisely about the initial position of the circular feature point using the calculated values. (5) Solve the precise plane conversion relationship between the robot and the camera: according to the precise translation position relationship between the robot and the camera from step (2) and the precise rotation center position of the robot from step (4), calculate the plane conversion relationship between the robot and the camera from the circular feature point center image coordinates P1(u0, v0), P1(u1, v1), …
In one embodiment, as shown in FIG. 11, suppose the coordinates of the robot's photographing position when grabbing the product are (Sqx, Sqy, Sqr), the coordinates of the template product image feature point in the camera field of view are (Mpx, Mpy, Mpr), and the position coordinates at which the robot grabs the template product are (Tqx, Tqy, Tqr). Then, when the robot takes a photo at position (Vqx, Vqy, Vqr), the grab position coordinates (Rqx, Rqy, Rqr) of any product can be calculated by identifying the coordinates (Npx, Npy, Npr) of that product's image feature point in the camera field of view.
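The grab-pose computation described above can be sketched as follows. The exact formula in the patent was lost to extraction, so this is an assumed but standard construction: once both feature points have been converted from pixels to robot coordinates via the calibrated plane transform, the product grab pose is the template grab pose translated by the feature-point offset and rotated by the orientation difference about the product's feature point.

```python
import math

def grab_position(template_robot_xy, template_angle,
                  product_robot_xy, product_angle, template_grab):
    """Hedged sketch of deriving the grab pose (Rqx, Rqy, Rqr) for an
    arbitrary product from the template grab pose (Tqx, Tqy, Tqr).

    template_robot_xy / product_robot_xy: feature-point positions in robot
    coordinates (already converted from pixels); *_angle: feature-point
    orientations; template_grab: (Tqx, Tqy, Tqr).
    """
    d_ang = product_angle - template_angle
    c, s = math.cos(d_ang), math.sin(d_ang)
    # Vector from the template feature point to the template grab position,
    # re-expressed after rotating by the orientation difference
    dx = template_grab[0] - template_robot_xy[0]
    dy = template_grab[1] - template_robot_xy[1]
    rqx = product_robot_xy[0] + dx * c - dy * s
    rqy = product_robot_xy[1] + dx * s + dy * c
    rqr = template_grab[2] + d_ang
    return rqx, rqy, rqr

# A product in exactly the template pose yields exactly the template grab pose
pose = grab_position((0.0, 0.0), 0.0, (0.0, 0.0), 0.0, (5.0, 0.0, 0.0))
```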
Here, Vqr, Sqr, Mqx, and Mqy denote the robot coordinate values corresponding to the template image feature point, and Nqx and Nqy denote the robot rotation center coordinate values corresponding to the feature point of an arbitrary product image.
应该理解的是, 虽然图 2-3、 图 6-7的流程图中的多个步骤按照箭头的指示 依次显示, 但是这些步骤并不是必然按照箭头指示的顺序依次执行。 除非本文 中有明确的说明, 这些步骤的执行并没有严格的顺序限制, 这些步骤可以以其 它的顺序执行。 而且, 图 2-3、 图 6-7中的至少一部分步骤可以包括多个子步骤 或者多个阶段, 这些子步骤或者阶段并不必然是在同一时刻执行完成, 而是可 以在不同的时刻执行, 这些子步骤或者阶段的执行顺序也不必然是依次进行, 而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交 替地执行。 It should be understood that although the multiple steps in the flowcharts of FIGS. 2-3 and 6-7 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated in this article, there is no strict order for the execution of these steps, and these steps can be executed in other orders. Moreover, at least part of the steps in FIGS. 2-3 and 6-7 may include multiple sub-steps or multiple stages, and these sub-steps or stages are not necessarily executed at the same time, but may be executed at different times. The order of execution of these sub-steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with other steps or at least part of the sub-steps or stages of other steps.
在一个实施例中, 如图 12所示, 提供了一种图像采集装置标定装置, 包括: 第一信息获取模块 1202、 平移关系获取模块 1204、 第二信息获取模块 1206、 旋 转中心获取模块 1208和标定信息获取模块 1210。 其中, 第一信息获取模块, 设
置为获取机器人平移至待标定区域的不同位置时, 预设特征点的像素坐标以及 预设特征点的像素坐标对应的机器人坐标。 平移关系获取模块, 设置为根据预 设特征点的像素坐标以及预设特征点的像素坐标对应的机器人坐标, 得到预设 特征点的像素坐标与机器人坐标之间的平移位置关系。 第二信息获取模块, 设 置为获取机器人旋转至不同角度时预设特征点的像素坐标以及预设特征点的像 素坐标对应的机器人坐标。 旋转中心获取模块, 设置为根据不同角度时预设特 征点的像素坐标以及对应的机器人坐标, 得到旋转中心位置。 标定信息获取模 块, 设置为根据平移位置关系和旋转中心位置, 得到机器人与图像采集装置的 平面转换关系, 以对图像采集装置进行标定。 In one embodiment, as shown in FIG. 12, an image acquisition device calibration device is provided, including: a first information acquisition module 1202, a translation relationship acquisition module 1204, a second information acquisition module 1206, a rotation center acquisition module 1208, and Calibration information acquisition module 1210. Among them, the first information acquisition module is set Set to obtain the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points when the robot moves to different positions of the area to be calibrated. The translational relationship acquisition module is configured to obtain the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point. The second information acquiring module is configured to acquire the pixel coordinates of the preset feature points when the robot rotates to different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature points. The rotation center obtaining module is set to obtain the position of the rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates. 
The calibration information acquisition module is set to obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
在一个实施例中, 平移关系获取模块是设置为通过如下方式根据预设特征 点的像素坐标以及预设特征点的像素坐标对应的机器人坐标, 得到预设特征点 的像素坐标与机器人坐标之间的平移位置关系: 根据机器人在待标定区域分别 沿 X轴方向和 Y轴方向平移至预设不同位置时, 预设特征点的像素坐标以及对 应的机器人坐标, 得到第一平移位置关系, X轴和 Y轴为三维空间坐标中的处 于同一水平面上相互垂直的轴线; 将待标定区域均分为预设数量的子区域, 分 别提取预设数量的子区域中心的像素坐标; 根据粗略平移位置关系以及预设数 量的子区域中心的像素坐标, 得到预设数量的对应的粗略位置; 获取机器人到 达预设数量的粗略位置时预设特征点的像素坐标; 根据机器人到达预设数量的 粗略位置时预设特征点的像素坐标以及对应的粗略位置的坐标, 对第一平移位 置关系进行修正, 得到第二平移位置关系。 In one embodiment, the translational relationship acquisition module is configured to obtain the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the robot coordinates corresponding to the pixel coordinates of the preset feature points in the following manner The translational position relationship: According to the robot in the area to be calibrated along the X-axis direction and the Y-axis direction to preset different positions, the pixel coordinates of the preset feature points and the corresponding robot coordinates are obtained to obtain the first translational position relationship. The and Y-axis are mutually perpendicular axes on the same horizontal plane in the three-dimensional space coordinates; divide the area to be calibrated into a preset number of sub-areas, and extract the pixel coordinates of the center of the preset number of sub-areas respectively; And the pixel coordinates of the center of the preset number of sub-regions to obtain the preset number of corresponding rough positions; obtain the pixel coordinates of the preset feature points when the robot reaches the preset number of rough positions; according to when the robot reaches the preset number of rough positions The pixel coordinates of the feature point and the coordinates of the corresponding rough position are preset, and the first translational position relationship is corrected to obtain the second translational position relationship.
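The first translational position relationship described above is, in effect, an affine map between the feature point's pixel coordinates and the robot coordinates observed at the same poses. A minimal least-squares sketch (the matrix layout and function name are illustrative assumptions):

```python
import numpy as np

def fit_translation_relation(pixel_pts, robot_pts):
    """Least-squares affine map  [x, y] = [u, v, 1] @ A  between the preset
    feature point's pixel coordinates and the robot coordinates recorded at
    the same positions. A is a 3x2 matrix playing the role of the
    translational position relationship."""
    P = np.asarray(pixel_pts, dtype=float)
    R = np.asarray(robot_pts, dtype=float)
    H = np.column_stack([P, np.ones(len(P))])   # (n, 3) homogeneous pixels
    A, *_ = np.linalg.lstsq(H, R, rcond=None)   # (3, 2) affine coefficients
    return A

# Synthetic correspondences: robot = 0.1 * pixel + (5, -3)
pix = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [50.0, 80.0]])
rob = 0.1 * pix + np.array([5.0, -3.0])
A = fit_translation_relation(pix, rob)
pred = np.array([30.0, 40.0, 1.0]) @ A   # maps pixel (30, 40) to robot frame
```

The second, refined relationship of the embodiment would be obtained by repeating this fit against the measured positions at the sub-region centers rather than the initial coarse correspondences.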
在一个实施例中, 平移关系获取模块是设置为通过如下方式根据机器人在 待标定区域分别沿 X轴方向和 Y轴方向平移至预设不同位置时, 预设特征点的 像素坐标以及对应的机器人坐标, 得到第一平移位置关系: 获取机器人从当前 位置沿 X轴方向平移至预设第一位置时, 预设特征点的第一像素坐标以及预设 特征点的第一像素坐标对应的机器人第一坐标, 其中, 预设第一位置通过二分 法获取; 获取机器人从当前位置沿 Y轴方向平移至预设第二位置时, 预设特征 点的第二像素坐标以及预设特征点的第二像素坐标对应的机器人第二坐标, 其 中, 预设第二位置通过二分法获取; 根据预设特征点的第一像素坐标以及预设 特征点的第一像素坐标对应的机器人第一坐标以及预设特征点的第二像素坐标 以及预设特征点的第二像素坐标对应的机器人第二坐标, 得到第一平移位置关 系。 In one embodiment, the translational relationship acquisition module is configured to preset the pixel coordinates of the feature points and the corresponding robot when the robot translates in the X-axis direction and the Y-axis direction to preset different positions in the area to be calibrated in the following manner Coordinates to obtain the first translational position relationship: obtain the first pixel coordinates of the preset feature point and the first pixel coordinates of the preset feature point when the robot translates from the current position along the X-axis direction to the preset first position One coordinate, where the preset first position is obtained by the dichotomy; the second pixel coordinate of the preset feature point and the second pixel coordinate of the preset feature point are acquired when the robot translates from the current position to the preset second position along the Y-axis direction The second coordinate of the robot corresponding to the pixel coordinates, where the preset second position is obtained by dichotomy; the first pixel coordinates of the preset feature point and the first pixel coordinates of the preset feature point corresponding to the robot first coordinate and the preset The second pixel coordinates of the feature point and the second pixel coordinates of the robot corresponding to the second pixel coordinates of the preset feature point obtain the first translational position relationship.
在一个实施例中, 旋转中心获取模块是设置为获取旋转中心转换关系以及 第一旋转中心, 旋转中心转换关系用于表征机器人旋转时预设特征点的像素坐 标与机器人坐标之间的关系; 根据旋转中心转换关系以及机器人在初始位置以 预设第一步长旋转至不同角度时预设特征点的像素坐标, 得到对应的机器人坐
标; 获取机器人到达对应的机器人坐标所处位置时预设特征点的像素坐标; 根 据机器人到达对应的机器人坐标所处位置时预设特征点的像素坐标以及机器人 在初始位置时预设特征点的像素坐标, 得到机器人坐标误差; 对全角度范围内 的坐标误差进行拟合, 将拟合后的坐标误差补偿至第一旋转中心所处坐标, 得 到第二旋转中心的位置。 In one embodiment, the rotation center obtaining module is configured to obtain the rotation center conversion relationship and the first rotation center, and the rotation center conversion relationship is used to characterize the relationship between the pixel coordinates of the preset feature point and the robot coordinates when the robot rotates; The rotation center conversion relationship and the pixel coordinates of the preset feature points when the robot rotates to different angles with the preset first step at the initial position, to obtain the corresponding robot sitting Obtain the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinate; According to the pixel coordinates of the preset feature point when the robot reaches the position of the corresponding robot coordinate and the preset feature point of the robot at the initial position The pixel coordinates are used to obtain the robot coordinate error; the coordinate error in the full angle range is fitted, and the fitted coordinate error is compensated to the coordinates of the first rotation center to obtain the position of the second rotation center.
在一个实施例中, 旋转中心获取模块是设置为通过如下方式获取旋转中心 转换关系以及第一旋转中心: 根据机器人在初始位置以预设第二步长旋转至不 同角度时预设特征点的像素坐标, 拟合像素坐标得到第一旋转中心; 计算第一 旋转中心的坐标与初始位置时预设特征点的像素坐标之间的旋转误差; 根据旋 转误差、 像素坐标以及对应的机器人坐标, 得到旋转中心转换关系。 In one embodiment, the rotation center obtaining module is configured to obtain the rotation center conversion relationship and the first rotation center in the following manner: preset the pixels of the feature points when the robot rotates to a different angle with a preset second step length at the initial position Coordinates, fit the pixel coordinates to obtain the first rotation center; calculate the rotation error between the coordinates of the first rotation center and the pixel coordinates of the preset feature point at the initial position; obtain the rotation according to the rotation error, the pixel coordinates and the corresponding robot coordinates Center conversion relationship.
在一个实施例中,旋转中心获取模块还设置为获取在待标定区域机器人绕 Z 轴正方向旋转时, 图像采集装置搜索到预设特征点的最大正向角度, 以及机器 人绕 Z轴反方向旋转时, 图像采集装置搜索到预设特征点的最大反向角度, Z 轴为三维空间坐标轴中垂直于水平面的轴线; 根据最大正向角度以及最大反向 角度, 得到机器人的预设第二步长。 In one embodiment, the rotation center obtaining module is further configured to obtain the maximum forward angle of the preset feature point searched by the image acquisition device when the robot rotates around the Z axis in the positive direction in the area to be calibrated, and the robot rotates around the Z axis in the reverse direction When the image acquisition device searches for the maximum reverse angle of the preset feature point, the Z axis is the axis perpendicular to the horizontal plane in the three-dimensional space coordinate axis; according to the maximum forward angle and the maximum reverse angle, the robot’s preset second step is obtained long.
在一个实施例中, 旋转中心获取模块还设置为计算精准旋转中心位置所处 坐标与初始位置的坐标之间的差值, 标定信息获取模块是设置为根据平移位置 关系、 旋转中心位置以及差值, 得到机器人与图像采集装置的平面转换关系。 In one embodiment, the rotation center obtaining module is further configured to calculate the difference between the coordinates of the precise rotation center position and the coordinates of the initial position, and the calibration information obtaining module is configured to calculate the difference between the positional relationship of the translation, the position of the rotation center, and the difference. , Obtain the plane conversion relationship between the robot and the image acquisition device.
关于图像采集装置标定装置的具体限定可以参见上文中对于图像采集装置 标定方法的限定, 在此不再赘述。 上述图像采集装置标定装置中的各个模块可 全部或部分通过软件、 硬件及其组合来实现。 上述多个模块可以硬件形式内嵌 于或独立于计算机设备中的处理器中, 也可以以软件形式存储于计算机设备中 的存储器中, 以便于处理器调用执行以上多个模块对应的操作。 Regarding the specific definition of the image acquisition device calibration device, please refer to the above definition of the image acquisition device calibration method, which will not be repeated here. The various modules in the calibration device of the image acquisition device described above can be implemented in whole or in part by software, hardware, and combinations thereof. The foregoing multiple modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the foregoing multiple modules.
在一个实施例中, 提供了一种计算机设备, 该计算机设备可以是终端, 其 内部结构图可以如图 13所示。 该计算机设备包括通过系统总线连接的处理器、 存储器、 网络接口、 显示屏和输入装置。 本实施例中, 该计算机设备的处理器 用于提供计算和控制能力。 该计算机设备的存储器包括非易失性存储介质、 内 存储器。 该非易失性存储介质存储有操作系统和计算机程序。 该内存储器为非 易失性存储介质中的操作系统和计算机程序的运行提供环境。 该计算机设备的 网络接口用于与外部的终端通过网络连接通信。 该计算机程序被处理器执行时 以实现一种图像采集装置标定方法。 该计算机设备的显示屏可以是液晶显示屏 或者电子墨水显示屏, 该计算机设备的输入装置可以是显示屏上覆盖的触摸层, 也可以是计算机设备外壳上设置的按键、 轨迹球或触控板, 还可以是外接的键 盘、 触控板或鼠标等。
本领域技术人员可以理解, 图 13中示出的结构, 仅仅是与本申请方案相关 的部分结构的框图, 并不构成对本申请方案所应用于其上的计算机设备的限定。 一实施例中, 计算机设备可以包括比图中所示更多或更少的部件, 或者组合某 些部件, 或者具有不同的部件布置。 In an embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 13. The computer equipment includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus. In this embodiment, the processor of the computer device is used to provide calculation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program is executed by the processor to realize a calibration method of the image acquisition device. The display screen of the computer device may be a liquid crystal display or an electronic ink display screen, and the input device of the computer device may be a touch layer covered on the display screen, or it may be a button, a trackball or a touch pad provided on the shell of the computer device , It can also be an external keyboard, touchpad or mouse. Those skilled in the art can understand that the structure shown in FIG. 13 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied. In an embodiment, the computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
在一个实施例中, 提供了一种计算机设备, 包括存储器和处理器, 该存储 器存储有计算机程序, 该处理器执行计算机程序时实现任一实施例中图像采集 装置标定方法。 In one embodiment, a computer device is provided, including a memory and a processor, the memory stores a computer program, and the processor executes the computer program to implement the image acquisition device calibration method in any embodiment.
在一个实施例中, 提供了一种计算机可读存储介质, 其上存储有计算机程 序, 计算机程序被处理器执行时实现任一实施例中图像采集装置标定方法。 In one embodiment, a computer-readable storage medium is provided, and a computer program is stored thereon, and the computer program is executed by a processor to implement the image acquisition device calibration method in any embodiment.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程, 是可以通过计算机程序来指令相关的硬件来完成, 所述的计算机程序可存储于 一非易失性计算机可读取存储介质中, 该计算机程序在执行时, 可包括如上述 方法的实施例的流程。 一实施例中, 本申请所提供的多个实施例中所使用的对 存储器、 存储、 数据库或其它介质的任何引用, 均可包括非易失性和 /或易失性 存储器。 非易失性存储器可包括只读存储器 ( Read-Only Memory, ROM ) 、 可 编程 ROM( Programmable Read-Only Memory, PROM )、电可编程 ROM( Erasable Programmable Read-Only Memory, EPROM )、电可擦除可编程 ROM( Electrically Erasable Programmable Read-Only Memory, EEPROM ) 或闪存。 易失性存储器 可包括随机存取存储器 ( Random Access Memory, RAM ) 或者外部高速缓冲存 储器。 作为说明而非局限, RAM 以多种形式可得, 诸如静态 RAM ( Static Random- Access Memory, SRAM )、动态 RAM( Dynamic Random Access Memory, DRAM ) 、 同步 DRAM ( Synchronous Dynamic Random Access Memory, SDRAM ) 、 双数据率 SDRAM ( Double Data Rate Synchronous Dynamic Random Access Memory, DDRSDRAM )、增强型 SDRAM( Enhanced Synchronous Dynamic Random Access Memory, ESDRAM ) 、 同步链路 DRAM ( Synchlink Dynamic Random Access Memory, SLDRAM ) 、 存储器总线直接 RAM ( Rambus Direct Dynamic Random Access Memory , RDRAM )、直接存储器总线动态 RAM( Direct Rambus Dynamic Random Access Memory , DRDRAM ) 、 以及存储器总线动态 RAM ( Rambus Dynamic Random Access Memory, RDRAM ) 等。
A person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned embodiment methods can be implemented by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer readable storage. In the medium, when the computer program is executed, it may include the process of the embodiment of the foregoing method. In an embodiment, any reference to memory, storage, database, or other media used in the multiple embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (Read-Only Memory, ROM), programmable ROM (Programmable Read-Only Memory, PROM), electrically programmable ROM (Erasable Programmable Read-Only Memory, EPROM), and electrically erasable Except for programmable ROM (Electrically Erasable Programmable Read-Only Memory, EEPROM) or flash memory. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. As an illustration and not a limitation, RAM is available in various forms, such as static RAM (Static Random-Access Memory, SRAM), dynamic RAM (Dynamic Random Access Memory, DRAM), synchronous DRAM (Synchronous Dynamic Random Access Memory, SDRAM), Double Data Rate SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory, DDRSDRAM), Enhanced SDRAM (Enhanced Synchronous Dynamic Random Access Memory, ESDRAM), Synchronous Link DRAM (Synchlink Dynamic Random Access Memory, SLDRAM), Memory Bus Direct RAM ( Rambus Direct Dynamic Random Access Memory (RDRAM), Direct Rambus Dynamic Random Access Memory (DRDRAM), and Rambus Dynamic Random Access Memory (RDRAM), etc.
Claims
1、 一种图像采集装置标定方法, 所述图像采集装置应用于机器人视觉定位 设备, 所述方法包括: 1. A method for calibrating an image acquisition device, wherein the image acquisition device is applied to a robot visual positioning device, the method comprising:
获取机器人平移至待标定区域的不同位置时, 预设特征点的像素坐标以及 所述预设特征点的像素坐标对应的机器人坐标; Acquiring the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point when the robot moves to different positions of the area to be calibrated;
根据所述预设特征点的像素坐标以及所述预设特征点的像素坐标对应的机 器人坐标, 得到所述预设特征点的像素坐标与机器人坐标之间的平移位置关系; 获取所述机器人旋转至不同角度时所述预设特征点的像素坐标以及所述预 设特征点的像素坐标对应的机器人坐标; Obtain the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point; obtain the robot rotation The pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point at different angles;
根据所述不同角度时所述预设特征点的像素坐标以及所述预设特征点的像 素坐标对应的机器人坐标, 得到旋转中心位置; Obtaining the rotation center position according to the pixel coordinates of the preset feature point at the different angles and the robot coordinates corresponding to the pixel coordinates of the preset feature point;
根据所述平移位置关系和所述旋转中心位置, 得到所述机器人与所述图像 采集装置的平面转换关系, 以对所述图像采集装置进行标定。 According to the translational position relationship and the rotation center position, the plane conversion relationship between the robot and the image acquisition device is obtained, so as to calibrate the image acquisition device.
2. The method according to claim 1, wherein obtaining the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point comprises:
根据所述机器人在待标定区域分别沿 X轴方向和 Y轴方向平移至预设不同 位置时, 所述预设特征点的像素坐标以及所述预设特征点的像素坐标对应的机 器人坐标, 得到第一平移位置关系, 所述 X轴和所述 Y轴为三维空间坐标中的 处于同一水平面上相互垂直的轴线; According to the pixel coordinates of the preset feature point and the robot coordinates corresponding to the pixel coordinates of the preset feature point when the robot translates in the X-axis direction and the Y-axis direction to preset different positions in the area to be calibrated, In the first translational position relationship, the X axis and the Y axis are mutually perpendicular axes on the same horizontal plane in three-dimensional space coordinates;
将所述待标定区域均分为预设数量的子区域, 分别提取所述预设数量的子 区域中心的像素坐标; Dividing the area to be calibrated into a preset number of sub-areas, and respectively extracting the pixel coordinates of the center of the preset number of sub-areas;
obtaining a preset number of target positions according to the first translational position relationship and the pixel coordinates of the centers of the preset number of sub-areas;
分别获取机器人到达所述预设数量的目标位置时所述预设特征点的像素坐 标; Respectively acquiring the pixel coordinates of the preset feature points when the robot reaches the preset number of target positions;
根据所述机器人到达所述预设数量的目标位置时所述预设特征点的像素坐 标以及对应的目标位置的坐标, 对所述第一平移位置关系进行修正, 得到第二 平移位置关系。 According to the pixel coordinates of the preset feature point and the coordinates of the corresponding target position when the robot reaches the preset number of target positions, the first translational position relationship is corrected to obtain a second translational position relationship.
3、 根据权利要求 2所述的方法, 其中, 所述根据所述机器人在待标定区域 分别沿 X轴方向和 Y轴方向平移至预设不同位置时, 所述预设特征点的像素坐 标以及所述预设特征点的像素坐标对应的机器人坐标, 得到第一平移位置关系, 包括:
获取所述机器人从当前位置沿 X轴方向平移至预设第一位置时, 所述预设 特征点的第一像素坐标以及所述预设特征点的第一像素坐标对应的机器人第一 坐标, 其中, 所述预设第一位置通过二分法获取; 3. The method according to claim 2, wherein the pixel coordinates of the preset feature points and the pixel coordinates of the preset feature points when the robot translates to preset different positions in the X-axis direction and the Y-axis direction in the area to be calibrated respectively The robot coordinates corresponding to the pixel coordinates of the preset feature points to obtain the first translational position relationship includes: Acquiring the first pixel coordinate of the preset feature point and the first pixel coordinate of the robot corresponding to the first pixel coordinate of the preset feature point when the robot translates from the current position along the X-axis direction to the preset first position, Wherein, the preset first position is obtained by dichotomy;
获取所述机器人从当前位置沿 Y轴方向平移至预设第二位置时, 所述预设 特征点的第二像素坐标以及所述预设特征点的第二像素坐标对应的机器人第二 坐标, 其中, 所述预设第二位置通过二分法获取; Acquiring the second pixel coordinates of the preset feature point and the second pixel coordinates of the robot corresponding to the second pixel coordinates of the preset feature point when the robot translates from the current position along the Y-axis direction to the preset second position, Wherein, the preset second position is obtained by dichotomy;
根据所述预设特征点的第一像素坐标、 所述预设特征点的第一像素坐标对 应的机器人第一坐标、 所述预设特征点的第二像素坐标以及所述预设特征点的 第二像素坐标对应的机器人第二坐标, 得到第一平移位置关系。 According to the first pixel coordinates of the preset feature point, the first coordinate of the robot corresponding to the first pixel coordinates of the preset feature point, the second pixel coordinates of the preset feature point, and the value of the preset feature point The second coordinate of the robot corresponding to the second pixel coordinate obtains the first translational position relationship.
4. The method according to claim 1, wherein obtaining the rotation center position according to the pixel coordinates of the preset feature point at the different angles and the robot coordinates corresponding to those pixel coordinates comprises:

acquiring a rotation center conversion relationship and a first rotation center, wherein the rotation center conversion relationship characterizes the relationship between the pixel coordinates of the preset feature point and the robot coordinates while the robot rotates;

obtaining the corresponding robot coordinates according to the rotation center conversion relationship and the pixel coordinates of the preset feature point when the robot rotates from the initial position to different angles by a preset first step length;

acquiring the pixel coordinates of the preset feature point when the robot reaches the positions given by the corresponding robot coordinates;

obtaining a robot coordinate error according to the pixel coordinates of the preset feature point when the robot reaches the positions given by the corresponding robot coordinates and the pixel coordinates of the preset feature point when the robot is at the initial position;

fitting the coordinate errors over the full angle range, and compensating the coordinates of the first rotation center with the fitted coordinate errors to obtain the position of a second rotation center.
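The error-fitting and compensation step of claim 4 can be sketched as follows. The first-harmonic error model and the averaging of the fitted error are assumptions made for illustration only; the claim states merely that the coordinate errors over the full angle range are fitted and compensated to the first rotation center:

```python
import numpy as np

def compensate_center(first_center, angles, errors):
    """Fit the per-axis coordinate error over the full angle range with a
    first-harmonic model  e(t) ~ a + b*cos(t) + c*sin(t)  (an assumed
    model), then shift the first rotation center by the mean fitted error
    to obtain the second rotation center."""
    t = np.asarray(angles, dtype=float)
    E = np.asarray(errors, dtype=float)          # shape (n, 2): x/y error
    basis = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coef, *_ = np.linalg.lstsq(basis, E, rcond=None)   # shape (3, 2)
    fitted = basis @ coef
    return np.asarray(first_center, dtype=float) + fitted.mean(axis=0)
```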
5. The method according to claim 4, further comprising, after compensating the coordinates of the first rotation center with the fitted coordinate errors to obtain the position of the second rotation center:

calculating the difference between the coordinates of the second rotation center position and the coordinates of the initial position;

wherein obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position comprises:

obtaining the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship, the rotation center position, and the difference.
6. The method according to claim 4, wherein acquiring the rotation center conversion relationship and the first rotation center comprises:

fitting the pixel coordinates of the preset feature point, recorded when the robot rotates from the initial position to different angles by a preset second step length, to obtain the first rotation center;

calculating the rotation error between the coordinates of the first rotation center and the pixel coordinates of the preset feature point at the initial position;

obtaining the rotation center conversion relationship according to the rotation error, the pixel coordinates of the preset feature point when the robot reaches the preset number of target positions, and the robot coordinates corresponding to those pixel coordinates.
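One standard way to realise the "fit the pixel coordinates to obtain the first rotation center" step of claim 6 is a circle fit: as the robot rotates, the fixed feature point traces a circle in the image, and the circle's centre is the rotation centre in pixel coordinates. A sketch using the Kåsa least-squares fit, which is chosen here as an assumption (the patent does not name a particular fitting method):

```python
import numpy as np

def fit_rotation_center(points):
    """Algebraic (Kasa) circle fit over the pixel coordinates of the
    feature point observed at several rotation angles; returns the
    fitted circle centre as the first rotation centre."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # The circle equation rearranges to the linear system
    #   2*cx*x + 2*cy*y + d = x^2 + y^2,  with d = r^2 - cx^2 - cy^2.
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy
```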
7. The method according to claim 6, further comprising, before fitting the pixel coordinates of the preset feature point, recorded when the robot rotates from the initial position to different angles by the preset second step length, to obtain the first rotation center:

acquiring the maximum forward angle at which the image acquisition device still finds the preset feature point when the robot rotates about the Z-axis in the positive direction in the area to be calibrated, and the maximum reverse angle at which the image acquisition device still finds the preset feature point when the robot rotates about the Z-axis in the reverse direction in the area to be calibrated, wherein the Z-axis is the axis of the three-dimensional coordinate system that is perpendicular to the horizontal plane;

obtaining the preset second step length according to the maximum forward angle and the maximum reverse angle.
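Claim 7 derives the preset second step length from the two limit angles without fixing a formula. One hypothetical reading, with the sample count chosen arbitrarily for illustration:

```python
def preset_second_step(max_forward_deg, max_reverse_deg, samples=10):
    """Spread the rotation samples evenly over the angular range in which
    the camera still finds the feature point.  Dividing the span by a
    fixed sample count is an assumption; the patent only says the step
    length is obtained from the two limit angles."""
    span = max_forward_deg + abs(max_reverse_deg)
    return span / samples
```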
8. An image acquisition device calibration apparatus, wherein the image acquisition device is applied to a robot vision positioning equipment, the apparatus comprising:

a first information acquisition module, configured to acquire the pixel coordinates of a preset feature point and the corresponding robot coordinates when the robot translates to different positions in the area to be calibrated;

a translational relationship acquisition module, configured to obtain the translational position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the robot coordinates corresponding to those pixel coordinates;

a second information acquisition module, configured to acquire the pixel coordinates of the preset feature point and the robot coordinates corresponding to those pixel coordinates when the robot rotates to different angles;

a rotation center acquisition module, configured to obtain the rotation center position according to the pixel coordinates of the preset feature point at the different angles and the robot coordinates corresponding to those pixel coordinates;

a calibration information acquisition module, configured to obtain the plane conversion relationship between the robot and the image acquisition device according to the translational position relationship and the rotation center position, so as to calibrate the image acquisition device.
9. A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.

10. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910146310.7 | 2019-02-27 | ||
CN201910146310.7A CN109829953B (en) | 2019-02-27 | 2019-02-27 | Image acquisition device calibration method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020173240A1 true WO2020173240A1 (en) | 2020-09-03 |
Family
ID=66864630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/072491 WO2020173240A1 (en) | 2019-02-27 | 2020-01-16 | Image acquisition apparatus calibration method and apparatus, computer device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109829953B (en) |
WO (1) | WO2020173240A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829953B (en) * | 2019-02-27 | 2021-09-03 | 广东拓斯达科技股份有限公司 | Image acquisition device calibration method and device, computer equipment and storage medium |
CN110116411B (en) * | 2019-06-06 | 2020-10-30 | 浙江汉振智能技术有限公司 | Robot 3D vision hand-eye calibration method based on spherical target |
CN113795358B (en) * | 2019-06-17 | 2024-07-09 | 西门子(中国)有限公司 | Coordinate system calibration method, device and computer readable medium |
CN110465946B (en) * | 2019-08-19 | 2021-04-30 | 珞石(北京)科技有限公司 | Method for calibrating relation between pixel coordinate and robot coordinate |
CN113021328A (en) * | 2019-12-09 | 2021-06-25 | 广东博智林机器人有限公司 | Hand-eye calibration method, device, equipment and medium |
CN111369625B (en) * | 2020-03-02 | 2021-04-13 | 广东利元亨智能装备股份有限公司 | Positioning method, positioning device and storage medium |
CN111524073A (en) * | 2020-04-14 | 2020-08-11 | 云南电网有限责任公司信息中心 | Image geometric transformation method, device, computer equipment and medium |
CN111627071B (en) * | 2020-04-30 | 2023-10-17 | 如你所视(北京)科技有限公司 | Method, device and storage medium for measuring motor rotation precision |
CN111912337B (en) * | 2020-07-24 | 2021-11-09 | 上海擎朗智能科技有限公司 | Method, device, equipment and medium for determining robot posture information |
CN112058679A (en) * | 2020-08-11 | 2020-12-11 | 武汉万邦德新科技有限公司 | Soft agricultural product robot grabbing and sorting method and device based on impedance control |
CN112348895B (en) * | 2020-11-18 | 2024-08-23 | 深圳创维-Rgb电子有限公司 | Control method, control equipment and medium for bonding liquid crystal panel |
CN114683267B (en) * | 2020-12-31 | 2023-09-19 | 北京小米移动软件有限公司 | Calibration method, calibration device, electronic equipment and storage medium |
CN112819884B (en) * | 2021-01-08 | 2024-07-12 | 苏州华兴源创科技股份有限公司 | Coordinate correction method and device, electronic equipment and computer readable medium |
CN113510697B (en) * | 2021-04-23 | 2023-02-14 | 知守科技(杭州)有限公司 | Manipulator positioning method, device, system, electronic device and storage medium |
CN114505858B (en) * | 2022-02-17 | 2023-08-18 | 北京极智嘉科技股份有限公司 | Cantilever shaft butt joint control method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH106264A (en) * | 1996-06-27 | 1998-01-13 | Ricoh Co Ltd | Robot arm condition sensing method and its system |
CN106934813A (en) * | 2015-12-31 | 2017-07-07 | 沈阳高精数控智能技术股份有限公司 | A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning |
CN109366472A (en) * | 2018-12-04 | 2019-02-22 | 广东拓斯达科技股份有限公司 | Article laying method, device, computer equipment and the storage medium of robot |
CN109483531A (en) * | 2018-10-26 | 2019-03-19 | 江苏大学 | It is a kind of to pinpoint the NI Vision Builder for Automated Inspection and method for picking and placing FPC plate for manipulator |
CN109829953A (en) * | 2019-02-27 | 2019-05-31 | 广东拓斯达科技股份有限公司 | Image collecting device scaling method, device, computer equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104354167B (en) * | 2014-08-29 | 2016-04-06 | 广东正业科技股份有限公司 | A kind of Robotic Hand-Eye Calibration method and device |
EP3510562A1 (en) * | 2016-09-07 | 2019-07-17 | Starship Technologies OÜ | Method and system for calibrating multiple cameras |
CN107808401B (en) * | 2017-10-30 | 2020-09-22 | 大族激光科技产业集团股份有限公司 | Hand-eye calibration method for single camera at tail end of mechanical arm |
- 2019-02-27: CN application CN201910146310.7A, granted as CN109829953B (active)
- 2020-01-16: PCT application PCT/CN2020/072491, published as WO2020173240A1 (application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH106264A (en) * | 1996-06-27 | 1998-01-13 | Ricoh Co Ltd | Robot arm condition sensing method and its system |
CN106934813A (en) * | 2015-12-31 | 2017-07-07 | 沈阳高精数控智能技术股份有限公司 | A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning |
CN109483531A (en) * | 2018-10-26 | 2019-03-19 | 江苏大学 | It is a kind of to pinpoint the NI Vision Builder for Automated Inspection and method for picking and placing FPC plate for manipulator |
CN109366472A (en) * | 2018-12-04 | 2019-02-22 | 广东拓斯达科技股份有限公司 | Article laying method, device, computer equipment and the storage medium of robot |
CN109829953A (en) * | 2019-02-27 | 2019-05-31 | 广东拓斯达科技股份有限公司 | Image collecting device scaling method, device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
CAO, RUNNING: "Optimization study of robot odd-form placement", CHINA MASTER’S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE, 15 April 2018 (2018-04-15), XP006067973, ISSN: 1674-0246, DOI: 20200413155003X * |
Also Published As
Publication number | Publication date |
---|---|
CN109829953B (en) | 2021-09-03 |
CN109829953A (en) | 2019-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020173240A1 (en) | Image acquisition apparatus calibration method and apparatus, computer device, and storage medium | |
KR102532072B1 (en) | System and method for automatic hand-eye calibration of vision system for robot motion | |
CN111331592B (en) | Mechanical arm tool center point correcting device and method and mechanical arm system | |
US10223589B2 (en) | Vision system for training an assembly system through virtual assembly of objects | |
WO2023082990A1 (en) | Method and apparatus for determining working pose of robotic arm | |
WO2020024178A1 (en) | Hand-eye calibration method and system, and computer storage medium | |
CN113442169B (en) | Method and device for calibrating hands and eyes of robot, computer equipment and readable storage medium | |
WO2021169855A1 (en) | Robot correction method and apparatus, computer device, and storage medium | |
CN113910219A (en) | Exercise arm system and control method | |
US20200262080A1 (en) | Comprehensive model-based method for gantry robot calibration via a dual camera vision system | |
US20180161983A1 (en) | Control device, robot, and robot system | |
WO2020063058A1 (en) | Calibration method for multi-degree-of-freedom movable vision system | |
CN111862220A (en) | Correction method and device for UVW platform calibration, deviation correction method and alignment system | |
CN111759463A (en) | Method for improving positioning precision of surgical mechanical arm | |
WO2023134237A1 (en) | Coordinate system calibration method, apparatus and system for robot, and medium | |
JP6410411B2 (en) | Pattern matching apparatus and pattern matching method | |
CN108420531B (en) | Surgical tool adjusting method, electronic device and clamping device | |
CN112975959B (en) | Machine vision-based radiator assembling and positioning method, system and medium | |
CN117340879A (en) | Industrial machine ginseng number identification method and system based on graph optimization model | |
JPH11320465A (en) | Control method for robot arm | |
JP6253847B1 (en) | Laser processing apparatus, laser processing method, and laser processing program | |
JP2006049755A (en) | Rotation center calculation method and work positioning device using the same | |
CN112716518B (en) | Ultrasonic scanning method, device, terminal equipment and storage medium | |
CN116175569A (en) | Method for determining relation model of hand-eye matrix, hand-eye calibration method and equipment | |
KR102577964B1 (en) | Alignment system for liver surgery |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20763427; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20763427; Country of ref document: EP; Kind code of ref document: A1