CN118544360A - Robot vision detection method, system, terminal and medium based on laser compensation - Google Patents

Robot vision detection method, system, terminal and medium based on laser compensation

Info

Publication number
CN118544360A
CN118544360A (application CN202411001958.2A)
Authority
CN
China
Prior art keywords
information
pixel coordinate
coordinate information
preset
robotic arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411001958.2A
Other languages
Chinese (zh)
Inventor
王恺
谢凌波
卢清华
黄枢扬
邹家华
陈为林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University
Priority to CN202411001958.2A
Publication of CN118544360A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1692 Calibration of manipulator
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to the technical field of computer vision and provides a robot vision detection method, system, terminal and medium based on laser compensation. The method first acquires end marker image set information and then performs the following processing on each piece of end marker image information: acquiring first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm; performing distortion correction on the end marker image information according to the first pixel coordinate information and the distortion coefficient set information to determine second pixel coordinate information corresponding to the measured position of the end of the robotic arm; effectively determining coordinate error value information of the end of the robotic arm; and finally compensating the second pixel coordinate information to generate compensated pixel coordinate information. The application can compensate for global error, so that the accuracy of camera detection closely approaches the motion error detected by a laser tracker, the positioning accuracy of the robotic arm is greatly improved, and monocular vision precision detection oriented to planar robotic arm control is realized.

Description

Robot vision detection method, system, terminal and medium based on laser compensation

Technical Field

The present application relates to the technical field of computer vision, and in particular to a robot vision detection method, system, terminal and medium based on laser compensation.

Background Art

In the 1980s, computer vision technology was first applied to robot systems. Compared with traditional robots, vision-based robots perform better in adaptability, control accuracy and robustness. With the increasingly wide application of robots in industrial production, vision for robotic arms has become an important research direction in the field of robotics and is widely regarded as one of the most important directions in the development of modern high technology. The accuracy of robotic arm vision directly affects the motion accuracy of operations guided by such arms; it depends directly on the calibration of the arm's vision system, the calibration accuracy directly affects the precision of the motion feedback of an industrial robotic arm, and the complexity of calibration directly affects how quickly such an arm can be calibrated. Therefore, achieving high-precision positioning, recognition and detection is crucial to the successful operation of a robotic arm.

At present, in monocular vision precision detection oriented to planar robotic arm control, image deformation occurs when the motion of the robotic arm is captured in real time, which causes serious deviations in the detection of the calibration object and affects the positioning accuracy of the robotic arm. The accuracy is therefore relatively low and needs further improvement.

Summary of the Invention

Based on this, embodiments of the present application provide a robot vision detection method, system, terminal and medium based on laser compensation, so as to solve the problem of low accuracy in the prior art.

In a first aspect, an embodiment of the present application provides a robot vision detection method based on laser compensation, the method comprising:

continuously acquiring end marker image set information based on a preset camera, wherein the end marker image set information comprises a plurality of consecutive pieces of end marker image information, the subject of the end marker image information is a designated robotic arm end, and at least two light source markers are mounted on the robotic arm end;

for each piece of end marker image information: acquiring first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, and performing distortion correction on the end marker image information according to the first pixel coordinate information and preset distortion coefficient set information to determine second pixel coordinate information corresponding to the measured position of the end of the robotic arm;

determining coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function;

compensating the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information.

Compared with the prior art, the beneficial effect is as follows: in the robot vision detection method based on laser compensation provided in the embodiments of the present application, the terminal device can first use the camera to acquire end marker image set information, and then perform the following processing on each piece of end marker image information: acquire the first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm; perform distortion correction on the end marker image information according to the first pixel coordinate information and the distortion coefficient set information to accurately determine the second pixel coordinate information corresponding to the measured position of the end of the robotic arm; then effectively determine the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and the end position motion error calculation function; and finally compensate the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information. Global error is thereby compensated, the positioning accuracy of the robotic arm is improved, and the current problem of low accuracy is solved to a certain extent.

In a second aspect, an embodiment of the present application provides a robot vision detection system based on laser compensation, the system comprising:

an end marker image set information acquisition module, configured to continuously acquire end marker image set information based on a preset camera, wherein the end marker image set information comprises a plurality of consecutive pieces of end marker image information, the subject of the end marker image information is a designated robotic arm end, and at least two light source markers are mounted on the robotic arm end;

a first pixel coordinate information acquisition module, configured to, for each piece of end marker image information: acquire first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, and perform distortion correction on the end marker image information according to the first pixel coordinate information and preset distortion coefficient set information to determine second pixel coordinate information corresponding to the measured position of the end of the robotic arm;

a coordinate error value information determination module, configured to determine coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function;

a compensated pixel coordinate information generation module, configured to compensate the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information.

In a third aspect, an embodiment of the present application provides a terminal device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of the first aspect when executing the computer program.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of the first aspect.

It can be understood that the beneficial effects of the second to fourth aspects can be found in the relevant description of the first aspect and are not repeated here.

Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments or the prior art are briefly introduced below.

FIG. 1 is a schematic flow chart of a robot vision detection method provided by an embodiment of the present application;

FIG. 2 is a first schematic diagram of a robotic arm provided by an embodiment of the present application;

FIG. 3 is a second schematic diagram of a robotic arm provided by an embodiment of the present application;

FIG. 4 is a schematic flow chart of the steps before step S100 in the robot vision detection method provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of camera imaging provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of calibration plate image information provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of calibration plate corner point information provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of reprojection error information provided by an embodiment of the present application;

FIG. 9 is a schematic flow chart of step S200 in the robot vision detection method provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of edge fitting of a first circular marker provided by an embodiment of the present application;

FIG. 11 is a schematic diagram of edge fitting of a second circular marker provided by an embodiment of the present application;

FIG. 12 is a third schematic diagram of a robotic arm provided by an embodiment of the present application;

FIG. 13 is a schematic diagram of point position distribution provided by an embodiment of the present application;

FIG. 14 is a schematic flow chart of the steps after step S240 in the robot vision detection method provided by an embodiment of the present application;

FIG. 15 is a schematic flow chart of the steps before step S300 in the robot vision detection method provided by an embodiment of the present application;

FIG. 16 is a schematic diagram of a predicted position provided by an embodiment of the present application;

FIG. 17 is a schematic flow chart of step S400 in the robot vision detection method provided by an embodiment of the present application;

FIG. 18 is a schematic flow chart of the steps after step S400 in the robot vision detection method provided by an embodiment of the present application;

FIG. 19 is a module block diagram of a robot vision detection system provided by an embodiment of the present application;

FIG. 20 is a schematic diagram of a terminal device provided by an embodiment of the present application.

Detailed Description

In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary details do not obscure the description of the present application.

In the description of the specification and the appended claims of the present application, the terms "first", "second", "third", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.

References to "one embodiment" or "some embodiments" in the specification of the present application mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Therefore, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc. appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "comprising", "including", "having" and their variations all mean "including but not limited to", unless otherwise specifically emphasized.

In order to illustrate the technical solution described in this application, specific embodiments are described below.

Please refer to FIG. 1, which is a schematic flow chart of the robot vision detection method based on laser compensation provided in an embodiment of the present application. In this embodiment, the execution subject of the robot vision detection method is a terminal device. It can be understood that the types of terminal devices include but are not limited to mobile phones, tablet computers, laptop computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc.; the embodiments of the present application do not impose any restriction on the specific type of terminal device.

Please refer to FIG. 1. The robot vision detection method provided in the embodiments of the present application includes but is not limited to the following steps:

In S100, end marker image set information is continuously acquired based on a preset camera.

Exemplarily, referring to FIG. 2, the robot vision detection method can be applied to a robotic arm.

Specifically, the terminal device can photograph the end of the robotic arm based on a preset camera and continuously acquire end marker image set information, wherein the end marker image set information includes a plurality of consecutive pieces of end marker image information; the end marker image information describes the image obtained by the camera photographing the end of the robotic arm; the end of the robotic arm is the end of the arm that directly interacts with the work object or the environment; the subject of the end marker image information is the designated end of the robotic arm; and at least two light source markers are mounted on the end of the robotic arm.

In one possible implementation, the light source markers can be small white lights. When there are two light source markers, they can be located at diagonal positions; the two small white lights are fixed in circular holes of the part, light is guided through acrylic rods, and a white film is attached to the hole surface to diffuse the light.

Without loss of generality, referring to FIG. 3, the terminal device can construct an inverse kinematics equation relating the motion position of the end of the robotic arm to the joint angles of the two arms of the robotic arm, so that the combination of joint angles can be determined from the operating pose, wherein the inverse kinematics equation can be:

$$\theta_1=\operatorname{atan2}(B,A)+\arccos\frac{A^2+B^2+l_1^2-l_2^2}{2\,l_1\sqrt{A^2+B^2}},\qquad \theta_2=\operatorname{atan2}(B,A)-\arccos\frac{A^2+B^2+l_2^2-l_1^2}{2\,l_2\sqrt{A^2+B^2}},$$

where $l_1$ is the length of the first arm of the robotic arm, $l_2$ is the length of the second arm of the robotic arm, $\theta_1$ is the angle between the first arm and the positive direction of the X-axis, $\theta_2$ is the angle between the second arm and the positive direction of the X-axis, and $P(A,B)$ is the end position of the robotic arm.

In the formula, the joint angles and the end position satisfy $A=l_1\cos\theta_1+l_2\cos\theta_2$ and $B=l_1\sin\theta_1+l_2\sin\theta_2$.
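Exemplarily, a minimal Python sketch of this two-link planar kinematic relationship is given below; the function names and the concrete link lengths are assumptions introduced for illustration only and are not part of the application.

```python
import math

def planar_2link_ik(A, B, l1, l2):
    """Return one (theta1, theta2) solution branch, in radians, for a planar
    two-link arm whose end point is P(A, B); angles are measured from the
    positive X-axis. Raises ValueError if P is out of reach."""
    r2 = A * A + B * B
    r = math.sqrt(r2)
    if r > l1 + l2 or r < abs(l1 - l2):
        raise ValueError("target point is outside the reachable workspace")
    phi = math.atan2(B, A)
    theta1 = phi + math.acos((r2 + l1 * l1 - l2 * l2) / (2 * l1 * r))
    theta2 = phi - math.acos((r2 + l2 * l2 - l1 * l1) / (2 * l2 * r))
    return theta1, theta2

def planar_2link_fk(theta1, theta2, l1, l2):
    """Forward kinematics: end position P(A, B) from the two joint angles."""
    return (l1 * math.cos(theta1) + l2 * math.cos(theta2),
            l1 * math.sin(theta1) + l2 * math.sin(theta2))

if __name__ == "__main__":
    t1, t2 = planar_2link_ik(200.0, 150.0, l1=250.0, l2=250.0)  # lengths in mm, assumed values
    print(planar_2link_fk(t1, t2, 250.0, 250.0))                # reproduces (200.0, 150.0)
```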

在一些可能的实现方式中,为了有利于提高准确性,请参阅图4,在步骤S100之前,该方法还包括但不限于以下步骤:In some possible implementations, in order to improve accuracy, please refer to FIG. 4 , before step S100, the method further includes but is not limited to the following steps:

In S101, an imaging model of the camera is constructed.

Specifically, the terminal device can first construct an imaging model of the camera. Exemplarily, the camera can be an industrial camera of model ME2P-2621-15U3M.

Without loss of generality, referring to FIG. 5, a camera consists of a sensor and a lens. The function of the lens is to converge the light diverging from a point in the external environment onto a point of the photosensitive sensor so as to achieve clear imaging. According to the projection method, lenses can be divided into ordinary lenses and telecentric lenses; the former performs perspective projection and the latter parallel projection. The imaging model of the camera can be simplified to a pinhole imaging model, characterized in that all rays from the scene pass through one projection centre, namely the centre of the lens.

Exemplarily, the pinhole projection that converts three-dimensional coordinates into two-dimensional pixel coordinates can be regarded as the following process: a point P(Xw, Yw, Zw) in the three-dimensional world coordinate system is converted, through its relationship with the camera coordinate system, into a point P(Xc, Yc, Zc) in the camera coordinate system; the point P(Xc, Yc, Zc) in the camera coordinate system is then converted into a point P(u, v) in pixel coordinates; and finally the pixel coordinates are converted into the image coordinate point P(x, y). Therefore, without considering distortion, the imaging model can be:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix},$$

where $Z_c$ is the depth information of the photographed object; $u$ and $v$ are the actual abscissa and ordinate of the image point of the end marker image information in the preset image coordinate system; the matrix containing $f/dx$, $f/dy$, $u_0$ and $v_0$ is the intrinsic parameter information of the camera, which can contain 5 unknowns; $f$ is the focal length information of the camera, in millimetres; $dx$ is the first pixel size related to the abscissa; $dy$ is the second pixel size related to the ordinate; $[R\;\;t]$ is the extrinsic parameter information of the camera; and $u_0$ and $v_0$ are the theoretical abscissa and ordinate of the image centre of the end marker image information in the preset image coordinate system.

In the formula, $R$ is the rotation matrix of the camera coordinates relative to the world coordinates, and $t$ is the translation vector of the camera coordinates relative to the world coordinates.

It should be noted that although monocular camera calibration can only yield the intrinsic and extrinsic parameters and cannot determine the image depth of an object, the image depth can be eliminated when a three-dimensional world coordinate point is converted into a pixel coordinate point. Therefore, when the camera and the robotic arm are coplanar, the terminal device can determine, according to the first, second and third functional formulas, how many pixels a given unit distance in the three-dimensional world coordinates occupies, and thus realise coplanar distance measurement with a monocular camera from the number of pixels between two points in the plane.
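In one possible implementation, this coplanar ranging idea can be sketched in Python as follows: the physical distance between two coplanar points is estimated from their pixel distance and a millimetres-per-pixel scale obtained from calibration. The scale value used here (0.2235 mm per pixel, mentioned later in the description), the example pixel coordinates and the function name are assumptions for illustration only.

```python
import math

def coplanar_distance_mm(p1, p2, mm_per_pixel=0.2235):
    """Estimate the physical distance (mm) between two points lying on the
    calibrated working plane, given their pixel coordinates (u, v)."""
    du = p2[0] - p1[0]
    dv = p2[1] - p1[1]
    return math.hypot(du, dv) * mm_per_pixel

# Example: two marker centres detected roughly 450 pixels apart
print(coplanar_distance_mm((1021.4, 988.7), (1320.9, 1324.2)))
```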

Exemplarily, the first functional formula may be:

In S102, calibration plate image information is acquired based on the camera.

Without loss of generality, obtaining three-dimensional geometric information from two-dimensional images is essential, and the accuracy of camera calibration is very important for visual measurement. The main content of camera calibration is to determine the intrinsic parameters and extrinsic parameters of the camera and to obtain the distortion coefficients of the camera, so that pixel coordinates in an image can be accurately mapped to physical coordinates in the real world.

Exemplarily, referring to FIG. 6, radial distortion and tangential distortion are introduced during the manufacture and use of the camera lens, and both deform straight lines in the image or make corner points inaccurate. Through camera calibration, the terminal device can calculate the distortion coefficients and then correct the image during image processing, so that objects in the image keep their accurate shape and position and the pixel information of the real position of an object can be obtained. Therefore, after the terminal device constructs the imaging model of the camera, the terminal device can acquire calibration plate image information based on the camera, wherein the calibration plate image information describes the image obtained by photographing the calibration plate with the camera; the calibration plate can be a checkerboard or a dot array, for example a checkerboard calibration plate of specification GP290-20-12*9, and the calibration plate can be placed in advance on a plane at the same height as the plane to be detected, so as to facilitate the subsequent detection of the characteristic pixels of the light sources moving on the fixed-height plane.

In S103, the calibration plate image information is grayscaled to generate grayscale image information.

Specifically, after the terminal device acquires the calibration plate image information, the terminal device can perform grayscale processing on the calibration plate image information to generate grayscale image information, wherein the grayscale image information describes the calibration plate image information after grayscale processing.

In S104, a plurality of calibration plate corner point information of the grayscale image information is determined based on a preset corner point detection algorithm and the grayscale image information.

Exemplarily, referring to FIG. 7, after the terminal device generates the grayscale image information, the terminal device can determine a plurality of calibration plate corner point information of the grayscale image information based on a preset corner point detection algorithm and the grayscale image information, wherein the calibration plate corner point information describes corner points on the calibration plate which are used for subsequent calculations; the perfect circles in FIG. 7 indicate the calibration plate corner point information.
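As a hedged illustration of steps S102 to S104, the sketch below uses OpenCV to grayscale the calibration plate images, detect checkerboard corners and calibrate the camera. The 12*9 board specification is taken from the example above; the file paths, the square size and the variable names are assumptions for the example only.

```python
import glob
import cv2
import numpy as np

pattern_size = (11, 8)       # inner corners of a 12 x 9 checkerboard (assumed)
square_size_mm = 20.0        # assumed square edge length of the GP290-20-12*9 board

# Physical (world) coordinates of the corners on the board plane, Z = 0
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_mm

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.png"):                    # assumed image location
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)   # S103: grayscale
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)   # S104: corner detection
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics, distortion coefficients and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("overall RMS reprojection error (pixels):", rms)
```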

In S105, for each piece of calibration plate corner point information: the actual physical coordinate information of the corner point is acquired.

Exemplarily, referring to FIG. 7, after the terminal device determines the plurality of calibration plate corner point information, the terminal device can perform the following processing for each piece of calibration plate corner point information: acquire the actual physical coordinate information of the corner point, so that the image coordinates and the physical coordinates can be associated, wherein the actual physical coordinate information of the corner point describes the actual physical coordinates of the calibration plate corner point; exemplarily, the actual physical coordinates can be the grid point coordinates of the corner point on the calibration plate.

In S106, reprojection error information is generated according to a preset Euclidean distance calculation function, the calibration plate corner point information and the actual physical coordinate information of the corner points.

Specifically, after the terminal device acquires the actual physical coordinate information of the corner points, the terminal device can input the calibration plate corner point information and the actual physical coordinate information of the corner points into a preset Euclidean distance calculation function to generate reprojection error information, so that the calibration result can be evaluated and the accuracy of the calibration determined, wherein the reprojection error information describes the projection error value between the calibration plate corner point information and the actual physical coordinate information of the corner points.

Exemplarily, referring to FIG. 8, FIG. 8 shows the reprojection error information calculated for twenty-five calibration images. As can be seen from FIG. 8, the maximum reprojection error is 0.1 pixels and the average reprojection error is 0.07 pixels.

It should be noted that the camera resolution can be 5120 × 5120 pixels; the focal length can be 535.460 mm; the intrinsic parameters can be those obtained by the calibration described above; and the distance represented by a single pixel on the plane to be measured can be 0.2235 mm.

In S107, calibration accuracy result information is generated according to the reprojection error information and preset error threshold information.

Specifically, after the terminal device generates the reprojection error information, the terminal device can compare the reprojection error information with preset error threshold information to generate calibration accuracy result information, wherein the calibration accuracy result information is either calibration-qualified information, describing that the calibration is qualified, or calibration-unqualified information, describing that the calibration is unqualified; the error threshold information can be customised in advance. Exemplarily, if the reprojection error information is smaller than the error threshold information, the terminal device can generate calibration-qualified information; otherwise, the terminal device can generate calibration-unqualified information.
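A hedged sketch of the reprojection error evaluation in S106 and the threshold comparison in S107, continuing the calibration example above (K, dist, rvecs, tvecs, obj_points and img_points are the variables from that sketch; the 0.1-pixel threshold is only an example value):

```python
import cv2
import numpy as np

def mean_reprojection_error(obj_points, img_points, K, dist, rvecs, tvecs):
    """Mean Euclidean distance (pixels) between detected corners and the
    corners reprojected from their physical board coordinates."""
    total_err, total_pts = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        err = np.linalg.norm(imgp.reshape(-1, 2) - projected.reshape(-1, 2), axis=1)
        total_err += err.sum()
        total_pts += len(err)
    return total_err / total_pts

error_threshold_px = 0.1    # assumed preset error threshold
mean_err = mean_reprojection_error(obj_points, img_points, K, dist, rvecs, tvecs)
print("calibration qualified" if mean_err < error_threshold_px else "calibration unqualified")
```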

Accordingly, the above step S100 includes but is not limited to the following steps:

In S110, if the calibration accuracy result information is calibration-qualified information, end marker image set information is continuously acquired based on the preset camera.

Specifically, if the calibration accuracy result information is calibration-qualified information, the terminal device can continuously acquire the end marker image set information based on the preset camera.

In S200, for each piece of end marker image information: first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm is acquired, and distortion correction is performed on the end marker image information according to the first pixel coordinate information and preset distortion coefficient set information, so as to determine second pixel coordinate information corresponding to the measured position of the end of the robotic arm.

Specifically, after the terminal device continuously acquires the end marker image set information, the terminal device can perform the following processing for each piece of end marker image information: acquire the first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, and then perform distortion correction on the end marker image information according to the first pixel coordinate information and the preset distortion coefficient set information, so as to effectively determine the second pixel coordinate information corresponding to the measured position of the end of the robotic arm. In one possible implementation, the distortion coefficient set information includes first distortion coefficient information and second distortion coefficient information, which can effectively remove distortion from the image; it should be noted that the first distortion coefficient information and the second distortion coefficient information can be obtained through camera calibration.

In some possible implementations, in order to remove distortion from the image, referring to FIG. 9, step S200 includes but is not limited to the following steps:

In S210, for each piece of end marker image information: first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm is acquired.

Specifically, the terminal device can perform the following processing for each piece of end marker image information: acquire the first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm.

Exemplarily, the first pixel coordinate information can be:

$$x_1=\frac{(u-u_0)\,dx}{f},\qquad y_1=\frac{(v-v_0)\,dy}{f},$$

where $x_1$ is the abscissa of the first pixel coordinate information; $u$ is the actual abscissa of the image point of the end marker image information in the preset image coordinate system; $u_0$ is the theoretical abscissa of the image centre of the end marker image information in the preset image coordinate system; $f$ is the focal length information of the camera, in millimetres; $dx$ is the first pixel size related to the abscissa; $y_1$ is the ordinate of the first pixel coordinate information; $v$ is the actual ordinate of the image point of the end marker image information in the preset image coordinate system; $v_0$ is the theoretical ordinate of the image centre of the end marker image information in the preset image coordinate system; and $dy$ is the second pixel size related to the ordinate.

In S220, distortion correction is performed on the end marker image information according to the first pixel coordinate information, preset first distortion coefficient information and preset second distortion coefficient information, so as to determine distortion-corrected pixel coordinate information corresponding to the measured position of the end of the robotic arm.

Specifically, during image acquisition, non-ideal characteristics of the camera lens and the non-parallel relationship between the imager and the lens both produce radial distortion and tangential distortion, so that the image cannot reflect the true state. When a photograph is taken with the camera to obtain the pixels of the marker positions, image distortion causes a deviation between the actual pixels and the ideal pixels; to obtain the ideal pixel coordinates, the image distortion must be handled, and the calibration process based on the calibration plate image information described above only handles the larger radial distortion. Therefore, after obtaining the first pixel coordinate information, the terminal device can perform distortion correction on the end marker image information according to the first pixel coordinate information, the preset first distortion coefficient information and the preset second distortion coefficient information, so as to determine the distortion-corrected pixel coordinate information corresponding to the measured position of the end of the robotic arm, wherein the distortion-corrected pixel coordinate information describes the first pixel coordinate information after preliminary distortion removal.

Exemplarily, assuming that the ideal undistorted image coordinates are (x, y), the image coordinates actually captured by the camera can be expressed as (x_d, y_d), i.e. the above distortion-corrected pixel coordinate information can be:

$$\begin{aligned}x_d &= x\,(1+k_1 r^2+k_2 r^4+k_3 r^6)+2p_1 x y+p_2\,(r^2+2x^2),\\ y_d &= y\,(1+k_1 r^2+k_2 r^4+k_3 r^6)+p_1\,(r^2+2y^2)+2p_2 x y,\end{aligned}\qquad r^2=x^2+y^2,$$

where $x_d$ is the abscissa of the distortion-corrected pixel coordinate information; $x$ is the abscissa of the first pixel coordinate information; $k_1$ is the first distortion coefficient information, which describes the first-order radial distortion coefficient; $k_2$ is the second distortion coefficient information, which describes the second-order radial distortion coefficient; $k_3$ is the preset third-order radial distortion coefficient; $p_1$ is the preset first-order tangential distortion coefficient; $p_2$ is the preset second-order tangential distortion coefficient; $y_d$ is the ordinate of the distortion-corrected pixel coordinate information; and $y$ is the ordinate of the first pixel coordinate information.
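A minimal sketch of this distortion model in Python, assuming the standard Brown-Conrady form used above; the coefficient values below are placeholders, not values from the application.

```python
def apply_distortion(x, y, k1, k2, k3, p1, p2):
    """Map ideal normalized image coordinates (x, y) to the distorted
    coordinates (x_d, y_d) with radial (k1, k2, k3) and tangential (p1, p2) terms."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Example with small placeholder coefficients
print(apply_distortion(0.12, -0.08, k1=-0.15, k2=0.02, k3=0.0, p1=1e-4, p2=-2e-4))
```

In practice, the inverse mapping (removing distortion from detected pixel coordinates) can be performed, for example, with OpenCV's cv2.undistortPoints using the coefficients returned by cv2.calibrateCamera.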

In S230, for each piece of distortion-corrected pixel coordinate information of the first marker and each piece of distortion-corrected pixel coordinate information of the second marker: least-squares fitting is performed on the distortion-corrected pixel coordinate information to generate ideal circle feature set information of the target ideal circle.

Specifically, after the terminal device determines the distortion-corrected pixel coordinate information, the terminal device can perform the following processing on the distortion-corrected pixel coordinate information of the first marker and of the second marker: perform least-squares fitting on each set of distortion-corrected pixel coordinate information to generate ideal circle feature set information of the target ideal circle, so that a circle is fitted by least squares to the edge of each of the two markers and the centres of the two markers are obtained, wherein the first marker refers to either one of the light source markers, the second marker refers to the other light source marker, and the ideal circle feature set information includes the circle-centre pixel coordinate information and the radius information of the target ideal circle.

In one possible implementation, referring to FIG. 10 and FIG. 11, FIG. 10 shows the edge fitting of the first circular marker and FIG. 11 shows the edge fitting of the second circular marker. After the terminal device grayscales the captured picture to retain the light source marker pixels above a specific grayscale threshold, the terminal device can fit the edge of the retained circular light source using a moment estimate of the grayscale distribution in the neighbourhood of the edge points determined by the Canny operator, wherein the Canny operator improves the noise resistance of the image processing and the edge connection effect.
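The following OpenCV sketch illustrates one way to isolate the bright circular markers by grayscale thresholding and to extract their edge pixels with the Canny operator, as described above; the threshold values and the file name are assumptions for the example.

```python
import cv2
import numpy as np

img = cv2.imread("end_marker.png")                       # assumed image of the arm end
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Keep only pixels brighter than an assumed grayscale threshold (the light sources)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Edge pixels of the retained circular light sources
edges = cv2.Canny(mask, 50, 150)
edge_pts = np.column_stack(np.nonzero(edges)[::-1]).astype(np.float64)  # (u, v) pairs
print("number of edge pixels:", len(edge_pts))
```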

Without loss of generality, after obtaining the pixel information corresponding to the end position of the robotic arm detected by the camera, the terminal device can mount a laser tracking ball at a position aligned with the end position, then collect the end position of the robotic arm with a laser tracker, and use the information from the laser tracker to judge the accuracy with which the camera detects the end position of the robotic arm.

In S240, second pixel coordinate information is determined according to a plurality of circle-centre pixel coordinate information.

Specifically, after the terminal device generates the ideal circle feature set information, the terminal device can effectively determine the second pixel coordinate information according to the plurality of circle-centre pixel coordinate information; that is, after the circle-centre pixel coordinates of the two circular markers are fitted from their extracted edge pixel information, the two circle-centre pixel coordinates are combined to determine the pixel coordinates of the end position of the robotic arm as detected by the camera, wherein the second pixel coordinate information can be:

$$u_{m}=\frac{u_{c1}+u_{c2}}{2},\qquad v_{m}=\frac{v_{c1}+v_{c2}}{2},$$

where $u_{m}$ is the abscissa of the second pixel coordinate information, $v_{m}$ is the ordinate of the second pixel coordinate information, and $(u_{c1},v_{c1})$ and $(u_{c2},v_{c2})$ are the fitted circle-centre pixel coordinates of the two markers.

Exemplarily, referring to FIG. 12, for the discrete measurement point set $(x_i, y_i)$, $i = 1, 2, \ldots, m$, the centre of the target ideal circle can be assumed in advance to be $(a, b)$ and its radius to be $R$. Since least-squares fitting usually requires the sum of squared distances to be minimised, the objective is $Q(a,b,R)=\sum_{i=1}^{m}\left[(x_i-a)^2+(y_i-b)^2-R^2\right]^2$. The terminal device can further simplify this by writing the circle as $x^2+y^2+Dx+Ey+F=0$ with $D=-2a$, $E=-2b$ and $F=a^2+b^2-R^2$, so that the objective becomes $Q(D,E,F)=\sum_{i=1}^{m}\left(x_i^2+y_i^2+Dx_i+Ey_i+F\right)^2$. Since the least-squares solution should satisfy $\partial Q/\partial D=\partial Q/\partial E=\partial Q/\partial F=0$, a linear system in $D$, $E$ and $F$ is obtained, from which it can be further determined that $a=-D/2$, $b=-E/2$ and $R=\tfrac{1}{2}\sqrt{D^2+E^2-4F}$.
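A minimal NumPy sketch of this algebraic least-squares circle fit (the formulation sketched above); it can be applied, for example, to the edge pixels extracted in the previous sketch.

```python
import numpy as np

def fit_circle_lsq(pts):
    """Least-squares circle fit: pts is an (m, 2) array of edge points (u, v).
    Solves x^2 + y^2 + D*x + E*y + F = 0 and returns centre (a, b) and radius R."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = -D / 2.0, -E / 2.0
    R = np.sqrt(a * a + b * b - F)
    return a, b, R

# Quick self-check on noisy points sampled from a known circle
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([100 + 30 * np.cos(t), 80 + 30 * np.sin(t)])
pts += np.random.normal(0, 0.2, pts.shape)
print(fit_circle_lsq(pts))   # approximately (100, 80, 30)
```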

Without loss of generality, by establishing the relationship between the robotic arm body coordinate system and the pixel coordinate system at the origin of the robotic arm, the relationship between the pixel information of the end of the robotic arm and the end position of the robotic arm can be realised. Referring to FIG. 13, FIG. 13 is a point position distribution diagram of the end positions of the robotic arm obtained by detecting with the camera the 25 points corresponding to circular trajectories of 100 mm, 200 mm, 300 mm and 400 mm travelled by the robotic arm.

In some possible implementations, in order to detect random errors by repeated camera measurements, referring to FIG. 14, after step S240 the method further includes but is not limited to the following steps:

In S241, based on preset repeated-detection-count information, a plurality of repeated measurement sample data information of the end of the robotic arm at the same position is acquired.

Specifically, when the camera acquires the pixels of the centres of the light source markers, the light sources of the markers and the ambient light intensity vary very slightly, which slightly affects the extraction of the edges of the two markers. In order to reduce the adverse interference caused by this slight influence, the terminal device can acquire a plurality of repeated measurement sample data information of the end of the robotic arm at the same position based on the preset repeated-detection-count information, wherein the repeated-detection-count information can be a preset value.

In S242, average value information of the repeated measurement sample data is generated according to the plurality of repeated measurement sample data information.

Specifically, after the terminal device acquires the plurality of repeated measurement sample data information, the terminal device can generate the average value information of the repeated measurement sample data according to the plurality of repeated measurement sample data information, wherein the average value information of the repeated measurement sample data can be:

$$\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,$$

where $\bar{x}$ is the average value information of the repeated measurement sample data, $x_i$ is the $i$-th piece of repeated measurement sample data, and $n$ is the number of repeated detections.

In S243, standard deviation value information of the end of the robotic arm at the same position is determined according to the average value information of the repeated measurement sample data.

Specifically, after the terminal device generates the average value information of the repeated measurement sample data, the terminal device can determine the standard deviation value information of the end of the robotic arm at the same position according to the average value information of the repeated measurement sample data, wherein the standard deviation value information can be:

$$\sigma=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2},$$

where $\sigma$ is the standard deviation value information.

Without loss of generality, the terminal device can repeat the detection 5 times for each of the thirteen points, spaced thirty degrees apart, on the circular trajectories of 100 mm, 200 mm, 300 mm and 400 mm travelled by the robotic arm; the resulting standard deviation value information is within 0.1 mm, i.e. when the camera captures the two markers and performs least-squares fitting, the repeated detection pixel error is within 0 to 0.05 pixels.
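A short sketch of the repeatability statistics in S241 to S243, computing the mean and sample standard deviation of repeated detections of one coordinate; the sample values below are invented purely for illustration.

```python
import numpy as np

# Assumed repeated detections (mm) of one end-position coordinate at the same pose
samples = np.array([152.43, 152.51, 152.47, 152.38, 152.46])

mean = samples.mean()                 # S242: average of the repeated samples
std = samples.std(ddof=1)             # S243: sample standard deviation
print(f"mean = {mean:.3f} mm, standard deviation = {std:.3f} mm")
```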

In S300, coordinate error value information of the end of the robotic arm is determined according to the second pixel coordinate information and a preset end position motion error calculation function.

Specifically, after the terminal device determines the second pixel coordinate information, the terminal device can effectively determine the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and the preset end position motion error calculation function.

Without loss of generality, since the moving robotic arm operates in an open-loop motion mode, its point-to-point motion exhibits a certain motion error relative to the ideal circular trajectory. The terminal device can use the camera to detect the motion trajectory of the thirteen points, spaced thirty degrees apart, on the circular trajectories of 100 mm, 200 mm, 300 mm and 400 mm travelled by the robotic arm, and obtain the motion error of the robotic arm under camera detection by comparing the difference between the detected data and the theoretical positions, wherein the following functional formula can be used:

$$e=\sqrt{\left(x_c-x_t\right)^2+\left(y_c-y_t\right)^2},$$

where $e$ is the motion error, $(x_c, y_c)$ is the end position of the robotic arm detected by the camera, and $(x_t, y_t)$ is the theoretical position.
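A small sketch of this motion error computation over a set of trajectory points; the detected and theoretical coordinates below are placeholder values.

```python
import numpy as np

# Assumed camera-detected and theoretical end positions (mm) for a few trajectory points
detected = np.array([[100.12, 0.08], [86.75, 49.86], [50.21, 86.44]])
theoretical = np.array([[100.00, 0.00], [86.60, 50.00], [50.00, 86.60]])

errors = np.linalg.norm(detected - theoretical, axis=1)   # per-point motion error e
print(errors, "mean error:", errors.mean())
```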

示例性地,由于用相机检测机械臂运动轨迹的运动误差时,机械臂运动的工作范围越大,其检测运动误差会随之增大,离光心越远,其检测的视场对应的畸变误差会越大,并且在对图像进行畸变处理后,其相机还会存在一定的系统误差,即机械臂平面运动范围越大,相机检测误差会越大,同时,由于机械臂本体同轨迹平面运动下也会存在的运动偏差,所以终端设备可以对机械臂末端位置平面同轨迹的重复几次运动,检测并取平均值,再将平均值与同轨迹每次运动的数值进行对比,获得机械臂在系统误差影响下形成的运动偏差。示例性地,终端设备可以通过激光跟踪仪对机械臂的正、反转依次来回做同轨迹运动5次检测,譬如使用激光跟踪仪对机械臂分别走100mm、200mm、300mm和400mm的圆轨迹对应的25个点进行检测机械臂的运动误差检测,从而有利于实现将相机对机械臂末端位置检测的精确性与激光跟踪仪的数据进行对比,以评判相机检测效果的精准性,以及将相机检测的误差与激光跟踪仪检测的误差进行对比,以评估相机的检测效果。Exemplarily, when a camera is used to detect the motion error of the robot's motion trajectory, the larger the working range of the robot's motion, the larger the detected motion error will be, and the farther away from the optical center, the larger the distortion error corresponding to the detected field of view will be. After the image is distorted, the camera will still have a certain system error, that is, the larger the planar motion range of the robot, the larger the camera detection error will be. At the same time, due to the motion deviation that will exist when the robot's body moves in the same trajectory plane, the terminal device can repeat the same trajectory of the robot's end position plane several times, detect and take the average value, and then compare the average value with the value of each motion on the same trajectory to obtain the motion deviation of the robot under the influence of the system error. Exemplarily, the terminal device can use a laser tracker to detect the forward and reverse motion of the robotic arm five times in sequence along the same trajectory. For example, the laser tracker can be used to detect the 25 points corresponding to the circular trajectories of 100mm, 200mm, 300mm and 400mm respectively, to detect the motion error of the robotic arm. This is conducive to comparing the accuracy of the camera's detection of the end position of the robotic arm with the data of the laser tracker to judge the accuracy of the camera's detection effect, and comparing the error of the camera detection with the error of the laser tracker detection to evaluate the camera's detection effect.

在一些可能的实现方式中,为了实现生成多个预测像素坐标信息,请参阅图15,在步骤S300之前,该方法还包括但不限于以下步骤:In some possible implementations, in order to realize the generation of multiple predicted pixel coordinate information, please refer to FIG. 15 . Before step S300, the method further includes but is not limited to the following steps:

在S301中，根据第二像素坐标信息和预设的距离反比权重插值函数，生成空间插值距离信息。In S301, spatial interpolation distance information is generated according to the second pixel coordinate information and a preset distance inverse weight interpolation function.

在S302中,根据空间插值距离信息和第二像素坐标信息,等间距阵列生成多个预测像素坐标信息。In S302, a plurality of predicted pixel coordinate information is generated in an equidistant array according to the spatial interpolation distance information and the second pixel coordinate information.

具体来说，在终端设备生成空间插值距离信息之后，终端设备可以根据空间插值距离信息和第二像素坐标信息，等间距阵列生成多个预测像素坐标信息，其中，预测像素坐标信息用于描述等间距阵列生成的第二像素坐标信息，多个预测像素坐标信息之间的距离为空间插值距离信息，预测像素坐标信息和第二像素坐标信息之间的距离为空间插值距离信息。Specifically, after the terminal device generates the spatial interpolation distance information, the terminal device can generate a plurality of predicted pixel coordinate information in an equally spaced array according to the spatial interpolation distance information and the second pixel coordinate information, wherein the predicted pixel coordinate information is used to describe second pixel coordinate information generated as an equally spaced array, the distance between multiple pieces of predicted pixel coordinate information is the spatial interpolation distance information, and the distance between the predicted pixel coordinate information and the second pixel coordinate information is also the spatial interpolation distance information.
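
One way to read the "equally spaced array" generation is as a regular grid of candidate prediction points spaced by the spatial interpolation distance around each measured point; the sketch below reflects only that interpretation, with hypothetical names and values.

```python
import numpy as np

def predicted_grid(measured_xy, spacing, n_steps=3):
    """Generate an equally spaced array of predicted pixel coordinates around each
    measured point; `spacing` plays the role of the spatial interpolation distance,
    `n_steps` is how many grid steps to extend in each direction."""
    offsets = np.arange(-n_steps, n_steps + 1) * spacing
    grid = np.stack(np.meshgrid(offsets, offsets), axis=-1).reshape(-1, 2)
    # One grid of candidate prediction points per measured point.
    return np.asarray(measured_xy)[:, None, :] + grid[None, :, :]

measured = np.array([[320.0, 240.0], [400.0, 260.0]])   # hypothetical pixel positions
predictions = predicted_grid(measured, spacing=5.0)
print(predictions.shape)                                 # (2, 49, 2)
```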

相应地,上述步骤S300包括但不限于以下步骤:Accordingly, the above step S300 includes but is not limited to the following steps:

在S310中,根据第二像素坐标信息的横坐标和预设的末端位置运动误差计算函数,确定机械臂末端的坐标误差值信息。In S310, coordinate error value information of the end of the robot arm is determined according to the horizontal coordinate of the second pixel coordinate information and a preset end position motion error calculation function.

具体来说,终端设备可以根据第二像素坐标信息的横坐标和预设的末端位置运动误差计算函数,确定机械臂末端的坐标误差值信息,其中,末端位置运动误差计算函数可以为:Specifically, the terminal device can determine the coordinate error value information of the end of the robotic arm according to the horizontal coordinate of the second pixel coordinate information and a preset end position motion error calculation function, wherein the end position motion error calculation function can be:

$E=\frac{1}{n}\sum_{i=1}^{n}\left|\Delta x_{i}\right|$ ,

式中，$E$表示坐标误差值信息，$n$表示第二像素坐标信息的总数量，$i$表示第二像素坐标信息的次序，$\Delta x_{i}$表示第$i$个第二像素坐标信息对应的横坐标与前一个第二像素坐标信息对应的横坐标之间的差值。In the formula, $E$ represents the coordinate error value information, $n$ represents the total number of pieces of second pixel coordinate information, $i$ represents the order of the second pixel coordinate information, and $\Delta x_{i}$ represents the difference between the horizontal coordinate of the $i$-th piece of second pixel coordinate information and that of the previous piece.
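
Because the closed form of this function is not reproduced in the published text, the sketch below assumes the reconstruction given above (the mean of the magnitudes of successive horizontal-coordinate differences); both the absolute and the signed variants are printed so that the assumption remains explicit.

```python
import numpy as np

def end_position_error(second_pixel_x):
    """second_pixel_x: x pixel coordinates of the second pixel coordinate information,
    ordered along the trajectory. Returns the mean absolute successive difference
    (assumed form) and the signed mean for comparison."""
    dx = np.diff(np.asarray(second_pixel_x, dtype=float))
    return np.abs(dx).mean(), dx.mean()

xs = [100.2, 103.9, 108.1, 111.8]          # hypothetical measured x coordinates
abs_err, signed_err = end_position_error(xs)
print(f"assumed error value: {abs_err:.3f}, signed variant: {signed_err:.3f}")
```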

示例性地,由于终端设备较难对机械臂末端位置的每一个点的运动情况均进行相机测量,为了减少实验测量的局限性和提高时间利用率,终端设备可以先对机械臂末端位置平面运动时的整个平面部分点进行测量,然后通过已知数据进行待估方法去得到希望获得的位置的运动情况。譬如,终端设备可以对100mm、200mm、300mm和400mm的运动轨迹部分检测的位置点的信息,分别进行对150mm、250mm和350mm运动轨迹部分位置进行待估预测,以获得误差补偿,再将预测结果与实际检测的数据做对比,以评价效果。其中,距离反比权重插值法为空间插值方法之一,即离待插点越近的样本点,被赋予越大的权重,其权重贡献与距离成反比,距离反比权重插值法可以结合以下函数式:Exemplarily, since it is difficult for the terminal device to perform camera measurement on the motion of each point at the end position of the robot arm, in order to reduce the limitations of experimental measurement and improve time utilization, the terminal device can first measure the entire plane part of the points when the end position of the robot arm moves in the plane, and then use the known data to perform the estimation method to obtain the motion of the desired position. For example, the terminal device can perform estimation prediction on the positions of 150mm, 250mm and 350mm motion trajectory parts for the information of the position points detected for the motion trajectory parts of 100mm, 200mm, 300mm and 400mm, respectively, to obtain error compensation, and then compare the prediction results with the actual detection data to evaluate the effect. Among them, the inverse distance weighted interpolation method is one of the spatial interpolation methods, that is, the closer the sample point is to the point to be interpolated, the greater the weight is assigned, and its weight contribution is inversely proportional to the distance. The inverse distance weighted interpolation method can be combined with the following function:

$\hat{z}(x,y)=\sum_{i=1}^{n}\lambda_{i}z_{i}$ ，和 $\lambda_{i}=\frac{1/d_{i}}{\sum_{j=1}^{n}1/d_{j}}$ ，以及 $d_{i}=\sqrt{(x-x_{i})^{2}+(y-y_{i})^{2}}$ 。

式中，$\hat{z}(x,y)$表示待估点的数值；$x$表示待估点的位置对应的横坐标，$y$表示待估点的位置对应的纵坐标，$z_{i}$表示第$i$个已知点的数值，$(x_{i},y_{i})$表示第$i$个已知点的位置，$d_{i}$表示待估点与第$i$个已知点之间的距离，$\lambda_{i}$表示第$i$个已知点的权重，$n$表示已知点的数量。In the formulas, $\hat{z}(x,y)$ represents the value of the point to be estimated; $x$ represents the horizontal coordinate of the point to be estimated and $y$ its vertical coordinate; $z_{i}$ represents the value of the $i$-th known point, $(x_{i},y_{i})$ represents its position, $d_{i}$ represents the distance between the point to be estimated and the $i$-th known point, $\lambda_{i}$ represents the weight of the $i$-th known point, and $n$ represents the number of known points.
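
A compact inverse-distance-weighted estimator matching the reconstructed formulas might look as follows; the weighting power defaults to 1 to match "inversely proportional to the distance", although a power of 2 is also common in IDW, and all sample values are hypothetical.

```python
import numpy as np

def idw_estimate(query_xy, known_xy, known_values, power=1.0, eps=1e-12):
    """Inverse-distance-weighted estimate of the value at query_xy."""
    d = np.linalg.norm(np.asarray(known_xy) - np.asarray(query_xy), axis=1)
    if np.any(d < eps):                       # query coincides with a known sample
        return known_values[int(np.argmin(d))]
    w = 1.0 / d**power                        # weight inversely proportional to distance
    return float(np.sum(w * np.asarray(known_values)) / np.sum(w))

known_xy = [[100.0, 0.0], [200.0, 0.0], [300.0, 0.0], [400.0, 0.0]]
known_err = [0.4, 0.8, 1.2, 1.6]              # hypothetical error values [mm]
print("estimated error at (250, 0):", idw_estimate([250.0, 0.0], known_xy, known_err))
```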

示例性地,请参阅图16,终端设备可以根据坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息,在图16中,与A点所指示的星号标记位于同一圆形的多个星号标记表示实测位置,与B点所指示的星号标记位于同一圆形的多个星号标记表示预测位置。Exemplarily, referring to FIG. 16 , the terminal device may perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information. In FIG. 16 , a plurality of asterisk marks located in the same circle as the asterisk mark indicated by point A represent the measured position, and a plurality of asterisk marks located in the same circle as the asterisk mark indicated by point B represent the predicted position.

不失一般性地,发明人经过大量的实验发现,待估位置的误差估计与机械臂整体运动误差趋势十分接近,表明选取的数据点对待估位置误差使用距离反比插值方法具有可行性。Without loss of generality, the inventors found through a large number of experiments that the error estimate of the position to be estimated is very close to the overall motion error trend of the robotic arm, indicating that it is feasible to use the inverse distance interpolation method for the selected data points to estimate the position error.

示例性地,终端设备可以通过相机检测机械臂末端位置150mm、250mm和350mm的运动轨迹插值待估位置的误差,再与激光跟踪仪进行相同位置的误差检测对比,确定出相机对该位置的实际检测误差的情况,对于150mm的运动轨迹,平均值为0.534毫米,对于250mm的运动轨迹,平均值为0.931毫米,对于350mm的运动轨迹,平均值为1.405毫米,总平均值为0.967毫米。同时,通过距离反比法插值得到待估位置的平面运动误差后,终端设备可以将距离反比插值估计的误差值进行逐个补偿相机检测运动轨迹相对应的点的误差,发明人经过大量的实验发现,通过插值补偿后,150mm轨迹检测位置总体的误差值从0.534mm减小到0.136mm,250mm轨迹检测位置总体的误差值从0.931mm减小到0.229mm,350mm轨迹检测位置总体的误差值从1.405mm减小到0.262mm,3个检测轨迹的整体误差平均值从0.967mm降到0.209mm。Exemplarily, the terminal device can detect the error of the estimated position by interpolating the motion trajectory of the end position of the robot arm at 150mm, 250mm and 350mm through a camera, and then perform error detection comparison with the laser tracker at the same position to determine the actual detection error of the camera at that position. For the motion trajectory of 150mm, the average value is 0.534mm, for the motion trajectory of 250mm, the average value is 0.931mm, and for the motion trajectory of 350mm, the average value is 1.405mm, with a total average value of 0.967mm. At the same time, after obtaining the planar motion error of the position to be estimated by interpolating the inverse distance method, the terminal device can compensate the error of the points corresponding to the camera detection motion trajectory one by one using the error value estimated by the inverse distance interpolation. After a large number of experiments, the inventor found that after interpolation compensation, the overall error value of the 150mm trajectory detection position was reduced from 0.534mm to 0.136mm, the overall error value of the 250mm trajectory detection position was reduced from 0.931mm to 0.229mm, and the overall error value of the 350mm trajectory detection position was reduced from 1.405mm to 0.262mm. The overall error average value of the three detection trajectories was reduced from 0.967mm to 0.209mm.
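
The per-point compensation and the before/after comparison described above could be sketched as follows; all arrays are hypothetical stand-ins for the camera data, the laser-tracker reference and the interpolated error estimates.

```python
import numpy as np

def compensate_and_score(camera_xy, reference_xy, estimated_err_vec):
    """Subtract the estimated error vector from each camera-detected point and
    report the mean residual error against the reference positions."""
    camera_xy, reference_xy = np.asarray(camera_xy), np.asarray(reference_xy)
    before = np.linalg.norm(camera_xy - reference_xy, axis=1).mean()
    compensated = camera_xy - np.asarray(estimated_err_vec)
    after = np.linalg.norm(compensated - reference_xy, axis=1).mean()
    return compensated, before, after

ref = np.array([[150.0, 0.0], [0.0, 150.0], [-150.0, 0.0]])      # laser tracker (hypothetical)
cam = ref + np.array([[0.5, 0.1], [0.4, -0.2], [0.6, 0.0]])      # camera detections
est = np.array([[0.45, 0.08], [0.38, -0.15], [0.55, 0.02]])      # interpolated error estimates
_, e_before, e_after = compensate_and_score(cam, ref, est)
print(f"mean error before {e_before:.3f} mm, after {e_after:.3f} mm")
```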

在一种可能的实现方式中,终端设备还可以采用RBF神经网络来预测待估位置的误差,其中,RBF神经网络为径向基函数网络,是一种使用径向基函数作为激活函数的人工神经网络,RBF神经网络能够逼近任意的非线性函数,可以处理系统内难以解析的规律性,具有良好的泛化能力,并有很快的学习收敛速度,可以成功应用于非线性函数逼近。In a possible implementation, the terminal device can also use an RBF neural network to predict the error of the position to be estimated, where the RBF neural network is a radial basis function network, which is an artificial neural network that uses a radial basis function as an activation function. The RBF neural network can approximate any nonlinear function, can handle the regularity that is difficult to analyze within the system, has good generalization ability, and has a fast learning convergence speed, and can be successfully applied to nonlinear function approximation.

不失一般性地，RBF神经网络可以由输入层、隐藏层、输出层三层组成前向网络，其中，第一层为输入层，由输入数据组成；第二层为隐含层，是各单元的变换函数，将低维输入通过非线性高斯函数映射到一个高维空间；第三层为输出层，对输入信号作出响应，隐含层到输出是通过线性的加权求值得到输出层的输出值，其中，权值为$w_{i}$。该RBF神经网络中所使用的激活函数可以为Gauss径向基函数，其表达式可以为：Without loss of generality, the RBF neural network can be a feed-forward network composed of three layers: an input layer, a hidden layer and an output layer. The first layer is the input layer, composed of the input data; the second layer is the hidden layer, whose units apply transformation functions that map the low-dimensional input into a high-dimensional space through nonlinear Gaussian functions; the third layer is the output layer, which responds to the input signal, and the output value of the output layer is obtained from the hidden layer through a linear weighted summation, where the weights are $w_{i}$. The activation function used in the RBF neural network can be the Gauss radial basis function, whose expression can be:

$\varphi(r)=\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)$ ,

式中，$r=\left\|x_{j}-c_{i}\right\|$，$x_{j}$表示第$j$个输入数据，$c_{i}$表示第$i$个样本数据中心点，$\sigma$为隐藏层中核心函数的平均偏差；In the formula, $r=\left\|x_{j}-c_{i}\right\|$, $x_{j}$ represents the $j$-th input data, $c_{i}$ represents the $i$-th sample data center point, and $\sigma$ is the average deviation of the kernel function in the hidden layer;

故RBF神经网络训练输出函数表达式可以为:Therefore, the output function expression of RBF neural network training can be:

$y=\sum_{i=1}^{m}w_{i}\varphi\left(\left\|x-c_{i}\right\|\right)$ ；

不失一般性地,RBF神经网络的部分其他参数可以如下:训练样本数目为125个,测试样本数目为20个,径向基速度为2.0个,隐藏层神经元个数为125个,通过RBF神经网络预测得到待估位置的平面运动误差后,终端设备可以将其预测的误差值进行逐个补偿相机检测运动轨迹相对应的点的误差,发明人经过大量的实验发现,通过RBF神经网络补偿后,150mm轨迹检测位置总体的误差值从0.534mm减小到0.0768mm,250mm轨迹检测位置总体的误差值从0.931mm减小到0.147mm,350mm轨迹检测位置总体的误差值从1.405mm减小到0.221mm,3个检测轨迹的整体误差平均值从0.967mm降到0.148mm,因此,RBF神经网络在对待估位置误差预测方面的效果和距离反比插值法的补偿效果均符合预期。Without loss of generality, some other parameters of the RBF neural network can be as follows: the number of training samples is 125, the number of test samples is 20, the radial basis velocity is 2.0, and the number of hidden layer neurons is 125. After the planar motion error of the position to be estimated is predicted by the RBF neural network, the terminal device can compensate the error of the points corresponding to the camera detection motion trajectory one by one with its predicted error value. After a large number of experiments, the inventor found that after compensation by the RBF neural network, the overall error value of the 150mm trajectory detection position was reduced from 0.534mm to 0.0768mm, the overall error value of the 250mm trajectory detection position was reduced from 0.931mm to 0.147mm, and the overall error value of the 350mm trajectory detection position was reduced from 1.405mm to 0.221mm. The overall error average of the three detection trajectories was reduced from 0.967mm to 0.148mm. Therefore, the effect of the RBF neural network on the error prediction of the position to be estimated and the compensation effect of the inverse distance interpolation method are both in line with expectations.
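
As an illustrative sketch only, a self-contained Gaussian-RBF regression can place the centers at the training points and solve the hidden-to-output weights by least squares. The 125-sample count echoes the parameters above, but the NumPy implementation, the synthetic data and the spread value used in the demo are assumptions; the spread must in any case be chosen to match the scale of the inputs.

```python
import numpy as np

def gaussian_rbf(dist, sigma):
    """Gauss radial basis activation."""
    return np.exp(-dist**2 / (2.0 * sigma**2))

def fit_rbf(train_x, train_y, sigma):
    """Centers = training points; output weights w solved by least squares."""
    d = np.linalg.norm(train_x[:, None, :] - train_x[None, :, :], axis=2)
    phi = gaussian_rbf(d, sigma)
    w, *_ = np.linalg.lstsq(phi, train_y, rcond=None)
    return w

def predict_rbf(query_x, centers, w, sigma):
    d = np.linalg.norm(query_x[:, None, :] - centers[None, :, :], axis=2)
    return gaussian_rbf(d, sigma) @ w

# Synthetic planar positions -> error magnitude, 125 training samples (hypothetical).
rng = np.random.default_rng(0)
train_x = rng.uniform(-400.0, 400.0, size=(125, 2))
train_y = 0.002 * np.linalg.norm(train_x, axis=1) + rng.normal(0.0, 0.02, 125)
w = fit_rbf(train_x, train_y, sigma=100.0)          # spread matched to the mm-scale inputs
print(predict_rbf(np.array([[250.0, 0.0]]), train_x, w, sigma=100.0))
```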

在S400中,根据坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息。In S400, the second pixel coordinate information is compensated according to the coordinate error value information to generate compensated pixel coordinate information.

具体来说,在终端设备确定坐标误差值信息之后,终端设备可以根据坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息,其中,补偿像素坐标信息用于描述补偿后的第二像素坐标信息。Specifically, after the terminal device determines the coordinate error value information, the terminal device may perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information, wherein the compensated pixel coordinate information is used to describe the compensated second pixel coordinate information.

在一些可能的实现方式中,为了实现生成补偿像素坐标信息,请参阅图17,步骤S400包括但不限于以下步骤:In some possible implementations, in order to realize the generation of compensated pixel coordinate information, please refer to FIG. 17 , step S400 includes but is not limited to the following steps:

在S410中,根据第一像素坐标信息和第二像素坐标信息,确定补偿方向信息。In S410, compensation direction information is determined according to the first pixel coordinate information and the second pixel coordinate information.

具体来说,终端设备可以根据第一像素坐标信息和第二像素坐标信息,以第一像素坐标信息为起点,且以第二像素坐标信息为终点,确定出补偿方向信息,其中,补偿方向信息用于描述进行补偿的方向。Specifically, the terminal device can determine the compensation direction information based on the first pixel coordinate information and the second pixel coordinate information, with the first pixel coordinate information as the starting point and the second pixel coordinate information as the end point, wherein the compensation direction information is used to describe the direction of compensation.

在S420中,基于补偿方向信息和坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息。In S420, based on the compensation direction information and the coordinate error value information, the second pixel coordinate information is compensated to generate compensated pixel coordinate information.

具体来说，在终端设备确定补偿方向信息之后，终端设备可以基于补偿方向信息和坐标误差值信息，对第二像素坐标信息进行补偿处理，生成补偿像素坐标信息，从而实现提高机械臂定位的准确性，其中，补偿像素坐标信息用于描述往补偿方向信息补偿坐标误差值信息后的第二像素坐标信息。Specifically, after the terminal device determines the compensation direction information, the terminal device can perform compensation processing on the second pixel coordinate information based on the compensation direction information and the coordinate error value information to generate compensated pixel coordinate information, thereby improving the accuracy of robotic arm positioning, wherein the compensated pixel coordinate information is used to describe the second pixel coordinate information after it has been compensated by the coordinate error value along the compensation direction.
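
The compensation direction and its application might be sketched as below: the direction is taken as the unit vector from the first (theoretical) pixel coordinate to the second (measured) one, and the measured point is shifted by the error magnitude along that direction; the sign convention (toward or away from the theoretical point) is an assumption of this sketch.

```python
import numpy as np

def compensate_point(first_xy, second_xy, error_value):
    """Shift the measured point by error_value along the unit vector that points
    from the first (theoretical) pixel coordinate to the second (measured) one."""
    first_xy, second_xy = np.asarray(first_xy, float), np.asarray(second_xy, float)
    direction = second_xy - first_xy
    norm = np.linalg.norm(direction)
    if norm == 0.0:                          # already at the theoretical position
        return second_xy
    return second_xy + (error_value / norm) * direction

first = [320.0, 240.0]                       # hypothetical theoretical pixel coordinate
second = [323.0, 241.5]                      # hypothetical measured pixel coordinate
print(compensate_point(first, second, error_value=-1.2))  # negative value pulls back toward theory
```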

在一些可能的实现方式中,为了实现机械臂精准运作,请参阅图18,在步骤S400之后,该方法还包括但不限于以下步骤:In some possible implementations, in order to achieve precise operation of the robot arm, please refer to FIG. 18 . After step S400, the method further includes but is not limited to the following steps:

在S500中,根据多个补偿像素坐标信息,生成补偿运动轨迹信息。In S500, compensated motion trajectory information is generated according to a plurality of compensated pixel coordinate information.

具体来说,终端设备可以依次连接多个补偿像素坐标信息,生成补偿运动轨迹信息,其中,补偿运动轨迹信息用于描述多个补偿像素坐标信息所组成的运动轨迹。Specifically, the terminal device may sequentially connect multiple compensation pixel coordinate information to generate compensation motion trajectory information, wherein the compensation motion trajectory information is used to describe the motion trajectory composed of multiple compensation pixel coordinate information.

在S510中,根据补偿运动轨迹信息,控制机械臂末端运动。In S510, the movement of the end of the robot arm is controlled according to the compensation motion trajectory information.

具体来说,在终端设备生成补偿运动轨迹信息之后,终端设备可以根据补偿运动轨迹信息,控制机械臂末端按照补偿运动轨迹信息进行运动。Specifically, after the terminal device generates the compensation motion trajectory information, the terminal device can control the end of the robot arm to move according to the compensation motion trajectory information.

本申请实施例基于激光补偿的机器人视觉检测方法的实施原理为:终端设备可以先基于相机,获取末端标记物图像集信息,然后针对各个末端标记物图像信息执行该处理:获取机械臂末端理论位置的第一像素坐标信息,再基于此对末端标记物图像信息进行畸变纠正处理,确定机械臂末端的实测位置对应的第二像素坐标信息,然后结合末端位置运动误差计算函数进一步确定机械臂末端的坐标误差值信息,最后根据坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息,从而实现对全局误差情况进行补偿,提高机械臂定位的准确性。The implementation principle of the robot vision detection method based on laser compensation in the embodiment of the present application is: the terminal device can first obtain the end marker image set information based on the camera, and then perform the processing for each end marker image information: obtain the first pixel coordinate information of the theoretical position of the end of the robot arm, and then perform distortion correction processing on the end marker image information based on this, determine the second pixel coordinate information corresponding to the actual measured position of the end of the robot arm, and then further determine the coordinate error value information of the end of the robot arm in combination with the end position motion error calculation function, and finally, according to the coordinate error value information, compensate the second pixel coordinate information to generate compensated pixel coordinate information, thereby achieving compensation for the global error situation and improving the accuracy of robot arm positioning.

需要说明的是,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be noted that the size of the serial numbers of the steps in the above embodiments does not mean the order of execution. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

本申请的实施例还提供了一种基于激光补偿的机器人视觉检测系统,为便于说明,仅示出与本申请相关的部分,如图19所示,该系统190包括:The embodiment of the present application further provides a robot vision detection system based on laser compensation. For ease of description, only the parts related to the present application are shown. As shown in FIG. 19 , the system 190 includes:

末端标记物图像集信息获取模块191：用于基于预设的相机，持续获取末端标记物图像集信息，其中，末端标记物图像集信息包括多个连续的末端标记物图像信息，末端标记物图像信息的拍摄对象为指定的机械臂末端，机械臂末端安装有至少两个光源标记物；The terminal marker image set information acquisition module 191 is used to continuously acquire terminal marker image set information based on a preset camera, wherein the terminal marker image set information includes a plurality of continuous terminal marker image information, the shooting object of the terminal marker image information is a specified robotic arm end, and at least two light source markers are installed at the robotic arm end;

第一像素坐标信息获取模块192:用于针对各个末端标记物图像信息:获取机械臂末端的理论位置对应的第一像素坐标信息,并根据第一像素坐标信息和预设的畸变系数集信息,对末端标记物图像信息进行畸变纠正处理,确定机械臂末端的实测位置对应的第二像素坐标信息;The first pixel coordinate information acquisition module 192 is used to obtain the first pixel coordinate information corresponding to the theoretical position of the end of the robot arm for each end marker image information, and perform distortion correction processing on the end marker image information according to the first pixel coordinate information and the preset distortion coefficient set information, so as to determine the second pixel coordinate information corresponding to the actual measured position of the end of the robot arm;

坐标误差值信息确定模块193:用于根据第二像素坐标信息和预设的末端位置运动误差计算函数,确定机械臂末端的坐标误差值信息;Coordinate error value information determination module 193: used to determine the coordinate error value information of the end of the robot arm according to the second pixel coordinate information and a preset end position motion error calculation function;

补偿像素坐标信息生成模块194:用于根据坐标误差值信息,对第二像素坐标信息进行补偿处理,生成补偿像素坐标信息。The compensated pixel coordinate information generating module 194 is used to perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information.

需要说明的是,上述模块之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。It should be noted that the information interaction, execution process and other contents between the above-mentioned modules are based on the same concept as the method embodiment of the present application. Their specific functions and technical effects can be found in the method embodiment part and will not be repeated here.

本申请实施例还提供了一种终端设备,如图20所示,该实施例的终端设备200包括:处理器201、存储器202以及存储在存储器202中并可在处理器201上运行的计算机程序203。处理器201执行计算机程序203时实现上述机器人视觉检测方法实施例中的步骤,例如图1所示的步骤S100至S400;或者,处理器201执行计算机程序203时实现上述装置中各模块的功能,例如图19所示模块191至194的功能。The embodiment of the present application also provides a terminal device, as shown in FIG20, the terminal device 200 of the embodiment includes: a processor 201, a memory 202, and a computer program 203 stored in the memory 202 and executable on the processor 201. When the processor 201 executes the computer program 203, the steps in the above-mentioned robot vision detection method embodiment are implemented, such as steps S100 to S400 shown in FIG1; or, when the processor 201 executes the computer program 203, the functions of each module in the above-mentioned device are implemented, such as the functions of modules 191 to 194 shown in FIG19.

该终端设备200可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备,该终端设备200包括但不仅限于处理器201、存储器202。本领域技术人员可以理解,图20仅仅是终端设备200的示例,并不构成对终端设备200的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如终端设备200还可以包括输入输出设备、网络接入设备、总线等。The terminal device 200 may be a computing device such as a desktop computer, a notebook, a PDA, or a cloud server, and the terminal device 200 includes but is not limited to a processor 201 and a memory 202. Those skilled in the art will appreciate that FIG. 20 is merely an example of the terminal device 200 and does not limit the terminal device 200, and may include more or fewer components than shown in the figure, or may combine certain components, or different components, for example, the terminal device 200 may also include input and output devices, network access devices, buses, etc.

其中,处理器201可以是中央处理单元(Central Processing Unit,CPU),还可以是其它通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等;通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。Among them, the processor 201 can be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.; the general-purpose processor can be a microprocessor or the processor can also be any conventional processor, etc.

存储器202可以是终端设备200的内部存储单元,例如终端设备200的硬盘或内存,存储器202也可以是终端设备200的外部存储设备,例如终端设备200上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(FlashCard)等;进一步地,存储器202还可以既包括终端设备200的内部存储单元也包括外部存储设备,存储器202还可以存储计算机程序203以及终端设备200所需的其它程序和数据,存储器202还可以用于暂时地存储已经输出或者将要输出的数据。The memory 202 may be an internal storage unit of the terminal device 200, such as a hard disk or memory of the terminal device 200, or the memory 202 may be an external storage device of the terminal device 200, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card (FlashCard), etc. equipped on the terminal device 200; further, the memory 202 may also include both an internal storage unit and an external storage device of the terminal device 200, and the memory 202 may also store a computer program 203 and other programs and data required by the terminal device 200, and the memory 202 may also be used to temporarily store data that has been output or is to be output.

本申请的一个实施例还提供了一种计算机可读存储介质,该存储介质存储有计算机程序,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,计算机程序包括计算机程序代码,计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等;计算机可读介质可以包括:能够携带计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(Read-OnlyMemory,ROM)、随机存取存储器(Random Access Memory,RAM)、电载波信号、电信信号以及软件分发介质等。An embodiment of the present application also provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the steps of each of the above method embodiments can be implemented. Among them, the computer program includes computer program code, which can be in source code form, object code form, executable file or some intermediate form, etc.; the computer-readable medium may include: any entity or device that can carry computer program code, recording medium, USB flash drive, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM), random access memory (RAM), electric carrier signal, telecommunication signal and software distribution medium, etc.

以上均为本申请的较佳实施例,并非依此限制本申请的保护范围,故:凡依本申请的方法、原理、结构所做的等效变化,均应涵盖于本申请的保护范围之内。The above are all preferred embodiments of the present application, and the protection scope of the present application is not limited thereto. Therefore, all equivalent changes made according to the methods, principles, and structures of the present application should be included in the protection scope of the present application.

Claims (10)

1.一种基于激光补偿的机器人视觉检测方法,其特征在于,所述方法包括:1. A robot vision detection method based on laser compensation, characterized in that the method comprises: 基于预设的相机,持续获取末端标记物图像集信息,其中,所述末端标记物图像集信息包括多个连续的末端标记物图像信息,所述末端标记物图像信息的拍摄对象为指定的机械臂末端,所述机械臂末端安装有至少两个光源标记物;Based on a preset camera, continuously acquire end marker image set information, wherein the end marker image set information includes a plurality of continuous end marker image information, the shooting object of the end marker image information is a designated robotic arm end, and the robotic arm end is equipped with at least two light source markers; 针对各个所述末端标记物图像信息:获取所述机械臂末端的理论位置对应的第一像素坐标信息,并根据所述第一像素坐标信息和预设的畸变系数集信息,对所述末端标记物图像信息进行畸变纠正处理,确定所述机械臂末端的实测位置对应的第二像素坐标信息;For each of the end marker image information: obtaining first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, and performing distortion correction processing on the end marker image information according to the first pixel coordinate information and the preset distortion coefficient set information, to determine second pixel coordinate information corresponding to the measured position of the end of the robotic arm; 根据所述第二像素坐标信息和预设的末端位置运动误差计算函数,确定所述机械臂末端的坐标误差值信息;Determine the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function; 根据所述坐标误差值信息,对所述第二像素坐标信息进行补偿处理,生成补偿像素坐标信息。The second pixel coordinate information is compensated according to the coordinate error value information to generate compensated pixel coordinate information. 2.根据权利要求1所述的方法,其特征在于,在所述基于预设的相机,持续获取末端标记物图像集信息之前,所述方法还包括:2. The method according to claim 1, characterized in that before continuously acquiring the terminal marker image set information based on the preset camera, the method further comprises: 构建所述相机的成像模型;Constructing an imaging model of the camera; 基于所述相机,获取标定板图像信息;Based on the camera, obtaining calibration plate image information; 对所述标定板图像信息进行灰度化处理,生成灰度化图像信息;Performing grayscale processing on the calibration plate image information to generate grayscale image information; 基于预设的角点检测算法和所述灰度化图像信息,确定所述灰度化图像信息的多个标定板角点信息;Based on a preset corner point detection algorithm and the grayscale image information, determining a plurality of calibration plate corner point information of the grayscale image information; 针对各个所述标定板角点信息:获取所述标定板角点信息的角点实际物理坐标信息;For each of the calibration plate corner point information: obtaining the actual physical coordinate information of the corner point of the calibration plate corner point information; 根据预设的欧式距离计算函数、所述标定板角点信息和所述角点实际物理坐标信息,生成重投影误差信息;Generate reprojection error information according to a preset Euclidean distance calculation function, the calibration plate corner point information and the actual physical coordinate information of the corner point; 根据所述重投影误差信息和预设的误差阈值信息,生成标定准确性结果信息,其中,所述标定准确性结果信息为标定合格信息或标定不合格信息;Generate calibration accuracy result information according to the reprojection error information and preset error threshold information, wherein the calibration accuracy result information is calibration qualified information or calibration unqualified information; 相应地,所述基于预设的相机,持续获取末端标记物图像集信息,包括:Accordingly, the method of continuously acquiring the terminal marker image set information based on the preset camera includes: 若所述标定准确性结果信息为所述标定合格信息,则基于预设的相机,持续获取末端标记物图像集信息。If the calibration accuracy result information is the calibration qualified information, the terminal marker image set information is 
continuously acquired based on a preset camera. 3.根据权利要求2所述的方法,其特征在于,所述畸变系数集信息包括第一畸变系数信息和第二畸变系数信息;所述针对各个所述末端标记物图像信息:获取所述机械臂末端的理论位置对应的第一像素坐标信息,并根据所述第一像素坐标信息和预设的畸变系数集信息,对所述末端标记物图像信息进行畸变纠正处理,确定所述机械臂末端的实测位置对应的第二像素坐标信息,包括:3. The method according to claim 2, characterized in that the distortion coefficient set information includes first distortion coefficient information and second distortion coefficient information; for each of the end marker image information: obtaining first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, and performing distortion correction processing on the end marker image information according to the first pixel coordinate information and the preset distortion coefficient set information, and determining second pixel coordinate information corresponding to the measured position of the end of the robotic arm, comprising: 针对各个所述末端标记物图像信息:获取所述机械臂末端的理论位置对应的第一像素坐标信息,其中,所述第一像素坐标信息为:For each of the end marker image information: obtain the first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm, wherein the first pixel coordinate information is: , 式中,为所述第一像素坐标信息的横坐标,为所述末端标记物图像信息的图像中心在预设的图像坐标系中的实际横坐标,为所述末端标记物图像信息的图像中心在预设的图像坐标系中的理论横坐标,为所述相机的焦距信息,毫米,为有关于横坐标的第一像元尺寸,为所述第一像素坐标信息的纵坐标,为所述末端标记物图像信息的图像中心在预设的图像坐标系中的实际纵坐标,为所述末端标记物图像信息的图像中心在预设的图像坐标系中的理论纵坐标,为有关于纵坐标的第二像元尺寸;In the formula, is the horizontal coordinate of the first pixel coordinate information, is the actual horizontal coordinate of the image center of the terminal marker image information in the preset image coordinate system, is the theoretical horizontal coordinate of the image center of the terminal marker image information in a preset image coordinate system, , is the focal length information of the camera, mm, is the first pixel size with respect to the horizontal axis, is the ordinate of the first pixel coordinate information, is the actual ordinate of the image center of the terminal marker image information in the preset image coordinate system, is the theoretical ordinate of the image center of the terminal marker image information in a preset image coordinate system, , is the second pixel size with respect to the ordinate; 根据所述第一像素坐标信息、预设的所述第一畸变系数信息和所述第二畸变系数信息,对所述末端标记物图像信息进行畸变纠正处理,确定所述机械臂末端的实测位置对应的畸变纠正像素坐标信息,其中,所述畸变纠正像素坐标信息为:According to the first pixel coordinate information, the preset first distortion coefficient information and the second distortion coefficient information, the end marker image information is subjected to distortion correction processing to determine the distortion corrected pixel coordinate information corresponding to the measured position of the end of the robotic arm, wherein the distortion corrected pixel coordinate information is: , 式中,为所述畸变纠正像素坐标信息的横坐标,为所述第一像素坐标信息的横坐标,为所述第一畸变系数信息,所述第一畸变系数信息用于描述一阶径向畸变系数,为所述第二畸变系数信息,所述第二畸变系数信息用于描述二阶径向畸变系数,为预设的三阶径向畸变系数,为预设的一阶切向畸变系数,为预设的二阶切向畸变系数,为所述畸变纠正像素坐标信息的纵坐标,为所述第一像素坐标信息的纵坐标;In the formula, is the horizontal coordinate of the distortion-corrected pixel coordinate information, is the horizontal coordinate of the first pixel coordinate information, is the first distortion coefficient information, and the first distortion coefficient information is used to describe the first-order radial distortion coefficient. , , is the second distortion coefficient information, where the second distortion coefficient information is used to describe the second-order radial distortion coefficient. 
, is the preset third-order radial distortion coefficient, is the preset first-order tangential distortion coefficient, is the preset second-order tangential distortion coefficient, is the ordinate of the distortion-corrected pixel coordinate information, is the ordinate of the first pixel coordinate information; 针对第一标记物的各个所述畸变纠正像素坐标信息和第二标记物的各个所述畸变纠正像素坐标信息:对所述畸变纠正像素坐标信息进行最小二乘法拟合处理,生成目标理想圆的理想圆特征集信息,其中,所述第一标记物用于描述任意一个所述光源标记物,所述第二标记物用于描述任意另一个所述光源标记物,所述理想圆特征集信息包括所述目标理想圆的圆心像素坐标信息和半径信息For each of the distortion-corrected pixel coordinate information of the first marker and each of the distortion-corrected pixel coordinate information of the second marker: performing least squares fitting processing on the distortion-corrected pixel coordinate information to generate ideal circle feature set information of a target ideal circle, wherein the first marker is used to describe any one of the light source markers, the second marker is used to describe any other of the light source markers, and the ideal circle feature set information includes the center pixel coordinate information and radius information of the target ideal circle 根据多个所述圆心像素坐标信息,确定所述第二像素坐标信息。The second pixel coordinate information is determined according to the plurality of circle center pixel coordinate information. 4.根据权利要求3所述的方法,其特征在于,在所述根据多个所述圆心像素坐标信息,确定所述第二像素坐标信息之后,所述方法还包括:4. The method according to claim 3, characterized in that after determining the second pixel coordinate information according to the plurality of circle center pixel coordinate information, the method further comprises: 基于预设的重复检测次数信息,获取所述机械臂末端在同一位置的多个重复测量样本数据信息;Based on the preset number of repeated detection times, obtaining multiple repeated measurement sample data information of the end of the robotic arm at the same position; 根据多个所述重复测量样本数据信息,生成重复测量样本数据平均值信息;Generate repeated measurement sample data average value information according to the plurality of repeated measurement sample data information; 根据所述重复测量样本数据平均值信息,确定所述机械臂末端在同一位置的标准偏差值信息。The standard deviation value information of the end of the robot arm at the same position is determined according to the average value information of the repeated measurement sample data. 5.根据权利要求4所述的方法,其特征在于,在所述根据所述第二像素坐标信息和预设的末端位置运动误差计算函数,确定所述机械臂末端的坐标误差值信息之前,所述方法还包括:5. 
The method according to claim 4, characterized in that before determining the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function, the method further comprises: 基于根据所述第二像素坐标信息和预设的距离反比权重插值函数,生成空间插值距离信息;Generate spatial interpolation distance information based on the second pixel coordinate information and a preset distance inverse weight interpolation function; 根据所述空间插值距离信息和所述第二像素坐标信息,等间距阵列生成多个预测像素坐标信息,其中,所述预测像素坐标信息用于描述等间距阵列生成的所述第二像素坐标信息,多个所述预测像素坐标信息之间的距离为所述空间插值距离信息,所述预测像素坐标信息和所述第二像素坐标信息之间的距离为所述空间插值距离信息;According to the spatial interpolation distance information and the second pixel coordinate information, a plurality of predicted pixel coordinate information is generated in an equally spaced array, wherein the predicted pixel coordinate information is used to describe the second pixel coordinate information generated in an equally spaced array, the distance between the plurality of predicted pixel coordinate information is the spatial interpolation distance information, and the distance between the predicted pixel coordinate information and the second pixel coordinate information is the spatial interpolation distance information; 相应地,所述根据所述第二像素坐标信息和预设的末端位置运动误差计算函数,确定所述机械臂末端的坐标误差值信息,包括:Correspondingly, determining the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function includes: 根据所述第二像素坐标信息的横坐标和预设的末端位置运动误差计算函数,确定所述机械臂末端的坐标误差值信息,其中,所述末端位置运动误差计算函数为:The coordinate error value information of the end of the robotic arm is determined according to the horizontal coordinate of the second pixel coordinate information and a preset end position motion error calculation function, wherein the end position motion error calculation function is: , 式中,为所述坐标误差值信息,为所述第二像素坐标信息的总数量,为所述第二像素坐标信息的次序,为第个所述第二像素坐标信息对应的横坐标与前一个所述第二像素坐标信息对应的横坐标之间的差值。In the formula, is the coordinate error value information, is the total amount of the second pixel coordinate information, is the order of the second pixel coordinate information, For the The difference between the horizontal coordinate corresponding to the second pixel coordinate information and the horizontal coordinate corresponding to the previous second pixel coordinate information. 6.根据权利要求4所述的方法,其特征在于,所述根据所述坐标误差值信息,对所述第二像素坐标信息进行补偿处理,生成补偿像素坐标信息,包括:6. The method according to claim 4, characterized in that the step of compensating the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information comprises: 根据所述第一像素坐标信息和所述第二像素坐标信息,确定补偿方向信息;determining compensation direction information according to the first pixel coordinate information and the second pixel coordinate information; 基于所述补偿方向信息和所述坐标误差值信息,对所述第二像素坐标信息进行补偿处理,生成补偿像素坐标信息,其中,所述补偿像素坐标信息用于描述往所述补偿方向信息补偿所述坐标误差值信息后的所述第二像素坐标信息。Based on the compensation direction information and the coordinate error value information, the second pixel coordinate information is compensated to generate compensated pixel coordinate information, wherein the compensated pixel coordinate information is used to describe the second pixel coordinate information after the coordinate error value information is compensated toward the compensation direction information. 7.根据权利要求6所述的方法,其特征在于,在所述根据所述坐标误差值信息,对所述第一像素坐标信息进行补偿处理,生成补偿像素坐标信息之后,所述方法还包括:7. 
The method according to claim 6, characterized in that after compensating the first pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information, the method further comprises: 根据多个所述补偿像素坐标信息,生成补偿运动轨迹信息;Generate compensation motion trajectory information according to the plurality of compensation pixel coordinate information; 根据所述补偿运动轨迹信息,控制所述机械臂末端运动。The movement of the end of the robotic arm is controlled according to the compensated motion trajectory information. 8.一种基于激光补偿的机器人视觉检测系统,其特征在于,所述系统包括:8. A robot vision detection system based on laser compensation, characterized in that the system comprises: 末端标记物图像集信息获取模块:用于基于预设的相机,持续获取末端标记物图像集信息,其中,所述末端标记物图像集信息包括多个连续的末端标记物图像信息,所述末端标记物图像信息的拍摄对象为指定的机械臂末端,所述机械臂末端安装有至少两个光源标记物;The terminal marker image set information acquisition module is used to continuously acquire the terminal marker image set information based on a preset camera, wherein the terminal marker image set information includes a plurality of continuous terminal marker image information, and the shooting object of the terminal marker image information is a designated robotic arm terminal, and at least two light source markers are installed at the robotic arm terminal; 第一像素坐标信息获取模块:用于针对各个所述末端标记物图像信息:获取所述机械臂末端的理论位置对应的第一像素坐标信息,并根据所述第一像素坐标信息和预设的畸变系数集信息,对所述末端标记物图像信息进行畸变纠正处理,确定所述机械臂末端的实测位置对应的第二像素坐标信息;The first pixel coordinate information acquisition module is used to obtain the first pixel coordinate information corresponding to the theoretical position of the end of the robotic arm for each end marker image information, and perform distortion correction processing on the end marker image information according to the first pixel coordinate information and the preset distortion coefficient set information, so as to determine the second pixel coordinate information corresponding to the measured position of the end of the robotic arm; 坐标误差值信息确定模块:用于根据所述第二像素坐标信息和预设的末端位置运动误差计算函数,确定所述机械臂末端的坐标误差值信息;A coordinate error value information determination module: used to determine the coordinate error value information of the end of the robotic arm according to the second pixel coordinate information and a preset end position motion error calculation function; 补偿像素坐标信息生成模块:用于根据所述坐标误差值信息,对所述第二像素坐标信息进行补偿处理,生成补偿像素坐标信息。Compensated pixel coordinate information generating module: used to perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information. 9.一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1至7任一项所述方法的步骤。9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program. 10.一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至7任一项所述方法的步骤。10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202411001958.2A 2024-07-25 2024-07-25 Robot vision detection method, system, terminal and medium based on laser compensation Pending CN118544360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411001958.2A CN118544360A (en) 2024-07-25 2024-07-25 Robot vision detection method, system, terminal and medium based on laser compensation

Publications (1)

Publication Number Publication Date
CN118544360A true CN118544360A (en) 2024-08-27

Family

ID=92453941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411001958.2A Pending CN118544360A (en) 2024-07-25 2024-07-25 Robot vision detection method, system, terminal and medium based on laser compensation

Country Status (1)

Country Link
CN (1) CN118544360A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208098A (en) * 2010-03-29 2011-10-05 佳能株式会社 Image processing apparatus and method of controlling the same
CN110276806A (en) * 2019-05-27 2019-09-24 江苏大学 An online hand-eye calibration and grasping pose calculation method for a four-degree-of-freedom parallel robot stereo vision hand-eye system
US20210241491A1 (en) * 2020-02-04 2021-08-05 Mujin, Inc. Method and system for performing automatic camera calibration
CN113524194A (en) * 2021-04-28 2021-10-22 重庆理工大学 Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN116476046A (en) * 2023-03-27 2023-07-25 佛山科学技术学院 Mechanical arm calibration and control device and method based on particle swarm optimization
CN116619350A (en) * 2022-02-14 2023-08-22 上海理工大学 Robot error calibration method based on binocular vision measurement
CN117557657A (en) * 2023-12-15 2024-02-13 武汉理工大学 Binocular fisheye camera calibration method and system based on ChArUco calibration plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination