WO2021217976A1 - A Robotic Arm Control Method and Device Based on Monocular Vision Positioning - Google Patents

A Robotic Arm Control Method and Device Based on Monocular Vision Positioning

Info

Publication number
WO2021217976A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
target point
camera
world coordinate
relative
Prior art date
Application number
PCT/CN2020/111190
Other languages
English (en)
French (fr)
Inventor
郜开开
周宸
周宝
龚连银
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Priority to SG11202113181UA
Publication of WO2021217976A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J18/00 Arms
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Definitions

  • This application relates to the technical field of machine vision, and in particular to a method and device for controlling a robotic arm based on monocular vision positioning.
  • Robotic arm motion control based on machine vision obtains the target's position through machine vision and then moves the end of the robotic arm to the target object to perform grasping and other related operations.
  • The machine vision positioning method currently in common use is binocular: two monocular cameras separated by a fixed baseline collect images, and image processing finds the matching points between the two cameras' images.
  • The pixel distance between matching points, that is, the disparity, yields the distance to the target object; combining this distance with the focal length then yields the coordinate information in the X and Y directions.
  • This method uses more cameras, which makes the positioning system complicated and difficult to operate and reduces positioning efficiency.
  • the present application provides a robotic arm control method and device based on monocular vision positioning.
  • The main purpose is to solve the problem that the binocular vision positioning method in the prior art uses more cameras, which makes the positioning system complicated and difficult to operate and reduces positioning efficiency.
  • A robotic arm control system applied to monocular vision positioning includes a robotic arm and a monocular camera installed at the end of the robotic arm.
  • A robotic arm control device based on monocular vision positioning includes:
  • a coordinate system construction module for establishing a pixel coordinate system, a camera coordinate system, and a world coordinate system
  • a first acquisition module, used for obtaining the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera internal parameter matrix, the camera external parameter matrix, and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
  • a sight vector determination module, used to obtain the line-of-sight vector of the target point relative to the world coordinate system;
  • a line-of-sight equation determining module, used to obtain the line-of-sight equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
  • the second acquisition module is used to obtain the plane equation of the plane where the target point is located in the world coordinate system
  • a world coordinate determination module, used to obtain the world coordinates of the target point according to the line-of-sight equation and the plane equation.
  • a storage medium is provided, and at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the following operations:
  • a computer device including: a processor, a memory, a communication interface, and a communication bus.
  • The processor, the memory, and the communication interface communicate with each other through the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the following steps:
  • The embodiment of the present application does not require two monocular cameras, simplifies the positioning system, is simple to operate, does not involve the matching of image feature points or the calculation of the optimal distance, simplifies the processing algorithm, and reduces the requirements on hardware equipment.
  • Fig. 1 shows a flow chart of a method for controlling a robotic arm based on monocular visual positioning according to an embodiment of the present application
  • Figure 2 shows a flow chart of another robotic arm control method based on monocular visual positioning provided by an embodiment of the present application
  • Figure 3 shows an application scenario diagram of an embodiment of the present application
  • FIG. 4 shows a composition block diagram of a robotic arm control device based on monocular visual positioning according to an embodiment of the present application
  • FIG. 5 shows a composition block diagram of another robotic arm control device based on monocular visual positioning provided by an embodiment of the present application
  • Fig. 6 shows a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • The technical solution of this application can be applied to the fields of artificial intelligence and/or smart city technology, and involves computer vision technology.
  • The robotic arm control solution based on machine vision positioning in this application can help simplify the positioning system, improve positioning efficiency, and contribute to smart living.
  • the embodiment of the application provides a robotic arm control method based on monocular vision positioning, which is applied to a robotic arm control system for monocular vision positioning.
  • The system includes a robotic arm and a monocular camera installed at the end of the robotic arm; as shown in Fig. 1, the method includes:
  • Step 101 Establish a pixel coordinate system, a camera coordinate system at the end of the robotic arm, and a world coordinate system.
  • the world coordinate system can be constructed according to the actual position of the target object.
  • The camera coordinate system is established in space by taking the optical center of the monocular camera, that is, the center of its optical axis, as the origin.
  • The Z_C axis coincides with the camera's optical axis, is perpendicular to the imaging plane, and takes the shooting direction as its positive direction, thereby establishing the camera coordinate system O_C-X_C Y_C Z_C.
  • The pixel coordinate system is established on the image plane captured by the monocular camera: the top-left corner of the image plane is taken as the origin, and the horizontal and vertical directions as the u-axis and v-axis, respectively.
  • The pixel coordinate system uses the number of pixels as its unit.
  • Step 102 Obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera internal parameter matrix, the camera external parameter matrix, and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
  • the target object is the object operated by the robotic arm, and the target point is a specific point on the target object, generally the center point of the target object.
  • the camera internal parameter matrix and the camera external parameter matrix can be obtained by camera calibration.
  • Step 103 According to the pixel coordinates of the target point, the internal parameter matrix of the camera, the external parameter matrix of the camera, and the homogeneous transformation matrix of the end of the manipulator relative to the world coordinate system, the sight vector of the target point relative to the world coordinate system is obtained.
  • Step 104 According to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system, obtain the line-of-sight equation.
  • Step 105 Obtain the plane equation of the plane where the target point is located in the world coordinate system.
  • Step 106 Obtain the world coordinates of the target point according to the line of sight equation and the plane equation.
  • Step 107 According to the world coordinates of the target point, control the robotic arm to operate the target object according to a preset strategy.
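The seven steps above can be sketched end to end in a few lines. The intrinsic matrix K, the hand-eye transform T_e_c, the end-effector pose T_w_e, the pixel coordinates, and the table plane z = 0 below are all illustrative assumptions, not values from this application:

```python
import numpy as np

# Illustrative values only -- none of these come from the application itself.
K = np.array([[800.0,   0.0, 320.0],   # intrinsics: f_x = f_y = 800, c_x = 320, c_y = 240
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_e_c = np.eye(4)                      # assumed hand-eye transform (camera vs. arm end)
T_w_e = np.array([[1.0,  0.0,  0.0, 0.1],   # assumed end pose from arm kinematics:
                  [0.0, -1.0,  0.0, 0.2],   # camera 0.5 m above the table,
                  [0.0,  0.0, -1.0, 0.5],   # looking straight down
                  [0.0,  0.0,  0.0, 1.0]])

def locate_target(u, v, plane=(0.0, 0.0, 1.0, 0.0)):
    """Steps 102-106: pixel (u, v) -> world coordinates on plane ax + by + cz + d = 0."""
    # Step 103: line-of-sight direction in the camera frame
    l_c = np.array([(u - K[0, 2]) / K[0, 0], (v - K[1, 2]) / K[1, 1], 1.0])
    # Step 104: camera pose in the world, then ray origin and direction in the world
    T_w_c = T_w_e @ T_e_c
    o, a = T_w_c[:3, 3], T_w_c[:3, :3] @ l_c
    # Steps 105-106: intersect the parametric line o + t*a with the plane
    n, d = np.array(plane[:3]), plane[3]
    t = -(n @ o + d) / (n @ a)
    return o + t * a

# Step 107 would hand these world coordinates to the arm's motion planner.
target_world = locate_target(320.0, 240.0)
```

With the assumed downward-looking pose, a target imaged at the principal point lands directly below the camera on the table plane.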
  • The embodiment of the application provides a robotic arm control method based on monocular vision positioning: first obtain the pixel coordinates of the target point of the target object in the pixel coordinate system; then, using the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, obtain the line-of-sight equation; finally, combine the line-of-sight equation with the plane equation to obtain the world coordinates of the target point.
  • The embodiment of the present application does not require two monocular cameras, simplifies the positioning system, is simple to operate, does not involve the matching of image feature points or the calculation of the optimal distance, simplifies the processing algorithm, and reduces the requirements on hardware equipment.
  • the embodiment of this application provides another robotic arm control method based on monocular vision positioning, which is applied to a robotic arm control system for monocular vision positioning.
  • The system includes a robotic arm and a monocular camera installed at the end of the robotic arm; as shown in Fig. 2, the method includes:
  • Step 201 Establish a pixel coordinate system, a robot arm end camera coordinate system, and a world coordinate system.
  • the world coordinate system can be constructed according to the actual position of the target object.
  • the target object is a teacup, which is placed on the worktable.
  • The plane formed by the X-axis and Y-axis of the world coordinate system is parallel to the upper surface of the worktable, and the Z-axis of the world coordinate system is perpendicular to that surface, giving the coordinate system O_b-X_b Y_b Z_b.
  • The camera coordinate system is established in space by taking the optical center of the monocular camera, that is, the center of its optical axis, as the origin.
  • The Z_C axis coincides with the camera's optical axis, is perpendicular to the imaging plane, and takes the shooting direction as its positive direction, thereby establishing the camera coordinate system O_C-X_C Y_C Z_C.
  • The pixel coordinate system is established on the image plane captured by the monocular camera: the top-left corner of the image plane is taken as the origin, and the horizontal and vertical directions as the u-axis and v-axis, respectively.
  • The pixel coordinate system uses the number of pixels as its unit.
  • Step 202 Obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera internal parameter matrix, the camera external parameter matrix, and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
  • the target object is the object operated by the robotic arm
  • The target point is a specific point on the target object, generally its center point; taking Fig. 3 as an example, the target object is a teacup, and the center point of the teacup's mouth is determined as the target point.
  • The camera internal parameter matrix and the camera external parameter matrix can be obtained by collecting images of multiple calibration boards with the monocular camera and calibrating the monocular camera on that image set.
  • The specific calibration process is: focus the monocular camera on a checkerboard calibration board of known size, photograph the board from different angles, and extract the pixel positions of the board's corner points in each image; then, from these pixel positions and the board's parameters, calibrate the camera by Zhang's calibration method to obtain the camera internal parameter matrix.
  • Step 203 According to the pixel coordinates of the target point and the camera internal parameter matrix, obtain the three-dimensional coordinate expression of the target point in the camera coordinate system:
    s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix}
    where (u, v) are the pixel coordinates of the target point, s is the scaling coefficient of the homogeneous transformation, the 3×3 matrix is the camera internal parameter matrix, f_x is the focal length coefficient of the x-axis of the pixel plane, f_y is the focal length coefficient of the y-axis of the pixel plane, c_x is the offset of the x-axis of the pixel plane, c_y is the offset of the y-axis of the pixel plane, and (X_C, Y_C, Z_C) are the three-dimensional coordinates of the target point in the camera coordinate system.
  • Step 204 According to the three-dimensional coordinate expression of the target point in the camera coordinate system, obtain the line-of-sight vector of the target point relative to the camera coordinate system:
    l_C = \begin{bmatrix} l_x \\ l_y \\ l_z \end{bmatrix} = \begin{bmatrix} (u - c_x)/f_x \\ (v - c_y)/f_y \\ 1 \end{bmatrix}
    where l_x, l_y and l_z are the projection values of the line-of-sight vector on the x-, y- and z-axes of the camera coordinate system, f_x and f_y are the focal length coefficients of the x- and y-axes of the pixel plane, c_x and c_y are the offsets of the pixel plane, and (u, v) are the pixel coordinates of the target point.
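Steps 203 and 204 amount to inverting the pinhole model for a ray direction. A minimal sketch; the intrinsic values passed in at the bottom are placeholders, not values from this application:

```python
import numpy as np

def sight_vector_camera(u, v, fx, fy, cx, cy):
    """Invert s*[u, v, 1]^T = K*[Xc, Yc, Zc]^T with Zc fixed to 1:
    the line-of-sight direction through pixel (u, v) in the camera frame."""
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

# Placeholder intrinsics: a target 100 px to the right of the principal point
l = sight_vector_camera(420.0, 240.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

The returned vector is a direction, not a point: any positive multiple of it passes through the same pixel, which is why a plane constraint is needed later to fix the depth.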
  • Step 205 According to the camera external parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, obtain the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system:
    {}^{w}T_{c} = {}^{w}T_{e} \, {}^{e}T_{c}
    where {}^{e}T_{c}, the camera external parameter matrix, is the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm.
  • The homogeneous transformation matrix {}^{w}T_{e} of the end of the robotic arm relative to the world coordinate system can be solved from the kinematics of the robotic arm, which computes the pose of the end of the robotic arm relative to the world coordinate system from the known joint angles.
  • The kinematic model of the manipulator adopts the DH four-parameter method, a matrix method for establishing relative poses: homogeneous transformations describe the spatial geometric relationship of each link relative to the fixed reference coordinate system, a 4×4 homogeneous transformation matrix describes the spatial relationship between two adjacent links, and from the product of these matrices the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system is deduced.
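Step 205 can be sketched as follows. The DH parameters of the two-link planar arm and the identity hand-eye transform are assumed purely for illustration:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH link transform built from the four DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Assumed two-link planar arm: T_w_e is the product of the per-link transforms
dh_params = [(np.pi / 2, 0.0, 0.3, 0.0),   # joint 1: (theta, d, a, alpha)
             (0.0,       0.0, 0.2, 0.0)]   # joint 2
T_w_e = np.eye(4)
for p in dh_params:
    T_w_e = T_w_e @ dh_transform(*p)

# Hand-eye extrinsic (camera relative to the arm end) -- identity as a placeholder
T_e_c = np.eye(4)
T_w_c = T_w_e @ T_e_c   # camera coordinate system relative to the world coordinate system
```

With joint 1 at 90 degrees, the 0.3 m and 0.2 m links stack along the world y-axis, so the end (and the camera, under the identity hand-eye transform) sits at roughly (0, 0.5, 0).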
  • Step 206 According to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, obtain the line-of-sight vector of the target point relative to the world coordinate system:
    \begin{bmatrix} a_x \\ a_y \\ a_z \\ 0 \end{bmatrix} = {}^{w}T_{c} \begin{bmatrix} l_x \\ l_y \\ l_z \\ 0 \end{bmatrix}
    where (a_x, a_y, a_z) is the line-of-sight vector of the target point relative to the world coordinate system, with a_x, a_y and a_z its projection values on the x-, y- and z-axes of the world coordinate system; {}^{w}T_{c} is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and l_x, l_y and l_z are the projection values of the line-of-sight vector on the x-, y- and z-axes of the camera coordinate system.
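As a sketch of step 206: a line-of-sight direction is a homogeneous vector whose last component is 0, so the translation column of the transform drops out and only the rotation block acts on it. The camera pose below is an assumed example, not a value from this application:

```python
import numpy as np

# Assumed camera pose: rotated 180 degrees about x (looking down), origin at (0, 0, 1)
T_w_c = np.array([[1.0,  0.0,  0.0, 0.0],
                  [0.0, -1.0,  0.0, 0.0],
                  [0.0,  0.0, -1.0, 1.0],
                  [0.0,  0.0,  0.0, 1.0]])

l_c = np.array([0.2, 0.1, 1.0])   # line-of-sight vector in the camera frame
# Append 0 to mark a direction: the translation of T_w_c has no effect on it.
a_w = (T_w_c @ np.append(l_c, 0.0))[:3]
```

Equivalently one could write `T_w_c[:3, :3] @ l_c`; the homogeneous form simply makes the point-versus-direction distinction explicit.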
  • Step 207 Obtain the coordinates of the camera origin in the world coordinate system from the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system: (x_c, y_c, z_c) is the translation part of {}^{w}T_{c}, where x_c, y_c and z_c are the projection values of the camera origin on the x-, y- and z-axes of the world coordinate system.
  • Step 208 Obtain the line-of-sight equation according to the coordinates of the camera origin and the line-of-sight vector of the target point relative to the world coordinate system.
  • The line-of-sight equation is
    \frac{x - x_c}{a_x} = \frac{y - y_c}{a_y} = \frac{z - z_c}{a_z} = t
    where x, y and z are the coordinates of points on the line of sight, a_x, a_y and a_z are the projection values of the target point's line-of-sight vector on the x-, y- and z-axes of the world coordinate system, t is the parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
  • Step 209 Obtain the plane equation of the plane where the target point is located in the world coordinate system.
  • Step 210 Obtain the world coordinates of the target point according to the line of sight equation and the plane equation.
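Steps 208 to 210 reduce to intersecting the parametric line x = x_c + a_x t, y = y_c + a_y t, z = z_c + a_z t with the plane. A minimal sketch, with an assumed camera origin, ray direction, and table plane:

```python
import numpy as np

def intersect(origin, direction, plane):
    """Solve o + t*a against the plane n.p + d = 0 for the target's world coordinates."""
    n, d = np.asarray(plane[:3], dtype=float), float(plane[3])
    denom = n @ np.asarray(direction, dtype=float)
    if abs(denom) < 1e-12:
        # The line of sight never meets the plane (or lies within it).
        raise ValueError("line of sight is parallel to the plane")
    t = -(n @ np.asarray(origin, dtype=float) + d) / denom
    return np.asarray(origin, dtype=float) + t * np.asarray(direction, dtype=float)

# Assumed: camera at (0, 0, 1) looking along (0.2, -0.1, -1); worktable plane z = 0
P = intersect([0.0, 0.0, 1.0], [0.2, -0.1, -1.0], (0.0, 0.0, 1.0, 0.0))
```

The parallel-ray guard matters in practice: if the optical axis grazes the worktable, the denominator vanishes and no finite world coordinate exists.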
  • Step 211 According to the world coordinates of the target point, control the robotic arm to operate the object according to a preset strategy.
  • The preset strategy can be set by the staff according to actual needs; taking the teacup as an example, the preset strategy can be grabbing the teacup, filling the teacup with water, and so on.
  • The embodiment of the application provides a robotic arm control method based on monocular vision positioning: first obtain the pixel coordinates of the target point of the target object in the pixel coordinate system; then, using the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, obtain the line-of-sight equation; finally, combine the line-of-sight equation with the plane equation to obtain the world coordinates of the target point.
  • The embodiment of the present application does not require two monocular cameras, simplifies the positioning system, is simple to operate, does not involve the matching of image feature points or the calculation of the optimal distance, simplifies the processing algorithm, and reduces the requirements on hardware equipment.
  • an embodiment of the present application provides a robotic arm control device based on monocular vision positioning.
  • the device includes:
  • the coordinate system construction module 401 is used to establish a pixel coordinate system, a camera coordinate system and a world coordinate system;
  • the first obtaining module 402 is configured to obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera internal parameter matrix, the camera external parameter matrix, and the homogeneous transformation matrix of the end of the robot relative to the world coordinate system;
  • the line of sight vector determination module 403 is used to obtain the line of sight of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the internal parameter matrix of the camera, the external parameter matrix of the camera, and the homogeneous transformation matrix of the end of the robot relative to the world coordinate system vector;
  • the line-of-sight equation determining module 404 is used to obtain the line-of-sight equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
  • the second obtaining module 405 is configured to obtain the plane equation of the plane where the target point is located in the world coordinate system;
  • the world coordinate determination module 406 is used to obtain the world coordinates of the target point according to the line of sight equation and the plane equation;
  • the control module 407 is used to control the robot arm to operate the object according to a preset strategy according to the world coordinates of the target point.
  • the device includes:
  • the coordinate system construction module 501 is used to establish a pixel coordinate system, a camera coordinate system and a world coordinate system;
  • the first obtaining module 502 is configured to obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera internal parameter matrix, and the camera external parameter matrix;
  • the sight vector determination module 503 is used to obtain the sight line of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the internal parameter matrix of the camera, the external parameter matrix of the camera, and the homogeneous transformation matrix of the end of the robot relative to the world coordinate system vector;
  • the line-of-sight equation determining module 504 is used to obtain the line-of-sight equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
  • the second obtaining module 505 is used to obtain the plane equation of the plane where the target point is located in the world coordinate system;
  • the world coordinate determination module 506 is used to obtain the world coordinates of the target point according to the line of sight equation and the plane equation;
  • the control module 507 is used to control the robot arm to operate the object according to a preset strategy according to the world coordinates of the target point.
  • the sight vector determining module 503 includes:
  • the line-of-sight vector determining unit 5031 is used to obtain the line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the internal parameter matrix of the camera.
  • the direction of the line-of-sight vector points from the origin of the camera coordinate system to the center of the target object.
  • the line-of-sight vector conversion unit 5032 is used to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the external camera parameter matrix, the homogeneous transformation matrix of the end of the robot arm relative to the world coordinate system, and the line-of-sight vector of the target point relative to the camera coordinate system .
  • the sight vector determining unit 5031 includes:
  • The three-dimensional coordinate expression determination subunit 50311 is used to obtain the three-dimensional coordinate expression of the target point in the camera coordinate system according to the pixel coordinates of the target point and the camera internal parameter matrix:
    s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix}
    where (u, v) are the pixel coordinates of the target point, s is the scaling coefficient of the homogeneous transformation, the 3×3 matrix is the camera internal parameter matrix, f_x and f_y are the focal length coefficients of the x- and y-axes of the pixel plane, c_x and c_y are the offsets of the x- and y-axes of the pixel plane, and (X_C, Y_C, Z_C) are the three-dimensional coordinates of the target point in the camera coordinate system.
  • the line-of-sight vector calculation subunit 50312 is used to obtain the line-of-sight vector of the target point relative to the camera coordinate system according to the three-dimensional coordinate expression of the target point in the camera coordinate system.
  • The line-of-sight vector is
    l_C = \begin{bmatrix} l_x \\ l_y \\ l_z \end{bmatrix} = \begin{bmatrix} (u - c_x)/f_x \\ (v - c_y)/f_y \\ 1 \end{bmatrix}
    where l_x, l_y and l_z are the projection values of the line-of-sight vector on the x-, y- and z-axes of the camera coordinate system, f_x and f_y are the focal length coefficients of the x- and y-axes of the pixel plane, c_x and c_y are the offsets of the pixel plane, and (u, v) are the pixel coordinates of the target point.
  • the sight vector conversion unit 5032 includes:
  • The homogeneous transformation matrix sub-determining unit 50321 is used to obtain the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system according to the camera external parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system: {}^{w}T_{c} = {}^{w}T_{e} \, {}^{e}T_{c}.
  • The line-of-sight vector conversion subunit 50322 is used to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system:
    \begin{bmatrix} a_x \\ a_y \\ a_z \\ 0 \end{bmatrix} = {}^{w}T_{c} \begin{bmatrix} l_x \\ l_y \\ l_z \\ 0 \end{bmatrix}
    where (a_x, a_y, a_z) is the line-of-sight vector of the target point relative to the world coordinate system, with a_x, a_y and a_z its projection values on the x-, y- and z-axes of the world coordinate system; {}^{w}T_{c} is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and l_x, l_y and l_z are the projection values of the line-of-sight vector on the x-, y- and z-axes of the camera coordinate system.
  • the line-of-sight equation determining module 504 includes:
  • The camera origin coordinate determination unit 5041 is configured to obtain the coordinates of the camera origin in the world coordinate system from the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system: (x_c, y_c, z_c) is the translation part of {}^{w}T_{c}, where x_c, y_c and z_c are the projection values of the camera origin on the x-, y- and z-axes of the world coordinate system.
  • the line-of-sight equation determining unit 5042 is used to obtain the line-of-sight equation according to the coordinates of the camera origin and the line-of-sight vector of the target point relative to the world coordinate system.
  • The line-of-sight equation is
    \frac{x - x_c}{a_x} = \frac{y - y_c}{a_y} = \frac{z - z_c}{a_z} = t
    where x, y and z are the coordinates of points on the line of sight, a_x, a_y and a_z are the projection values of the target point's line-of-sight vector on the x-, y- and z-axes of the world coordinate system, t is the parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
  • A storage medium is provided that stores at least one executable instruction; the computer-executable instruction can execute the robotic arm control method based on monocular visual positioning in any of the foregoing method embodiments.
  • The storage medium involved in this application may be a computer-readable storage medium, which may be non-volatile or volatile.
  • FIG. 6 shows a schematic structural diagram of a computer device provided according to an embodiment of the present application, and the specific embodiment of the present application does not limit the specific implementation of the computer device.
  • the computer device may include a processor 602, a communications interface 604, a memory 606, and a communications bus 608.
  • the processor 602, the communication interface 604, and the memory 606 communicate with each other through the communication bus 608.
  • the communication interface 604 is used to communicate with network elements of other devices, such as clients or other servers.
  • the processor 602 is configured to execute the program 610, and specifically can execute the relevant steps in the above-mentioned embodiment of the robot arm control method based on monocular visual positioning.
  • the program 610 may include program code, and the program code includes computer operation instructions.
  • the processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • the one or more processors included in the computer device may be the same type of processor, such as one or more CPUs, or different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 606 is used to store the program 610.
  • the memory 606 may include a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), for example, at least one disk memory.
  • the program 610 may specifically be used to cause the processor 602 to perform the following operations: establish a pixel coordinate system, a camera coordinate system at the end of the robotic arm, and a world coordinate system; obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix, and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
  • the target object is the object operated on by the robotic arm, and the target point is a specific point on the target object, generally the center point of the target object.
  • the camera intrinsic parameter matrix and the camera extrinsic parameter matrix can be obtained by camera calibration.
  • according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the line-of-sight vector of the target point relative to the camera coordinate system is obtained.
  • the direction of the line-of-sight vector points from the origin of the camera coordinate system to the center of the target object.
  • according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and the line-of-sight vector of the target point relative to the camera coordinate system, the line-of-sight vector of the target point relative to the world coordinate system is obtained.
  • according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system, the line-of-sight line equation is obtained.
  • in the world coordinate system, the plane equation of the plane in which the target point lies is obtained; according to the line-of-sight line equation and the plane equation, the world coordinates of the target point are obtained.
  • according to the world coordinates of the target point, the robotic arm is controlled to operate on the object according to a preset strategy.
  • the modules or steps of this application can be implemented by a general-purpose computing device, and they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices.
  • they can be implemented with program code executable by the computing device, so that they can be stored in a storage device for execution by the computing device, and in some cases the steps can be executed in an order different from that described here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

This application provides a robotic arm control method and device based on monocular visual positioning. First, the pixel coordinates of the target point of the target object in the pixel coordinate system are obtained; then a conversion is performed using the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, yielding the line-of-sight line equation; finally, in the world coordinate system, the plane equation of the plane in which the target point lies is obtained, and the world coordinates of the target point are derived from the line-of-sight line equation and the plane equation. Therefore, compared with the prior art, the embodiments of this application need no pair of monocular cameras, which simplifies the positioning system and makes it simple to operate; they involve no matching of image feature points and no estimation of an optimal distance, which simplifies the processing algorithm and lowers the hardware requirements.

Description

A robotic arm control method and device based on monocular visual positioning
This application claims priority to the Chinese patent application with application number 202010349776.X, entitled "A robotic arm control method and device based on monocular visual positioning", filed with the China Patent Office on April 28, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the technical field of machine vision, and in particular to a robotic arm control method and device based on monocular visual positioning.
Background
With the advance of intelligent systems and the spread of robots, machine-vision-based control of robotic arm motion has attracted wide attention. In such control, the target is located by machine vision, and the end of the robotic arm is then moved to the target object to perform grasping and other operations.
The inventors realized that the machine vision positioning method in common use is binocular positioning: two monocular cameras separated by a fixed distance capture images, image processing obtains the distance (i.e. the disparity) between matched point pairs in the two camera images, the correspondence between disparity and depth then yields the distance to the target object, and combining distance with focal length further yields the coordinate information in the X and Y directions. However, this method uses several cameras, which makes the positioning system complex, hard to operate, and lowers positioning efficiency.
Summary
In view of this, this application provides a robotic arm control method and device based on monocular visual positioning, whose main purpose is to solve the prior-art problem that binocular visual positioning uses several cameras, making the positioning system complex, hard to operate, and lowering positioning efficiency.
According to one aspect of this application, a robotic arm control method based on monocular visual positioning is provided, applied to a monocular-vision-positioning robotic arm control system, the system including a robotic arm and a monocular camera mounted at the end of the robotic arm, the method including:
establishing a pixel coordinate system, a camera coordinate system and a world coordinate system; obtaining the pixel coordinates of a target point of a target object in the pixel coordinate system, a camera intrinsic parameter matrix, a camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system; obtaining, in the world coordinate system, the plane equation of the plane in which the target point lies; obtaining the world coordinates of the target point according to the line-of-sight line equation and the plane equation; and controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
According to another aspect of this application, a robotic arm control device based on monocular visual positioning is provided, including: a coordinate system construction module for establishing a pixel coordinate system, a camera coordinate system and a world coordinate system; a first obtaining module for obtaining the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; a line-of-sight vector determination module for obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; a line-of-sight line equation determination module for obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system; a second obtaining module for obtaining, in the world coordinate system, the plane equation of the plane in which the target point lies; a world coordinate determination module for obtaining the world coordinates of the target point according to the line-of-sight line equation and the plane equation; and a control module for controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
According to yet another aspect of this application, a storage medium is provided, in which at least one executable instruction is stored, and the executable instruction causes a processor to perform the following operations:
establishing a pixel coordinate system, a camera coordinate system and a world coordinate system; obtaining the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system; obtaining, in the world coordinate system, the plane equation of the plane in which the target point lies; obtaining the world coordinates of the target point according to the line-of-sight line equation and the plane equation; and controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
According to still another aspect of this application, a computer device is provided, including a processor, a memory, a communications interface and a communications bus, the processor, the memory and the communications interface communicating with each other through the communications bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the following steps:
establishing a pixel coordinate system, a camera coordinate system and a world coordinate system; obtaining the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system; obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system; obtaining, in the world coordinate system, the plane equation of the plane in which the target point lies; obtaining the world coordinates of the target point according to the line-of-sight line equation and the plane equation; and controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
The embodiments of this application need no pair of monocular cameras, which simplifies the positioning system and makes it simple to operate; they involve no matching of image feature points and no estimation of an optimal distance, which simplifies the processing algorithm and lowers the hardware requirements.
Brief description of the drawings
Fig. 1 shows a flowchart of a robotic arm control method based on monocular visual positioning provided by an embodiment of this application;
Fig. 2 shows a flowchart of another robotic arm control method based on monocular visual positioning provided by an embodiment of this application;
Fig. 3 shows an application scenario diagram of an embodiment of this application;
Fig. 4 shows a block diagram of a robotic arm control device based on monocular visual positioning provided by an embodiment of this application;
Fig. 5 shows a block diagram of another robotic arm control device based on monocular visual positioning provided by an embodiment of this application;
Fig. 6 shows a schematic structural diagram of a computer device provided by an embodiment of this application.
Detailed description
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the disclosure may be implemented in various forms and is not limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure can be understood more thoroughly and its scope conveyed fully to those skilled in the art.
The technical solution of this application can be applied to the fields of artificial intelligence and/or smart city technology and involves computer vision technology. The machine-vision-positioning robotic arm control solution of this application helps simplify positioning, improve positioning efficiency, and enable smart living.
An embodiment of this application provides a robotic arm control method based on monocular visual positioning, applied to a monocular-vision-positioning robotic arm control system, the system including a robotic arm and a monocular camera mounted at the end of the robotic arm. As shown in Fig. 1, the method includes:
Step 101: establish a pixel coordinate system, a camera coordinate system at the end of the robotic arm, and a world coordinate system.
The world coordinate system can be constructed according to the actual position of the target object. The camera coordinate system O_C-X_CY_CZ_C is established in space by taking the optical center of the monocular camera, i.e. the center of its optical axis, as the origin; the Z_C axis coincides with the camera's optical axis, is perpendicular to the imaging plane, and the photographing direction is taken as positive. The pixel coordinate system is established on the image plane captured by the monocular camera: the top-left corner of the image plane is taken as the origin, and the horizontal and vertical directions as the u and v axes, respectively; its unit is the pixel.
Step 102: obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
The target object is the object operated on by the robotic arm, and the target point is a specific point on the target object, generally its center point. The camera intrinsic and extrinsic parameter matrices can be obtained by camera calibration.
Step 103: obtain the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
Step 104: obtain the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system.
Step 105: in the world coordinate system, obtain the plane equation of the plane in which the target point lies.
Step 106: obtain the world coordinates of the target point according to the line-of-sight line equation and the plane equation.
Step 107: control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
The embodiment of this application thus first obtains the pixel coordinates of the target point of the target object in the pixel coordinate system, then performs a conversion using the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system to obtain the line-of-sight line equation, and finally obtains, in the world coordinate system, the plane equation of the plane in which the target point lies and derives the world coordinates of the target point from the line-of-sight line equation and the plane equation. Compared with the prior art, this embodiment needs no pair of monocular cameras, which simplifies the positioning system and makes it simple to operate; it involves no matching of image feature points and no estimation of an optimal distance, which simplifies the processing algorithm and lowers the hardware requirements.
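Steps 101 to 107 above amount to a few lines of linear algebra. The following sketch is illustrative only and not part of the patent text; the helper name `locate_target`, the numeric intrinsic values and the toy transforms are all assumptions:

```python
import numpy as np

def locate_target(uv, K, T_we, T_ec, plane_z):
    """Pixel (u, v) -> world coordinates of the target point (steps 102-106).

    K       : 3x3 camera intrinsic matrix
    T_we    : 4x4 homogeneous transform, arm end relative to world
    T_ec    : 4x4 camera extrinsic matrix, camera relative to arm end
    plane_z : world-frame height of the plane containing the target point
    """
    # Line-of-sight direction in the camera frame (step 103).
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Camera frame relative to world: chained homogeneous transforms.
    T_wc = T_we @ T_ec
    # Rotate the direction into the world frame; read off the camera origin.
    d_world = T_wc[:3, :3] @ d_cam
    origin = T_wc[:3, 3]
    # Intersect the line of sight with the plane z = plane_z (steps 104-106).
    t = (plane_z - origin[2]) / d_world[2]
    return origin + t * d_world

# Toy geometry: camera 1 m above the table, looking straight down.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
T_we = np.eye(4); T_we[2, 3] = 1.0                            # arm end pose
T_ec = np.eye(4); T_ec[:3, :3] = np.diag([1.0, -1.0, -1.0])   # camera looks down
p = locate_target((320.0, 240.0), K, T_we, T_ec, plane_z=0.0)
# The principal-point pixel maps to the table point directly below the camera.
```

With this downward-looking geometry, the principal point (320, 240) lands at the world origin, and off-center pixels land proportionally off to the side.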
An embodiment of this application provides another robotic arm control method based on monocular visual positioning, applied to a monocular-vision-positioning robotic arm control system, the system including a robotic arm and a monocular camera mounted at the end of the robotic arm. As shown in Fig. 2, the method includes:
Step 201: establish a pixel coordinate system, a camera coordinate system at the end of the robotic arm, and a world coordinate system.
The world coordinate system can be constructed according to the actual position of the target object. Taking Fig. 3 as an example, the target object is a teacup placed on a workbench; the plane formed by the X and Y axes of the world coordinate system is parallel to the upper surface of the workbench, and the Z axis is perpendicular to that surface, giving the coordinate system O_b-X_bY_bZ_b.
The camera coordinate system O_C-X_CY_CZ_C is established in space by taking the optical center of the monocular camera, i.e. the center of its optical axis, as the origin; the Z_C axis coincides with the camera's optical axis, is perpendicular to the imaging plane, and the photographing direction is taken as positive. The pixel coordinate system is established on the image plane captured by the monocular camera: the top-left corner of the image plane is taken as the origin, and the horizontal and vertical directions as the u and v axes, respectively; its unit is the pixel.
Step 202: obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system.
The target object is the object operated on by the robotic arm, and the target point is a specific point on the target object, generally its center point. Taking Fig. 3 as an example, the target object is the teacup, and the center of the cup's mouth is taken as the target point.
The camera intrinsic and extrinsic parameter matrices can be obtained by capturing images of multiple calibration-board poses with the monocular camera and calibrating the camera on this image set. The specific calibration process is: focus the monocular camera on a checkerboard calibration board of known square size, photograph the board from different angles, extract the pixel positions of the board's corner points in each image, and then, from those pixel positions and the board's parameters, calibrate the camera using Zhang's calibration method to obtain the camera intrinsic parameter matrix.
Step 203: from the pixel coordinates of the target point and the camera intrinsic parameter matrix, obtain the three-dimensional coordinate expression of the target point in the camera coordinate system:
s·[u, v, 1]^T = M·[X_C, Y_C, Z_C]^T
where [u, v, 1]^T holds the pixel coordinates (u, v) of the target point, s is the homogeneous scale factor, M = [[f_x, 0, α], [0, f_y, β], [0, 0, 1]] is the camera intrinsic parameter matrix, f_x is the focal-length coefficient of the pixel-plane x axis, f_y is the focal-length coefficient of the pixel-plane y axis, α is the offset of the pixel-plane x axis, β is the offset of the pixel-plane y axis, and [X_C, Y_C, Z_C]^T are the three-dimensional coordinates of the target point in the camera coordinate system.
Step 204: from the three-dimensional coordinate expression of the target point in the camera coordinate system, obtain the line-of-sight vector of the target point relative to the camera coordinate system:
n_C = (n_Cx, n_Cy, n_Cz) = ((u − α)/f_x, (v − β)/f_y, 1)
where n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system, f_x and f_y are the focal-length coefficients of the pixel-plane x and y axes, and u and v are the pixel coordinates of the target point.
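The back-projection of steps 203 and 204 can be checked numerically. A minimal sketch assuming the standard pinhole model, with illustrative values for f_x, f_y, α and β:

```python
import numpy as np

f_x, f_y, alpha, beta = 800.0, 800.0, 320.0, 240.0
M = np.array([[f_x, 0.0, alpha],
              [0.0, f_y, beta],
              [0.0, 0.0, 1.0]])   # camera intrinsic parameter matrix

def ray_in_camera(u, v):
    """Line-of-sight vector of pixel (u, v) in the camera frame."""
    # s*[u, v, 1]^T = M*[Xc, Yc, Zc]^T, so the direction is M^{-1}[u, v, 1]^T,
    # i.e. ((u - alpha)/f_x, (v - beta)/f_y, 1) up to scale.
    return np.linalg.inv(M) @ np.array([u, v, 1.0])

d = ray_in_camera(480.0, 400.0)   # -> (0.2, 0.2, 1.0)
```

A pixel at the principal point (α, β) yields the optical-axis direction (0, 0, 1), as expected.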
Step 205: from the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, obtain the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system according to the following formula:
wTc = wTe · eTc
where wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, wTe is the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and eTc is the camera extrinsic parameter matrix (the transform of the camera coordinate system relative to the end of the robotic arm).
The homogeneous transformation matrix wTe of the end of the robotic arm relative to the world coordinate system can be obtained by solving the kinematics of the robotic arm, which compute the pose of the arm's end relative to the world coordinate system from the arm's known joint angles. Specifically, the kinematics are modeled with the D-H four-parameter method, a matrix method for establishing relative poses: homogeneous transformations describe the spatial geometric relation of each link relative to a fixed reference coordinate system, a 4×4 homogeneous transformation matrix describes the spatial relation between adjacent links, and from these the homogeneous transformation matrix wTe of the end of the robotic arm relative to the world coordinate system is derived.
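The chained transform of step 205 is a single product of 4×4 homogeneous matrices. A sketch, with made-up rotations and translations for illustration:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# wTe: arm end relative to world; eTc: camera relative to arm end (extrinsics).
T_we = make_T(np.eye(3), [0.5, 0.0, 1.0])
T_ec = make_T(np.eye(3), [0.0, 0.0, 0.1])
# wTc = wTe @ eTc: camera coordinate system relative to the world.
T_wc = T_we @ T_ec
```

With identity rotations the translations simply add, so the camera origin sits at (0.5, 0.0, 1.1) in the world frame.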
Step 206: from the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, obtain the line-of-sight vector of the target point relative to the world coordinate system according to the following formula:
[n_Wx, n_Wy, n_Wz, 0]^T = wTc · [n_Cx, n_Cy, n_Cz, 0]^T
where n_W = (n_Wx, n_Wy, n_Wz) is the line-of-sight vector of the target point relative to the world coordinate system, with n_Wx, n_Wy and n_Wz the projections of the target point on the x, y and z axes of the world coordinate system; wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and n_C = (n_Cx, n_Cy, n_Cz) is the line-of-sight vector of the target point relative to the camera coordinate system, with n_Cx, n_Cy and n_Cz the projections of the target point on the x, y and z axes of the camera coordinate system.
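Because a line-of-sight vector is a direction rather than a point, its fourth homogeneous component is 0, so only the rotation block of wTc affects it. A sketch under that convention (the example transform is made up):

```python
import numpy as np

def ray_to_world(T_wc, d_cam):
    """Rotate a camera-frame line-of-sight vector into the world frame."""
    d_h = np.append(d_cam, 0.0)   # direction vector: fourth component is 0
    return (T_wc @ d_h)[:3]       # the translation of T_wc has no effect

# 90-degree rotation about the world x axis, plus a translation that is ignored.
T_wc = np.eye(4)
T_wc[:3, :3] = np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, -1.0],
                         [0.0, 1.0, 0.0]])
T_wc[:3, 3] = [5.0, 5.0, 5.0]
d_world = ray_to_world(T_wc, np.array([0.0, 0.0, 1.0]))   # -> (0, -1, 0)
```

Note that the (5, 5, 5) translation leaves the result unchanged, which is exactly why the homogeneous direction carries a 0 rather than a 1.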
Step 207: from the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, obtain the coordinates of the camera origin in the world coordinate system.
The coordinates of the camera origin in the world coordinate system are (x_c, y_c, z_c), i.e. the translation part of wTc, where x_c, y_c and z_c are the projections of the camera origin on the x, y and z axes of the world coordinate system, respectively.
Step 208: from the camera origin coordinates and the line-of-sight vector of the target point relative to the world coordinate system, obtain the line-of-sight line equation:
(x − x_c)/n_Wx = (y − y_c)/n_Wy = (z − z_c)/n_Wz = t
where x, y and z are the coordinates of points on the line of sight, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, t is the line parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
Step 209: in the world coordinate system, obtain the plane equation of the plane in which the target point lies.
Taking Fig. 3 as an example, the plane in which the target point lies is the plane of the teacup's mouth, with plane equation Z = H + h, where H is the height of the workbench's upper surface in the world coordinate system and h is the height of the cup-mouth plane above the workbench surface.
Step 210: obtain the world coordinates of the target point according to the line-of-sight line equation and the plane equation.
Continuing with Fig. 3, combining the line-of-sight line equation of step 208 with the plane equation gives t = (H + h − z_c)/n_Wz and hence the world coordinates of the target point:
(x_c + n_Wx·(H + h − z_c)/n_Wz, y_c + n_Wy·(H + h − z_c)/n_Wz, H + h)
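The substitution performed in step 210 can be sketched as a one-line ray-plane intersection; the camera origin, direction and heights below are illustrative values, not taken from the patent:

```python
import numpy as np

def intersect_plane(origin, direction, H, h):
    """World coordinates where the line of sight meets the plane z = H + h."""
    # z_c + t * n_z = H + h  =>  t = (H + h - z_c) / n_z
    t = (H + h - origin[2]) / direction[2]
    return origin + t * np.asarray(direction)

# Camera origin 1.9 above the world origin; table at H = 0.8, cup mouth h = 0.1.
p = intersect_plane(np.array([0.2, 0.3, 1.9]), np.array([0.1, 0.0, -1.0]),
                    H=0.8, h=0.1)
# t = (0.9 - 1.9)/(-1.0) = 1.0, so p = (0.3, 0.3, 0.9)
```

The z component of the result always equals H + h by construction, which is a quick sanity check on any implementation.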
Step 211: control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
The preset strategy can be set by operators according to actual needs, for example grasping at the target point; taking the teacup as an example, the preset strategy can be grasping the teacup or pouring water into it.
The embodiment of this application thus first obtains the pixel coordinates of the target point of the target object in the pixel coordinate system, then performs a conversion using the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system to obtain the line-of-sight line equation, and finally obtains, in the world coordinate system, the plane equation of the plane in which the target point lies and derives the world coordinates of the target point from the line-of-sight line equation and the plane equation. Compared with the prior art, this embodiment needs no pair of monocular cameras, which simplifies the positioning system and makes it simple to operate; it involves no matching of image feature points and no estimation of an optimal distance, which simplifies the processing algorithm and lowers the hardware requirements.
Further, as an implementation of the method shown in Fig. 1, an embodiment of this application provides a robotic arm control device based on monocular visual positioning. As shown in Fig. 4, the device includes:
a coordinate system construction module 401, configured to establish a pixel coordinate system, a camera coordinate system and a world coordinate system;
a first obtaining module 402, configured to obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
a line-of-sight vector determination module 403, configured to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
a line-of-sight line equation determination module 404, configured to obtain the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
a second obtaining module 405, configured to obtain, in the world coordinate system, the plane equation of the plane in which the target point lies;
a world coordinate determination module 406, configured to obtain the world coordinates of the target point according to the line-of-sight line equation and the plane equation;
a control module 407, configured to control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
Further, as an implementation of the method shown in Fig. 2, an embodiment of this application provides another device. As shown in Fig. 5, the device includes:
a coordinate system construction module 501, configured to establish a pixel coordinate system, a camera coordinate system and a world coordinate system;
a first obtaining module 502, configured to obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix and the camera extrinsic parameter matrix;
a line-of-sight vector determination module 503, configured to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
a line-of-sight line equation determination module 504, configured to obtain the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
a second obtaining module 505, configured to obtain, in the world coordinate system, the plane equation of the plane in which the target point lies;
a world coordinate determination module 506, configured to obtain the world coordinates of the target point according to the line-of-sight line equation and the plane equation;
a control module 507, configured to control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
Further, the line-of-sight vector determination module 503 includes:
a line-of-sight vector determination unit 5031, configured to obtain the line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the direction of the line-of-sight vector pointing from the origin of the camera coordinate system to the center of the target object;
a line-of-sight vector conversion unit 5032, configured to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system.
Further, the line-of-sight vector determination unit 5031 includes:
a three-dimensional coordinate expression determination subunit 50311, configured to obtain, according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the three-dimensional coordinate expression of the target point in the camera coordinate system:
s·[u, v, 1]^T = M·[X_C, Y_C, Z_C]^T
where [u, v, 1]^T holds the pixel coordinates (u, v) of the target point, s is the homogeneous scale factor, M = [[f_x, 0, α], [0, f_y, β], [0, 0, 1]] is the camera intrinsic parameter matrix, f_x is the focal-length coefficient of the pixel-plane x axis, f_y is the focal-length coefficient of the pixel-plane y axis, α is the offset of the pixel-plane x axis, β is the offset of the pixel-plane y axis, and [X_C, Y_C, Z_C]^T are the three-dimensional coordinates of the target point in the camera coordinate system;
a line-of-sight vector calculation subunit 50312, configured to obtain, according to the three-dimensional coordinate expression of the target point in the camera coordinate system, the line-of-sight vector of the target point relative to the camera coordinate system:
n_C = (n_Cx, n_Cy, n_Cz) = ((u − α)/f_x, (v − β)/f_y, 1)
where n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system, f_x and f_y are the focal-length coefficients of the pixel-plane x and y axes, and u and v are the pixel coordinates of the target point.
Further, the line-of-sight vector conversion unit 5032 includes:
a homogeneous transformation matrix determination subunit 50321, configured to obtain the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system from the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, according to the following formula:
wTc = wTe · eTc
where wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, wTe is the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and eTc is the camera extrinsic parameter matrix;
a line-of-sight vector conversion subunit 50322, configured to obtain the line-of-sight vector of the target point relative to the world coordinate system from the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, according to the following formula:
[n_Wx, n_Wy, n_Wz, 0]^T = wTc · [n_Cx, n_Cy, n_Cz, 0]^T
where n_W = (n_Wx, n_Wy, n_Wz) is the line-of-sight vector of the target point relative to the world coordinate system, with n_Wx, n_Wy and n_Wz the projections of the target point on the x, y and z axes of the world coordinate system; wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and n_C = (n_Cx, n_Cy, n_Cz) is the line-of-sight vector of the target point relative to the camera coordinate system, with n_Cx, n_Cy and n_Cz the projections of the target point on the x, y and z axes of the camera coordinate system.
Further, the line-of-sight line equation determination module 504 includes:
a camera origin coordinate determination unit 5041, configured to obtain the coordinates of the camera origin in the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system;
the coordinates of the camera origin in the world coordinate system are (x_c, y_c, z_c), where x_c, y_c and z_c are the projections of the camera origin on the x, y and z axes of the world coordinate system, respectively;
a line-of-sight line equation determination unit 5042, configured to obtain the line-of-sight line equation according to the camera origin coordinates and the line-of-sight vector of the target point relative to the world coordinate system;
the line-of-sight line equation is
(x − x_c)/n_Wx = (y − y_c)/n_Wy = (z − z_c)/n_Wz = t
where x, y and z are the coordinates of points on the line of sight, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, t is the line parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
An embodiment of this application provides a storage medium storing at least one executable instruction; the executable instruction can perform the robotic arm control method based on monocular visual positioning in any of the foregoing method embodiments. Optionally, the storage medium involved in this application may be a computer-readable storage medium, which may be non-volatile or volatile.
Fig. 6 shows a schematic structural diagram of a computer device provided according to an embodiment of this application; the specific embodiments of this application do not limit the specific implementation of the computer device.
As shown in Fig. 6, the computer device may include a processor 602, a communications interface 604, a memory 606 and a communications bus 608, with the processor 602, the communications interface 604 and the memory 606 communicating with each other through the communications bus 608.
The communications interface 604 is used to communicate with network elements of other devices, such as clients or other servers.
The processor 602 is configured to execute the program 610 and may specifically perform the relevant steps of the foregoing embodiments of the robotic arm control method based on monocular visual positioning.
Specifically, the program 610 may include program code comprising computer operation instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application. The one or more processors included in the computer device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is used to store the program 610. The memory 606 may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one disk memory.
The program 610 may specifically be used to cause the processor 602 to perform the following operations: establish a pixel coordinate system, a camera coordinate system at the end of the robotic arm, and a world coordinate system; obtain the pixel coordinates of the target point of the target object in the pixel coordinate system, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, where the target object is the object operated on by the robotic arm, the target point is a specific point on the target object, generally its center point, and the camera intrinsic and extrinsic parameter matrices can be obtained by camera calibration; obtain, according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the line-of-sight vector of the target point relative to the camera coordinate system, whose direction points from the origin of the camera coordinate system to the center of the target object; obtain, according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, the line-of-sight vector of the target point relative to the world coordinate system; obtain the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system; obtain, in the world coordinate system, the plane equation of the plane in which the target point lies; obtain the world coordinates of the target point according to the line-of-sight line equation and the plane equation; and control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
Obviously, those skilled in the art should understand that the modules or steps of this application described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that here, or they may each be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Thus, this application is not limited to any specific combination of hardware and software.
The above are merely preferred embodiments of this application and are not intended to limit it; for those skilled in the art, this application may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (20)

  1. A robotic arm control method based on monocular visual positioning, applied to a monocular-vision-positioning robotic arm control system, the system comprising a robotic arm and a monocular camera mounted at the end of the robotic arm, the method comprising:
    establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
    obtaining pixel coordinates of a target point of a target object in the pixel coordinate system, a camera intrinsic parameter matrix, a camera extrinsic parameter matrix and a homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight line equation according to a homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
    obtaining, in the world coordinate system, a plane equation of the plane in which the target point lies;
    obtaining world coordinates of the target point according to the line-of-sight line equation and the plane equation; and
    controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
  2. The method according to claim 1, wherein obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system comprises:
    obtaining a line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the direction of the line-of-sight vector pointing from the origin of the camera coordinate system to the center of the target object; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system.
  3. The method according to claim 2, wherein obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix comprises:
    obtaining a three-dimensional coordinate expression of the target point in the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the three-dimensional coordinate expression being s·[u, v, 1]^T = M·[X_C, Y_C, Z_C]^T, where [u, v, 1]^T holds the pixel coordinates of the target point, s is the homogeneous scale factor, M = [[f_x, 0, α], [0, f_y, β], [0, 0, 1]] is the camera intrinsic parameter matrix, f_x is the focal-length coefficient of the pixel-plane x axis, f_y is the focal-length coefficient of the pixel-plane y axis, α is the offset of the pixel-plane x axis, β is the offset of the pixel-plane y axis, and [X_C, Y_C, Z_C]^T are the three-dimensional coordinates of the target point in the camera coordinate system; and
    obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the three-dimensional coordinate expression, the line-of-sight vector being n_C = (n_Cx, n_Cy, n_Cz) = ((u − α)/f_x, (v − β)/f_y, 1), where n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system, f_x and f_y are the focal-length coefficients of the pixel-plane x and y axes, and u and v are the pixel coordinates of the target point.
  4. The method according to claim 2, wherein obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system comprises:
    obtaining the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, according to the formula wTc = wTe · eTc, where wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, wTe is the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and eTc is the camera extrinsic parameter matrix; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, according to the formula [n_Wx, n_Wy, n_Wz, 0]^T = wTc · [n_Cx, n_Cy, n_Cz, 0]^T, where n_W = (n_Wx, n_Wy, n_Wz) is the line-of-sight vector of the target point relative to the world coordinate system, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, n_C = (n_Cx, n_Cy, n_Cz) is the line-of-sight vector of the target point relative to the camera coordinate system, and n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system.
  5. The method according to claim 1, wherein obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system comprises:
    obtaining coordinates of the camera origin in the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and
    obtaining the line-of-sight line equation according to the camera origin coordinates and the line-of-sight vector of the target point relative to the world coordinate system.
  6. The method according to claim 5, wherein the coordinates of the camera origin in the world coordinate system are (x_c, y_c, z_c), where x_c, y_c and z_c are the projections of the camera origin on the x, y and z axes of the world coordinate system, respectively; and
    the line-of-sight line equation is (x − x_c)/n_Wx = (y − y_c)/n_Wy = (z − z_c)/n_Wz = t, where x, y and z are the coordinates of points on the line of sight, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, t is the line parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
  7. A robotic arm control device based on monocular visual positioning, comprising:
    a coordinate system construction module, configured to establish a pixel coordinate system, a camera coordinate system and a world coordinate system;
    a first obtaining module, configured to obtain pixel coordinates of a target point of a target object in the pixel coordinate system, a camera intrinsic parameter matrix, a camera extrinsic parameter matrix and a homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    a line-of-sight vector determination module, configured to obtain a line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    a line-of-sight line equation determination module, configured to obtain a line-of-sight line equation according to a homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
    a second obtaining module, configured to obtain, in the world coordinate system, a plane equation of the plane in which the target point lies;
    a world coordinate determination module, configured to obtain world coordinates of the target point according to the line-of-sight line equation and the plane equation; and
    a control module, configured to control the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
  8. The device according to claim 7, wherein the line-of-sight vector determination module comprises:
    a line-of-sight vector determination unit, configured to obtain a line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the direction of the line-of-sight vector pointing from the origin of the camera coordinate system to the center of the target object; and
    a line-of-sight vector conversion unit, configured to obtain the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system.
  9. A storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the following steps:
    establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
    obtaining pixel coordinates of a target point of a target object in the pixel coordinate system, a camera intrinsic parameter matrix, a camera extrinsic parameter matrix and a homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight line equation according to a homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
    obtaining, in the world coordinate system, a plane equation of the plane in which the target point lies;
    obtaining world coordinates of the target point according to the line-of-sight line equation and the plane equation; and
    controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
  10. The storage medium according to claim 9, wherein, when obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, the following steps are specifically performed:
    obtaining a line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the direction of the line-of-sight vector pointing from the origin of the camera coordinate system to the center of the target object; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system.
  11. The storage medium according to claim 10, wherein, when obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the following steps are specifically performed:
    obtaining a three-dimensional coordinate expression of the target point in the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the three-dimensional coordinate expression being s·[u, v, 1]^T = M·[X_C, Y_C, Z_C]^T, where [u, v, 1]^T holds the pixel coordinates of the target point, s is the homogeneous scale factor, M = [[f_x, 0, α], [0, f_y, β], [0, 0, 1]] is the camera intrinsic parameter matrix, f_x is the focal-length coefficient of the pixel-plane x axis, f_y is the focal-length coefficient of the pixel-plane y axis, α is the offset of the pixel-plane x axis, β is the offset of the pixel-plane y axis, and [X_C, Y_C, Z_C]^T are the three-dimensional coordinates of the target point in the camera coordinate system; and
    obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the three-dimensional coordinate expression, the line-of-sight vector being n_C = (n_Cx, n_Cy, n_Cz) = ((u − α)/f_x, (v − β)/f_y, 1), where n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system, f_x and f_y are the focal-length coefficients of the pixel-plane x and y axes, and u and v are the pixel coordinates of the target point.
  12. The storage medium according to claim 10, wherein, when obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, the following steps are specifically performed:
    obtaining the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, according to the formula wTc = wTe · eTc, where wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, wTe is the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and eTc is the camera extrinsic parameter matrix; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, according to the formula [n_Wx, n_Wy, n_Wz, 0]^T = wTc · [n_Cx, n_Cy, n_Cz, 0]^T, where n_W = (n_Wx, n_Wy, n_Wz) is the line-of-sight vector of the target point relative to the world coordinate system, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, n_C = (n_Cx, n_Cy, n_Cz) is the line-of-sight vector of the target point relative to the camera coordinate system, and n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system.
  13. The storage medium according to claim 9, wherein, when obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system, the following steps are specifically performed:
    obtaining coordinates of the camera origin in the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and
    obtaining the line-of-sight line equation according to the camera origin coordinates and the line-of-sight vector of the target point relative to the world coordinate system.
  14. The storage medium according to claim 13, wherein the coordinates of the camera origin in the world coordinate system are (x_c, y_c, z_c), where x_c, y_c and z_c are the projections of the camera origin on the x, y and z axes of the world coordinate system, respectively; and
    the line-of-sight line equation is (x − x_c)/n_Wx = (y − y_c)/n_Wy = (z − z_c)/n_Wz = t, where x, y and z are the coordinates of points on the line of sight, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, t is the line parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
  15. A computer device, comprising: a processor, a memory, a communications interface and a communications bus, the processor, the memory and the communications interface communicating with each other through the communications bus;
    the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the following steps:
    establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
    obtaining pixel coordinates of a target point of a target object in the pixel coordinate system, a camera intrinsic parameter matrix, a camera extrinsic parameter matrix and a homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system;
    obtaining a line-of-sight line equation according to a homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system;
    obtaining, in the world coordinate system, a plane equation of the plane in which the target point lies;
    obtaining world coordinates of the target point according to the line-of-sight line equation and the plane equation; and
    controlling the robotic arm to operate on the object according to a preset strategy based on the world coordinates of the target point.
  16. The computer device according to claim 15, wherein, when obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the pixel coordinates of the target point, the camera intrinsic parameter matrix, the camera extrinsic parameter matrix and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, the following steps are specifically performed:
    obtaining a line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the direction of the line-of-sight vector pointing from the origin of the camera coordinate system to the center of the target object; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system.
  17. The computer device according to claim 16, wherein, when obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the following steps are specifically performed:
    obtaining a three-dimensional coordinate expression of the target point in the camera coordinate system according to the pixel coordinates of the target point and the camera intrinsic parameter matrix, the three-dimensional coordinate expression being s·[u, v, 1]^T = M·[X_C, Y_C, Z_C]^T, where [u, v, 1]^T holds the pixel coordinates of the target point, s is the homogeneous scale factor, M = [[f_x, 0, α], [0, f_y, β], [0, 0, 1]] is the camera intrinsic parameter matrix, f_x is the focal-length coefficient of the pixel-plane x axis, f_y is the focal-length coefficient of the pixel-plane y axis, α is the offset of the pixel-plane x axis, β is the offset of the pixel-plane y axis, and [X_C, Y_C, Z_C]^T are the three-dimensional coordinates of the target point in the camera coordinate system; and
    obtaining the line-of-sight vector of the target point relative to the camera coordinate system according to the three-dimensional coordinate expression, the line-of-sight vector being n_C = (n_Cx, n_Cy, n_Cz) = ((u − α)/f_x, (v − β)/f_y, 1), where n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system, f_x and f_y are the focal-length coefficients of the pixel-plane x and y axes, and u and v are the pixel coordinates of the target point.
  18. The computer device according to claim 16, wherein, when obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the camera extrinsic parameter matrix, the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, the following steps are specifically performed:
    obtaining the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the end of the robotic arm and the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, according to the formula wTc = wTe · eTc, where wTc is the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system, wTe is the homogeneous transformation matrix of the end of the robotic arm relative to the world coordinate system, and eTc is the camera extrinsic parameter matrix; and
    obtaining the line-of-sight vector of the target point relative to the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the camera coordinate system, according to the formula [n_Wx, n_Wy, n_Wz, 0]^T = wTc · [n_Cx, n_Cy, n_Cz, 0]^T, where n_W = (n_Wx, n_Wy, n_Wz) is the line-of-sight vector of the target point relative to the world coordinate system, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, n_C = (n_Cx, n_Cy, n_Cz) is the line-of-sight vector of the target point relative to the camera coordinate system, and n_Cx, n_Cy and n_Cz are the projections of the target point on the x, y and z axes of the camera coordinate system.
  19. The computer device according to claim 15, wherein, when obtaining the line-of-sight line equation according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system and the line-of-sight vector of the target point relative to the world coordinate system, the following steps are specifically performed:
    obtaining coordinates of the camera origin in the world coordinate system according to the homogeneous transformation matrix of the camera coordinate system relative to the world coordinate system; and
    obtaining the line-of-sight line equation according to the camera origin coordinates and the line-of-sight vector of the target point relative to the world coordinate system.
  20. The computer device according to claim 19, wherein the coordinates of the camera origin in the world coordinate system are (x_c, y_c, z_c), where x_c, y_c and z_c are the projections of the camera origin on the x, y and z axes of the world coordinate system, respectively; and
    the line-of-sight line equation is (x − x_c)/n_Wx = (y − y_c)/n_Wy = (z − z_c)/n_Wz = t, where x, y and z are the coordinates of points on the line of sight, n_Wx, n_Wy and n_Wz are the projections of the target point on the x, y and z axes of the world coordinate system, t is the line parameter, and x_c, y_c and z_c are the coordinates of the camera origin in the world coordinate system.
PCT/CN2020/111190 2020-04-28 2020-08-26 一种基于单目视觉定位的机械臂控制方法及装置 WO2021217976A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG11202113181UA SG11202113181UA (en) 2020-04-28 2020-08-26 Mechanical arm control method and device based on monocular vision positioning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010349776.XA CN111673735A (zh) 2020-04-28 2020-04-28 一种基于单目视觉定位的机械臂控制方法及装置
CN202010349776.X 2020-04-28

Publications (1)

Publication Number Publication Date
WO2021217976A1 true WO2021217976A1 (zh) 2021-11-04

Family

ID=72452610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111190 WO2021217976A1 (zh) 2020-04-28 2020-08-26 一种基于单目视觉定位的机械臂控制方法及装置

Country Status (3)

Country Link
CN (1) CN111673735A (zh)
SG (1) SG11202113181UA (zh)
WO (1) WO2021217976A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114055501A (zh) * 2021-11-17 2022-02-18 长春理工大学 一种机器人抓取系统及其控制方法
CN114332231A (zh) * 2022-03-04 2022-04-12 成都创像科技有限公司 视觉检测设备中机械手与相机的定位方法、装置及介质
CN114378822A (zh) * 2022-01-19 2022-04-22 合肥工业大学 一种基于视觉的机器人机械臂末端位姿的调整方法
CN114683214A (zh) * 2022-03-30 2022-07-01 武汉海微科技有限公司 一种车载屏幕壳体自动化打螺丝的视觉定位方法
CN114782533A (zh) * 2022-04-19 2022-07-22 常州机电职业技术学院 一种基于单目视觉的线缆盘轴位姿确定方法
CN115556109A (zh) * 2022-10-24 2023-01-03 深圳市通用测试系统有限公司 一种测试系统中机械臂的定位方法和装置
CN115578677A (zh) * 2022-10-28 2023-01-06 众芯汉创(北京)科技有限公司 一种基于视频流的隐患抓拍和识别的智能装置
CN116408800A (zh) * 2023-03-27 2023-07-11 中铁隧道局集团有限公司 一种基于孔位坐标的锚杆台车自动定位方法
CN116912333A (zh) * 2023-09-12 2023-10-20 安徽炬视科技有限公司 一种基于作业围栏标定杆的相机姿态自标定方法
CN117290980A (zh) * 2023-11-27 2023-12-26 江西格如灵科技股份有限公司 一种基于Unity平台的机械臂仿真方法及系统

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112545B (zh) * 2021-04-15 2023-03-21 西安电子科技大学 基于计算机视觉的手持移动打印装置定位方法
CN113407030B (zh) * 2021-06-25 2023-08-25 浙江商汤科技开发有限公司 视觉定位方法及相关装置、设备和存储介质
CN113781575B (zh) * 2021-08-09 2024-01-12 上海奥视达智能科技有限公司 一种相机参数的标定方法、装置、终端和存储介质
CN115781665B (zh) * 2022-11-01 2023-08-08 深圳史河机器人科技有限公司 一种基于单目相机的机械臂控制方法、装置及存储介质
CN115564836B (zh) * 2022-11-10 2023-03-17 凌度(广东)智能科技发展有限公司 幕墙机器人的单目坐标转换方法、装置及电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103471500A (zh) * 2013-06-05 2013-12-25 江南大学 一种单目机器视觉中平面坐标与空间三维坐标点的转换方法
JP2017071033A (ja) * 2015-10-09 2017-04-13 キヤノン株式会社 作業用基準物体、作業用基準物体の製造方法、ロボットアームの調整方法、ビジョンシステム、ロボット装置、及び指標用部材
CN107883929A (zh) * 2017-09-22 2018-04-06 中冶赛迪技术研究中心有限公司 基于多关节机械臂的单目视觉定位装置及方法
CN108920996A (zh) * 2018-04-10 2018-11-30 泰州职业技术学院 一种基于机器人视觉的小目标检测方法
CN109472829A (zh) * 2018-09-04 2019-03-15 顺丰科技有限公司 一种物体定位方法、装置、设备和存储介质
CN109961485A (zh) * 2019-03-05 2019-07-02 南京理工大学 一种基于单目视觉进行目标定位的方法
CN110370286A (zh) * 2019-08-13 2019-10-25 西北工业大学 基于工业机器人和单目相机的定轴运动刚体空间位置识别方法
US20200023521A1 (en) * 2018-07-18 2020-01-23 Canon Kabushiki Kaisha Method and device of controlling robot system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103471500A (zh) * 2013-06-05 2013-12-25 江南大学 一种单目机器视觉中平面坐标与空间三维坐标点的转换方法
JP2017071033A (ja) * 2015-10-09 2017-04-13 キヤノン株式会社 作業用基準物体、作業用基準物体の製造方法、ロボットアームの調整方法、ビジョンシステム、ロボット装置、及び指標用部材
CN107883929A (zh) * 2017-09-22 2018-04-06 中冶赛迪技术研究中心有限公司 基于多关节机械臂的单目视觉定位装置及方法
CN108920996A (zh) * 2018-04-10 2018-11-30 泰州职业技术学院 一种基于机器人视觉的小目标检测方法
US20200023521A1 (en) * 2018-07-18 2020-01-23 Canon Kabushiki Kaisha Method and device of controlling robot system
CN109472829A (zh) * 2018-09-04 2019-03-15 顺丰科技有限公司 一种物体定位方法、装置、设备和存储介质
CN109961485A (zh) * 2019-03-05 2019-07-02 南京理工大学 一种基于单目视觉进行目标定位的方法
CN110370286A (zh) * 2019-08-13 2019-10-25 西北工业大学 基于工业机器人和单目相机的定轴运动刚体空间位置识别方法

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114055501A (zh) * 2021-11-17 2022-02-18 长春理工大学 Robot grasping system and control method thereof
CN114378822A (zh) * 2022-01-19 2022-04-22 合肥工业大学 Vision-based method for adjusting the end pose of a robot manipulator arm
CN114378822B (zh) * 2022-01-19 2023-09-01 合肥工业大学 Vision-based method for adjusting the end pose of a robot manipulator arm
CN114332231A (zh) * 2022-03-04 2022-04-12 成都创像科技有限公司 Positioning method, apparatus, and medium for a manipulator and camera in visual inspection equipment
CN114332231B (zh) * 2022-03-04 2022-06-14 成都创像科技有限公司 Positioning method, apparatus, and medium for a manipulator and camera in visual inspection equipment
CN114683214A (zh) * 2022-03-30 2022-07-01 武汉海微科技有限公司 Visual positioning method for automated screw driving on vehicle-mounted screen housings
CN114782533B (zh) * 2022-04-19 2023-05-23 常州机电职业技术学院 Monocular-vision-based method for determining the pose of a cable reel axis
CN114782533A (zh) * 2022-04-19 2022-07-22 常州机电职业技术学院 Monocular-vision-based method for determining the pose of a cable reel axis
CN115556109A (zh) * 2022-10-24 2023-01-03 深圳市通用测试系统有限公司 Method and device for positioning a robotic arm in a test system
CN115556109B (zh) * 2022-10-24 2024-06-11 深圳市通用测试系统有限公司 Method and device for positioning a robotic arm in a test system
CN115578677A (zh) * 2022-10-28 2023-01-06 众芯汉创(北京)科技有限公司 Intelligent device for capturing and identifying hidden hazards based on video streams
CN116408800A (zh) * 2023-03-27 2023-07-11 中铁隧道局集团有限公司 Automatic positioning method for a rock-bolt drilling jumbo based on hole coordinates
CN116408800B (zh) * 2023-03-27 2024-01-09 中铁隧道局集团有限公司 Automatic positioning method for a rock-bolt drilling jumbo based on hole coordinates
CN116912333A (zh) * 2023-09-12 2023-10-20 安徽炬视科技有限公司 Camera pose self-calibration method based on work-fence calibration rods
CN116912333B (zh) * 2023-09-12 2023-12-26 安徽炬视科技有限公司 Camera pose self-calibration method based on work-fence calibration rods
CN117290980A (zh) * 2023-11-27 2023-12-26 江西格如灵科技股份有限公司 Robotic arm simulation method and system based on the Unity platform
CN117290980B (zh) * 2023-11-27 2024-02-02 江西格如灵科技股份有限公司 Robotic arm simulation method and system based on the Unity platform

Also Published As

Publication number Publication date
CN111673735A (zh) 2020-09-18
SG11202113181UA (en) 2021-12-30

Similar Documents

Publication Publication Date Title
WO2021217976A1 (zh) Robotic arm control method and apparatus based on monocular vision positioning
CN110276806B (zh) Online hand-eye calibration and grasping pose calculation method for the stereo-vision hand-eye system of a four-degree-of-freedom parallel robot
WO2019114339A1 (zh) Method and apparatus for correcting robotic arm motion
CN109658460A (zh) Hand-eye calibration method and system for a camera at the end of a robotic arm
JP2017112602A (ja) Image calibration, stitching, and depth reconstruction method for a panoramic fisheye camera, and system therefor
CN113379849B (zh) Depth-camera-based method and system for autonomous robot recognition and intelligent grasping
CN110751691B (zh) Automatic pipe-fitting grasping method based on binocular vision
CN107545591B (zh) Robot hand-eye calibration method based on the six-point contact method
WO2021043213A1 (zh) Calibration method, apparatus, aerial photography device, and storage medium
CN111801198A (zh) Hand-eye calibration method, system, and computer storage medium
CN109767474A (zh) Multi-camera calibration method, apparatus, and storage medium
CN113442169A (zh) Robot hand-eye calibration method and apparatus, computer device, and readable storage medium
CN112949478A (zh) Target detection method based on a pan-tilt camera
CN110136211A (zh) Workpiece positioning method and system based on active binocular vision technology
WO2020063058A1 (zh) Calibration method for a multi-degree-of-freedom movable vision system
CN113276106A (zh) Spatial positioning method and system for a climbing robot
WO2021004416A1 (zh) Method and apparatus for building a beacon map based on visual beacons
CN112308925A (zh) Binocular calibration method for a wearable device, device, and storage medium
CN111360821A (zh) Picking control method, apparatus, and device, and computer-readable storage medium
CN107633497A (zh) Image depth-of-field rendering method, system, and terminal
CN114851201A (zh) Six-degree-of-freedom visual closed-loop grasping method for a robotic arm based on TSDF 3D reconstruction
JP7427370B2 (ja) Imaging apparatus, image processing apparatus, image processing method, imaging apparatus calibration method, robot apparatus, article manufacturing method using a robot apparatus, control program, and recording medium
CN117173254A (zh) Camera calibration method, system, apparatus, and electronic device
CN117340879A (zh) Industrial robot parameter identification method and system based on a graph optimization model
JP2009301181A (ja) Image processing apparatus, image processing program, image processing method, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933955

Country of ref document: EP

Kind code of ref document: A1