CN112907586B - Vision-based mechanical arm control method, device and system and computer equipment - Google Patents

Publication number
CN112907586B
CN112907586B (application CN202110341428.2A)
Authority
CN
China
Prior art keywords
mechanical arm
scene image
target object
workbench
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110341428.2A
Other languages
Chinese (zh)
Other versions
CN112907586A (en)
Inventor
黄海松
饶期捷
张松松
范青松
白鑫宇
Current Assignee
Guizhou University
Original Assignee
Guizhou University
Priority date
Filing date
Publication date
Application filed by Guizhou University filed Critical Guizhou University
Priority to CN202110341428.2A
Publication of CN112907586A
Application granted
Publication of CN112907586B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0004 Industrial image inspection
    • G06T 5/70 Denoising; smoothing
    • G06T 7/13 Edge detection
    • G06T 7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/20061 Hough transform
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; machine component
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a vision-based mechanical arm control method, device, system and computer equipment. The method comprises: acquiring a scene image of the mechanical arm workbench captured by an imaging device, the image being acquired under the condition that the plane of the imaging lens of the imaging device is parallel to the workbench; determining the pixel coordinates, in the scene image, of a target object on the workbench; acquiring a calibration coefficient relating the imaging device to the workbench; determining the two-dimensional coordinates of the target object according to its pixel coordinates in the scene image and the calibration coefficient, and determining the three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the workbench; and controlling the mechanical arm to move according to the three-dimensional coordinates of the target object. This solves the problems in the related art that controlling a mechanical arm through a camera to grasp a target object involves matrix calculation or three-dimensional reconstruction of the target object, so that the data-processing method is complex, the amount of calculation is large, and the processor performance requirements are high.

Description

Vision-based mechanical arm control method, device and system and computer equipment
Technical Field
The present application relates to the technical field of industrial robots, and in particular, to a method, an apparatus, a system, and a computer device for controlling a vision-based mechanical arm.
Background
In the technical field of industrial robots, in the process of a mechanical arm grabbing an object based on vision, a camera must be used to acquire a picture of the object, the relative position between the camera and the mechanical arm must be calibrated, and the three-dimensional information of the object must be calculated.
In the related art, there are mainly three approaches to vision-based object grasping by a mechanical arm. First, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is calibrated; pictures are acquired by a color camera combined with a depth camera, the three-dimensional coordinates of the target object in the camera coordinate system are calculated, the three-dimensional coordinates of the target object in the mechanical arm coordinate system are calculated according to the transformation matrix between the camera coordinate system and the mechanical arm coordinate system, and the mechanical arm is then controlled to grasp. Second, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is calibrated; a binocular camera or multiple cameras acquire information about the object, the target object is three-dimensionally reconstructed to obtain its shape information and three-dimensional coordinates, and the mechanical arm is finally controlled to grasp. Third, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is not calibrated; the position error between the robot hand and the target object is observed directly from the image, and the expected movement of the robot hand that eliminates this error is calculated using the image Jacobian matrix. Controlling the mechanical arm through the camera to grasp the target object thus involves three coordinate systems: the target image coordinate system, the mechanical arm coordinate system and the camera coordinate system.
At present, for the problem that controlling a mechanical arm through a camera to grasp a target object involves matrix calculation or three-dimensional reconstruction of the target object, so that the data-processing method is complex, the amount of calculation is large and the processor performance requirements are high, no effective solution has yet been proposed in the related art.
Disclosure of Invention
The embodiments of the application provide a vision-based mechanical arm control method, device, system and computer equipment, which at least solve the problems in the related art that controlling a mechanical arm through a camera to grasp a target object involves matrix calculation or three-dimensional reconstruction of the target object, the data-processing method is complex, the amount of calculation is large, and the processor performance requirements are high.
In a first aspect, an embodiment of the present application provides a vision-based mechanical arm control method, where the method includes:
acquiring a scene image on a mechanical arm workbench, wherein the scene image is acquired through a camera device, and the scene image is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
determining pixel coordinates of a target object on the mechanical arm workbench in the scene image by carrying out target detection on the scene image;
acquiring a calibration coefficient of the camera device corresponding to the mechanical arm workbench;
determining a two-dimensional coordinate of the target object according to the pixel coordinate of the target object in the scene image and the calibration coefficient, and determining a three-dimensional coordinate of the target object according to the two-dimensional coordinate and the height information of the mechanical arm workbench;
and controlling the mechanical arm to move according to the three-dimensional coordinates of the target object.
In some embodiments, the image pickup device is mounted at an end of the mechanical arm, and a plane of an image pickup lens of the image pickup device is perpendicular to an extending direction of the mechanical arm; before acquiring the image of the scene on the robotic arm table, the method further comprises:
and adjusting the posture of the mechanical arm so that the mechanical arm is perpendicular to the mechanical arm workbench.
In some of these embodiments, the target object comprises a material to be grasped;
controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object comprises: and controlling the mechanical arm to move to a position corresponding to the material to be grabbed according to the three-dimensional coordinates of the material to be grabbed, and grabbing the material to be grabbed.
In some embodiments, before controlling the mechanical arm to move to a position corresponding to the material to be grabbed and grabbing the material to be grabbed according to the three-dimensional coordinates of the material to be grabbed, the method further includes:
acquiring the distance between the material to be grabbed and the adjacent materials to be grabbed;
and if the distance is smaller than a preset distance threshold, acquiring the azimuth of the adjacent materials to be grabbed, and rotating a clamp holder used for clamping the materials to be grabbed on the mechanical arm according to the azimuth.
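The neighbour check above can be sketched as follows. The helper name, the perpendicular-jaw rotation rule and the thresholds are illustrative assumptions for the sketch, not details taken from the patent:

```python
import math

def gripper_rotation(target, neighbours, dist_threshold):
    """If any neighbouring part is closer than dist_threshold (pixels),
    return a gripper angle (degrees) perpendicular to the direction of
    the nearest neighbour; otherwise return None (no rotation needed).
    The perpendicular rule is an illustrative assumption."""
    nearest = None
    for (x, y) in neighbours:
        d = math.hypot(x - target[0], y - target[1])
        if nearest is None or d < nearest[0]:
            nearest = (d, x, y)
    if nearest is None or nearest[0] >= dist_threshold:
        return None
    # azimuth of the nearest neighbour as seen from the target
    azimuth = math.degrees(math.atan2(nearest[2] - target[1],
                                      nearest[1] - target[0]))
    return (azimuth + 90.0) % 180.0  # jaws at right angles to the neighbour
```

For example, a neighbour 10 px away along the image x-axis, with a 20 px threshold, yields a 90-degree jaw orientation; a neighbour farther than the threshold yields no rotation.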
In some of these embodiments, the target object comprises a material to be grasped;
by performing object detection on the scene image, determining pixel coordinates of a target object on the mechanical arm workbench in the scene image includes: and carrying out target detection on the material to be grabbed in the scene image by adopting a Hough transformation algorithm, and determining pixel coordinates of the material to be grabbed on the mechanical arm workbench in the scene image.
In some embodiments, the method further includes, before performing object detection on the material to be grabbed in the scene image by using a Hough transform algorithm to determine pixel coordinates of the material to be grabbed in the scene image on the mechanical arm workbench:
acquiring pixel values of the scene image;
and if the pixel value of the scene image is smaller than a preset first pixel threshold value or larger than a preset second pixel threshold value, adjusting the brightness of the scene image, wherein the second pixel threshold value is larger than the first pixel threshold value.
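A minimal sketch of this brightness adjustment, assuming the "pixel value of the scene image" is its mean gray level; the two thresholds and the target level are illustrative values:

```python
import numpy as np

def normalise_brightness(gray, low=60, high=200, target=128):
    """If the scene's mean gray level falls outside [low, high], rescale
    intensities toward a mid-grey mean (illustrative thresholds)."""
    mean = float(gray.mean())
    if low <= mean <= high:
        return gray  # exposure acceptable, detect on the image as-is
    gain = target / max(mean, 1e-6)
    return np.clip(gray.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```

An underexposed image (mean 30, say) is rescaled toward mid-grey before the Hough detection runs; an acceptably exposed image is passed through unchanged.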
In some of these embodiments, the target object comprises a carrier for receiving the material to be grasped;
controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object comprises: and controlling the mechanical arm to move to a position corresponding to the bearing object according to the three-dimensional coordinates of the bearing object, and placing the grabbed material to be grabbed on the bearing object.
In some of these embodiments, the target object comprises a carrier for receiving the material to be grasped;
by performing object detection on the scene image, determining pixel coordinates of a target object on the mechanical arm workbench in the scene image includes: and carrying out target detection on the bearing object in the scene image by adopting a Canny edge detection algorithm, and determining the pixel coordinates of the bearing object in the scene image on the mechanical arm workbench.
In some embodiments, performing object detection on the carrier in the scene image by using a Canny edge detection algorithm, and determining pixel coordinates of the carrier on the mechanical arm workbench in the scene image includes:
denoising the scene image through a Gaussian filter to determine a denoised scene image;
determining gradient values and gradient directions of pixel points in the scene image through operators of the Canny edge detection algorithm, and filtering non-maximum pixel points in the scene image according to the gradient values and the gradient directions;
and detecting edges in the scene image through a detection threshold value, and determining pixel coordinates of the bearing object on the mechanical arm workbench in the scene image.
In some of these embodiments, obtaining the detection threshold includes:
determining the gray level of the scene image that maximizes the between-class variance (the Otsu method);
and determining the detection threshold according to this maximizing gray level.
In some of these embodiments, determining pixel coordinates of a target object on the robotic arm table in the scene image by performing target detection on the scene image comprises:
determining a first pixel coordinate of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image, and determining a second pixel coordinate of a reference point in the scene image according to the scene image;
a difference between the first pixel coordinates and the second pixel coordinates is determined and defined as pixel coordinates of the target object in the scene image.
In some embodiments, obtaining the calibration coefficient of the image capturing device corresponding to the robotic arm table includes:
calibrating the camera device through the Zhang Zhengyou calibration algorithm, and determining parameters of the camera device;
and acquiring the height of the camera device above the mechanical arm workbench, and determining the calibration coefficient relating the camera device to the mechanical arm workbench according to this height and the parameters.
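Under a pinhole-camera model with the lens plane parallel to the table, the calibration coefficient can be understood as the workbench length covered by one pixel. The sketch below assumes the focal length in pixels comes from Zhang's calibration (e.g. via `cv2.calibrateCamera`); the numeric values are illustrative:

```python
def calibration_coefficient(focal_px, height_mm):
    """Pinhole-model scale: workbench millimetres covered by one pixel
    when the lens plane is parallel to the table. focal_px is the focal
    length in pixels from the camera intrinsics (assumed to come from
    Zhang's calibration); height_mm is the lens height above the table."""
    return height_mm / focal_px

# e.g. a 900 px focal length mounted 450 mm above the workbench:
k = calibration_coefficient(900.0, 450.0)
print(k)  # 0.5 mm per pixel
```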
In a second aspect, the present application provides a vision-based robotic arm control device, the device comprising: the device comprises an acquisition module, an identification positioning module and a control module;
the acquisition module is used for acquiring a scene image on the mechanical arm workbench, wherein the scene image is acquired through the camera device and is acquired under the condition that the plane of the camera lens of the camera device is parallel to the mechanical arm workbench; the acquisition module is also used for acquiring calibration coefficients of the camera device corresponding to the mechanical arm workbench;
the identification and positioning module is used for determining pixel coordinates of a target object on the mechanical arm workbench in the scene image by carrying out target detection on the scene image; the identification positioning module is also used for determining the two-dimensional coordinates of the target object according to the pixel coordinates of the target object in the scene image and the calibration coefficient, and determining the three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the mechanical arm workbench;
And the control module is used for controlling the mechanical arm to move according to the three-dimensional coordinates of the target object.
In a third aspect, the present application provides a vision-based robotic arm control system, the system comprising a camera device, a robotic arm, and a processor;
the processor is connected with the camera device and is used for acquiring a scene image on the mechanical arm workbench, wherein the scene image is acquired through the camera device and is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
the processor is used for determining pixel coordinates of a target object on the mechanical arm workbench in the scene image by carrying out target detection on the scene image; the processor is also used for acquiring a calibration coefficient corresponding to the camera device and the mechanical arm workbench;
the processor is used for determining two-dimensional coordinates of the target object according to pixel coordinates of the target object in the scene image and the calibration coefficient, determining three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the mechanical arm workbench, and controlling the mechanical arm to move according to the three-dimensional coordinates of the target object.
In a fourth aspect, embodiments of the present application provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the vision-based robotic arm control method according to the first aspect described above when the processor executes the computer program.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a vision-based robotic arm control method as described in the first aspect above.
Compared with the related art, in the vision-based mechanical arm control method, device, system and computer equipment provided by the embodiments of the application, a scene image of the mechanical arm workbench is acquired by the imaging device under the condition that the plane of its imaging lens is parallel to the workbench; the pixel coordinates of the target object on the workbench in the scene image are determined by performing target detection on the scene image; a calibration coefficient relating the imaging device to the workbench is acquired; the two-dimensional coordinates of the target object are determined according to its pixel coordinates in the scene image and the calibration coefficient, and its three-dimensional coordinates according to the two-dimensional coordinates and the height information of the workbench; and the mechanical arm is controlled to move according to the three-dimensional coordinates of the target object. This solves the problems in the related art that controlling a mechanical arm through a camera to grasp a target object involves matrix calculation or three-dimensional reconstruction of the target object, so that the data-processing method is complex, the amount of calculation is large, and the processor performance requirements are high, thereby reducing the amount of calculation in mechanical arm control and improving the data-processing efficiency.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects and advantages of the application will become apparent from the description, the drawings and the claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of a vision-based robotic arm control method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of controlling a robotic arm with a target object including an object to be grasped according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of controlling a robotic arm with a target object including a load according to an embodiment of the present application;
FIG. 4 is a block diagram of a vision-based robotic arm control device according to an embodiment of the present application;
FIG. 5 is a block diagram of a vision-based robotic arm control system according to an embodiment of the present application;
fig. 6 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, without creative effort, based on the embodiments provided herein fall within the scope of protection of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and it is possible for those of ordinary skill in the art to apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking of design, fabrication or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar terms herein do not denote a limitation of quantity, but rather denote the singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The term "plurality" as used herein refers to two or more.
In the related art, there are mainly three approaches to vision-based object grasping by a mechanical arm. First, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is calibrated; pictures are acquired by a color camera combined with a depth camera, the three-dimensional coordinates of the target object in the camera coordinate system are calculated, the three-dimensional coordinates of the target object in the mechanical arm coordinate system are calculated according to the transformation matrix between the two coordinate systems, and the mechanical arm is then controlled to grasp. Second, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is calibrated; a binocular camera or multiple cameras acquire information about the object, the target object is three-dimensionally reconstructed to obtain its shape information and three-dimensional coordinates, and the mechanical arm is finally controlled to grasp. Third, the positions of the camera and the mechanical arm are fixed and the hand-eye relationship is not calibrated; the position error between the robot hand and the target object is observed directly from the image, and the expected movement of the robot hand that eliminates this error is calculated using the image Jacobian matrix. In the related art, controlling the mechanical arm through the camera to grasp the target object thus involves three coordinate systems, namely the target image coordinate system, the mechanical arm coordinate system and the camera coordinate system, and all three methods involve matrix calculation or three-dimensional reconstruction of the target object, so the data-processing method is complex and the amount of calculation is large.
The application provides a vision-based mechanical arm control method. A scene image of the mechanical arm workbench is acquired by a camera device under the condition that the plane of its camera lens is parallel to the workbench, and target detection is performed on the scene image to determine the pixel coordinates, in the scene image, of a target object on the workbench. A calibration coefficient relating the camera device to the workbench is then acquired, the two-dimensional coordinates of the target object are determined according to its pixel coordinates in the scene image and the calibration coefficient, and its three-dimensional coordinates are determined according to the two-dimensional coordinates and the height information of the workbench. Finally, the mechanical arm is controlled to move according to the three-dimensional coordinates of the target object. Compared with the related art, in which controlling a mechanical arm through a camera to grasp a target object involves matrix calculation or three-dimensional reconstruction of the target object and therefore a complex data-processing method, a large amount of calculation and high processor performance requirements, this method reduces the amount of calculation and the performance demanded of the processor.
This embodiment provides a vision-based mechanical arm control method. FIG. 1 is a flowchart of the vision-based mechanical arm control method according to an embodiment of the present application; as shown in FIG. 1, the method includes the following steps:
step S101, acquiring a scene image on a mechanical arm workbench, wherein the scene image is acquired through a camera device, and the scene image is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
specifically, the image pickup device is mounted at the end of the mechanical arm, with the plane of its image pickup lens perpendicular to the extending direction of the mechanical arm; before a scene image on the mechanical arm workbench is acquired, the posture of the mechanical arm is adjusted so that the mechanical arm is perpendicular to the workbench, which makes the plane of the imaging lens parallel to the workbench. Mounting the imaging device at the end of the mechanical arm in this way therefore makes it convenient to acquire the scene image under the condition that the plane of the imaging lens is parallel to the workbench;
Step S102, determining pixel coordinates of a target object on the mechanical arm workbench in the scene image by performing target detection on the scene image; it should be noted that the target object may include a material to be grabbed, a carrier for accommodating the material, or both;
step S103, obtaining a calibration coefficient of the camera device corresponding to the mechanical arm workbench;
step S104, determining the two-dimensional coordinates of the target object according to the pixel coordinates and the calibration coefficients of the target object in the scene image, and determining the three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the mechanical arm workbench;
specifically, regarding the 'height information of the mechanical arm workbench' used in determining the three-dimensional coordinates of the target object from the two-dimensional coordinates: if the origin of the three-dimensional coordinate system lies on the ground on which the workbench stands, the height information is the height of the workbench itself; if the origin lies in the plane of the workbench, the height information is the height of the target object above the workbench plane;
Step S105, controlling the mechanical arm to move according to the three-dimensional coordinates of the target object;
if the target object includes a material to be grabbed, controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object includes: according to the three-dimensional coordinates of the material to be grabbed, controlling the mechanical arm to move to a position corresponding to the material to be grabbed and grabbing the material to be grabbed; if the target object includes a carrier, controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object includes: according to the three-dimensional coordinates of the bearing object, controlling the mechanical arm to move to a position corresponding to the bearing object and placing the grabbed material to be grabbed on the bearing object; if the target object comprises a material to be grabbed and a carrier, controlling the mechanical arm to move according to the three-dimensional coordinates of the target object comprises the following steps: controlling the mechanical arm to move to a position corresponding to the material to be grabbed according to the three-dimensional coordinates of the material to be grabbed, grabbing the material to be grabbed, controlling the mechanical arm to move to a position corresponding to the carrier according to the three-dimensional coordinates of the carrier, and placing the grabbed material to be grabbed on the carrier;
Through the steps S101 to S105, firstly, a scene image of the mechanical arm workbench acquired by the camera device is obtained under the condition that the plane of the camera lens of the camera device is parallel to the mechanical arm workbench; secondly, the pixel coordinates of the target object on the mechanical arm workbench in the scene image, namely the two-dimensional coordinates of the target object in the scene image, are determined through target detection. Considering that the compression ratio of the acquired scene image relative to the mechanical arm workbench scene differs as the distance between the camera device and the workbench differs, the calibration coefficient corresponding to the camera device and the mechanical arm workbench is further acquired, the actual two-dimensional coordinates of the target object are determined according to the pixel coordinates of the target object in the scene image and the calibration coefficient, and the three-dimensional coordinates of the target object are determined according to the two-dimensional coordinates and the height information of the mechanical arm workbench. In this way, the conversion from pixel coordinates in the scene image to three-dimensional workspace coordinates is simplified to a conversion to workspace coordinates in a two-dimensional plane plus known height information. Finally, the mechanical arm is controlled to move according to the determined three-dimensional coordinates of the target object. This greatly reduces the amount of calculation required when controlling the mechanical arm to operate on the target object, solves the problems of large calculation load and high processor performance requirements in mechanical arm control, and improves the efficiency of data processing in mechanical arm control.
In some embodiments, if the target object includes a material to be grabbed, then before the mechanical arm is controlled, according to the three-dimensional coordinates of the material to be grabbed, to move to the position corresponding to the material to be grabbed and grab it, the vision-based mechanical arm control method further includes:
acquiring the distance between adjacent materials to be grabbed and the materials to be grabbed;
if the distance is smaller than a preset distance threshold, acquiring the directions of adjacent materials to be grabbed, and rotating a clamp holder used for clamping the materials to be grabbed on the mechanical arm according to the directions;
specifically, if the distance is smaller than a preset distance threshold, the direction of the adjacent material to be grasped relative to the material to be grasped is determined according to the center coordinates of the adjacent material, where the direction can be one of southeast, southwest, northeast and northwest, so that the gripper used for clamping the material to be grasped can conveniently rotate according to this direction, for example by 90 degrees or 180 degrees; if the distance between the material to be grasped and the adjacent material is too short, rotating the gripper according to the direction of the adjacent material relative to the material to be grasped avoids the gripper touching the adjacent material while grasping the target material.
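The distance check and the quadrant determination above can be sketched as follows; image coordinates are assumed (y grows downward), and the rotation-angle mapping is an illustrative assumption, not a value given in the patent:

```python
import math

def too_close(a, b, threshold):
    # Euclidean distance between the centers of two materials.
    return math.dist(a, b) < threshold

def neighbor_bearing(target_xy, neighbor_xy):
    # One of southeast / southwest / northeast / northwest, relative
    # to the material to be grasped, in image coordinates (y downward).
    dx = neighbor_xy[0] - target_xy[0]
    dy = neighbor_xy[1] - target_xy[1]
    ns = "south" if dy >= 0 else "north"
    ew = "east" if dx >= 0 else "west"
    return ns + ew

# Hypothetical gripper rotations (degrees) chosen so the jaws open
# away from the neighbor; a real system would calibrate these.
ROTATION_FOR_BEARING = {"northeast": 90, "southeast": 180, "southwest": 270, "northwest": 0}
```

For example, a neighbor to the lower right of the target yields `"southeast"`, and the gripper would rotate by the angle mapped to that bearing.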
In some embodiments, fig. 2 is a flowchart of a method for controlling a robot arm in which a target object includes an object to be grasped according to an embodiment of the present application, and in the case where the target object includes a material to be grasped, as shown in fig. 2, the method for controlling a robot arm based on vision includes the following steps:
step S201, acquiring a scene image on a mechanical arm workbench, wherein the scene image is acquired through a camera device, and the scene image is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
step S202, performing target detection on a material to be grabbed in a scene image by adopting a Hough transformation algorithm, and determining pixel coordinates of the material to be grabbed on a mechanical arm workbench in the scene image; specifically, considering that the Hough transformation algorithm has higher precision when detecting objects with relatively small volumes, further in the embodiment of the application, the Hough transformation algorithm is adopted to detect the materials to be grabbed;
step S203, obtaining a calibration coefficient of the camera device corresponding to the mechanical arm workbench;
step S204, determining two-dimensional coordinates of the materials to be grabbed according to pixel coordinates and calibration coefficients of the materials to be grabbed in the scene images, and determining three-dimensional coordinates of the materials to be grabbed according to the two-dimensional coordinates and height information of the mechanical arm workbench;
Step S205, according to the three-dimensional coordinates of the material to be grabbed, the mechanical arm is controlled to move to the position corresponding to the material to be grabbed and grabs the material to be grabbed.
In some embodiments, the method further includes, before performing object detection on the material to be grabbed in the scene image by using a Hough transform algorithm and determining pixel coordinates of the material to be grabbed on the mechanical arm workbench in the scene image:
acquiring a pixel value of a scene image, and if the pixel value of the scene image is smaller than a preset first pixel threshold or larger than a preset second pixel threshold, adjusting the brightness of the scene image, wherein the second pixel threshold is larger than the first pixel threshold;
it should be noted that the calculation formula of the pixel value of the scene image is:

F(x, y) = [R(x, y) + G(x, y) + B(x, y)] / 3   (Equation 1)
in the above formula 1, F (x, y) represents an average pixel value, that is, a pixel value of a scene image; r (x, y), G (x, y), B (x, y) respectively represent pixel values of the respective channels; specifically, by adjusting the brightness of the scene image, the collected scene image is prevented from being too bright or too dark, so that the identification of the target object in the scene image is enhanced;
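A sketch of the brightness check above, averaging the three channels per Equation 1; the concrete threshold values here are illustrative assumptions, not values from the patent:

```python
def mean_pixel_value(pixels):
    # pixels: iterable of (R, G, B) tuples.
    # Equation 1: F(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3, averaged over the image.
    per_pixel = [(r + g + b) / 3 for r, g, b in pixels]
    return sum(per_pixel) / len(per_pixel)

def needs_brightness_adjustment(f, first_threshold=60, second_threshold=200):
    # The second threshold must be larger than the first; both values are assumptions.
    return f < first_threshold or f > second_threshold
```

An image whose mean falls below the first threshold (too dark) or above the second (too bright) would be brightness-adjusted before target detection.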
it should be further described that, in order to solve the misjudgment phenomenon that easily occurs when the circular target is identified in the practical application process, in the embodiment of the present application, according to the geometric characteristic that the distances from the circle center to any point of the circular edge are equal, the number of scattered point samples selected on the circular edge is increased, and the number of the scattered point samples is set to a certain threshold value, so as to solve the misjudgment on the non-circular target.
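The scattered-point check described above can be sketched as follows: a candidate circle is accepted only when enough edge samples lie at the expected distance from the center (the tolerance and hit-ratio thresholds are assumptions for illustration):

```python
import math

def is_circular(center, radius, edge_points, tol=1.0, min_hit_ratio=0.8):
    # Geometric property used above: every point of a circular edge is
    # equidistant from the center; count how many samples satisfy it.
    cx, cy = center
    hits = sum(
        1 for x, y in edge_points
        if abs(math.hypot(x - cx, y - cy) - radius) <= tol
    )
    return hits / len(edge_points) >= min_hit_ratio

# A true circle: sampled points all lie at the nominal radius.
circle_pts = [(10 * math.cos(t / 10), 10 * math.sin(t / 10)) for t in range(63)]
# A square's corners: symmetric about the center, but at the wrong radius.
square_pts = [(10, 10), (-10, 10), (10, -10), (-10, -10)]
```

Raising the number of samples and the hit-ratio threshold, as the passage describes, is what rejects non-circular targets that a looser vote count would accept.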
In some embodiments, fig. 3 is a flowchart of a method for controlling a robot arm in which a target object includes a carrier according to an embodiment of the present application, and in the case where the target object includes a carrier as shown in fig. 3, the method for controlling a robot arm based on vision includes the steps of:
step S301, acquiring a scene image on a mechanical arm workbench, wherein the scene image is acquired through a camera device, and the scene image is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
step S302, carrying out target detection on a bearing object in a scene image by adopting a Canny edge detection algorithm, and determining pixel coordinates of the bearing object in the scene image on a mechanical arm workbench; specifically, considering that the accuracy is higher when the Canny edge detection algorithm is suitable for detecting objects with relatively large volumes, further in the embodiment of the application, the Canny edge detection algorithm is adopted to detect the carrier;
step S303, obtaining a calibration coefficient of the camera device corresponding to the mechanical arm workbench;
step S304, determining two-dimensional coordinates of the bearing object according to pixel coordinates and calibration coefficients of the bearing object in the scene image, and determining three-dimensional coordinates of the bearing object according to the two-dimensional coordinates and height information of the mechanical arm workbench;
Step S305, according to the three-dimensional coordinates of the bearing object, the mechanical arm is controlled to move to the position corresponding to the bearing object, and the grabbed material to be grabbed is placed on the bearing object.
In some embodiments, performing target detection on the carrier in the scene image by using the Canny edge detection algorithm and determining the pixel coordinates of the carrier on the mechanical arm workbench in the scene image includes:
denoising the scene image through a Gaussian filter to determine a denoised scene image; determining gradient values and gradient directions of pixel points in the scene image by an operator of a Canny edge detection algorithm, and filtering non-maximum pixel points in the scene image according to the gradient values and the gradient directions; edges in the scene image are detected by detecting thresholds and pixel coordinates of the carriage in the scene image on the robotic arm table are determined.
Specifically, the first stage: the scene image is convolved with a Gaussian filter, which smooths the image to eliminate noise; the (2k+1)×(2k+1) Gaussian filter kernel is generated as follows:

H(i, j) = (1 / (2πσ²)) · exp(−[(i − k − 1)² + (j − k − 1)²] / (2σ²))   (Equation 2)

In the above Equation 2, k determines the size of the filter kernel, σ is the standard deviation of the Gaussian, and i, j are the row and column indices within the kernel;
and a second stage: calculating the first derivative values in the horizontal (Gx) and vertical (Gy) directions for all pixels in the scene image by using the operator of the Canny edge detection algorithm, thereby determining the gradient values and gradient directions of the pixel points in the scene image, which are respectively calculated according to the following formulas:

G = √(Gx² + Gy²)   (Equation 3)

θ = arctan(Gy / Gx)   (Equation 4)

In the above Equations 3 and 4, Gx is the first derivative value of a pixel point in the scene image in the horizontal direction, Gy is the first derivative value of the pixel point in the vertical direction, G is the gradient value of the pixel point, and θ is the gradient direction of the pixel point; in the second stage, after the gradient values and gradient directions of the pixel points in the scene image are determined, non-maximum pixel points in the scene image are filtered according to the gradient value and gradient direction, so that stray responses caused by edge detection are eliminated;
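The second-stage computation can be sketched with a Sobel operator (a common choice for the Canny derivative operator referenced above) together with Equations 3 and 4; a minimal version on plain nested lists:

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal derivative kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical derivative kernel

def sobel_at(img, r, c):
    # First derivatives Gx, Gy at interior pixel (r, c).
    gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j] for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j] for i in range(3) for j in range(3))
    return gx, gy

def gradient(gx, gy):
    g = math.hypot(gx, gy)      # Equation 3: G = sqrt(Gx^2 + Gy^2)
    theta = math.atan2(gy, gx)  # Equation 4 (atan2 avoids division by zero when Gx = 0)
    return g, theta

# A vertical step edge: the gradient is purely horizontal.
img = [[0, 0, 1, 1]] * 3
gx, gy = sobel_at(img, 1, 1)
g, theta = gradient(gx, gy)
```

On the vertical step edge, Gy vanishes and the gradient direction θ is 0 (pointing across the edge), which is exactly the direction used later for non-maximum suppression.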
In the third stage, the real and potential edges in the scene image are determined by applying the detection threshold to the scene image, and the edge detection of the final scene image is completed by suppressing isolated weak edges.
It should be noted that obtaining the detection threshold includes: determining, by the maximum inter-class variance method, the gray level of the scene image that maximizes the inter-class variance; and determining the detection threshold according to that maximizing gray level;
the maximum inter-class variance method, abbreviated as the OTSU algorithm, adaptively determines a threshold from the gray-level histogram information of an image by selecting the threshold at which the inter-class variance is maximum. The basic idea of the algorithm is that there exists a threshold ω that divides the acquired image pixels into two classes, C1 (gray level smaller than ω) and C2 (gray level larger than ω). Let the gray level of the scene image be k, the mean values of the two classes of pixels be M1 and M2 respectively, the mean of all pixels of the scene image be MG, and the probabilities that a pixel is divided into classes C1 and C2 be P1 and P2 respectively; then the following formulas hold:

P1·M1 + P2·M2 = MG   (Equation 5)

P1 + P2 = 1   (Equation 6)
Secondly, the cumulative mean m up to gray level k and the global pixel mean MG of the scene image are respectively:

m = Σ_{i=0}^{k} i·p_i,  MG = Σ_{i=0}^{255} i·p_i

where p_i is the probability of gray level i; according to the variance concept, combining Equations 5 to 11, the inter-class variance formula is finally obtained as:

σ² = P1·(M1 − MG)² + P2·(M2 − MG)² = P1·P2·(M1 − M2)²   (Equation 12)
Traversing gray levels 0 to 255 according to Equation 12, the gray level k that maximizes Equation 12 is determined; this gray level k is the OTSU threshold, that is, the detection threshold. The OTSU algorithm replaces traditional manual setting of the threshold, and the self-adaptive acquisition of the detection threshold improves the definition of the generated edge image.
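A pure-Python sketch of the adaptive threshold selection described above, using the equivalent between-class variance σ² = P1·P2·(M1 − M2)² and scanning all 256 gray levels:

```python
def otsu_threshold(hist):
    # hist: list of 256 pixel counts for gray levels 0..255.
    total = sum(hist)
    mg = sum(i * h for i, h in enumerate(hist)) / total  # global mean MG
    best_k, best_var = 0, -1.0
    p1 = 0.0   # cumulative probability of class C1
    m = 0.0    # cumulative mean up to gray level k
    for k in range(256):
        p1 += hist[k] / total
        m += k * hist[k] / total
        p2 = 1.0 - p1
        if p1 <= 0.0 or p2 <= 0.0:
            continue
        m1, m2 = m / p1, (mg - m) / p2   # class means M1, M2
        var = p1 * p2 * (m1 - m2) ** 2   # between-class variance (Equation 12)
        if var > best_var:
            best_k, best_var = k, var
    return best_k

# Bimodal histogram: 100 pixels at level 50 and 100 pixels at level 200.
hist = [0] * 256
hist[50] = hist[200] = 100
```

On the bimodal histogram above, the returned threshold falls between the two modes, separating them cleanly without any manual tuning.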
In some embodiments, obtaining the calibration coefficient of the camera device corresponding to the mechanical arm workbench includes: calibrating the camera device through the Zhang Zhengyou calibration algorithm and determining the parameters of the camera device; acquiring the height of the camera device from the mechanical arm workbench, and determining the calibration coefficient corresponding to the camera device and the mechanical arm workbench according to that height and the parameters;
specifically, the parameters of the image pickup device can be obtained through the Zhang Zhengyou calibration algorithm, where the parameters of the image pickup device include its distortion coefficients; further, the calibration coefficient of the image pickup device corresponding to the mechanical arm workbench is determined according to the distortion coefficients of the image pickup device and the height of the image pickup device from the mechanical arm workbench.
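Under a pinhole-camera assumption (not stated explicitly in the patent), the calibration coefficient — millimeters of workbench per image pixel — scales linearly with the height of the camera above the workbench, which is why both the intrinsic parameters and the height are needed. A minimal sketch:

```python
def calibration_coefficient(height_mm, focal_length_px):
    # Pinhole-model sketch (an assumption, not the patent's exact formula):
    # one pixel at working distance `height_mm` covers
    # height_mm / focal_length_px millimeters on the workbench.
    # `focal_length_px` would come from Zhang Zhengyou calibration.
    if focal_length_px <= 0:
        raise ValueError("focal length must be positive")
    return height_mm / focal_length_px
```

Doubling the mounting height doubles the coefficient, matching the observation earlier that the scene's compression ratio varies with the camera-to-workbench distance.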
In some of these embodiments, determining pixel coordinates of a target object on the robotic arm table in the scene image by performing target detection on the scene image includes:
determining a first pixel coordinate of a target object on the mechanical arm workbench in the scene image by performing target detection on the scene image, and determining a second pixel coordinate of a relative point in the scene image according to the scene image; determining the difference value between the first pixel coordinate and the second pixel coordinate, and defining the difference value as the pixel coordinate of the target object in the scene image;
specifically, the relative point in the scene image refers to a reference point selected for determining the pixel coordinates of the target object in the scene image, and thus the relative point in the scene image may be the center point of the scene image, and the second pixel coordinates are the pixel coordinates of the center point of the scene image. It should be noted that, in the embodiment of the present application, determining the three-dimensional coordinates of the target object according to the two-dimensional coordinates of the target object and the height information of the mechanical arm workbench includes: determining the coordinates of the target object under a mechanical arm base coordinate system according to the two-dimensional coordinates of the target object and the height information of the mechanical arm workbench; controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object comprises: controlling the mechanical arm to move according to the coordinates of the target object under the mechanical arm base coordinate system; the coordinates of the target object under the mechanical arm base coordinate system are determined according to the two-dimensional coordinates of the target object and the height information of the mechanical arm workbench and can be converted through the following conversion formula:
α = (β − γ) × Cc + D + Pa + O   (Equation 13)
In the above Equation 13, α represents the coordinates of the target object in the mechanical arm base coordinate system; β represents the pixel coordinates of the target object in the scene image; γ represents the pixel coordinates of the center point of the scene image; Pa represents the pose of the mechanical arm; O represents the offset between the mechanical arm end effector and the camera device in the x and y directions of the two-dimensional coordinate system, where the mechanical arm end effector refers to the gripper at the end of the mechanical arm for clamping the target object; Cc represents the calibration coefficient of the imaging device; D represents the distance from the camera to the mechanical arm workbench. The conversion from the two-dimensional coordinates of the target object in the scene image to coordinates in the mechanical arm base coordinate system can be realized through Equation 13, so that the operation of the mechanical arm on the target object can be controlled according to the coordinates of the target object in the base coordinate system, which reduces the amount of calculation in mechanical arm control and improves the efficiency of data processing in mechanical arm control;
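Equation 13 can be sketched as code as follows; treating every term as a per-axis scalar is an interpretation of the patent's formula for illustration, not something it states explicitly:

```python
def pixel_to_base_axis(beta, gamma, cc, d, pa, o):
    # Equation 13: alpha = (beta - gamma) * Cc + D + Pa + O
    # beta:  pixel coordinate of the target in the scene image
    # gamma: pixel coordinate of the scene-image center point
    # cc:    calibration coefficient of the camera (e.g. mm per pixel)
    # d:     camera-to-workbench distance term
    # pa:    mechanical arm pose term
    # o:     end-effector / camera offset term
    return (beta - gamma) * cc + d + pa + o
```

For a target 10 pixels right of the image center with a 0.5 mm/pixel coefficient, the pixel offset contributes 5 mm on top of the pose and offset terms.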
further, firstly, the method supports multi-target grabbing and can identify a plurality of different objects; secondly, it realizes flexible grabbing: compared with fixed grabbing in a factory, it does not need the placement area or position of the object to be fixed; then, compared with other methods, the scheme only needs one color camera or black-and-white camera, and the performance requirement on the processor is very low; in addition, the method transfers well: all round or quasi-round objects can be detected, and simulation training does not need to be carried out again for the detection of a new target; finally, the calculation process is simple, the calculation amount is small, and no three-dimensional reconstruction or the like is needed.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a vision-based mechanical arm control device, which is used for implementing the above embodiment and the preferred embodiment, and is not described again. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
In some embodiments, fig. 4 is a block diagram of a vision-based robotic arm control device according to an embodiment of the present application, as shown in fig. 4, the vision-based robotic arm control device includes: an acquisition module 41, an identification positioning module 42 and a control module 43;
an acquiring module 41, configured to acquire a scene image on the mechanical arm workbench, where the scene image is acquired by the image capturing device, and the scene image is acquired when a plane where an image capturing lens of the image capturing device is located is parallel to the mechanical arm workbench; the acquisition module 41 is further configured to acquire a calibration coefficient corresponding to the mechanical arm workbench of the image capturing device;
The identifying and positioning module 42 is configured to determine pixel coordinates of a target object on the mechanical arm workbench in the scene image by performing target detection on the scene image; the recognition and positioning module 42 is further configured to determine a two-dimensional coordinate of the target object according to the pixel coordinate and the calibration coefficient of the target object in the scene image, and determine a three-dimensional coordinate of the target object according to the two-dimensional coordinate and the height information of the mechanical arm workbench;
and the control module 43 is used for controlling the mechanical arm to move according to the three-dimensional coordinates of the target object.
In some embodiments, the obtaining module 41, the identifying and positioning module 42, and the control module 43 are further configured to implement the steps in the vision-based mechanical arm control method provided in the foregoing embodiments, which are not described herein.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The embodiment also provides a vision-based mechanical arm control system, and fig. 5 is a structural block diagram of the vision-based mechanical arm control system according to an embodiment of the present application, as shown in fig. 5, the vision-based mechanical arm control system includes an image pickup device 51, a mechanical arm 52, and a processor 53;
The processor 53 is connected to the image pickup device 51, and is configured to acquire a scene image on the mechanical arm workbench, where the scene image is acquired by the image pickup device 51, and the scene image is acquired when a plane where an image pickup lens of the image pickup device 51 is located is parallel to the mechanical arm workbench;
the processor 53 is configured to determine pixel coordinates of a target object on the robotic arm workbench in the scene image by performing target detection on the scene image; the processor 53 is further configured to obtain a calibration coefficient of the image capturing device 51 corresponding to the mechanical arm workbench;
the processor 53 is configured to determine two-dimensional coordinates of the target object according to the pixel coordinates and the calibration coefficients of the target object in the scene image, determine three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the robot arm stage, and control the robot arm to move according to the three-dimensional coordinates of the target object.
In some embodiments, the processor 53 is further configured to implement the steps in the vision-based mechanical arm control method provided in the foregoing embodiments, which is not described herein.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a vision-based robotic arm control method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 6 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application, and as shown in fig. 6, a computer device is provided, which may be a server, and an internal structure diagram thereof may be shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of vision-based robotic arm control.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the method for vision-based robotic arm control provided in the above embodiments when executing the computer program.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps in the vision-based robotic arm control method provided by the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (9)

1. A vision-based robotic arm control method, the method comprising:
acquiring a scene image on a mechanical arm workbench, wherein the scene image is acquired through a camera device, and the scene image is acquired under the condition that the plane of a camera lens of the camera device is parallel to the mechanical arm workbench;
Determining pixel coordinates of a target object on the mechanical arm workbench in the scene image by carrying out target detection on the scene image;
acquiring a calibration coefficient of the camera device corresponding to the mechanical arm workbench;
determining a two-dimensional coordinate of the target object according to the pixel coordinate of the target object in the scene image and the calibration coefficient, and determining a three-dimensional coordinate of the target object according to the two-dimensional coordinate and the height information of the mechanical arm workbench;
controlling the mechanical arm to move according to the three-dimensional coordinates of the target object;
the obtaining the calibration coefficient of the camera device corresponding to the mechanical arm workbench comprises the following steps:
calibrating the camera device through a Zhang Zhengyou calibration algorithm, and determining parameters of the camera device;
and acquiring the height of the camera device from the mechanical arm workbench, and determining a calibration coefficient corresponding to the camera device and the mechanical arm workbench according to the height of the camera device from the mechanical arm workbench and the parameters.
2. The vision-based robot arm control method according to claim 1, wherein the image pickup device is mounted at an end of the robot arm, and a plane in which an image pickup lens of the image pickup device is located is perpendicular to an extending direction of the robot arm; before acquiring the image of the scene on the robotic arm table, the method further comprises:
And adjusting the posture of the mechanical arm so that the mechanical arm is perpendicular to the mechanical arm workbench.
3. The vision-based robotic arm control method of claim 1, wherein the target object comprises a material to be grasped;
controlling the movement of the mechanical arm according to the three-dimensional coordinates of the target object comprises: according to the three-dimensional coordinates of the material to be grabbed, controlling the mechanical arm to move to a position corresponding to the material to be grabbed and grabbing the material to be grabbed;
according to the three-dimensional coordinates of the material to be grabbed, the mechanical arm is controlled to move to a position corresponding to the material to be grabbed, and before the material to be grabbed is grabbed, the method further comprises the steps of:
acquiring the distance between adjacent materials to be grabbed and the materials to be grabbed;
and if the distance is smaller than a preset distance threshold, acquiring the azimuth of the adjacent materials to be grabbed, and rotating a clamp holder used for clamping the materials to be grabbed on the mechanical arm according to the azimuth.
4. The vision-based mechanical arm control method according to claim 1, wherein the target object comprises a material to be grabbed;
determining the pixel coordinates of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image comprises: performing target detection on the material to be grabbed in the scene image by using a Hough transform algorithm, and determining the pixel coordinates of the material to be grabbed on the mechanical arm workbench in the scene image;
before performing target detection on the material to be grabbed in the scene image by using the Hough transform algorithm and determining the pixel coordinates of the material to be grabbed on the mechanical arm workbench in the scene image, the method further comprises:
acquiring the pixel values of the scene image;
if the pixel value of the scene image is smaller than a preset first pixel threshold or larger than a preset second pixel threshold, adjusting the brightness of the scene image, wherein the second pixel threshold is larger than the first pixel threshold.
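The brightness adjustment in claim 4 can be sketched as a mean-intensity test: if the mean gray value falls below the first threshold (too dark) or above the second (too bright), the image is linearly rescaled before Hough-transform detection. The function name and the example threshold values 60/200 are assumptions for illustration, not values from the patent.

```python
import numpy as np

def normalize_brightness(gray, low=60, high=200):
    """If the mean pixel value of the grayscale scene image is outside
    [low, high], linearly rescale the image toward the middle of the
    valid range so that subsequent Hough-transform target detection is
    more reliable; otherwise return the image unchanged."""
    mean = gray.mean()
    if low <= mean <= high:
        return gray  # brightness already acceptable
    target = (low + high) / 2.0
    scaled = gray.astype(np.float64) * (target / max(mean, 1e-6))
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

A linear rescale preserves relative contrast between the materials and the workbench background, which is what edge- and gradient-based detectors depend on.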
5. The vision-based mechanical arm control method according to claim 1, wherein the target object comprises a carrier for receiving the material to be grabbed;
controlling the mechanical arm to move according to the three-dimensional coordinates of the target object comprises: controlling the mechanical arm to move to the position corresponding to the carrier according to the three-dimensional coordinates of the carrier, and placing the grabbed material on the carrier.
6. The vision-based mechanical arm control method according to claim 1 or 5, wherein the target object comprises a carrier for receiving the material to be grabbed;
determining the pixel coordinates of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image comprises: performing target detection on the carrier in the scene image by using a Canny edge detection algorithm, and determining the pixel coordinates of the carrier on the mechanical arm workbench in the scene image.
7. The vision-based mechanical arm control method according to claim 6, wherein performing target detection on the carrier in the scene image by using the Canny edge detection algorithm and determining the pixel coordinates of the carrier on the mechanical arm workbench in the scene image comprises:
denoising the scene image through a Gaussian filter to obtain a denoised scene image;
determining the gradient magnitude and gradient direction of each pixel in the scene image through the operator of the Canny edge detection algorithm, and suppressing non-maximum pixels in the scene image according to the gradient magnitude and gradient direction;
detecting the edges in the scene image through a detection threshold, and determining the pixel coordinates of the carrier on the mechanical arm workbench in the scene image; wherein obtaining the detection threshold comprises: determining, by the maximum inter-class variance (Otsu) method, the gray level of the scene image that maximizes the inter-class variance, and determining the detection threshold according to that gray level.
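The threshold selection in claim 7 can be sketched with a plain-NumPy implementation of the maximum inter-class variance (Otsu) method. Deriving the Canny low threshold as half of the high one is a common heuristic assumed here; the patent does not specify how the detection threshold is derived from the Otsu gray level.

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum inter-class variance (Otsu): pick the gray level that
    maximizes the between-class variance of the image histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    hist /= hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class is empty at this split
        mu0 = (levels[:t] * hist[:t]).sum() / w0
        mu1 = (levels[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def canny_thresholds(gray):
    """Derive Canny high/low thresholds from the Otsu level
    (assumed heuristic: high = Otsu level, low = half of it)."""
    high = otsu_threshold(gray)
    return high // 2, high
```

On a bimodal image the returned level falls between the two dominant gray values, which is exactly where a carrier edge against the workbench background should be detected.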
8. The vision-based mechanical arm control method according to claim 1, wherein determining the pixel coordinates of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image comprises:
determining first pixel coordinates of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image, and determining second pixel coordinates of a reference point in the scene image according to the scene image;
determining the difference between the first pixel coordinates and the second pixel coordinates, and defining the difference as the pixel coordinates of the target object in the scene image.
9. A vision-based mechanical arm control system, characterized by comprising an image pickup device, a mechanical arm and a processor;
the processor is connected with the image pickup device and is used for acquiring the scene image on the mechanical arm workbench, wherein the scene image is acquired by the image pickup device under the condition that the plane of the camera lens of the image pickup device is parallel to the mechanical arm workbench;
the processor is used for determining the pixel coordinates of the target object on the mechanical arm workbench in the scene image by performing target detection on the scene image; the processor is further used for acquiring the calibration coefficient corresponding to the image pickup device and the mechanical arm workbench;
the processor is used for determining the two-dimensional coordinates of the target object according to the pixel coordinates of the target object in the scene image and the calibration coefficient, determining the three-dimensional coordinates of the target object according to the two-dimensional coordinates and the height information of the mechanical arm workbench, and controlling the mechanical arm to move according to the three-dimensional coordinates of the target object;
the processor is used for calibrating the image pickup device through the Zhang Zhengyou calibration algorithm and determining the parameters of the image pickup device; and acquiring the height of the image pickup device from the mechanical arm workbench, and determining the calibration coefficient corresponding to the image pickup device and the mechanical arm workbench according to that height and the parameters.
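Under a pinhole-camera assumption, the coordinate conversion performed by the processor can be sketched as below. The scale formula (coefficient = camera height / focal length in pixels) and all names are illustrative assumptions about how the calibration coefficient could combine the camera height with the intrinsic parameters from Zhang Zhengyou calibration; they are not the patented formulas.

```python
def mm_per_pixel(camera_height_mm, focal_length_px):
    """Calibration coefficient under a pinhole model: a point on the
    workbench plane at distance camera_height_mm projects through a
    lens of focal length focal_length_px (pixel units, obtained from
    Zhang Zhengyou calibration), so one pixel spans height/f mm."""
    return camera_height_mm / focal_length_px

def pixel_to_world(pixel, ref_pixel, coeff, table_height_mm):
    """Convert pixel coordinates, taken relative to a reference point
    in the image, into three-dimensional workbench-frame coordinates
    using the calibration coefficient; the z value is the known height
    of the mechanical arm workbench."""
    dx = (pixel[0] - ref_pixel[0]) * coeff
    dy = (pixel[1] - ref_pixel[1]) * coeff
    return (dx, dy, table_height_mm)
```

For example, with the camera 500 mm above the workbench and a 1000-pixel focal length, each pixel corresponds to 0.5 mm, so a target 100 pixels right of the reference point lies 50 mm away along x.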
CN202110341428.2A 2021-03-30 2021-03-30 Vision-based mechanical arm control method, device and system and computer equipment Active CN112907586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110341428.2A CN112907586B (en) 2021-03-30 2021-03-30 Vision-based mechanical arm control method, device and system and computer equipment

Publications (2)

Publication Number Publication Date
CN112907586A CN112907586A (en) 2021-06-04
CN112907586B true CN112907586B (en) 2024-02-02

Family

ID=76109768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110341428.2A Active CN112907586B (en) 2021-03-30 2021-03-30 Vision-based mechanical arm control method, device and system and computer equipment

Country Status (1)

Country Link
CN (1) CN112907586B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113733100B (en) * 2021-09-29 2022-10-28 珠海优特电力科技股份有限公司 Target positioning method, device, equipment and storage medium of inspection operation robot
CN113838042B (en) * 2021-09-30 2023-11-10 清华大学 Double-mechanical-arm operation question answering method and device, electronic equipment and storage medium
CN114543669B (en) * 2022-01-27 2023-08-01 珠海亿智电子科技有限公司 Mechanical arm calibration method, device, equipment and storage medium
CN117689716B (en) * 2023-12-15 2024-05-17 广州赛志系统科技有限公司 Plate visual positioning, identifying and grabbing method, control system and plate production line
CN117415826B (en) * 2023-12-19 2024-02-23 苏州一目万相科技有限公司 Control method and device of detection system and readable storage medium
CN117649449B (en) * 2024-01-30 2024-05-03 鲁东大学 Mechanical arm grabbing and positioning system based on computer vision

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2011143497A (en) * 2010-01-13 2011-07-28 Ihi Corp Device and method for tray transfer
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
WO2021023315A1 (en) * 2019-08-06 2021-02-11 华中科技大学 Hand-eye-coordinated grasping method based on fixation point of person's eye


Non-Patent Citations (1)

Title
Research on Target Positioning and Robot Planning System Based on Machine Vision; Yang Sanyong; Zeng Bi; Computer Measurement & Control (Issue 12); full text *

Also Published As

Publication number Publication date
CN112907586A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112907586B (en) Vision-based mechanical arm control method, device and system and computer equipment
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
WO2020010945A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN111627072B (en) Method, device and storage medium for calibrating multiple sensors
CN109479082B (en) Image processing method and apparatus
CN109920004B (en) Image processing method, device, calibration object combination, terminal equipment and calibration system
CN108805938B (en) Detection method of optical anti-shake module, mobile terminal and storage medium
CN112132874B (en) Calibration-plate-free heterogeneous image registration method and device, electronic equipment and storage medium
US20060215881A1 (en) Distance measurement apparatus, electronic device, distance measurement method, distance measurement control program and computer-readable recording medium
CN112907675B (en) Calibration method, device, system, equipment and storage medium of image acquisition equipment
CN109255818B (en) Novel target and extraction method of sub-pixel level angular points thereof
CN110722558B (en) Origin correction method and device for robot, controller and storage medium
JP2019518276A (en) Failure analysis device and method
CN111652937B (en) Vehicle-mounted camera calibration method and device
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN117173254A (en) Camera calibration method, system, device and electronic equipment
CN116071562A (en) Plant seed identification method and device, electronic equipment and storage medium
CN113450335B (en) Road edge detection method, road edge detection device and road surface construction vehicle
CN116051652A (en) Parameter calibration method, electronic equipment and storage medium
CN112241984A (en) Binocular vision sensor calibration method and device, computer equipment and storage medium
CN107783310B (en) Calibration method and device of cylindrical lens imaging system
CN116012242A (en) Camera distortion correction effect evaluation method, device, medium and equipment
CN113635299B (en) Mechanical arm correction method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant