CN114463244A - Vision robot grabbing system and control method thereof - Google Patents


Info

Publication number
CN114463244A
Authority
CN
China
Prior art keywords
robot
workpiece
program
grabbing
image
Prior art date
Legal status (assumption; not a legal conclusion)
Pending
Application number
CN202011241452.0A
Other languages
Chinese (zh)
Inventor
韩子怡
马倩
张保勇
周国鹏
Current Assignee (the listed assignee may be inaccurate)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology

Classifications

    • G06T7/0004 Industrial image inspection (under G06T7/00 Image analysis)
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback; perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G06T5/70 Denoising; Smoothing
    • G06T5/80 Geometric correction
    • G06T7/13 Edge detection
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T2207/20036 Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Quality & Reliability (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a vision robot grabbing system and a control method thereof. The system comprises an industrial personal computer master control system, a robot slave control system, a vision system, and a grabbing system. In the method, the vision system automatically acquires the position of the object from the image shot by the camera and determines the grabbing position for the grabbing system at the robot end; at the same time, the image processing method of the vision system determines the posture of the gripping tool from the shape and placement of the gripped object in the observed image, so that an arbitrarily placed workpiece is finally gripped successfully. The system can grab workpieces of different sizes and adapt to different industrial automation scenarios, simplifying work programming and improving both the robot's range of application and its working efficiency in actual production. With only the vision sensor as input, it can adaptively grab a specified object among multiple objects and can also visually track a moving object, with strong real-time performance, a high detection rate, strong robustness, and good universality.

Description

Vision robot grabbing system and control method thereof
Technical Field
The invention relates to the technical field of computer vision and robot intelligent control, in particular to a vision robot gripping system and a control method thereof.
Background
A robot is a machine that can perform work automatically. An intelligent robot can accept human instructions, run preset programs, and act according to principles specified by artificial-intelligence techniques. Its task is to assist or even replace humans in real production, for example in services, construction, factory production, or dangerous jobs.
At present, the vision control software offered by vision companies on the market is highly specialized: its main users are professional vision engineers, and a certain amount of expert knowledge is needed to understand the complex image processing algorithms. The drawbacks of such vision control software systems are twofold. On the one hand, they are highly specialized, their user group is professional technicians, the learning cost for ordinary workers is very high, and they are difficult to popularize on a large scale. On the other hand, usability is low: even professional technicians need a long debugging time to create a task, which reduces production efficiency. The market lacks a control method that is simple to operate and can adapt to different workpieces by adjusting parameters.
Disclosure of Invention
The invention aims to provide a vision robot gripping system that can identify and grip workpieces of different sizes, is simple to operate, highly adaptable, and highly automated, together with a control method thereof.
The technical solution for realizing the purpose of the invention is as follows: a vision robot grabbing system comprises a master control system, a slave control system, a vision system and a grabbing system, wherein the master control system is used for sending instructions to respectively control the slave control system, the vision system and the grabbing system;
the main control system comprises a main processor, a memory, a data acquisition module, a data processing module, a coordinate system conversion module, a display screen and a communication interface;
the slave control system comprises robot working-environment modeling, robot path planning, a robot controller, and simulated grabbing in a simulation environment;
the vision system comprises a camera calibration module, image preprocessing, and image recognition and matching;
the grabbing system comprises a workpiece posture conversion module, a workpiece grabbing mapping module and an end effector module;
a master control system program module is arranged in a master processor in the master control system, a slave control system program module is arranged in a master processor in the slave control system, and a visual system program module is arranged in a master processor in the visual system.
Further, the main control system comprises a main processor, a memory, a data acquisition module, a data processing module, a coordinate system conversion module, a display screen and a communication interface, and the operation is as follows:
the main processor is a control core of a master control system and a slave control system;
the memory is used for storing data and program modules generated in the whole working process of the grabbing system;
the data acquisition module comprises two CCD cameras fixed directly above the conveyor belt along which workpieces pass; they acquire workpiece images of the industrial production scene within their field of view and send the image information to the memory; the main processor computes execution data signals from the image information, and the end effector module executes the corresponding actions according to those signals;
the data processing module thresholds the workpiece images acquired by the two CCD cameras of the data acquisition module into binary images, then applies Gaussian smoothing and morphological boundary extraction, and identifies and matches the workpiece by computing its contour moments and minimum enclosing rectangle, at the same time obtaining the center coordinates of the workpiece;
the coordinate system conversion module registers the data of the two CCD cameras of the data acquisition module with the data of the vision robot to obtain the transformation of coordinate parameters between the camera coordinate system and the robot coordinate system;
the display screen is used for displaying a graphical user control interface of the upper computer in real time and matched workpiece center coordinate data;
the communication interface is used for accessing information when the main processor is upgraded and communicating with a slave control system.
Further, the slave control system comprises robot working environment modeling, robot path planning, a robot controller and simulation environment simulation grabbing, wherein:
on the premise that the obstacles in the robot working environment are known, robot working-environment modeling supports searching an optimal path along which the robot can travel safely from its initial position to the target point without colliding with the known environment; it comprises modeling the working environment and searching for the optimal path;
the robot path planning means that a coordinate system conversion module interconnects two CCD cameras of the data acquisition module and the data of the robot, then the coordinate parameter conversion from a camera coordinate system to a robot coordinate system is obtained through calculation, and the robot obtains feasible path points through a path planning algorithm according to the obtained current coordinate parameters and target coordinate parameters;
the robot controller is used for controlling the action execution and the motion planning of the robot;
simulated grabbing in the simulation environment means that, after robot working-environment modeling and robot path planning are completed, the workpieces and obstacles of the robot working environment are modeled in the simulation software corresponding to the robot, the computed planned path is then input into that software, and the robot completes a simulated grab in the simulation environment.
Further, the vision system comprises a camera calibration module, image preprocessing, image recognition and matching;
the camera calibration module is used for restoring the position of an object imaged in the CCD camera in the real world and converting a world coordinate system into an image coordinate system by calculating a conversion matrix;
the image preprocessing thresholds the workpiece image acquired by the data acquisition module into a binary image, then applies Gaussian smoothing and morphological boundary extraction;
and the image recognition and matching identifies and matches the workpiece by computing its contour moments and minimum enclosing rectangle, obtains the center coordinates of the workpiece, and matches the workpiece contour computed by the data processing module against the actual workpiece contour.
Further, the grasping system comprises workpiece pose conversion, workpiece-grabbing mapping, and an end effector module;
the workpiece pose conversion converts the robot action into a workpiece pose, on the premise that the robot has an optimal planned path in the simulation environment for grabbing the workpiece from the current position to the target position;
the workpiece-grabbing mapping maps the simulated grabbing action from the simulation environment into workpiece grabbing in the world coordinate system after the workpiece pose conversion is complete;
and the end effector module executes the mapped workpiece grab, carrying the workpiece from the current position to the target position.
Further, a master control system program module is arranged in a master processor in the master control system, a slave control system program module is arranged in a master processor in the slave control system, and a vision system program module is arranged in a master processor in the vision system, specifically as follows:
(1) the main control system program comprises a variable parameter initialization program, a self-detection program, a camera starting program, a robot starting program, a camera and robot cooperative control program, a fault diagnosis and fault processing program and an upper computer interface control program;
in the variable parameter initialization program, the variables defined in the vision robot grabbing system comprise a counting variable, a position variable, a timer variable, and a minimum-enclosing-rectangle variable in a linked list, and these variables are initialized;
the self-detection program runs automatically at start-up and checks whether all objects controlled by each controller are at the origin; if not, it automatically calls the reset program to reset them, and if they still have not returned to the origin after the reset program executes, it raises an alarm;
the camera starting program is used for starting the camera, calibrating the parameters of the camera and simultaneously taking pictures by the camera;
when the defined timer variable is satisfied, the camera and robot cooperative control program takes a photograph and sends it to the vision system program; meanwhile the robot completes the grabbing action according to the coordinates obtained by the vision system;
the fault diagnosis and handling program detects and handles faults generated by the control system; the fault handling program responds differently according to the detected fault type, displays the detected fault type on the host computer interface, and simultaneously starts the alarm device;
the upper computer interface control program displays the variable parameter initialization program, the self-detection program, the camera starting program, the robot starting program, the camera and robot cooperative control program and the fault diagnosis and fault processing program on an upper computer interface, and is adjusted by a button defined on the upper computer interface;
(2) the slave control system program obtains the initial workpiece coordinates from the processing of the vision system program, establishes the slave control program through the inverse kinematics equations of the robot, and controls the robot to carry the workpiece from the initial position to the target position;
(3) the vision system program thresholds the workpiece image captured by the camera into a binary image, then applies Gaussian smoothing and morphological boundary extraction, and matches the position coordinates of the workpiece.
A control method of the vision robot grabbing system: the data acquisition module collects workpiece images; the image preprocessing stage yields preprocessed workpiece image data; the vision system identifies and matches the workpieces; a path planning algorithm solves for the optimal robot travel path; and the master control system sends signals to the robot controller in the slave control system, which controls the end effector module to grab and carry the workpieces. The method specifically comprises the following steps:
step 1, calibrating a camera of a visual robot grabbing system;
step 2, image preprocessing is carried out to smooth and extract the boundary of the image data: after the camera in the step 1 is calibrated, carrying out binarization processing on a workpiece image shot by a CCD camera, carrying out Gaussian filtering smoothing processing on the image, and finally carrying out boundary extraction processing on the smoothed image by using a morphological boundary extraction method;
step 3, image identification and matching: after the smoothing and boundary extraction processing is carried out on the image in the step 2, drawing a minimum circumscribed rectangle on the image, and finding out the outline with the maximum area to match with the workpiece image;
step 4, completing simulated grabbing of the robot motion path in the robot simulation software: after the workpiece image has been identified and matched in step 3 and the coordinates of the workpiece center point obtained, the robot working environment is modeled in the robot simulation software and the workpiece grabbing action is simulated there;
step 5, completing the grab by the intelligent robot in the real environment: after the grabbing action has been simulated in the robot simulation software in step 4, the action is mapped from the simulation environment to the real environment, and the intelligent robot completes the grab of the workpiece.
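Between steps 3 and 5, the matched workpiece center (a pixel coordinate) must become a target in the robot frame. A minimal NumPy sketch of that conversion, assuming the intrinsics (f_x, f_y, c_x, c_y), a depth value from the binocular camera, and a known camera-to-robot rotation R and translation t; all numeric values below are hypothetical:

```python
import numpy as np

def pixel_to_robot(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project a pixel with known depth to camera coordinates,
    then map into the robot coordinate system: p_robot = R @ p_cam + t."""
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    p_cam = np.array([x_cam, y_cam, depth])
    return R @ p_cam + t

# Hypothetical calibration values for illustration.
R = np.eye(3)                     # camera-to-robot rotation
t = np.array([0.5, 0.0, 0.2])     # camera-to-robot translation (m)
target = pixel_to_robot(u=640, v=360, depth=0.8,
                        fx=1000.0, fy=1000.0, cx=640.0, cy=360.0, R=R, t=t)
```

A pixel at the principal point with 0.8 m depth maps to the point 0.8 m along the camera axis, shifted by the camera-to-robot translation.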
Further, the calibration of the camera of the vision robot gripping system in step 1 is specifically as follows:
according to the camera calibration module in the vision system, which converts the workpiece image between the world coordinate system and the camera coordinate system and corrects image distortion, the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image is determined, and a geometric model of camera imaging is established, whose parameters are the camera parameters;
the relationship between the image coordinate system and the plane coordinate system is as follows:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & c_x \\ 0 & 1/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
wherein u, v are the row and column coordinates of the pixel in the image coordinate system, x, y are the coordinate values in the plane coordinate system, and d_x, d_y are the physical sizes of a single pixel in the x and y directions;
the conversion model of the robot coordinate system and the depth coordinate system of the binocular CCD camera is as follows:
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + \begin{bmatrix} \Delta T_x \\ \Delta T_y \\ \Delta T_z \end{bmatrix}$$
wherein [X_C Y_C Z_C] are the coordinate values in the camera coordinate system, [X_W Y_W Z_W] the coordinate values in the world coordinate system, R the rotation matrix, and [ΔT_x ΔT_y ΔT_z] the translation increments between the coordinate systems;
the conversion model of the plane coordinate system and the pixel coordinate system is as follows:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}, \qquad M_1 = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
wherein f_x = f/d_x and f_y = f/d_y; M is the 3×4 projection matrix; the intrinsic parameters f_x, f_y, c_x, c_y determine the camera intrinsic matrix M_1; f_x and f_y are the scale factors on the u and v axes of the image, (c_x, c_y) are the principal point coordinates, and M_2 is the extrinsic parameter matrix of the camera.
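As a numerical check of the projection model with intrinsic matrix M_1 and extrinsic matrix M_2, the following NumPy sketch projects a world point to pixel coordinates; every parameter value here is an assumption chosen for illustration, not a value from the patent.

```python
import numpy as np

fx, fy = 1000.0, 1000.0      # f/dx, f/dy (assumed)
cx, cy = 640.0, 360.0        # principal point (assumed)

# Intrinsic matrix M1 (3x4) and extrinsic matrix M2 (4x4): rotation R and
# translation increment from the world frame to the camera frame.
M1 = np.array([[fx, 0.0, cx, 0.0],
               [0.0, fy, cy, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 2.0])     # world origin 2 m in front of the camera
M2 = np.eye(4)
M2[:3, :3] = R
M2[:3, 3] = T

M = M1 @ M2                        # 3x4 projection matrix

Pw = np.array([0.1, 0.0, 0.0, 1.0])   # world point, homogeneous coordinates
uvw = M @ Pw
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
```

A point 0.1 m to the right of the world origin at 2 m depth lands 0.1/2 × 1000 = 50 pixels right of the principal point, as the scale factors predict.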
Compared with the prior art, the invention has the following notable advantages: (1) parameters are user-defined, and workpieces of different sizes are grabbed by adjusting the defined parameters, which strengthens the adaptability of the grabbing system; (2) the robot can grab workpieces of different sizes and adapt to different industrial automation scenarios, which simplifies work programming and improves both the robot's range of application and its working efficiency in actual production; (3) with only the vision sensor as input, it can adaptively grab a specified object among multiple objects and can also visually track a moving object, with strong real-time performance, a high detection rate, strong robustness, and good universality.
Drawings
Fig. 1 is a schematic structural diagram of a vision robot gripping system of the present invention.
Fig. 2 is an overall program architecture diagram of the present invention.
Fig. 3 is a program control flow chart of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
The structure module architecture of the invention is shown in figure 1, and the vision robot gripping system comprises a master control system, a slave control system, a vision system and a gripping system, wherein the master control system sends instructions to respectively control the slave control system, the vision system and the gripping system;
the main control system comprises a main processor, a memory, a data acquisition module, a data processing module, a coordinate system conversion module, a display screen and a communication interface;
the main processor is a control core of a master control system and a slave control system;
the memory is used for storing data and programs generated in the whole working process of the grabbing system;
the data acquisition module comprises two CCD cameras fixed directly above the conveyor belt along which workpieces pass; they acquire workpiece images of the industrial production scene within their field of view and send the image information to the master control system, which computes execution data signals from the image information; the end effector module executes the corresponding actions according to those signals;
the workpiece image information acquired by the two CCD cameras of the data acquisition module is sent to the master control system, where the data processing module thresholds it into a binary image, then applies Gaussian smoothing and morphological boundary extraction, and identifies and matches the workpiece by computing its contour moments and minimum enclosing rectangle, at the same time obtaining the center coordinates of the workpiece;
the coordinate system conversion module is used for interconnecting the two CCD cameras of the data acquisition module with the data of the robot so as to obtain the coordinate parameter conversion of a camera coordinate system and a robot coordinate system;
the display screen is used for displaying a graphical user control interface of the upper computer in real time and matched workpiece center coordinate data;
the communication interface is used for accessing information when the main processor is upgraded and communicating with a slave control system.
The slave control system comprises robot working-environment modeling, robot path planning, a robot controller, and simulated grabbing in a simulation environment;
on the premise that the obstacles in the robot working environment are known, robot working-environment modeling supports searching an optimal path along which the robot can travel safely from its initial position to the target point without colliding with the known environment; it comprises modeling the working environment and searching for the optimal path.
For robot path planning, after the coordinate system conversion module registers the data of the two CCD cameras of the data acquisition module with the data of the robot, the transformation of coordinate parameters from the camera coordinate system to the robot coordinate system is computed, and the robot obtains feasible path points through a path planning algorithm from the current and target coordinate parameters.
The robot controller is used for controlling the action execution and the motion planning of the robot;
and the simulated environment simulated grabbing is to model the workpiece and the obstacle of the robot working environment in the simulation software corresponding to the robot after the modeling of the robot working environment and the planning of the robot path are completed, then input the calculated planned path into the simulation software corresponding to the robot, and finally complete the simulated grabbing in the simulation environment.
The vision system comprises a camera calibration module, image preprocessing, image identification and matching;
the camera calibration module is used for restoring the position of an object imaged in the CCD camera in the real world and converting a world coordinate system into an image coordinate system through a multi-calculation conversion matrix;
the image preprocessing is to perform value domain conversion on the workpiece image acquired by the data acquisition module in the visual robot gripping system to obtain a binary image, and then perform Gaussian smoothing processing and morphological boundary extraction processing;
and the image identification and matching is to calculate the contour moment of the workpiece and the external minimum rectangle of the data processing module in the vision robot gripping system to identify and match the workpiece, obtain the center coordinate of the workpiece, and identify and match the workpiece contour calculated by the data processing module with the actual workpiece contour.
The grabbing system comprises a workpiece posture conversion module, a workpiece grabbing mapping module and an end effector module;
the workpiece pose conversion is to convert the robot action into a workpiece pose on the premise of an optimal planning path from the current position to the target position of the workpiece grabbed by the robot in the simulation environment simulation grabbing completion simulation environment of the simulation environment;
after the conversion of the workpiece posture is completed, the mapping workpiece grabbing maps the simulated environment simulation grabbing action to workpiece grabbing of a world coordinate system;
the end effector module is used for mapping the workpiece grabbing and simultaneously executing the action of grabbing the workpiece from the current position to the target position.
The overall program architecture diagram of the invention is shown in fig. 2: a main processor in the master control system is internally provided with the master control system program, a main processor in the slave control system is internally provided with the slave control system program, and a main processor in the vision system is internally provided with the vision system program;
the main control system program comprises a variable parameter initialization program, a self-detection program, a camera starting program, a robot starting program, a camera and robot cooperative control program, a fault diagnosis and fault processing program and an upper computer interface control program;
in the variable parameter initialization program, the variables defined in the vision robot grabbing system comprise a counting variable, a position variable, a timer variable, and a minimum-enclosing-rectangle variable in a linked list, and these variables are initialized;
the self-detection program runs automatically at start-up and checks whether all objects controlled by each controller are at the origin; if not, it automatically calls the reset program to reset them, and if they still have not returned to the origin after the reset program executes, it raises an alarm;
the camera starting program is used for starting the camera, calibrating the parameters of the camera and photographing the camera;
when the defined timer variable is satisfied, the camera and robot cooperative control program takes a photograph and sends it to the vision system program; meanwhile the robot completes the grabbing action according to the coordinates obtained by the vision system;
the fault diagnosis and handling program detects and handles faults generated by the control system; the fault handling program responds differently according to the detected fault type, displays the detected fault type on the host computer interface, and simultaneously starts the alarm device;
the upper computer interface control program displays the variable parameter initialization program, the self-detection program, the camera starting program, the robot starting program, the camera and robot cooperative control program and the fault diagnosis and fault processing program on an upper computer interface, and can be adjusted through buttons defined on the upper computer interface.
The slave control system program comprises a robot working environment modeling program, a robot motion path planning program and a robot grabbing program;
the robot working environment modeling program is used for modeling the robot working environment and the robot;
the robot motion path planning program obtains the initial workpiece coordinates from the processing of the vision system program, establishes equations through the robot's inverse kinematics, and controls the robot to carry the workpiece from the initial position to the target position;
and the robot grabbing program sends a grabbing signal to the relay, whereupon the workpiece manipulator grabs the workpiece.
The vision system program comprises a camera calibration program, an image preprocessing program and an image identification and matching program;
the camera calibration program is used for calibrating the intrinsic and extrinsic parameters of the camera, correcting camera distortion, and converting the world coordinate system into the image coordinate system;
the image preprocessing program is used for converting the workpiece image captured by the camera into a binary image by value-domain (threshold) conversion, and then performing Gaussian smoothing and morphological boundary extraction;
the image recognition and matching program obtains the center coordinates of the workpiece by calculating the workpiece contour moments and the minimum bounding rectangle, and identifies and matches the workpiece contour calculated by the data processing module against the actual workpiece contour.
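As a rough sketch of this recognition step, the workpiece center can be taken from the zeroth and first image moments of a binary mask, with a bounding rectangle fitted around the foreground. A production system would typically use OpenCV (`cv2.moments`, `cv2.minAreaRect`, `cv2.matchShapes`); this numpy-only stand-in just shows the arithmetic, using an axis-aligned rectangle in place of the true minimum-area rectangle:

```python
import numpy as np

def center_from_moments(mask):
    """Centroid (cx, cy) from image moments: m10/m00, m01/m00."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                 # zeroth moment = foreground pixel count
    if m00 == 0:
        raise ValueError("empty mask")
    return xs.sum() / m00, ys.sum() / m00

def bounding_rect(mask):
    """Axis-aligned bounding rectangle (x, y, w, h) of the foreground."""
    ys, xs = np.nonzero(mask)
    return (xs.min(), ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
```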
The program control flow chart of the invention is shown in fig. 3, and the control method of the vision robot gripping system comprises the following steps:
step 1, completing camera calibration of a grabbing system;
according to the conversion between the world coordinate system and the camera coordinate system of the workpiece image and the image distortion correction performed by the camera calibration module in the vision system, the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image is determined, and a geometric model of camera imaging is established, the parameters of which are the camera parameters;
the relationship between the image coordinate system and the plane coordinate system is as follows:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & c_x \\ 0 & 1/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
wherein u, v represent the row and column coordinate values of a pixel point in the image coordinate system, x, y represent the coordinate values in the plane coordinate system, and $d_x$, $d_y$ represent the physical size of a single pixel point in the x and y directions;
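A quick numerical check of this relation, using illustrative values (not calibration results from the patent) for the pixel size $(d_x, d_y)$ and principal point $(c_x, c_y)$:

```python
import numpy as np

def plane_to_pixel(x, y, dx=5e-6, dy=5e-6, cx=320.0, cy=240.0):
    """u = x/dx + cx, v = y/dy + cy, written as a homogeneous 3x3 transform."""
    K = np.array([[1.0 / dx, 0.0,      cx],
                  [0.0,      1.0 / dy, cy],
                  [0.0,      0.0,      1.0]])
    u, v, w = K @ np.array([x, y, 1.0])
    return u / w, v / w
```

With a 5 µm pixel pitch, a plane-coordinate point 0.1 mm from the optical axis lands 20 pixels from the principal point.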
the conversion model of the robot coordinate system and the depth coordinate system of the binocular CCD camera is as follows:
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + \begin{bmatrix} \Delta T_x \\ \Delta T_y \\ \Delta T_z \end{bmatrix}$$
wherein $[X_C\ Y_C\ Z_C]$ are the coordinate values in the camera coordinate system, $[X_W\ Y_W\ Z_W]$ are the coordinate values in the world coordinate system, and $[\Delta T_x\ \Delta T_y\ \Delta T_z]$ is the coordinate system translation increment;
the conversion model of the plane coordinate system and the pixel coordinate system is as follows:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$
wherein $f_x = f/d_x$ and $f_y = f/d_y$; M is a 3×4 projection matrix; the intrinsic parameters $f_x$, $f_y$, $c_x$, $c_y$ determine the intrinsic matrix $M_1$ of the camera, $f_x$, $f_y$ representing the scale factors on the u-axis and v-axis of the image, respectively, and $(c_x, c_y)$ being the principal point coordinates; $M_2$ is the extrinsic parameter matrix of the camera.
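Composing the 3×4 projection matrix $M = M_1 M_2$ and projecting one world point can be sketched as below. The focal length, pixel size, principal point, and the identity extrinsics are illustrative assumptions, not calibration results from the patent:

```python
import numpy as np

f, dx, dy, cx, cy = 8e-3, 5e-6, 5e-6, 320.0, 240.0
fx, fy = f / dx, f / dy                       # fx = f/dx, fy = f/dy

M1 = np.array([[fx, 0.0, cx, 0.0],            # intrinsic matrix (3x4)
               [0.0, fy, cy, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
R, t = np.eye(3), np.zeros(3)                 # extrinsics [R t; 0 1], here trivial
M2 = np.vstack([np.hstack([R, t[:, None]]), [0.0, 0.0, 0.0, 1.0]])
M = M1 @ M2                                   # 3x4 projection matrix

def project(Xw, Yw, Zw):
    """Pixel (u, v): apply M to the homogeneous world point, divide by depth."""
    u, v, zc = M @ np.array([Xw, Yw, Zw, 1.0])
    return u / zc, v / zc
```

With these assumed values ($f_x = 1600$), a world point 1 cm off-axis at 1 m depth projects 16 pixels from the principal point.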
Step 2, carrying out image preprocessing to carry out smoothing and boundary extraction on image data;
after the calibration of the camera, carrying out binarization processing on a workpiece image shot by a CCD camera, carrying out Gaussian filtering smoothing processing on the image, and finally carrying out boundary extraction processing on the smoothed image by using a morphological boundary extraction method;
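The Step-2 pipeline (binarization, Gaussian smoothing, morphological boundary extraction as $\beta(A) = A - \mathrm{erode}(A)$) can be sketched with numpy alone. A real system would use OpenCV (`cv2.threshold`, `cv2.GaussianBlur`, `cv2.erode`); the threshold value and 3×3 kernels here are illustrative choices:

```python
import numpy as np

def binarize(gray, thresh=128):
    """Value-domain (threshold) conversion to a binary image."""
    return (gray >= thresh).astype(np.uint8)

def smooth3(img):
    """3x3 Gaussian-like smoothing via separable [1, 2, 1]/4 passes."""
    k = np.array([1, 2, 1]) / 4.0
    p = np.pad(img.astype(float), 1, mode="edge")
    rows = p[:-2] * k[0] + p[1:-1] * k[1] + p[2:] * k[2]        # vertical pass
    return rows[:, :-2] * k[0] + rows[:, 1:-1] * k[1] + rows[:, 2:] * k[2]

def erode3(binary):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is 1."""
    p = np.pad(binary, 1, mode="constant")
    out = np.ones_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

def boundary(binary):
    """Morphological boundary extraction: beta(A) = A - erode(A)."""
    return binary - erode3(binary)
```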
step 3, image recognition and matching;
after the image smoothing and boundary extraction, the center coordinates of the workpiece are obtained by calculating the workpiece contour moments and the minimum bounding rectangle, and the workpiece contour calculated by the data processing module is identified and matched against the actual workpiece contour.
Step 4, completing the simulated grabbing of the motion path of the robot in the robot simulation software;
and after the workpiece image is identified and matched, obtaining the coordinates of the central point of the workpiece, modeling the working environment of the robot in robot simulation software, and simultaneously simulating the workpiece grabbing action in the robot simulation software.
Step 5, completing the grabbing of the intelligent robot in a real environment;
after the simulated grabbing action is completed in the robot simulation software, the action in the simulation environment is mapped to the real environment, and the intelligent robot simultaneously completes the grabbing action on the workpiece.
In conclusion, by using the method of defined parameters, workpieces of different sizes can be grabbed simply by adjusting those parameters, which enhances the adaptability of the grabbing system. The grabbing system is simple to operate and does not require much specialized theoretical knowledge. It improves the automation and safety of an industrial production line, saves manpower, reduces production cost and improves working efficiency.

Claims (8)

1. A vision robot grabbing system is characterized by comprising a master control system, a slave control system, a vision system and a grabbing system, wherein the master control system is used for sending instructions to respectively control the slave control system, the vision system and the grabbing system;
the main control system comprises a main processor, a memory, a data acquisition module, a data processing module, a coordinate system conversion module, a display screen and a communication interface;
the slave control system comprises robot working environment modeling, robot path planning, a robot controller and simulated-environment simulated grabbing;
the vision system comprises a camera calibration module, image preprocessing, image identification and matching;
the grabbing system comprises a workpiece posture conversion module, a workpiece grabbing mapping module and an end effector module;
a master control system program module is arranged in a master processor in the master control system, a slave control system program module is arranged in a master processor in the slave control system, and a vision system program module is arranged in a master processor in the vision system.
2. The vision robot grabbing system of claim 1, wherein the master control system comprises a main processor, a memory, a data acquisition module, a data processing module, a coordinate system conversion module, a display screen and a communication interface, and is specifically operated as follows:
the main processor is a control core of a master control system and a slave control system;
the memory is used for storing data and program modules generated in the whole working process of the grabbing system;
the data acquisition module comprises two CCD cameras which are fixed right above a conveyor belt through which a workpiece passes and are used for acquiring workpiece image information of an industrial production scene in a visual field area of the CCD cameras and sending the image information to the memory, the main processor calculates an execution data signal according to the image information, and the end effector module executes corresponding actions according to the execution data signal;
the data processing module performs value-domain (threshold) conversion on the workpiece image information acquired by the two CCD cameras in the data acquisition module to form a binary image, then performs Gaussian smoothing and morphological boundary extraction, identifies and matches the workpiece by calculating the workpiece contour moment and the minimum bounding rectangle, and simultaneously obtains the center coordinates of the workpiece;
the coordinate system conversion module interconnects the two CCD cameras of the data acquisition module and the data of the visual robot to obtain the coordinate parameter conversion of a camera coordinate system and a robot coordinate system;
the display screen is used for displaying a graphical user control interface of the upper computer in real time and matched workpiece center coordinate data;
the communication interface is used for accessing information when the main processor is upgraded and communicating with a slave control system.
3. The visual robotic grasping system according to claim 1, wherein the slave control system includes robotic work environment modeling, robotic path planning, robotic controller, simulated environment simulation grasping, wherein:
on the premise that the obstacles in the robot working environment are known, the robot working environment modeling refers to searching an optimal path for the robot such that it can travel safely, without colliding with the known environment, from the initial position to the target point; this comprises modeling of the robot working environment and searching for the optimal path;
the robot path planning means that after the coordinate system conversion module interconnects the two CCD cameras of the data acquisition module and the data of the robot, the coordinate parameter conversion from the camera coordinate system to the robot coordinate system is obtained through calculation, and the robot obtains feasible path points through a path planning algorithm according to the obtained current coordinate parameters and target coordinate parameters;
the robot controller is used for controlling the action execution and the motion planning of the robot;
the simulated environment simulated grabbing means that after the working environment modeling and the robot path planning behaviors of the robot are completed, workpieces and obstacles in the working environment of the robot are modeled in simulation software corresponding to the robot, then the calculated planned path is input into the simulation software corresponding to the robot, and the robot completes simulated grabbing in the simulation environment.
4. The visual robotic grasping system according to claim 1, wherein the visual system includes a camera calibration module, image pre-processing, image recognition and matching;
the camera calibration module is used for restoring the position of an object imaged in the CCD camera in the real world and converting a world coordinate system into an image coordinate system by calculating a conversion matrix;
the image preprocessing is to perform value domain conversion on the workpiece image acquired by the data acquisition module into a binary image, and then perform Gaussian smoothing processing and morphological boundary extraction processing;
and the data processing module identifies and matches the workpiece by calculating the contour moment of the workpiece and the external minimum rectangle, obtains the center coordinate of the workpiece, and identifies and matches the workpiece contour calculated by the data processing module with the actual workpiece contour.
5. The visual robotic gripper system according to claim 1, wherein the gripper system comprises a transform workpiece pose, a map workpiece gripper, an end effector module;
the workpiece pose conversion means that the robot action is converted into the workpiece pose on the premise that the robot grasps the optimal planned path of the workpiece from the current position to the target position in the simulation environment;
the mapping workpiece grabbing refers to workpiece grabbing for mapping simulation environment simulation grabbing actions to a world coordinate system after the workpiece posture is converted;
and the end effector module is used for mapping workpiece grabbing and simultaneously executing the action of grabbing the workpiece from the current position to the target position.
6. The vision robot grabbing system of claim 1, wherein a master control system program module is arranged in a master processor in the master control system, a slave control system program module is arranged in a master processor in the slave control system, and a vision system program module is arranged in the master processor in the vision system, and the system comprises the following components:
(1) the main control system program comprises a variable parameter initialization program, a self-detection program, a camera starting program, a robot starting program, a camera and robot cooperative control program, a fault diagnosis and fault processing program and an upper computer interface control program;
in the variable parameter initialization program, the variables defined in the visual robot grabbing system comprise a counting variable, a position variable, a timer variable and a minimum external rectangle variable in a linked list, and the variables are initialized;
the self-detection program is used for carrying out automatic detection during starting and detecting whether all control objects of each controller are at the original point, if the control objects are not at the original point, the self-detection program automatically calls the reset program to reset all the control objects, and if the control objects still do not return to the original point after the reset program is executed, the self-detection program gives an alarm;
the camera starting program is used for starting the camera, calibrating the parameters of the camera and simultaneously taking pictures by the camera;
the camera and robot cooperative control program takes a photograph whenever the defined timer variable is satisfied, sends the photograph to the vision system program, and meanwhile the robot completes the grabbing action according to the coordinates obtained by the vision system;
the fault diagnosis and processing program is used for detecting and handling faults generated by the control system; the fault processing program applies different handling according to the detected fault type, displays the detected fault type on the upper computer interface, and simultaneously starts the alarm device to give an alarm;
the upper computer interface control program displays the variable parameter initialization program, the self-detection program, the camera starting program, the robot starting program, the camera and robot cooperative control program and the fault diagnosis and fault processing program on the upper computer interface, and is adjusted by buttons defined on the upper computer interface;
(2) the slave control system program obtains the workpiece initial coordinate through the processing of the vision system program, establishes a slave system control program through the inverse kinematics equation of the robot, and controls the robot to carry the workpiece to the target position from the initial position;
(3) the vision system program performs value domain conversion on a workpiece image obtained by photographing through a camera to form a binary image, and then performs Gaussian smoothing processing and morphological boundary extraction processing to match position coordinates of the workpiece.
7. A control method of a visual robot grabbing system is characterized in that a data acquisition module is used for acquiring workpiece images, an image preprocessing link is used for obtaining preprocessed workpiece image data, a visual system is used for identifying and matching workpieces, a path planning algorithm is used for solving the optimal solution of a robot traveling path, a master control system is used for sending signals to a robot controller in a slave control system, and an end effector module is controlled to grab and carry the workpieces; the method specifically comprises the following steps:
step 1, calibrating a camera of a visual robot grabbing system;
step 2, image preprocessing is carried out to smooth and extract the boundary of the image data: after the camera in the step 1 is calibrated, carrying out binarization processing on a workpiece image shot by a CCD camera, carrying out Gaussian filtering smoothing processing on the image, and finally carrying out boundary extraction processing on the smoothed image by using a morphological boundary extraction method;
step 3, image identification and matching: after the smoothing and boundary extraction processing is carried out on the image in the step 2, drawing a minimum circumscribed rectangle on the image, and finding out the outline with the maximum area to match with the workpiece image;
and 4, finishing the simulated capture of the motion path of the robot in the robot simulation software: after the workpiece image is identified and matched in the step 3, the coordinates of the central point of the workpiece are obtained, the working environment of the robot is modeled in the robot simulation software, and meanwhile, the workpiece capturing action is simulated in the robot simulation software;
step 5, completing the grabbing of the intelligent robot in the real environment: and 4, simulating the grabbing action in the robot simulation software in the step 4, mapping the action in the simulation environment to a real environment, and simultaneously completing the grabbing action of the intelligent robot to the workpiece.
8. The method for controlling a vision robot gripping system according to claim 7, wherein the calibration of the camera of the vision robot gripping system in step 1 is as follows:
according to the conversion between a world coordinate system and a camera coordinate system of a camera calibration module used for a workpiece image in a vision system and the correction of image distortion, determining the mutual relation between the three-dimensional geometric position of a certain point on the surface of a space object and the corresponding point in the image, and establishing a geometric model of camera imaging, wherein the geometric model parameters are camera parameters;
the relationship between the image coordinate system and the plane coordinate system is as follows:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & c_x \\ 0 & 1/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
wherein u, v represent the row and column coordinate values of a pixel point in the image coordinate system, x, y represent the coordinate values in the plane coordinate system, and $d_x$, $d_y$ represent the physical size of a single pixel point in the x and y directions;
the conversion model of the robot coordinate system and the depth coordinate system of the binocular CCD camera is as follows:
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + \begin{bmatrix} \Delta T_x \\ \Delta T_y \\ \Delta T_z \end{bmatrix}$$
wherein $[X_C\ Y_C\ Z_C]$ are the coordinate values in the camera coordinate system, $[X_W\ Y_W\ Z_W]$ are the coordinate values in the world coordinate system, and $[\Delta T_x\ \Delta T_y\ \Delta T_z]$ is the coordinate system translation increment;
the conversion model of the plane coordinate system and the pixel coordinate system is as follows:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$
wherein $f_x = f/d_x$ and $f_y = f/d_y$; M is a 3×4 projection matrix; the intrinsic parameters $f_x$, $f_y$, $c_x$, $c_y$ determine the intrinsic matrix $M_1$ of the camera, $f_x$, $f_y$ representing the scale factors on the u-axis and v-axis of the image, respectively, and $(c_x, c_y)$ being the principal point coordinates; $M_2$ is the extrinsic parameter matrix of the camera.
CN202011241452.0A 2020-11-09 2020-11-09 Vision robot grabbing system and control method thereof Pending CN114463244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241452.0A CN114463244A (en) 2020-11-09 2020-11-09 Vision robot grabbing system and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011241452.0A CN114463244A (en) 2020-11-09 2020-11-09 Vision robot grabbing system and control method thereof

Publications (1)

Publication Number Publication Date
CN114463244A true CN114463244A (en) 2022-05-10

Family

ID=81404032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241452.0A Pending CN114463244A (en) 2020-11-09 2020-11-09 Vision robot grabbing system and control method thereof

Country Status (1)

Country Link
CN (1) CN114463244A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024113216A1 (en) * 2022-11-30 2024-06-06 青岛理工大学(临沂) High-precision grasping method of industrial mold intelligent manufacturing robot
CN116423471A (en) * 2023-06-13 2023-07-14 中国农业科学院蔬菜花卉研究所 Intelligent cooperative robot for flux experiment operation
CN116423471B (en) * 2023-06-13 2023-08-15 中国农业科学院蔬菜花卉研究所 Intelligent cooperative robot for flux experiment operation
CN117576787A (en) * 2024-01-16 2024-02-20 北京大学深圳研究生院 Method, device and equipment for handing over based on active tracking and self-adaptive gesture recognition
CN117576787B (en) * 2024-01-16 2024-04-16 北京大学深圳研究生院 Method, device and equipment for handing over based on active tracking and self-adaptive gesture recognition

Similar Documents

Publication Publication Date Title
US11338435B2 (en) Gripping system with machine learning
CN113696186B (en) Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition
CN111462154B (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
CN114463244A (en) Vision robot grabbing system and control method thereof
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN113379849B (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN110378325B (en) Target pose identification method in robot grabbing process
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
CN114912287A (en) Robot autonomous grabbing simulation system and method based on target 6D pose estimation
CN114952809A (en) Workpiece identification and pose detection method and system and grabbing control method of mechanical arm
CN116673962B (en) Intelligent mechanical arm grabbing method and system based on Faster R-CNN and GRCNN
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN112372641B (en) Household service robot character grabbing method based on visual feedforward and visual feedback
CN110605711A (en) Method, device and system for controlling cooperative robot to grab object
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
JPH0780790A (en) Three-dimensional object grasping system
CN116985141B (en) Industrial robot intelligent control method and system based on deep learning
CN113894774A (en) Robot grabbing control method and device, storage medium and robot
CN114670189A (en) Storage medium, and method and system for generating control program of robot
KR102452315B1 (en) Apparatus and method of robot control through vision recognition using deep learning and marker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination