CN108656120B - Teaching and processing method based on image contrast - Google Patents

Teaching and processing method based on image contrast

Info

Publication number
CN108656120B
Authority
CN
China
Prior art keywords
teaching
processing
data
robot
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810319631.8A
Other languages
Chinese (zh)
Other versions
CN108656120A (en)
Inventor
陈小龙 (Chen Xiaolong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810319631.8A
Publication of CN108656120A
Application granted
Publication of CN108656120B
Legal status: Active

Links

Classifications

    • B – PERFORMING OPERATIONS; TRANSPORTING
    • B25 – HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J – MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 – Programme-controlled manipulators
    • B25J9/0081 – Programme-controlled manipulators with master teach-in means
    • B – PERFORMING OPERATIONS; TRANSPORTING
    • B25 – HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J – MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 – Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 – Sensing devices
    • B25J19/04 – Viewing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The invention discloses a teaching and processing method based on image contrast, wherein the relative position of the teaching processing equipment and the teaching visual sensing module after connection is the same as the relative position of the flow processing equipment and the processing visual sensing module after connection, and the structure of the teaching system equipment is the same as that of the processing system equipment. A comparison data set is generated by the teaching construction module, the teaching attitude sensor, and the teaching given camera, allowing the robot to reproduce the teaching process. Because the teaching visual sensing module carries an attitude sensor, every track point recorded during the teaching traversal can be associated with attitude data and image data. After teaching by hand is completed, the teaching processing equipment with the teaching visual sensing module is mounted on the robot, and the conversion from the working coordinate system to the robot coordinate system is achieved through image-comparison confirmation, greatly reducing both the professional requirements on operators and the system's data-conversion workload. The invention is used for teaching robots.

Description

Teaching and processing method based on image contrast
Technical Field
The invention relates to the field of robot teaching, and in particular to a teaching and processing method based on image contrast.
Background
Teaching programming of current industrial robots requires operators who are very familiar with the robot and skilled in the relevant programming knowledge. Training such operators therefore takes considerable time and money, which raises both the threshold and the cost of applying industrial robots.
At present, most vision systems target applications in a single professional field, and the vision systems used for teaching programming of industrial robots suffer from low repeatability, a heavy computation load, slow response, and susceptibility to occlusion.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a teaching and processing method based on image contrast.
The solution adopted by the invention to solve this technical problem is as follows:
a teaching method based on image contrast is characterized in that:
the method comprises the following steps:
step a) teaching processing equipment, teaching system equipment with a teaching working coordinate system, and a teaching robot arranged beside the teaching system equipment are provided; the teaching system equipment comprises a teaching visual sensing module moving in the teaching working coordinate system and a teaching worktable surface beside it; a teaching construction module with a teaching construction camera, used for constructing the teaching working coordinate system and recording the walking track of the teaching visual sensing module, is arranged beside and/or above the teaching worktable surface; a teaching attitude sensor and a teaching given camera are arranged on the teaching visual sensing module; the teaching construction camera is a binocular camera or a multi-view camera; the teaching given camera is a binocular camera or a multi-view camera;
the workpiece is placed on the teaching worktable surface, and the teaching visual sensing module is fixedly connected with the teaching processing equipment;
step b) the teaching construction module, the teaching attitude sensor, and the teaching given camera are started, and an operator holds the teaching processing equipment to process the workpiece; the teaching construction camera of the teaching construction module records the track of the teaching visual sensing module in the teaching working coordinate system, forming track data comprising a plurality of track points; the teaching given camera photographs the workpiece to form given image data; the teaching attitude sensor senses the attitude of the teaching visual sensing module to form attitude data; the track data together with the given image data and the attitude data are sent to a programming system to form a comparison data set, in which the track points of the track data serve as mother data, each mother datum corresponds to a group of child data, and the child data comprise the given image data and attitude data of the teaching visual sensing module at the position of each track point;
step c) the teaching processing equipment with the teaching visual sensing module is connected to the teaching robot, and the track data guide the teaching robot to drive the teaching visual sensing module sequentially through the actual space points determined by all the mother data; on reaching the position of each mother datum, the posture of the teaching visual sensing module is adjusted to a given posture set according to the attitude data of the child data corresponding to that mother datum; calling two adjacent mother data mother data M and mother data N, before the teaching visual sensing module travels from the position of mother data M to the position of mother data N, the teaching given camera photographs the workpiece to form current image data, the current image data are compared with the given image data in the child data corresponding to mother data M to obtain a given image deviation, and the teaching robot is driven according to the given image deviation to adjust the position and posture of the teaching visual sensing module until the comparison deviation between the current image data and the given image data in the child data corresponding to mother data M is within a tolerance range; the robot position and posture data of the current teaching robot are then recorded and called robot basic data; all the robot basic data are integrated to form the processing track data.
As a further improvement of the above scheme, in step c), while the teaching robot drives the teaching visual sensing module from the position of mother data M toward the position of mother data N, the position of the teaching visual sensing module changes as follows:
process a) the current position of the teaching visual sensing module is P; the current image data at P are obtained and compared with the given image data in the child data corresponding to mother data M to generate the given image deviation, and process b) is executed;
process b) the given image deviation is judged; if it is beyond the tolerance range, process c) is executed; if it is within the tolerance range, the robot position and posture data of the current teaching robot are recorded, and the teaching robot is driven to move the teaching visual sensing module to the position determined by mother data N;
process c) the teaching robot drives the teaching visual sensing module to adjust its position and/or posture according to the given image deviation, and process a) is then executed.
As a further improvement of the above scheme, the method further comprises a step a1) provided after step a), a step b1) provided after step b) and after step a1), and a step c1) provided before step c) and after step b1);
step a1) the teaching processing equipment with the teaching visual sensing module fixed to it is used as modeling equipment and mounted on the teaching robot; a plurality of constraint points are arranged around the workpiece, and the teaching robot, working in its own robot coordinates, drives the modeling equipment through the constraint points in sequence along a specified path and with specified postures while photographing the workpiece, so that every track point of the specified path carries three-dimensional basic data comprising posture information and image information; all the three-dimensional basic data are integrated, thereby generating a real-object three-dimensional model of the workpiece;
step b1) all the attitude data in the mother data and/or child data are visualized as points/lines and presented together with the real-object three-dimensional model; the visualized mother data and/or child data are called simulation data, and part of the simulation data is adjusted and/or simulation data are added or removed;
and step c1) the teaching robot drives the modeling equipment so that, in the robot coordinates of the teaching robot, it passes through the actual space points determined by the simulation data, these points being called simulation points; the posture of the modeling equipment at each simulation point is determined by the simulation data, so that the mother data in the teaching working coordinate system, and the child data corresponding to the mother data, are regenerated.
As a further improvement of the above scheme, an attitude sensor is arranged inside each teaching construction module, and there are at least two teaching construction modules: at least one is arranged on the left, right, front, or rear side of the teaching worktable surface, and at least one is arranged above it. At least one teaching construction module is called the global construction module, and the monitoring range of its teaching construction camera covers all the other teaching construction modules. After each robot basic datum is formed in step c), the teaching construction camera of a teaching construction module able to monitor the teaching robot photographs it, forming robot posture image data in one-to-one correspondence with the robot basic data.
A processing method based on image contrast,
after all the steps of any one of the above teaching methods based on image contrast are executed, the following steps are executed:
step d1) a processing worktable surface and a processing robot arranged beside it are provided, the processing robot carrying the flow processing equipment; the workpiece is placed on the processing worktable surface, and the processing robot is driven, according to the processing track data, to move the flow processing equipment and process the workpiece.
A processing method based on image contrast,
after all the steps of any one of the above teaching methods based on image contrast are executed, the following steps are executed:
step d2) flow processing equipment, processing system equipment with a processing working coordinate system, and a processing robot arranged beside the processing system equipment are provided; the processing system equipment comprises a processing visual sensing module moving in the processing working coordinate system and a processing worktable surface beside it; a processing construction module, used for constructing the processing working coordinate system and recording the walking track of the processing visual sensing module, is arranged beside and/or above the processing worktable surface and has the same structure as the teaching construction module; a processing attitude sensor and a processing current camera are arranged on the processing visual sensing module; the processing current camera is a binocular camera or a multi-view camera;
the workpiece is placed on the processing worktable surface, the processing visual sensing module is fixedly connected with the flow processing equipment, and the flow processing equipment is connected with the processing robot;
step e) the workpiece is placed on the processing worktable surface, the processing construction module constructs the processing working coordinate system with the workpiece as the base point, and the processing robot drives the processing visual sensing module sequentially through the track points according to the processing track data;
after the processing visual sensing module reaches a track point of the comparison data set, the processing robot confirms or adjusts the posture of the processing visual sensing module according to the attitude data in the child data corresponding to that track point, so that its posture during processing is the same as the posture of the teaching visual sensing module at the same track point;
the processing current camera photographs the workpiece to form current image data, which are compared with the given image data in the child data corresponding to the track point at which the processing visual sensing module is located; if the comparison result is within the tolerance range, the flow processing equipment is started, or kept running, to process the workpiece; if the comparison result exceeds the tolerance range, processing of the workpiece by the flow processing equipment is suspended, the processing robot drives the processing visual sensing module to adjust its posture and position, and current image data are continuously acquired and compared with the given image data until the posture and position of the processing visual sensing module are the same as those recorded in the child data, whereupon the flow processing equipment is started to process the workpiece;
the relative position of the teaching processing equipment and the teaching visual sensing module after connection is the same as the relative position of the flow processing equipment after connection; the structure of the teaching system equipment is the same as that of the processing system equipment; the teaching processing equipment has the same structure as the flow processing equipment.
As a further improvement of the above scheme, the method further comprises a movable worktable, the upper surface of which is the processing worktable surface.
As a further improvement of the above scheme, the teaching system equipment and the processing system equipment are the same equipment; the teaching processing equipment and the flow processing equipment are the same equipment.
As a further improvement of the above scheme, the processing robot and the teaching robot are the same equipment, the processing worktable surface and the teaching worktable surface are the same equipment, and the teaching construction module and the processing construction module are the same equipment.
The invention has the following beneficial effects. Because the teaching visual sensing module carries an attitude sensor, every track point recorded while the module faces the workpiece during the teaching traversal can be associated with attitude data and image data. After teaching by hand is completed, the teaching processing equipment with the teaching visual sensing module is mounted on the robot, and the conversion from the working coordinate system to the robot coordinate system is achieved through image-comparison confirmation, greatly reducing both the professional requirements on operators and the system's data-conversion workload. The invention is used for teaching robots.
Detailed Description
The conception, specific structure, and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments, so that its objects, features, and effects can be fully understood. Obviously, the described embodiments are only some of the embodiments of the invention rather than all of them; all other embodiments obtained by those skilled in the art without inventive effort based on these embodiments fall within the protection scope of the invention. In addition, the coupling/connection relationships mentioned herein do not mean that the components are necessarily directly connected; a better coupling structure may be formed by adding or removing coupling accessories according to the specific implementation. The technical features of the invention can be combined with one another provided they do not conflict.
What follows is a specific embodiment of the invention:
a teaching method based on image contrast is characterized in that:
the method comprises the following steps:
step a) teaching processing equipment, teaching system equipment with a teaching working coordinate system, and a teaching robot arranged beside the teaching system equipment are provided; the teaching system equipment comprises a teaching visual sensing module moving in the teaching working coordinate system and a teaching worktable surface beside it; a teaching construction module with a teaching construction camera, used for constructing the teaching working coordinate system and recording the walking track of the teaching visual sensing module, is arranged beside and/or above the teaching worktable surface; a teaching attitude sensor and a teaching given camera are arranged on the teaching visual sensing module; the teaching construction camera is a binocular camera or a multi-view camera; the teaching given camera is a binocular camera or a multi-view camera;
the workpiece is placed on the teaching worktable surface, and the teaching visual sensing module is fixedly connected with the teaching processing equipment;
step b) the teaching construction module, the teaching attitude sensor, and the teaching given camera are started, and an operator holds the teaching processing equipment to process the workpiece; the teaching construction camera of the teaching construction module records the track of the teaching visual sensing module in the teaching working coordinate system, and the teaching attitude sensor synchronously generates attitude data while the module moves, forming track data containing a plurality of track points; the teaching given camera photographs the workpiece to form given image data; the teaching attitude sensor senses the attitude of the teaching visual sensing module to form attitude data; the track data together with the given image data and the attitude data are sent to a programming system to form a comparison data set, in which the track points of the track data serve as mother data, each mother datum corresponds to a group of child data, and the child data comprise the given image data and attitude data of the teaching visual sensing module at the position of each track point;
step c) the teaching processing equipment with the teaching visual sensing module is connected to the teaching robot, and the track data guide the teaching robot to drive the teaching visual sensing module sequentially through the actual space points determined by all the mother data; on reaching the position of each mother datum, the posture of the teaching visual sensing module is adjusted to a given posture set according to the attitude data of the child data corresponding to that mother datum; calling two adjacent mother data mother data M and mother data N, before the teaching visual sensing module travels from the position of mother data M to the position of mother data N, the teaching given camera photographs the workpiece to form current image data, the current image data are compared with the given image data in the child data corresponding to mother data M to obtain a given image deviation, and the teaching robot is driven according to the given image deviation to adjust the position and posture of the teaching visual sensing module until the comparison deviation between the current image data and the given image data in the child data corresponding to mother data M is within a tolerance range; the robot position and posture data of the current teaching robot are then recorded and called robot basic data; all the robot basic data are integrated to form the processing track data.
Because the teaching visual sensing module carries an attitude sensor, every track point recorded while the module faces the workpiece during the teaching traversal can be associated with attitude data and image data. After teaching by hand is completed, the teaching processing equipment with the teaching visual sensing module is mounted on the robot, and the conversion from the working coordinate system to the robot coordinate system is achieved through image-comparison confirmation, greatly reducing both the professional requirements on operators and the system's data-conversion workload. An operator who knows nothing about teaching programming can generate the processing track data simply by performing the above steps, and because the track of the processing track data in step c) is established directly on the robot coordinates of the teaching robot, it can drive the robot directly.
The comparison data set, the robot position and posture data, and the robot basic data can be read and modified, and they let the flow processing equipment follow the movement track of the processing robot over the workpiece, so the teaching result is easy to transplant to other robots for processing; this avoids the prior-art requirement of performing at least one teaching for every robot.
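For concreteness, the following Python sketch shows one possible in-memory layout of the comparison data set formed in step b); the class and field names are illustrative assumptions, not terminology fixed by the patent.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SubData:
    """Child data bound to one track point: the given image and the attitude."""
    given_image: np.ndarray   # frame shot by the teaching given camera
    attitude: np.ndarray      # attitude sensed by the teaching attitude sensor

@dataclass
class MotherData:
    """One track point of the trajectory in the teaching working coordinate system."""
    position: np.ndarray      # (x, y, z) recorded via the teaching construction camera
    sub: SubData              # the group of child data for this point

@dataclass
class ComparisonDataSet:
    """Track data plus image/attitude child data, as sent to the programming system."""
    points: list = field(default_factory=list)

    def record(self, position, image, attitude):
        self.points.append(MotherData(np.asarray(position, dtype=float),
                                      SubData(image, np.asarray(attitude, dtype=float))))
```

A later consumer, such as the replay loop of step c), simply iterates over `points` in order.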
In step c), while the teaching robot drives the teaching visual sensing module from the position of mother data M toward the position of mother data N, the position of the teaching visual sensing module changes as follows:
process a) the current position of the teaching visual sensing module is P; the current image data at P are obtained and compared with the given image data in the child data corresponding to mother data M to generate the given image deviation, and process b) is executed;
process b) the given image deviation is judged; if it is beyond the tolerance range, process c) is executed; if it is within the tolerance range, the robot position and posture data of the current teaching robot are recorded, and the teaching robot is driven to move the teaching visual sensing module to the position determined by mother data N;
process c) the teaching robot drives the teaching visual sensing module to adjust its position and/or posture according to the given image deviation, and process a) is then executed.
This adjustment scheme enables rapid adjustment of the robot pose and reduces the computational burden on the system.
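A minimal sketch of processes a) to c), reusing the `MotherData` layout above and assuming hypothetical `robot` and `camera` interfaces with a deliberately simple deviation metric; the patent leaves the actual image-comparison algorithm open.

```python
import numpy as np

def image_deviation(current, given):
    # Illustrative metric only: mean absolute grey-level difference between frames.
    return float(np.mean(np.abs(current.astype(float) - given.astype(float))))

def servo_to_mother_data(robot, camera, mother_m, tolerance):
    """Loop processes a)-c) until the current image agrees with the given image
    in mother data M's child data, then return the robot basic data."""
    given = mother_m.sub.given_image
    while True:
        current = camera.capture()                        # process a): shoot at position P
        if image_deviation(current, given) <= tolerance:  # process b): within tolerance
            return robot.read_position_and_posture()      # record robot basic data
        # process c): use the deviation to nudge position and/or posture, then retry
        robot.apply_image_correction(current, given)
```

Once the basic data for mother data M are recorded, the robot moves on toward mother data N and repeats the loop there.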
The invention further comprises a step a1) provided after step a), a step b1) provided after step b) and after step a1), and a step c1) provided before step c) and after step b1);
step a1) the teaching processing equipment with the teaching visual sensing module fixed to it is used as modeling equipment and mounted on the teaching robot; a plurality of constraint points are arranged around the workpiece, and the teaching robot, working in its own robot coordinates, drives the modeling equipment through the constraint points in sequence along a specified path and with specified postures while photographing the workpiece, so that every track point of the specified path carries three-dimensional basic data comprising posture information and image information; all the three-dimensional basic data are integrated, thereby generating a real-object three-dimensional model of the workpiece;
the object three-dimensional model generated by the method can be directly displayed on a computer screen, and each track point can be displayed on the computer screen, so that man-machine interaction is conveniently realized, the specialized difficulty in adjusting the track points is reduced, and positioning data can be adjusted by anyone without high education and abundant programming experience and capability through simple training, thereby realizing accurate positioning in the machining process.
step b1) all the attitude data in the mother data and/or child data are visualized as points/lines and presented together with the real-object three-dimensional model; the visualized mother data and/or child data are called simulation data, and part of the simulation data is adjusted and/or simulation data are added or removed;
the real object three-dimensional model is a curved surface model corresponding to the surface of the solid model or the upper part of the workpiece, the real object three-dimensional model is generated, and the simulation data visually presents the track points and the real object three-dimensional model together, so that specialized program modification can be avoided, the simulation data can be adjusted through visual operation, and the specialized difficulty of teaching programming is reduced.
Each pose of the teaching visual sensing module is determined by the attitude sensed by the attitude sensor together with the corresponding workpiece image, which constitutes closed-loop control. Its control precision is higher than that of the robot itself, so the repeatability of the robot is greatly improved. Meanwhile, because the attitude sensor directly measures the attitude data and adjusts the attitude, the robot only needs to perform up-down, left-right, and front-back translations during image comparison, saving a large amount of complex computation and letting the system run efficiently.
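A sketch of that division of labour: the attitude loop closes directly on the sensor reading, so image comparison only has to resolve a translation. The interfaces and the `estimate_translation` helper are hypothetical stand-ins for whatever offset estimator an implementation would use.

```python
import numpy as np

def correct_at_track_point(robot, attitude_sensor, camera, target_attitude,
                           given_image, estimate_translation, tolerance):
    """Attitude: fixed directly from the sensor measurement (closed loop).
    Position: translation-only moves driven by image comparison."""
    robot.rotate_by(np.asarray(target_attitude) - attitude_sensor.read())
    while True:
        dx, dy, dz = estimate_translation(camera.capture(), given_image)
        if max(abs(dx), abs(dy), abs(dz)) <= tolerance:
            return
        robot.translate(dx, dy, dz)   # up/down, left/right, front/back only
```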
Since the simulation data are created in robot coordinates, step c1) can quickly convert them into mother data, and the corresponding child data, in the teaching working coordinate system.
Step c1) the teaching robot drives the modeling equipment so that, in the robot coordinates of the teaching robot, it passes through the actual space points determined by the simulation data, these points being called simulation points; the posture of the modeling equipment at each simulation point is determined by the simulation data, so that the mother data in the teaching working coordinate system, and the child data corresponding to the mother data, are regenerated.
Step c1) avoids the traditional coordinate-mapping computation, reduces the computational burden on the system, avoids accumulated errors, and improves repeatability.
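Step c1) can be pictured as a replay loop, as in this assumed-interface sketch reusing the `ComparisonDataSet` layout above: the robot visits each simulation point in its own coordinates while the teaching-side sensors re-record mother and child data directly in the teaching working coordinate system.

```python
def regenerate_teaching_data(robot, teaching_given_camera, attitude_sensor,
                             construction_module, simulation_points):
    """Drive the modeling equipment through the simulation points; the
    construction module reports each position in the teaching working
    coordinate system, so mother and child data are regenerated directly,
    with no coordinate-mapping computation."""
    dataset = ComparisonDataSet()
    for sp in simulation_points:
        robot.move_to(sp.position, sp.posture)               # pose fixed by simulation data
        dataset.record(construction_module.locate_module(),  # mother datum
                       teaching_given_camera.capture(),      # child: given image
                       attitude_sensor.read())               # child: attitude
    return dataset
```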
The invention uses not an original CAD model of the workpiece but the real-object three-dimensional model generated from the workpiece itself, so the mother data and child data can be adjusted very accurately. Steps a1), b1), and c1) reduce the computational load through the coordinate-system conversion, avoid accumulated errors, achieve high-precision machining, avoid insufficient machining precision caused by insufficient robot precision, and greatly reduce the compensation computation otherwise needed to compensate the robot's motion precision.
An attitude sensor is arranged inside each teaching construction module, and there are at least two teaching construction modules: at least one is arranged on the left, right, front, or rear side of the teaching worktable surface, and at least one is arranged above it. At least one teaching construction module is called the global construction module, and the monitoring range of its teaching construction camera covers all the other teaching construction modules. After each robot basic datum is formed in step c), the teaching construction camera of a teaching construction module able to monitor the teaching robot photographs it, forming robot posture image data in one-to-one correspondence with the robot basic data. This arrangement prevents interruptions in monitoring the robot and allows multiple processing positions of the robot to be located; during processing, a real-time picture from a processing construction camera can be compared with the robot posture image data, enabling rapid robot position navigation and autonomous posture adjustment.
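One way to read that navigation step, with an assumed database mapping robot basic data keys to the stored posture photos: the live construction-camera frame is matched against the stored images, and a match within tolerance identifies which robot basic datum the robot is currently at.

```python
def locate_robot(live_frame, posture_image_db, tolerance):
    """posture_image_db: {robot_basic_data_key: stored posture photo}.
    Returns the key of the best-matching stored photo, or None if no photo
    matches within tolerance. Reuses the image_deviation metric sketched above."""
    best_key, best_dev = None, float("inf")
    for key, photo in posture_image_db.items():
        dev = image_deviation(live_frame, photo)
        if dev < best_dev:
            best_key, best_dev = key, dev
    return best_key if best_dev <= tolerance else None
```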
A processing method based on image contrast,
after all the steps of any one of the above teaching methods based on image contrast are executed, the following steps are executed:
step d1) a processing worktable surface and a processing robot arranged beside it are provided, the processing robot carrying the flow processing equipment; the workpiece is placed on the processing worktable surface, and the processing robot is driven, according to the processing track data, to move the flow processing equipment and process the workpiece.
Such a machining method is suitable for ordinary and high machining accuracy. When very high machining accuracy is required, the following method should be adopted:
a processing method based on image contrast,
after all the steps of any one of the above teaching methods based on image contrast are executed, the following steps are executed:
step d2) flow processing equipment, processing system equipment with a processing working coordinate system, and a processing robot arranged beside the processing system equipment are provided; the processing system equipment comprises a processing visual sensing module moving in the processing working coordinate system and a processing worktable surface beside it; a processing construction module, used for constructing the processing working coordinate system and recording the walking track of the processing visual sensing module, is arranged beside and/or above the processing worktable surface and has the same structure as the teaching construction module; a processing attitude sensor and a processing current camera are arranged on the processing visual sensing module; the processing current camera is a binocular camera or a multi-view camera;
the workpiece is placed on the processing worktable surface, the processing visual sensing module is fixedly connected with the flow processing equipment, and the flow processing equipment is connected with the processing robot;
step e) the workpiece is placed on the processing worktable surface, the processing construction module constructs the processing working coordinate system with the workpiece as the base point, and the processing robot drives the processing visual sensing module sequentially through the track points according to the processing track data;
after the processing visual sensing module reaches a track point of the comparison data set, the processing robot confirms or adjusts the posture of the processing visual sensing module according to the attitude data in the child data corresponding to that track point, so that its posture during processing is the same as the posture of the teaching visual sensing module at the same track point;
the processing current camera photographs the workpiece to form current image data, which are compared with the given image data in the child data corresponding to the track point at which the processing visual sensing module is located; if the comparison result is within the tolerance range, the flow processing equipment is started, or kept running, to process the workpiece; if the comparison result exceeds the tolerance range, processing of the workpiece by the flow processing equipment is suspended, the processing robot drives the processing visual sensing module to adjust its posture and position, and current image data are continuously acquired and compared with the given image data until the posture and position of the processing visual sensing module are the same as those recorded in the child data, whereupon the flow processing equipment is started to process the workpiece;
the relative position of the teaching processing equipment and the teaching visual sensing module after connection is the same as the relative position of the flow processing equipment and the processing visual sensing module after connection; the structure of the teaching system equipment is the same as that of the processing system equipment; and the structure of the teaching processing equipment is the same as that of the flow processing equipment.
Since this machining method always follows the workpiece, the processing robot can accurately complete the operations specified by the teaching program and faithfully reproduce the taught actions, regardless of which robot performs the actual processing and whether the workpiece is moving. The teaching program is also very easy to transplant: one teaching can serve multiple processing systems, and if the robot beside any processing system breaks down, the teaching program can be loaded onto a spare processing robot, which is simply installed beside the original one to resume processing. The processing construction camera lets the processing visual sensing module locate itself quickly, and the picture comparison between current image data and given image data lets the processing robot adjust the module's position and posture accurately, so the taught actions are reproduced precisely; the teaching site can even differ entirely from the processing site. This image-comparison approach automatically compensates processing errors, avoiding the heavy computational load of reducing errors algorithmically, and can greatly improve repeatability. The errors include the robot's motion errors, accumulated errors in the track data, errors caused by the robot's own vibration during processing, errors caused by wear after long use, and errors caused by deflection of the robot's arms when gripping heavy objects.
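Putting step e) together, a hedged sketch of the per-track-point control flow on the processing side; `robot`, `tool`, and `current_camera` are assumed interfaces, and the loop reuses the `image_deviation` metric and data layout sketched earlier.

```python
def process_track_point(robot, tool, current_camera, point, tolerance):
    """Machine only while the current image stays within tolerance of the taught
    given image; otherwise pause, re-servo posture and position, then resume."""
    robot.move_to(point.position)
    robot.set_posture(point.sub.attitude)        # confirm/adjust posture first
    while True:
        current = current_camera.capture()
        if image_deviation(current, point.sub.given_image) <= tolerance:
            tool.start_or_keep_running()         # within tolerance: process the workpiece
            return
        tool.pause()                             # out of tolerance: suspend machining
        robot.apply_image_correction(current, point.sub.given_image)
```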
To facilitate continuous processing, this embodiment further comprises a movable worktable, the upper surface of which is the processing worktable surface. It is precisely this workpiece-following, image-comparison teaching and machining method that makes accurate machining of a moving workpiece possible.
The teaching system equipment and the processing system equipment are the same equipment; the teaching processing equipment and the flow processing equipment are the same equipment.
The processing robot and the teaching robot are the same equipment, the processing working table surface and the teaching working table surface are the same equipment, and the teaching construction module and the processing construction module are the same equipment.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A teaching method based on image contrast is characterized in that:
the method comprises the following steps:
step a) teaching processing equipment, teaching system equipment with a teaching working coordinate system, and a teaching robot arranged beside the teaching system equipment are provided; the teaching system equipment comprises a teaching visual sensing module moving in the teaching working coordinate system and a teaching worktable surface beside it; a teaching construction module with a teaching construction camera, used for constructing the teaching working coordinate system and recording the walking track of the teaching visual sensing module, is arranged beside and/or above the teaching worktable surface; a teaching attitude sensor and a teaching given camera are arranged on the teaching visual sensing module; the teaching construction camera is a binocular camera or a multi-view camera; the teaching given camera is a binocular camera or a multi-view camera;
the workpiece is placed on the teaching worktable surface, and the teaching visual sensing module is fixedly connected with the teaching processing equipment;
step b) the teaching construction module, the teaching attitude sensor, and the teaching given camera are started, and an operator holds the teaching processing equipment to process the workpiece; the teaching construction camera of the teaching construction module records the track of the teaching visual sensing module in the teaching working coordinate system, forming track data comprising a plurality of track points; the teaching given camera photographs the workpiece to form given image data; the teaching attitude sensor senses the attitude of the teaching visual sensing module to form attitude data; the track data together with the given image data and the attitude data are sent to a programming system to form a comparison data set, in which the track points of the track data serve as mother data, each mother datum corresponds to a group of child data, and the child data comprise the given image data and attitude data of the teaching visual sensing module at the position of each track point;
step c) the teaching processing equipment with the teaching visual sensing module is connected to the teaching robot, and the track data guide the teaching robot to drive the teaching visual sensing module sequentially through the actual space points determined by all the mother data; on reaching the position of each mother datum, the posture of the teaching visual sensing module is adjusted to a given posture set according to the attitude data of the child data corresponding to that mother datum; calling two adjacent mother data mother data M and mother data N, before the teaching visual sensing module travels from the position of mother data M to the position of mother data N, the teaching given camera photographs the workpiece to form current image data, the current image data are compared with the given image data in the child data corresponding to mother data M to obtain a given image deviation, and the teaching robot is driven according to the given image deviation to adjust the position and posture of the teaching visual sensing module until the comparison deviation between the current image data and the given image data in the child data corresponding to mother data M is within a tolerance range; the robot position and posture data of the current teaching robot are then recorded and called robot basic data; all the robot basic data are integrated to form the processing track data;
in step c), while the teaching robot drives the teaching visual sensing module from the position of mother data M toward the position of mother data N, the position of the teaching visual sensing module changes as follows:
process a) the current position of the teaching visual sensing module is P; the current image data at P are obtained and compared with the given image data in the child data corresponding to mother data M to generate the given image deviation, and process b) is executed;
process b) the given image deviation is judged; if it is beyond the tolerance range, process c) is executed; if it is within the tolerance range, the robot position and posture data of the current teaching robot are recorded, and the teaching robot is driven to move the teaching visual sensing module to the position determined by mother data N;
process c) the teaching robot drives the teaching visual sensing module to adjust its position and/or posture according to the given image deviation, and process a) is then executed;
the method further comprising a step a1) provided after step a), a step b1) provided after step b) and after step a1), and a step c1) provided before step c) and after step b1);
step a1) the teaching processing equipment with the teaching visual sensing module fixed to it is used as modeling equipment and mounted on the teaching robot; a plurality of constraint points are arranged around the workpiece, and the teaching robot, working in its own robot coordinates, drives the modeling equipment through the constraint points in sequence along a specified path and with specified postures while photographing the workpiece, so that every track point of the specified path carries three-dimensional basic data comprising posture information and image information; all the three-dimensional basic data are integrated, thereby generating a real-object three-dimensional model of the workpiece;
step b1) all the attitude data in the mother data and/or child data are visualized as points/lines and presented together with the real-object three-dimensional model; the visualized mother data and/or child data are called simulation data, and part of the simulation data is adjusted and/or simulation data are added or removed;
and step c1) the teaching robot drives the modeling equipment so that, in the robot coordinates of the teaching robot, it passes through the actual space points determined by the simulation data, these points being called simulation points; the posture of the modeling equipment at each simulation point is determined by the simulation data, so that the mother data in the teaching working coordinate system, and the child data corresponding to the mother data, are regenerated.
2. An image contrast based teaching method according to claim 1, wherein: an attitude sensor is arranged inside each teaching construction module, and there are at least two teaching construction modules: at least one is arranged on the left, right, front, or rear side of the teaching worktable surface, and at least one is arranged above it; at least one teaching construction module is called the global construction module, and the monitoring range of its teaching construction camera covers all the other teaching construction modules; after each robot basic datum is formed in step c), the teaching construction camera of a teaching construction module able to monitor the teaching robot photographs it, forming robot posture image data in one-to-one correspondence with the robot basic data.
3. A processing method based on image contrast is characterized in that:
after performing all the steps of an image contrast based teaching method according to any of claims 1 to 2, the following steps are performed:
step d1) a processing worktable surface and a processing robot arranged beside it are provided, the processing robot carrying the flow processing equipment; the workpiece is placed on the processing worktable surface, and the processing robot is driven, according to the processing track data, to move the flow processing equipment and process the workpiece.
4. A processing method based on image contrast is characterized in that:
after performing all the steps of an image contrast based teaching method according to any of claims 1 to 2, the following steps are performed:
step d2) flow processing equipment, processing system equipment with a processing working coordinate system, and a processing robot arranged beside the processing system equipment are provided; the processing system equipment comprises a processing visual sensing module moving in the processing working coordinate system and a processing worktable surface beside it; a processing construction module, used for constructing the processing working coordinate system and recording the walking track of the processing visual sensing module, is arranged beside and/or above the processing worktable surface and has the same structure as the teaching construction module; a processing attitude sensor and a processing current camera are arranged on the processing visual sensing module; the processing current camera is a binocular camera or a multi-view camera;
the workpiece is placed on the processing worktable surface, the processing visual sensing module is fixedly connected with the flow processing equipment, and the flow processing equipment is connected with the processing robot;
step e) the workpiece is placed on the processing worktable surface, the processing construction module constructs the processing working coordinate system with the workpiece as the base point, and the processing robot drives the processing visual sensing module sequentially through the track points according to the processing track data;
after the processing visual sensing module reaches a track point of the comparison data set, the processing robot confirms or adjusts the posture of the processing visual sensing module according to the attitude data in the child data corresponding to that track point, so that its posture during processing is the same as the posture of the teaching visual sensing module at the same track point;
the processing current camera photographs the workpiece to form current image data, which are compared with the given image data in the child data corresponding to the track point at which the processing visual sensing module is located; if the comparison result is within the tolerance range, the flow processing equipment is started, or kept running, to process the workpiece; if the comparison result exceeds the tolerance range, processing of the workpiece by the flow processing equipment is suspended, the processing robot drives the processing visual sensing module to adjust its posture and position, and current image data are continuously acquired and compared with the given image data until the posture and position of the processing visual sensing module are the same as those recorded in the child data, whereupon the flow processing equipment is started to process the workpiece;
the relative position of the teaching processing equipment and the teaching visual sensing module after connection is the same as the relative position of the flow processing equipment and the processing visual sensing module after connection; the structure of the teaching system equipment is the same as that of the processing system equipment; and the structure of the teaching processing equipment is the same as that of the flow processing equipment.
5. An image contrast based processing method according to claim 4, characterized in that: it further comprises a movable worktable, the upper surface of which is the processing worktable surface.
6. An image contrast based processing method according to claim 4, characterized in that: the teaching system equipment and the processing system equipment are the same equipment; the teaching processing equipment and the flow processing equipment are the same equipment.
7. An image contrast based processing method according to claim 4, characterized in that: the processing robot and the teaching robot are the same equipment, the processing working table surface and the teaching working table surface are the same equipment, and the teaching construction module and the processing construction module are the same equipment.
CN201810319631.8A 2018-04-11 2018-04-11 Teaching and processing method based on image contrast Active CN108656120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810319631.8A CN108656120B (en) 2018-04-11 2018-04-11 Teaching and processing method based on image contrast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810319631.8A CN108656120B (en) 2018-04-11 2018-04-11 Teaching and processing method based on image contrast

Publications (2)

Publication Number Publication Date
CN108656120A CN108656120A (en) 2018-10-16
CN108656120B (en) 2020-10-30

Family

ID=63783299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810319631.8A Active CN108656120B (en) 2018-04-11 2018-04-11 Teaching and processing method based on image contrast

Country Status (1)

Country Link
CN (1) CN108656120B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109500812A (en) * 2018-11-13 2019-03-22 上海智殷自动化科技有限公司 A kind of robotic programming method positioned in real time by visual pattern
CN111452039B (en) * 2020-03-16 2022-05-17 华中科技大学 Robot posture adjusting method and device under dynamic system, electronic equipment and medium
CN111899629B (en) * 2020-08-04 2022-06-10 菲尼克斯(南京)智能制造技术工程有限公司 Flexible robot teaching system and method


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103406905A (en) * 2013-08-20 2013-11-27 西北工业大学 Robot system with visual servo and detection functions
CN106327569A (en) * 2015-06-30 2017-01-11 遵义林棣科技发展有限公司 Three-dimensional modeling method of digital controlled lathe workpiece
CN107309882A (en) * 2017-08-14 2017-11-03 青岛理工大学 Robot teaching programming system and method

Also Published As

Publication number Publication date
CN108656120A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
US11345042B2 (en) Robot system equipped with video display apparatus that displays image of virtual object in superimposed fashion on real image of robot
CN112122840B (en) Visual positioning welding system and welding method based on robot welding
CN110039520B (en) Teaching and processing system based on image contrast
CN108656120B (en) Teaching and processing method based on image contrast
Pan et al. Recent progress on programming methods for industrial robots
CN103406905B (en) Robot system with visual servo and detection functions
JP4347386B2 (en) Processing robot program creation device
JP4021413B2 (en) Measuring device
WO2015120734A1 (en) Special testing device and method for correcting welding track based on machine vision
US11951575B2 (en) Automatic welding system and method for large structural parts based on hybrid robots and 3D vision
CN109159151A (en) A kind of mechanical arm space tracking tracking dynamic compensation method and system
CN104325268A (en) Industrial robot three-dimensional space independent assembly method based on intelligent learning
EP3407088A1 (en) Systems and methods for tracking location of movable target object
CN105094049B (en) Learning path control
CN105572130A (en) Touch screen terminal test method and device
CN105607651B (en) A kind of quick vision guide alignment system and method
JP2018001393A (en) Robot device, robot control method, program and recording medium
CN107340788A (en) Industrial robot field real-time temperature compensation method based on visual sensor
CN111482964A (en) Novel robot hand-eye calibration method
CN114536346B (en) Mechanical arm accurate path planning method based on man-machine cooperation and visual detection
CN112958974A (en) Interactive automatic welding system based on three-dimensional vision
TWI812078B (en) Dual-arm robot assembling system
CN105479431A (en) Inertial navigation type robot demonstration equipment
CN113618367B (en) Multi-vision space assembly system based on seven-degree-of-freedom parallel double-module robot
CN108673514B (en) Target teaching and processing method based on image contrast

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant