CN110039520B - Teaching and processing system based on image contrast - Google Patents

Teaching and processing system based on image contrast

Info

Publication number: CN110039520B
Authority: CN (China)
Prior art keywords: data, building, robot, module, processing
Legal status: Active (assumed; not a legal conclusion)
Application number: CN201910267314.0A
Other languages: Chinese (zh)
Other versions: CN110039520A (en)
Inventors: 陈小龙 (Chen Xiaolong), 陈诗琪 (Chen Shiqi)
Current Assignee: Individual
Original Assignee: Individual
Application filed by: Individual
Publication of application: CN110039520A
Publication of grant: CN110039520B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/021: Optical sensing devices
    • B25J19/023: Optical sensing devices including video camera means
    • B25J9/00: Programme-controlled manipulators
    • B25J9/0081: Programme-controlled manipulators with master teach-in means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The invention discloses a teaching and processing system based on image comparison, comprising: a robot provided with a clamping part and having a mechanical coordinate system; a visual sensing module comprising a body movable in a working coordinate system, the body carrying a first attitude sensor, a positioning camera and a structured light generator; and a building module carrying a second attitude sensor, a building camera and a structured light generator. A working table surface is arranged to the right of the robot, and the building module is arranged at the side of and/or above the working table surface to build the working coordinate system and record the walking track of the visual sensing module. The invention provides a teaching and processing system with high repetition accuracy and machining accuracy.

Description

Teaching and processing system based on image contrast
Technical Field
The invention relates to the field of robots, in particular to a teaching and processing system based on image comparison.
Background
Teaching programming for current industrial robots requires operators who are very familiar with the robot and have mastered the relevant programming knowledge, so training such operators takes considerable time and money. This raises both the threshold and the cost of applying industrial robots.
At present most vision systems address applications in a single professional field, and the vision systems used for teaching programming in industrial robot systems suffer from low repeatability accuracy, a heavy computational load, slow response and susceptibility to occlusion.
Disclosure of Invention
The present invention aims to solve at least one of the problems in the prior art. Its object is to provide a teaching and machining system based on image comparison with high machining accuracy.
According to a first aspect of the present invention, there is provided an image contrast based teaching system comprising:
a robot provided with a clamping part and having a mechanical coordinate system;
a visual sensing module movable in a working coordinate system and comprising a body, the body carrying a first attitude sensor, a positioning camera and a structured light generator, wherein the positioning camera is a binocular or multi-view camera measuring module;
a building module carrying a second attitude sensor, a building camera and a structured light generator, wherein a working table surface is arranged to the right of the robot, and the building module is arranged at the side of and/or above the working table surface to build the working coordinate system and record the walking track of the visual sensing module.
The teaching system based on image comparison has at least the following beneficial effects. The body of the visual sensing module carries a first attitude sensor, a positioning camera and a structured light generator. Because of the first attitude sensor, when the visual sensing module faces a workpiece during a teaching pass, every track point of the module can be associated with attitude data and image data. The coordinate data of a track point is called parent data, and the corresponding attitude data and image data are called child data; combining each parent datum with its child data allows the workpiece to be physically modelled. The visual sensing module can also be mounted on a robot, which drives the module along a machining pass over the workpiece; the position during this pass is judged by comparing the attitude and image data obtained in real time against the recorded child data. When teaching with the system, a workpiece is first machined with a hand-held machining device to obtain standard attitude data and image data. When the robot is then taught, the machining device is attached to the robot's clamping part, current attitude data and image data are acquired in real time by the positioning camera and first attitude sensor of the visual sensing module, and the positional relation between the machining device and the workpiece is adjusted according to the error between the current data and the standard data until that error falls within an acceptable tolerance.
Thus, with the teaching system based on image comparison provided by the first aspect of the invention, the positional relation between the machining device and the workpiece can be adjusted in real time from the comparison of image and attitude data while the robot is taught, giving the machining device high repetition accuracy and improving machining accuracy.
As a further improvement of the above scheme, the system further comprises a machining device fixedly connected with the visual sensing module and used to machine the workpiece placed on the working table surface.
While the machining device machines a workpiece: the building camera of the building module records the track of the visual sensing module in the working coordinate system to form track data comprising a plurality of track points; the positioning camera of the visual sensing module photographs the workpiece to form image data, and the first attitude sensor senses the attitude of the visual sensing module to form attitude data.
As a further improvement of the above scheme, the system further comprises a programming device configured to combine the track data with the image data and attitude data into a comparison data set. The comparison data set takes the track points of the track data as parent data; each parent datum corresponds to a group of child data, and the child data comprise the image data and attitude data of the visual sensing module at the position of that track point.
When a machining device is connected to a robot and the robot is taught: the track data contained in the parent data of the comparison data set guide the robot to drive the visual sensing module through the actual space points determined by all parent data in sequence; when the visual sensing module reaches the position of each parent datum, the robot adjusts the position of the module relative to the workpiece according to the image data and attitude data of the corresponding child data, the robot pose data at the current position are recorded, and the robot pose data at all such points are combined to form machining track data.
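The parent/child organisation of the comparison data set can be sketched as follows. This is an illustrative Python sketch, not part of the patent text; all class and field names are assumed.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChildData:
    """Child data captured at one track point: the positioning-camera image
    and the attitude read from the first attitude sensor."""
    image: List[List[float]]               # stand-in for the photographed frame
    attitude: Tuple[float, float, float]   # e.g. roll, pitch, yaw

@dataclass
class ParentData:
    """Parent data: the coordinates of one track point in the working frame."""
    xyz: Tuple[float, float, float]
    child: ChildData

@dataclass
class ComparisonDataSet:
    """Track points as parent data, each paired with one group of child data."""
    points: List[ParentData] = field(default_factory=list)

    def add(self, xyz, image, attitude):
        self.points.append(ParentData(xyz, ChildData(image, attitude)))
```

During teaching, one `add` call per track point records the parent datum together with its child data; during playback the robot walks the `points` list in order.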
As a further improvement of the scheme, the system comprises at least two building modules: at least one arranged on the left, right, front or rear side of the working table surface, and at least one arranged above it.
As a further improvement of the above scheme, each building module contains a second attitude sensor. Of the at least two building modules, at least one is arranged on the left, right, front or rear side of the working table surface and at least one above it; the latter is called the global building module, and the monitoring range of its building camera covers all other building modules and intersects the monitoring ranges of their building cameras.
As a further improvement of the above scheme, when the robot moves from the monitoring area of the global building module into a range where that area intersects the monitoring areas of other building modules, the building module spatially closest to the robot among all building modules is used to build the working coordinate system and record the walking track of the visual sensing module.
As a further improvement of the scheme, the system also comprises a movable worktable whose upper surface is the working table surface.
As a further improvement of the above solution, the first attitude sensor is a gyroscope.
As a further improvement of the above scheme, the building camera is a binocular camera measuring module or a multi-view camera measuring module.
According to a second aspect of the present invention, there is provided an image contrast based processing system comprising an image contrast based teaching system according to the first aspect above;
and the robot drives the machining device to machine the workpiece on the working table surface according to the machining track data.
The processing system based on image comparison has at least the following beneficial effects: it comprises the teaching system based on image comparison of the first aspect, and once that teaching system has acquired machining track data by image comparison, the robot drives the machining device to machine the workpiece placed on the working table surface according to those data, so machining accuracy is high.
Drawings
To illustrate the technical solution in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The described figures show only some embodiments of the invention, not all of them, and a person skilled in the art can derive other designs and figures from them without inventive effort.
FIG. 1 is a schematic structural diagram of an embodiment of the present invention;
FIG. 2 is a flow chart of a method of using an embodiment of the processing system of the present invention.
In the figures: 1 - clamping part; 2 - building module; 3 - visual sensing module; 4 - movable worktable.
Detailed Description
The conception, specific structure and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that its objects, features and effects may be fully understood. The described embodiments are only some of the embodiments of the invention, not all of them; other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the invention. In addition, the couplings/connections mentioned herein do not necessarily mean direct connection: coupling accessories may be added or removed to form a better coupling structure according to the specific implementation. The technical features of the invention may be combined with one another provided they do not conflict.
Referring to fig. 1, in a first aspect of the present invention, there is provided an image contrast-based teaching system, comprising:
a robot provided with a clamping part and having a mechanical coordinate system;
the visual sensing module 3 comprises a body capable of moving in a working coordinate system, and the body is provided with a first attitude sensor, a positioning camera and a structured light generator; the first attitude sensor employed in this implementation is a gyroscope.
The construction module 2 is provided with a second attitude sensor, a construction camera and a structured light generator, a working table is arranged on the right side of the robot, and the construction module 2 is arranged beside and/or above the working table and used for constructing the working coordinate system and inputting the walking track of the visual sensing module 3;
the processing equipment is fixedly connected with the visual sensing module 3 and is used for processing a workpiece placed on the working table;
the processing equipment is used for processing a workpiece: the construction camera of the construction module 2 is used for recording the track of the visual sensing module 3 in the working coordinate system to form track data comprising a plurality of track points; the positioning camera of the visual sensing module 3 is used for photographing a workpiece to form image data, and the first attitude sensor of the visual sensing module 3 is used for sensing the attitude of the visual sensing module 3 to form attitude data.
The process of acquiring track data, image data and attitude data with the image comparison based teaching system of the invention is described in the following specific embodiment (for ease of distinction, the image data acquired by the positioning camera in the two states are named focusing data and contrast data respectively):
the robot has a mechanical coordinate system, is equipped with clamping part 1 on the robot, still includes the vision response module 3 that removes in the work coordinate system, and vision response module 3 includes the body, is equipped with first attitude sensor, location camera, structured light generator on the body, the location camera is two mesh cameras measurement module or many mesh cameras measurement module. Because the first attitude sensor is arranged, when the visual sensing module 3 faces a workpiece and teaching wandering is carried out, each track point on the visual sensing module 3 can be corresponding to attitude data and image data, the coordinate data of the track point is called as mother data, the corresponding attitude data and image data are called as subdata, the workpiece can be subjected to real object modeling by integrating the mother data and the subdata corresponding to the mother data, the visual sensing module 3 can also be installed on a robot, the robot drives the visual sensing module 3 to process wandering facing the workpiece, and the position judgment of the processing wandering is based on the comparison results of the attitude data, the image data and the subdata which are obtained in real time during the processing wandering. The right side of robot is equipped with table surface, and table surface's side and/or table surface's top are equipped with and are used for the construction work coordinate system, type into the construction module 2 of the walking orbit of vision response module 3, be equipped with second attitude sensor, construction camera, structured light generator on the construction module 22. 
In use, the building module 2 is started and the workpiece serves as the reference for the working coordinate system. The visual sensing module 3 is fixed on the machining device so that the shooting direction of the positioning camera coincides with the machining direction. The machining device is then operated by hand to teach-machine the workpiece: the building module 2 records the track of the visual sensing module 3 (or of the machining device) to form track data, the positioning camera continuously photographs the workpiece to form focusing data, and the first attitude sensor simultaneously records the attitude data of the visual sensing module 3. Once the robot's control system has the track, focusing and attitude data, the machining device with the visual sensing module 3 is fixed on the clamping part 1, and the robot drives it through the working coordinate system following the track data. Meanwhile the machining device is brought to the taught attitude using the attitude data, the positioning camera photographs the workpiece to form contrast data, and the control system compares the contrast data with the focusing data and finely adjusts the machining device, achieving accurate positioning. The robot does not need to be dragged during teaching, so teaching is flexible; because the coordinate system is based on the workpiece, machining works well whether or not the workpiece is moving; and because the robot's mechanical coordinates take no part in teaching or positioning, a standby robot can take over seamlessly after a failure without re-teaching.
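The manual teaching pass described above, in which the building camera streams track points while the positioning camera and attitude sensor supply the matching child data, can be sketched as follows. Every interface name here is hypothetical; the sketch only illustrates the synchronised recording.

```python
def record_teaching_pass(track_point_stream, take_photo, read_attitude, add_point):
    """Illustrative sketch of the hand-held teaching pass. The callables are
    assumed interfaces: track_point_stream yields track points from the
    building camera, take_photo() returns the positioning camera's focusing
    data, read_attitude() reads the first attitude sensor, and
    add_point(xyz, image, attitude) stores one parent datum with child data."""
    n = 0
    for xyz in track_point_stream:
        image = take_photo()        # focusing data at this track point
        attitude = read_attitude()  # attitude of the visual sensing module
        add_point(xyz, image, attitude)
        n += 1
    return n                        # number of track points recorded
```

The key property is that each track point, photo and attitude reading are captured together, so every parent datum gets a consistent group of child data.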
Building on this embodiment, so that the robot can enter practical production after teaching, the system further comprises a programming device that combines the track data with the image and attitude data into a comparison data set. The comparison data set takes the track points of the track data as parent data; each parent datum corresponds to a group of child data, and the child data comprise the image data and attitude data of the visual sensing module 3 at the position of that track point.
When a machining device is connected to a robot and the robot is taught: the track data contained in the parent data of the comparison data set guide the robot to drive the visual sensing module 3 through the actual space points determined by all parent data in sequence; when the visual sensing module 3 reaches the position of each parent datum, the robot adjusts the position of the module relative to the workpiece according to the image data and attitude data of the corresponding child data, the robot pose data at the current position are recorded, and the robot pose data at all such points are combined to form machining track data.
To help those skilled in the art obtain machining track data with the image comparison based teaching system of the invention, the following embodiment is provided, comprising the following steps (for distinction, the image data acquired in the two states are named given image data and current image data respectively):
Step a) Prepare a machining device, teaching system equipment with a teaching working coordinate system, and a robot arranged beside the equipment. The teaching system equipment comprises a visual sensing module 3 that moves in the teaching working coordinate system; a working table surface is arranged beside the module, and a building module 2 with a building camera, used to build the teaching working coordinate system and record the walking track of the visual sensing module 3, is arranged beside and/or above the working table surface. The visual sensing module 3 carries a first attitude sensor and a given camera; both the building camera and the given camera are binocular or multi-view cameras.
Place a workpiece on the working table surface and fixedly connect the visual sensing module 3 with the machining device.
Step b) Start the building module 2, the first attitude sensor and the given camera, and have an operator hold the machining device and machine the workpiece. The building camera of the building module 2 records the track of the visual sensing module 3 in the teaching working coordinate system; as the module moves, the first attitude sensor synchronously generates attitude data, so that track data containing a plurality of track points are formed. The given camera photographs the workpiece to form given image data, and the first attitude sensor senses the attitude of the visual sensing module 3 to form attitude data. The track data, given image data and attitude data are sent to the programming device to form a comparison data set, in which the track points serve as parent data and each parent datum corresponds to a group of child data comprising the given image data and attitude data of the visual sensing module 3 at that track point.
Step c) Connect the machining device with the visual sensing module 3 to the robot. The track data guide the robot to drive the visual sensing module 3 through the actual space points determined by all parent data in sequence; when the visual sensing module 3 reaches the position of each parent datum, its attitude is adjusted to a given attitude set from the attitude data of the corresponding child data. Call two adjacent parent data M and N. Before the visual sensing module 3 travels from the position of parent datum M to that of parent datum N, the given camera photographs the workpiece to form current image data, which are compared with the given image data in the child data of parent datum M to obtain a given-image deviation. The robot is driven to adjust the position of the visual sensing module 3 relative to the workpiece according to this deviation and the taught attitude deviation, until the deviation between the current image data and the given image data of parent datum M is within the tolerance range; the robot pose data at that moment, called robot basic data, are then recorded. All robot basic data are combined to form machining track data.
Because the visual sensing module 3 carries a first attitude sensor, every track point of the module can be associated with attitude data and image data while the module faces the workpiece during the teaching pass. After hand teaching is complete, the machining device with the visual sensing module 3 is mounted on the robot, and the conversion from the working coordinate system to the robot coordinate system is achieved by image-comparison confirmation. This greatly reduces both the professional demands on operators and the computational load of the data conversion: operators who do not understand teaching programming can generate machining track data simply by following the steps above, and the track in step c) is established directly in the robot's own coordinates, so it can drive the robot directly.
The comparison data set, the robot pose data and the robot basic data can be read and modified by following the moving track of the machining robot, so the data are easily transplanted to other robots for machining. This avoids the prior-art requirement of teaching each robot at least once.
In step c), as the robot drives the visual sensing module 3 from the position of parent datum M to the position of parent datum N, the position of the visual sensing module 3 changes as follows:
Process a) With the visual sensing module 3 at position P, acquire the current image data at P and compare them with the given image data in the child data corresponding to parent datum M to generate the given-image deviation, then execute process b);
Process b) Judge the given-image deviation: if it is outside the tolerance range, execute process c); if it is within the tolerance range, record the current robot pose data and drive the robot to move the visual sensing module 3 to the position determined by parent datum N;
Process c) Let the robot adjust the position and/or attitude of the visual sensing module 3 according to the given-image deviation, then return to process a).
This adjustment scheme achieves rapid adjustment of the robot pose while keeping the computational burden on the system small.
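Processes a) to c) form a simple closed-loop correction. A minimal sketch follows, assuming the image is abstracted to a feature vector so that the given-image deviation reduces to a component-wise difference (a real system compares photographs, and the three callables are hypothetical camera/robot interfaces):

```python
def fine_position(capture, adjust, record_pose, given_image, tol, max_iters=100):
    """Iterate processes a)-c) until the given-image deviation at one parent
    point is within tolerance, then record and return the robot pose data."""
    for _ in range(max_iters):
        current = capture()                                  # process a): photograph at P
        dev = [c - g for c, g in zip(current, given_image)]  # given-image deviation
        if max(abs(d) for d in dev) <= tol:                  # process b): within tolerance?
            return record_pose()     # record robot pose data, move on to parent N
        adjust(dev)                  # process c): correct position/attitude, retry
    raise RuntimeError("fine positioning did not converge")
```

Because each iteration moves the module against the measured deviation, the loop converges whenever `adjust` reduces the deviation by some fraction per step.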
Building on this embodiment, because the first attitude sensor directly measures the attitude data and the attitude is adjusted from them, the robot only needs to translate vertically, horizontally and back-and-forth during image comparison; a large amount of complex computation is thereby avoided and the system runs efficiently.
Preferably, to prevent the workpiece from occluding the machining device, the system comprises at least two building modules 2: at least one arranged on the left, right, front or rear side of the working table surface, and at least one above it. Specifically, each building module 2 contains a second attitude sensor; the module above the working table surface is called the global building module 2, and the monitoring range of its building camera covers all other building modules 2 and intersects their monitoring ranges. After the robot basic data are formed in step c), the building camera of the building module 2 monitoring the robot can photograph it, forming robot attitude image data in one-to-one correspondence with the robot basic data. This arrangement prevents interruption of the monitoring of the robot, allows several machining positions of the robot to be located, and during machining lets a real-time picture from the camera be compared against the robot attitude image data, achieving fast navigation of the robot's position and adjustment of its attitude.
From the above embodiment it can be seen that during teaching the global building module and the other building modules work globally in synchrony, i.e. the building cameras of all modules photograph the robot and generate corresponding data (or locally in synchrony, i.e. only the modules that can observe the robot do so). To improve the robot's repetition accuracy through the cooperation of several building modules, and to select the best data from those the modules generate, the following aspects must be considered to obtain an optimal selection policy:
the first is the position relation: the positional relationship between the global building block and the other building blocks can be obtained by measuring data of three or more points of different straight lines on the monitoring intersection range, calculating the positional relationship between the three or more points, and correcting the deviation of the second attitude sensor. Thus, the robot pose image data monitored by the global building block and other building blocks are universal and interchangeable, with only a distinction in accuracy.
Second, the work requirements: the global building module and the other building modules should together cover the robot's activity space. If the end of the robot cannot be observed for a prolonged time, the system raises an alarm and, under the supervision of the global building module, moves the robot to a safe parking position according to a preset instruction.
Considering these two points yields the selection strategy for the robot posture image data. Because of the positional relationship, the data accuracy of the other building modules is higher than that of the global building module, so all positions requiring high accuracy are monitored by the other building modules during teaching. 1. In the high-precision working mode, when the robot enters the intersection of the monitoring areas of the global building module and another building module from the global building module's area, the high-precision data of the other building module are adopted automatically, and are used as long as the robot remains within that module's monitoring area; the data of the global building module are adopted again only after the robot leaves the monitoring areas of the other building modules. 2. In the high-efficiency working mode, the robot automatically adopts the data of the global building module; only when it leaves the global building module's monitoring area are the high-precision data of the other building modules adopted. 3. In other working modes, whether to use the data of the global building module or the high-precision data of the other building modules is set manually as required.
The above strategy covers the case in which the robot enters the intersection of the monitoring areas of one other building module and the global building module; when the robot enters the monitoring areas of two or more other building modules, the system automatically selects the highest-precision data, or the data source is set manually. In general, the building module spatially closest to the robot is defined as having the highest accuracy. When the robot enters, from the global building module's monitoring area, the range where that area intersects the monitoring areas of the other building modules, the building module closest to the robot is used to build the working coordinate system and record the walking track of the visual sensing module. Arranging multiple building modules in this way solves the problem of the robot being shielded during re-teaching, and the selection strategy greatly improves the accuracy of the data obtained by the building modules, improving the repetition precision of the robot during teaching.
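The mode-dependent selection strategy above can be sketched as a small policy function. The class fields and mode names here are illustrative assumptions; the patent describes the behavior but not a data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuildingModule:
    name: str
    is_global: bool
    distance_to_robot: float  # current spatial distance to the robot
    sees_robot: bool          # robot is inside this module's monitoring area

def select_module(modules, mode, manual_choice=None) -> Optional[BuildingModule]:
    """Pick which building module's data to adopt, following the selection
    strategy described above (fields and mode names are illustrative)."""
    local = [m for m in modules if m.sees_robot and not m.is_global]
    global_ = next((m for m in modules if m.sees_robot and m.is_global), None)
    if mode == "high_precision":
        # Prefer the high-precision local modules; among several, the one
        # spatially closest to the robot counts as most accurate.
        if local:
            return min(local, key=lambda m: m.distance_to_robot)
        return global_  # fall back only once all local areas are left
    if mode == "high_efficiency":
        # Prefer the global module; local data only outside its area.
        if global_:
            return global_
        return min(local, key=lambda m: m.distance_to_robot) if local else None
    # Other working modes: the data source is set manually as required.
    return manual_choice
```

The closest-module tie-break implements the rule that, among several overlapping local modules, the one nearest the robot supplies the highest-precision data.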
Preferably, the building module 2 is provided with a second attitude sensor, a building camera and a structured light generator. The building camera is a binocular or multi-view camera measuring module, so the three-dimensional coordinate data of the processing equipment can be obtained by comparing the images from the two (or more) cameras.
In order to facilitate continuous processing, the present embodiment further includes a movable working table 4, the upper surface of which is the working table surface. It is precisely by comparing images of the workpiece with the teaching system of the present invention that accurate machining of a moving workpiece is achieved.
As shown in fig. 2, on the basis of the above embodiment, the second aspect of the present invention further provides a processing system based on image contrast. The processing system uses the robot and processing device of the image-contrast-based teaching system of the first aspect to process a workpiece placed on the working table: after the teaching system acquires the processing trajectory data, the robot connected with the processing device drives the processing device to process the workpiece according to that data, so the facilities used by the processing system are the same as those of the teaching system.
The specific embodiment comprises the following steps:
step d1) a working table is provided, a robot is arranged beside it, a processing device is mounted on the robot, a workpiece is placed on the working table, and the robot is driven to move the processing device so as to process the workpiece according to the processing track data.
Such a machining method is suitable for general and high machining accuracy; when very high machining accuracy is required, the following method is adopted:
step d2) a processing device is provided, together with a processing system apparatus having a working coordinate system and a processing robot arranged beside it. The processing system apparatus comprises a visual sensing module 3 that moves within the working coordinate system and a working table beside the visual sensing module 3; a building module 2, used to build the working coordinate system and record the walking track of the visual sensing module 3, is arranged beside and/or above the working table; the visual sensing module 3 is provided with a first attitude sensor and a given camera, the given camera being a binocular or multi-view camera;
a workpiece is placed on the working table, the visual sensing module 3 is fixedly connected with the processing device, and the processing device is connected with the robot;
step e) with the workpiece placed on the working table, the building module 2 builds a working coordinate system taking the workpiece as a base point, and the robot drives the visual sensing module 3 through the track points in sequence according to the processing track data;
after the visual sensing module 3 reaches a track point of the comparison data set, the robot confirms or adjusts the posture of the visual sensing module 3 according to the posture data in the sub-data corresponding to that track point, so that the posture of the visual sensing module 3 during processing is the same as its posture at the same track point when the processing device was taught by hand;
the method comprises the steps that a given camera shoots images of a workpiece to form image data, the image data are compared with image data in subdata corresponding to a track point where a visual sensing module 3 is located when a human-held processing device conducts teaching, if the comparison result of the image data and the image data is within a tolerance range, the processing device is started or kept to process the workpiece, and if the comparison result of the image data and the subdata is beyond the tolerance range, the processing device is suspended from processing the workpiece, a processing robot drives the visual sensing module 3 to adjust the posture and the position, the given camera is continuously allowed to obtain current image data, the newly obtained current image data is subjected to image comparison with the image data when the human-held processing device conducts teaching, so that the posture and the position of the current visual sensing module 3 are the same as those of the subdata, and then the processing device is started to process the workpiece;
Because this machining method always follows the workpiece, the processing robot can accurately complete the operations specified by the teaching program and can faithfully reproduce the taught operations, regardless of which robot is used on the actual line and whether the workpiece is moving. Moreover, the teaching program is very easy to transplant: a single teaching can serve several processing systems, and if the robot beside any processing system is damaged, the teaching program can be loaded onto a standby processing robot, which is simply installed in place of the original one to resume processing. The building camera allows the visual sensing module to be located quickly, and the comparison between the current image data and the given image data allows the processing robot to make accurate position and posture adjustments to the visual sensing module, so that the taught actions are reproduced precisely; the teaching site can even differ from the processing site. With this image-comparison method, processing errors are compensated automatically, avoiding the heavy computational load of reducing errors algorithmically, and the repetition precision can be greatly improved. These errors include the motion errors of the robot, the accumulated errors of the track data, errors caused by the robot's own vibration during processing, errors caused by wear after long use, and errors caused by deflection of the robot's arms when it grips heavy objects.
In order to facilitate continuous processing, the present embodiment further includes a movable working table 4, the upper surface of which is the processing table surface. It is precisely the image-comparison-based tracking teaching and machining method that makes accurate machining of a moving workpiece possible.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image-contrast-based teaching system, comprising:
a robot provided with a clamping part and having a mechanical coordinate system;
the visual sensing module can move in a working coordinate system and comprises a body, wherein the body is provided with a first attitude sensor, a positioning camera and a structured light generator, and the positioning camera is a multi-camera measuring module;
the building module is provided with a second attitude sensor, a building camera and a structured light generator, a working table is arranged on the right side of the robot, and the building module is arranged beside and/or above the working table and used for building the working coordinate system and recording the walking track of the visual sensing module;
the processing equipment is fixedly connected with the visual sensing module and is used for processing the workpiece placed on the working table; the processing equipment is used for processing a workpiece: the construction camera of the construction module is used for recording the track of the visual sensing module in a working coordinate system to form track data comprising a plurality of track points; the positioning camera of the visual sensing module is used for photographing a workpiece to form image data, and the first attitude sensor of the visual sensing module is used for sensing the attitude of the visual sensing module to form attitude data;
the programming device is used for forming a comparison data set from track data with image data and posture data, the comparison data set takes track points of the track data as parent data, each parent data corresponds to a group of subdata, and the subdata comprises the image data and the posture data of the visual sensing module at the position of each track point; when connecting a processing device to a robot and teaching the robot: the trajectory data contained in the mother data of the comparison data set is used for guiding the robot to drive the vision sensing module to sequentially pass through the actual space points determined by all the mother data; and when the vision induction module reaches the position of each parent data, driving the robot to adjust the position of the vision induction module relative to the workpiece according to the image data and the posture data of the subdata corresponding to each parent data, recording the robot posture data of the robot at the current position, and integrating the robot posture data at the positions of the actual space points corresponding to all the parent data to form processing track data.
2. An image contrast based teaching system according to claim 1, wherein: the system comprises at least two building modules, at least one building module being arranged on the left, right, front or rear side of the working table, and at least one building module being arranged above the working table.
3. An image contrast based teaching system according to claim 1, wherein: a second attitude sensor is arranged in each building module; the system comprises at least two building modules, at least one building module being arranged on the left, right, front or rear side of the working table, and at least one building module, called the global building module, being arranged above the working table, the monitoring range of the building camera of the global building module covering all the other building modules; the monitoring range of the building camera of the global building module intersects the monitoring ranges of the building cameras of all the other building modules.
4. An image contrast based teaching system according to claim 3, wherein: when the robot enters, from the monitoring area of the global building module, the range where that area intersects the monitoring areas of the other building modules, the building module spatially closest to the robot among all the building modules is used to build the working coordinate system and record the walking track of the visual sensing module.
5. An image contrast based teaching system according to claim 1, wherein: the system further comprises a movable working table, the upper surface of which is the working table surface.
6. An image contrast based teaching system according to claim 1 wherein: the first attitude sensor is a gyroscope.
7. An image contrast based teaching system according to claim 1, wherein: the building camera is a multi-view camera measuring module.
8. An image contrast-based processing system, characterized by: comprising the image-contrast-based teaching system of any one of claims 2-7;
and the robot drives the processing equipment to process the workpiece on the working table according to the processing track data.
CN201910267314.0A 2018-04-11 2019-04-03 Teaching and processing system based on image contrast Active CN110039520B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201820515520X 2018-04-11
CN201820515520 2018-04-11

Publications (2)

Publication Number Publication Date
CN110039520A CN110039520A (en) 2019-07-23
CN110039520B true CN110039520B (en) 2020-11-10

Family

ID=67296882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910267314.0A Active CN110039520B (en) 2018-04-11 2019-04-03 Teaching and processing system based on image contrast

Country Status (1)

Country Link
CN (1) CN110039520B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110625597B (en) * 2019-09-27 2021-01-01 陈小龙 Robot system based on SLAM and teaching method thereof
CN111002295A (en) * 2019-12-30 2020-04-14 中国地质大学(武汉) Teaching glove and teaching system of two-finger grabbing robot
CN111002294A (en) * 2019-12-30 2020-04-14 中国地质大学(武汉) Two fingers grab demonstrator and teaching system of robot
CN114670212B (en) * 2022-04-26 2023-04-21 南通新蓝机器人科技有限公司 IMU and vision-based robot guiding handle and use method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103406905A (en) * 2013-08-20 2013-11-27 西北工业大学 Robot system with visual servo and detection functions
CN107144236A (en) * 2017-05-25 2017-09-08 西安交通大学苏州研究院 A kind of robot automatic scanner and scan method
CN107309882A (en) * 2017-08-14 2017-11-03 青岛理工大学 Robot teaching programming system and method

Also Published As

Publication number Publication date
CN110039520A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN110039520B (en) Teaching and processing system based on image contrast
CN112122840B (en) Visual positioning welding system and welding method based on robot welding
JP4021413B2 (en) Measuring device
KR102280663B1 (en) Calibration method for robot using vision technology
CN101733558B (en) Intelligent laser cutting system provided with master-slave camera and cutting method thereof
CN108827154B (en) Robot non-teaching grabbing method and device and computer readable storage medium
Nele et al. An image acquisition system for real-time seam tracking
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
EP3407088A1 (en) Systems and methods for tracking location of movable target object
WO2015120734A1 (en) Special testing device and method for correcting welding track based on machine vision
US20220331970A1 (en) Robot-mounted moving device, system, and machine tool
CN114434059B (en) Automatic welding system and method for large structural part with combined robot and three-dimensional vision
JP2005074600A (en) Robot and robot moving method
CN104325268A (en) Industrial robot three-dimensional space independent assembly method based on intelligent learning
JP2021035708A (en) Production system
JP2016187846A (en) Robot, robot controller and robot system
CN114474041A (en) Welding automation intelligent guiding method and system based on cooperative robot
CN108656120B (en) Teaching and processing method based on image contrast
CN113618367B (en) Multi-vision space assembly system based on seven-degree-of-freedom parallel double-module robot
CN117047237B (en) Intelligent flexible welding system and method for special-shaped parts
CN108748155B (en) The automatic aligning method of more scenes
WO2023032400A1 (en) Automatic transport device, and system
JP7482364B2 (en) Robot-mounted mobile device and system
CN114800574A (en) Robot automatic welding system and method based on double three-dimensional cameras
JP6507792B2 (en) Robot and robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant