CN108000499B - Programming method of robot visual coordinate - Google Patents

Programming method of robot visual coordinate

Info

Publication number
CN108000499B
CN108000499B CN201610951817.6A CN201610951817A CN 108000499 B
Authority
CN
China
Prior art keywords
robot
coordinate system
programming
point
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610951817.6A
Other languages
Chinese (zh)
Other versions
CN108000499A (en
Inventor
王培睿
黄钟贤
夏绍基
陈世国
黄识忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Techman Robot Inc
Original Assignee
Techman Robot Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Techman Robot Inc filed Critical Techman Robot Inc
Priority to CN201610951817.6A priority Critical patent/CN108000499B/en
Publication of CN108000499A publication Critical patent/CN108000499A/en
Application granted granted Critical
Publication of CN108000499B publication Critical patent/CN108000499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/02Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/04Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J9/046Revolute coordinate type

Abstract

A programming method for robot visual coordinates includes dragging the robot to an operation point, recording the point's coordinates and its operation as a new point, capturing a teaching image, establishing a visual coordinate system from that image, and recording subsequent new points in the established system. During operation, the robot captures an image at each shooting point and is servoed while the captured image is compared against the teaching image; once a matching image is found, the visual coordinate system is confirmed to hold the same relative position relationship as at teaching time, so that the robot can be controlled accurately.

Description

Programming method of robot visual coordinate
Technical Field
The invention relates to robots, and in particular to a method for establishing a visual coordinate system from images captured by an industrial robot and for programming the robot's operation points within that system.
Background
With its flexible movement, precise positioning, and capacity for continuous operation, the robot has become an essential tool for manufacturing and assembly on product production lines. Simplifying the programming of robot operations, so that a robot can be brought into a production line quickly, has therefore become an important subject for improving production efficiency.
As shown in fig. 10, the coordinate systems used for programming a robot 1 in the related art generally include a robot coordinate system (Robot Base) R, a world coordinate system (Global Base) G, a tool coordinate system (Tool Base) T, a workpiece coordinate system (Workpiece Base) W, and the like. The workpiece coordinate system W is particularly important because, during programming, the robot 1 can record each staged movement point P in the workpiece coordinate system W of the workpiece 2. When the workpiece coordinate system W moves relative to the robot coordinate system R in any of its six degrees of freedom (origin position plus three-dimensional orientation), the robot 1 only needs to reacquire the changed pose of W relative to R to determine the position, relative to R, of any point P recorded in W, and can thus be controlled accurately to the point P. Because the coordinates of the point P are recorded in the workpiece coordinate system W, they do not change as W moves relative to R, and their display on the human-machine interface remains constant, which makes the points easy to set when programming the operation of the robot 1.
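The benefit of recording a point in a movable frame can be sketched in a few lines of Python. This is an illustration only: the function name is hypothetical, and the planar (x, y, θ) pose is a simplification of the full 6-degree-of-freedom pose the description refers to.

```python
import math

def transform_point(frame_pose, point):
    """Map a point recorded in a local frame (e.g. workpiece frame W)
    into the robot base frame R, given the frame's pose in R.
    frame_pose = (x, y, theta): origin and rotation of W in R (planar case)."""
    x0, y0, theta = frame_pose
    px, py = point
    c, s = math.cos(theta), math.sin(theta)
    return (x0 + c * px - s * py, y0 + s * px + c * py)

# Point P is recorded once, in the workpiece frame W.
P_in_W = (10.0, 0.0)

# The workpiece moves: only its pose in R is re-measured; P_in_W is unchanged.
pose_before = (100.0, 50.0, 0.0)
pose_after = (120.0, 60.0, math.pi / 2)

print(transform_point(pose_before, P_in_W))  # (110.0, 50.0)
print(transform_point(pose_after, P_in_W))   # approximately (120.0, 70.0)
```

The same recorded coordinates resolve to different robot-frame positions once the frame's new pose is known, which is exactly why the point's on-screen record never needs editing.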
In addition, a vision device 3 is usually integrated at the end of the robot 1 or mounted externally. The robot 1 uses the image captured by the vision device 3 to calculate the coordinates of an image feature D on the image plane and compares them with the known coordinates of the corresponding environment feature D in the working environment to generate an offset for the image feature D, for example 3 pixels along the X axis, 6 pixels along the Y axis, and 0.5 degrees of rotation. Image processing then converts the pixel offsets into actual distances, such as 6 mm and 12 mm. When a point is programmed and set, the user compensates for this offset, so that the motion of the robot 1 is corrected according to the image and control accuracy is improved.
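The pixel-to-distance conversion in the example above (3 px and 6 px becoming 6 mm and 12 mm) implies a calibrated scale of 2 mm per pixel. A minimal sketch, assuming a uniform scale factor (the function name and parameter are hypothetical):

```python
def pixel_offset_to_mm(dx_px, dy_px, mm_per_pixel):
    """Convert an image-plane feature offset (pixels) to a physical
    offset (mm) using a calibrated, uniform scale factor."""
    return (dx_px * mm_per_pixel, dy_px * mm_per_pixel)

# Values from the example above: 3 px / 6 px at 2 mm per pixel.
print(pixel_offset_to_mm(3, 6, 2.0))  # (6.0, 12.0)
```

A real calibration would account for lens distortion and working distance; the uniform scale here is only the simplest possible model of the conversion the paragraph describes.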
In this programming method, however, the offset of each set point must be compensated one by one according to the captured images. This not only makes the points difficult to set; after offsets have been compensated repeatedly from images captured by the vision device 3 — for example, when a point is first shifted to L according to captured image K and then shifted to M according to captured image L — the programmed point records become complicated and hard to understand. Programming the robot 1 becomes difficult, the set points are hard to manage, and points that required precise calculation cannot easily be reused, which reduces programming efficiency.
Disclosure of Invention
An object of the invention is to provide a programming method for robot visual coordinates that establishes a visual coordinate system from a teaching image captured by the robot's vision device at a point, and programs subsequent points within that visual coordinate system, thereby simplifying the programming of robot operations.
Another object of the invention is to provide a programming method for robot visual coordinates in which, after programming, the robot is servoed to search for a captured image matching the teaching image, confirms the visual coordinate system, and moves rapidly between points, thereby improving the efficiency of robot operation.
A further object of the invention is to provide a programming method for robot visual coordinates in which, when points are programmed through the human-machine interface, a stored coordinate system can be selected, the coordinate system of each point is marked on its flow block, and point coordinates are registered on a point management page, thereby enhancing the management of points.
To achieve these objects, the programming method of robot visual coordinates of the invention comprises: dragging the robot to an operation point; selecting the coordinate system in which the operation point is to be recorded; setting the operation point as a new point and setting the new point's coordinates and operation; checking whether the operation of the new point is a shooting operation; if so, capturing a teaching image at the new point and establishing a visual coordinate system; if programming is not yet complete, setting subsequent new points in the established visual coordinate system; and if programming is complete, ending the programming. If the operation of the new point is verified not to be a shooting operation, the method proceeds directly to checking whether programming is complete.
In the programming method of robot visual coordinates of the invention, after programming is completed, the robot is moved to an operation point according to the set coordinate system, and it is checked whether the operation point is a shooting operation. If so, the robot is servoed while its captured image is compared with the teaching image; the displacement and rotation-angle difference between the images are calculated, and a matching teaching image is searched for. When the difference between the captured image and the teaching image is verified to be smaller than a preset value, the visual coordinate system is confirmed to hold the same corresponding position relationship as at teaching time, so that the robot can be controlled to move to the operation point.
When the verified difference is not smaller than the preset value, the search for a matching teaching image continues. After the visual coordinate system is confirmed, the shooting posture is recorded and a new visual coordinate system is established, updating the stored one. After the update, if the operation is verified to be incomplete, the robot continues moving between points; if complete, the operation ends. When the checked operation point is not a shooting operation, the operation set for that point is executed, and it is then checked whether the job is complete. In the programming method of robot visual coordinates of the invention, after the coordinates and operation of a new point are set, a mark identifying the recorded visual coordinate system is displayed at the subscript of the new point's flow block on the programming screen of the robot's human-machine interface.
Drawings
Fig. 1 is a diagram of a programming system for robot visual coordinates according to the present invention.
Fig. 2 is a schematic diagram of the robot of the present invention for establishing a visual coordinate system during operation.
Fig. 3 is a schematic diagram of a robot vision coordinate system of the present invention.
Fig. 4 is a schematic diagram of the moving point positions of the robot according to the present invention.
Fig. 5 is a diagram of human-machine interface programming according to the present invention.
Fig. 6 is a screen diagram of a point record according to the present invention.
Fig. 7 is a diagram of point management according to the present invention.
Fig. 8 is a flowchart of a programming method of robot visual coordinates according to the present invention.
Fig. 9 is a flowchart of the operation method of the robot visual coordinate system of the present invention.
Fig. 10 is a diagram of a coordinate system for programming a prior art robot.
Description of the figures
10 programming system
11 robot
12 visual device
13 controller
14 human-machine interface
15 storage device
16 fixed end
17 movable end
18 work flow
20 workpiece
21 image
22 add-point key
23, 28, 29, 32, 35 flow blocks
24 point record field
25 robot coordinate system item
26 workpiece coordinate system item
27 point record screen
30 first visual coordinate system item
31, 34 marks
33 second visual coordinate system item
36 object
40 point management screen
41 coordinate system selected for record
42 original coordinate system
43 new coordinate system
Detailed Description
The technical content of the present invention is described in detail below with reference to preferred embodiments and the accompanying drawings.
Referring to fig. 1, fig. 2 and fig. 3: fig. 1 shows the programming system of robot visual coordinates of the present invention, fig. 2 is a schematic diagram of the robot establishing a visual coordinate system during operation, and fig. 3 is a schematic diagram of the robot visual coordinate system. In fig. 1, the programming system 10 of the present invention mainly includes a robot 11, a vision device 12, a controller 13, a human-machine interface 14 and a storage device 15. The fixed end 16 of the robot 11 defines the robot coordinate system R, the movable end 17 of the robot 11 carries the vision device 12, and the robot 11 is connected to the controller 13. The user programs the workflow 18 of the robot 11 through the human-machine interface 14 connected to the controller 13; the workflow is stored in the storage device 15 of the controller 13, and the controller 13 controls the movement of the robot 11 according to the program. The vision device 12 carried by the movable end 17 captures an image 21 of the workpiece 20, and the shooting posture and the image 21 are stored in the storage device 15 of the controller 13, where the controller 13 performs image processing on the stored image 21. Because the vision device 12 is fixed to the movable end 17 of the robot 11, the controller 13 can, from the rotation of the servo motor at each axis joint, derive and record the coordinates of the movable end 17 in the robot coordinate system R for each shooting posture, and thus the relationship of the vision coordinate system (Vision Base) V of the vision device 12 to the robot coordinate system R.
The present invention detects the relative position of the workpiece 20 in the visual coordinate system V of the robot 11 by, for example, a servo-vision method. When the robot 11 is taught and programmed, the robot 11 is dragged so that the vision device 12 captures a teaching image A of the workpiece 20 in a shooting posture A, and a visual coordinate system V is established from the pose of the vision device 12, such that the origin of V coincides with the origin of the plane coordinate system of the teaching image A and the XYZ axes of V coincide with the axes of that plane coordinate system; that is, V is established from the 6-dimensional relative relationship of the vision device 12 to the robot coordinate system R at that moment. The established visual coordinate system V is stored in the storage device 15 of the robot 11. Since there are many ways to establish a visual coordinate system from the camera of the vision device 12 on the movable end 17 of the robot 11, the present invention includes but is not limited to the foregoing example. The robot 11 is then dragged to the processing point P where the workpiece 20 is to be processed, and the coordinates of the processing point P are recorded in the established visual coordinate system V. Although the processing point P is fixed relative to the workpiece 20, the relationship between the workpiece 20 and the movable end 17 of the robot 11 is not yet determined: the distance to the workpiece 20 is unknown, and its position in the visual coordinate system V is still unclear.
Fig. 2 illustrates the robot of the present invention establishing a visual coordinate system during operation. During work, the workpiece 20 and the robot 11 become displaced relative to each other — because the workpiece 20 has been conveyed or the robot 11 has moved — so the visual coordinate system V of teaching time no longer holds. The robot 11 first captures an image of the workpiece 20 in the same shooting posture A, obtaining an image A' whose orientation differs from the teaching image A, and compares the teaching image A with the image A' on the image plane to calculate the displacement and rotation difference. The robot 11 is then servoed, continuously capturing and searching images of the workpiece 20, until the captured image matches the teaching image A or their difference is smaller than a preset threshold, completing the visual servoing; the posture B in which the robot 11 completes the visual servoing is recorded.
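The servo search described above can be sketched as a loop. This is a hypothetical illustration: `capture`, `compare`, and `move` stand in for the real vision and motion interfaces, and the planar offsets simplify the 6-dimensional case in the description.

```python
def visual_servo(capture, compare, move, threshold=0.5, max_iters=100):
    """Repeatedly shoot, compare against the teaching image, and move the
    robot until the image difference falls below a threshold.
    compare(image) -> (dx, dy, dtheta): displacement and rotation difference."""
    for _ in range(max_iters):
        image = capture()
        dx, dy, dth = compare(image)
        if abs(dx) < threshold and abs(dy) < threshold and abs(dth) < threshold:
            return True          # posture B reached: image matches teaching image
        move(-dx, -dy, -dth)     # servo toward the teaching pose
    return False

# Toy simulation: the measured offset shrinks as the robot is moved.
state = {"dx": 4.0, "dy": -2.0, "dth": 1.0}
ok = visual_servo(
    capture=lambda: dict(state),
    compare=lambda img: (img["dx"], img["dy"], img["dth"]),
    move=lambda mx, my, mth: state.update(
        dx=state["dx"] + mx, dy=state["dy"] + my, dth=state["dth"] + mth),
)
print(ok)  # True — the toy offsets are fully corrected in one move
```

A real comparison step would estimate the displacement and rotation from image features; here it is simulated so the convergence logic is visible on its own.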
Fig. 3 shows the relative positional relationship among the movable end 17 of the robot 11, the processing point P, and the workpiece 20 at teaching time, for the case where the image captured on completion of the visual servoing matches the teaching image. A new visual coordinate system V' is established from the posture B in which the visual servoing was completed; V' corresponds to the 6-dimensional coordinates R1' of the vision device 12 of the robot 11, in posture B, within the robot coordinate system R. The stored description of the visual coordinate system V relative to the robot coordinate system R is then updated with the new visual coordinate system V'. The workpiece 20 maintains the same corresponding positional relationship with the visual coordinate system; only the coordinates of the movable end 17 of the robot 11 relative to the robot coordinate system R have changed. Therefore, after the update, the movable end 17 of the robot 11 can move directly to the processing point P recorded in the visual coordinate system to process the workpiece 20, without separately locating the coordinates of the workpiece 20.
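The effect of the update can be shown numerically. This is a planar sketch with hypothetical values (the description works with full 6-dimensional poses): the processing point P keeps its coordinates in the visual coordinate system, and only the camera pose used to resolve it changes from the teaching pose to the servoed pose.

```python
import math

def point_in_robot_frame(camera_pose, point_in_V):
    """Resolve a point stored in the visual coordinate system into robot
    base coordinates (planar sketch: camera_pose = (x, y, theta) in R)."""
    x0, y0, th = camera_pose
    px, py = point_in_V
    c, s = math.cos(th), math.sin(th)
    return (x0 + c * px - s * py, y0 + s * px + c * py)

P_in_V = (5.0, 2.0)              # processing point taught in V; never edited

pose_teach = (200.0, 80.0, 0.0)  # camera pose A at teaching time (defines V)
pose_servo = (230.0, 95.0, 0.0)  # pose B after visual servoing (defines V')

# Updating V to V' re-targets P without re-measuring the workpiece:
print(point_in_robot_frame(pose_teach, P_in_V))  # (205.0, 82.0)
print(point_in_robot_frame(pose_servo, P_in_V))  # (235.0, 97.0)
```

Because the workpiece moved with the camera's matched view, the second result is already the correct robot-frame target for the displaced workpiece.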
Referring to fig. 4 to fig. 6: fig. 4 is a schematic diagram of the robot moving between points according to the present invention, fig. 5 is a screen diagram of human-machine interface programming according to the present invention, and fig. 6 is a screen diagram of a point record of the present invention. Fig. 4 illustrates a programming and operation flow in which the robot 11 captures, from the distant first point P1, an image of the workpiece 20 with the vision device 12 to determine the approximate position of the workpiece 20; moves to a second point P2 close to the workpiece 20 and captures, at close range, an image of the object 36 placed on the workpiece 20, so that the object 36 occupies more pixels on the image plane of the vision device 12 and its orientation can be determined more accurately; moves to the third point P3, the point best suited for gripping the object 36 on the workpiece 20; and finally returns to the distant fourth point P4 to place the object 36 at another position on the workpiece 20.
In fig. 5, when the human-machine interface 14 is used to program the operation flow, the start point is programmed first: the robot 11 is dragged to the start point P0, the point record field 24 is pulled down on the human-machine interface 14, and the coordinate system in which the point is to be recorded is selected. The coordinate system items listed in the point record field 24 generally include the robot coordinate system item 25 and any coordinate system items retained from teaching tests; at the very start of programming, for example, only the robot coordinate system item 25 and the workpiece coordinate system item 26 are available. The robot coordinate system item 25 is selected, the add-point key 22 is pressed on the screen of the human-machine interface 14, a flow block 23 appears on the screen, and the robot 11 automatically enters the coordinates of the start point P0 in the robot coordinate system. Pressing the flow block 23 opens the point record screen 27 (see fig. 5), where the coordinates and operation set for the start point P0 are verified, and the flow block 23 is stored as the start block.
Returning to fig. 4, the first point P1 is programmed: the robot 11 is dragged to the first point P1, the point record field 24 is pulled down, and the robot coordinate system item 25 is selected. The add-point key 22 of the human-machine interface 14 is pressed, a flow block 28 appears on the screen, and the robot 11 automatically enters the coordinates of the first point P1 in the robot coordinate system R. Pressing the flow block 28 of the human-machine interface 14 opens the point record screen 27 (see fig. 5); the settings are verified and the flow block 28 is stored as the first shooting block. The robot 11 then captures a first teaching image of the distant workpiece 20 at the first point P1, establishes a first visual coordinate system V1 from the first teaching image, and stores the first teaching image in the storage device of the robot 11.
The second point P2 of fig. 4 is programmed next: the robot 11 is dragged to the close-range second point P2, the point record field 24 is pulled down, and the newly established first visual coordinate system V1 item 30 is selected. The add-point key 22 of the human-machine interface 14 is pressed, a flow block 29 appears on the screen, and the robot 11 automatically enters the coordinates of the second point P2 in the first visual coordinate system V1. Pressing the flow block 29 of the human-machine interface 14 opens the point record screen 27 (see fig. 5); the settings are verified and the flow block 29 is stored as the second shooting block. Because the flow block 29 records coordinates in the first visual coordinate system V1, the mark 31 of the first visual coordinate system V1 appears at the subscript of the flow block 29, distinguishing it from the unmarked robot coordinate system R and reminding the user. The robot 11 then captures a second teaching image of the workpiece 20 at close range at the second point P2, establishes a second visual coordinate system V2 from the second teaching image, and stores the second teaching image in the storage device of the robot 11.
Programming continues with the third point P3 of fig. 4: using the second teaching image of the object 36 on the workpiece 20 captured at close range, a suitable gripping position is chosen, and the robot 11 is dragged to the third point P3, the gripping point for the object 36. The point record field 24 is pulled down and the item 33 of the established second visual coordinate system V2 is selected. The add-point key 22 of the human-machine interface 14 is pressed, a flow block 32 appears on the screen, and the robot 11 automatically enters the coordinates of the third point P3 in the second visual coordinate system V2. Pressing the flow block 32 of the human-machine interface 14 opens the point record screen 27 (see fig. 5); the settings are verified and the flow block 32 is stored as the third point P3 of the gripping operation, and the mark 34 of the second visual coordinate system V2 appears at the subscript of the flow block 32.
Finally, the fourth point P4 of fig. 4 is programmed: the robot 11, gripping the object 36, is dragged to the fourth point P4, the placing point. The point record field 24 is pulled down and the established first visual coordinate system V1 item 30 is selected. The add-point key 22 of the human-machine interface 14 is pressed, a flow block 35 appears on the screen, and the robot 11 automatically enters the coordinates of the fourth point P4 in the first visual coordinate system V1. Pressing the flow block 35 of the human-machine interface 14 opens the point record screen 27 (see fig. 5); the settings are verified and the flow block 35 is stored as the fourth point P4 for placing, with the mark 31 of the first visual coordinate system V1 at its subscript. The programming of the gripping operation of the robot 11 is thus completed.
After programming, when actual work is performed, the robot 11 starts from the start point P0 and moves in the robot coordinate system to the first point P1. According to the setting of the first shooting point, it photographs the distant workpiece 20 at the first point P1, compares the captured image with the first teaching image, and calculates the displacement and rotation-angle difference; the robot 11 is servoed to search for a captured image matching the first teaching image, a new first visual coordinate system V1' is established, the relative positional relationship between the robot 11 and the workpiece 20 at the first point P1 is confirmed, and the robot moves to the second point P2. According to the setting of the second shooting point, an image is captured at close range at the second point P2 and compared with the second teaching image, the displacement and rotation-angle difference are calculated, the robot 11 is servoed to search for a captured image matching the second teaching image, a new second visual coordinate system V2' is established, the relative positional relationship between the robot 11 and the object 36 on the workpiece 20 at the second point P2 is confirmed, and the robot moves to the third point P3. According to the setting of the third point P3, the object 36 is gripped; the robot then moves to the fourth point P4 according to the new first visual coordinate system V1', and according to the setting of the fourth point P4 places the object 36 at the same placing position on the workpiece 20 as at teaching time. The pick-and-place operation is thus completed even though the relative relationship between the workpiece 20 and the robot has changed since teaching.
Although each point is recorded as coordinates in a particular visual coordinate system, the programming system, whenever a visual coordinate system is established, records its corrected description relative to the robot coordinate system. As long as the coordinates of a point in its visual coordinate system are converted into coordinates in the robot coordinate system, the robot can be controlled to execute the operation flow from the first point to the fourth point and pick and place the workpiece.
Fig. 7 is a diagram of the point management screen according to the present invention. The point management screen 40 of the present invention records, for each point, the coordinate system 41 selected for its record, and further provides point coordinate-system transcription on the point storage management interface: when the user chooses to re-record a point in another coordinate system, the robot controller retains the description of the point in the original coordinate system 42 and calculates and records the coordinates of the point in the new coordinate system 43 selected by the user. Points that were laborious for the user to calculate can thus be kept in storage, and there is no need to recalculate a point because it was deleted or modified in the wrong coordinate system.
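Transcribing a point between coordinate systems amounts to mapping it through the robot base frame: old frame to base, then base to new frame. The planar sketch below uses hypothetical poses and function names and is not the controller's actual implementation.

```python
import math

def to_robot(pose, p):
    """Local frame -> robot base (planar pose = (x, y, theta))."""
    x0, y0, th = pose
    c, s = math.cos(th), math.sin(th)
    return (x0 + c * p[0] - s * p[1], y0 + s * p[0] + c * p[1])

def from_robot(pose, p):
    """Robot base -> local frame (inverse of the planar transform)."""
    x0, y0, th = pose
    c, s = math.cos(th), math.sin(th)
    dx, dy = p[0] - x0, p[1] - y0
    return (c * dx + s * dy, -s * dx + c * dy)

def transcribe(point, old_frame, new_frame):
    """Re-record a point in another coordinate system without moving
    the physical location: old frame -> robot base -> new frame."""
    return from_robot(new_frame, to_robot(old_frame, point))

old = (100.0, 0.0, 0.0)            # original coordinate system, pose in R
new = (100.0, 0.0, math.pi / 2)    # new coordinate system, rotated 90 degrees
print(transcribe((10.0, 0.0), old, new))  # approximately (0.0, -10.0)
```

The physical target is unchanged; only its description moves from one frame to the other, which is why the original record can be retained alongside the new one.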
Fig. 8 shows the flow of the programming method of robot visual coordinates according to the present invention. Following the foregoing embodiment, the detailed steps are as follows. First, in step S1, programming of the robot begins. In step S2, the robot is dragged to an operation point. In step S3, the coordinate system in which the operation point is to be recorded is selected. In step S4, the operation point is set as a new point, and the coordinates and operation of the new point are set; the flow then proceeds to step S5 to check whether the new point is a shooting operation. If it is, the flow proceeds to step S6, where a teaching image is captured at the new point, and to step S7, where a visual coordinate system is established at the shooting point; it then proceeds to step S8 to check whether programming is complete. If not, the flow returns to step S2, with the established visual coordinate systems now selectable for recording further new points; if programming is complete, the flow proceeds to step S9 and programming ends. If, in step S5, the operation of the new point is verified not to be a shooting operation, the flow proceeds directly to step S8 to check whether programming is complete.
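The steps S1-S9 above can be sketched as a loop. This is an illustration only — every callback name is hypothetical, standing in for a teach-pendant or controller operation.

```python
def program_points(drag_to, select_frame, record_point,
                   is_shoot, shoot, build_frame, done):
    """Sketch of the programming loop of fig. 8 (steps S1-S9)."""
    frames = ["robot_base"]                    # S1: only the robot frame exists
    while True:
        pose = drag_to()                       # S2: drag robot to a point
        frame = select_frame(frames)           # S3: pick the recording frame
        record_point(pose, frame)              # S4: store the new point
        if is_shoot(pose):                     # S5: shooting operation?
            image = shoot(pose)                # S6: capture teaching image
            frames.append(build_frame(image))  # S7: new visual frame selectable
        if done():                             # S8: programming finished?
            return frames                      # S9: end of programming

# Toy run: three points, the second of which is a shooting operation.
poses = iter([("P0", False), ("P1", True), ("P2", False)])
recorded = []
frames = program_points(
    drag_to=lambda: next(poses),
    select_frame=lambda available: available[-1],   # always the newest frame
    record_point=lambda pose, frame: recorded.append((pose[0], frame)),
    is_shoot=lambda pose: pose[1],
    shoot=lambda pose: "image@" + pose[0],
    build_frame=lambda image: "V(" + image + ")",
    done=lambda: len(recorded) == 3,
)
print(recorded)
# [('P0', 'robot_base'), ('P1', 'robot_base'), ('P2', 'V(image@P1)')]
```

Note how the point after the shooting point is automatically recorded in the newly created visual frame, mirroring the P1/P2 programming sequence of fig. 4.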
Fig. 9 shows the flow of the robot operation method after programming according to the present invention. Following the foregoing embodiment, the detailed steps are as follows. First, in step T1, the robot begins post-programming operation. In step T2, the robot is moved to an operation point according to the set coordinate system. In step T3, it is checked whether the operation point is a shooting operation. If it is, then in step T4 the robot is servoed while the captured image is compared with the teaching image, the displacement and rotation-angle difference are calculated, and a matching teaching image is searched for; in step T5 it is checked whether the difference between the captured image and the teaching image is smaller than a preset value. If not, the flow returns to step T4 to continue searching; if so, the flow proceeds to step T6, where the visual coordinate system is confirmed to maintain the same corresponding positional relationship as at teaching time so that the robot's movement can be controlled, and the shooting posture is recorded to establish a new visual coordinate system and update the stored one. The flow then proceeds to step T7 to check whether the job is complete. If not, it returns to step T2 to continue moving between points; if complete, it proceeds to step T8 and the operation ends. If, in step T3, the operation point is verified not to be a shooting operation, the flow proceeds directly to step T9, where the operation set for the point is executed, and then to step T7 to check whether the job is complete.
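Steps T1-T9 can likewise be sketched as a loop over the programmed points. The callback names are hypothetical; a real controller would interleave these steps with motion control.

```python
def run_job(job_points, move_to, is_shoot, servo_search, update_frame, execute):
    """Sketch of the runtime loop of fig. 9 (steps T1-T9)."""
    for point in job_points:
        move_to(point)                  # T2: move in the point's frame
        if is_shoot(point):             # T3: shooting operation?
            pose = servo_search(point)  # T4/T5: servo until image matches
            update_frame(point, pose)   # T6: refresh the visual frame
        else:
            execute(point)              # T9: run the taught operation
    # T7/T8: all points done, job complete

# Toy run: two shooting points followed by a gripping point.
log = []
run_job(
    job_points=["P1-shoot", "P2-shoot", "P3-grip"],
    move_to=lambda p: log.append("move:" + p),
    is_shoot=lambda p: p.endswith("shoot"),
    servo_search=lambda p: "pose@" + p,
    update_frame=lambda p, pose: log.append("update:" + pose),
    execute=lambda p: log.append("do:" + p),
)
print(log)
```

The shooting points only refresh their visual frames; the later points then move correctly because their coordinates are resolved through the refreshed frames.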
The programming method of robot visual coordinates of the present invention can therefore capture a teaching image with the robot's vision device, establish a visual coordinate system from the teaching image, and program points directly in that visual coordinate system, thereby simplifying the programming of the robot. After programming, the robot is servoed to capture and search for images matching the teaching image, confirms the visual coordinate system, and moves rapidly between points, thereby improving the efficiency of robot operation. In addition, the method allows programmed points to be switched among the visual coordinate systems of established images, so that images or points in the visual coordinate systems can be reused, reducing shooting and image-processing time and further improving programming efficiency. When points are programmed through the human-machine interface, a stored coordinate system can be selected, the coordinate system of each point is marked on its flow block, and point coordinates are registered on the point management page, enhancing the management of points and facilitating programming reference.
The above description illustrates only the preferred embodiments of the present invention; the scope of the invention is not limited to these embodiments, and any modification made according to the present invention without departing from its spirit falls within the scope of the claims.
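The programming phase described above can likewise be sketched as a short simulation. This is illustrative only and not part of the claims; the dict-based data model and the names `program_points` and `vision:` are assumptions of this sketch.

```python
def program_points(steps):
    """Record job points during programming: a shooting step takes a teaching
    image and establishes a visual coordinate system, and subsequent points
    are recorded in that coordinate system."""
    points = []
    current_frame = "base"  # coordinate system currently selected for recording
    for step in steps:
        # add the point in the currently selected coordinate system
        point = {"name": step["name"], "frame": current_frame, "job": step["job"]}
        points.append(point)
        if step["job"] == "shoot":
            # shooting job: the teaching image establishes a new visual
            # coordinate system for the points that follow
            current_frame = f"vision:{step['name']}"
    return points

# Usage: a shooting point followed by a placing point; the placing point
# is recorded in the visual coordinate system established at P1.
prog = program_points([
    {"name": "P1", "job": "shoot"},
    {"name": "P2", "job": "place"},
])
```

The key design point is that establishing a visual coordinate system changes only which frame later points are recorded in; earlier points keep the frame they were programmed under, which is what allows points to be reused across coordinate systems.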

Claims (8)

1. A method for programming visual coordinates of a robot, comprising the steps of:
guiding the robot to a working point;
selecting the coordinate system in which the working point is recorded;
setting the working point as a newly added point, and setting the coordinates and the job of the newly added point;
checking whether the job of the newly added point is a shooting job;
shooting a teaching image at the newly added point and establishing a visual coordinate system; and
setting subsequent newly added points in the established visual coordinate system;
wherein, after the robot finishes programming, executing the program comprises the steps of:
moving the robot to a working point in the set coordinate system;
checking whether the working point is a shooting job;
servoing the robot to compare the shot image with the teaching image, calculating the displacement and rotation-angle differences of the image, and searching for the matching teaching image;
checking whether the difference between the shot image and the teaching image is smaller than a preset value; and
confirming that the visual coordinate system maintains the same positional correspondence as during teaching, so as to control the robot to move to the point and perform the job.
2. The programming method of robot visual coordinates of claim 1, wherein after the visual coordinate system is established, if programming is verified as incomplete, a new point continues to be added, and if programming is complete, programming ends.
3. The programming method of robot visual coordinates of claim 2, wherein if the job of the newly added point is verified not to be a shooting job, whether programming is complete is verified directly.
4. The programming method of robot visual coordinates of claim 3, wherein if the difference is verified not to be smaller than the preset value, the search for the matching teaching image continues.
5. The programming method of robot visual coordinates of claim 3, wherein after the visual coordinate system is confirmed, the shooting posture is recorded to establish a new visual coordinate system, thereby updating the visual coordinate system.
6. The programming method of robot visual coordinates of claim 5, wherein after the visual coordinate system is updated, if the job is verified as incomplete, the robot continues moving to the next point, and if the job is complete, the job ends.
7. The programming method of robot visual coordinates of claim 6, wherein if the working point is verified not to be a shooting job, the job set for the working point is performed, after which it is verified whether the job is complete.
8. The programming method of robot visual coordinates of claim 1, wherein after the coordinates and the job of the newly added point are set, the programming screen of the robot's human-machine interface displays the mark of the recorded visual coordinate system beneath the flow block of the newly added point.
CN201610951817.6A 2016-10-27 2016-10-27 Programming method of robot visual coordinate Active CN108000499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610951817.6A CN108000499B (en) 2016-10-27 2016-10-27 Programming method of robot visual coordinate


Publications (2)

Publication Number Publication Date
CN108000499A CN108000499A (en) 2018-05-08
CN108000499B true CN108000499B (en) 2020-07-31

Family

ID=62047366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610951817.6A Active CN108000499B (en) 2016-10-27 2016-10-27 Programming method of robot visual coordinate

Country Status (1)

Country Link
CN (1) CN108000499B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7337495B2 (en) * 2018-11-26 2023-09-04 キヤノン株式会社 Image processing device, its control method, and program
CN110465944B (en) * 2019-08-09 2021-03-16 琦星智能科技股份有限公司 Method for calculating coordinates of industrial robot based on plane vision
CN111015660B (en) * 2019-12-24 2022-07-05 江苏生益特种材料有限公司 Use method of CCL (CCL) laminating production robot vision system

Citations (1)

Publication number Priority date Publication date Assignee Title
TWI254662B (en) * 2003-05-29 2006-05-11 Fanuc Ltd Robot system

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP3733364B2 (en) * 2003-11-18 2006-01-11 ファナック株式会社 Teaching position correction method
CN101637908B (en) * 2008-07-29 2010-11-03 上海发那科机器人有限公司 Visual positioning method for robot transport operation
US9533418B2 (en) * 2009-05-29 2017-01-03 Cognex Corporation Methods and apparatus for practical 3D vision system
CN103264738B (en) * 2013-06-07 2015-07-01 上海发那科机器人有限公司 Automatic assembling system and method for vehicle windshield glass
CN103706568B (en) * 2013-11-26 2015-11-18 中国船舶重工集团公司第七一六研究所 Based on the robot method for sorting of machine vision
JP5850962B2 (en) * 2014-02-13 2016-02-03 ファナック株式会社 Robot system using visual feedback
JP6429473B2 (en) * 2014-03-20 2018-11-28 キヤノン株式会社 Robot system, robot system calibration method, program, and computer-readable recording medium
CN204525481U (en) * 2015-04-10 2015-08-05 马鞍山方宏自动化科技有限公司 A kind of unpowered articulated arm teaching machine
CN104827474B (en) * 2015-05-04 2017-06-27 南京理工大学 Learn the Virtual Demonstration intelligent robot programmed method and servicing unit of people




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200414

Address after: Taoyuan City, Taiwan, China

Applicant after: Daming Robot Co., Ltd.

Address before: Taoyuan City, Taiwan, China

Applicant before: QUANTA STORAGE Inc.

GR01 Patent grant