CN112478779A - Base plate visual positioning method and system and base plate carrying joint robot device - Google Patents

Base plate visual positioning method and system and base plate carrying joint robot device

Info

Publication number
CN112478779A
Authority
CN
China
Prior art keywords
base plate
judgment result
picture
joint robot
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011360261.6A
Other languages
Chinese (zh)
Other versions
CN112478779B (en
Inventor
张锐
刘陈华
黄军芬
张瑞英
邹勇
薛龙
姜振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Petrochemical Technology
Original Assignee
Beijing Institute of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Petrochemical Technology filed Critical Beijing Institute of Petrochemical Technology
Priority to CN202011360261.6A priority Critical patent/CN112478779B/en
Publication of CN112478779A publication Critical patent/CN112478779A/en
Application granted granted Critical
Publication of CN112478779B publication Critical patent/CN112478779B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B65G 47/92 — Devices for picking-up and depositing articles or materials incorporating electrostatic or magnetic grippers
    • B65G 41/00 — Supporting frames or bases for conveyors as a whole, e.g. transportable conveyor frames
    • E01B 9/68 — Pads or the like, e.g. of wood or rubber, placed under the rail, tie-plate, or chair
    • G06T 7/0004 — Industrial image inspection
    • G06T 7/13 — Segmentation; edge detection
    • G06T 7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/90 — Determination of colour characteristics
    • G06T 2207/10144 — Image acquisition modality: varying exposure


Abstract

The application relates to a base plate visual positioning method, a base plate visual positioning system and a base plate carrying joint robot device. The visual positioning method comprises the following steps: collecting a base plate picture; calculating the pixel average value; judging whether the pixel average value is larger than a preset threshold value to obtain a first judgment result; if the first judgment result is no, judging whether the number of acquisitions is still within a preset count, and if so, lengthening the exposure time and acquiring again, otherwise raising an alarm; if the first judgment result is yes, extracting image edge points to determine the circle centres corresponding to the two through holes of the base plate; and outputting the circle-centre information to complete the positioning of the base plate. With this arrangement, the position of the base plate is determined from the positions of the two circle centres, a structural feature of the plate itself, which solves the problem that a robot cannot otherwise determine the plate's position during carrying. Moving and transferring the base plate with a joint robot shortens the processing time, lowers the processing cost, and avoids the low working efficiency of manual carrying.

Description

Base plate visual positioning method and system and base plate carrying joint robot device
Technical Field
The application relates to the technical field of railway base plates, in particular to a base plate visual positioning method and system and a base plate carrying joint robot device.
Background
The tie plate is an important part of the railway turnout: its upper surface bears and locks the steel rail, while its lower surface is connected to the turnout sleeper through a sleeper bolt, forming a stable integral turnout track structure. Chinese railway turnout tie plates have long used a group-welded structure in which the plate comprises a metal plate with two through holes and a weldment on the plate. This group-welded structure takes a long time to machine during production and welding, the machining cost is high, and the plate is moved manually between welding steps, so the working efficiency is low. If a robot is used for carrying instead, how to determine the position of the tie plate during carrying becomes the problem to solve.
Disclosure of Invention
To overcome the problems in the related art at least to some extent, the application aims to provide a base plate visual positioning method and system and a base plate carrying joint robot device, which solve the problem that, when a robot carries the base plate during production and welding, the position of the base plate cannot otherwise be determined.
The application provides a base plate visual positioning method, which comprises the following steps:
collecting a base plate picture;
preprocessing the base plate picture;
calculating the pixel average value of the preprocessed base plate picture;
judging whether the pixel average value is larger than a preset threshold value or not to obtain a first judgment result, wherein the first judgment result is yes or no;
if the first judgment result is negative, judging whether the times of acquiring the base plate pictures exceed the preset times to obtain a second judgment result, wherein the second judgment result is yes or no;
if the second judgment result is negative, adjusting the exposure time when the base plate picture is collected, and collecting the base plate picture again;
if the second judgment result is yes, raising an error alarm;
if the first judgment result is yes, extracting image edge points with a preset operator;
determining circle centers corresponding to the two through holes of the base plate based on the image edge points;
and outputting the circle center information to complete the positioning of the base plate.
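The acquire-judge-retry flow above can be sketched as a short loop; `capture` and `find_centres` below are hypothetical stand-ins for the camera interface and the edge-point/circle-centre steps, and the initial exposure and default parameters are illustrative assumptions, not values fixed by the application:

```python
def locate_base_plate(capture, find_centres, threshold=100, max_attempts=5, coeff=1.5):
    """Capture-and-retry loop for the positioning method above.

    `capture(exposure_us)` returns a greyscale picture as a list of pixel
    rows; `find_centres(picture)` stands in for the edge-point extraction
    and circle-centre determination steps.
    """
    exposure_us = 10_000  # initial exposure, assumed
    for attempt in range(max_attempts):
        picture = capture(exposure_us)          # collect base plate picture
        mean = sum(map(sum, picture)) / (len(picture) * len(picture[0]))
        if mean > threshold:                    # first judgment: bright enough
            return find_centres(picture)        # edge points -> two circle centres
        exposure_us *= coeff                    # retry with longer exposure
    raise RuntimeError("error alarm: no usable picture within preset attempts")
```

The preset threshold, attempt count and coefficient correspond to the "first judgment", "preset times" and exposure adjustment described in the steps.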
Optionally, acquiring the base plate picture comprises:
acquiring the base plate picture through a preset laser vision type electromagnetic gripper; the preset laser vision type electromagnetic gripper comprises a frame mechanism, a lighting mechanism, a sensing mechanism, a connecting mechanism and at least one suction mechanism, wherein the connecting mechanism is used for connection with the tail end shaft of a joint robot, the suction mechanism is used for adsorbing the base plate, the connecting mechanism is fixedly connected above the frame mechanism, the sensing mechanism is arranged between the frame mechanism and the connecting mechanism and is used for collecting base plate pictures and sending them to the preset joint robot, and the lighting mechanism is arranged at the two ends of the frame mechanism to provide illumination for the sensing mechanism; the suction mechanism is arranged below the frame mechanism and can slide, together with the base plate, relative to the frame mechanism, and the suction mechanism comprises an electromagnetic chuck for adsorbing the base plate.
Optionally, the sensing mechanism includes: the device comprises a sensor mounting bracket, a vision sensor and a laser ranging sensor; the vision sensor is used for collecting the base plate picture.
Optionally, preprocessing the base plate picture comprises:
extracting an effective area from the base plate picture and defining the effective area as the preprocessed base plate picture.
Optionally, adjusting the exposure time for collecting the base plate picture comprises:
multiplying the exposure time used for the previous acquisition of the base plate picture by a preset coefficient to obtain the adjusted exposure time.
Optionally, the preset coefficients are: 1.2, 1.5 or 2.0.
Optionally, the preset operator is one of a Roberts operator, a Sobel operator, a Prewitt operator, a Kirsch operator, a Robinson operator, a Laplacian operator, a Canny operator and a LoG operator.
Optionally, the preset threshold is 70, 100 or 150.
The application also provides a base plate visual positioning system, comprising:
the acquisition module is used for acquiring the base plate picture;
the preprocessing module is used for preprocessing the base plate picture;
the calculation module is used for calculating the pixel average value of the preprocessed base plate picture;
the first judgment module is used for judging whether the pixel average value is larger than a preset threshold value or not to obtain a first judgment result, and the first judgment result is yes or no;
the second judgment module is used for judging whether the times of acquiring the base plate pictures exceed the preset times or not if the first judgment result is negative, and obtaining a second judgment result, wherein the second judgment result is yes or no;
the adjusting module is used for adjusting the exposure time when the base plate picture is acquired and acquiring the base plate picture again if the second judgment result is negative;
the alarm module is used for carrying out error alarm if the second judgment result is yes;
the extraction module is used for extracting the edge points of the image by a preset operator if the first judgment result is yes;
the determining module is used for determining the circle centers corresponding to the two through holes of the base plate based on the image edge points;
and the output module is used for outputting the circle center information to complete the positioning of the base plate.
The application also provides a base plate carrying joint robot device which comprises a laser vision type electromagnetic gripper, a joint robot, a robot walking track and a cable drag chain; the joint robot is arranged on the robot walking track; the joint robot is connected with a cable through the cable drag chain; the laser vision type electromagnetic gripper comprises a frame mechanism, an illuminating mechanism, a sensing mechanism, a connecting mechanism and at least one suction mechanism, wherein the connecting mechanism is used for being connected with a tail end shaft of a joint robot, the suction mechanism is used for adsorbing a base plate, the connecting mechanism is fixedly connected above the frame mechanism, the sensing mechanism is arranged between the frame mechanism and the connecting mechanism and used for acquiring base plate pictures and sending the base plate pictures to the joint robot, and the illuminating mechanism is arranged at two ends of the frame mechanism and used for providing illumination for the sensing mechanism;
the joint robot grabs the base plate based on the collected base plate picture, and the base plate is positioned by the method.
In the scheme provided by the application, moving and transferring the base plate with a joint robot shortens the processing time and lowers the processing cost, avoiding the low working efficiency of manual carrying. At the same time, based on the structural features of the base plate itself, its position is determined from the positions of the two circle centres, which solves the problem that the robot cannot determine the plate's position while carrying.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a dimensioned drawing of a base plate according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of an embodiment of a visual positioning method of the present application;
FIG. 3 is a perspective view of a pad-handling articulated robotic device shown in accordance with some exemplary embodiments;
FIG. 4 is a perspective view of a laser-vision type electromagnetic gripper, according to some exemplary embodiments;
fig. 5 is a perspective view of a visual positioning system shown in accordance with some exemplary embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
First, the application scenario of the embodiment of the invention is explained. The base plate is an important part of the railway turnout: its upper surface bears and locks the steel rail, while its lower surface is connected to the turnout sleeper through a sleeper bolt, forming a stable integral turnout track structure. Chinese railway turnout base plates have long used a group-welded structure; referring to fig. 1, the plate comprises a metal plate with two through holes and a weldment on the plate. Specifically, the width is 170-220 mm; the length is 380- mm; the thickness range is 18-29 mm; the maximum weight is 68 kg. This group-welded structure takes a long time to machine during production and welding, the machining cost is high, and the plate is moved manually between welding steps, so the working efficiency is low. If a robot is used for carrying, determining the position of the base plate during carrying is the problem the present application addresses.
Referring to fig. 1-5, the present detailed description provides a flowchart of an embodiment of a visual positioning method, including:
s101, acquiring a base plate picture;
specifically, the acquisition of the base plate picture comprises:
acquiring a base plate picture through a preset laser vision type electromagnetic gripper; the preset laser visual type electromagnetic gripper comprises a frame mechanism 11, a lighting mechanism 15, a sensing mechanism 14, a connecting mechanism 13 and at least one suction mechanism 12, wherein the connecting mechanism 13 is used for being connected with a tail end shaft of a joint robot, the suction mechanism 12 is used for adsorbing a base plate, the connecting mechanism 13 is fixedly connected above the frame mechanism 11, the sensing mechanism 14 is arranged between the frame mechanism 11 and the connecting mechanism 13 and used for collecting base plate pictures and sending the base plate pictures to the preset joint robot, and the lighting mechanism 15 is arranged at two ends of the frame mechanism 11 and used for providing illumination for the sensing mechanism 14; the suction mechanism 12 is arranged below the frame mechanism 11 and can slide relative to the frame mechanism 11 with the base plate, and the suction mechanism 12 comprises an electromagnetic chuck for adsorbing the base plate.
Specifically, the sensing mechanism 14 includes: the device comprises a sensor mounting bracket, a vision sensor and a laser ranging sensor; the vision sensor is used for collecting the base plate picture. The visual sensor may be a camera.
It should be noted that the exposure time during the image acquisition in this step is adjustable.
S102, preprocessing the base plate picture;
the preprocessing the pad picture comprises:
and extracting an effective area in the backing plate picture, and defining the effective area as the pretreated backing plate picture.
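A minimal sketch of this preprocessing step, assuming the effective area is a fixed rectangle in image coordinates (the bounds are hypothetical; the application does not specify how the area is determined):

```python
def extract_effective_area(picture, top, bottom, left, right):
    """Crop the effective area from a greyscale picture given as a list
    of pixel rows; the cropped region becomes the 'preprocessed base
    plate picture' used for the pixel-average test."""
    return [row[left:right] for row in picture[top:bottom]]
```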
S103, calculating the pixel average value of the preprocessed base plate picture;
S104, judging whether the pixel average value is larger than a preset threshold value to obtain a first judgment result, wherein the first judgment result is yes or no;
specifically, the preset threshold may be, but is not limited to, 70, 100, or 150.
S105, if the first judgment result is no, judging whether the number of times the base plate picture has been acquired exceeds a preset number, to obtain a second judgment result, wherein the second judgment result is yes or no;
S106, if the second judgment result is no, adjusting the exposure time for acquiring the base plate picture and acquiring the base plate picture again;
Adjusting the exposure time for acquiring the base plate picture comprises:
multiplying the exposure time used for the previous acquisition of the base plate picture by a preset coefficient to obtain the adjusted exposure time.
Specifically, the preset coefficient may be, but is not limited to, 1.2, 1.5 or 2.0.
It should be noted that the exposure time generally lies between 100 microseconds and 1,000,000 microseconds; the preset coefficient and the preset number of attempts should be chosen so that the exposure time always stays within this range. When acquisition of a new base plate picture begins, the exposure time and the acquisition counter are re-initialized, keeping the exposure time within the proper range.
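Combining the preset coefficient with the stated 100 µs - 1,000,000 µs range gives a small adjustment routine; clamping rather than rejecting out-of-range values is an illustrative choice, not mandated by the text:

```python
EXPOSURE_MIN_US = 100        # lower bound named in the text
EXPOSURE_MAX_US = 1_000_000  # upper bound named in the text

def adjust_exposure(previous_us, coefficient=1.5):
    """Multiply the previous exposure time by the preset coefficient
    (1.2, 1.5 or 2.0 in the text) and clamp it to the valid range."""
    return min(max(previous_us * coefficient, EXPOSURE_MIN_US), EXPOSURE_MAX_US)
```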
S107, if the second judgment result is yes, raising an error alarm;
It is noted that the error alarm may specifically be an audible alarm or an optical alarm: once an error occurs, the relevant workers must be prompted to handle it in time so that further losses are avoided.
S108, if the first judgment result is yes, extracting image edge points by using a preset operator;
the preset operators comprise a Roberts operator, a Sobel operator, a Prewitt operator, a Kirsch operator, a Robinson operator, a Laplacian operator, a Canny operator and a LoG operator. It should be noted that Roberts operator, Sobel operator, Prewitt operator, Kirsch operator, Robinson operator, Laplacian operator, Canny operator, and LoG operator are the operators for calculating the edge points of the image that are mature at present. The edge points of the image can be accurately calculated based on the operators.
S109, determining circle centers corresponding to the two through holes of the base plate based on the image edge points;
specifically, after the edge points of the image are determined, the part of the edge points of the image which form the circle can be easily determined, and then based on the part of the edge points which form the circle, the part of the edge points of the image which form the circle center is determined, namely: and determining the centers of circles corresponding to the two through holes.
And S110, outputting the circle center information to complete the positioning of the base plate.
It should be noted that, in the scheme provided by the present application, the position of the base plate is determined from the positions of the two circle centres, based on the structural characteristics of the plate itself. Determining the circle centres requires a large amount of computation. This computation may be executed by the laser vision type electromagnetic gripper, which determines the circle-centre information and then sends it to the joint robot. Alternatively, the computation may be executed by the joint robot: the gripper collects the picture and sends it to the robot, and the robot judges whether the picture is qualified. If it is not, the robot commands the gripper to collect again, adjusting the exposure time, until a qualified picture is obtained; the robot then determines the circle-centre information from the qualified base plate picture.
In the scheme provided by the application, moving and transferring the base plate with a joint robot shortens the processing time and lowers the processing cost, avoiding the low working efficiency of manual carrying. At the same time, determining the position of the base plate from the positions of the two circle centres, based on the plate's own structural features, solves the problem that the robot cannot otherwise determine the plate's position while carrying.
The application also provides a base plate visual positioning system, comprising:
the acquisition module 501 is used for acquiring a base plate picture;
a preprocessing module 502 for preprocessing the pad picture;
a calculating module 503, configured to calculate a pixel average value of the pad image after the preprocessing;
a first determining module 504, configured to determine whether the pixel average value is greater than a preset threshold, to obtain a first determination result, where the first determination result is yes or no;
a second determining module 505, configured to determine whether the number of times of acquiring the pad image exceeds a preset number of times if the first determining result is negative, to obtain a second determining result, where the second determining result is yes or no;
an adjusting module 506, configured to adjust an exposure time when the pad image is acquired and acquire the pad image again if the second determination result is negative;
the alarm module 507 is used for performing error alarm if the second judgment result is yes;
the extracting module 508 is configured to extract an image edge point by using a preset operator if the first determination result is yes;
a determining module 509, configured to determine circle centers corresponding to the two through holes of the pad plate based on the image edge points;
and the output module 510 is configured to output the circle center information to complete positioning of the pad plate.
The application also provides a base plate carrying joint robot device, which comprises a laser vision type electromagnetic gripper 1, a joint robot 2, a robot walking track 3 and a cable drag chain 4; the joint robot 2 is arranged on the robot walking track 3; the joint robot 2 is connected with a cable through the cable drag chain 4; the laser vision type electromagnetic gripper comprises a frame mechanism 11, an illuminating mechanism 15, a sensing mechanism 14, a connecting mechanism 13 and at least one suction mechanism 12, wherein the connecting mechanism 13 is used for connection with the tail end shaft of the joint robot, the suction mechanism 12 is used for adsorbing the base plate, the connecting mechanism 13 is fixedly connected above the frame mechanism 11, the sensing mechanism 14 is arranged between the frame mechanism 11 and the connecting mechanism 13 and is used for collecting base plate pictures and sending them to the joint robot, and the illuminating mechanism 15 is arranged at the two ends of the frame mechanism 11 to provide illumination for the sensing mechanism 14; the joint robot 2 grabs the base plate based on the collected base plate picture, the base plate being positioned by the base plate visual positioning method provided by the application.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array PGA, a field programmable gate array FPGA, or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A visual positioning method for a base plate is characterized by comprising the following steps:
collecting a base plate picture;
preprocessing the base plate picture;
calculating the pixel average value of the preprocessed base plate picture;
judging whether the pixel average value is larger than a preset threshold value or not to obtain a first judgment result, wherein the first judgment result is yes or no;
if the first judgment result is negative, judging whether the times of acquiring the base plate pictures exceed the preset times to obtain a second judgment result, wherein the second judgment result is yes or no;
if the second judgment result is negative, adjusting the exposure time when the base plate picture is collected, and collecting the base plate picture again;
if the second judgment result is yes, raising an error alarm;
if the first judgment result is yes, extracting image edge points with a preset operator;
determining circle centers corresponding to the two through holes of the base plate based on the image edge points;
and outputting the circle center information to complete the positioning of the backing plate.
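The control flow of claim 1 can be sketched as below. This is a minimal illustration, not the patent's implementation: `capture`, `preprocess`, and `extract_centers` are hypothetical stand-ins for the camera and image-processing steps, and the default threshold, retry count, exposure, and coefficient values are examples consistent with claims 5–8 rather than prescribed by claim 1.

```python
def locate_base_plate(capture, preprocess, extract_centers,
                      threshold=100, max_attempts=5,
                      exposure_ms=10.0, coeff=1.5):
    """Brightness-gated acquisition loop sketched from claim 1.

    capture(exposure_ms)  -> 2-D list of pixel values (hypothetical camera stub)
    preprocess(img)       -> effective area of the picture (claim 4)
    extract_centers(img)  -> [(x1, y1), (x2, y2)] for the two through holes
    """
    for attempt in range(max_attempts):
        img = preprocess(capture(exposure_ms))
        pixels = [p for row in img for p in row]
        mean = sum(pixels) / len(pixels)
        if mean > threshold:              # first judgment: picture bright enough
            return extract_centers(img)   # edge points -> circle centers
        exposure_ms *= coeff              # second judgment: retry with longer exposure
    raise RuntimeError("error alarm: picture too dark after retry limit")
```

A simulated camera whose brightness scales with exposure shows both branches: a dim first shot triggers one exposure increase, after which the mean clears the threshold and the centers are returned.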
2. The base plate visual positioning method according to claim 1, wherein collecting a base plate picture comprises:
collecting the base plate picture through a preset laser vision type electromagnetic gripper; the preset laser vision type electromagnetic gripper comprises a frame mechanism (11), a lighting mechanism (15), a sensing mechanism (14), a connecting mechanism (13) and at least one suction mechanism (12), wherein the connecting mechanism (13) is used for connection with the end shaft of a joint robot, and the suction mechanism (12) is used for adsorbing a base plate; the connecting mechanism (13) is fixedly connected above the frame mechanism (11); the sensing mechanism (14) is arranged between the frame mechanism (11) and the connecting mechanism (13) and is used for collecting base plate pictures and sending them to the preset joint robot; the lighting mechanism (15) is arranged at the two ends of the frame mechanism (11) and provides illumination for the sensing mechanism (14); the suction mechanism (12) is arranged below the frame mechanism (11) and is slidable, together with the base plate, relative to the frame mechanism (11), and the suction mechanism (12) comprises an electromagnetic chuck for adsorbing the base plate.
3. The base plate visual positioning method according to claim 2, wherein the sensing mechanism (14) comprises: a sensor mounting bracket, a vision sensor and a laser ranging sensor; the vision sensor is used for collecting the base plate picture.
4. The base plate visual positioning method according to claim 3, wherein preprocessing the base plate picture comprises:
extracting an effective area in the base plate picture and defining the effective area as the preprocessed base plate picture.
5. The base plate visual positioning method according to claim 1, wherein adjusting the exposure time used when collecting the base plate picture comprises:
multiplying the exposure time used when the base plate picture was last collected by a preset coefficient to obtain the adjusted exposure time.
6. The base plate visual positioning method according to claim 5, wherein the preset coefficient is 1.2, 1.5 or 2.0.
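The exposure update of claims 5 and 6 is a simple multiplicative back-off; a one-line sketch follows. The clamp to a maximum exposure is an added assumption (a typical camera limit), not part of the claims.

```python
def adjust_exposure(prev_exposure_ms, coeff=1.5, max_exposure_ms=100.0):
    """Multiply the previous exposure time by the preset coefficient
    (claim 6 lists 1.2, 1.5 or 2.0). The upper clamp is an assumed
    camera limit, not something the claims specify."""
    return min(prev_exposure_ms * coeff, max_exposure_ms)
```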
7. The base plate visual positioning method according to claim 1, wherein the preset operator comprises one of a Roberts operator, a Sobel operator, a Prewitt operator, a Kirsch operator, a Robinson operator, a Laplacian operator, a Canny operator and a LoG operator.
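Of the preset operators in claim 7, the Sobel operator is the simplest to illustrate: convolve the picture with two 3x3 kernels and keep pixels whose gradient magnitude exceeds a threshold. The sketch below is a plain-Python version for clarity; the threshold value is illustrative, and a real system would tune it to the gripper's lighting.

```python
# Sobel kernels for the horizontal and vertical intensity gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edge_points(img, threshold=200):
    """Return (x, y) edge points of a 2-D grayscale image whose Sobel
    gradient magnitude exceeds the threshold (border pixels skipped)."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                points.append((x, y))
    return points
```

On a synthetic image with a vertical black-to-white step, the detected points cluster on the two columns adjacent to the step, while a uniform image yields no edge points.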
8. The base plate visual positioning method according to claim 1, wherein the preset threshold value is 70, 100 or 150.
9. A base plate visual positioning system, characterized by comprising:
an acquisition module, used for collecting a base plate picture;
a preprocessing module, used for preprocessing the base plate picture;
a calculation module, used for calculating the pixel average value of the preprocessed base plate picture;
a first judgment module, used for judging whether the pixel average value is greater than a preset threshold value to obtain a first judgment result, the first judgment result being yes or no;
a second judgment module, used for judging, if the first judgment result is no, whether the number of times the base plate picture has been collected exceeds a preset number of times to obtain a second judgment result, the second judgment result being yes or no;
an adjustment module, used for adjusting the exposure time used when collecting the base plate picture and collecting the base plate picture again if the second judgment result is no;
an alarm module, used for raising an error alarm if the second judgment result is yes;
an extraction module, used for extracting image edge points with a preset operator if the first judgment result is yes;
a determining module, used for determining the circle centers corresponding to the two through holes of the base plate based on the image edge points;
and an output module, used for outputting the circle center information to complete the positioning of the base plate.
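The determining module must recover a circle center from scattered edge points. The claims do not specify the fitting method; one common realization is an algebraic least-squares (Kasa) circle fit, sketched here in plain Python with a small Gaussian-elimination solver so the example is self-contained.

```python
def fit_circle_center(points):
    """Kasa least-squares circle fit: minimizes the algebraic error of
    x^2 + y^2 = u*x + v*y + c over the edge points and returns the
    center (u/2, v/2). One possible realization of the claimed
    'determine circle centers' step, not the patented method itself."""
    sx = sy = sz = sxx = syy = sxy = szx = szy = 0.0
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        szx += z * x; szy += z * y
    # Normal equations A * [u, v, c]^T = rhs for the three unknowns.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    rhs = [szx, szy, sz]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back-substitution
        sol[r] = (rhs[r] - sum(A[r][c] * sol[c] for c in range(r + 1, 3))) / A[r][r]
    return sol[0] / 2.0, sol[1] / 2.0
```

For noise-free points sampled from a circle the fit is exact, so four points of a radius-5 circle centered at (3, 4) recover that center; with real Sobel edge points the same routine averages out pixel noise.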
10. A base plate carrying joint robot device, characterized by comprising a laser vision type electromagnetic gripper (1), a joint robot (2), a robot walking track (3) and a cable drag chain (4); the joint robot (2) is arranged on the robot walking track (3); the joint robot (2) is connected with a cable through the cable drag chain (4); the laser vision type electromagnetic gripper (1) comprises a frame mechanism (11), a lighting mechanism (15), a sensing mechanism (14), a connecting mechanism (13) and at least one suction mechanism (12), wherein the connecting mechanism (13) is used for connection with the end shaft of the joint robot (2), and the suction mechanism (12) is used for adsorbing a base plate; the connecting mechanism (13) is fixedly connected above the frame mechanism (11); the sensing mechanism (14) is arranged between the frame mechanism (11) and the connecting mechanism (13), is communicatively connected with the joint robot (2), and is used for collecting base plate pictures and sending them to the joint robot (2); the lighting mechanism (15) is arranged at the two ends of the frame mechanism (11) and provides illumination for the sensing mechanism (14); the joint robot (2) grabs the base plate based on the collected base plate picture, the base plate being positioned by the base plate visual positioning method according to any one of claims 1 to 8.
CN202011360261.6A 2020-11-27 2020-11-27 Base plate visual positioning method and system and base plate carrying joint robot device Active CN112478779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011360261.6A CN112478779B (en) 2020-11-27 2020-11-27 Base plate visual positioning method and system and base plate carrying joint robot device


Publications (2)

Publication Number Publication Date
CN112478779A true CN112478779A (en) 2021-03-12
CN112478779B CN112478779B (en) 2022-07-12

Family

ID=74936265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011360261.6A Active CN112478779B (en) 2020-11-27 2020-11-27 Base plate visual positioning method and system and base plate carrying joint robot device

Country Status (1)

Country Link
CN (1) CN112478779B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281437A (en) * 2008-01-29 2008-10-08 埃派克森微电子(上海)有限公司 Method for regulating optical indication device image quality controlling parameter
CN101334263A (en) * 2008-07-22 2008-12-31 东南大学 Circular target circular center positioning method
CN102783137A (en) * 2010-05-10 2012-11-14 松下电器产业株式会社 Imaging apparatus
CN104270570A (en) * 2014-10-17 2015-01-07 北京英泰智软件技术发展有限公司 Binocular video camera and image processing method thereof
KR20160147336A (en) * 2015-06-15 2016-12-23 봉민 System for supplying a plate
CN110562741A (en) * 2019-10-21 2019-12-13 苏州和自兴智能科技有限公司 Machine vision fused separating type station robot and production platform


Also Published As

Publication number Publication date
CN112478779B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN107443428A (en) A kind of band visual identity flapping articulation manipulator and visual identity method
CN109975317B (en) Method for detecting defects on whole surface of cylindrical roller
CN109821763B (en) Fruit sorting system based on machine vision and image identification method thereof
CN215449031U (en) Laminate polymer battery surface defect check out test set
CN112478779B (en) Base plate visual positioning method and system and base plate carrying joint robot device
CN211100241U (en) A removing devices for flaw visual detection
CN204721786U (en) Multi-functional full-automatic inserter
CN109967389A (en) A kind of detonation tool defect automatic checkout system and its detection method
JPH0929693A (en) Fixed weight cutter
CN210214044U (en) Automatic control system for overturning of multi-vehicle type mixed line production vehicle frame
CN113077414B (en) Steel plate surface defect detection method and system
CN112255244B (en) Patch detection device integrated in femto-camera and detection method
CN109978941A (en) Non-contact type sleeper localization method under a kind of tamping operation
CN113532292B (en) Online size detection system for plates and working method thereof
CN108748360A (en) A kind of water pipe cutter device that can be precisely oriented to and can avoid water pipe brittle failure
CN112123336A (en) Method for guiding robot to suck buzzer for dust removal
EP0974830A3 (en) Apparatus and method for detecting low-contrast blemishes
CN220363975U (en) Automatic steel plate centering device based on machine vision
CN105172839A (en) Full-automatic detection system for train guide rail contours
CN219624695U (en) Tobacco shred width detection device
CN219362499U (en) Sectional type self-adaptive conveying device for license plates
CN111060011A (en) Positioning system of bearing saddle, automatic bearing saddle detection system and method
CN217443200U (en) Automatic detection device that snatchs of cut-parts
CN220131151U (en) Industrial camera vision automatic calibration device and material conveying equipment
CN219326294U (en) Full-automatic feeding and discharging test bench with visual recognition function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant