CN117019671A - Appearance defect detection device of metal corrugated pipe - Google Patents
- Publication number: CN117019671A (application number CN202310939258.7A)
- Authority
- CN
- China
- Prior art keywords
- detection
- feeding
- coordinates
- image
- bellows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B07C5/34 — Sorting according to other particular properties
- B07C5/02 — Measures preceding sorting, e.g. arranging articles in a stream, orientating
- B07C5/361 — Sorting apparatus characterised by the means used for distribution; processing or control devices therefor, e.g. escort memory
- G06T7/0004 — Image analysis; industrial image inspection
- G06T2207/30108 — Industrial image inspection (indexing scheme)
- G06T2207/30136 — Metal (indexing scheme)
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses an appearance defect detection device for a metal corrugated pipe, which comprises an upper computer, a side vision system, a feeding frame, a feeding mechanism, a detection moving module, a moving wheel set, a detection vision system, a discharging mechanism and a discharging frame, wherein the upper computer is electrically connected with the side vision system, the feeding mechanism, the detection moving module, the detection vision system and the discharging mechanism respectively; meanwhile, the invention greatly reduces the manual workload and the labor cost.
Description
Technical Field
The invention belongs to the field of detection, and relates to an appearance defect detection device for a metal corrugated pipe.
Background
A metal corrugated pipe (bellows) is a pipe with a regular wave-like profile, used for pipeline connections in places where a fixed elbow is inconvenient to install, or for connecting pipelines to equipment. During processing, black-gray appearance defects can appear on the bellows. If finished bellows with such black-gray defects are not rejected and flow into the market, they are easily oxidized in use, causing the pipe wall to thin and even leak, which seriously affects bellows product quality and also poses a considerable safety hazard.
At present, the annealed bellows are generally fully inspected by manual visual inspection, and semi-finished bellows with black-gray defects are picked out by hand. Manual detection has the following drawbacks: (1) it places high demands on eyesight; (2) the labor intensity is high and strains the eyes; (3) it is highly random, so bellows quality cannot be guaranteed; (4) efficiency is low and continuous working time is limited, affecting production efficiency; (5) rising labor costs put great pressure on the enterprise.
Disclosure of Invention
The invention provides an appearance defect detection device of a metal corrugated pipe, which aims to overcome the defects of the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme: an appearance defect detection device for a metal bellows, comprising an upper computer, a side vision system, a feeding frame, a feeding mechanism, a detection moving module, a moving wheel set, a detection vision system, a discharging mechanism and a discharging frame. The upper computer is electrically connected with the side vision system, the feeding mechanism, the detection moving module, the detection vision system and the discharging mechanism respectively. The side vision system collects an image of the bellows to be detected and detects its positioning coordinates; the feeding mechanism moves according to the positioning coordinates, clamps the bellows to be detected and carries it to the feeding position; the detection moving module takes the bellows from the feeding mechanism at the feeding position and moves it to the detection position; the detection vision system collects images of the bellows located at the detection position and performs appearance detection on the bellows in the images; the discharging mechanism clamps the detected bellows at the discharging position and sorts it according to the appearance detection result.
Further, the shooting plane of the side vision system is parallel to the detection surface of the feeding frame; the side vision system 1 collects images of the detection surface of the feeding frame 2 and performs visual positioning detection on the images, which comprises detecting whether the feeding frame is tilted, detecting the positioning coordinates of the bellows to be detected in the image, and converting the bellows image coordinates into physical coordinates of the feeding mechanism.
Further, the feeding mechanism comprises a feeding movement module and a feeding clamping jaw mechanism, wherein the feeding movement module is connected with the feeding clamping jaw mechanism, and the feeding movement module controls the feeding clamping jaw mechanism to move in three directions in space.
Further, the feeding clamping jaw mechanism comprises a feeding clamping jaw and a feeding photoelectric switch; the feeding photoelectric switch is used for judging whether the feeding clamping jaw has moved to the feeding position, and the feeding clamping jaw mechanism clamps the bellows to be detected through the feeding clamping jaw.
Further, the detection vision system comprises four groups of vision imaging acquisition modules, wherein each vision imaging acquisition module comprises an area array camera, a fixed focus lens and a dome light source.
Further, the positions of the four groups of visual imaging acquisition modules in spatial distribution are as follows:
In the plane of the circumferential direction of the bellows, the four groups of visual imaging acquisition modules are distributed annularly around the central axis of the bellows, with an included angle of 90° between adjacent modules, and the detection directions of the four groups of modules face the bellows; in the plane of the axial direction of the bellows, the four groups of modules are spaced a set distance apart from one another along the bellows axis.
Further, the movable wheel set is located between the detection movable module and the detection visual system, the movable wheel set comprises two groups, the two groups of movable wheel sets are distributed symmetrically up and down, and the central line between the two groups of movable wheel sets is collinear or approximately collinear with the central line of the corrugated pipe on the detection clamping assembly.
Further, the movable wheel group comprises a roller, a cylinder for controlling the roller to move up and down and a roller position detection sensor, the roller position detection sensor detects whether the roller reaches an initial position, and when the roller is at the initial position, the distance between the two groups of rollers is larger than the diameter of the corrugated pipe.
Further, the step of converting the bellows image coordinates into physical coordinates of the feeding mechanism by the side vision system is as follows:
step 1.1: placing an empty feeding frame in a detection area of a side vision system;
The detection surface of the feeding frame is parallel to the shooting surface of the lateral vision system; the object distance of the side vision system reaches a set value, and focusing is clear; the shooting surface of the side vision system is parallel to the XZ plane of the feeding clamping claw;
step 1.2: the upper computer controls the feeding mechanism to clamp a pipe and move it through 9 points in the feeding frame, recording the actual coordinates of the 9 points; the side vision system collects 9 images at the 9 points, and the distribution of the superimposed feature patterns in the 9 images is consistent with the distribution of the 9 points through which the pipe moved in the feeding frame;
step 1.3: respectively searching circles in the detection surface areas of the 9 images by using a Hough circle searching algorithm to obtain circle center pixel coordinates, wherein the circle center pixel coordinates are image pixel coordinates of point positions in the 9 images, and the image pixel coordinates of the point positions correspond to the actual coordinates one by one;
step 1.4: obtaining a conversion matrix between the image pixel coordinates and the actual coordinates of the same point position according to the two coordinates;
step 1.5: calling the conversion matrix obtained in step 1.4 to convert the pixel coordinates of the 9 points into the physical coordinates of the 9 points in the feeding mechanism;
step 1.6: comparing the physical coordinates of the 9 points obtained by conversion in step 1.5 with the actual coordinates of the 9 points to obtain statistical data; if the statistical data are within a threshold range, the calculated conversion matrix is judged to be valid, the conversion matrix is stored, and the 9-point calibration flow ends; otherwise, the 9-point calibration conversion is judged to be abnormal, and calibration is repeated from step 1.2;
The method for obtaining the conversion matrix in the step 1.4 comprises the following steps:
the image pixel coordinates of a point are set as (xn, yn), the actual coordinates of the point are set as (Xn, Yn, Zn), and the conversion matrix is set as
M = | a b c |
    | d e f |
wherein n is the point index, n = 1, 2, ..., 9; the point image pixel coordinates and the point actual coordinates are known parameters, and together with the conversion matrix they satisfy formula (1):
| a b c |   | xn |   | Xn |
| d e f | * | yn | = | Yn |;(1)
            | 1  |
according to formula (1), formulas (2) and (3) can be obtained:
a*xn + b*yn + c = Xn;(2)
d*xn + e*yn + f = Yn;(3)
substituting the image pixel coordinates and the actual coordinates of the 9 points into formula (2) respectively gives the following 9 equations:
a*x1 + b*y1 + c = X1    a*x4 + b*y4 + c = X4    a*x7 + b*y7 + c = X7
a*x2 + b*y2 + c = X2    a*x5 + b*y5 + c = X5    a*x8 + b*y8 + c = X8
a*x3 + b*y3 + c = X3    a*x6 + b*y6 + c = X6    a*x9 + b*y9 + c = X9
these are solved by the least square method; the sum of squared residuals gives formula (4):
S(a, b, c) = [(a*x1 + b*y1 + c) - X1]^2 + [(a*x2 + b*y2 + c) - X2]^2 + ...... + [(a*x9 + b*y9 + c) - X9]^2;(4)
taking the partial derivatives of S(a, b, c) with respect to a, b and c and setting each first derivative to 0 gives formula (5):
∂S/∂a = 0, ∂S/∂b = 0, ∂S/∂c = 0;(5)
according to formula (5), the parameters a, b and c are obtained; d, e and f are obtained from formula (3) by the same process, and the conversion matrix
M = | a b c |
    | d e f |
is thereby obtained.
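The least-squares fit of steps 1.3 to 1.6 can be sketched as follows. This is a minimal illustration in Python/NumPy, not the patent's implementation; `numpy.linalg.lstsq` solves the normal equations of formula (5) directly instead of expanding the partial derivatives by hand, and the function names are hypothetical.

```python
import numpy as np

def fit_conversion_matrix(pixel_pts, actual_pts):
    """Fit the 2x3 conversion matrix [[a, b, c], [d, e, f]] mapping image
    pixel coordinates (x, y) to physical coordinates (X, Y) by least
    squares, as in formulas (2)-(5)."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)    # shape (9, 2)
    actual_pts = np.asarray(actual_pts, dtype=float)  # shape (9, 2)
    # Design matrix: one row [xn, yn, 1] per calibration point.
    A = np.column_stack([pixel_pts, np.ones(len(pixel_pts))])
    # Solving A @ [a, b, c] = X and A @ [d, e, f] = Y in the least-squares
    # sense is equivalent to setting the partial derivatives of S to zero.
    abc, *_ = np.linalg.lstsq(A, actual_pts[:, 0], rcond=None)
    def_, *_ = np.linalg.lstsq(A, actual_pts[:, 1], rcond=None)
    return np.vstack([abc, def_])  # the 2x3 conversion matrix

def apply_conversion(M, pixel_pt):
    """Convert one pixel coordinate to physical coordinates via formula (1)."""
    x, y = pixel_pt
    return M @ np.array([x, y, 1.0])
```

Step 1.6 would then convert the 9 pixel coordinates with the fitted matrix and compare the results with the recorded actual coordinates against a threshold before storing the matrix.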
Further, the side vision system performs visual positioning detection on the image, the visual positioning detection comprises the steps of detecting whether the feeding frame is inclined or not, and detecting the positioning coordinates of the to-be-detected bellows image, wherein the steps are as follows:
step 2.1: a frame of single-channel, 8-bit-depth gray-level image is collected; the gray scale is divided into 256 levels ranging from 0 to 255, where 0 represents black and 255 represents white;
Step 2.2: obtaining edge points on two sides of a feeding frame by an edge searching method to obtain the height, width and center coordinate parameters of a detection surface of the feeding frame;
step 2.3: comparing the obtained width parameter with the width parameter of the feeding frame input in the step 2, if the ratio is in the range of 0.98 to 1.02, judging that the placing position of the feeding frame is accurate, and executing the step 2.4; if the ratio is not in the range of 0.98 to 1.02, judging that the feeding frame is placed obliquely, prompting that the feeding frame needs to be replaced, and ending the step;
step 2.4: extracting a detection area image of the feeding frame according to the obtained height, width and center coordinate parameters;
step 2.5: extracting a target circle in the feeding frame by adopting a Hough circle finding algorithm on the image of the detection area of the feeding frame, and obtaining the center coordinates and the radius of the target circle;
the radius of the target circle lies in the range of 0.9 to 1.1 times the radius corresponding to the outer diameter dimension input in step 2; a candidate circle is accepted as a target circle only when the number of curves intersecting at its center point in the Hough circle-finding algorithm (the accumulator votes) is not smaller than the circle-count threshold;
step 2.6: converting the circle center coordinates into physical coordinates of the feeding mechanism 3;
step 2.7: according to the clamping order from top to bottom and from left to right, all the physical coordinates obtained in the step 2.6 are sent to the feeding mechanism 3 according to the clamping order;
Step 2.8: ending the steps of,
In summary, the invention has the following advantages:
the invention realizes automatic, non-contact optical detection of the appearance of semi-finished metal bellows, sorts the bellows according to the appearance detection result, rejects defective products, and overcomes the drawbacks of manual detection, thereby ensuring product quality; meanwhile, the invention greatly reduces the manual workload and the labor cost.
Drawings
FIG. 1 is a schematic diagram of an appearance defect detecting apparatus according to the present invention.
FIG. 2 is a schematic diagram of an appearance defect detecting apparatus according to the present invention.
Fig. 3 is a schematic diagram of a mobile wheelset and inspection vision system of the present invention.
Fig. 4 is a schematic diagram of a mobile wheelset and inspection vision system of the present invention.
Fig. 5 is a flow chart of the detection and positioning of a single frame image of the side vision system of the present invention.
Fig. 6 is a flow chart of the single frame image appearance detection of the detection vision system of the present invention.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the following disclosure, which describes embodiments of the invention with reference to specific examples. The invention may also be practiced or applied in other embodiments, and the details of this description may be modified or varied in various respects without departing from the spirit and scope of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the invention. The drawings show only the components related to the invention rather than the number, shape and size of components in an actual implementation; the form, quantity and proportion of the components in an actual implementation may vary arbitrarily, and the component layout may be more complicated.
All directional indications (such as up, down, left, right, front, rear, lateral, longitudinal) in embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between components in a particular orientation; if that orientation changes, the directional indication changes accordingly.
For reasons of installation errors, the parallel relationship referred to in the embodiments of the present invention may be an approximately parallel relationship, and the perpendicular relationship may be an approximately perpendicular relationship.
Embodiment one:
as shown in fig. 1-6, an appearance defect detection device for a metal corrugated pipe comprises an upper computer, a side vision system 1, a feeding frame 2, a feeding mechanism 3, a detection moving module 4, a moving wheel set 5, a detection vision system 6, a discharging mechanism 7 and a discharging frame 8, wherein the upper computer is electrically connected with the side vision system 1, the feeding mechanism 3, the detection moving module 4, the detection vision system 6 and the discharging mechanism 7 respectively.
The bellows to be detected is placed in the feeding frame 2, the shooting direction of the side vision system 1 faces the feeding frame 2, and the side plate of the feeding frame 2 facing one side of the side vision system 1 is made of transparent materials, so that the side vision system 1 shoots the bellows image in the feeding frame 2, and visual positioning detection of the bellows is realized.
The side vision system 1 collects an image of a detection surface of the feeding frame 2, the image comprises a bellows image to be detected, the side vision system 1 performs visual positioning detection on the image, the visual positioning detection comprises detection of whether the feeding frame is inclined or not, detection of a bellows image coordinate to be detected and conversion of the bellows image coordinate into a physical coordinate of a feeding mechanism;
The feeding mechanism 3 comprises a feeding motion module and a feeding gripper mechanism 34 connected to it; the feeding motion module is a spatial motion module that moves the feeding gripper mechanism 34 in three directions in space. The upper computer controls the feeding gripper mechanism 34, through the feeding motion module, to move from the feeding start position to the clamping coordinates of the bellows to be detected, where the bellows is clamped. The feeding gripper mechanism 34 and the clamped bellows then move to the feeding position, where the feeding gripper mechanism 34 docks with the detection moving module 4. In this embodiment the feeding position is the same as the feeding start position, but this is not limiting; in other embodiments they may differ.
The feeding gripper mechanism 34 includes a feeding jaw and a feeding photoelectric switch, and the feeding photoelectric switch is used for judging whether the feeding gripper moves to a feeding level, and the feeding gripper mechanism 34 grips the bellows to be detected through the feeding gripper.
In this embodiment, the feeding gripper mechanism 34 includes two groups, and the two groups of feeding gripper mechanisms 34 operate individually so as to be suitable for gripping corrugated pipes of different lengths.
The feeding motion module comprises a feeding first direction motion module 31, a feeding second direction motion module 32 and a feeding third direction motion module 33. The two groups of feeding gripper mechanisms 34 are respectively mounted on two groups of feeding third direction motion modules 33, which drive them along the third direction. The two groups of feeding third direction motion modules 33 are respectively mounted on two groups of feeding second direction motion modules 32, which drive them along the second direction. The two groups of feeding second direction motion modules 32 are mounted on the feeding first direction motion module 31, which drives them synchronously along the first direction; the feeding first direction motion module 31 is mounted on a bracket (not shown).
As shown in fig. 1, the first direction is the X direction, the second direction is the Y direction, and the third direction is the Z direction.
The detection mobile module 4 comprises a mobile module 41, a detection clamping component 42 and a bellows position detection sensor 43, wherein the detection clamping component 42 comprises two groups, the two groups of detection clamping components 42 are respectively arranged on the mobile module 41, and the mobile module 41 controls the two groups of detection clamping components 42 to synchronously move.
The detection clamping assembly 42 comprises a detection clamping moving module 421, a detection clamping jaw 422 and a detection clamping jaw judging sensor 423, wherein the detection clamping jaw 422 is installed on the detection clamping moving module 421, the detection clamping jaw judging sensor 423 is installed on the detection clamping jaw 422, and the detection clamping moving module 421 controls the detection clamping jaw 422 to move.
The detection jaw judging sensor 423 is used for judging whether the detection jaw 422 has reached the feeding position; when the detection jaw 422 reaches the feeding position, the detection clamping moving module 421 stops running.
The detection jaw 422 grips, at the feeding position, the bellows held by the feeding gripper mechanism 34.
To facilitate appearance detection of the bellows by the detection vision system 6, the two groups of detection clamping moving modules 421 move in opposite directions to straighten the bellows; whether the bellows is straightened is judged from the torque value set on the motors of the detection clamping moving modules 421. The moving module 41 then moves the detection clamping assembly 42 with the straightened bellows toward the detection vision system 6, and the bellows position detection sensor 43 detects whether the bellows has reached the detection preparation position.
The moving wheel set 5 is located between the detecting moving module 4 and the detecting vision system 6, the moving wheel set 5 comprises two groups, the two groups of moving wheel sets 5 are distributed symmetrically up and down in the Z direction, and the center line between the two groups of moving wheel sets 5 is collinear or approximately collinear with the center line of the corrugated pipe on the detecting clamping component 42.
The moving wheel set 5 comprises a roller 51, an air cylinder 52 for moving the roller up and down, and a roller position detection sensor 53; the sensor 53 detects whether the roller 51 has reached the initial position. When the rollers 51 are at the initial position, the distance between the two groups of rollers 51 is larger than the diameter of the bellows, and the detection preparation position lies on the front side of the moving wheel set 5, so the bellows can reach the detection preparation position without colliding with the rollers 51.
The upper computer controls the cylinders 52 to move down to a designated position according to the bellows diameter. As the bellows continues to move between the two groups of rollers 51, the rollers 51 rotate with the motion of the bellows, reducing the resistance on the bellows. The two groups of rollers 51 support and limit the bellows in the Z direction, restraining it from shaking during movement so that the detection vision system 6 images clearly.
The detection vision system 6 comprises four groups of visual imaging acquisition modules 61, each comprising an area-array camera, a fixed-focus lens and a dome light source. The four groups of modules 61 are spatially distributed as follows: in the XZ plane (i.e. the plane of the circumferential direction of the bellows), the four modules are distributed annularly and uniformly around the central axis of the bellows, with an included angle of 90° between adjacent modules, and the detection directions of all four modules face the bellows; in the XY or YZ plane (i.e. the plane of the axial direction of the bellows), the four modules are spaced a certain distance apart from one another along the bellows axis, so that their illumination does not interfere with one another.
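The camera layout just described can be illustrated with a small geometry sketch. The radius and axial pitch are assumed example values, not from the patent; the bellows central axis is taken as the Y axis with x = z = 0.

```python
import math

def camera_poses(radius, axial_pitch):
    """Nominal mounting positions of the four visual imaging acquisition
    modules: 90 degrees apart around the bellows central axis (Y axis) in
    the XZ plane, each offset a further `axial_pitch` along the axis so
    that the dome light sources do not interfere with one another."""
    poses = []
    for k in range(4):
        angle = math.radians(90 * k)          # 0, 90, 180, 270 degrees
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        y = k * axial_pitch                   # axial spacing between modules
        poses.append((round(x, 9), y, round(z, 9)))
    return poses
```

Each module's optical axis would point from its (x, y, z) position toward the bellows axis, giving full circumferential coverage from four views.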
The moving module 41 passes through the detection preparation position and the moving wheel set 5 and reaches the position of the detection vision system 6, which is the detection position. The four groups of visual imaging acquisition modules 61 acquire images of the bellows at the detection position at different circumferential and axial locations, perform appearance detection, and judge whether the appearance of the bellows is defective; according to the judgment result, the bellows is classified as a good product or a defective product. The moving module 41 then restores the detected bellows to its unstraightened state and moves it to the discharging position, where it docks with the discharging mechanism 7. The discharging frame 8 comprises a good-product frame 81 and a defective-product frame 82; according to the judgment result, the upper computer controls the discharging mechanism 7 to pick up a good bellows, move it to the good-product discharge position and place it in the good-product frame 81, or to pick up a defective bellows, move it to the defective-product discharge position and place it in the defective-product frame 82.
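The good/defective routing above can be sketched as a simple decision, assuming one boolean defect flag per imaging module; the function and frame names are hypothetical, not from the patent.

```python
def classify_bellows(defect_flags):
    """A bellows is a good product only when none of the images from the
    four visual imaging acquisition modules shows an appearance defect."""
    return "good" if not any(defect_flags) else "defective"

def route_to_frame(result):
    """Route the detected bellows: good products to frame 81,
    defective products to frame 82 (frame numbers from the embodiment)."""
    return {"good": "good_frame_81", "defective": "defective_frame_82"}[result]
```

In the device, the upper computer would evaluate this decision and command the discharging mechanism 7 accordingly.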
The structure of the discharging mechanism 7 is the same as that of the feeding mechanism 3, and a detailed description thereof will be omitted.
The moving direction of the moving module 41 and the detecting clamp moving module 421 is the Y direction.
The feeding first direction movement module 31, the feeding second direction movement module 32, the feeding third direction movement module 33, the moving module 41 and the detection clamping movement module 421 all adopt the existing transmission structure of a ball screw and a linear guide rail; since this transmission structure is not itself improved here, the structures of these modules are not repeated.
In other embodiments, the feeding movement modules, the moving module 41 and the detection clamping movement module 421 can alternatively be realized by a linear hydraulic cylinder matched with a hydraulic circuit, a motor matched with a conveyor belt, or other transmission modes; the application is not limited to a specific transmission mode, and this embodiment provides only one technical scheme.
The installation device with the structure omitted in the attached drawings can adopt a conventional installation frame, an installation plate and the like, and is not an improvement point of the application and is not described herein.
The embodiment in which the side vision system converts the bellows image coordinates into physical coordinates of the feeding mechanism adopts 9-point calibration to realize the coordinate conversion, and specifically comprises the following steps:
Step 1.1: placing an empty feeding frame in a detection area of the side vision system 1;
the detection surface of the feeding frame is parallel to the shooting surface of the side vision system 1; the object distance of the side vision system 1 reaches a set value, and focusing is clear; the shooting surface of the side vision system is parallel to the XZ plane of the feeding clamping claw;
step 1.2: the upper computer controls the feeding mechanism to clamp a pipe and move it to 9 points in the feeding frame 2, and the actual coordinates of the 9 points are recorded; the side vision system 1 collects 9 images, one at each of the 9 points, and the distribution of the superimposed characteristic patterns in the 9 images is consistent with the distribution of the 9 points to which the pipe moves in the feeding frame 2;
the pipe is rigid and straight, the length of the pipe matches the feeding frame, and the outer diameter of the pipe is of the same specification as the bellows to be detected;
step 1.3: respectively searching circles in the detection surface areas of the 9 images by using a Hough circle searching algorithm to obtain circle center pixel coordinates, wherein the circle center pixel coordinates are image pixel coordinates of point positions in the 9 images, and the image pixel coordinates of the point positions correspond to the actual coordinates one by one;
step 1.4: obtaining a conversion matrix between the image pixel coordinates and the actual coordinates of the same point position according to the two coordinates;
step 1.5: calling the obtained conversion matrix to convert the pixel coordinates of the 9 points into the physical coordinates of the 9 points in the feeding mechanism 3;
Step 1.6: comparing the physical coordinates of the 9 points obtained by conversion with the actual coordinates of the 9 points to obtain statistical data, if the statistical data is in a threshold range, judging that the calculated conversion matrix is effective, storing the conversion matrix, and ending the 9-point calibration flow; otherwise, judging that the 9-point calibration conversion is abnormal, and re-calibrating and executing the step 1.2;
the method for obtaining the conversion matrix in the step 1.4 comprises the following steps:
the pixel coordinates of the dot image are set to (x) n ,y n ) The actual coordinates of the point location are set to (X n ,Y n ,Z n ) The conversion matrix is set asWherein n is a point sequence number, n=1, 2..9, the point image pixel coordinates and the point actual coordinates are known parameters, the point image pixel coordinates, the point actual coordinates and the conversion matrix satisfy the formula (1),
according to formula (1), formulas (2) and (3) can be obtained:
a·x_n + b·y_n + c = X_n;  (2)
d·x_n + e·y_n + f = Y_n;  (3)
substituting the image pixel coordinates and the actual coordinates of the 9 points into formula (2) gives the following 9 equations:
a·x_1 + b·y_1 + c = X_1;  a·x_4 + b·y_4 + c = X_4;  a·x_7 + b·y_7 + c = X_7;
a·x_2 + b·y_2 + c = X_2;  a·x_5 + b·y_5 + c = X_5;  a·x_8 + b·y_8 + c = X_8;
a·x_3 + b·y_3 + c = X_3;  a·x_6 + b·y_6 + c = X_6;  a·x_9 + b·y_9 + c = X_9.
The above equations are solved by the least square method; the sum of squared errors is defined by formula (4):
S(a, b, c) = [(a·x_1 + b·y_1 + c) - X_1]² + [(a·x_2 + b·y_2 + c) - X_2]² + ... + [(a·x_9 + b·y_9 + c) - X_9]²;  (4)
Taking the partial derivatives of S(a, b, c) with respect to a, b and c and setting each first derivative to 0 gives formula (5):
∂S/∂a = 0; ∂S/∂b = 0; ∂S/∂c = 0.  (5)
According to formula (5), the parameters a, b and c are obtained; d, e and f are obtained by the same process applied to formula (3), and the conversion matrix [a, b, c; d, e, f] is then obtained.
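The least-squares solution of steps 1.4 to 1.6 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`solve_conversion_matrix`, `convert`) and the synthetic 3x3 grid data are assumptions for demonstration, and `numpy.linalg.lstsq` solves the same normal equations as formula (5).

```python
import numpy as np

def solve_conversion_matrix(pixel_pts, physical_pts):
    """Solve the 2x3 affine matrix [a, b, c; d, e, f] mapping pixel to physical coordinates."""
    px = np.asarray(pixel_pts, dtype=float)
    ph = np.asarray(physical_pts, dtype=float)
    # Design matrix rows [x_n, y_n, 1], so A @ [a, b, c] = X_n (formula (2)).
    A = np.hstack([px, np.ones((len(px), 1))])
    abc, *_ = np.linalg.lstsq(A, ph[:, 0], rcond=None)   # a, b, c
    def_, *_ = np.linalg.lstsq(A, ph[:, 1], rcond=None)  # d, e, f
    return np.vstack([abc, def_])                        # 2x3 conversion matrix

def convert(matrix, pixel_pt):
    # Formula (1): [X; Y] = M @ [x; y; 1]
    x, y = pixel_pt
    return matrix @ np.array([x, y, 1.0])

# Synthetic 9-point (3x3 grid) calibration with a known affine map.
true_M = np.array([[0.1, 0.0, 5.0], [0.0, -0.1, 20.0]])
grid = np.array([[i * 100, j * 100] for i in range(3) for j in range(3)])
physical = grid @ true_M[:, :2].T + true_M[:, 2]
M = solve_conversion_matrix(grid, physical)
# Step 1.6 style check: converted coordinates should match the actual ones.
residual = np.abs(convert(M, grid[0]) - physical[0]).max()
```

With exact point correspondences the known matrix is recovered to machine precision; with noisy real data the residual statistic of step 1.6 decides whether the calibration is accepted.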
In this embodiment, the coordinate systems of the side vision system 1 and the feeding mechanism 3 are unified, and the XYZ coordinate system shown in FIG. 1 is established. In this coordinate system, the detection plane of the feeding frame and the imaging plane of the side vision system 1 are parallel to the XZ plane, so the pixel coordinates of the image detected by the side vision system 1 are taken as the point-location image pixel coordinates (x_n, z_n), the actual coordinates of the point location are taken as (X_n, Z_n, Y_n), and the conversion follows formulas (1) to (5) accordingly.
the mode in which the side vision system 1 collects the image of the bellows to be detected is triggered collection, namely, the upper computer triggers the side vision system 1 through a communication interface to collect one frame of image; the side vision system performs visual positioning detection on the image, the visual positioning detection comprises detecting whether the feeding frame is inclined, and the steps for detecting the positioning coordinates of the bellows image to be detected are as follows:
step 2.1: a frame of single-channel, 8-bit-depth gray-level image is collected; the gray level is divided into 256 levels in the range 0 to 255, where 0 represents black and 255 represents white;
step 2.2: obtaining edge points on two sides of a feeding frame by an edge searching method to obtain the height, width and center coordinate parameters of a detection surface of the feeding frame;
step 2.3: comparing the obtained width parameter with the input width parameter of the feeding frame 2, if the ratio is in the range of 0.98-1.02, judging that the placing position of the feeding frame is accurate, and executing the step 2.4; if the ratio is not in the range of 0.98 to 1.02, judging that the feeding frame is placed obliquely, prompting that the feeding frame needs to be replaced, and ending the step;
step 2.4: extracting a detection area image of the feeding frame according to the obtained height, width and center coordinate parameters;
Step 2.5: extracting a target circle in the feeding frame by adopting a Hough circle finding algorithm on the image of the detection area of the feeding frame, and obtaining the center coordinates and the radius of the target circle;
the radius of the target circle is in the range of 0.9 to 1.1 times the outer-diameter radius input in step 2; a circle is taken as the target circle when the number of curves in the Hough circle finding algorithm intersecting at its accumulator point is not smaller than the set threshold;
step 2.6: converting the circle center coordinates into physical coordinates of the feeding mechanism 3;
step 2.7: sorting all the physical coordinates obtained in step 2.6 into the clamping order from top to bottom and from left to right, and sending them to the feeding mechanism 3 in that order;
step 2.8: and (5) ending the step.
The edge searching method in the step 2.2 specifically comprises the following steps:
step 2.2.1: transversely dividing the acquired gray level image into a plurality of sub-areas with equal widths;
step 2.2.2: traversing each pixel point from two sides to the middle direction for each row of each sub-area to extract and process edge points of two sides of the feeding frame;
step 2.2.3: respectively storing the extracted edge points into a left edge point set A of the feeding frame and a right edge point set B of the feeding frame;
step 2.2.4: according to the pixel point coordinates of the feeding frame left edge point set A, calculating an average value mu and a standard deviation sigma of the abscissa, removing outlier pixel points with the abscissa not meeting [ mu-sigma, mu+sigma ] from the feeding frame left edge point set A to obtain a left gathering edge point set A ', processing the feeding frame right edge point set B by the same method to obtain a right gathering edge point set B',
Step 2.2.5: performing straight line fitting on the left aggregation edge point set A' by using a least square method to obtain a straight line X A =α,X A α is the left edge abscissa position of the feeding frame,
X A the sum of squares of the distances of all pixels of set a' to the line is minimized by =α, where α is a constant.
Step 2.2.6: performing straight line fitting on the right side aggregation edge point set B' by using a least square method to obtain a straight line X B =β,X B The =β is the left edge abscissa position of the feed frame,
X B the sum of squares of the distances of all pixels of set B' to the line is minimized by =β, where β is a constant.
Step 2.2.7: obtaining the width, height and center coordinates of the detection surface of the feeding frame,
the width of the detection surface is W_j, W_j = |X_A - X_B| = |α - β|;
the height of the detection surface is the difference between the maximum and minimum ordinate values of the pixel points in set A' or B';
obtaining a center coordinate according to the width and the height of the detection surface of the feeding frame;
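Steps 2.2.4 and 2.2.5 can be sketched as follows. For a vertical line X_A = α, minimizing the sum of squared distances reduces to taking the mean abscissa of the points kept after the [μ - σ, μ + σ] outlier removal; the sample data and function name are illustrative assumptions.

```python
import statistics

def fit_edge_line(edge_points):
    """edge_points: list of (x, y) pixel coordinates of one frame edge.

    Removes abscissa outliers outside [mu - sigma, mu + sigma] (step 2.2.4),
    then fits the vertical line X = alpha by least squares (step 2.2.5).
    """
    xs = [x for x, _ in edge_points]
    mu = statistics.mean(xs)
    sigma = statistics.pstdev(xs)  # population standard deviation
    kept = [x for x in xs if mu - sigma <= x <= mu + sigma]
    # The squared-distance sum to a vertical line is minimized by the mean.
    return statistics.mean(kept)

# Eight clean edge points at x = 100 plus one outlier at x = 160.
left = [(100, r) for r in range(8)] + [(160, 8)]
alpha = fit_edge_line(left)
```

The outlier at x = 160 falls outside the one-sigma band and is discarded, so the fitted line lands exactly on the true edge at α = 100.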
the edge point processing method in step 2.2.2 specifically comprises the following steps:
step S.1: carrying out projection processing on the sub-region to generate projection lines, and calculating the average density of each projection line to obtain the average density waveform of the projection lines, wherein the average density is the average gray value of each projection line;
step S.2: performing differential processing on the average density waveform of the projection lines to obtain a differential waveform;
Step S.3: filtering the differential waveform to remove peaks smaller than a set threshold value to obtain a filtered waveform;
step S.4: extracting, in the filtered waveform, the left projection waveform point corresponding to the first peak encountered from the left toward the middle and the right projection waveform point corresponding to the first peak encountered from the right toward the middle;
the left projection waveform point corresponds to the average value of the lateral coordinates of the left edge point of the feeding frame in the subarea, and the right projection waveform point corresponds to the average value of the lateral coordinates of the right edge point of the feeding frame in the subarea.
Step S.5: and traversing 3 pixel points before and after the average value of the lateral coordinates of the left edge point of the feeding frame from small to large in each row of the sub-region, comparing the gray value difference values of the two adjacent pixel points before and after, and selecting the pixel point with smaller coordinates in the two adjacent pixel points with the largest difference value as the edge point. And storing the edge points of each row into a left edge point set of the feeding frame of each sub-area to obtain a left edge point set A of the feeding frame of each sub-area.
Step S.6: and traversing 3 pixel points before and after the average value of the lateral coordinates of the right edge point of the feeding frame from large to small in each row of the subarea, comparing gray value difference values of two adjacent pixel points before and after, selecting the pixel point with the largest difference value and the larger coordinate in the two adjacent pixel points as an edge point, and storing the edge point of each row into the right edge point set of the feeding frame of each subarea to obtain the right edge point set B of the feeding frame of each subarea.
In the above, the projection processing refers to scanning perpendicular to the search direction (i.e., the lateral direction) of edge extraction.
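The projection-and-difference localization of steps S.1 to S.4 might be sketched as below on a synthetic sub-region. The peak threshold value and the bright-frame/dark-background gray levels are assumptions for illustration.

```python
import numpy as np

def locate_edges(subregion, peak_threshold=20.0):
    """Return the (left, right) column indices where the density jumps occur."""
    profile = subregion.mean(axis=0)    # S.1: average density per projection line
    diff = np.abs(np.diff(profile))     # S.2: differential waveform
    diff[diff < peak_threshold] = 0.0   # S.3: filter out small peaks
    peaks = np.flatnonzero(diff)
    if peaks.size == 0:
        return None
    # S.4: first surviving peak from the left and first from the right.
    return peaks[0], peaks[-1]

# Synthetic sub-region: bright frame (gray 200) on a dark background (gray 10);
# columns 3..8 inclusive are the frame, so the jumps sit at columns 2 and 8.
img = np.full((5, 12), 10.0)
img[:, 3:9] = 200.0
left, right = locate_edges(img)
```

The returned projection waveform points give the average lateral positions around which steps S.5 and S.6 then search pixel by pixel for the exact edge in each row.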
In the step 2.5, the principle of adopting a Hough circle finding algorithm to the image of the detection area of the feeding frame is as follows:
for a fixed point (x0, y0), all circles passing through it are defined collectively as:
a = x0 - r·cos θ, b = y0 - r·sin θ,
where (a, b) is the circle center coordinate and r is the radius of the circle; θ is the inclination angle of the fixed point (x0, y0) with respect to the center, so each group (a, b, r) represents one circle passing through the point (x0, y0).
For the fixed point (x0, y0), drawing all circles passing through it in the three-dimensional rectangular coordinate system (a, b, r) yields a three-dimensional curve;
if the curves obtained by this operation for two pixel points intersect in the (a, b, r) space, the two pixel points are considered to lie on the same circle;
the more curves intersect at one point, the more points the circle represented by the intersection is considered to be, the threshold number is set, and when the curves not smaller than the threshold number intersect at one point, the target circle is determined.
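A simplified sketch of the voting principle above, for a single fixed radius r (the full algorithm also sweeps r): every edge pixel votes for the candidate centers (a, b) = (x0 - r·cos θ, y0 - r·sin θ), and an accumulator cell whose vote count reaches the threshold is accepted as a target circle. The discretization choices (integer-rounded centers, one-degree angle steps) are assumptions of this sketch.

```python
import math
from collections import defaultdict

def hough_circle(edge_points, radius, vote_threshold, angle_steps=360):
    """Accumulate center votes for a fixed radius and return centers over threshold."""
    votes = defaultdict(int)
    for x0, y0 in edge_points:
        for k in range(angle_steps):
            theta = 2 * math.pi * k / angle_steps
            # Candidate center of a circle of the given radius through (x0, y0).
            a = round(x0 - radius * math.cos(theta))
            b = round(y0 - radius * math.sin(theta))
            votes[(a, b)] += 1
    return [c for c, v in votes.items() if v >= vote_threshold]

# Sixteen points on a circle of radius 10 centered at (50, 50):
pts = [(50 + round(10 * math.cos(t * math.pi / 8)),
        50 + round(10 * math.sin(t * math.pi / 8))) for t in range(16)]
centres = hough_circle(pts, radius=10, vote_threshold=16)
```

All sixteen edge points vote around the true center, so the accumulator cell (50, 50) crosses the threshold and is reported as a target circle.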
According to the input length parameter of the bellows to be detected, the distance between the two groups of feeding clamping jaw mechanisms 34 is adjusted by the following method:
setting the length of the bellows to be detected as L, and the distance between the clamping position of a feeding clamping jaw mechanism 34 along the length direction of the bellows and the nearer end face of the bellows as l, the distance between the two groups of feeding clamping jaw mechanisms 34 is L0 = L - 2·l; taking the center between the two groups of feeding clamping jaw mechanisms 34 as the starting point, the moving distance of each single feeding clamping jaw mechanism 34 is L0/2.
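A small numeric check of the spacing formula above, with hypothetical values L = 300 and l = 20:

```python
def jaw_positions(bellows_length, end_offset):
    """Compute the jaw spacing L0 = L - 2*l and the per-jaw travel from the center."""
    spacing = bellows_length - 2 * end_offset  # L0 = L - 2*l
    travel = spacing / 2                       # each jaw moves L0/2 from the center
    return spacing, travel

spacing, travel = jaw_positions(bellows_length=300.0, end_offset=20.0)
print(spacing, travel)  # 260.0 130.0
```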
The visual imaging acquisition modules 61 acquire images of the bellows to be detected at the detection position in a continuous acquisition mode at a set frequency of the detection vision system, and perform appearance detection on the bellows in the images; the appearance detection of a single frame image comprises the following steps:
step 8.1: a frame of single-channel, 8-bit-depth gray-level image is collected; the gray level is divided into 256 levels in the range 0 to 255, where 0 represents black and 255 represents white;
step 8.2: extracting a detection area image according to an imaging area of the corrugated pipe in a view field, wherein the length of the detection area image is equal to the length of the imaging area of the corrugated pipe, the width of the detection area image is 1.5 times of the width of the imaging area of the corrugated pipe, and the center of the detection area image is the center of the imaging area of the corrugated pipe;
step 8.3: reducing the detection area image extracted in step 8.2 to 1/4 of its original size to obtain a reduced image, wherein the gray value of each pixel point of the reduced image is the average of the gray values of the corresponding four adjacent pixel points in the original image before reduction;
step 8.4: carrying out gray threshold binarization on the reduced image to obtain a gray image, comparing whether the gray value of a pixel point in the gray image is larger than a set value, if so, setting the gray value of the pixel point to 255, and if not, setting the gray value of the pixel point to 0;
The set value is 30.
Step 8.5: extracting the outer contour of the gray level image to obtain an outer contour set, wherein the outer contour comprises an integral imaging contour of the edge of the corrugated pipe and an interference contour with fewer points;
step 8.6: traversing the outer contour set, and extracting the integral imaging contour of the edge of the corrugated pipe, wherein the integral imaging contour of the edge of the corrugated pipe is the contour with the most points;
step 8.7: calculating the minimum circumscribed rectangle of the overall imaging outline of the edge of the corrugated pipe, and obtaining the parameters of the minimum circumscribed rectangle, wherein the parameters comprise the center, the rotation angle, the length and the width of the minimum circumscribed rectangle;
step 8.8: setting a distance value according to the minimum circumscribed rectangle parameter and the view field, and extracting a detection area image of the object to be detected from the detection area image in the step 8.2;
the area center of the image of the detection area of the object to be detected is the center of the smallest circumscribed rectangle, the length is 4 times of the length of the smallest circumscribed rectangle, the width is the set interval of the view field, the attitude angle is the rotation angle of the smallest circumscribed rectangle, and the set interval value of the view field is determined according to the pixel interval corresponding to the radial 1/4 circular ring surface imaging of the corrugated pipe.
Step 8.9: carrying out median filtering on the image of the detection area of the object to be detected to obtain a median filtering image;
the image noise interference signals of the detection area of the object to be detected are filtered, and meanwhile the imaging edge of the corrugated pipe can be protected from being blurred.
Step 8.10: carrying out gray threshold binarization on the median filtering image to obtain a gray binarization image, comparing whether the gray value of a pixel point in the gray binarization image is larger than a set value, if so, setting the gray value of the pixel point to 255, and if not, setting the gray value of the pixel point to 0;
step 8.11: performing morphological closing operation on the gray level binarized image, filtering out black areas with the areas of the connected areas smaller than the area set value, and obtaining a new area image;
the area setting value is set as the area of 4 pixel points;
step 8.12: extracting the outline of the new area image to obtain an outline set, wherein the outline set comprises an outer outline set and an inner outline set contained in a closed outer outline;
step 8.13: traversing the contour set in the step 8.12, extracting the outer contour and the inner contour with the most points, and recording the points of the outer contour and the inner contour, wherein the outer contour and the inner contour are imaging contours of a detection area of an object to be detected;
step 8.14: judging whether the number of outer contour points obtained in step 8.13 is larger than the set number of outer contour points; if so, judging that a black-gray defect exists and is adhered to the boundary; if not, executing step 8.15;
step 8.15: judging whether the number of the inner contour points obtained in the step 8.13 is larger than the number of the set inner contour points, if so, judging that black and gray defects exist; if not, judging that the black ash defect does not exist.
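Steps 8.3 and 8.4 (downsampling and binarization) can be sketched as follows; the full pipeline additionally needs the contour extraction of steps 8.5 onward, which is omitted here. The tiny 2x4 test image is an assumption for illustration.

```python
import numpy as np

def downsample_2x2(gray):
    """Step 8.3: 2x2 average downsampling to 1/4 of the original pixel count."""
    h, w = gray.shape
    g = gray[:h - h % 2, :w - w % 2].astype(float)
    # Each output pixel is the mean of the corresponding 2x2 block.
    return (g[0::2, 0::2] + g[0::2, 1::2] + g[1::2, 0::2] + g[1::2, 1::2]) / 4

def binarize(gray, set_value=30):
    """Step 8.4: gray threshold binarization at the set value 30."""
    return np.where(gray > set_value, 255, 0).astype(np.uint8)

img = np.array([[10, 20, 200, 200],
                [30, 40, 200, 200]], dtype=np.uint8)
small = downsample_2x2(img)   # left block averages to 25, right block to 200
binary = binarize(small)      # 25 -> 0 (background), 200 -> 255 (foreground)
```

The dark left block (average 25) falls below the set value and becomes 0, while the bright right block becomes 255, which is the input expected by the contour steps.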
The application also comprises an image appearance detection device, which comprises a detection vision system 6, wherein the detection vision system 6 comprises four groups of vision imaging acquisition modules 61, the vision imaging acquisition modules 61 comprise an area array camera, a fixed focus lens and a dome light source, and the positions of the four groups of vision imaging acquisition modules 61 in spatial distribution are as follows: on the XZ plane (namely, the plane in which the circumferential direction of the corrugated pipe is located), four groups of visual imaging acquisition modules 61 are uniformly distributed in a ring shape by taking the central axis of the corrugated pipe as the center, the included angle between every two adjacent visual imaging acquisition modules 61 is 90 degrees, the detection directions of the four groups of visual imaging acquisition modules 61 face the corrugated pipe, and on the XY or YZ plane (namely, the plane in which the axial direction of the corrugated pipe is located), the four groups of visual imaging acquisition modules 61 are mutually spaced at a certain distance along the axial direction of the corrugated pipe, so that the interference of lighting among the four groups of visual imaging acquisition modules 61 is avoided.
The four groups of visual imaging acquisition modules 61 acquire images of different circumferential and axial positions of the bellows to be detected in a continuous acquisition mode at a set frequency and perform appearance detection; the appearance detection of a single frame image comprises the following steps:
step 8.1: a frame of single-channel, 8-bit-depth gray-level image is collected; the gray level is divided into 256 levels in the range 0 to 255, where 0 represents black and 255 represents white;
Step 8.2: extracting a detection area image according to an imaging area of the corrugated pipe in a view field, wherein the length of the detection area image is equal to the length of the imaging area of the corrugated pipe, the width of the detection area image is 1.5 times of the width of the imaging area of the corrugated pipe, and the center of the detection area image is the center of the imaging area of the corrugated pipe;
step 8.3: reducing the detection area image extracted in step 8.2 to 1/4 of its original size to obtain a reduced image, wherein the gray value of each pixel point of the reduced image is the average of the gray values of the corresponding four adjacent pixel points in the original image before reduction;
step 8.4: carrying out gray threshold binarization on the reduced image to obtain a gray image, comparing whether the gray value of a pixel point in the gray image is larger than a set value, if so, setting the gray value of the pixel point to 255, and if not, setting the gray value of the pixel point to 0;
the set value is 30.
Step 8.5: extracting the outer contour of the gray level image to obtain an outer contour set, wherein the outer contour comprises an integral imaging contour of the edge of the corrugated pipe and an interference contour with fewer points;
step 8.6: traversing the outer contour set, and extracting the integral imaging contour of the edge of the corrugated pipe, wherein the integral imaging contour of the edge of the corrugated pipe is the contour with the most points;
step 8.7: calculating the minimum circumscribed rectangle of the overall imaging outline of the edge of the corrugated pipe, and obtaining the parameters of the minimum circumscribed rectangle, wherein the parameters comprise the center, the rotation angle, the length and the width of the minimum circumscribed rectangle;
Step 8.8: setting a distance value according to the minimum circumscribed rectangle parameter and the view field, and extracting a detection area image of the object to be detected from the detection area image in the step 8.2;
the area center of the image of the detection area of the object to be detected is the center of the smallest circumscribed rectangle, the length is 4 times of the length of the smallest circumscribed rectangle, the width is the set interval of the view field, the attitude angle is the rotation angle of the smallest circumscribed rectangle, and the set interval value of the view field is determined according to the pixel interval corresponding to the radial 1/4 circular ring surface imaging of the corrugated pipe.
Step 8.9: carrying out median filtering on the image of the detection area of the object to be detected to obtain a median filtering image;
the image noise interference signals of the detection area of the object to be detected are filtered, and meanwhile the imaging edge of the corrugated pipe can be protected from being blurred.
Step 8.10: carrying out gray threshold binarization on the median filtering image to obtain a gray binarization image, comparing whether the gray value of a pixel point in the gray binarization image is larger than a set value, if so, setting the gray value of the pixel point to 255, and if not, setting the gray value of the pixel point to 0;
step 8.11: performing morphological closing operation on the gray level binarized image, filtering out black areas with the areas of the connected areas smaller than the area set value, and obtaining a new area image;
The area setting value is set as the area of 4 pixel points;
step 8.12: extracting the outline of the new area image to obtain an outline set, wherein the outline set comprises an outer outline set and an inner outline set contained in a closed outer outline;
step 8.13: traversing the contour set in the step 8.12, extracting the outer contour and the inner contour with the most points, and recording the points of the outer contour and the inner contour, wherein the outer contour and the inner contour are imaging contours of a detection area of an object to be detected;
step 8.14: judging whether the number of outer contour points obtained in step 8.13 is larger than the set number of outer contour points; if so, judging that a black-gray defect exists and is adhered to the boundary; if not, executing step 8.15;
step 8.15: judging whether the number of the inner contour points obtained in the step 8.13 is larger than the number of the set inner contour points, if so, judging that black and gray defects exist; if not, judging that the black ash defect does not exist.
It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Claims (10)
1. An appearance defect detection device of a metal corrugated pipe, characterized in that: the device comprises an upper computer, a side vision system, a feeding frame, a feeding mechanism, a detection moving module, a moving wheel set, a detection vision system, a discharging mechanism and a discharging frame, wherein the upper computer is respectively electrically connected with the side vision system, the feeding mechanism, the detection moving module, the detection vision system and the discharging mechanism; the side vision system collects images of the bellows to be detected and detects the positioning coordinates of the bellows to be detected; the feeding mechanism moves to the feeding level according to the positioning coordinates to clamp the bellows to be detected; the detection moving module clamps the bellows to be detected from the feeding mechanism at the material outlet level and moves the bellows to the detection position; the detection vision system collects images of the bellows to be detected at the detection position and carries out appearance detection on the bellows in the images; and the discharging mechanism clamps the detected bellows at the material outlet level and sorts the bellows according to the appearance detection result.
2. The device for detecting an appearance defect of a metal bellows according to claim 1, wherein: the shooting surface of the side vision system is parallel to the detection surface of the feeding frame, the side vision system 1 collects images of the detection surface of the feeding frame 2, the side vision system 1 performs visual positioning detection on the images, and the visual positioning detection comprises detection of whether the feeding frame is inclined or not, detection of the positioning coordinates of the bellows image to be detected and conversion of the bellows image coordinates into physical coordinates of the feeding mechanism.
3. The device for detecting an appearance defect of a metal bellows according to claim 1, wherein: the feeding mechanism comprises a feeding movement module and a feeding clamping jaw mechanism, wherein the feeding movement module is connected with the feeding clamping jaw mechanism, and the feeding movement module controls the feeding clamping jaw mechanism to move in three directions in space.
4. A metal bellows appearance defect detecting device according to claim 3, wherein: the feeding clamping jaw mechanism comprises a feeding clamping jaw and a feeding photoelectric switch, wherein the feeding photoelectric switch is used for judging whether the feeding clamping jaw moves to a feeding level, and the feeding clamping jaw mechanism clamps a bellows to be detected through the feeding clamping jaw.
5. The device for detecting an appearance defect of a metal bellows according to claim 1, wherein: the visual detection system comprises four groups of visual imaging acquisition modules, wherein each visual imaging acquisition module comprises an area array camera, a fixed focus lens and a dome light source.
6. The device for detecting an appearance defect of a metal bellows according to claim 5, wherein:
the four groups of vision imaging acquisition modules are in the position of spatial distribution: on the plane of the corrugated pipe in the circumferential direction, four groups of vision imaging acquisition modules are distributed annularly by taking the central axis of the corrugated pipe as the center,
The included angle between the adjacent vision imaging acquisition modules is 90 degrees, the detection directions of the four groups of vision imaging acquisition modules face the corrugated pipe, and the four groups of vision imaging acquisition modules are spaced apart from each other along the axial direction of the corrugated pipe by a set distance on the plane where the axial direction of the corrugated pipe is located.
7. The device for detecting an appearance defect of a metal bellows according to claim 1, wherein:
the moving wheel sets are positioned between the detection moving module and the detection vision system; the moving wheel sets comprise two groups which are distributed symmetrically up and down, and the central line between the two groups of moving wheel sets is collinear or approximately collinear with the central line of the corrugated pipe on the detection clamping assembly.
8. The apparatus for detecting an appearance defect of a metal bellows according to claim 7, wherein:
the moving wheel group comprises a roller, a cylinder for controlling the roller to move up and down and a roller position detection sensor, wherein the roller position detection sensor detects whether the roller reaches an initial position, and when the roller is at the initial position, the distance between the two groups of rollers is larger than the diameter of the corrugated pipe.
9. The device for detecting an appearance defect of a metal bellows according to claim 2, wherein:
the step of converting the bellows image coordinates into physical coordinates of the feeding mechanism by the side vision system is as follows:
Step 1.1: placing an empty feeding frame in a detection area of a side vision system;
the detection surface of the feeding frame is parallel to the shooting surface of the lateral vision system; the object distance of the side vision system reaches a set value, and focusing is clear; the shooting surface of the side vision system is parallel to the XZ plane of the feeding clamping claw;
step 1.2: the upper computer controls the feeding mechanism to clamp the pipe and move it to 9 points in the feeding frame, and records the actual coordinates of the 9 points; the side vision system acquires 9 images, one at each of the 9 points, and the distribution of the superimposed characteristic patterns in the 9 images is consistent with the distribution of the 9 points through which the pipe moves in the feeding frame;
step 1.3: respectively searching circles in the detection surface areas of the 9 images by using a Hough circle searching algorithm to obtain circle center pixel coordinates, wherein the circle center pixel coordinates are image pixel coordinates of point positions in the 9 images, and the image pixel coordinates of the point positions correspond to the actual coordinates one by one;
step 1.4: obtaining a conversion matrix between the image pixel coordinates and the actual coordinates of the same point position according to the two coordinates;
step 1.5: calling the conversion matrix obtained in step 1.4 to convert the pixel coordinates of the 9 points into the physical coordinates of the 9 points in the feeding mechanism;
step 1.6: comparing the physical coordinates of the 9 points obtained by conversion in step 1.5 with the actual coordinates of the 9 points to obtain statistical data; if the statistical data are within the threshold range, the calculated conversion matrix is judged to be valid, the conversion matrix is stored, and the 9-point calibration process ends; otherwise, the 9-point calibration conversion is judged to be abnormal, and calibration is repeated from step 1.2;
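The validation in step 1.6 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of the maximum residual as the "statistical data", and the `threshold` value are all assumptions, since the claim only states that the statistics must lie within a threshold range.

```python
import numpy as np

def validate_calibration(conv_matrix, pixel_pts, actual_pts, threshold=0.5):
    """Step 1.6 sketch: back-convert the 9 pixel coordinates with the fitted
    2x3 conversion matrix and compare against the recorded actual coordinates.
    Returns True when every residual is within `threshold` (in feeding-mechanism
    units, e.g. mm) -- the matrix is then stored as valid; otherwise the
    calibration is repeated from step 1.2.
    """
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    actual_pts = np.asarray(actual_pts, dtype=float)
    # Append a column of ones so the 2x3 affine matrix can be applied directly.
    homog = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])
    converted = homog @ np.asarray(conv_matrix, dtype=float).T  # (9, 2) physical
    residuals = np.linalg.norm(converted - actual_pts, axis=1)
    return bool(residuals.max() <= threshold)
```

A matrix fitted from clean data passes; shifting the recorded coordinates by more than the threshold makes the check fail and triggers re-calibration.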
The method for obtaining the conversion matrix in the step 1.4 comprises the following steps:
the point image pixel coordinates are set as (xn, yn), the point actual coordinates are set as (Xn, Yn, Zn), and the conversion matrix is set as

    [ a  b  c ]
    [ d  e  f ]

wherein n is the point sequence number, n = 1, 2, ..., 9; the point image pixel coordinates and the point actual coordinates are known parameters, and the point image pixel coordinates, the point actual coordinates and the conversion matrix satisfy formula (1):

    [ Xn ]   [ a  b  c ]   [ xn ]
    [ Yn ] = [ d  e  f ] * [ yn ]   (1)
                           [ 1  ]

according to formula (1), formulas (2) and (3) can be obtained:

a*xn + b*yn + c = Xn; (2)
d*xn + e*yn + f = Yn; (3)
substituting the image pixel coordinates of the 9 points and the actual coordinates of the 9 points into formula (2) respectively, the following 9 groups of formulas are obtained:

a*x1 + b*y1 + c = X1    a*x4 + b*y4 + c = X4    a*x7 + b*y7 + c = X7
a*x2 + b*y2 + c = X2    a*x5 + b*y5 + c = X5    a*x8 + b*y8 + c = X8
a*x3 + b*y3 + c = X3    a*x6 + b*y6 + c = X6    a*x9 + b*y9 + c = X9
the above formulas are combined by the least square method to obtain formula (4):

S(a, b, c) = [(a*x1 + b*y1 + c) - X1]^2 + [(a*x2 + b*y2 + c) - X2]^2 + ... + [(a*x9 + b*y9 + c) - X9]^2; (4)
taking the partial derivatives of S(a, b, c) with respect to a, b and c and setting each first derivative to 0 gives formula (5):

∂S/∂a = 2 * Σn xn * [(a*xn + b*yn + c) - Xn] = 0
∂S/∂b = 2 * Σn yn * [(a*xn + b*yn + c) - Xn] = 0   (5)
∂S/∂c = 2 * Σn [(a*xn + b*yn + c) - Xn] = 0

according to formula (5), the parameters a, b and c are obtained; d, e and f are obtained by the same process from formula (3), and the conversion matrix

    [ a  b  c ]
    [ d  e  f ]

is thus obtained.
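The least-squares solve described by formulas (2)–(5) can be sketched in a few lines of NumPy. This is an illustrative implementation under stated assumptions (function and variable names are mine, and only X and Y are fitted, matching formulas (2) and (3)); solving the overdetermined system `A @ [a,b,c] = X` by least squares is equivalent to setting the partial derivatives of S(a, b, c) to zero.

```python
import numpy as np

def fit_conversion_matrix(pixel_pts, physical_pts):
    """Fit the 2x3 conversion matrix [[a, b, c], [d, e, f]] by least squares.

    pixel_pts:    (9, 2) array of point image pixel coordinates (xn, yn)
    physical_pts: (9, 2) array of point actual coordinates (Xn, Yn)
    """
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    physical_pts = np.asarray(physical_pts, dtype=float)
    # Design matrix with rows [xn, yn, 1], per formulas (2) and (3).
    A = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])
    # Row 1 (a, b, c) fits the X coordinates; row 2 (d, e, f) fits Y.
    abc, *_ = np.linalg.lstsq(A, physical_pts[:, 0], rcond=None)
    def_, *_ = np.linalg.lstsq(A, physical_pts[:, 1], rcond=None)
    return np.vstack([abc, def_])
```

With exact 9-point data the fitted matrix reproduces the true affine mapping; with noisy data it minimizes the summed squared residual S of formula (4).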
10. The device for detecting an appearance defect of a metal bellows according to claim 2, wherein: the side vision system performs visual positioning detection on the image; the visual positioning detection includes detecting whether the feeding frame is inclined, and the steps for detecting the positioning coordinates of the bellows image to be detected are as follows:
step 2.1: a frame of single-channel 8-bit-depth gray-scale image is collected; the gray scale is divided into 256 levels ranging from 0 to 255, where 0 represents black and 255 represents white;
Step 2.2: obtaining edge points on two sides of a feeding frame by an edge searching method to obtain the height, width and center coordinate parameters of a detection surface of the feeding frame;
step 2.3: the obtained width parameter is compared with the width parameter of the feeding frame input in step 2; if the ratio is in the range of 0.98 to 1.02, the feeding frame is judged to be accurately placed and step 2.4 is executed; if the ratio is not in the range of 0.98 to 1.02, the feeding frame is judged to be placed obliquely, a prompt to re-place the feeding frame is given, and the process ends;
step 2.4: extracting a detection area image of the feeding frame according to the obtained height, width and center coordinate parameters;
step 2.5: extracting a target circle in the feeding frame by adopting a Hough circle finding algorithm on the image of the detection area of the feeding frame, and obtaining the center coordinates and the radius of the target circle;
the radius of the target circle is in the range of 0.9 to 1.1 times the radius corresponding to the outer diameter dimension input in step 2; a circle is taken as a target circle only if the number of intersecting curves at its center point in the Hough circle-finding algorithm is not smaller than a threshold value;
step 2.6: converting the circle center coordinates into physical coordinates of the feeding mechanism 3;
step 2.7: all the physical coordinates obtained in step 2.6 are sorted into the clamping order from top to bottom and from left to right and are sent to the feeding mechanism 3 in that order;
step 2.8: the process ends.
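Two pieces of the claim-10 flow — the width-ratio tilt check of step 2.3 and the top-to-bottom, left-to-right clamping order of step 2.7 — can be sketched as below. This is a minimal illustration, not the patented implementation: the function names and the `row_tol` row-grouping tolerance are assumptions I introduce; the 0.98–1.02 ratio band comes from the claim itself.

```python
def frame_placement_ok(measured_width, entered_width):
    """Step 2.3 sketch: the feeding frame counts as accurately placed only
    when the measured/entered width ratio lies in [0.98, 1.02]; otherwise it
    is judged to be placed obliquely."""
    ratio = measured_width / entered_width
    return 0.98 <= ratio <= 1.02

def clamping_order(centers, row_tol=10.0):
    """Step 2.7 sketch: sort circle centers (x, y) top to bottom, then left
    to right. Image y grows downward, so smaller y is higher. Centers whose
    vertical offsets fall within `row_tol` pixels (an assumed tolerance) are
    grouped into one row before sorting each row by x."""
    rows = []
    for c in sorted(centers, key=lambda p: p[1]):      # scan top to bottom
        if rows and abs(c[1] - rows[-1][0][1]) <= row_tol:
            rows[-1].append(c)                          # same row
        else:
            rows.append([c])                            # start a new row
    return [c for row in rows for c in sorted(row, key=lambda p: p[0])]
```

For example, centers (50, 12) and (10, 10) land in one row (12 − 10 ≤ row_tol) and are emitted left to right, before the lower center (30, 55).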
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310939258.7A CN117019671A (en) | 2023-07-28 | 2023-07-28 | Appearance defect detection device of metal corrugated pipe |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117019671A true CN117019671A (en) | 2023-11-10 |
Family
ID=88629064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310939258.7A Pending CN117019671A (en) | 2023-07-28 | 2023-07-28 | Appearance defect detection device of metal corrugated pipe |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117019671A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN117816565A (en) * | 2024-03-04 | 2024-04-05 | 山东豪迈机械制造有限公司 | Heat exchange tube detects and letter sorting equipment |
CN117816565B (en) * | 2024-03-04 | 2024-05-28 | 山东豪迈机械制造有限公司 | Heat exchange tube detects and letter sorting equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106903426B (en) | A kind of laser welding localization method based on machine vision | |
CN117019671A (en) | Appearance defect detection device of metal corrugated pipe | |
CN101576509B (en) | Method and device for automatically detecting surface defects of spherules based on machine vision | |
CN109821763B (en) | Fruit sorting system based on machine vision and image identification method thereof | |
CN107402216B (en) | Film-coated product detection system and method | |
CN203124215U (en) | Frame sealant coating machine | |
CN102621156B (en) | Image-processing-based automatic micro part sorting system | |
CN107443428A (en) | A kind of band visual identity flapping articulation manipulator and visual identity method | |
CN112598701B (en) | Automatic tracking and monitoring video acquisition system and method for farm targets | |
CN104483320A (en) | Digitized defect detection device and detection method of industrial denitration catalyst | |
CN206223683U (en) | A kind of tabular workpiece with hole surface defect detection apparatus | |
CN107966102A (en) | A kind of plate production six-face detection device | |
CN114713522A (en) | Stamping part detection system | |
CN109085179A (en) | A kind of board surface flaw detection device and detection method | |
CN109785319A (en) | A kind of coding character machining system, device and detection method based on image procossing | |
CN106353336A (en) | Lens coating automatic detection system | |
CN113460716A (en) | Remove brick anchor clamps and intelligent sign indicating number brick robot based on visual identification | |
CN111330874A (en) | Detection device and detection method for pollution or impurity defects of bottom area of medicine bottle | |
CN106097323B (en) | Engine cylinder block casting positioning method based on machine vision | |
CN116748163A (en) | Appearance defect detection method for metal corrugated pipe | |
CN103383730A (en) | Automatic BNC terminal detecting machine and work method thereof | |
CN111595266A (en) | Spatial complex trend catheter visual identification method | |
CN214201214U (en) | Seamless steel pipe defect detection system based on machine vision | |
CN213580627U (en) | Semi-automatic detection device for rubber sealing ring | |
CN206546340U (en) | A kind of piston rod surface defective vision detection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||