CN116972813A - Fuse body positioning pin grabbing and positioning detection system and method based on machine vision - Google Patents


Info

Publication number
CN116972813A
CN116972813A
Authority
CN
China
Prior art keywords
point cloud
workpiece
machine vision
clamping jaw
grabbing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310914291.4A
Other languages
Chinese (zh)
Inventor
马国庆 (Ma Guoqing)
曹国华 (Cao Guohua)
刘福迪 (Liu Fudi)
贾冰 (Jia Bing)
唐晨 (Tang Chen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202310914291.4A priority Critical patent/CN116972813A/en
Publication of CN116972813A publication Critical patent/CN116972813A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00 Gripping heads and other end effectors
    • B25J15/08 Gripping heads and other end effectors having finger members
    • B25J15/10 Gripping heads and other end effectors having finger members with three or more finger members
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a machine-vision-based system and method for grabbing and positioning a fuse body positioning pin and detecting its position. The system comprises a robot module, a machine vision module and a workpiece platform. The robot module comprises a six-axis industrial robot, a clamping jaw mechanism, a force sensor and a PLC control box. The machine vision module comprises a structured light camera and an industrial personal computer in which machine vision software is installed. The machine vision software converts the pixel coordinates of the workpiece into global coordinates, obtains the position coordinates of the workpiece through analysis and computation for the different poses of the positioning pin, and sends a signal to the robot module to guide the six-axis industrial robot to grab the workpiece at the photographing detection position and move it to the detection area.

Description

Fuse body positioning pin grabbing and positioning detection system and method based on machine vision
Technical Field
The invention belongs to the technical field of precision assembly in the military industry, based on non-contact depth measurement of a hole cavity, and particularly relates to a machine-vision-based fuse body positioning pin grabbing and positioning detection system and method.
Background
Ensuring the positioning precision of the fuse body positioning pin is key to guaranteeing workpiece accuracy. According to a search of the prior art, existing positioning pin detection mostly uses contact measurement, and grabbing and positioning of the pin are still done manually. Contact detection places requirements on the hardness of the workpiece material, leaves dead angles when probing the interior of the workpiece, and its measurement error grows with length; curved-surface measurement additionally requires probe error compensation, so the process is slow.
Replacing manual operation with automation protects workers' personal safety and improves production efficiency.
Non-contact depth measurement of the cavity is an important means of quality control and management for deep-hole parts, and rapid detection and high-precision positioning of the fuse body positioning pin are the technical guarantee for enforcing quality standards. Because non-contact measurement of the fuse body positioning pin is limited in precision, achieving reliable non-contact measurement is the current focus of research on grabbing and positioning the pin.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a machine-vision-based system and method for grabbing and positioning the fuse body positioning pin, which use non-contact depth measurement of the cavity to detect the pin's position and guarantee a positioning detection precision within ±0.005 mm.
The aim of the invention is achieved by the following technical solution:
the invention provides a fuse body positioning pin grabbing, positioning and detecting system based on machine vision, which comprises a robot module, a machine vision module and a workpiece platform; the robot module and the vision module thereof are placed on the workpiece platform;
the robot module comprises a six-axis industrial robot, a clamping jaw mechanism, a force sensor and a PLC control box; the clamping jaw mechanism is arranged on a rotating manipulator at the tail end of the six-axis industrial robot; the force sensor is arranged on the clamping jaw of the clamping jaw mechanism; the six-axis industrial robot and the clamping jaw mechanism are respectively connected with the PLC control box;
the machine vision module comprises a structured light camera and an industrial personal computer, and machine vision software is arranged in the industrial personal computer; the structure light camera is fixed on the six-axis industrial robot, and the structure light camera and the industrial personal computer are respectively connected with a PLC control box of the robot module; the machine vision software converts pixel coordinates of the workpiece into global coordinates, obtains the position coordinates of the workpiece to be detected after analysis and operation aiming at different poses of the locating pin, and sends signals to the robot module to guide the six-axis industrial robot to grab the workpiece with the detection position and move the workpiece to the detection area;
the workpiece platform is provided with a photographing detection position and a position to be detected, the photographing detection position is positioned in the middle of the working platform, and the position to be detected is positioned on one side of the working platform.
Further, the clamping jaw mechanism comprises a connecting flange plate, a cylinder, clamping jaws, a cylinder valve seat and a support frame; the connecting flange plate joins the end rotating manipulator of the six-axis industrial robot to the support frame; the cylinder is fixed on the support frame and pushes the clamping jaw open and closed; the cylinder valve seat is mounted on the support frame and carries the reversing valve of the cylinder, which controls the clamping jaw so that it grasps the positioning pin; the clamping jaw is mainly used for grabbing and placing the positioning pin.
Further, the structured light camera is a Mech-Mind PRO S camera equipped with a blue LED light source.
Further, the machine vision software is Mech-Mind machine vision software; workpieces in different poses are computed and analysed by the Mech-Mind software, pixel coordinates are converted into global coordinates to obtain the specific position coordinates of the workpiece, and the coordinate information is transmitted to the robot module.
Further, the force sensor is a PVDF piezoelectric film sensor mounted on the clamping jaw.
Further, the sensitive unit of the PVDF piezoelectric film sensor has three layers: the surface and bottom layers use silicone rubber as a contact protection layer and a substrate buffer layer; the middle layer is a PVDF piezoelectric film 200 μm thick.
Further, an anti-static positioning block made of steel plate is placed on the working platform to locate the workpiece during grabbing.
The invention also provides a machine vision-based fuse body positioning pin grabbing and positioning detection method, which is realized by the fuse body positioning pin grabbing and positioning detection system, and comprises the following steps of:
step one, calibrating a structured light camera;
step two, photographing and positioning: starting a six-axis industrial robot, and photographing by a structured light camera to determine the position of a workpiece;
detecting the grabbing force of the clamping jaw by using a force sensor;
step four, the robot controls the clamping jaw to grasp the workpiece according to the grasping force of the clamping jaw;
step five, after the workpiece is placed at the position to be detected, detecting the position precision of the positioning pin by a structured light camera:
s1, a structured light camera shoots and acquires an image, and a depth map of a locating pin, a camera color map, point cloud data and color point cloud are generated;
s2, preprocessing an original point cloud: preprocessing according to the depth map and the color map of the locator to generate a color point cloud and a point cloud after clustering of the two parts;
s3, converting one part of clustered point clouds into point clouds with normal directions, and converting the other part of clustered point clouds into common point clouds;
s4, calculating the pose and the size of the planar point cloud by the point cloud with the normal direction to generate the pose and the size of the point cloud; estimating a point cloud edge by a common point cloud by using a 3D method to generate the point cloud edge;
s5, processing the edge of the point cloud into a point cloud with a normal direction, and filtering out part of noise points in the point cloud through point filtration to generate the point cloud with the normal direction;
s6, setting the size and the confidence of the point cloud, and processing the point cloud with the normal direction to obtain the point cloud with the normal direction of the highest layer;
s7, performing point cloud downsampling processing on the point cloud with the normal direction to obtain the simplified point cloud with the normal direction;
s8, using the simplified normal point cloud to calculate the pose and the size of the plane point cloud to obtain the pose of the point cloud;
s9, calculating the position and the posture of the point cloud obtained in the step S8 and the position and the posture obtained by calculating Ping Miandian cloud position and size, calculating the normal distance of the two positions and further calculating the normal distance between the two positions, and obtaining the distance between two measuring planes;
s10, outputting the calculation result, and storing the calculation result into a set folder.
Further, in the first step, a plane calibration plate is selected as a calibration reference, a workpiece coordinate system is selected to describe pose information of the workpiece, external parameters dx, dy and dz of a structured light camera are calculated, an origin offset of the workpiece coordinate system relative to a six-axis industrial robot base coordinate system and rotation angles of all joints of the six-axis industrial robot are obtained, and a spatial relationship between the workpiece coordinate system and the six-axis robot base coordinate system is determined to complete calibration.
Further, in the second step, if no workpiece is present at the detection position, the six-axis industrial robot is controlled to move a workpiece to the detection position, and the structured light camera then photographs the workpiece to acquire its coordinate points.
The invention has the following advantages:
according to the invention, the robot module is combined with the machine vision system, and according to the characteristic that the length of the locating pin is not large, the pixel coordinates of the locating pin are converted into actual coordinates through the machine vision software of the Mei Kaman De machine vision system, so that the robot is controlled to grasp the locating pin and place the locating pin at a position to be detected for the next detection of the locating precision. On the one hand, the robot is used for grabbing the fuse body locating pin and combining the structural light camera for detection and identification, so that the traditional manual contact type grabbing and identification is replaced, the defects that the traditional contact identification is slow in speed, deep holes and small gaps cannot be measured and the like are overcome, the production efficiency is improved, and the development trend of future intellectualization is met. On the other hand, the intelligent, accurate, reliable and stable grabbing of the fuse body locating pin is guaranteed, and the surface accuracy of the locating pin is not damaged when the locating pin is grabbed.
Drawings
Fig. 1 is a schematic structural diagram of a fuse body positioning pin grabbing and positioning detection system based on machine vision according to embodiment 1 of the present invention;
FIG. 2 is a schematic view of a jaw mechanism according to an embodiment of the present invention;
in the figure:
1-six-axis industrial robot; 2-a clamping jaw mechanism; 3-PLC control box; 4-structured light cameras; 5-an industrial personal computer; 6-positioning blocks; 7-force sensor;
2-1-connecting flange plate; 2-2-reversing valve; 2-3-cylinder valve seat; 2-4-clamping jaw; 2-5-cylinder; 2-6-cylinder frame; 2-7-support frame.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the following further details of the positioning accuracy measurement method are described with reference to the examples and the accompanying drawings.
Example 1
As shown in fig. 1 and fig. 2, the present embodiment is a machine vision-based fuse body positioning pin grabbing and positioning detection system, which includes a robot module, a machine vision module and a workpiece platform; the robot module and the vision module thereof are placed on the workpiece platform;
the robot module comprises a six-axis industrial robot 1, a clamping jaw mechanism 2, a force sensor 7 and a PLC control box 3; the clamping jaw mechanism 2 is arranged on a rotating manipulator at the tail end of the six-axis industrial robot 1; the force sensor 7 is arranged on the clamping jaw 2-4 of the clamping jaw mechanism 2; the six-axis industrial robot 1 and the clamping jaw mechanism 2 are respectively connected with the PLC control box 3;
the machine vision module comprises a structured light camera 4 and an industrial personal computer 5, and machine vision software is arranged in the industrial personal computer 5; the structure light camera 4 is fixed on the six-axis industrial robot 1 through a bracket, and the structure light camera 4 and the industrial personal computer 5 are respectively connected with the PLC control box 3 of the robot module; the machine vision software converts pixel coordinates of the workpiece into global coordinates, obtains the position coordinates of the workpiece to be detected according to different poses of the locating pins through analysis and operation, and sends signals to the robot module to guide the six-axis industrial robot 1 to grab the workpiece with the detection position and move the workpiece to the detection area;
the workpiece platform is provided with a photographing detection position and a position to be detected, the photographing detection position is positioned in the middle of the working platform, and the position to be detected is positioned on one side of the working platform.
Further, as shown in fig. 2, the clamping jaw mechanism 2 comprises a connecting flange plate 2-1, a cylinder 2-5, clamping jaws 2-4, a cylinder valve seat 2-3 and a support frame 2-7. The upper part of the connecting flange plate 2-1 is connected with the end rotating manipulator of the six-axis industrial robot 1 and its lower part with the support frame 2-7; the plate is made of 45# steel. The cylinder 2-5 is fixed on the support frame 2-7 through the cylinder frame 2-6 and pushes the clamping jaws 2-4 open and closed. The cylinder valve seat 2-3 is mounted on the support frame 2-7 and carries the reversing valve 2-2 of the cylinder, which controls the clamping jaw so that it grasps the positioning pin. The clamping jaw, used mainly for grabbing and placing the positioning pin, is an adjustable three-finger jaw, and each of the three fingers has a slideway for clamping and spreading. Only the cylinder can move the fingers; when the cylinder is not actuated the fingers cannot move by themselves, so the jaw holds the positioning pin securely. When the robot reaches the designated position, the cylinder moves the three fingers outward along their tracks to place the positioning pin at the designated position.
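The grasp/release behaviour described above can be sketched in Python; the class and method names are illustrative assumptions, not part of the patent, and a real system would drive the reversing valve through the PLC's outputs rather than software objects:

```python
# Illustrative sketch of the jaw actuation logic (names are assumptions).

class ReversingValve:
    """Directional valve on the cylinder valve seat: routes air to open or close the jaw."""
    def __init__(self):
        self.state = "closed"  # air routed so the fingers stay clamped

    def route(self, state: str) -> None:
        assert state in ("open", "closed")
        self.state = state


class ThreeFingerJaw:
    """Adjustable three-finger jaw; each finger slides on its own track.
    The fingers move only while the cylinder is actuated."""
    def __init__(self, valve: ReversingValve):
        self.valve = valve

    def grasp(self) -> str:
        self.valve.route("closed")  # cylinder pushes the fingers inward along the slideways
        return "pin clamped"

    def release(self) -> str:
        self.valve.route("open")    # cylinder pulls the fingers outward to place the pin
        return "pin released"
```

The key design point mirrored here is that the jaw has no drive of its own: holding force persists while the valve keeps the cylinder pressurized, so an unpowered jaw cannot drop the pin.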
Further, the structured light camera 4 is a Mech-Mind PRO S camera equipped with a blue LED light source. The camera can capture an image after only a few parameters are set, such as the number of exposures and the region of interest (ROI); a built-in algorithm then guarantees overall picture quality.
A structured light camera generally uses multiple stripe gratings: the grating projection module projects the stripe patterns onto the surface of the measured object in time sequence, a binocular camera pair photographs the patterns on the surface, and decoding plus binocular disparity matching based on the pre-defined coding rule yields a high-precision 3D point cloud. Because a structured light 3D camera encodes with multiple gratings, the coding precision can in principle reach one pixel or even sub-pixel level, and in point cloud quality and precision it has become the industry mainstream compared with cameras based on other principles. The selected structured light camera is an industrial-grade Mech-Mind 3D camera with micron-level precision and a proprietary fusion imaging algorithm; it images highly reflective workpieces well and is widely applicable to detection and measurement of position, clearance, surface offset and the like in automobile part production and assembly. The camera resolution reaches 1920 × 1200, and the light source is a blue LED.
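The binocular triangulation behind this decoding step can be illustrated with the standard stereo relation Z = f·B/d (depth from focal length, baseline and disparity); the function and the numbers below are illustrative, not specifications of the camera:

```python
# Minimal sketch of stereo triangulation: once the projected stripe codes are
# decoded and matched between the two cameras, each matched pixel pair gives a
# disparity d (in pixels), and depth follows from Z = f * B / d.

def depth_from_disparity(f_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (mm) from focal length (px), stereo baseline (mm) and disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return f_px * baseline_mm / disparity_px
```

Sub-pixel disparity estimation is what pushes the depth resolution toward the micron level claimed for industrial structured-light cameras: a smaller disparity quantization step directly shrinks the depth quantization step.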
Further, the machine vision software is Mei Kaman de machine vision software.
The coordinates of the workpiece to be detected are converted using Mech-Mind's own software and algorithms. The pixel coordinates of the workpiece are converted into global coordinates for convenient calculation and observation: workpieces in different poses are computed and analysed by the Mech-Mind software, pixel coordinates are converted into global coordinates to obtain the specific position coordinates of the workpiece (positioning pin), the coordinate information is transmitted to the robot, the robot grabs the positioning pin, and the positioning precision of the workpiece is then calculated.
Furthermore, the force sensor is a PVDF piezoelectric film sensor mounted on the clamping jaw; it is used to regulate the air inflow of the cylinder and thereby the clamping force with which the jaw grips the positioning pin. The sensitive unit of the PVDF piezoelectric film sensor has three layers: the surface and bottom layers use silicone rubber as a contact protection layer and a substrate buffer layer, giving the sensor flexibility; the middle layer is a PVDF piezoelectric film 200 μm thick.
During grabbing, because different positioning pins are made of different materials and the workpiece bears different forces when grasped, the PVDF piezoelectric film sensor is used as the force sensor to detect in real time the clamping force of the jaw mechanism and the stress on the positioning pin.
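One plausible way to close the loop from the PVDF reading back to the cylinder, sketched below, is a proportional correction of the air inflow toward a material-dependent target clamping force. The patent does not specify the control law, so the gain and set-points here are purely illustrative:

```python
# Hedged sketch of grip-force regulation: compare the measured PVDF force with
# the target force for this pin material, and nudge the cylinder air inflow
# proportionally. Gain and units are illustrative assumptions.

def adjust_air_inflow(current_inflow: float, measured_force: float,
                      target_force: float, gain: float = 0.1) -> float:
    """One proportional-control step; the result is clamped to be non-negative."""
    correction = gain * (target_force - measured_force)
    return max(0.0, current_inflow + correction)
```

Called once per sensor sample, this raises the inflow while the grip is too loose and lowers it when the pin is squeezed too hard, which is the behaviour needed to avoid damaging the pin surface.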
Further, an anti-static positioning block 6 is placed on the working platform, and the positioning block 6 is made of steel plates and is used for positioning workpieces in the grabbing process.
Example 2
A machine vision-based method for detecting grabbing and positioning of a fuse body positioning pin is realized by the system for detecting grabbing and positioning of the fuse body positioning pin in embodiment 1.
The method for detecting the grabbing and positioning of the fuse body positioning pin comprises the following steps:
step one, calibrating a structured light camera:
and selecting a plane calibration plate as a calibration reference object, selecting a workpiece coordinate system to describe pose information of the workpiece, calculating external parameters dx, dy and dz of a structured light camera, acquiring an original point offset of the workpiece coordinate system relative to a six-axis industrial robot base coordinate system and rotation angles of all joints of the six-axis industrial robot, and determining a spatial relationship between the workpiece coordinate system and the six-axis robot base coordinate system to finish calibration.
Step two, photographing and positioning:
after the PLC control box controls the six-axis industrial robot to reset to a reset point, the six-axis industrial robot is started, the position of a workpiece is determined by photographing by the structured light camera, if the workpiece does not exist at the detection position, the six-axis industrial robot and the tail end clamping jaw are controlled to move the workpiece to the detection position, and the structured light camera waits for photographing the workpiece to collect coordinate points so as to detect the positioning accuracy.
Step three, sensing the grabbing force of the clamping jaw by using a PVDF piezoelectric film sensor:
the charge generated by the PVDF sensitive unit is converted into a voltage signal through a charge amplifying device, and then the voltage signal is subjected to data acquisition and processing through a data acquisition and processing system and fed back to a robot control center.
And fourthly, the robot controls the clamping jaw to grab the workpiece with proper force, so that the surface of the workpiece is prevented from being damaged during grabbing.
Step five, after the workpiece is placed at the position to be detected, detecting the position precision of the positioning pin by a structured light camera:
s1, firstly, taking a picture by a structured light camera to obtain an image, and generating a depth map of a locating pin, a camera color map, point cloud data and color point cloud.
S2, preprocess the original point cloud to shorten the processing time of the subsequent steps: from the camera depth map and color map generated in the previous step, preprocessing produces a color point cloud and clustered point clouds. Two sets of clustered point clouds should be generated here for the next processing stage.
S3, converting one part of clustered point clouds into point clouds with normal directions, and converting the other part of clustered point clouds into common point clouds.
S4, calculating the pose and the size of the planar point cloud to generate the pose and the size of the point cloud by the part of the point cloud with the normal direction; another part of the common point cloud estimates the point cloud edge by using a 3D method to generate the point cloud edge.
S5, processing the edge of the point cloud into a point cloud with a normal direction, filtering out partial noise points in the point cloud through point filtration, and generating the point cloud with the normal direction.
S6, setting the size and the confidence of the point cloud, and processing the point cloud with the normal direction to obtain the point cloud with the highest layer, wherein the obtained point cloud with the highest layer is also the point cloud with the normal direction.
S7, performing point cloud downsampling processing on the obtained highest-layer point cloud to enable the point cloud to be more simplified, and obtaining the simplified point cloud with the normal direction.
S8, using the simplified point cloud to calculate the pose and the size of the planar point cloud, and obtaining the pose of the point cloud.
S9, compute the normal distance between the pose obtained in step S8 and the pose obtained in step S4 from the planar point cloud pose-and-size calculation; this normal distance is the distance between the two planes.
S10, outputting the calculation result by using an output module, and storing the calculation result into a set folder.
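Steps S7–S9 above can be sketched as follows: voxel down-sampling, a least-squares (PCA) plane fit standing in for the pose-and-size computation, and the normal distance between two fitted planes as the plane-to-plane measurement. This is an illustrative reconstruction using numpy, not the Mech-Mind implementation; function names are assumptions:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point (the cell mean) per voxel (cf. step S7)."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / voxel).astype(int))
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(c, axis=0) for c in cells.values()])

def fit_plane(points: np.ndarray):
    """Least-squares plane: centroid plus unit normal from the smallest
    singular vector of the centered points (cf. step S8)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def normal_distance(plane_a, plane_b) -> float:
    """Distance between two near-parallel planes along plane A's normal (cf. step S9)."""
    (c_a, n_a), (c_b, _n_b) = plane_a, plane_b
    return float(abs(np.dot(c_b - c_a, n_a)))
```

For two parallel planar patches 2 mm apart, `normal_distance(fit_plane(p0), fit_plane(p1))` returns 2.0; the `abs` makes the result insensitive to the sign ambiguity of the fitted normal.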
Furthermore, for the structured light camera calibration in step one, the camera is mounted centrally with its axis perpendicular to the worktable plane. Because the positioning pin is not tall and the worktable is of moderate size, the position error of the camera at installation is very small, so the image error introduced in later photographing is correspondingly small; the positioning and height detection errors caused by the pin are likewise small because of its low height. On this basis, the overall calibration can be completed by computing only the external parameters of the structured light camera, without considering its internal parameters. The mounting position of the camera is calibrated as a constant, and the coordinate system is converted so that the camera pose is fixed relative to the robot base coordinate system. The structured light camera can be calibrated against the positioning pin: a planar calibration plate is selected as the reference object, a workpiece coordinate system is chosen to obtain the pose of the pin, and the physical dimensions dx and dy on the XOY plane of the workpiece coordinate system corresponding to each pixel of the camera image are computed. Once dx and dy are determined, and the offset between the origin of the workpiece coordinate system and the origin of the robot base coordinate system and the rotation angle of each coordinate axis are known, the pose of the positioning pin in the workpiece coordinate system can be converted into its pose in the robot base coordinate system.
Calibration in the Z-axis direction is then completed by the same steps after dz is determined from a Z-axis movement. The offset between the origins of the workpiece coordinate system and the robot base coordinate system and the rotation angle of each coordinate axis are obtained by selecting points on the X, Y and Z axes of the workpiece coordinate system as reference points and then locating those points in the robot base coordinate system by jogging the robot.
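Once the origin offset and the per-axis rotation angles are known, the workpiece-to-base conversion is a rotation followed by a translation, as in this illustrative numpy sketch (the rotation order and the numeric values in the note are assumptions, since the patent does not fix a convention):

```python
import numpy as np

def rot_xyz(rx: float, ry: float, rz: float) -> np.ndarray:
    """Rotation matrix from rotations about X, Y, then Z (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def workpiece_to_base(p_workpiece, origin_offset, rx: float, ry: float, rz: float) -> np.ndarray:
    """Map a point from the workpiece frame into the robot base frame."""
    return rot_xyz(rx, ry, rz) @ np.asarray(p_workpiece, float) + np.asarray(origin_offset, float)
```

For example, with zero rotation the mapping is a pure translation by the origin offset, and a 90° rotation about Z sends the workpiece X axis onto the base Y axis — exactly the kind of check one can run against the jog-taught axis points.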
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. The machine vision-based fuse body positioning pin grabbing and positioning detection system is characterized by comprising a robot module, a machine vision module and a workpiece platform; the robot module and the vision module thereof are placed on the workpiece platform;
the robot module comprises a six-axis industrial robot, a clamping jaw mechanism, a force sensor and a PLC control box; the clamping jaw mechanism is arranged on a rotating manipulator at the tail end of the six-axis industrial robot; the force sensor is arranged on the clamping jaw of the clamping jaw mechanism; the six-axis industrial robot and the clamping jaw mechanism are respectively connected with the PLC control box;
the machine vision module comprises a structured light camera and an industrial personal computer, and machine vision software is arranged in the industrial personal computer; the structure light camera is fixed on the six-axis industrial robot, and the structure light camera and the industrial personal computer are respectively connected with a PLC control box of the robot module; the machine vision software converts pixel coordinates of the workpiece into global coordinates, obtains the position coordinates of the workpiece to be detected after analysis and operation aiming at different poses of the locating pin, and sends signals to the robot module to guide the six-axis industrial robot to grab the workpiece with the detection position and move the workpiece to the detection area;
the workpiece platform is provided with a photographing detection position and a position to be detected; the photographing detection position is located in the middle of the workpiece platform, and the position to be detected is located on one side of the workpiece platform.
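The pixel-to-global coordinate conversion recited in claim 1 can be illustrated with standard pinhole back-projection followed by a hand-eye transform. The sketch below is a hedged illustration, not the Mech-Mind software's actual implementation: it assumes a known camera intrinsic matrix `K`, a measured depth for the pixel, and a calibrated base-from-camera transform `T_base_cam` (all hypothetical names and values).

```python
import numpy as np

def pixel_to_base(u, v, depth, K, T_base_cam):
    """Back-project pixel (u, v) at a measured depth into the camera frame,
    then map it into the robot base (global) frame."""
    x = (u - K[0, 2]) * depth / K[0, 0]   # (u - cx) * Z / fx
    y = (v - K[1, 2]) * depth / K[1, 1]   # (v - cy) * Z / fy
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous point in camera frame
    return (T_base_cam @ p_cam)[:3]
```

With the camera rigidly mounted on the robot, `T_base_cam` is the product of the current end-effector pose and the fixed hand-eye transform obtained from calibration.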
2. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 1, wherein the clamping jaw mechanism comprises a connecting flange plate, an air cylinder, clamping jaws, a cylinder valve seat and a supporting frame; the connecting flange plate connects the rotating manipulator at the tail end of the six-axis industrial robot with the supporting frame; the air cylinder is fixed on the supporting frame and pushes the clamping jaws open and closed; the cylinder valve seat is mounted on the supporting frame and carries the reversing valve of the air cylinder, which controls the clamping jaws to grasp the positioning pin; the clamping jaws are used for grabbing and placing the locating pin.
3. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 1, wherein the structured light camera is a Mech-Mind (Mei Kaman De) PRO camera, and the camera is provided with an LED blue light source.
4. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 3, wherein the machine vision software is Mech-Mind machine vision software; calculation and analysis are performed on workpieces in different poses through the Mech-Mind software, pixel coordinates are converted into global coordinates, the specific position coordinates of the workpiece are obtained, and the coordinate information is transmitted to the robot module.
5. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 1, wherein the force sensor is a PVDF piezoelectric film sensor, and the PVDF piezoelectric film sensor is arranged on the clamping jaw.
6. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 5, wherein the sensing units in the PVDF piezoelectric film sensor are divided into three layers: the surface layer and the bottom layer adopt silicon rubber as a contact protection layer and a substrate buffer layer; the middle layer is PVDF piezoelectric film with a film thickness of 200 μm.
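The PVDF film in claim 6 outputs a charge proportional to the applied normal force; in practice a charge amplifier converts this to a voltage, and the grip force is recovered by inverting that chain. The sketch below shows the conversion; the piezoelectric coefficient and feedback capacitance are illustrative assumptions, not values from the patent.

```python
# Illustrative constants (assumptions, not from the patent):
D33 = 33e-12   # C/N  - typical PVDF piezoelectric coefficient
CF = 10e-9     # F    - charge-amplifier feedback capacitance

def grip_force(v_out):
    """Estimate the normal grip force (N) from the charge-amplifier
    output voltage: F = Q / d33, where Q = V * Cf."""
    charge = v_out * CF
    return charge / D33
```

The silicone-rubber surface and substrate layers described in the claim also act as mechanical low-pass filters, so a real readback would additionally compensate for their compliance.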
7. The machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 1, wherein an anti-static positioning block is placed on the workpiece platform; the positioning block is made of steel plate and is used for positioning the workpiece during the grabbing process.
8. The machine vision-based fuse body positioning pin grabbing and positioning detection method, implemented by the machine vision-based fuse body positioning pin grabbing and positioning detection system according to claim 1, is characterized by comprising the following steps:
step one, calibrating a structured light camera;
step two, photographing and positioning: starting a six-axis industrial robot, and photographing by a structured light camera to determine the position of a workpiece;
step three, detecting the grabbing force of the clamping jaw by using the force sensor;
step four, the robot controls the clamping jaw to grasp the workpiece according to the grasping force of the clamping jaw;
step five, after the workpiece is placed at the position to be detected, detecting the position precision of the positioning pin by a structured light camera:
s1, a structured light camera shoots and acquires an image, and a depth map of a locating pin, a camera color map, point cloud data and color point cloud are generated;
s2, preprocessing an original point cloud: preprocessing according to the depth map and the color map of the locator to generate a color point cloud and a point cloud after clustering of the two parts;
s3, converting one part of clustered point clouds into point clouds with normal directions, and converting the other part of clustered point clouds into common point clouds;
s4, calculating the pose and size of the planar point cloud from the point cloud with normals to generate the point cloud pose and size; estimating the point cloud edge from the common point cloud by a 3D method to generate the point cloud edge;
s5, processing the edge of the point cloud into a point cloud with a normal direction, and filtering out part of noise points in the point cloud through point filtration to generate the point cloud with the normal direction;
s6, setting the size and the confidence of the point cloud, and processing the point cloud with the normal direction to obtain the point cloud with the normal direction of the highest layer;
s7, performing point cloud downsampling processing on the point cloud with the normal direction to obtain the simplified point cloud with the normal direction;
s8, using the simplified normal point cloud to calculate the pose and the size of the plane point cloud to obtain the pose of the point cloud;
s9, taking the point cloud pose obtained in step S8 and the pose obtained from the plane point cloud pose-and-size calculation in step S4, and computing the normal distance between the two poses to obtain the distance between the two measurement planes;
s10, outputting the calculation result, and storing the calculation result into a set folder.
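Steps S4 through S9 above reduce to fitting a plane to each processed point cloud and measuring the normal distance between the fitted planes. The following is a minimal numpy sketch of that core computation, assuming two pre-segmented clouds as input; it is an illustrative reconstruction, not the patent's Mech-Mind pipeline.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The normal is the singular vector of the centered cloud with the
    smallest singular value, i.e. the direction of least variance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_distance(cloud_a, cloud_b):
    """Normal distance between two roughly parallel measured planes:
    project the vector between centroids onto the first plane's normal."""
    c_a, normal = fit_plane(cloud_a)
    c_b, _ = fit_plane(cloud_b)
    return abs(np.dot(c_b - c_a, normal))
```

Point filtering, confidence thresholding and downsampling (steps S5 to S7) would precede this fit in the full pipeline; they chiefly improve the robustness of the SVD step against outliers.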
9. The machine vision-based fuse body positioning pin grabbing and positioning detection method according to claim 8, wherein in the first step, a plane calibration plate is selected as the calibration reference, a workpiece coordinate system is selected to describe the pose information of the workpiece, the external parameters dx, dy and dz of the structured light camera are calculated, the origin offset of the workpiece coordinate system relative to the base coordinate system of the six-axis industrial robot and the rotation angle about each coordinate axis are obtained, and the spatial relationship between the workpiece coordinate system and the robot base coordinate system is determined to complete the calibration.
10. The machine vision-based fuse body positioning pin grabbing and positioning detection method according to claim 8, wherein in the second step, if there is no workpiece at the detection position, the six-axis industrial robot is controlled to move a workpiece to the detection position, and the system then waits for the structured light camera to photograph the workpiece and capture its coordinate point.
CN202310914291.4A 2023-07-25 2023-07-25 Fuse body positioning pin grabbing and positioning detection system and method based on machine vision Pending CN116972813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310914291.4A CN116972813A (en) 2023-07-25 2023-07-25 Fuse body positioning pin grabbing and positioning detection system and method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310914291.4A CN116972813A (en) 2023-07-25 2023-07-25 Fuse body positioning pin grabbing and positioning detection system and method based on machine vision

Publications (1)

Publication Number Publication Date
CN116972813A true CN116972813A (en) 2023-10-31

Family

ID=88482601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310914291.4A Pending CN116972813A (en) 2023-07-25 2023-07-25 Fuse body positioning pin grabbing and positioning detection system and method based on machine vision

Country Status (1)

Country Link
CN (1) CN116972813A (en)

Similar Documents

Publication Publication Date Title
CN110370286B (en) Method for identifying rigid body space position of dead axle motion based on industrial robot and monocular camera
CN109029257B (en) Large-scale workpiece pose measurement system and method based on stereoscopic vision and structured light vision
CN110006905B (en) Large-caliber ultra-clean smooth surface defect detection device combined with linear area array camera
CN113674345B (en) Two-dimensional pixel-level three-dimensional positioning system and positioning method
TW201325811A (en) Method for calibrating camera measurement system
CN111531407B (en) Workpiece attitude rapid measurement method based on image processing
CN110136047B (en) Method for acquiring three-dimensional information of static target in vehicle-mounted monocular image
Zhu et al. Noncontact 3-D coordinate measurement of cross-cutting feature points on the surface of a large-scale workpiece based on the machine vision method
CN102589424A (en) On-line detection vision positioning method for combination surface hole group of engine cylinder
JP7427370B2 (en) Imaging device, image processing device, image processing method, calibration method for imaging device, robot device, method for manufacturing articles using robot device, control program, and recording medium
CN110044266B (en) Photogrammetry system based on speckle projection
JPS6332306A (en) Non-contact three-dimensional automatic dimension measuring method
CN110779933A (en) Surface point cloud data acquisition method and system based on 3D visual sensing array
EP2521635B1 (en) System and method for picking and placement of chip dies
CN113188473A (en) Surface topography measuring device and method
CN116972813A (en) Fuse body positioning pin grabbing and positioning detection system and method based on machine vision
CN114998422B (en) High-precision rapid three-dimensional positioning system based on error compensation model
KR20040028495A (en) Offset Measurement Mechanism and Method for Bonding Apparatus
CN106276285A (en) Group material buttress position automatic testing method
CN111598945B (en) Three-dimensional positioning method for curved bearing bush cover of automobile engine
CN112507871A (en) Inspection robot and detection method thereof
CN113715935A (en) Automatic assembling system and automatic assembling method for automobile windshield
CN117119325B (en) Area array sensor camera and mounting position adjusting method thereof
Zhao et al. Study on the Technologies of Close Range Photogrammetry and Applications in the Manufacture of Aviation
CN110962121B (en) Movement device for loading 3D detection unit and material grabbing method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination