CN112497219B - Columnar workpiece classifying and positioning method based on target detection and machine vision
- Publication number
- CN112497219B (application CN202011419779.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- workpiece
- eye
- workpieces
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision, which comprises two parts: YOLOv3-based target detection, defect detection, and coarse positioning; and machine-vision-based high-precision positioning. The YOLOv3 part comprises making a data set, improving the network structure, adjusting the candidate-frame parameters, and performing real-time positioning, recognition, and defect detection. An eye-to-hand camera acquires the workpiece images, an image enhancement algorithm is fused in, and the candidate-frame parameters are improved by a vector similarity measurement method. In the machine vision part, the coarse position from the YOLOv3 algorithm guides an eye-in-hand camera to acquire an image; image features are extracted, abnormal features are rejected by a maximum-value constraint, and finally the workpiece contour features are fitted to obtain the high-precision position of the target workpiece.
Description
Technical Field
The invention relates to industrial robots and machine vision applications, in particular to a method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision.
Background
With the development of intelligent manufacturing, industrial robots offer advantages such as good versatility and high repeat-positioning accuracy, yet in some industrial automation fields most robots are still programmed by manual teaching. This remains a long way from true intelligent manufacturing, and conventional teaching cannot meet its requirements. Machine vision technology addresses the robot's position-control requirement well, but recognition flexibility and accuracy are difficult to reconcile. Deep-learning-based target detection can meet the flexibility requirement of multi-target recognition, but its positioning accuracy is insufficient. Traditional machine vision detection has high recognition accuracy, but the features it can recognize are limited to a single type.
The patent published as CN111238450A discloses a visual positioning method and device in which multiple frames of images are acquired for a single target workpiece and the visual positioning information of each frame must satisfy the pose transformation relation of that frame's acquisition; it cannot recognize and position multiple target workpieces. The patent published as CN106272416A discloses a force- and vision-based precision assembly system and method for slender shafts handled by a robot; it relies on several kinds of sensors (vision, position, and force) to realize precision assembly and therefore has certain limitations.
Deep learning can realize target detection, but its positioning accuracy is poor; traditional machine vision recognition has high positioning accuracy, but the detectable targets are too limited. Classification and high-precision positioning of multi-target workpieces is therefore a problem that urgently needs to be solved in the field of industrial robot and machine vision applications.
Disclosure of Invention
The invention provides a method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision. Target detection is performed on the target workpieces through deep learning to complete workpiece classification and coarse positioning; the coarse target position guides a manipulator above the workpiece, and precise positioning is then completed through machine vision. Classification recognition and high-precision positioning of multi-target workpieces are thereby realized.
To this end, the invention provides a method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision, which comprises the following steps:
Multi-target recognition, coarse positioning, and defect recognition based on the YOLOv3 target detection algorithm:
and acquiring images of the multi-target workpiece by using an Eye-To-Hand camera of the experimental platform. The experimental platform comprises a manipulator, a vision control system, a Eye-To-Hand camera and an Eye-In-Hand camera. The Eye-To-Hand camera is fixed right above the test bed, and the camera has a higher working distance from the test platform so as To image different types of multi-target workpieces on the visual field surface. The Eye-To-Hand camera has lower coarse recognition positioning accuracy due To larger working distance.
S1: the Eye-To-Hand camera acquires images of the multi-target workpiece on the test bed, inputs the acquired images into a yolov3 algorithm with an improved network structure, trains a yolov3 algorithm model with the improved network structure, and performs target detection by using a trained yolov3 algorithm multi-target detection model To obtain image coordinates of various categories and coarse precision of the multi-target workpiece.
S2: based on coordinate transformation, performing Hand-Eye calibration on the Eye-To-Hand camera and the tail end of the manipulator through a calibration plate calibration method, combining the obtained image coordinates of the multi-target workpiece with Hand-Eye calibration parameters, calculating the world coordinates of each target workpiece, and returning To the category of each target workpiece.
S3: the yolov3 algorithm model with the improved network structure trains multiple target workpiece types during training, and trains typical defects of each workpiece. When the yolov3 algorithm with the trained improved network structure is used for target detection, key defects of targets such as scratches, unfilled corners and the like are identified.
High-precision positioning process of target workpiece based on machine vision:
s4: the coarse positioning workpiece coordinates are obtained by recognizing a yolov3 algorithm model with an improved network structure, the position coordinates are transmitted to a vision control system based on a communication protocol, and the control system sends the position coordinates to a manipulator. The visual control system is acted by the industrial personal computer; the Eye-In-Hand camera is connected together at the tail end of the manipulator. The Eye-In-Hand camera moves along with the robot arm above the target workpiece.
S5: the Eye-In-Hand camera moves to the upper part of the workpiece to collect images of the workpiece, the workpiece is placed above the test bed, the system performs image processing and feature extraction on the collected images to obtain key feature coordinates of the workpiece, and the Eye-Hand calibration parameters of the Eye-In-Hand camera are combined to obtain high-precision world coordinates of the workpiece and send the world coordinates to the vision system.
S6: and the system processor guides the manipulator to clamp, carry or assemble according to the high-precision coordinates.
S7: and repeating the steps S4-S6, and carrying out high-precision positioning on different types of target workpieces to realize high-precision positioning of multiple target workpieces.
The workpieces are shaft parts, and the multi-target workpieces comprise four different types of workpieces. The cameras and the vision system communicate over the GigE protocol to transmit images; the vision system and the manipulator communicate over the TCP/IP protocol to transmit position coordinates.
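As an illustration of the coordinate hand-off, the sketch below sends one coarse position over TCP/IP. The host address, port, and ASCII message format are assumptions for illustration only; the patent specifies the protocol but not the message layout.

```python
# A minimal sketch, assuming the manipulator controller accepts an ASCII
# "class,x,y\n" message on a hypothetical host and port.
import socket

def send_coarse_position(category: str, x_mm: float, y_mm: float,
                         host: str = "192.168.1.50", port: int = 5000) -> None:
    msg = f"{category},{x_mm:.3f},{y_mm:.3f}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg)  # position-coordinate transmission of step S4

send_coarse_position("shaft_type_1", 152.430, 87.215)
```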
Further, the step S1 specifically includes:
s11: and acquiring images of the target To be detected by using an Eye-To-Hand camera on the test workbench, and performing marking classification on different types of workpieces after acquisition To manufacture a training data set. The workpiece marks are classified into five major categories, including four different types of shaft parts and four different types of workpieces with defects.
S12: and (3) carrying out enhancement processing on the training data set, and inputting the data set subjected to the enhancement processing into an improved yolov3 algorithm model for training to obtain a parameter model.
S13: and inputting the original multi-target workpiece image to be identified into a yolov3 model of a trained improved network, and outputting corresponding defect detection and classification identification coarse positioning results.
S14: and measuring the parameters of the candidate frames in the training set by adopting a vector similarity measurement method, carrying out statistical analysis on the parameters according to the standardized Euclidean distance, carrying out statistical analysis on the parameters of the candidate frames according to the standardized Euclidean distance, writing the parameters with the minimum error into a configuration file, and improving the yolov3 target detection frame.
The YOLOv3 model with the improved network structure is improved on the basis of the Darknet-53 network structure and meets the target detection requirements of the multi-target workpieces. In the target detection and defect identification method provided by the invention, the network structure model of the YOLOv3 algorithm is optimized and improved as follows:
the original network model of the Yolov3 target detection algorithm obtains detection results at three scales of 13×13×75, 26×26×75 and 52×52×75 by a series of downsampling processes, wherein 13, 26 and 52 represent sampling scales. 75 is split into 3× (4+1+20), 3 representing three dimensions of the detection box,4 representing the position information of each detection box, which includes the width and height of the detection box and the center position coordinates of the detection box, 1 representing the probability of recognition, and 20 representing the target species that can be detected. The yolov3 algorithm of the network structure is improved, the modified network structure can meet the requirement of target detection of four different types of multi-target workpieces, and meanwhile, different types of defective workpieces can be identified, and three different-scale outputs of 13×13×39, 26×26×39 and 52×52×39 are obtained.
Further, the step S2 specifically includes:
S21: Hand-Eye calibration of the Eye-To-Hand camera is carried out by a calibration-plate method based on HALCON;
S22: The Hand-Eye calibration yields the external parameters of the Eye-To-Hand camera, which are normalized into matrix form;
S23: The image coordinates obtained by the YOLOv3 target detection model are combined with the external parameter matrix, converting the obtained image coordinates into world coordinates of the robot.
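As an illustration of S23, the sketch below back-projects a detected pixel onto the work surface. It assumes the workpieces lie on a single plane Z = 0 in the robot/world frame, so the extrinsic parameters induce a homography; the intrinsic matrix K and all numeric values are hypothetical stand-ins for the calibration results.

```python
# A minimal sketch of image-to-world conversion under stated assumptions:
# K is the intrinsic matrix, (R, t) the calibrated extrinsics, and all
# workpieces lie on the Z = 0 table plane of the world frame.
import numpy as np

def pixel_to_world(u, v, K, R, t):
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))  # plane-induced homography
    xw = np.linalg.solve(H, np.array([u, v, 1.0]))  # back-project the pixel
    return xw[:2] / xw[2]                           # (X, Y) on the Z = 0 plane

# Illustrative numbers only; real values come from the HALCON calibration.
K = np.array([[2400.0, 0.0, 1024.0],
              [0.0, 2400.0, 768.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                    # camera axes aligned with the world frame
t = np.array([0.0, 0.0, 800.0])  # camera 800 mm above the table
print(pixel_to_world(1100, 700, K, R, t))  # world (X, Y) in mm
```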
Further, the step S5 specifically includes:
S51: After the Eye-In-Hand camera photographs a single target workpiece, image preprocessing and noise-reduction operations are performed; adaptive binarization is then applied to the preprocessed image to obtain the edge feature information of the columnar workpiece.
S52: fitting the circular outline of the columnar workpiece according to the edge characteristic information of the circle based on an abnormal value detection method, and obtaining the maximum excircle outline of the columnar workpiece by adopting a maximum value constraint selection_max_length_contour method to realize high precision of visual positioning.
The selection_max_length_contour method applies a maximum-value constraint to the concentric circular contours of the columnar workpiece obtained after fitting the key workpiece information, and returns the contour feature information of the columnar workpiece. The method initializes the longest length and its index, traverses the acquired contour feature lengths, stores the length and index of the longest contour, and finally returns the index of the longest contour.
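A minimal sketch of steps S51-S52 follows, written with OpenCV (an assumption; the patent does not name the image-processing library, and the file name is hypothetical). It reproduces the traversal just described, and cv2.minEnclosingCircle stands in for the circle fitting to show how the selected contour yields a position.

```python
# A sketch of S51-S52 under stated assumptions: OpenCV stands in for the
# unnamed vision library, and "workpiece.png" is a hypothetical image file.
import cv2

img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)  # noise reduction (S51)
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)  # adaptive binarization
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

def selection_max_length_contour(contours):
    # Initialize the longest length and its index, traverse the contour
    # lengths, and return the index of the longest (maximum outer) contour.
    longest_len, longest_idx = -1.0, -1
    for i, c in enumerate(contours):
        length = cv2.arcLength(c, True)  # contour perimeter
        if length > longest_len:
            longest_len, longest_idx = length, i
    return longest_idx

outer = contours[selection_max_length_contour(contours)]
(cx, cy), radius = cv2.minEnclosingCircle(outer)  # center of the outer circle
```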
The classification and positioning accuracy for the columnar workpieces can reach the micron level, and multiple different types of columnar workpieces can be recognized at the same time, with a recognition accuracy above 90% and a recognition speed above 50 fps for the different types of columnar workpieces.
Compared with the prior art, the invention has the following advantages:
1. The method performs high-precision positioning of the multi-target workpieces on the test workbench in cooperation with the manipulator, achieving full automation of clamping, carrying, and assembling the multi-target workpieces with no manual intervention in the whole process, which greatly improves production efficiency.
2. The Eye-To-Hand camera fixed above the test workbench, using the YOLOv3 model with the improved network, automatically performs target detection on the multi-target workpieces, completes coarse positioning, and carries out defect detection on defective workpieces.
3. The coordinate position returned by coarse positioning is transmitted to the manipulator, which carries the Eye-In-Hand camera above the target workpiece to perform high-precision positioning. The method thereby overcomes the insufficient positioning accuracy of deep-learning-based target detection, which otherwise meets the flexibility requirement of multi-target recognition well, and the single-feature limitation of traditional machine vision detection, which otherwise has high recognition accuracy.
Drawings
Fig. 1 is a schematic diagram of a camera layout according to the present invention.
Fig. 2 is a flow chart of a method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision.
Fig. 3 is a schematic diagram of the improved YOLOv3 target detection model structure of the present invention.
Fig. 4 is a flowchart of the selection_max_length_contour algorithm employed by the present invention.
Fig. 5 is an effect diagram of the method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision.
Claims (8)
1. A method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision, characterized by comprising the following steps:
acquiring images of the multi-target workpieces with an Eye-To-Hand camera of an experimental platform; the experimental platform comprises a manipulator, a vision control system, the Eye-To-Hand camera, and an Eye-In-Hand camera; the Eye-To-Hand camera is fixed above the experimental platform and images different types of multi-target workpieces in the field of view; the multi-target workpieces include four types of columnar workpieces;
S1: the Eye-To-Hand camera acquires images of the multi-target workpieces on the experimental platform, the acquired images are input into a YOLOv3 algorithm model with an improved network structure to train the model, and target detection is performed with the trained YOLOv3 algorithm model with the improved network structure to obtain the category and coarse image coordinates of each of the multi-target workpieces;
S2: based on coordinate transformation, Hand-Eye calibration between the Eye-To-Hand camera and the tail end of the manipulator is performed by a calibration-plate method; the obtained image coordinates of the multi-target workpieces are combined with the Hand-Eye calibration parameters to calculate the world coordinates of each target workpiece, and the category of each target workpiece is returned;
S3: the YOLOv3 algorithm model with the improved network structure is trained on multiple target workpiece types and on the scratch and unfilled-corner target defects of each workpiece; when target detection is performed with the trained model, the scratch and unfilled-corner target defects are identified;
S4: the category and coarse image coordinates of the multi-target workpieces obtained by the YOLOv3 algorithm model with the improved network structure are transmitted to the vision control system over a communication protocol, and the vision control system sends the image coordinates to the manipulator; the vision control system runs on an industrial personal computer; the Eye-In-Hand camera is mounted at the tail end of the manipulator and moves with the manipulator to a position above the target workpiece;
S5: the Eye-In-Hand camera moves above the workpiece, which is placed on the experimental platform, and acquires its image; the vision control system performs image processing and feature extraction on the acquired image to obtain the key feature coordinates of the workpiece, combines them with the Hand-Eye calibration parameters of the Eye-In-Hand camera to obtain high-precision world coordinates of the workpiece, and sends the world coordinates to the vision control system;
S6: the system processor guides the manipulator to clamp, carry, or assemble according to the high-precision world coordinates;
S7: steps S4-S6 are repeated to position different types of target workpieces with high precision, realizing high-precision positioning of multiple target workpieces.
2. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the Eye-To-Hand camera, the Eye-In-Hand camera, and the vision control system communicate over the GigE protocol for image transmission; and the vision control system and the manipulator communicate over the TCP/IP protocol for position-coordinate transmission.
3. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the step S1 specifically comprises the following steps:
s11: acquiring images of target workpieces To be detected by using an Eye-To-Hand camera on an experimental platform, and performing marking classification on different types of workpieces after acquisition To manufacture a training data set;
s12: performing enhancement processing on the training data set, and inputting the data set subjected to the enhancement processing into a yolov3 algorithm model with an improved network structure for training to obtain a parameter model;
s13: inputting an original multi-target workpiece image to be identified into a yolov3 algorithm model of a trained improved network structure, and outputting corresponding defect detection and classification identification coarse positioning results;
s14: and measuring the candidate frame parameters in the training set by adopting a vector similarity measurement method, carrying out statistical analysis on the candidate frame parameters according to the standardized Euclidean distance, writing the parameters with the minimum error into a configuration file, and improving the yolov3 target detection frame.
4. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the YOLOv3 algorithm model with the improved network structure is improved on the basis of the Darknet-53 network structure and meets the target detection requirements of the multi-target workpieces.
5. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the original network model of the YOLOv3 algorithm obtains detection results at three scales of 13×13×75, 26×26×75, and 52×52×75 through a series of downsampling processes, wherein 13, 26, and 52 represent sampling scales; 75 decomposes as 3×(4+1+20), where 3 represents the three detection boxes per scale, 4 represents the position information of each detection box, comprising its width, height, and center coordinates, 1 represents the recognition probability, and 20 represents the number of detectable target classes; the YOLOv3 algorithm model with the improved network structure can meet the target detection of four different types of multi-target workpieces and can identify different types of defective workpieces, obtaining three outputs of different scales: 13×13×39, 26×26×39, and 52×52×39.
6. The method for classifying and positioning columnar workpieces with high precision based on object detection and machine vision according to claim 1, wherein the step S2 is specifically as follows:
S21: Hand-Eye calibration of the Eye-To-Hand camera is carried out by a calibration-plate method based on HALCON;
S22: the Hand-Eye calibration yields the external parameters of the Eye-To-Hand camera, which are normalized into matrix form;
S23: the image coordinates obtained by the YOLOv3 algorithm model are combined with the external parameter matrix, converting the obtained image coordinates into world coordinates of the manipulator.
7. The method for classifying and positioning columnar workpieces with high precision based on object detection and machine vision according to claim 1, wherein the step S5 specifically comprises:
S51: after the Eye-In-Hand camera photographs a single target workpiece, image preprocessing and noise-reduction operations are performed; adaptive binarization is performed on the preprocessed image to obtain the edge feature information of the columnar workpiece;
S52: based on an outlier detection method, the circular outline of the columnar workpiece is fitted from the circle's edge feature information, and the maximum outer-circle contour of the columnar workpiece is obtained by the maximum-value-constrained selection_max_length_contour method, realizing high-precision visual positioning.
8. The method for classifying and positioning columnar workpieces with high precision based on target detection and machine vision according to claim 1, wherein the classification and positioning accuracy for the columnar workpieces is at the micron level, and a plurality of different types of columnar workpieces can be identified.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011419779.2A CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011419779.2A CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112497219A CN112497219A (en) | 2021-03-16 |
CN112497219B (en) | 2023-09-12
Family
ID=74971073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011419779.2A Active CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112497219B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113134683A (en) * | 2021-05-13 | 2021-07-20 | 兰州理工大学 | Laser marking method and device based on machine learning |
CN113538417A (en) * | 2021-08-24 | 2021-10-22 | 安徽顺鼎阿泰克科技有限公司 | Transparent container defect detection method and device based on multi-angle and target detection |
CN113657551B (en) * | 2021-09-01 | 2023-10-20 | 陕西工业职业技术学院 | Robot grabbing gesture task planning method for sorting and stacking multiple targets |
CN113814987B (en) * | 2021-11-24 | 2022-06-03 | 季华实验室 | Multi-camera robot hand-eye calibration method and device, electronic equipment and storage medium |
CN115159149B (en) * | 2022-07-28 | 2024-05-24 | 深圳市罗宾汉智能装备有限公司 | Visual positioning-based material taking and unloading method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8111905B2 (en) * | 2009-10-29 | 2012-02-07 | Mitutoyo Corporation | Autofocus video tool and method for precise dimensional inspection |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102229146A (en) * | 2011-04-27 | 2011-11-02 | 北京工业大学 | Remote control humanoid robot system based on exoskeleton human posture information acquisition technology |
CN105690386A (en) * | 2016-03-23 | 2016-06-22 | 北京轩宇智能科技有限公司 | Teleoperation system and teleoperation method for novel mechanical arm |
CN108555908A (en) * | 2018-04-12 | 2018-09-21 | 同济大学 | A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras |
CN109448054A (en) * | 2018-09-17 | 2019-03-08 | 深圳大学 | The target Locate step by step method of view-based access control model fusion, application, apparatus and system |
CN109483554A (en) * | 2019-01-22 | 2019-03-19 | 清华大学 | Robotic Dynamic grasping means and system based on global and local vision semanteme |
Non-Patent Citations (1)
Title |
---|
Workpiece recognition and classification system based on image resolution processing and convolutional neural network; 陈春谋; 系统仿真技术 (System Simulation Technology), Vol. 15, No. 2, pp. 99-106 *
Also Published As
Publication number | Publication date |
---|---|
CN112497219A (en) | 2021-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112497219B (en) | Columnar workpiece classifying and positioning method based on target detection and machine vision | |
CN111537517B (en) | Unmanned intelligent stamping defect identification method | |
CN110806736B (en) | Method for detecting quality information of forge pieces of die forging forming intelligent manufacturing production line | |
CN111243017A (en) | Intelligent robot grabbing method based on 3D vision | |
CN110146017B (en) | Industrial robot repeated positioning precision measuring method | |
CN114355953B (en) | High-precision control method and system of multi-axis servo system based on machine vision | |
CN112037219A (en) | Metal surface defect detection method based on two-stage convolution neural network | |
CN113393426B (en) | Steel rolling plate surface defect detection method | |
CN115439458A (en) | Industrial image defect target detection algorithm based on depth map attention | |
CN114290016B (en) | High-precision wood furniture assembling system and method based on binocular parallax calculation | |
CN114913346B (en) | Intelligent sorting system and method based on product color and shape recognition | |
CN113822810A (en) | Method for positioning workpiece in three-dimensional space based on machine vision | |
CN114310883A (en) | Mechanical arm autonomous assembling method based on multiple knowledge bases | |
CN112729112A (en) | Engine cylinder bore diameter and hole site detection method based on robot vision | |
CN113936291A (en) | Aluminum template quality inspection and recovery method based on machine vision | |
CN111415384B (en) | Industrial image component accurate positioning system based on deep learning | |
CN116843615B (en) | Lead frame intelligent total inspection method based on flexible light path | |
CN111189396B (en) | Displacement detection method of incremental absolute grating ruler based on neural network | |
CN110021027B (en) | Edge cutting point calculation method based on binocular vision | |
CN118314138B (en) | Laser processing method and system based on machine vision | |
CN117260003B (en) | Automatic arranging, steel stamping and coding method and system for automobile seat framework | |
CN118038103B (en) | Visual loop detection method based on improved dynamic expansion model self-adaptive algorithm | |
CN111145258B (en) | Method for automatically feeding and discharging various kinds of automobile glass by industrial robot | |
CN117299596B (en) | Material screening system and method for automatic detection | |
CN117207191A (en) | High-precision welding robot hand-eye calibration method based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||