CN112497219A - Columnar workpiece classification positioning method based on target detection and machine vision

Info

Publication number: CN112497219A (published 2021-03-16); granted as CN112497219B (published 2023-09-12)
Application number: CN202011419779.2A (filed 2020-12-06 by Beijing University of Technology; priority date 2020-12-06)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘志峰, 雷旦, 赵永胜, 李龙飞
Assignee: Beijing University of Technology
Legal status: Granted; Active

Classifications

    • B25J9/16 Programme controls (programme-controlled manipulators)
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses a columnar workpiece classification and high-precision positioning method based on target detection and machine vision, comprising two parts: yolov3-based target detection, defect detection, and rough positioning; and machine-vision-based high-precision positioning. The yolov3 part covers data set preparation, network structure improvement, candidate-frame (anchor) parameter adjustment, real-time recognition and positioning, and defect detection. Workpiece images are acquired by an eye-to-hand camera, an image enhancement algorithm is fused in, and the candidate-frame parameters are improved by a vector similarity measurement method. In the machine vision part, the coarse position output by the yolov3 algorithm guides an eye-in-hand camera to acquire an image; image features are extracted, abnormal features are removed by a maximum-value constraint, and the workpiece contour features are finally fitted to obtain the high-precision position of the target workpiece.

Description

Columnar workpiece classification positioning method based on target detection and machine vision
Technical Field
The invention relates to industrial robots and machine vision applications, in particular to a columnar workpiece classification high-precision positioning method based on target detection and machine vision.
Background
With the development of intelligent manufacturing, industrial robots, which offer good versatility and high repeat positioning accuracy, have been adopted across most fields of industrial automation, where they are typically programmed by teaching. Traditional teaching, however, falls well short of the requirements of true intelligent manufacturing. Machine vision technology addresses the robot's position-control requirement, but it struggles to reconcile recognition flexibility with accuracy. Target detection based on deep learning meets the flexibility requirement of multi-target recognition well, yet its positioning accuracy is insufficient; traditional machine vision inspection offers high recognition accuracy but handles only a single kind of recognition feature.
The patent published as CN111238450A discloses a visual positioning method and device in which multiple frames of images are collected for a single target workpiece and the visual positioning information of each frame must satisfy the pose transformation relation under which that frame was collected; it therefore cannot identify and position multiple target workpieces. The patent published as CN106272416A discloses a robot slender-shaft precision assembly system and method based on force sensing and vision; because it relies on several kinds of sensors (vision, position, and force), it has clear limitations when realizing precision assembly.
In short, deep learning can realize target detection but with poor positioning accuracy, while traditional machine vision recognition positions accurately but handles only a single kind of detection target. The classification and high-precision positioning of multi-target workpieces therefore remains an open problem in the field of industrial robotics and machine vision applications.
Disclosure of Invention
The invention provides a columnar workpiece classification and high-precision positioning method based on target detection and machine vision. Target detection by deep learning completes workpiece classification and rough target positioning; the rough target position guides the manipulator to above the workpiece, and machine vision completes high-precision target positioning. Classification recognition and high-precision positioning of multi-target workpieces are thereby achieved.
Therefore, the invention provides a columnar workpiece classification high-precision positioning method based on target detection and machine vision, which comprises the following steps:
the multi-target identification, rough positioning and defect identification process based on yolov3 target detection algorithm is as follows:
Images of the multi-target workpieces are acquired with the Eye-To-Hand camera of the experimental platform. The experimental platform comprises a manipulator, a vision control system, an Eye-To-Hand camera, and an Eye-In-Hand camera. The Eye-To-Hand camera is fixed directly above the test bed at a large working distance from the platform, so that multiple workpieces of different types fit within its field of view. Because of this large working distance, the Eye-To-Hand camera provides only coarse identification and positioning accuracy.
S1: The Eye-To-Hand camera acquires images of the multi-target workpieces on the test bed. The acquired images are used to train a yolov3 model with an improved network structure, and the trained multi-target detection model then performs target detection to obtain the category of each target workpiece and its coarse image coordinates.
S2: Based on coordinate transformation, Hand-Eye calibration between the Eye-To-Hand camera and the manipulator end-effector is performed with a calibration-plate method. The coarse image coordinates of the multi-target workpieces are combined with the Hand-Eye calibration parameters to solve the world coordinates of each target workpiece, and the category of each target workpiece is returned.
S3: During training, the improved yolov3 model is trained on the multiple target workpiece types and, at the same time, on the typical defects of each workpiece. When the trained model performs target detection, it identifies key target defects such as scratches and unfilled corners.
The high-precision positioning process of the target workpiece based on machine vision comprises the following steps:
S4: The coarse workpiece coordinates identified by the improved yolov3 model are transmitted to the vision control system over a communication protocol, and the control system sends the position coordinates to the manipulator. The vision control system runs on an industrial personal computer; the Eye-In-Hand camera is mounted on the manipulator end-effector and moves with the manipulator to above the target workpiece.
S5: The Eye-In-Hand camera moves to above the workpiece, which rests on the test bed, and acquires its image. The system performs image processing and feature extraction on the acquired image to obtain the key feature coordinates of the workpiece; combined with the Hand-Eye calibration parameters of the Eye-In-Hand camera, the high-precision world coordinates of the workpiece are obtained and sent to the vision system.
S6: According to the high-precision coordinates, the system processor guides the manipulator to clamp, carry, or assemble the workpiece.
S7: Steps S4-S6 are repeated for target workpieces of different types, realizing high-precision positioning of the multi-target workpieces.
The workpieces are shaft parts; the multi-target workpieces comprise four different types. The cameras and the vision system communicate over the GigE protocol to transmit images; the vision system and the manipulator communicate over the TCP/IP protocol to transmit position coordinates, as sketched below.
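The patent specifies only the transports (GigE for images, TCP/IP for coordinates) and no message format. As a minimal sketch, assuming a newline-delimited JSON payload and an illustrative address and port (none of which appear in the patent), the vision system could push a coarse pose to the manipulator controller like this:

import json
import socket

def send_coarse_pose(host: str, port: int, category: str, x_mm: float, y_mm: float) -> None:
    """Send one coarse workpiece pose (world coordinates, mm) over TCP/IP."""
    payload = json.dumps({"category": category, "x_mm": x_mm, "y_mm": y_mm})
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(payload.encode("utf-8") + b"\n")  # newline-delimited messages

# Hypothetical usage: report a detected shaft part at (412.3, 187.6) mm.
# send_coarse_pose("192.168.1.50", 6000, "shaft_type_2", 412.3, 187.6)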
Further, the step S1 is specifically:
S11: Images of the targets to be detected are acquired with the Eye-To-Hand camera above the test bench; after acquisition, the different workpiece types are labelled and classified to build a training data set. The workpiece labels cover four different types of shaft parts together with the four corresponding types of defective workpieces.
S12: The training data set is augmented, and the augmented data set is input into the improved yolov3 model for training to obtain a parameter model.
S13: The original multi-target workpiece image to be identified is input into the trained improved yolov3 model, which outputs the corresponding defect detection and classification results with coarse positions.
S14: The candidate-frame (anchor) parameters of the training set are measured with a vector similarity metric and statistically analysed under the standardized Euclidean distance; the parameters with the minimum error are written into the configuration file, improving the yolov3 detection framework (a sketch of this analysis follows).
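A minimal sketch of the S14 anchor analysis, assuming the labelled boxes are available as (width, height) pairs in pixels. The patent names only vector similarity under the standardized Euclidean distance; a k-means-style loop with that metric is one plausible reading, not the patent's exact procedure:

import numpy as np

def fit_anchor_boxes(box_wh: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster (w, h) box sizes under the standardized Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = box_wh[rng.choice(len(box_wh), size=k, replace=False)]
    std = box_wh.std(axis=0)  # per-dimension deviation used by the metric
    for _ in range(iters):
        # Standardized Euclidean distance from every box to every center.
        d = np.sqrt((((box_wh[:, None, :] - centers[None, :, :]) / std) ** 2).sum(-1))
        labels = d.argmin(axis=1)
        new_centers = np.array([box_wh[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return np.round(centers[np.argsort(centers.prod(axis=1))])  # sorted by box area

# The nine resulting (w, h) anchors would then replace the "anchors=" lines of
# the darknet .cfg file, three per detection scale.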
The improved yolov3 model is built on the darknet53 backbone network and meets the detection requirements of the multi-target workpieces. In the target detection and defect identification method provided by the invention, the optimization of the yolov3 network structure is specifically as follows:
The original network model of the yolov3 target detection algorithm obtains detection results at three scales, 13 × 13 × 75, 26 × 26 × 75, and 52 × 52 × 75, through a series of downsampling steps, where 13, 26, and 52 are the sampling scales. The 75 channels decompose as 3 × (4+1+20): 3 is the number of detection boxes per scale; 4 is the position information of each detection box, comprising its width, height, and centre coordinates; 1 is the recognition probability; and 20 is the number of detectable target classes. In the improved network structure, the modified network meets the target detection of the four different types of multi-target workpieces and can also identify the different types of defective workpieces, giving outputs of 13 × 13 × 39, 26 × 26 × 39, and 52 × 52 × 39 at the three scales; by the same decomposition, 39 = 3 × (4+1+8), i.e. eight detectable classes.
Further, the step S2 is specifically:
S21: the Eye-To-Hand camera is calibrated with a HALCON-based calibration-plate method;
S22: the Hand-Eye calibration yields the external parameters of the Eye-To-Hand camera, which are arranged in matrix form;
S23: the image coordinates obtained by the yolov3 detection model are combined with the external parameter matrix and converted into the world coordinates of the robot.
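A minimal sketch of S23, assuming a pinhole camera with intrinsic matrix K, an extrinsic pose (R, t) from the HALCON calibration-plate procedure with the convention X_cam = R · X_world + t, and workpieces lying on the bench plane Z_world = 0 so that one image point fixes one world point. The matrices are placeholders, not values from the patent:

import numpy as np

def pixel_to_world(u: float, v: float, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map an image point (u, v) to world coordinates on the plane Z_world = 0."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project the pixel to a camera ray
    # Choose the scale s so that the world point R.T @ (s * ray - t) has Z = 0.
    s = (R.T @ t)[2] / (R.T @ ray)[2]
    return R.T @ (s * ray - t)  # world coordinates; Z component is 0 by construction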
Further, the step S5 is specifically:
S51: after photographing the single target workpiece, the Eye-In-Hand camera image undergoes preprocessing operations such as noise reduction; adaptive binarization of the preprocessed image then yields the edge feature information of the columnar workpiece.
S52: from the circular edge feature information, the circle contour of the columnar workpiece is fitted with an outlier-detection method, and the maximum outer-circle contour of the columnar workpiece is obtained with the maximum-value-constrained select_max_length_contour method, realizing high-precision visual positioning.
After the workpiece edge information has been fitted, the select_max_length_contour method applies a maximum-value constraint to the concentric circle contours of the columnar workpiece and returns its contour feature information. The method initializes the longest length and the longest-length index, traverses the extracted contour lengths while storing the length and index of the longest contour, and finally returns the index of the longest contour, as sketched below.
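A sketch of steps S51-S52, re-created with OpenCV under the assumption that the patent's select_max_length_contour operator is a HALCON procedure whose behaviour matches the description above; this is an illustrative stand-in, not the original implementation:

import cv2
import numpy as np

def select_max_length_contour(binary: np.ndarray) -> np.ndarray:
    """Return the longest contour, i.e. the outermost circle of the workpiece."""
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no contours found in the binary image")
    longest_len, longest_idx = 0.0, 0        # initialize longest length and its index
    for i, contour in enumerate(contours):   # traverse the extracted contour lengths
        length = cv2.arcLength(contour, closed=True)
        if length > longest_len:             # keep the longest (outer-circle) profile
            longest_len, longest_idx = length, i
    return contours[longest_idx]

# Usage sketch: adaptive binarization (S51), outer-contour selection (S52),
# then a circle fit; cv2.minEnclosingCircle stands in for the patent's fit.
# gray = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)
# binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
#                                cv2.THRESH_BINARY, 51, 2)
# (cx, cy), radius = cv2.minEnclosingCircle(select_max_length_contour(binary))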
The method achieves micron-level classification and positioning accuracy for columnar workpieces and can identify several different types of columnar workpiece, with a recognition accuracy above 90% and a recognition speed above 50 fps.
Compared with the prior art, the invention has the following advantages:
1. The method performs high-precision positioning of the multi-target workpieces on the test bench and, in cooperation with the manipulator, fully automates their clamping, carrying, and assembly with no manual intervention, greatly improving production efficiency.
2. The Eye-To-Hand camera fixed above the test bench automatically performs target detection on the multi-target workpieces based on the improved yolov3 model, completing coarse positioning and, at the same time, detecting defects on defective workpieces.
3. The coordinate position returned by coarse positioning is transmitted to the manipulator, which carries the Eye-In-Hand camera to above the target workpiece for high-precision positioning. The method thus combines the recognition flexibility of deep-learning target detection, which by itself lacks positioning accuracy, with the positioning accuracy of traditional machine vision inspection, which by itself handles only a single recognition feature.
Drawings
FIG. 1 is a schematic view of a camera layout according to the present invention.
FIG. 2 is a schematic flow chart of the columnar workpiece classification and high-precision positioning method based on target detection and machine vision provided by the invention.
FIG. 3 is a schematic diagram of the improved yolov3 target detection model structure according to the invention.
FIG. 4 is a flow chart of the select_max_length_contour algorithm employed in the invention.
FIG. 5 is an effect diagram of a columnar workpiece classification high-precision positioning method based on target detection and machine vision.
FIG. 6 is a flow chart of the present invention.

Claims (9)

1. A columnar workpiece classification high-precision positioning method based on target detection and machine vision is characterized by comprising the following steps:
acquiring images of the multi-target workpieces with the Eye-To-Hand camera of the experimental platform; the experimental platform comprises a manipulator, a vision control system, an Eye-To-Hand camera, and an Eye-In-Hand camera; the Eye-To-Hand camera is fixed directly above the test bed at a working distance from the platform such that the different types of multi-target workpieces are imaged within the field of view;
S1: the Eye-To-Hand camera acquires images of the multi-target workpieces on the test bed; the acquired images are used to train a yolov3 model with an improved network structure, and the trained multi-target detection model performs target detection to obtain the category of each target workpiece and its coarse image coordinates;
S2: based on coordinate transformation, Hand-Eye calibration between the Eye-To-Hand camera and the manipulator end-effector is performed with a calibration-plate method; the coarse image coordinates of the multi-target workpieces are combined with the Hand-Eye calibration parameters to solve the world coordinates of each target workpiece, and the category of each target workpiece is returned;
S3: during training, the improved yolov3 model is trained on the multiple target workpiece types and on the typical defects of each workpiece; when the trained model performs target detection, the key defects of scratch and unfilled-corner targets are identified;
S4: the coarse workpiece coordinates identified by the improved yolov3 model are transmitted to the vision control system over a communication protocol, and the control system sends the position coordinates to the manipulator; the vision control system runs on an industrial personal computer; the Eye-In-Hand camera is mounted on the manipulator end-effector and moves with the manipulator to above the target workpiece;
S5: the Eye-In-Hand camera moves to above the workpiece, which rests on the test bed, and acquires its image; the system performs image processing and feature extraction on the acquired image to obtain the key feature coordinates of the workpiece, and, combined with the Hand-Eye calibration parameters of the Eye-In-Hand camera, the high-precision world coordinates of the workpiece are obtained and sent to the vision system;
S6: the system processor guides the manipulator to clamp, carry, or assemble according to the high-precision coordinates;
S7: steps S4-S6 are repeated to position target workpieces of different types with high precision, realizing the high-precision positioning of the multi-target workpieces.
2. The method as claimed in claim 1, wherein the workpiece is a shaft part; the multi-target workpieces comprise four different types of workpieces; the cameras and the vision system communicate over the GigE protocol to transmit images; and the vision system and the manipulator communicate over the TCP/IP protocol to transmit position coordinates.
3. The method as claimed in claim 1, wherein the step S1 specifically comprises:
S11: acquiring images of the targets to be detected with the Eye-To-Hand camera above the test bench, and labelling and classifying the different workpiece types after acquisition to build a training data set;
S12: augmenting the training data set, and inputting the augmented data set into the improved yolov3 model for training to obtain a parameter model;
S13: inputting the original multi-target workpiece image to be identified into the trained improved yolov3 model, and outputting the corresponding defect detection and classification results with coarse positions;
S14: measuring the candidate-frame parameters of the training set with a vector similarity metric, statistically analysing them under the standardized Euclidean distance, and writing the parameters with the minimum error into the configuration file, improving the yolov3 detection framework.
4. The method as claimed in claim 1, wherein the improved yolov3 model is built on the darknet53 backbone network so as to meet the detection requirements of the multi-target workpieces.
5. The method as claimed in claim 1, wherein the original network model of the yolov3 target detection algorithm obtains detection results at three scales of 13 × 13 × 75, 26 × 26 × 75, and 52 × 52 × 75 through a series of downsampling steps, wherein 13, 26, and 52 represent the sampling scales; the 75 channels decompose as 3 × (4+1+20), wherein 3 represents the detection boxes at three scales, 4 represents the position information of each detection box, comprising its width, height, and centre coordinates, 1 represents the recognition probability, and 20 represents the detectable target classes; in the improved network structure, the modified network meets the target detection of the four different types of multi-target workpieces and identifies the different types of defective workpieces, obtaining outputs of 13 × 13 × 39, 26 × 26 × 39, and 52 × 52 × 39 at the three scales.
6. The method as claimed in claim 1, wherein the step S2 specifically comprises:
S21: calibrating the Eye-To-Hand camera with a HALCON-based calibration-plate method;
S22: obtaining the external parameters of the Eye-To-Hand camera from the Hand-Eye calibration, and arranging the parameters in matrix form;
S23: combining the image coordinates obtained by the yolov3 detection model with the external parameter matrix, and converting the obtained image coordinates into the world coordinates of the robot.
7. The method as claimed in claim 1, wherein the step S5 specifically comprises:
S51: after the Eye-In-Hand camera photographs the single target workpiece, performing operations such as image preprocessing and noise reduction, and applying adaptive binarization to the preprocessed image to obtain the edge feature information of the columnar workpiece;
S52: according to the circular edge feature information, fitting the circle contour of the columnar workpiece with an outlier-detection method, and obtaining the maximum outer-circle contour of the columnar workpiece with the maximum-value-constrained select_max_length_contour method, realizing high-precision visual positioning.
8. The method as claimed in claim 1, wherein the select_max_length_contour method applies a maximum-value constraint to the concentric contours of the columnar workpiece obtained after fitting the workpiece edge information, and returns the contour feature information of the columnar workpiece; the method initializes the longest length and the longest-length index, traverses the extracted contour lengths, stores the length and index of the longest contour, and finally returns the index of the longest contour.
9. The method as claimed in claim 1, wherein the method achieves micron-level classification and positioning accuracy for columnar workpieces and can identify a plurality of different types of columnar workpiece.

