CN113269234B - Connecting piece assembly detection method and system based on target detection - Google Patents
- Publication number
- CN113269234B CN113269234B CN202110504071.5A CN202110504071A CN113269234B CN 113269234 B CN113269234 B CN 113269234B CN 202110504071 A CN202110504071 A CN 202110504071A CN 113269234 B CN113269234 B CN 113269234B
- Authority
- CN
- China
- Prior art keywords
- image
- connecting piece
- target detection
- predicted
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to a connecting piece assembly detection method and system based on target detection. The method comprises: collecting original images and producing sample images; training a target detection neural network model with the sample images to obtain an optimal target detection neural network model and sample prediction graphs; selecting a standard graph from the sample prediction graphs and producing a characteristic image set; acquiring real-time images during assembly, producing images to be predicted, and predicting them with the optimal target detection neural network model to obtain predicted images; comparing each predicted image with the characteristic images to determine each connecting piece in the predicted image; and determining, from the prediction graphs at the current and previous moments, the connecting piece assembled at the current moment, its position, and the assembly condition. The invention realizes connecting piece identification and assembly process judgment while ensuring detection accuracy and real-time performance, and is well suited to the field of mechanical assembly.
Description
Technical Field
The invention relates to a connecting piece assembly detection method and system based on target detection, and belongs to the technical field of intelligent manufacturing and assembly process monitoring.
Background
Mechanical assembly is an important part of the mechanical manufacturing industry: it is the process of joining mechanical parts and components according to technical requirements to complete a machine. Assembly process monitoring is an important means of ensuring product quality in mechanical assembly. At present there are three main types of assembly process monitoring: monitoring the operator, monitoring the assembly body, and monitoring the assembly force/moment. A common means of monitoring the assembly body is to identify the various connecting pieces on it and their postures through machine vision technologies such as image recognition, target detection and instance segmentation, and thereby detect wrong or neglected assembly.
At present, three main types of methods use image information to identify mechanical assembly connecting pieces. The first is traditional template matching; because mechanical assembly connecting pieces are similar in shape and color, applying it directly leads to large identification errors and low identification precision. The second is object detection; it can box the connecting pieces in an image and is fast, but it cannot identify the serial number of a detected connecting piece and is not suitable for object identification in a changing scene. The third is instance segmentation, which can simultaneously identify the serial number, type and position of a mechanical assembly connecting piece, but it is computationally heavy, requires high-performance computing equipment, and has poor detection real-time performance.
Disclosure of Invention
In order to overcome the problems, the invention provides a connecting piece assembly detection method and a connecting piece assembly detection system based on target detection, which can realize connecting piece identification and assembly process judgment on the premise of ensuring detection accuracy and detection real-time performance and are more suitable for the field of mechanical assembly.
The technical scheme of the invention is as follows.
The first technical scheme is as follows:
a connecting piece assembly detection method based on target detection comprises the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting a sample prediction image as a standard image, and acquiring a plurality of characteristic images through the standard image, wherein each characteristic image comprises a connecting piece and is integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image at each moment with the characteristic images in the characteristic image set to determine each connecting piece in each predicted image;
selecting predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool predicted at the current moment; and judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments.
Further, the preprocessing the original image comprises denoising the image, converting the image into a set size and normalizing; the original image comprises RGB information and depth image information; and the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames.
Further, the step of establishing a target detection neural network model for detecting the connecting piece and the assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction maps specifically comprises the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after the set training batch is reached, verifying the prediction effect of the target detection neural network model by using the verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
and repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs.
Further, the selecting a sample prediction graph as a standard graph, obtaining a plurality of feature images through the standard graph, each feature image including a connector, and integrating the feature images into a feature image set specifically includes the following steps:
selecting the sample prediction graphs which contain the most connecting pieces from a plurality of sample prediction graphs as standard graphs;
marking the type and the serial number of each connecting piece;
setting a threshold k (k > 0) and, for each detection frame in the standard graph with center point coordinates (x, y) and size (w, h), intercepting the region of width (1+k)·w and height (1+k)·h centred on (x, y) as a characteristic image; the characteristic image includes the corresponding connecting piece and the information surrounding it.
Further, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembling body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
inputting a to-be-predicted image into the optimal target detection neural network model to predict the to-be-predicted image to obtain a predicted image;
further, the step of comparing the predicted image at each moment with the characteristic images in the characteristic image set to determine each connecting piece in each predicted image specifically includes the following steps:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; the comparison method is a characteristic point matching method;
and marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame.
Further, the predicted images at the current moment and the last moment are selected; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool at the current moment; the step of judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments specifically comprises the following steps:
calculating the vertex coordinates of the assembly tool detection frame in the current predicted image, with the formulas:
(x₁, y₁) = (x + w, y + h);
(x₁, y₂) = (x + w, y - h);
(x₂, y₂) = (x - w, y - h);
(x₂, y₁) = (x - w, y + h);
where (x, y) are the center coordinates of the assembly tool detection frame, (w, h) is the assembly tool detection frame size, and (x₁, y₁), (x₁, y₂), (x₂, y₂), (x₂, y₁) are the vertex coordinates of the upper right, lower right, lower left and upper left corners of the assembly tool detection frame, respectively;
Determining the assembled connecting pieces at the current moment through the center coordinates of the assembling tool detection frame and the connecting piece detection frames;
and judging the relative position of the assembling tool and the assembled connecting piece from the detection frames, with the formulas:
x_p = x_w - x_b;
y_p = y_w - y_b;
where (x_p, y_p) are the relative coordinates, (x_w, y_w) are the center coordinates of the assembly tool detection frame, and (x_b, y_b) are the center coordinates of the assembled connecting piece detection frame;
if x_p is positive and y_p is negative, the assembled connecting piece is at the upper left of the assembling tool; if x_p and y_p are both positive, it is at the lower left of the assembling tool; if x_p is negative and y_p is positive, it is at the lower right of the assembling tool; if x_p and y_p are both negative, it is at the upper right of the assembling tool;
calculating the vertex coordinates of the assembled connecting piece detection frame at the current moment, with the formulas:
(x′₁, y′₁) = (x′ + w′, y′ + h′);
(x′₁, y′₂) = (x′ + w′, y′ - h′);
(x′₂, y′₂) = (x′ - w′, y′ - h′);
(x′₂, y′₁) = (x′ - w′, y′ + h′);
where (x′, y′) are the center coordinates of the assembled connecting piece detection frame, (w′, h′) is the detection frame size, and (x′₁, y′₁), (x′₁, y′₂), (x′₂, y′₂), (x′₂, y′₁) are the vertex coordinates of the upper right, lower right, lower left and upper left corners of the detection frame, respectively;
selecting two vertex coordinates of the assembled connecting piece detection frame according to its position relative to the assembly tool detection frame: if the assembled connecting piece is at the upper left or lower right of the assembling tool, selecting (x′₁, y′₂) and (x′₂, y′₁); if it is at the upper right or lower left of the assembling tool, selecting (x′₁, y′₁) and (x′₂, y′₂);
obtaining a vector α from the two selected coordinates;
calculating in the same way the vertex coordinates of the assembly tool and assembled connecting piece detection frames in the prediction graph at the previous moment, and obtaining the vectors α₁ and α₂ at the two acquisition moments T₁ and T₂ of the predicted images;
the second technical scheme is as follows:
a connecting piece assembly detection system based on target detection comprises an image acquisition module, a target detection module, an assembly part detection module and an assembly tool analysis module;
the image acquisition module acquires real-time images of the operator assembling connecting pieces, preprocesses them, and obtains the images to be predicted;
the target detection module is stored with the optimal target detection neural network model of any one of claims 1 to 7, and can predict the image to be predicted and obtain a predicted image;
the assembled part detection module is stored with the characteristic image set, and can compare the predicted image with the characteristic images in the characteristic image set and determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting pieces at the current moment according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment, and can also determine the assembling conditions of the connecting pieces according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment and the previous moment;
the system can finally output the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembling condition; the assembly condition includes an assembly tool rotation angle.
The invention has the following beneficial effects:
1. the detection method and the detection system can detect the assembled connecting piece in real time and feed back the assembling information at the current moment to an operator.
2. The detection method and the system have high identification precision, and avoid feedback information errors caused by identification errors.
3. The detection method and the system have high identification speed, meet the requirement of real-time feedback, and can simultaneously feed back the type, the serial number and the position information of the connecting piece.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of an embodiment of the present invention.
FIG. 3 is a flow chart of the assembled part detection module operation of an embodiment of the present invention.
FIG. 4 is an assembly tool analysis module workflow diagram of an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Referring to fig. 1-4, a method for detecting the assembly of a connector based on object detection includes the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting one sample prediction graph as a standard graph, and acquiring a plurality of characteristic images through the standard graph, wherein each characteristic image comprises a connecting piece, and all the characteristic images are integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image at each moment with the characteristic images in the characteristic image set to determine each connecting piece in each predicted image;
selecting predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool predicted at the current moment; and judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments.
In at least one embodiment, the preprocessing the original image includes denoising, converting to a set size, and normalizing the image; the original image comprises RGB information and depth image information; and the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames.
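As a minimal, illustrative sketch of this preprocessing step (the denoising stage is omitted, the resizing strategy is an assumption, and the function name is hypothetical; the patent does not specify an implementation):

```python
def preprocess(pixels, size):
    """Resize a grayscale image (2D list, values 0-255) to `size` by
    nearest-neighbour sampling and normalize values to [0, 1].

    Denoising, which the embodiment also requires, is omitted here;
    an RGB image would be handled one channel at a time.
    """
    h, w = len(pixels), len(pixels[0])
    th, tw = size
    out = []
    for i in range(th):
        row = []
        for j in range(tw):
            # nearest-neighbour resize to the set network input size
            v = pixels[i * h // th][j * w // tw]
            row.append(v / 255.0)  # normalization to [0, 1]
        out.append(row)
    return out
```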
In at least one embodiment, the establishing a target detection neural network model for detecting a connection component and an assembly tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction maps specifically includes the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after a set training batch is reached, verifying the prediction effect of the target detection neural network model by applying a verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
and repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs.
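The dataset division in the steps above can be sketched as follows; the 70/15/15 split ratio, the seed and the function name are illustrative assumptions, since the embodiment only says the data are divided "according to the proportion":

```python
import random

def split_dataset(samples, ratios=(0.7, 0.15, 0.15), seed=0):
    """Shuffle labelled samples and split them into training,
    test and verification sets in the given proportions."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    verification = shuffled[n_train + n_test:]
    return train, test, verification
```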
In at least one embodiment, the selecting a sample prediction map as a standard map, obtaining a plurality of feature images from the standard map, each feature image comprising a connector, and integrating the feature images into a feature image set comprises:
selecting the sample prediction graphs which contain the most connecting pieces from a plurality of sample prediction graphs as standard graphs;
marking the type and the serial number of each connecting piece;
setting a threshold k (k > 0) and, for each detection frame in the standard graph with center point coordinates (x, y) and size (w, h), intercepting the region of width (1+k)·w and height (1+k)·h centred on (x, y) as a characteristic image; the characteristic image includes the corresponding connecting piece and the information surrounding it.
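A sketch of the characteristic-image interception, assuming each detection frame is given by its center (cx, cy) and size (w, h) in pixels; the clamping to the image bounds is an added assumption not stated in the text:

```python
def crop_feature(cx, cy, w, h, k, img_w, img_h):
    """Return the (left, top, right, bottom) crop of width (1+k)*w
    and height (1+k)*h centred on the detection box centre (cx, cy),
    clamped to the image bounds."""
    half_w = (1 + k) * w / 2
    half_h = (1 + k) * h / 2
    left = max(0, cx - half_w)
    top = max(0, cy - half_h)
    right = min(img_w, cx + half_w)
    bottom = min(img_h, cy + half_h)
    return left, top, right, bottom
```

The enlarged region keeps the surrounding context the embodiment mentions, which is what makes visually similar connecting pieces distinguishable later.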
In at least one embodiment, the image to be detected at the current moment is acquired, the image to be detected is a real-time image in the assembly process, and the image to be detected is preprocessed to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembly body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
inputting a to-be-predicted image into the optimal target detection neural network model to predict the to-be-predicted image to obtain a predicted image;
in at least one embodiment, the step of comparing the predicted image at each moment with the characteristic images in the characteristic image set to determine each connecting piece in each predicted image specifically includes the following steps:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; the comparison method is a characteristic point matching method;
and marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame.
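A simplified stand-in for the comparison step: here each detected connecting piece is matched to the characteristic image with the nearest descriptor vector, whereas a real implementation would use keypoint-based feature matching (e.g. ORB or SIFT descriptors); the data layout and names are illustrative:

```python
def best_match(detected_descriptor, feature_set):
    """Pick the characteristic image whose descriptor is closest
    (Euclidean distance) to the detected connecting piece's descriptor.

    `feature_set` is a list of dicts with "id" and "descriptor" keys;
    ties are resolved by list order in this sketch."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(feature_set, key=lambda f: dist(f["descriptor"], detected_descriptor))
```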
In at least one embodiment, the selecting of the predicted images at the current time and the previous time; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool at the current moment; the step of judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments specifically comprises the following steps:
calculating the vertex coordinates of the assembly tool detection frame in the current predicted image, with the formulas:
(x₁, y₁) = (x + w, y + h);
(x₁, y₂) = (x + w, y - h);
(x₂, y₂) = (x - w, y - h);
(x₂, y₁) = (x - w, y + h);
where (x, y) are the center coordinates of the assembly tool detection frame, (w, h) is the assembly tool detection frame size, and (x₁, y₁), (x₁, y₂), (x₂, y₂), (x₂, y₁) are the vertex coordinates of the upper right, lower right, lower left and upper left corners of the assembly tool detection frame, respectively;
determining the assembled connecting piece at the current moment through the center coordinates of the assembling tool detection frame and each connecting piece detection frame;
and judging the relative position of the assembling tool and the assembled connecting piece from the detection frames, with the formulas:
x_p = x_w - x_b;
y_p = y_w - y_b;
where (x_p, y_p) are the relative coordinates, (x_w, y_w) are the center coordinates of the assembly tool detection frame, and (x_b, y_b) are the center coordinates of the assembled connecting piece detection frame;
if x_p is positive and y_p is negative, the assembled connecting piece is at the upper left of the assembling tool; if x_p and y_p are both positive, it is at the lower left of the assembling tool; if x_p is negative and y_p is positive, it is at the lower right of the assembling tool; if x_p and y_p are both negative, it is at the upper right of the assembling tool;
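The sign rules above translate directly into code (the quadrant names follow the patent text; the function name is illustrative, and ties where x_p or y_p is exactly zero are not addressed by the text and fall through to the last case here):

```python
def relative_position(tool_center, part_center):
    """Classify where the assembled connecting piece sits relative to
    the assembly tool, from the two detection-frame centers.

    Implements: x_p = x_w - x_b, y_p = y_w - y_b, then the four
    sign combinations described in the text."""
    xp = tool_center[0] - part_center[0]
    yp = tool_center[1] - part_center[1]
    if xp > 0 and yp < 0:
        return "upper left"
    if xp > 0 and yp > 0:
        return "lower left"
    if xp < 0 and yp > 0:
        return "lower right"
    return "upper right"  # both negative (zero cases unspecified)
```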
calculating the vertex coordinates of the detection frame of the assembled connecting piece at the current moment, wherein the formula is as follows:
(x′ 1 ,y′ 1 )=(x′+w′,y′+h′);
(x′ 1 ,y′ 2 )=(x′+w′,y′-h′);
(x′ 2 ,y′ 2 )=(x′-w′,y′-h′);
(x′ 2 ,y′ 1 )=(x′-w′,y′+h′);
wherein, (x ', y') is the central coordinate of the detection frame of the connected piece; (w ', h') is the size of the detection frame of the connected piece; (x' 1 ,y′ 1 ),(x′ 1 ,y′ 2 ),(x′ 2 ,y′ 2 ),(x′ 2 ,y′ 1 ) Respectively representing the vertex coordinates of the lower right corner, the lower left corner and the upper left corner of the detection frame of the connected piece;
selecting vertex coordinates of the assembled connecting piece detection frame according to the relative positions of the assembling tool detection frame and the assembled connecting piece detection frame: if the assembled connecting piece is located at the upper left or lower right of the assembling tool, the coordinates (x′_1, y′_2) and (x′_2, y′_1) are selected; if the assembled connecting piece is located at the upper right or lower left of the assembling tool, the coordinates (x′_1, y′_1) and (x′_2, y′_2) are selected;
obtaining a vector α from the two selected coordinates;
calculating in the same way the vertex coordinates of the assembling tool and assembled connecting piece detection frames in the prediction image at the previous moment, thereby obtaining the vectors α_T1 and α_T2 at the two moments, where T1 and T2 are respectively the acquisition moments of the two predicted images; the rotation angle of the assembling tool is obtained from the angle between α_T1 and α_T2;
and outputting the type and serial number of the assembled connecting piece, the center coordinates of its detection frame, and the rotation angle of the assembling tool.
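A minimal sketch of the geometry described above: vertex computation from a frame center and half-extents, diagonal selection by relative position, and the angle between the vectors at the two moments. All function names are illustrative, and the signed-angle convention is an assumption, since the text only states that a rotation angle is output:

```python
import math

def box_vertices(cx, cy, w, h):
    """Vertices of a detection frame with center (cx, cy) and half-extents
    (w, h) (frame size 2w x 2h), in the order used above:
    (x1, y1), (x1, y2), (x2, y2), (x2, y1)."""
    return [(cx + w, cy + h), (cx + w, cy - h),
            (cx - w, cy - h), (cx - w, cy + h)]

def diagonal_vector(piece_box, position):
    """Vector alpha between the two vertices of the connecting-piece frame
    that are selected according to the piece's position relative to the tool."""
    v = box_vertices(*piece_box)
    if position in ("upper left", "lower right"):
        a, b = v[1], v[3]          # (x'1, y'2) and (x'2, y'1)
    else:                          # "upper right" or "lower left"
        a, b = v[0], v[2]          # (x'1, y'1) and (x'2, y'2)
    return (b[0] - a[0], b[1] - a[1])

def rotation_angle(alpha_t1, alpha_t2):
    """Signed angle in degrees between the vectors at moments T1 and T2,
    wrapped to (-180, 180]."""
    ang = math.atan2(alpha_t2[1], alpha_t2[0]) - math.atan2(alpha_t1[1], alpha_t1[0])
    return math.degrees(math.atan2(math.sin(ang), math.cos(ang)))
```

For example, if the selected diagonal turns from (1, 0) at T1 to (0, 1) at T2, `rotation_angle` reports 90 degrees.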
A connecting piece assembly detection system based on target detection comprises an image acquisition module, a target detection module, an assembled part detection module and an assembling tool analysis module;
the image acquisition module acquires a real-time image of an operator assembling a connecting piece and preprocesses it to obtain an image to be predicted;
the target detection module stores the optimal target detection neural network model of any one of claims 1 to 7 and can predict the image to be predicted to obtain a predicted image;
the assembled part detection module stores the characteristic image set and can compare the predicted image with the characteristic images in the characteristic image set to determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting piece at the current moment according to the relation between the assembling tool and the connecting pieces in the predicted image at the current moment, and can also determine the assembly condition of the connecting piece according to the relations between the assembling tool and the connecting pieces in the predicted images at the current moment and the previous moment;
the system finally outputs the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembly condition; the assembly condition includes the rotation angle of the assembling tool.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structures made by using the contents of the specification and drawings of the present invention, or applied directly or indirectly in other related technical fields, are likewise included in the scope of the present invention.
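The characteristic-image extraction step (a region of width (1+k)w and height (1+k)h around each detection-frame center, so that each crop keeps some context around its connecting piece) can be sketched as follows. The function name, the boundary clamping, and the example value of k are assumptions for this sketch; w and h are taken here as the full frame width and height, which the text does not state explicitly:

```python
def crop_feature_image(image, box, k=0.2):
    """Crop a region of width (1+k)*w and height (1+k)*h around the
    detection-frame center (x, y), keeping some context around the piece.
    image: an H x W (x C) array; box = (x, y, w, h) with w, h the full
    frame width and height (assumed); k > 0 is the margin threshold."""
    x, y, w, h = box
    half_w = (1 + k) * w / 2
    half_h = (1 + k) * h / 2
    H, W = image.shape[:2]
    # Clamp to the image bounds so crops near the border stay valid.
    x0 = max(0, int(round(x - half_w)))
    x1 = min(W, int(round(x + half_w)))
    y0 = max(0, int(round(y - half_h)))
    y1 = min(H, int(round(y + half_h)))
    return image[y0:y1, x0:x1]
```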
Claims (2)
1. A connecting piece assembly detection method based on target detection is characterized by comprising the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting a sample prediction image as a standard image, and acquiring a plurality of characteristic images through the standard image, wherein each characteristic image comprises a connecting piece and is integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image with the characteristic image in the characteristic image set at each moment to determine each connecting piece in each predicted image;
selecting predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool predicted at the current moment; judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments;
preprocessing the original image, namely denoising the image, converting the image into a set size and normalizing the image; the original image comprises RGB information and depth image information; the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames;
the step of establishing a target detection neural network model for detecting the connecting piece and the assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction images specifically comprises the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after the set training batch is reached, verifying the prediction effect of the target detection neural network model by using the verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs;
the method comprises the following steps of selecting a sample prediction graph as a standard graph, obtaining a plurality of characteristic images through the standard graph, wherein each characteristic image comprises a connecting piece, and integrating the characteristic images into a characteristic image set specifically comprises the following steps:
selecting, from the plurality of sample prediction graphs, the sample prediction graph that contains the most connecting pieces as the standard graph;
marking the type and the serial number of each connecting piece;
setting a threshold value k (k > 0), and intercepting, according to the center point coordinates (x, y) of each detection frame in the standard graph, a region with width (1+k)w and height (1+k)h as a characteristic image; the characteristic image comprises the corresponding connecting piece and the information around the connecting piece;
the method comprises the steps of collecting an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembling process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembly body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
inputting a to-be-predicted image into the optimal target detection neural network model to predict the to-be-predicted image to obtain a predicted image;
the step of comparing the predicted image at each moment with the feature images in the feature image set and determining each connecting piece in each predicted image specifically includes the following steps:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; comparing the characteristic images by adopting a characteristic point matching method;
marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame;
selecting predicted images at the current moment and the last moment; determining the assembled connecting piece at the current moment according to the relation between the image connecting piece and the assembling tool predicted at the current moment; the step of judging the assembling condition according to the relation between the connecting piece and the assembling tool at two moments specifically comprises the following steps:
calculating the vertex coordinates of the detection frame of the assembly tool in the current-time prediction image, wherein the formula is as follows:
(x_1, y_1) = (x + w, y + h);
(x_1, y_2) = (x + w, y - h);
(x_2, y_2) = (x - w, y - h);
(x_2, y_1) = (x - w, y + h);
wherein (x, y) are the center coordinates of the assembling tool detection frame; (2w, 2h) is the assembling tool detection frame size; (x_1, y_1), (x_1, y_2), (x_2, y_2), (x_2, y_1) are respectively the vertex coordinates of the upper right, lower right, lower left and upper left corners of the assembling tool detection frame;
determining the assembled connecting piece at the current moment through the center coordinates of the assembling tool detection frame and each connecting piece detection frame;
and judging the relative positions of the assembling tool and the assembled connecting piece according to the detection frame, wherein the formula is as follows:
x_p = x_w - x_b;
y_p = y_w - y_b;
wherein (x_p, y_p) are the relative coordinates; (x_w, y_w) are the center coordinates of the assembling tool detection frame; (x_b, y_b) are the center coordinates of the assembled connecting piece detection frame;
if x_p is positive and y_p is negative, the assembled connecting piece is located at the upper left of the assembling tool; if x_p and y_p are both positive, the assembled connecting piece is located at the lower left of the assembling tool; if x_p is negative and y_p is positive, the assembled connecting piece is located at the lower right of the assembling tool; if x_p and y_p are both negative, the assembled connecting piece is located at the upper right of the assembling tool;
calculating the vertex coordinates of the detection frame of the assembled connecting piece at the current moment, wherein the formula is as follows:
(x′_1, y′_1) = (x′ + w′, y′ + h′);
(x′_1, y′_2) = (x′ + w′, y′ - h′);
(x′_2, y′_2) = (x′ - w′, y′ - h′);
(x′_2, y′_1) = (x′ - w′, y′ + h′);
wherein (x′, y′) are the center coordinates of the assembled connecting piece detection frame; (2w′, 2h′) is the size of the assembled connecting piece detection frame; (x′_1, y′_1), (x′_1, y′_2), (x′_2, y′_2), (x′_2, y′_1) are respectively the vertex coordinates of the upper right, lower right, lower left and upper left corners of the assembled connecting piece detection frame;
selecting vertex coordinates of the assembled connecting piece detection frame according to the relative positions of the assembling tool detection frame and the assembled connecting piece detection frame: if the assembled connecting piece is located at the upper left or lower right of the assembling tool, the coordinates (x′_1, y′_2) and (x′_2, y′_1) are selected; if the assembled connecting piece is located at the upper right or lower left of the assembling tool, the coordinates (x′_1, y′_1) and (x′_2, y′_2) are selected;
obtaining a vector α from the two selected coordinates;
calculating the vertex coordinates of the assembling tool and assembled connecting piece detection frames in the prediction images at the previous moment and the current moment, thereby obtaining the vectors α_T1 and α_T2 at the two moments, where T1 and T2 are respectively the acquisition moments of the two predicted images.
2. A connecting piece assembly detection system based on target detection, characterized by comprising an image acquisition module, a target detection module, an assembled part detection module and an assembling tool analysis module;
the image acquisition module acquires a real-time image of an operator assembling a connecting piece and preprocesses it to obtain an image to be predicted;
the target detection module stores the optimal target detection neural network model of claim 1 and can predict the image to be predicted to obtain a predicted image;
the assembled part detection module stores the characteristic image set and can compare the predicted image with the characteristic images in the characteristic image set to determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting piece at the current moment according to the relation between the assembling tool and the connecting pieces in the predicted image at the current moment, and can also determine the assembly condition of the connecting piece according to the relations between the assembling tool and the connecting pieces in the predicted images at the current moment and the previous moment;
the system finally outputs the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembly condition; the assembly condition includes the rotation angle of the assembling tool.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110504071.5A CN113269234B (en) | 2021-05-10 | 2021-05-10 | Connecting piece assembly detection method and system based on target detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269234A CN113269234A (en) | 2021-08-17 |
CN113269234B true CN113269234B (en) | 2022-09-20 |
Family
ID=77230188
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113888494B (en) * | 2021-09-28 | 2022-08-30 | 广州市华颉电子科技有限公司 | Artificial intelligence interface pin quality detection method of automobile domain controller |
CN114782778B (en) * | 2022-04-25 | 2023-01-06 | 广东工业大学 | Assembly state monitoring method and system based on machine vision technology |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633267A (en) * | 2017-09-22 | 2018-01-26 | 西南交通大学 | A kind of high iron catenary support meanss wrist-arm connecting piece fastener recognition detection method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10885625B2 (en) * | 2019-05-10 | 2021-01-05 | Advanced New Technologies Co., Ltd. | Recognizing damage through image analysis |
CN111160440B (en) * | 2019-12-24 | 2023-11-21 | 广东省智能制造研究所 | Deep learning-based safety helmet wearing detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TR01 | Transfer of patent right ||
Effective date of registration: 20230522
Address after: No.2661, Jingxuan Road, Qingdao City, Shandong Province
Patentee after: QINGDAO QIANSHAO PRECISION INSTRUMENT Co.,Ltd.
Address before: No.777, Jialingjiang Road, Qingdao Economic and Technological Development Zone, Qingdao, Shandong 266000
Patentee before: Qindao University of Technology