CN113269234A - Connecting piece assembly detection method and system based on target detection

Connecting piece assembly detection method and system based on target detection

Info

Publication number: CN113269234A (application CN202110504071.5A)
Authority: CN (China)
Prior art keywords: image, connecting piece, predicted, target detection, neural network
Legal status: Granted
Application number: CN202110504071.5A
Other languages: Chinese (zh)
Other versions: CN113269234B (en)
Inventors: 陈成军, 黄凯, 刘庭煜, 李东年, 洪军
Current Assignee: Qingdao Qianshao Precision Instrument Co., Ltd.
Original Assignee: Qingdao University of Technology
Priority date: 2021-05-10
Filing date: 2021-05-10
Publication date: 2021-08-17
Application filed by Qingdao University of Technology
Priority to CN202110504071.5A
Publication of CN113269234A
Application granted
Publication of CN113269234B
Current legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention relates to a connecting piece assembly detection method and system based on target detection. The method comprises the steps of collecting original images and producing sample images; training a target detection neural network model with the sample images to obtain an optimal target detection neural network model and sample prediction graphs; selecting a standard graph from the sample prediction graphs and building a feature image set; collecting real-time images during the assembly process, producing images to be predicted, and predicting them with the optimal target detection neural network model to obtain predicted images; comparing the predicted images with the feature images to determine each connecting piece in the predicted images; and judging, from the prediction graphs at the current moment and the previous moment, the connecting piece assembled at the current moment, its position and the assembly condition. The invention realizes connecting piece identification and assembly process judgment while ensuring detection accuracy and real-time performance, and is well suited to the field of mechanical assembly.

Description

Connecting piece assembly detection method and system based on target detection
Technical Field
The invention relates to a connecting piece assembly detection method and system based on target detection, and belongs to the technical field of intelligent manufacturing and assembly process monitoring.
Background
Mechanical assembly is an important part of the mechanical manufacturing industry: it is the process of combining mechanical parts and components according to technical requirements to complete a machine. Assembly process monitoring is an important means of ensuring product quality in mechanical assembly. At present there are three main types of assembly process monitoring: monitoring the operators, monitoring the assembly body, and monitoring the assembly force/moment. A common way to monitor the assembly body is to identify the various connecting pieces on it and their postures through machine vision technologies such as image recognition, target detection and instance segmentation, and then to judge wrong-assembly and missed-assembly problems of the assembly body.
At present, three types of methods are mainly used to identify mechanical assembly connecting pieces from image information. The first type is the traditional matching algorithm; because the shapes and colors of mechanical assembly connecting pieces are similar, applying it directly leads to large identification errors and low identification precision. The second type is the object detection algorithm; it can frame the connecting pieces in the image and the identification speed is high, but it cannot identify the serial number of a detected connecting piece and is not suitable for object identification in a changing scene. The third type is the instance segmentation algorithm, which can simultaneously identify the serial number, type and position of a mechanical assembly connecting piece; however, the amount of computation is large, high-performance computing equipment is required, and the real-time detection performance is poor.
Disclosure of Invention
In order to overcome these problems, the invention provides a connecting piece assembly detection method and system based on target detection, which realize connecting piece identification and assembly process judgment while ensuring detection accuracy and real-time performance, and are well suited to the field of mechanical assembly.
The technical scheme of the invention is as follows.
The first technical scheme is as follows:
a connecting piece assembly detection method based on target detection comprises the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting a sample prediction image as a standard image, and acquiring a plurality of characteristic images through the standard image, wherein each characteristic image comprises a connecting piece and is integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image with the characteristic image in the characteristic image set at each moment to determine each connecting piece in each predicted image;
selecting the predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment from the relation between the connecting piece and the assembling tool in the image predicted at the current moment; and judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments.
Further, the preprocessing the original image comprises denoising the image, converting the image into a set size and normalizing; the original image comprises RGB information and depth image information; and the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames.
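By way of illustration only, the preprocessing described above (noise reduction, conversion to a set size, normalization) could be sketched in Python as follows; the 512×512 target size and the OpenCV non-local-means denoising call are assumptions of this sketch, not requirements of the invention:

    import cv2
    import numpy as np

    def preprocess(image_bgr, target_size=(512, 512)):
        # Noise reduction on the color frame (assumed denoising routine).
        denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)
        # Conversion to the set size expected by the detection network.
        resized = cv2.resize(denoised, target_size, interpolation=cv2.INTER_LINEAR)
        # Normalization of pixel values to the range [0, 1].
        return resized.astype(np.float32) / 255.0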
Further, the step of establishing a target detection neural network model for detecting the connecting piece and the assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction maps specifically comprises the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after the set training batch is reached, verifying the prediction effect of the target detection neural network model by using the verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
and repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs.
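A minimal sketch of the data split and model selection described above is given below; the 7:2:1 split ratio and the dictionary structure of the trained candidates are assumptions of the example, not part of the disclosure:

    import random

    def split_dataset(samples, train=0.7, val=0.2, test=0.1, seed=0):
        # Split (sample image, annotation document) pairs into training,
        # validation and test sets according to the chosen proportion.
        assert abs(train + val + test - 1.0) < 1e-6
        samples = list(samples)
        random.Random(seed).shuffle(samples)
        n_train = int(len(samples) * train)
        n_val = int(len(samples) * val)
        return (samples[:n_train],
                samples[n_train:n_train + n_val],
                samples[n_train + n_val:])

    def select_best_model(candidates):
        # Each candidate is assumed to be {"model": ..., "score": float},
        # where "score" is the test-set detection metric (e.g. mAP).
        return max(candidates, key=lambda c: c["score"])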
Further, the selecting a sample prediction graph as a standard graph, obtaining a plurality of feature images through the standard graph, each feature image including a connector, and integrating the feature images into a feature image set specifically includes the following steps:
selecting, from the plurality of sample prediction graphs, the sample prediction graph that contains the most connecting pieces as the standard graph;
marking the type and the serial number of each connecting piece;
setting a threshold k (k > 0), and, for each detection frame in the standard graph with center point coordinates (x, y), width w and height h, cutting out the region of width (1+k)·w and height (1+k)·h centered on (x, y) as a feature image; each feature image thus contains the corresponding connecting piece and its surrounding information.
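The feature-image cropping rule above can be written directly from the stated quantities; in this sketch (w, h) are taken as the full width and height of the detection frame, and k = 0.2 is only an example value:

    def crop_feature_image(standard_image, box, k=0.2):
        # box = (x, y, w, h): center coordinates and size of one detection frame.
        x, y, w, h = box
        half_w = (1 + k) * w / 2.0          # half of the enlarged width (1+k)*w
        half_h = (1 + k) * h / 2.0          # half of the enlarged height (1+k)*h
        img_h, img_w = standard_image.shape[:2]
        x0, x1 = max(int(x - half_w), 0), min(int(x + half_w), img_w)
        y0, y1 = max(int(y - half_h), 0), min(int(y + half_h), img_h)
        # The crop keeps the connecting piece plus its surrounding information.
        return standard_image[y0:y1, x0:x1]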
Further, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembly body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
inputting the image to be predicted into the optimal target detection neural network model to predict it and obtain a predicted image.
Further, the step of comparing the predicted image at each moment with the feature images in the feature image set to determine each connecting piece in each predicted image specifically includes the following steps:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; the comparison method is a characteristic point matching method;
and marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame.
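The description only requires "a characteristic point matching method"; as one possible realization (an assumption of this sketch, not a limitation of the invention), ORB key points with brute-force Hamming matching could be used to find the best-matching feature image for a cropped connecting piece, assuming 8-bit BGR images:

    import cv2

    def best_matching_feature(crop, feature_set, min_matches=10):
        # feature_set is assumed to map labels such as "bolt_03" to feature images.
        orb = cv2.ORB_create()
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        _, des_crop = orb.detectAndCompute(cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY), None)
        best_label, best_count = None, 0
        for label, feat in feature_set.items():
            _, des_feat = orb.detectAndCompute(cv2.cvtColor(feat, cv2.COLOR_BGR2GRAY), None)
            if des_crop is None or des_feat is None:
                continue
            matches = matcher.match(des_crop, des_feat)
            if len(matches) > best_count:
                best_label, best_count = label, len(matches)
        # Report the type/serial-number label only if enough key points agree.
        return best_label if best_count >= min_matches else None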
Further, the predicted images at the current moment and the previous moment are selected; the assembled connecting piece at the current moment is determined from the relation between the connecting piece and the assembling tool in the image predicted at the current moment; the step of judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments specifically comprises the following steps:
calculating the vertex coordinates of the detection frame of the assembly tool in the current-time prediction image, wherein the formula is as follows:
(x1, y1) = (x + w, y + h);
(x1, y2) = (x + w, y - h);
(x2, y2) = (x - w, y - h);
(x2, y1) = (x - w, y + h);
wherein (x, y) are the coordinates of the center of the assembling tool detection frame; (w, h) is the size of the assembling tool detection frame; (x1, y1), (x1, y2), (x2, y2), (x2, y1) are the vertex coordinates of the lower-right, upper-right, upper-left and lower-left corners of the assembling tool detection frame, respectively;
determining the assembled connecting piece at the current moment through the center coordinates of the assembling tool detection frame and each connecting piece detection frame;
and judging the relative positions of the assembling tool and the assembled connecting piece according to the detection frame, wherein the formula is as follows:
xp = xw - xb;
yp = yw - yb;
wherein (xp, yp) are the relative coordinates; (xw, yw) are the center coordinates of the assembling tool detection frame; (xb, yb) are the center coordinates of the assembled connecting piece detection frame;
if xp is positive and yp is negative, the assembled connecting piece is located at the upper left of the assembling tool; if xp and yp are both positive, it is located at the lower left of the assembling tool; if xp is negative and yp is positive, it is located at the lower right of the assembling tool; and if xp and yp are both negative, it is located at the upper right of the assembling tool;
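The sign rules above translate directly into code; the function name and the return strings below are illustrative only:

    def relative_position(tool_center, part_center):
        # xp = xw - xb, yp = yw - yb, exactly as defined above.
        xw, yw = tool_center
        xb, yb = part_center
        xp, yp = xw - xb, yw - yb
        if xp > 0 and yp < 0:
            return "upper left"    # assembled connecting piece is upper left of the tool
        if xp > 0 and yp > 0:
            return "lower left"
        if xp < 0 and yp > 0:
            return "lower right"
        if xp < 0 and yp < 0:
            return "upper right"
        return "aligned"           # xp or yp is zero; not covered by the four cases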
calculating the vertex coordinates of the detection frame of the assembled connecting piece at the current moment, wherein the formula is as follows:
(x′1, y′1) = (x′ + w′, y′ + h′);
(x′1, y′2) = (x′ + w′, y′ - h′);
(x′2, y′2) = (x′ - w′, y′ - h′);
(x′2, y′1) = (x′ - w′, y′ + h′);
wherein (x′, y′) are the center coordinates of the assembled connecting piece detection frame; (w′, h′) is the size of that detection frame; (x′1, y′1), (x′1, y′2), (x′2, y′2), (x′2, y′1) are the vertex coordinates of its lower-right, upper-right, upper-left and lower-left corners, respectively;
selecting two vertex coordinates of the assembled connecting piece detection frame according to the relative position of the assembling tool detection frame and the assembled connecting piece detection frame: if the assembled connecting piece is located at the upper left or lower right of the assembling tool, selecting the coordinates (x′1, y′2) and (x′2, y′1); if it is located at the upper right or lower left of the assembling tool, selecting the coordinates (x′1, y′1) and (x′2, y′2);
obtaining a vector α from the two selected coordinates;
calculating, in the same way, the vertex coordinates of the assembling tool detection frame and of the assembled connecting piece detection frame in the prediction graph at the previous moment, so as to obtain the vectors αT1 and αT2 at the two moments, wherein T1 and T2 are the acquisition moments of the two predicted images;
calculating the included angle between the two vectors αT1 and αT2; this included angle is the rotation angle of the assembling tool.
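The rotation angle can be computed as the included angle between the two diagonal vectors; a short sketch follows, in which degrees are an assumed output unit:

    import math

    def rotation_angle(alpha_t1, alpha_t2):
        # Each argument is a 2-D vector (dx, dy) built from the two selected
        # vertex coordinates of the connecting piece detection frame at T1 / T2.
        (x1, y1), (x2, y2) = alpha_t1, alpha_t2
        dot = x1 * x2 + y1 * y2
        norm = math.hypot(x1, y1) * math.hypot(x2, y2)
        if norm == 0:
            return 0.0
        cos_a = max(-1.0, min(1.0, dot / norm))   # clamp against rounding errors
        return math.degrees(math.acos(cos_a))     # included angle = tool rotation angle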
the second technical scheme is as follows:
a connecting piece assembly detection system based on target detection comprises an image acquisition module, a target detection module, an assembly part detection module and an assembly tool analysis module;
the image acquisition module acquires a real-time image of an operator assembly connecting piece, performs preprocessing and obtains a to-be-predicted image;
the target detection module is stored with the optimal target detection neural network model of any one of claims 1 to 7, and can predict the image to be predicted and obtain a predicted image;
the assembled part detection module is stored with the characteristic image set, and can compare the predicted image with the characteristic images in the characteristic image set and determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting pieces at the current moment according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment, and can also determine the assembling conditions of the connecting pieces according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment and the previous moment;
the system can finally output the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembling condition; the assembly condition includes an assembly tool rotation angle.
The invention has the following beneficial effects:
1. the detection method and the detection system can detect the assembled connecting piece in real time and feed back the assembling information at the current moment to an operator.
2. The detection method and the system have high identification precision, and avoid feedback information errors caused by identification errors.
3. The detection method and the system have high identification speed, meet the requirement of real-time feedback, and can simultaneously feed back the type, the serial number and the position information of the connecting piece.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of an embodiment of the present invention.
FIG. 3 is a flow chart of the assembled part detection module operation of an embodiment of the present invention.
FIG. 4 is an assembly tool analysis module workflow diagram of an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Referring to fig. 1-4, a method for detecting the assembly of a connector based on object detection includes the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting a sample prediction image as a standard image, and acquiring a plurality of characteristic images through the standard image, wherein each characteristic image comprises a connecting piece and is integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image with the characteristic image in the characteristic image set at each moment to determine each connecting piece in each predicted image;
selecting the predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment from the relation between the connecting piece and the assembling tool in the image predicted at the current moment; and judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments.
In at least one embodiment, the preprocessing the original image includes denoising, converting to a set size, and normalizing the image; the original image comprises RGB information and depth image information; and the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames.
In at least one embodiment, the establishing a target detection neural network model for detecting a connector and an assembly tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction maps specifically includes the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after the set training batch is reached, verifying the prediction effect of the target detection neural network model by using the verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
and repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs.
In at least one embodiment, the selecting a sample prediction map as a standard map, obtaining a plurality of feature images from the standard map, each feature image comprising a connector, and integrating the feature images into a feature image set comprises:
selecting, from the plurality of sample prediction graphs, the sample prediction graph that contains the most connecting pieces as the standard graph;
marking the type and the serial number of each connecting piece;
setting a threshold k (k > 0), and, for each detection frame in the standard graph with center point coordinates (x, y), width w and height h, cutting out the region of width (1+k)·w and height (1+k)·h centered on (x, y) as a feature image; each feature image thus contains the corresponding connecting piece and its surrounding information.
In at least one embodiment, the image to be detected at the current moment is acquired, the image to be detected is a real-time image in the assembly process, and the image to be detected is preprocessed to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembly body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
inputting the image to be predicted into the optimal target detection neural network model to predict it and obtain a predicted image.
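The embodiment does not name a particular detector; purely as an assumed stand-in, a torchvision Faster R-CNN could serve as the optimal target detection neural network model, and inference on one preprocessed frame would then look like this sketch:

    import torch

    def predict_frame(model, image_norm, score_threshold=0.5):
        # image_norm: H x W x 3 float array in [0, 1] from the preprocessing step.
        tensor = torch.from_numpy(image_norm).permute(2, 0, 1).float()
        model.eval()
        with torch.no_grad():
            output = model([tensor])[0]           # torchvision detection API
        results = []
        for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
            if score >= score_threshold:
                results.append((box.tolist(), int(label), float(score)))
        return results                            # contents of the "predicted image"

    # Example stand-in detector (an assumption, not part of the disclosure):
    # import torchvision
    # model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")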
In at least one embodiment, comparing the predicted image at each moment with the feature images in the feature image set and determining each connecting piece in each predicted image specifically includes the following steps:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; the comparison method is a characteristic point matching method;
and marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame.
In at least one embodiment, selecting the predicted images at the current moment and the previous moment, determining the assembled connecting piece at the current moment from the relation between the connecting piece and the assembling tool in the image predicted at the current moment, and judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments specifically comprises the following steps:
calculating the vertex coordinates of the detection frame of the assembly tool in the current-time prediction image, wherein the formula is as follows:
(x1, y1) = (x + w, y + h);
(x1, y2) = (x + w, y - h);
(x2, y2) = (x - w, y - h);
(x2, y1) = (x - w, y + h);
wherein (x, y) are the coordinates of the center of the assembling tool detection frame; (w, h) is the size of the assembling tool detection frame; (x1, y1), (x1, y2), (x2, y2), (x2, y1) are the vertex coordinates of the lower-right, upper-right, upper-left and lower-left corners of the assembling tool detection frame, respectively;
determining the assembled connecting piece at the current moment through the center coordinates of the assembling tool detection frame and each connecting piece detection frame;
and judging the relative positions of the assembling tool and the assembled connecting piece according to the detection frame, wherein the formula is as follows:
xp = xw - xb;
yp = yw - yb;
wherein (xp, yp) are the relative coordinates; (xw, yw) are the center coordinates of the assembling tool detection frame; (xb, yb) are the center coordinates of the assembled connecting piece detection frame;
if xp is positive and yp is negative, the assembled connecting piece is located at the upper left of the assembling tool; if xp and yp are both positive, it is located at the lower left of the assembling tool; if xp is negative and yp is positive, it is located at the lower right of the assembling tool; and if xp and yp are both negative, it is located at the upper right of the assembling tool;
calculating the vertex coordinates of the detection frame of the assembled connecting piece at the current moment, wherein the formula is as follows:
(x′1, y′1) = (x′ + w′, y′ + h′);
(x′1, y′2) = (x′ + w′, y′ - h′);
(x′2, y′2) = (x′ - w′, y′ - h′);
(x′2, y′1) = (x′ - w′, y′ + h′);
wherein (x′, y′) are the center coordinates of the assembled connecting piece detection frame; (w′, h′) is the size of that detection frame; (x′1, y′1), (x′1, y′2), (x′2, y′2), (x′2, y′1) are the vertex coordinates of its lower-right, upper-right, upper-left and lower-left corners, respectively;
selecting two vertex coordinates of the assembled connecting piece detection frame according to the relative position of the assembling tool detection frame and the assembled connecting piece detection frame: if the assembled connecting piece is located at the upper left or lower right of the assembling tool, selecting the coordinates (x′1, y′2) and (x′2, y′1); if it is located at the upper right or lower left of the assembling tool, selecting the coordinates (x′1, y′1) and (x′2, y′2);
obtaining a vector α from the two selected coordinates;
calculating, in the same way, the vertex coordinates of the assembling tool detection frame and of the assembled connecting piece detection frame in the prediction graph at the previous moment, so as to obtain the vectors αT1 and αT2 at the two moments, wherein T1 and T2 are the acquisition moments of the two predicted images;
calculating the included angle between the two vectors αT1 and αT2; this included angle is the rotation angle of the assembling tool;
and outputting the type and the serial number of the assembled connecting piece, the central coordinate of the detection frame and the rotating angle of the assembling tool.
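The per-frame output listed above can be grouped in a small record; the field names below are assumptions of this sketch, not terms of the disclosure:

    from dataclasses import dataclass

    @dataclass
    class AssemblyResult:
        connector_type: str        # type of the assembled connecting piece, e.g. "bolt"
        serial_number: int         # serial number of the connecting piece
        center: tuple              # (x, y) center coordinate of its detection frame
        tool_rotation_deg: float   # rotation angle of the assembling tool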
A connecting piece assembly detection system based on target detection comprises an image acquisition module, a target detection module, an assembly part detection module and an assembly tool analysis module;
the image acquisition module acquires a real-time image of an operator assembly connecting piece, performs preprocessing and obtains a to-be-predicted image;
the target detection module is stored with the optimal target detection neural network model of any one of claims 1 to 7, and can predict the image to be predicted and obtain a predicted image;
the assembled part detection module is stored with the characteristic image set, and can compare the predicted image with the characteristic images in the characteristic image set and determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting pieces at the current moment according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment, and can also determine the assembling conditions of the connecting pieces according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment and the previous moment;
the system can finally output the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembling condition; the assembly condition includes an assembly tool rotation angle.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures made by using the contents of the specification and the drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the present invention.

Claims (8)

1. A connecting piece assembly detection method based on target detection is characterized by comprising the following steps:
acquiring an original image, wherein the original image comprises a state diagram at each moment in the assembly process of a connecting piece, preprocessing the original image, and labeling the connecting piece and an assembly tool in the original image to obtain a sample image and a document containing labeling information;
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and training the target detection neural network model through the sample image and the document to obtain an optimal target detection neural network model and a plurality of sample prediction graphs; selecting a sample prediction image as a standard image, and acquiring a plurality of characteristic images through the standard image, wherein each characteristic image comprises a connecting piece and is integrated into a characteristic image set;
when the assembly condition of a connecting piece is detected, acquiring an image to be detected at the current moment, wherein the image to be detected is a real-time image in the assembly process, and preprocessing the image to be detected to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image;
comparing the predicted image with the characteristic image in the characteristic image set at each moment to determine each connecting piece in each predicted image;
selecting the predicted images at the current moment and the previous moment; determining the assembled connecting piece at the current moment from the relation between the connecting piece and the assembling tool in the image predicted at the current moment; and judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments.
2. The object detection-based connecting piece assembly detection method according to claim 1, wherein the preprocessing of the original image comprises denoising, converting to a set size and normalizing the image; the original image comprises RGB information and depth image information; and the marking is to select the connecting pieces and the assembling tools in the original image to obtain rectangular frames and documents, wherein the documents comprise the coordinates, the sizes and the types of the rectangular frames.
3. The method for detecting the assembly of the connecting piece based on the object detection as claimed in claim 2, wherein the step of establishing an object detection neural network model for detecting the connecting piece and the assembly tool, and training the object detection neural network model through the sample image and the document to obtain an optimal object detection neural network model and a plurality of sample prediction maps specifically comprises the following steps:
establishing a target detection neural network model for detecting a connecting piece and an assembling tool, and initializing;
dividing the sample image and the corresponding document into a training set, a test set and a verification set according to the proportion;
inputting the training set into an initial target detection neural network model in batches, and training the initial target detection neural network model;
after the set training batch is reached, verifying the prediction effect of the target detection neural network model by using the verification set, and preventing the target detection neural network model from being over-fitted;
repeatedly training and verifying the target detection neural network, applying the test set to evaluate the detection performance of the target detection neural network model after the set training times are reached, and storing the target detection neural network model reaching the set standard; if the model does not meet the standard, initializing a target detection network model, adjusting parameters of the target detection network model, and training, verifying and testing the new initial target detection network model;
and repeating the above steps to obtain a plurality of trained target detection neural network models, and selecting the best-performing one among them as the optimal target detection neural network model, together with its sample prediction graphs.
4. The method of claim 3, wherein the step of selecting a sample prediction map as a standard map, obtaining a plurality of feature images from the standard map, each feature image comprising a connector, and the step of integrating the feature images into a feature image set comprises the steps of:
selecting, from the plurality of sample prediction graphs, the sample prediction graph that contains the most connecting pieces as the standard graph;
marking the type and the serial number of each connecting piece;
setting a threshold k (k > 0), and, for each detection frame in the standard graph with center point coordinates (x, y), width w and height h, cutting out the region of width (1+k)·w and height (1+k)·h centered on (x, y) as a feature image; each feature image thus contains the corresponding connecting piece and its surrounding information.
5. The method for detecting the assembly of the connecting piece based on the target detection as claimed in claim 4, wherein the image to be detected at the current moment is collected, the image to be detected is a real-time image in the assembly process, and the image to be detected is preprocessed to obtain an image to be predicted; predicting the image to be predicted through the optimal target detection neural network model to obtain a predicted image, and specifically comprises the following steps:
a worker uses an assembling tool to install each connecting piece on the assembly body, and a state diagram at each moment is collected in the installation process and is used as an image to be detected; the image to be detected comprises a connecting piece and an assembling tool; the image to be detected comprises RGB image information and depth image information;
preprocessing the image to be detected to obtain an image to be predicted; the preprocessing comprises the steps of carrying out noise reduction processing on the image, converting the image into a set size and carrying out normalization processing;
and inputting the image to be predicted into the optimal target detection neural network model to predict the image to be predicted to obtain a predicted image.
6. The method according to claim 5, wherein the step of comparing the predictive image with the feature images in the feature image set at each moment in time to determine each connector in each predictive image comprises the steps of:
comparing the predicted image with the characteristic images in the characteristic image set according to the acquisition time sequence, determining the characteristic image which is most matched with each connecting piece in the predicted image, and judging the characteristic images to be the same connecting pieces; the comparison method is a characteristic point matching method;
and marking each connecting piece in the prediction graph, wherein the marking content comprises the type and the serial number of the connecting piece in each detection frame.
7. The method for detecting the assembly of a connecting piece based on object detection as claimed in claim 6, wherein selecting the predicted images at the current moment and the previous moment, determining the assembled connecting piece at the current moment from the relation between the connecting piece and the assembling tool in the image predicted at the current moment, and judging the assembling condition from the relation between the connecting piece and the assembling tool at the two moments specifically comprises the following steps:
calculating the vertex coordinates of the detection frame of the assembly tool in the current-time prediction image, wherein the formula is as follows:
(x1, y1) = (x + w, y + h);
(x1, y2) = (x + w, y - h);
(x2, y2) = (x - w, y - h);
(x2, y1) = (x - w, y + h);
wherein (x, y) are the coordinates of the center of the assembling tool detection frame; (w, h) is the size of the assembling tool detection frame; (x1, y1), (x1, y2), (x2, y2), (x2, y1) are the vertex coordinates of the lower-right, upper-right, upper-left and lower-left corners of the assembling tool detection frame, respectively;
determining the assembled connecting piece at the current moment through the center coordinates of the assembling tool detection frame and each connecting piece detection frame;
and judging the relative positions of the assembling tool and the assembled connecting piece according to the detection frame, wherein the formula is as follows:
xp = xw - xb;
yp = yw - yb;
wherein (xp, yp) are the relative coordinates; (xw, yw) are the center coordinates of the assembling tool detection frame; (xb, yb) are the center coordinates of the assembled connecting piece detection frame;
if xp is positive and yp is negative, the assembled connecting piece is located at the upper left of the assembling tool; if xp and yp are both positive, it is located at the lower left of the assembling tool; if xp is negative and yp is positive, it is located at the lower right of the assembling tool; and if xp and yp are both negative, it is located at the upper right of the assembling tool;
calculating the vertex coordinates of the detection frame of the assembled connecting piece at the current moment, wherein the formula is as follows:
(x′1, y′1) = (x′ + w′, y′ + h′);
(x′1, y′2) = (x′ + w′, y′ - h′);
(x′2, y′2) = (x′ - w′, y′ - h′);
(x′2, y′1) = (x′ - w′, y′ + h′);
wherein (x′, y′) are the center coordinates of the assembled connecting piece detection frame; (w′, h′) is the size of that detection frame; (x′1, y′1), (x′1, y′2), (x′2, y′2), (x′2, y′1) are the vertex coordinates of its lower-right, upper-right, upper-left and lower-left corners, respectively;
selecting two vertex coordinates of the assembled connecting piece detection frame according to the relative position of the assembling tool detection frame and the assembled connecting piece detection frame: if the assembled connecting piece is located at the upper left or lower right of the assembling tool, selecting the coordinates (x′1, y′2) and (x′2, y′1); if it is located at the upper right or lower left of the assembling tool, selecting the coordinates (x′1, y′1) and (x′2, y′2);
obtaining a vector α from the two selected coordinates;
calculating, in the same way, the vertex coordinates of the assembling tool detection frame and of the assembled connecting piece detection frame in the prediction graph at the previous moment, so as to obtain the vectors αT1 and αT2 at the two moments, wherein T1 and T2 are the acquisition moments of the two predicted images;
calculating the included angle between the two vectors αT1 and αT2; this included angle is the rotation angle of the assembling tool.
8. A connecting piece assembly detection system based on target detection is characterized by comprising an image acquisition module, a target detection module, an assembly part detection module and an assembly tool analysis module;
the image acquisition module acquires a real-time image of an operator assembly connecting piece, performs preprocessing and obtains a to-be-predicted image;
the target detection module is stored with the optimal target detection neural network model of any one of claims 1 to 7, and can predict the image to be predicted and obtain a predicted image;
the assembled part detection module is stored with the characteristic image set, and can compare the predicted image with the characteristic images in the characteristic image set and determine each connecting piece in the predicted image;
the assembling tool analysis module can determine the assembled connecting pieces at the current moment according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment, and can also determine the assembling conditions of the connecting pieces according to the relation between the assembling tools and the connecting pieces in the image predicted at the current moment and the previous moment;
the system can finally output the assembled connecting piece at the current moment, the position of the assembled connecting piece and the assembling condition; the assembly condition includes an assembly tool rotation angle.
CN202110504071.5A 2021-05-10 2021-05-10 Connecting piece assembly detection method and system based on target detection Active CN113269234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110504071.5A CN113269234B (en) 2021-05-10 2021-05-10 Connecting piece assembly detection method and system based on target detection


Publications (2)

Publication Number Publication Date
CN113269234A true CN113269234A (en) 2021-08-17
CN113269234B CN113269234B (en) 2022-09-20

Family

ID=77230188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110504071.5A Active CN113269234B (en) 2021-05-10 2021-05-10 Connecting piece assembly detection method and system based on target detection

Country Status (1)

Country Link
CN (1) CN113269234B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633267A (en) * 2017-09-22 2018-01-26 西南交通大学 A kind of high iron catenary support meanss wrist-arm connecting piece fastener recognition detection method
CN111160440A (en) * 2019-12-24 2020-05-15 广东省智能制造研究所 Helmet wearing detection method and device based on deep learning
US20200357111A1 (en) * 2019-05-10 2020-11-12 Alibaba Group Holding Limited Recognizing damage through image analysis


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888494A (en) * 2021-09-28 2022-01-04 广州市华颉电子科技有限公司 Artificial intelligence interface pin quality detection method of automobile domain controller
CN114782778A (en) * 2022-04-25 2022-07-22 广东工业大学 Assembly state monitoring method and system based on machine vision technology
CN114782778B (en) * 2022-04-25 2023-01-06 广东工业大学 Assembly state monitoring method and system based on machine vision technology

Also Published As

Publication number Publication date
CN113269234B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN106650770B (en) Mura defect detection method based on sample learning and human eye visual characteristics
CN111062915B (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN107085846B (en) Workpiece surface defect image identification method
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN107424142B (en) Weld joint identification method based on image significance detection
CN110119680B (en) Automatic error checking system of regulator cubicle wiring based on image recognition
CN113269234B (en) Connecting piece assembly detection method and system based on target detection
US20040247171A1 (en) Image processing method for appearance inspection
CN110135514B (en) Workpiece classification method, device, equipment and medium
CN113724231A (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
CN110108712A (en) Multifunctional visual sense defect detecting system
CN111402224A (en) Target identification method for power equipment
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN112862744B (en) Intelligent detection method for internal defects of capacitor based on ultrasonic image
Wah et al. Analysis on feature extraction and classification of rice kernels for Myanmar rice using image processing techniques
CN110781913A (en) Zipper cloth belt defect detection method
CN117197700B (en) Intelligent unmanned inspection contact net defect identification system
CN113052234A (en) Jade classification method based on image features and deep learning technology
CN111652200A (en) Processing method, device and equipment for distinguishing multiple vehicles from pictures in vehicle insurance case
CN110969135A (en) Vehicle logo recognition method in natural scene
CN114594102A (en) Machine vision-based data line interface automatic detection method
CN113947563A (en) Cable process quality dynamic defect detection method based on deep learning
CN113139946A (en) Shirt stain positioning device based on vision
CN111523583A (en) Method for automatically identifying and classifying equipment nameplate photos by using unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230522

Address after: No.2661, Jingxuan Road, Qingdao City, Shandong Province

Patentee after: QINGDAO QIANSHAO PRECISION INSTRUMENT Co.,Ltd.

Address before: No.777, Jialingjiang Road, Qingdao Economic and Technological Development Zone, Qingdao, Shandong 266000

Patentee before: QINGDAO TECHNOLOGICAL University