CN115239657B - Industrial part incremental identification method based on deep learning target segmentation - Google Patents

Industrial part incremental identification method based on deep learning target segmentation

Info

Publication number
CN115239657B
CN115239657B (application CN202210842527.3A)
Authority
CN
China
Prior art keywords
identified
template
contour
standard part
map
Prior art date
Legal status
Active
Application number
CN202210842527.3A
Other languages
Chinese (zh)
Other versions
CN115239657A (en)
Inventor
顾毅
张校源
李书霞
辛伟
张琼
Current Assignee
Wuxi Xuelang Shuzhi Technology Co ltd
Original Assignee
Wuxi Xuelang Shuzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Xuelang Shuzhi Technology Co ltd
Priority to CN202210842527.3A
Publication of CN115239657A
Application granted
Publication of CN115239657B
Legal status: Active

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06T 3/02
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; machine component
    • G06V 2201/07 Target detection
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses an industrial part incremental identification method based on deep learning target segmentation, relating to the technical field of image processing. The method comprises the following steps: determining the part models to be identified and storing the corresponding standard part information in a template use list; collecting an image of the part to be identified and feeding it into a depth instance segmentation model to obtain a contour map of each single part to be identified; calculating part size information from the contour map, comparing it with the size information of each standard part in the list, and deleting standard part information that does not meet the preset conditions; applying affine transformation to the contour map and the standard part template maps in the list, comparing contour overlap, contour similarity and inner-hole count one by one, and establishing a scoring equation. The model recorded in the standard part information whose score exceeds a set threshold and is the maximum in the updated template use list is taken as the model of the part to be identified. The method effectively solves the problem of repeated model training in the incremental identification of industrial parts.

Description

Industrial part incremental identification method based on deep learning target segmentation
Technical Field
The application relates to the technical field of image processing, and in particular to an industrial part incremental identification method based on deep learning target segmentation.
Background
Industrial part sorting has long been an important component of industrial production and is one of the markers of the level of industrial automation. With the development of modern technology, intelligent sorting systems that combine machinery with vision systems are gradually replacing manual labour and have become an important index of the digital and intelligent transformation of factories. How intelligent equipment can handle complex and diverse industrial scenes as flexibly as a human worker remains a topic of common interest in industry and academia.
Currently, vision systems used in industry fall mainly into two types: traditional machine vision and deep learning vision. In traditional machine vision recognition tasks, target features depend mainly on hand-designed extractors, which require a certain expertise and skill; the parameter tuning process is complex, and the generalization ability and robustness of the model are poor. With the development of hardware computing power and deep learning technology, machine vision based on deep convolutional neural networks has far surpassed traditional machine vision methods in detection accuracy and real-time performance.
If the types of parts to be sorted in an industrial scene are fixed, both traditional machine vision and deep learning vision can replace manual sorting to varying degrees. However, if the types of parts to be identified change continuously and are gradually updated, both approaches require repeated parameter tuning and model retraining, which greatly increases the development period and cost of the visual identification system and makes later maintenance and upgrades frequent and difficult.
Disclosure of Invention
In view of these problems and technical requirements, the inventors propose an industrial part incremental identification method based on deep learning target segmentation. By combining a depth instance segmentation model with contour feature comparison, the method effectively solves the incremental identification problem for industrial parts and reduces dependence on manual experience.
The technical scheme of the application is as follows:
An industrial part incremental identification method based on deep learning target segmentation comprises the following steps:
determining the part models to be identified in the current batch; if the corresponding standard part information can be screened out of the standard part template library according to the part models, storing all screened standard part information in a template use list; otherwise, obtaining the standard part information of the part to be identified, storing it in the standard part template library, and then screening the corresponding standard part information out of the library according to the part model; the standard part template library stores the model number, template map and part size information of every standard part as standard part information;
acquiring an image of the part to be identified and feeding it into a depth instance segmentation model for target detection and segmentation to obtain a contour map of each single part to be identified;
calculating the size information of the part to be identified from its contour map, comparing it with the size information of each standard part in the template use list, and deleting from the list the standard part information that does not meet the preset conditions;
applying affine transformation to the contour map of the part to be identified and to the standard part template maps in the updated template use list, and calculating contour overlap, contour similarity and inner-hole count by one-by-one comparison;
establishing a scoring equation from the contour overlap, contour similarity and inner-hole count, and taking as the model of the part to be identified the model recorded in the standard part information whose score is greater than a set threshold and is the maximum in the updated template use list.
In a further technical scheme, the depth instance segmentation model is a two-class model whose classification results comprise the part surface contour and the inner holes in the part surface. The model is realized by introducing a self-correcting convolution module into the Mask R-CNN backbone network, and the module operates as follows:
the channels of the backbone input feature map are split evenly to obtain a first feature map and a second feature map; the first feature map is fed to the self-correcting convolution module to obtain a self-correcting transformation feature map, the second feature map undergoes a convolution operation, and the two results are merged along the channel dimension to obtain the improved feature map.
In a further technical scheme, feeding the first feature map to the self-correcting convolution module to obtain the self-correcting transformation feature map comprises the following steps:
the first feature map sequentially undergoes mean-pooling downsampling, a first convolution operation and bilinear-interpolation upsampling; the result is added to the first feature map to obtain a spatial attention feature map, which is then transformed by a Sigmoid function:
T_1 = X_1 + Up(C_B1(AvgPool(X_1)));
Y_1' = σ(T_1);
wherein Up() denotes the bilinear-interpolation upsampling transform, C_B1() the first convolution operation, AvgPool() the mean-pooling downsampling transform, σ() the Sigmoid function, X_1 the first feature map, T_1 the attention feature map, and Y_1' the output of the first branch of the self-correcting convolution module;
the first feature map also carries out a second convolution operation, carries out vector element product operation with the transformation result of the first branch, and then carries out a third convolution operation to obtain a self-correcting transformation feature map, wherein the transformation process is as follows:
Y 1 =C B3 (Y 1 ′·Y 1 ″)=C B3 (Y 1 ′·C B2 (X 1 ));
wherein C is B2 () Representing a second convolution operation, C B3 () Representing a third convolution operation, Y 1 "is the result of the transformation of the second branch of the self-correcting convolution module, Y 1 And outputting a self-correction transformation characteristic diagram for the self-correction convolution module.
In a further technical scheme, comparing the size information of the part to be identified with the size information of each standard part in the template use list and deleting the standard part information that does not meet the preset conditions comprises the following steps:
the calculated size information of the part to be identified comprises length and width information of the part to be identified, and a small-size part contour map to be identified output by the depth instance segmentation model is filtered according to the length and width information of the part to be identified;
selecting a first threshold value and a second threshold value according to the length information of the part to be identified;
traversing the size information of each standard part in the template use list, calculating the length difference and the width difference of the standard part and the part to be identified, screening the size information of the standard part with the length difference and the width difference smaller than a first threshold value, and calculating the sum of the corresponding length difference and the width difference;
and selecting a minimum value from the sums corresponding to the standard part size information obtained by screening, calculating the difference value between each sum and the minimum value, deleting the standard part size information with the difference value larger than a second threshold value from the template use list, and updating the template use list.
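The two screening stages above can be sketched in a few lines of Python; the function name `screen_templates` and the tuple layout `(model, length, width)` are assumptions for this illustration, not part of the patent:

```python
def screen_templates(candidates, length, width, t1, t2):
    """Pre-screen standard-part templates by size.

    candidates: list of (model, template_length, template_width) tuples.
    Stage 1: keep entries whose length and width differences to the part
             to be identified are both below the first threshold t1.
    Stage 2: drop entries whose difference-sum exceeds the smallest
             difference-sum by more than the second threshold t2.
    """
    kept = []
    for model, tl, tw in candidates:
        dl, dw = abs(tl - length), abs(tw - width)
        if dl < t1 and dw < t1:
            kept.append((model, dl + dw))
    if not kept:
        return []  # the part matches no model in this batch
    best = min(s for _, s in kept)
    return [model for model, s in kept if s - best < t2]
```

For example, with t1 = 10 and t2 = 3, a template whose length and width each differ by 1 survives both stages, while one differing by 40 is dropped at the first stage.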
In a further technical scheme, applying affine transformation to the contour map of the part to be identified and to the standard part template maps in the updated template use list comprises, for each standard part template map, the following steps:
mapping the contour map of the part to be identified to the pixel size of the standard part template map, and calculating the minimum circumscribed rectangle of the mapped contour map and of the template map respectively;
applying an affine transformation according to the angle between the long side of each minimum circumscribed rectangle and the horizontal direction, generating an affine-transformed contour map and an affine-transformed template map whose minimum circumscribed rectangles lie in the horizontal or vertical direction;
determining the size of the affine contrast map from the sizes of the affine-transformed contour map and the affine-transformed template map;
mapping the affine-transformed contour map and the affine-transformed template map onto black-background affine contrast maps to form the part contour contrast map and the standard part template contrast map.
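The final mapping step can be sketched as below. This is a hedged illustration: `contrast_canvas_size` uses the diagonal of the larger bounding box so that any later rotation stays inside the canvas, which is our own plausible choice and not the patent's exact rounding formula; `to_contrast_map` simply centres a binary mask on a black canvas.

```python
import math
import numpy as np

def contrast_canvas_size(w1, h1, w2, h2):
    """One plausible canvas size: the diagonal of the larger bounding box,
    so that either mask can later be rotated without leaving the canvas.
    (Illustrative assumption; the patent defines its own formula.)"""
    side = math.ceil(math.hypot(max(w1, w2), max(h1, h2)))
    return side, side

def to_contrast_map(mask, size):
    """Paste a binary mask centred on a black (all-zero) canvas."""
    H, W = size
    h, w = mask.shape
    out = np.zeros((H, W), dtype=mask.dtype)
    top, left = (H - h) // 2, (W - w) // 2
    out[top:top + h, left:left + w] = mask
    return out
```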
In a further technical scheme, the size of the affine contrast map is determined from the sizes of the affine-transformed contour map and the affine-transformed template map, where S_width and S_height denote the width and height of the affine contrast map, round() denotes rounding, w_1 and h_1 the width and height of the affine-transformed contour map, and w_2 and h_2 the width and height of the affine-transformed template map.
In a further technical scheme, calculating the contour overlap by one-by-one comparison comprises, for each standard part template map, the following steps:
rotating the affine-transformed part contour contrast map through 360 degrees from its initial position; at every 1-degree step, calculating the contour overlap between the part contour contrast map and the affine-transformed standard part template contrast map, denoted IOU_a, and likewise calculating the contour overlap between the mirror-flipped part contour contrast map and the standard part template contrast map, denoted IOU_ma;
taking the maximum of the two results as the contour overlap at the current angle and recording the mirror state; taking the maximum of the contour overlaps over all angles as the final result:
IOU = max_{a=1,…,360} max( IOU(S_L, rot_{w+a}(S)), IOU(S_L, rot_{w+a}(S_m)) );
wherein IOU is the final contour overlap value, w is the initial angle of the part contour contrast map, rot_{w+a}() denotes rotation to angle w+a, S_L is the standard part template contrast map, S is the part contour contrast map, S_m is the mirror-flipped part contour contrast map, and a is the rotation angle.
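The rotation-and-mirror search can be sketched in NumPy as below; `rotate_mask` uses nearest-neighbour resampling about the image centre (a real system might use OpenCV's `cv2.warpAffine`), and the function names are illustrative:

```python
import numpy as np

def rotate_mask(mask, angle_deg):
    """Rotate a binary mask about its centre, nearest-neighbour resampling."""
    h, w = mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.indices(mask.shape)
    # inverse mapping: for every target pixel, find its source pixel
    xs_src = np.rint(np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx).astype(int)
    ys_src = np.rint(-np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy).astype(int)
    ok = (xs_src >= 0) & (xs_src < w) & (ys_src >= 0) & (ys_src < h)
    out = np.zeros_like(mask)
    out[ok] = mask[ys_src[ok], xs_src[ok]]
    return out

def mask_iou(a, b):
    """Intersection-over-union of two binary masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union else 0.0

def best_rotation_iou(template, contour, step=1):
    """Rotate `contour` (and its mirror) through 360 degrees against
    `template`; return (best IOU, best angle, mirrored?)."""
    mirrored = contour[:, ::-1]
    best = (0.0, 0, False)
    for ang in range(0, 360, step):
        for is_mirror, cand in ((False, contour), (True, mirrored)):
            iou = mask_iou(template, rotate_mask(cand, ang))
            if iou > best[0]:
                best = (iou, ang, is_mirror)
    return best
```

The returned triple records the best overlap, the rotation angle at which it occurred, and whether the mirrored contour produced it.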
In a further technical scheme, calculating the contour similarity by one-by-one comparison comprises, for each standard part template map, the following steps:
obtaining the part contour contrast map at the rotation angle corresponding to the final contour overlap result, applying the corresponding affine transformation, and computing the Hu moments of the transformed part contour contrast map and of the affine-transformed standard part template contrast map respectively;
taking the distance between the log-transformed Hu moments of the two maps as the contour similarity between the part contour contrast map and the standard part template contrast map:
D(S_L, S) = Σ_{j=0..6} | m_j^L − m_j^S |, with m_j = sign(h_j)·log|h_j|;
wherein m_j^L denotes the log-transformed j-order Hu moment of the standard part template contrast map and h_j^L the j-order Hu moment obtained from it; m_j^S denotes the log-transformed j-order Hu moment of the part contour contrast map and h_j^S the j-order Hu moment obtained from it; j runs from 0 to 6.
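A NumPy-only sketch of the Hu-moment distance follows; `hu_moments` and `hu_distance` are illustrative names, the log base and the small `eps` guard are our assumptions, and a real pipeline would typically call OpenCV's `cv2.HuMoments` or `cv2.matchShapes` instead:

```python
import numpy as np

def hu_moments(mask):
    """The seven Hu invariant moments of a binary mask (NumPy-only sketch)."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    x, y = xs - xs.mean(), ys - ys.mean()
    def eta(p, q):  # normalised central moment
        return float(((x ** p) * (y ** q)).sum()) / m00 ** ((p + q) / 2 + 1)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    a, b = n30 + n12, n21 + n03  # common sub-expressions
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        a ** 2 + b ** 2,
        (n30 - 3 * n12) * a * (a ** 2 - 3 * b ** 2)
        + (3 * n21 - n03) * b * (3 * a ** 2 - b ** 2),
        (n20 - n02) * (a ** 2 - b ** 2) + 4 * n11 * a * b,
        (3 * n21 - n03) * a * (a ** 2 - 3 * b ** 2)
        - (n30 - 3 * n12) * b * (3 * a ** 2 - b ** 2),
    ])

def hu_distance(mask_a, mask_b, eps=1e-30):
    """Sum of absolute differences of log-scaled Hu moments,
    m_j = sign(h_j) * log10(|h_j| + eps); eps guards log(0)."""
    ha, hb = hu_moments(mask_a), hu_moments(mask_b)
    la = np.sign(ha) * np.log10(np.abs(ha) + eps)
    lb = np.sign(hb) * np.log10(np.abs(hb) + eps)
    return float(np.abs(la - lb).sum())
```

Because Hu moments are invariant to translation, two identical shapes at different positions give a distance of zero, while genuinely different shapes give a positive distance.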
In a further technical scheme, the scoring equation is: F_i = IOU_i − D(S_Li, S) − C;
wherein i is the index of an item in the template use list; F_i is the score of the i-th item; IOU_i is the final contour overlap between the i-th template in the list and the affine-transformed part contour contrast map; D(S_Li, S) is the contour similarity between the i-th template contrast map and the part contour contrast map at the rotation angle corresponding to the final contour overlap result; C is a correction coefficient, equal to 0 if the part contour map and the standard part template map have the same number of inner holes, and otherwise set to an empirical value.
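The scoring step can be sketched as follows; the hole-count penalty value 0.1 stands in for the patent's unspecified empirical value of C, and `pick_model` applies the threshold-and-maximum rule from the text:

```python
def score(iou, hu_distance, holes_part, holes_template, penalty=0.1):
    """F_i = IOU_i - D(S_Li, S) - C, with C = 0 when the hole counts of
    the part contour map and the template map agree.
    `penalty` is an assumed stand-in for the patent's empirical C."""
    c = 0.0 if holes_part == holes_template else penalty
    return iou - hu_distance - c

def pick_model(scored_entries, threshold):
    """scored_entries: list of (model, F_i) pairs over the template use
    list. Returns the best-scoring model if its score exceeds the set
    threshold, otherwise None (no model in this batch matches)."""
    if not scored_entries:
        return None
    model, best = max(scored_entries, key=lambda e: e[1])
    return model if best > threshold else None
```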
In a further technical scheme, obtaining the standard part information of the part to be identified comprises the following steps:
drawing a 2D top view of the part to be identified with dimension markings and groove lines removed;
converting the 2D top view into an image format and uniformly producing a template map with a white part area (RGB 255,255,255) and a black background area (RGB 0,0,0);
calculating the size information of the part to be identified from the drawing scale of the template map and the camera calibration parameters, including the real length, width, perimeter and area of the part and its number of inner holes;
storing the size information of the part to be identified, the specified part model and the template map into the standard part template library.
The beneficial technical effects of the application are as follows:
The method builds a standard part template library in advance, collects images of parts to be identified and feeds them into a depth instance segmentation model to separate the contours of the parts. Contour overlap, contour similarity and inner-hole count are computed by comparing these contours one by one with the standard part contours in the library, and the model of the part to be identified and the corresponding part information are finally screened out step by step according to the established scoring equation. This satisfies the identification requirements of new part types and changing quantities without affecting identification speed, effectively avoids the secondary parameter tuning and repeated model training required by traditional machine vision and deep learning vision, reduces development period and cost, simplifies subsequent system maintenance and upgrades, and significantly improves the generalization and robustness of the vision system.
Drawings
FIG. 1 is an overall flow chart of the industrial part incremental identification method based on deep learning target segmentation.
Fig. 2 (a) is a schematic illustration of a part drawing provided by the present application.
Fig. 2 (b) is a schematic illustration of a part template provided by the present application.
Fig. 3 is a flow chart of a self-correcting convolution module provided by the present application.
Fig. 4 (a) is an overhead (top-view) image of a part to be identified provided by the application.
Fig. 4 (b) is a contour diagram of a part to be identified output by the depth instance segmentation model provided by the application.
Fig. 4 (c) is a contour map of a part to be identified provided by the present application.
Fig. 4 (d) is a profile map of another part to be identified provided by the present application.
FIG. 5 is a profile view of a part to be identified with holes output by the depth instance segmentation model provided by the application.
FIG. 6 is a flow chart of contour feature comparison between a part to be identified and a standard part template.
Fig. 7 (a) is a contour contrast map of a part to be identified provided by the application.
Fig. 7 (b) is a template contrast map of a standard part provided by the application.
Detailed Description
The following describes the embodiments of the present application further with reference to the drawings.
As shown in FIG. 1, the application provides an industrial part incremental identification method based on deep learning target segmentation, which specifically comprises the following steps:
step 0: and obtaining standard part information to construct a standard part template library.
(1) Prepare a 2D drawing of the part to be identified, keeping only the top view and removing dimension markings, groove lines and similar information; after drawing is completed, save it in dwg or dxf format, as shown in fig. 2 (a).
(2) Convert the 2D top view into an image format and uniformly produce a template map with a white part area (RGB 255,255,255) and a black background area (RGB 0,0,0), as shown in fig. 2 (b).
The conversion is performed as follows: the drawing is converted into PDF format by means of an open-source tool, and the PDF is then converted into a JPG image.
(3) Calculate the size information of the part from the drawing scale of the template map and the camera calibration parameters, and store the part size information, the specified part model and the template map as standard part information in the standard part template library.
The size information comprises physical quantities such as the real length, width, perimeter and area, together with the number of inner holes of the part.
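Given the binary template map, this size information can be derived roughly as sketched below; `part_size_info` and its dictionary keys are illustrative names, the length/width convention (columns vs. rows) is an assumption, and the perimeter is omitted for brevity:

```python
import numpy as np
from collections import deque

def part_size_info(mask, mm_per_px):
    """Derive real-world size information from a binary template mask.
    Length/width come from the tight bounding box, area from the pixel
    count, and holes are background regions not touching the border."""
    ys, xs = np.nonzero(mask)
    length = (xs.max() - xs.min() + 1) * mm_per_px
    width = (ys.max() - ys.min() + 1) * mm_per_px
    area = int(mask.sum()) * mm_per_px ** 2
    return {"length": length, "width": width, "area": area,
            "holes": count_holes(mask)}

def count_holes(mask):
    """Count 4-connected background components not connected to the border."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    def flood(sy, sx):
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] and not mask[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
    # first mark all background reachable from the image border
    for y in range(h):
        for x in (0, w - 1):
            if not mask[y, x] and not seen[y, x]:
                flood(y, x)
    for x in range(w):
        for y in (0, h - 1):
            if not mask[y, x] and not seen[y, x]:
                flood(y, x)
    holes = 0  # remaining background components are inner holes
    for y in range(h):
        for x in range(w):
            if not mask[y, x] and not seen[y, x]:
                flood(y, x)
                holes += 1
    return holes
```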
Step 1: determine the part models to be identified in the current batch; if the corresponding standard part information can be screened out of the standard part template library according to the part models, store all screened standard part information in a template use list L_1 in csv format.
Otherwise, obtain the standard part information of the part to be identified and store it in the standard part template library to expand the library, then screen the corresponding standard part information out of the library according to the part model. The standard part information is obtained as in step 0.
Step 2: acquire an image of the part to be identified with an industrial camera, as shown in fig. 4 (a), and feed it into the depth instance segmentation model for target detection and segmentation to obtain a single part contour map to be identified.
The depth instance segmentation model adopted in this embodiment is realized by introducing a self-correcting convolution module into the Mask R-CNN backbone network, so that the backbone generates feature expressions with richer information and stronger discriminative power.
As shown in fig. 3, the self-correcting convolution module operates in a manner including:
The channels of the backbone input feature map X are split evenly to obtain a first feature map and a second feature map {X_1, X_2}. The first feature map X_1 is fed to the self-correcting convolution module to obtain a self-correcting transformation feature map, while the second feature map X_2 undergoes a convolution operation; the two results are then merged along the channel dimension to obtain the improved feature map.
There are two branches in the self-correcting convolution module. Under the first branch, the first feature map X_1 sequentially undergoes mean-pooling downsampling, a first convolution operation and bilinear-interpolation upsampling; the result is added to X_1 to obtain a spatial attention feature map, which is then transformed by a Sigmoid function:
T_1 = X_1 + Up(C_B1(AvgPool(X_1)));
Y_1' = σ(T_1);
wherein Up() denotes the bilinear-interpolation upsampling transform, C_B1() the first convolution operation, AvgPool() the mean-pooling downsampling transform, σ() the Sigmoid function, X_1 the first feature map, T_1 the attention feature map, and Y_1' the output of the first branch of the self-correcting convolution module.
Under the second branch, the first feature map X_1 undergoes a second convolution operation; the result is multiplied element-wise with the output of the first branch, and a third convolution operation then yields the self-correcting transformation feature map:
Y_1 = C_B3(Y_1' · Y_1'') = C_B3(Y_1' · C_B2(X_1));
wherein C_B2() denotes the second convolution operation, C_B3() the third convolution operation, Y_1'' the output of the second branch of the self-correcting convolution module, and Y_1 the self-correcting transformation feature map output by the module.
Finally, the improved feature map is obtained by the overall operation:
Y = Concat{Y_1, C_B4(X_2)};
wherein Concat{} denotes feature map channel merging and C_B4() a conventional convolution operation.
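The module's data flow (channel split, the two branches, and channel merge) can be sketched in NumPy as below. This is a shape-level illustration only: 1x1 channel-mixing matrices stand in for the convolutions C_B1 to C_B4, and nearest-neighbour upsampling stands in for bilinear interpolation; it is not the patent's Mask R-CNN implementation.

```python
import numpy as np

def avg_pool(x, r):
    """Mean-pool a (C, H, W) feature map by factor r (H, W divisible by r)."""
    c, h, w = x.shape
    return x.reshape(c, h // r, r, w // r, r).mean(axis=(2, 4))

def upsample(x, r):
    """Nearest-neighbour stand-in for the bilinear upsampling Up()."""
    return x.repeat(r, axis=1).repeat(r, axis=2)

def conv1x1(x, w):
    """1x1 channel-mixing stand-in for the convolutions C_B1..C_B4."""
    return np.einsum('oc,chw->ohw', w, x)

def self_correcting_branch(x1, w1, w2, w3, r=2):
    """Y1 = C_B3( sigmoid(X1 + Up(C_B1(AvgPool(X1)))) * C_B2(X1) )."""
    t1 = x1 + upsample(conv1x1(avg_pool(x1, r), w1), r)  # attention map T1
    y1_prime = 1.0 / (1.0 + np.exp(-t1))                 # first branch: sigma(T1)
    return conv1x1(y1_prime * conv1x1(x1, w2), w3)       # second branch + C_B3

def self_correcting_block(x, w1, w2, w3, w4, r=2):
    """Split channels into {X1, X2}, run the self-correcting branch on X1,
    a plain convolution C_B4 on X2, and merge the results channel-wise."""
    c = x.shape[0] // 2
    y1 = self_correcting_branch(x[:c], w1, w2, w3, r)
    y2 = conv1x1(x[c:], w4)
    return np.concatenate([y1, y2], axis=0)
```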
For the improved Mask R-CNN, actual workpiece images are collected in advance for image annotation, training and optimization, and the finally optimized segmentation model is saved for subsequent detection. Because of the requirements of subsequent incremental identification, the improved segmentation model annotates only two categories during training: the part surface contour in the image and the inner holes in the part surface. When the identified part has no inner hole, the segmentation result is the part surface contour, as shown in fig. 4 (b); when the part has inner holes, the inner-hole segmentation result must be subtracted from the surface contour segmentation result to obtain the actual part segmentation result, as shown in fig. 5.
When the acquired image contains multiple parts, noise information is filtered through the size, containment and distance relations among the detected outer contours, and each segmented part contour is remapped onto a black background image (RGB 0,0,0) of the same size as the original image, producing a single part contour map to be identified, as shown in fig. 4 (c) and (d), ready for subsequent processing.
Optionally, as shown in FIG. 6, the subsequent steps further include Step 3: preliminarily screen all contour maps of parts to be identified according to position and size.
Find the longest straight line segment in the contour map of the part to be identified, calculate its angle to the horizontal direction, and rotate the part contour by this angle to correct its orientation; then calculate the minimum circumscribed rectangle of the rotated contour. According to the position of each contour's minimum circumscribed rectangle in the image, contours that were not fully captured or that are distorted beyond the identification area are screened out.
Step 4: calculate the size information of the part to be identified from the corrected contour and its minimum circumscribed rectangle area, compare it further with the size information of each standard part in the template use list L_1, and delete from L_1 the standard part information that does not meet the preset conditions.
The further comparison and screening proceeds as follows:
(1) The calculated size information of the part to be identified includes its length and width. Small-size part contour maps to be identified output by the depth instance segmentation model are filtered out according to this length and width information, eliminating interference from non-part contours such as iron scraps.
(2) A first threshold t1 and a second threshold t2 are selected according to the length information of the part to be identified, where length denotes the length of the part to be identified. The values of t1 and t2 given in this example are illustrative only and are not limited to the above ranges; they are chosen according to actual industrial requirements.
(3) The template use list L1 is traversed; for each standard part, the length difference and width difference relative to the part to be identified are calculated, the standard part size information whose length difference and width difference are both smaller than the first threshold is screened out, and the sum s1i of the corresponding length difference and width difference is calculated, the expression being:
s1i = |li - l| + |wi - w|;
wherein l and w are the length and width of the part to be identified, li and wi are the length and width of the i-th standard part, and xi represents the i-th standard part entry of the template use list L1.
(4) From the sums s1i corresponding to the standard part size information obtained by screening, the minimum value is selected, the difference between each sum and the minimum value is calculated, the standard part size information whose difference is larger than the second threshold is deleted from the template use list, and the template use list is updated, the expression being:
L1 = (xi, s1i) if [(s1i - min(s1i)) < t2];
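The two-stage size screening of steps (1)-(4) can be sketched as follows; the template list layout and function name are assumptions, and the thresholds t1 and t2 are passed in directly since their derivation from the part length is given only by example in the text:

```python
def size_screen(part_lw, templates, t1, t2):
    """Two-stage size screening of the template use list L1.

    part_lw: (length, width) of the part to be identified.
    templates: dict model -> (length, width) for each standard part.
    t1, t2: first and second thresholds, assumed precomputed.
    """
    l, w = part_lw
    # Stage 1: keep entries whose length and width differences are < t1,
    # recording s1i = |li - l| + |wi - w| for each survivor.
    sums = {m: abs(li - l) + abs(wi - w)
            for m, (li, wi) in templates.items()
            if abs(li - l) < t1 and abs(wi - w) < t1}
    if not sums:
        return {}
    # Stage 2: keep entries within t2 of the best (smallest) sum.
    s_min = min(sums.values())
    return {m: s for m, s in sums.items() if s - s_min < t2}
```

An empty result corresponds to the case where the part is not one of the models to be identified in this batch.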
After the above operations, if list L1 contains no standard part size information, the part is not one of the part models to be identified in this batch. If standard part size information remains in list L1, the entries of L1 are compared one by one with the part contour map to be identified on comprehensive features, the specific implementation process being as follows:
step 5: affine change is carried out on the part contour map to be identified and standard part template maps in the updated template use list, contour overlapping degree, contour similarity and inner hole number are calculated in a one-by-one comparison mode, and the method specifically comprises the following substeps of, for each standard part template map:
step 51: affine variations include:
The part contour map to be identified is mapped to the pixel size of the corresponding standard part template map, and the minimum circumscribed rectangles of the mapped part contour map to be identified and of the standard part template map are calculated respectively. Affine change is then carried out according to the angle between the long side of each minimum circumscribed rectangle and the horizontal direction, generating an affine-changed contour map and an affine-changed template map in which the minimum circumscribed rectangle lies in the horizontal or vertical direction, as shown in the figure.
The size of the affine contrast map is determined according to the sizes of the affine-changed contour map and the affine-changed template map, wherein Swidth, Sheight denote the width and height of the affine contrast map, round() denotes rounding, w1, h1 denote the width and height of the affine-changed contour map, and w2, h2 denote the width and height of the affine-changed template map.
The affine-changed contour map and the affine-changed template map are then respectively mapped into affine contrast maps with black backgrounds, forming the part contour contrast map to be identified and the standard part template contrast map, as shown in fig. 7(a) and fig. 7(b).
Optionally, after the part contour contrast map and the standard part template contrast map are obtained, and considering the different computational costs of different part sizes, both contrast maps are downsampled to (300, 300) in order to increase the recognition speed.
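A hedged sketch of building the two contrast maps: both binary masks are centered on a shared black square canvas and downsampled to (300, 300). The canvas side is taken as the larger mask diagonal so rotation cannot clip the contour; this diagonal rule and the nearest-neighbour resampling are assumptions standing in for the size formula used in the text and for cv2.resize:

```python
import numpy as np

def to_contrast_maps(mask_a, mask_b, out=300):
    """Center two binary contour masks on a shared square black canvas
    large enough to hold either mask under any rotation, then downsample
    both to (out, out)."""
    side = int(np.ceil(max(np.hypot(*m.shape) for m in (mask_a, mask_b))))

    def center(m):
        canvas = np.zeros((side, side), dtype=np.uint8)
        y = (side - m.shape[0]) // 2
        x = (side - m.shape[1]) // 2
        canvas[y:y + m.shape[0], x:x + m.shape[1]] = m
        return canvas

    def downsample(c):
        # Nearest-neighbour resample to out x out (stand-in for cv2.resize).
        idx = np.arange(out) * side // out
        return c[np.ix_(idx, idx)]

    return downsample(center(mask_a)), downsample(center(mask_b))
```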
Step 52: calculating the contour overlap from the comparison comprises:
The part contour contrast map to be identified obtained through affine change is rotated from 0 degrees to 360 degrees, and each time it rotates by 1 degree the contour overlap between it and the standard part template contrast map obtained through affine change is calculated, recorded as IOUa. Because asymmetric parts exhibit mirror deviation, each time the contour overlap is calculated, the contour overlap between the mirror-flipped part contour contrast map to be identified and the standard part template contrast map is also calculated, recorded as IOUma.
The maximum of the two calculation results is taken as the contour overlap calculated at the current angle, and the mirror state is recorded when the maximum comes from the mirror-flip calculation. The maximum of the contour overlaps calculated over all angles is taken as the final calculation result, the expression being:
IOU = max(0° ≤ a ≤ 360°) max(IOUa, IOUma), with IOUa = area(SL ∩ S(w+a)) / area(SL ∪ S(w+a)) and IOUma = area(SL ∩ Sm(w+a)) / area(SL ∪ Sm(w+a));
wherein IOU is the final value of the contour overlap calculation, w is the initial angle of the part contour contrast map to be identified and takes the value 0, S(w+a) and Sm(w+a) denote the maps rotated by the angle w+a, SL is the standard part template contrast map, S is the part contour contrast map to be identified, Sm is the mirror-flipped part contour contrast map to be identified, and a is the rotation angle ranging from 0 to 360 degrees.
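The rotation search with mirror checking might be sketched as below; the coordinate-rasterizing rotation is a rough stand-in for a proper image rotation (e.g. cv2.warpAffine), and the 1-degree step is exposed as a parameter for illustration:

```python
import numpy as np

def rotate_mask(mask, deg):
    """Rotate a binary mask about its center by rasterizing rotated
    foreground coordinates (a rough stand-in for cv2.warpAffine)."""
    h, w = mask.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.nonzero(mask)
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    ry = np.rint(cy + (ys - cy) * c - (xs - cx) * s).astype(int)
    rx = np.rint(cx + (ys - cy) * s + (xs - cx) * c).astype(int)
    keep = (ry >= 0) & (ry < h) & (rx >= 0) & (rx < w)
    out = np.zeros_like(mask)
    out[ry[keep], rx[keep]] = 1
    return out

def best_overlap(template, part, step=1):
    """Search rotations (and the mirror flip) of the part contrast map
    for the maximum IoU against the template contrast map."""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    best, best_state = 0.0, (0, False)
    mirrored = part[:, ::-1]                     # mirror flip
    for a in range(0, 360, step):
        for m, cand in ((False, part), (True, mirrored)):
            v = iou(template, rotate_mask(cand, a))
            if v > best:
                best, best_state = v, (a, m)     # record angle + mirror state
    return best, best_state
```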
Step 53: the step of calculating the contour similarity by comparison comprises the following steps:
The part contour contrast map to be identified at the rotation angle corresponding to the final result of the contour overlap calculation is taken and the corresponding affine change is applied; the Hu moments of the changed part contour contrast map to be identified and of the standard part template contrast map obtained by affine change are then obtained respectively.
The distance between the log-transformed Hu moments of the two maps is calculated as the contour similarity between the part contour contrast map to be identified and the standard part template contrast map, the expression being:
D(SL, S) = Σj |mLj - mj|, j = 0-6;
wherein mLj represents the log-transformed j-order Hu moment corresponding to the standard part template contrast map, obtained from hLj, the j-order Hu moment of the standard part template contrast map; mj represents the log-transformed j-order Hu moment corresponding to the part contour contrast map to be identified, obtained from hj, the j-order Hu moment of the part contour contrast map to be identified; and j takes the values 0 to 6.
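Assuming the 0-6 order Hu moments of both contrast maps are already computed (e.g. by cv2.HuMoments), the log-transformed distance can be sketched as follows; the exact log transform is not spelled out in the text, so the common sign(h)·log10|h| convention is an assumption:

```python
import math

def log_hu(h):
    """Log-transform one Hu moment as sign(h) * log10(|h|)
    (an assumed convention; zero maps to zero)."""
    if h == 0:
        return 0.0
    return (1.0 if h > 0 else -1.0) * math.log10(abs(h))

def hu_distance(hu_template, hu_part):
    """Contour similarity D(SL, S): distance between the log-transformed
    0-6 order Hu moments of the template and part contrast maps."""
    assert len(hu_template) == 7 and len(hu_part) == 7
    return sum(abs(log_hu(a) - log_hu(b))
               for a, b in zip(hu_template, hu_part))
```

A smaller distance means the two contours are more similar, which is why it is subtracted in the scoring equation of step 6.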
Step 54: judging whether the number of inner holes in the part contour map to be identified is equal to that of the standard part template map.
Step 6: a scoring equation is established according to the contour overlap, the contour similarity and the number of inner holes, and the standard part model recorded by the standard part information in the updated template use list L1 whose score is greater than the set threshold t3 and is the maximum is taken as the model of the part to be identified.
The process of updating the template use list L1 according to the set screening conditions is as follows:
Fi = IOUi - D(SLi, S) - C; L1 = (xi, Fi) if [Fi > t3];
wherein i is the index of the template use list; Fi is the score of the i-th item in the template use list; IOUi is the final calculation result of the contour overlap between the i-th item in the template and the part contour contrast map to be identified obtained by affine change; D(SLi, S) is the contour similarity between the i-th item in the template use list and the part contour contrast map to be identified at the rotation angle corresponding to the final result of the contour overlap calculation; C is a correction coefficient: if the number of inner holes of the part contour map to be identified is the same as that of the standard part template map, C is 0, otherwise a set empirical value is taken, 0.2 in this example; the threshold t3 is judged and selected according to the distribution of the parts, the range given in this example being 0.5-0.7.
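The scoring and final screening can be sketched as follows, with the candidate layout and function name as assumptions; C = 0.2 and t3 = 0.6 follow the example values in the text:

```python
def score_templates(candidates, t3=0.6, c_penalty=0.2):
    """Score each remaining template and pick the part model.

    candidates: dict model -> (iou, d, holes_match) where iou is the
    final contour overlap, d the Hu-moment contour similarity, and
    holes_match whether the inner-hole counts agree.
    Implements Fi = IOUi - D(SLi, S) - C with C = 0 when the hole
    counts match and the empirical 0.2 otherwise.
    """
    scores = {m: iou - d - (0.0 if holes else c_penalty)
              for m, (iou, d, holes) in candidates.items()}
    passed = {m: f for m, f in scores.items() if f > t3}
    if not passed:
        return None, scores      # not a part model of this batch
    return max(passed, key=passed.get), scores
```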
If, after the above condition judgment, multiple standard part information entries remain in list L1, the scores Fi of the items of L1 are compared and the maximum value is taken to determine the final part model M, the process being expressed as:
M = max(L1(xi, Fi));
Return to step 2 to start identifying the next group of parts; when identifying the next batch of parts, return to step 1 and execute the above steps again.
According to the industrial part incremental recognition method provided by the application, a standard part template library is built in advance; the part image to be identified is collected and sent into the depth instance segmentation model to separate single part contours to be identified; the contour overlap, contour similarity and number of inner holes are calculated by one-by-one comparison with the standard part contours in the standard part template library; and the model of the part to be identified and the corresponding part information are finally screened out step by step according to the established scoring equation. The method meets the recognition requirements of new part varieties and quantity changes without affecting recognition speed, effectively solves the problems of secondary parameter adjustment and repeated model training encountered by traditional machine vision and deep-learning vision during recognition, reduces the development cycle and cost, and makes subsequent system maintenance and upgrading simpler; at the same time, it significantly improves the generalization and robustness of the vision system, and similar incremental recognition problems can likewise be solved by the method provided by the application.
The above is only a preferred embodiment of the present application, and the present application is not limited to the above examples. It is to be understood that other modifications and variations which may be directly derived or contemplated by those skilled in the art without departing from the spirit and concepts of the present application are deemed to be included within the scope of the present application.

Claims (9)

1. An industrial part increment identification method based on deep learning target segmentation, which is characterized by comprising the following steps:
determining part types to be identified in the current batch, and if the corresponding standard part information can be screened out from a standard part template library according to the part types, storing all screened standard part information in a template use list; otherwise, obtaining standard part information of the part to be identified, storing the standard part information in the standard part template library, and screening out corresponding standard part information from the standard part template library according to the part model; the standard part template library stores the model numbers, template diagrams and part size information of all standard parts as the standard part information;
acquiring a part image to be identified, and sending the part image to a depth instance segmentation model for target detection segmentation to obtain a single part contour map to be identified;
calculating the size information of the part to be identified according to the part profile diagram to be identified, comparing and screening the size information with the size information of each standard part in the template use list, and deleting the standard part information which does not meet the preset condition from the template use list;
affine change is carried out on the part contour map to be identified and the updated standard part template map in the template use list, and contour overlapping degree, contour similarity and inner hole number are calculated in a one-by-one comparison mode;
establishing a scoring equation according to the contour overlapping degree, the contour similarity and the number of inner holes, and taking the model of the standard part recorded by the standard part information, which is scored more than a set threshold and is the maximum value, in the updated template use list as the model of the part to be identified;
the expression of the scoring equation is: Fi = IOUi - D(SLi, S) - C;
wherein i is an index item of the template use list; Fi is the score of the i-th item in the template use list; IOUi is the final calculation result of the contour overlap between the i-th item in the template use list and the part contour contrast map to be identified obtained by affine change; D(SLi, S) is the contour similarity between the template contrast map SLi of the i-th standard part in the template use list and the part contour contrast map S to be identified at the rotation angle corresponding to the final result of the contour overlap calculation; C is a correction coefficient: if the number of inner holes of the part contour map to be identified is the same as that of the standard part template map, C is 0, otherwise a set empirical value is taken.
2. The method for identifying industrial part increment based on deep learning object segmentation according to claim 1, wherein the depth instance segmentation model is a classification model, the classification result comprises a part surface contour and a part surface inner hole in an image, the model is realized by introducing a self-correcting convolution module to improve a Mask R-CNN backbone network, and the self-correcting convolution module operates in a manner comprising:
equally dividing channels of an input feature map of a backbone network to obtain a first feature map and a second feature map, inputting the first feature map to the self-correction convolution module to obtain a self-correction transformation feature map, carrying out convolution operation on the second feature map, and carrying out feature map channel combination on the second feature map and the self-correction transformation feature map to obtain an improved feature map.
3. The method for incremental identification of industrial parts based on deep learning object segmentation of claim 2 wherein said inputting the first signature to the self-correcting convolution module results in a self-correcting transformed signature comprising:
the first feature map sequentially undergoes mean-pooling downsampling, a first convolution operation and bilinear-interpolation upsampling transformation; the transformation result is added to the first feature map to obtain an attention feature map of the spatial layer, and Sigmoid function transformation is carried out on the attention feature map, the transformation process being:
T1 = X1 + Up(CB1(AugPool(X1))); Y1′ = σ(T1);
wherein Up() represents the bilinear interpolation upsampling transformation, CB1() represents the first convolution operation, AugPool() represents the mean-pooling downsampling transformation, σ() represents the Sigmoid function transformation, X1 is the first feature map, T1 is the attention feature map, and Y1′ is the transformation result of the first branch of the self-correcting convolution module;
the first feature map further undergoes a second convolution operation, a vector element-wise product is taken with the transformation result of the first branch, and a third convolution operation is then carried out to obtain the self-correcting transformation feature map, the transformation process being:
Y1 = CB3(Y1′ · Y1″) = CB3(Y1′ · CB2(X1));
wherein CB2() represents the second convolution operation, CB3() represents the third convolution operation, Y1″ is the transformation result of the second branch of the self-correcting convolution module, and Y1 is the self-correcting transformation feature map output by the self-correcting convolution module.
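A minimal NumPy sketch of the self-correcting transform described in claims 2-3, for a single channel map: the convolutions CB1-CB3 are stood in by caller-supplied functions, and nearest-neighbour repetition replaces bilinear upsampling for brevity, so this illustrates the data flow rather than the actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_correct_branch(x1, conv1, conv2, conv3, r=2):
    """Sketch of the self-correcting transform on one channel map x1
    (H x W). conv1..conv3 stand in for CB1..CB3; pooling is r x r
    mean pooling; upsampling is nearest-neighbour repeat instead of
    bilinear interpolation (a simplification for illustration).
    """
    h, w = x1.shape
    # AugPool: r x r mean-pooling downsample.
    pooled = x1.reshape(h // r, r, w // r, r).mean(axis=(1, 3))
    # Up(CB1(...)): 'convolve', then upsample back to H x W.
    up = np.repeat(np.repeat(conv1(pooled), r, 0), r, 1)
    t1 = x1 + up                      # attention feature map T1
    y1p = sigmoid(t1)                 # first branch: Y1' = sigma(T1)
    y1pp = conv2(x1)                  # second branch: Y1'' = CB2(X1)
    return conv3(y1p * y1pp)          # Y1 = CB3(Y1' . Y1'')
```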
4. The incremental identification method of industrial parts based on deep learning object segmentation according to claim 1, wherein the comparing and screening of the size information of the parts to be identified with the size information of each standard part in the template use list, and deleting the standard part information which does not meet the predetermined condition from the template use list, comprises:
the calculated size information of the part to be identified comprises the length and width information of the part to be identified, and small-size part contour maps to be identified output by the depth instance segmentation model are filtered out according to the length and width information of the part to be identified;
selecting a first threshold value and a second threshold value according to the length information of the part to be identified;
traversing the size information of each standard part in the template use list, calculating the length difference and the width difference of the standard part and the part to be identified, screening out the size information of the standard part with the length difference and the width difference smaller than the first threshold value, and calculating the sum of the corresponding length difference and the width difference;
and selecting a minimum value from the sums corresponding to the standard part size information obtained by screening, calculating the difference value between each sum and the minimum value, deleting the standard part size information with the difference value larger than the second threshold value from the template use list, and updating the template use list.
5. The method for incremental identification of industrial parts based on deep learning object segmentation according to claim 1, wherein affine varying the part contour map to be identified with standard part template maps in the updated template usage list comprises, for each of the standard part template maps:
mapping the part contour map to be identified to the pixel size of the standard part template map, and respectively calculating the minimum circumscribed rectangle of the mapped part contour map to be identified and the standard part template map;
affine change is carried out according to the degree of the included angle between the long side of the minimum circumscribed rectangle and the horizontal direction, generating an affine-changed contour map and an affine-changed template map whose minimum circumscribed rectangles lie in the horizontal or vertical direction;
determining the size of the affine contrast graph according to the sizes of the affine change outline graph and the affine change template graph;
and mapping the affine change outline map and the affine change template map to affine contrast maps with black background respectively to form a part outline contrast map to be identified and a standard part template contrast map.
6. The industrial part increment recognition method based on the deep learning object segmentation according to claim 5, wherein the size of the affine contrast map is determined from the sizes of the affine-changed contour map and the affine-changed template map, wherein Swidth, Sheight represent the width and height of the affine contrast map, round() represents rounding, w1, h1 represent the width and height of the affine-changed contour map, and w2, h2 represent the width and height of the affine-changed template map.
7. The method for incremental identification of industrial parts based on deep learning object segmentation of claim 1 wherein computing contour overlap by contrast comprises, for each of the master part template maps:
rotating the part contour contrast map to be identified obtained by affine change through 360 degrees from its initial position, calculating at every 1 degree of rotation the contour overlap between it and the standard part template contrast map obtained by affine change, recorded as IOUa, and calculating the contour overlap between the mirror-flipped part contour contrast map to be identified and the standard part template contrast map, recorded as IOUma;
taking the maximum of the two calculation results as the contour overlap calculated at the current angle, and recording the mirror state; taking the maximum of the contour overlaps calculated over all angles as the final calculation result, the expression being:
IOU = max(0° ≤ a ≤ 360°) max(IOUa, IOUma), with IOUa = area(SL ∩ S(w+a)) / area(SL ∪ S(w+a)) and IOUma = area(SL ∩ Sm(w+a)) / area(SL ∪ Sm(w+a));
wherein IOU is the final value of the contour overlap calculation, w is the initial angle of the part contour contrast map to be identified, SL is the standard part template contrast map, S is the part contour contrast map to be identified, Sm is the mirror-flipped part contour contrast map to be identified, and a is the rotation angle.
8. The method for incremental identification of industrial parts based on deep learning object segmentation of claim 7 wherein computing contour similarity by contrast comprises, for each of the master part template maps:
obtaining the part contour contrast map to be identified at the rotation angle corresponding to the final result of the contour overlap calculation, carrying out the corresponding affine change, and respectively obtaining the Hu moments of the changed part contour contrast map to be identified and of the standard part template contrast map obtained by affine change;
calculating the distance between the log-transformed Hu moments of the two maps as the contour similarity between the part contour contrast map to be identified and the standard part template contrast map, the expression being:
D(SL, S) = Σj |mLj - mj|, j = 0-6;
wherein mLj represents the log-transformed j-order Hu moment corresponding to the standard part template contrast map and hLj the obtained j-order Hu moment of the standard part template contrast map; mj represents the log-transformed j-order Hu moment corresponding to the part contour contrast map to be identified and hj the obtained j-order Hu moment of the part contour contrast map to be identified; and j takes the values 0 to 6.
9. The method for incremental identification of industrial parts based on deep learning object segmentation according to claim 1, wherein the obtaining standard part information of the parts to be identified comprises:
drawing a 2D top view of the part to be identified, from which the dimension marking and groove line information are removed;
converting the 2D top view into an image format, and uniformly producing a template map with the part area in white RGB(255, 255, 255) and the background area in black RGB(0, 0, 0);
calculating the size information of the part to be identified according to the drawing proportion of the template diagram and the camera calibration parameters, wherein the size information comprises the real length, the width, the perimeter and the area and the number of inner holes of the part;
and storing the size information of the part to be identified, the specified part model and the template diagram into a standard part template library.
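A sketch of registering one standard part per claim 9, assuming the template map is already a binary image and the millimetre-per-pixel scale has been derived from the drawing scale and the camera calibration parameters; the dictionary layout and function name are illustrative, and perimeter extraction is omitted:

```python
import numpy as np

def add_standard_part(library, model, template_img, scale, holes):
    """Register one standard part in the template library.

    template_img: binary template map (1 = part area, 0 = background).
    scale: millimetres per pixel (assumed precomputed from the drawing
    scale and camera calibration). Length/width come from the template's
    bounding box; area comes from the foreground pixel count.
    """
    ys, xs = np.nonzero(template_img)
    length = (xs.max() - xs.min() + 1) * scale
    width = (ys.max() - ys.min() + 1) * scale
    area = template_img.sum() * scale ** 2
    library[model] = {
        "template": template_img,
        "length": max(length, width),   # length is the longer side
        "width": min(length, width),
        "area": area,
        "holes": holes,
    }
    return library
```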
CN202210842527.3A 2022-07-18 2022-07-18 Industrial part increment identification method based on deep learning target segmentation Active CN115239657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210842527.3A CN115239657B (en) 2022-07-18 2022-07-18 Industrial part increment identification method based on deep learning target segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210842527.3A CN115239657B (en) 2022-07-18 2022-07-18 Industrial part increment identification method based on deep learning target segmentation

Publications (2)

Publication Number Publication Date
CN115239657A CN115239657A (en) 2022-10-25
CN115239657B true CN115239657B (en) 2023-11-21

Family

ID=83673541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210842527.3A Active CN115239657B (en) 2022-07-18 2022-07-18 Industrial part increment identification method based on deep learning target segmentation

Country Status (1)

Country Link
CN (1) CN115239657B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709909A (en) * 2016-12-13 2017-05-24 重庆理工大学 Flexible robot vision recognition and positioning system based on depth learning
CN109117822A (en) * 2018-08-31 2019-01-01 贵州大学 A kind of part case segmentation recognition method based on deep learning
CN110097568A (en) * 2019-05-13 2019-08-06 中国石油大学(华东) A kind of the video object detection and dividing method based on the double branching networks of space-time
CN112381788A (en) * 2020-11-13 2021-02-19 北京工商大学 Part surface defect increment detection method based on double-branch matching network
CN114494272A (en) * 2022-02-21 2022-05-13 苏州才炬智能科技有限公司 Metal part fast segmentation method based on deep learning
CN114627290A (en) * 2022-02-25 2022-06-14 中国科学院沈阳自动化研究所 Mechanical part image segmentation algorithm based on improved DeepLabV3+ network
CN114758236A (en) * 2022-04-13 2022-07-15 华中科技大学 Non-specific shape object identification, positioning and manipulator grabbing system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Method for generating G-code from industrial CT images based on adjacent-layer data matching"; Tan Chuandong; Chinese Journal of Scientific Instrument; full text *

Also Published As

Publication number Publication date
CN115239657A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108596066B (en) Character recognition method based on convolutional neural network
CN109086714B (en) Form recognition method, recognition system and computer device
CN109800824B (en) Pipeline defect identification method based on computer vision and machine learning
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN103886589B (en) Object-oriented automated high-precision edge extracting method
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN113870235A (en) Method for detecting defects of circular stamping part based on quantum firework arc edge extraction
CN108846831B (en) Band steel surface defect classification method based on combination of statistical characteristics and image characteristics
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN114549981A (en) Intelligent inspection pointer type instrument recognition and reading method based on deep learning
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN113808180B (en) Heterologous image registration method, system and device
CN112613097A (en) BIM rapid modeling method based on computer vision
CN109815923B (en) Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning
CN112184655A (en) Wide and thick plate contour detection method based on convolutional neural network
CN107871137A (en) A kind of material matching process based on image recognition
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN110363196B (en) Method for accurately recognizing characters of inclined text
CN111008649A (en) Defect detection data set preprocessing method based on three decisions
CN112381140B (en) Abrasive particle image machine learning identification method based on new characteristic parameters
CN106778777A (en) A kind of vehicle match method and system
CN115239657B (en) Industrial part increment identification method based on deep learning target segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant