CN114782778A - Assembly state monitoring method and system based on machine vision technology - Google Patents

Assembly state monitoring method and system based on machine vision technology

Info

Publication number
CN114782778A
CN114782778A
Authority
CN
China
Prior art keywords
state
fan rotor
picture
pictures
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210440344.9A
Other languages
Chinese (zh)
Other versions
CN114782778B (en)
Inventor
魏丽军
王孙康宏
姚绍文
刘婷
刘强
王满贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202210440344.9A priority Critical patent/CN114782778B/en
Publication of CN114782778A publication Critical patent/CN114782778A/en
Application granted granted Critical
Publication of CN114782778B publication Critical patent/CN114782778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An assembly state monitoring method based on a machine vision technology, applied to an aviation fan rotor assembly process, comprises the following steps. Step S1: periodically acquire state pictures of the aviation fan rotor in the assembly stage, and perform a visualization operation and a preprocessing operation on the state pictures. Step S2: input the processed state pictures into a recognition model to judge the state of the aviation fan rotor in the state pictures, and obtain the current state stage of the aviation fan rotor. Step S3: match the current state stage of the aviation fan rotor with its current assembly stage; if the two do not match, prompt that the current aviation fan rotor is installed wrongly. Templates of all assembly states are obtained through the recognition model, and state pictures of the aviation fan rotor in the assembly stage are collected in real time and matched against the templates, so that automatic judgment of the assembly state is realized and assembly accuracy is improved.

Description

Assembly state monitoring method and system based on machine vision technology
Technical Field
The invention relates to the technical field of monitoring, in particular to an assembly state monitoring method and system based on a machine vision technology.
Background
The fan rotor is an important part of an aircraft engine and is characterized by a complex structure, a large number of parts and connecting pieces, high precision requirements and high manufacturing cost. Assembly is a key process for guaranteeing the quality, performance and service life of the final product; its workload accounts for a large share, more than 40%, of aero-engine manufacturing. The fan rotor of an aircraft engine is assembled centrally in the limited space of discrete fixed stations; the process is difficult, the amount of manual operation is large, and the reliability of assembly quality and the stability of performance are hard to control. To guarantee efficient and reliable assembly quality, workers must be ensured to operate strictly according to the process standard. In the traditional assembly process, however, the state information of the assembly site cannot be sensed in real time, and workers often fail to operate according to the process standard for their own convenience, so the reliability of assembly quality and the qualification rate of products are low.
Disclosure of Invention
In view of the above-mentioned drawbacks, the present invention provides an assembly state monitoring method and system based on machine vision technology, which can realize automatic recognition and judgment of the assembly state of the equipment and improve assembly accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme: an assembly state monitoring method based on a machine vision technology, applied to an aviation fan rotor assembly process, comprises the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: and matching the state stage of the current aviation fan rotor with the assembly stage of the current aviation fan rotor, and prompting that the current aviation fan rotor is installed wrongly if the state stage of the current aviation fan rotor is not matched with the assembly stage of the current aviation fan rotor.
Preferably, the training process of the recognition template in step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
step S25: and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
Preferably, in step S25, a YOLO model is selected as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
Preferably, only one hidden layer is provided in the shared MLP.
Preferably, the loss function for the accuracy judgment of the YOLO model is as follows:
Loss = λ1·GIoU + λ2·DIoU + λ3·CIoU;
wherein λ1, λ2 and λ3 are proportionality coefficients satisfying λ1 + λ2 + λ3 = 1, GIoU is the shape loss, DIoU is the area loss and CIoU is the position loss;
wherein
GIoU = 1 - IoU + (A_E - A_U)/A_E;
DIoU = 1 - IoU + ρ²(b_t, b_gt)/E²;
CIoU = 1 - IoU + ρ²(b_t, b_gt)/E² + β·v;
wherein
v = (4/π²)·(arctan(c/d) - arctan(a/b))²;
wherein a is the length of the prediction frame of the feature vector in the state picture, b is the width of the prediction frame of the feature vector in the state picture, c is the length of the template in the YOLO model, d is the width of the template in the YOLO model, IoU is the intersection-over-union of the prediction frame and the template in the state picture, A_E and A_U are respectively the area of the minimum bounding box enclosing the prediction frame and the template and the area of their union, ρ²(b_t, b_gt) is the squared Euclidean distance between the center of the template b_t and the center of the prediction frame b_gt in the state picture, E is the diagonal length of the minimum bounding box enclosing the prediction frame and the template in the state picture, β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
An assembly state monitoring system based on a machine vision technology uses the assembly state monitoring method based on the machine vision technology, and is characterized by comprising an equipment layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor, matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and feeding back that the current aviation fan rotor of the control layer is installed wrongly if the current state stage of the aviation fan rotor is not matched with the current assembly stage of the aviation fan rotor.
Preferably, the model processing module further comprises a template training sub-module;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
and inputting the training set picture and the verification set picture into the recognition model, acquiring the characteristic vector and finishing the training of the recognition template.
Preferably, the template training sub-module further comprises a model sub-unit;
the model subunit selects a YOLO model as a recognition model to obtain the feature vectors in the training set picture and the verification set picture, wherein a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between a backbone and a neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
One of the above technical solutions has the following advantages or beneficial effects: 1. the templates of all the assembling states are obtained through the recognition model, and the state pictures of the aviation fan rotor in the assembling stage are collected in real time to be matched with the templates, so that the automatic judgment of the assembling states is realized, and the assembling accuracy is improved.
2. Compared with the shape loss GIoU used alone in traditional YOLO, the position loss CIoU of the invention adds to the loss term a penalty on the center distance between the prediction frame and the template in the state picture and on the ratio of their lengths and widths, so that the network converges faster on the prediction frame during training and obtains higher regression positioning precision.
Drawings
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Fig. 2 is a schematic structural diagram of one embodiment of the system of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings, which are simply for convenience of description and simplicity of description, and are not intended to indicate or imply that the device or element so referred to must have a particular orientation, be constructed and operated in a particular orientation, and are not to be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a direct connection or an indirect connection through an intermediary; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific case.
As shown in fig. 1-2, an assembly state monitoring method based on a machine vision technology, which is applied to an aviation fan rotor assembling process, includes the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: and matching the state stage of the current aviation fan rotor with the assembly stage of the current aviation fan rotor, and prompting that the current aviation fan rotor is installed wrongly if the state stage of the current aviation fan rotor is not matched with the assembly stage of the current aviation fan rotor.
Because the prior art detects the assembly of the aviation fan rotor by manual inspection, its efficiency is very low; moreover, because the inspection workload is large, additional inspection workers usually have to be assigned to support inspection during production. Therefore, in the method, state pictures of the aviation fan rotor in the assembly stage are periodically acquired, a visualization operation is then performed on the state pictures, and the state pictures are displayed in a form humans can observe, for example converted into a format that a display screen can show and displayed on the screen, so that workers can manually check the aviation fan rotor through the display screen. In addition, before the method is executed, a plurality of folders are created according to the assembly process of the aviation fan rotor, and each folder stores the state pictures of only one assembly stage.
After the state pictures of the aviation fan rotor are obtained, a preprocessing operation is performed on them, which includes traversing the state pictures under each folder and building an index of the assembly stage corresponding to each folder, so that state pictures taken at different time points within an assembly stage can be found quickly through the index and traced back in time.
The state pictures are input into the recognition model in index order. The recognition model is trained in advance; after a state picture is input, the recognition model outputs the current state stage of the aviation fan rotor. The state stage of the current aviation fan rotor is then matched against the assembly stage of the aviation fan rotor: if they match, the current aviation fan rotor is installed correctly and there is no installation error; if the state stage of the aviation fan rotor does not match its assembly stage, there is an installation error, and the error information is fed back to the display screen to remind the inspection worker. The following example illustrates this:
and a certain camera device acquires a state picture of the aviation fan rotor in the third assembly stage, and then the state picture of the aviation fan rotor in the third assembly stage is input into the identification model, the identification result output by the identification model is that the current state stage of the aviation fan rotor is the first assembly stage, the current state stage of the aviation fan rotor is not matched with the assembly stage during shooting, and a prompt that the current aviation fan rotor is installed wrongly is sent out.
Preferably, the training process of the recognition template in step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
step S25: and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
In the invention, the recognition model needs to be trained in advance. First, a number of pictures of the aviation fan rotor at different assembly stages are shot before training; about 100 pictures are needed for each assembly stage so that there is enough material to train the recognition model. The pictures are then divided proportionally into training set pictures and verification set pictures; in one embodiment the proportion is 7:3. Next, the assembly stage is marked on each picture so that the stage is linked to the recognition result of the recognition model, and a data enhancement operation is performed on the pictures. Data enhancement increases the number and diversity of the samples available for training, which improves the generalization ability and robustness of the recognition model and reduces the influence of extraneous factors on recognition. A sketch of this data preparation is given below.
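As a concrete illustration of steps S22 and S24, the sketch below splits the shot pictures 7:3 and applies the listed enhancement operations using OpenCV and NumPy. The parameter ranges (rotation angle, scale factor, translation offsets, noise level) are illustrative assumptions; the invention does not fix them.

```python
import random
import cv2
import numpy as np

def split_dataset(paths, train_ratio=0.7, seed=0):
    """Step S22: shuffle the state pictures and divide them proportionally
    (7:3 in one embodiment) into training and verification sets."""
    rng = random.Random(seed)
    shuffled = paths[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * train_ratio)
    return shuffled[:k], shuffled[k:]

def augment(img, rng=None):
    """Step S24: flip, rotate, scale, crop, translate and add noise.
    All parameter ranges here are illustrative, not prescribed."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    out = cv2.flip(img, 1)                                  # horizontal flip
    M = cv2.getRotationMatrix2D((w / 2, h / 2),
                                rng.uniform(-15, 15),       # rotation angle
                                rng.uniform(0.9, 1.1))      # scale factor
    out = cv2.warpAffine(out, M, (w, h))                    # rotate + scale
    tx, ty = rng.integers(-10, 11, size=2)
    T = np.float32([[1, 0, tx], [0, 1, ty]])
    out = cv2.warpAffine(out, T, (w, h))                    # translate
    out = out[h // 20: h - h // 20, w // 20: w - w // 20]   # central crop
    noise = rng.normal(0, 5, out.shape)                     # Gaussian noise
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```

In practice each augmented picture keeps the assembly-stage label of its source picture (step S23), so no re-marking is needed.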
Preferably, in step S25, a YOLO model is selected as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
During the assembly of the aviation fan rotor, workers may place parts to be installed beside the rotor, and these parts can interfere with the recognition of the recognition model. To improve recognition of the aviation fan rotor, a convolutional attention module is added to the structure of the YOLO model, whose overall structure is input → backbone → neck → head. In the present invention the convolutional attention module is fused between the backbone and the neck. The backbone of YOLO is the part most critical for feature extraction, so the convolutional attention module is fused after the backbone and before the feature fusion of the neck network. The reason is that feature extraction is completed in the backbone and prediction outputs are produced on different feature maps after feature fusion; performing attention reconstruction at this position therefore bridges the two stages most effectively. The CAM sub-module and the SAM sub-module arranged in the convolutional attention module generate a weight for each feature channel through learned parameters, modeling the importance of each channel, and then enhance or suppress different channels according to the task. In this way the intermediate feature maps of the network are reconstructed: important features are emphasized and ordinary features are suppressed, achieving the purpose of improving the target detection effect.
Preferably, only one hidden layer is provided in the shared MLP.
Providing only one hidden layer reduces the amount of computation on the feature vectors and increases the training speed, as illustrated in the sketch below.
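The convolutional attention module described above matches the CBAM structure, and a PyTorch sketch follows. The channel-reduction ratio of 16 and the 7×7 convolution in the SAM sub-module are common CBAM defaults assumed here; this description fixes neither value, only that the shared MLP has a single hidden layer.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CAM sub-module: max- and average-pool the feature map in parallel,
    pass both results through a shared MLP with one hidden layer, add the
    outputs element by element and apply sigmoid to obtain Mc."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(          # shared MLP, single hidden layer
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)     # channel attention Mc

class SpatialAttention(nn.Module):
    """SAM sub-module: max- and average-pool along the channel direction,
    stack the two maps into an intermediate tensor, convolve it, and use
    the result as the activation input of sigmoid to obtain MS."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
    def forward(self, x):
        mx, _ = torch.max(x, dim=1, keepdim=True)
        avg = torch.mean(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([mx, avg], dim=1)))  # MS

class CBAM(nn.Module):
    """Convolutional attention module fused between backbone and neck."""
    def __init__(self, channels: int):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()
    def forward(self, x):
        x = x * self.cam(x)       # reweight channels first (CAM)
        return x * self.sam(x)    # then reweight spatial positions (SAM)
```

Inserting CBAM(c) on each feature map that the backbone hands to the neck reweights the features before fusion, which is the placement described above.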
Preferably, the loss function for the accuracy judgment of the YOLO model is as follows:
Loss = λ1·GIoU + λ2·DIoU + λ3·CIoU;
wherein λ1, λ2 and λ3 are proportionality coefficients satisfying λ1 + λ2 + λ3 = 1, GIoU is the shape loss, DIoU is the area loss and CIoU is the position loss;
wherein
GIoU = 1 - IoU + (A_E - A_U)/A_E;
DIoU = 1 - IoU + ρ²(b_t, b_gt)/E²;
CIoU = 1 - IoU + ρ²(b_t, b_gt)/E² + β·v;
wherein
v = (4/π²)·(arctan(c/d) - arctan(a/b))²;
wherein a is the length of the prediction frame of the feature vector in the state picture, b is the width of the prediction frame of the feature vector in the state picture, c is the length of the template in the YOLO model, d is the width of the template in the YOLO model, IoU is the intersection-over-union of the prediction frame and the template in the state picture, A_E and A_U are respectively the area of the minimum bounding box enclosing the prediction frame and the template and the area of their union, ρ²(b_t, b_gt) is the squared Euclidean distance between the center of the template b_t and the center of the prediction frame b_gt in the state picture, E is the diagonal length of the minimum bounding box enclosing the prediction frame and the template in the state picture, β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
In the invention, the aviation fan rotor may not appear completely in the state picture at the time of shooting, or the position in which workers place it may vary, so the scale of the aviation fan rotor in the state picture is uncertain. Therefore, the shape loss, the area loss and the position loss are combined in the loss function of the invention to compensate for misjudgments of the recognition model in such situations.
The advantage of the shape loss GIoU is scale invariance: the similarity between the prediction frame and the template in the state picture is independent of their spatial scale. However, the shape loss GIoU has the problem that when the prediction frame or the template is completely enclosed by the other in the state picture, it degenerates entirely into an intersection-over-union loss; because it depends heavily on the intersection-over-union term, convergence is too slow in actual training and the accuracy of the predicted bounding box is low.
The area loss DIoU is therefore added in the method. Taking the defect of the shape loss GIoU into account, it changes the calculation to the Euclidean distance between the center points of the prediction frame and the template, thereby resolving the degradation problem of the shape loss GIoU.
The position loss CIoU adds a loss on the scale of the prediction frame on the basis of the area loss DIoU, simultaneously considering the overlapping area, the center-point distance and the aspect ratio of the prediction frame and the template in the current state picture.
Compared with the shape loss GIoU used alone in traditional YOLO, the position loss CIoU of the invention adds to the loss term a penalty on the center distance between the prediction frame and the template in the state picture and on the ratio of their lengths and widths, so that the network converges faster on the prediction frame during training and obtains higher regression positioning precision.
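Under this reading of GIoU, DIoU and CIoU, the combined loss can be sketched as follows in PyTorch for axis-aligned boxes given as (x1, y1, x2, y2) rows. The λ values shown are illustrative; the description only requires λ1 + λ2 + λ3 = 1.

```python
import math
import torch

def combined_loss(pred, target, lambdas=(0.3, 0.3, 0.4), eps=1e-7):
    """Loss = λ1·GIoU + λ2·DIoU + λ3·CIoU for (N, 4) box tensors."""
    l1, l2, l3 = lambdas
    # intersection and union
    x1 = torch.maximum(pred[:, 0], target[:, 0])
    y1 = torch.maximum(pred[:, 1], target[:, 1])
    x2 = torch.minimum(pred[:, 2], target[:, 2])
    y2 = torch.minimum(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)
    # minimum enclosing box: area A_E and squared diagonal E²
    ex1 = torch.minimum(pred[:, 0], target[:, 0])
    ey1 = torch.minimum(pred[:, 1], target[:, 1])
    ex2 = torch.maximum(pred[:, 2], target[:, 2])
    ey2 = torch.maximum(pred[:, 3], target[:, 3])
    a_e = (ex2 - ex1) * (ey2 - ey1)
    e2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    # squared center distance ρ²(b_t, b_gt)
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    giou = 1 - iou + (a_e - union) / (a_e + eps)       # shape loss
    diou = 1 - iou + rho2 / (e2 + eps)                 # area loss
    # aspect-ratio consistency v and positive weight β
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) -
                              torch.atan(wp / (hp + eps))) ** 2
    beta = v / (1 - iou + v + eps)
    ciou = diou + beta * v                             # position loss
    return (l1 * giou + l2 * diou + l3 * ciou).mean()
```

The λ weights let training trade off the three terms; any choice with λ1 + λ2 + λ3 = 1 satisfies the constraint stated above.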
An assembly state monitoring system based on a machine vision technology uses the assembly state monitoring method based on the machine vision technology, and is characterized by comprising an equipment layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor, matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and feeding back that the current aviation fan rotor of the control layer is installed wrongly if the current state stage of the aviation fan rotor is not matched with the current assembly stage of the aviation fan rotor.
Preferably, the model processing module further comprises a template training sub-module;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
Preferably, the template training submodule further comprises a model subunit;
the model subunit selects a YOLO model as a recognition model to obtain feature vectors in a training set picture and a verification set picture, wherein the YOLO model is provided with a convolution attention module, and the convolution attention module is arranged between a backbone and a tack of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. An assembly state monitoring method based on a machine vision technology, applied to an aviation fan rotor assembly process, characterized by comprising the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: and matching the state stage of the current aviation fan rotor with the assembly stage of the current aviation fan rotor, and prompting that the current aviation fan rotor is installed wrongly if the state stage of the current aviation fan rotor is not matched with the assembly stage of the current aviation fan rotor.
2. The assembly state monitoring method based on machine vision technology as claimed in claim 1, wherein the training process of the recognition template in step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
step S25: and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
3. The assembly state monitoring method based on machine vision technology as claimed in claim 2, wherein in step S25, a YOLO model is selected as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, a convolution attention module is disposed in the YOLO model, and the convolution attention module is disposed between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
4. The assembly state monitoring method based on machine vision technology as claimed in claim 3, wherein only one hidden layer is provided in the shared MLP.
5. The assembly state monitoring method based on machine vision technology as claimed in claim 3, wherein the loss function for the accuracy judgment of the YOLO model is as follows:
Loss = λ1·GIoU + λ2·DIoU + λ3·CIoU;
wherein λ1, λ2 and λ3 are proportionality coefficients satisfying λ1 + λ2 + λ3 = 1, GIoU is the shape loss, DIoU is the area loss and CIoU is the position loss;
wherein
GIoU = 1 - IoU + (A_E - A_U)/A_E;
DIoU = 1 - IoU + ρ²(b_t, b_gt)/E²;
CIoU = 1 - IoU + ρ²(b_t, b_gt)/E² + β·v;
wherein
v = (4/π²)·(arctan(c/d) - arctan(a/b))²;
wherein a is the length of the prediction frame of the feature vector in the state picture, b is the width of the prediction frame of the feature vector in the state picture, c is the length of the template in the YOLO model, d is the width of the template in the YOLO model, IoU is the intersection-over-union of the prediction frame and the template in the state picture, A_E and A_U are respectively the area of the minimum bounding box enclosing the prediction frame and the template and the area of their union, ρ²(b_t, b_gt) is the squared Euclidean distance between the center of the template b_t and the center of the prediction frame b_gt in the state picture, E is the diagonal length of the minimum bounding box enclosing the prediction frame and the template in the state picture, β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
6. An assembly state monitoring system based on machine vision technology, which uses the assembly state monitoring method based on machine vision technology as claimed in any one of claims 1-5, and is characterized by comprising an equipment layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor, matching the state stage of the current aviation fan rotor with the assembling stage of the current aviation fan rotor, and feeding back that the current aviation fan rotor of the control layer is installed wrongly if the state stage of the current aviation fan rotor is not matched with the assembling stage of the current aviation fan rotor.
7. The machine-vision-technology-based assembly state monitoring system as claimed in claim 6, wherein the model processing module further comprises a template training sub-module;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing a data enhancement operation on the pictures, wherein the data enhancement operation comprises flipping, rotating, scaling, cropping, translating and adding noise to the pictures;
and inputting the training set picture and the verification set picture into the recognition model, acquiring the characteristic vector and finishing the training of the recognition template.
8. The assembly state monitoring system based on machine vision technology as claimed in claim 7, wherein the template training sub-module further comprises a model sub-unit;
the model subunit selects a YOLO model as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, wherein a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
in the CAM sub-module, maximum pooling and average pooling are performed on the feature vectors simultaneously; the two pooled vectors are then passed through the shared MLP, the MLP outputs are added element by element, and a sigmoid function activation is applied to obtain the channel attention Mc;
in the SAM sub-module, maximum pooling and average pooling are performed sequentially along the channel direction; the results are combined into an intermediate vector, a convolution operation is performed on the intermediate vector, and the result of the convolution operation is used as the activation input of the sigmoid function to obtain the spatial attention MS.
CN202210440344.9A 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology Active CN114782778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210440344.9A CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210440344.9A CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Publications (2)

Publication Number Publication Date
CN114782778A true CN114782778A (en) 2022-07-22
CN114782778B CN114782778B (en) 2023-01-06

Family

ID=82433967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210440344.9A Active CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Country Status (1)

Country Link
CN (1) CN114782778B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070195089A1 (en) * 2006-02-17 2007-08-23 Hitachi Software Engineering Co., Ltd. Image processing system and image processing method for aerial photograph
CN106842020A (en) * 2016-12-26 2017-06-13 青岛海尔空调器有限总公司 The detection method and air-conditioner of the motor setup error of air-conditioner
US20180342069A1 (en) * 2017-05-25 2018-11-29 General Electric Company Neural network feature recognition system
CN109190575A (en) * 2018-09-13 2019-01-11 深圳增强现实技术有限公司 Assemble scene recognition method, system and electronic equipment
CN109657535A (en) * 2018-10-30 2019-04-19 银河水滴科技(北京)有限公司 Image identification method, target device and cloud platform
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN109816049A (en) * 2019-02-22 2019-05-28 青岛理工大学 A kind of assembly monitoring method, equipment and readable storage medium storing program for executing based on deep learning
CN109948207A (en) * 2019-03-06 2019-06-28 西安交通大学 A kind of aircraft engine high pressure rotor rigging error prediction technique
CN111079630A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Fault identification method for railway wagon brake beam with incorrect installation position
CN111429418A (en) * 2020-03-19 2020-07-17 天津理工大学 Industrial part detection method based on YO L O v3 neural network
CN111707458A (en) * 2020-05-18 2020-09-25 西安交通大学 Rotor monitoring method based on deep learning signal reconstruction
CN111624215A (en) * 2020-05-26 2020-09-04 戴姆勒股份公司 Method for the non-destructive testing of internal assembly defects of a part
CN112581430A (en) * 2020-12-03 2021-03-30 厦门大学 Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium
CN112503725A (en) * 2020-12-08 2021-03-16 珠海格力电器股份有限公司 Air conditioner self-cleaning control method and device and air conditioner
CN112794274A (en) * 2021-04-08 2021-05-14 南京东富智能科技股份有限公司 Safety monitoring method and system for oil filling port at bottom of oil tank truck
CN113269234A (en) * 2021-05-10 2021-08-17 青岛理工大学 Connecting piece assembly detection method and system based on target detection
CN113838013A (en) * 2021-09-13 2021-12-24 中国民航大学 Blade crack real-time detection method and device in aero-engine operation and maintenance based on YOLOv5
CN114329806A (en) * 2021-11-02 2022-04-12 上海海事大学 Engine rotor bolt assembling quality evaluation method based on BP neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Sun Zhenjun et al.: "Application of industrial robots and machine vision in the assembly of pitch bearings for wind turbine generator sets", Shanghai Electric Technology *
Zhang Lixiu et al.: "Identification of automobile part configurations based on an improved YOLO V3 algorithm", Modular Machine Tool & Automatic Manufacturing Technique *
Zhang Jun et al.: "Development of a YOLO3-based intelligent error-proofing system for piston connecting rods", Internal Combustion Engines *
Wei Zhongyu et al.: "Part assembly detection based on machine vision and deep neural networks", Modular Machine Tool & Automatic Manufacturing Technique *
Wei Kerong et al.: "Structural design and FLUENT simulation analysis of a double-cone hydrostatic bearing", Machinery *

Also Published As

Publication number Publication date
CN114782778B (en) 2023-01-06

Similar Documents

Publication Publication Date Title
CN109523518B (en) Tire X-ray defect detection method
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN112037219B (en) Metal surface defect detection method based on two-stage convolutional neural network
CN111062915A (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN110648310B (en) Weak supervision casting defect identification method based on attention mechanism
US11783474B1 (en) Defective picture generation method and apparatus applied to industrial quality inspection
CN103577831B (en) For the method and apparatus generating training pattern based on feedback
WO2020093603A1 (en) High-intensity multi-directional fdm 3d printing method based on stereoscopic vision monitoring
CN111553485A (en) View display method, device, equipment and medium based on federal learning model
CN117152484B (en) Small target cloth flaw detection method based on improved YOLOv5s
CN113396424A (en) System and method for automated material extraction
CN110119768A (en) Visual information emerging system and method for vehicle location
CN114782778B (en) Assembly state monitoring method and system based on machine vision technology
CN110956656A (en) Spindle positioning method based on depth target detection
CN111361819A (en) Wire rod tag hanging system and method
CN113269234B (en) Connecting piece assembly detection method and system based on target detection
CN117237367B (en) Spiral blade thickness abrasion detection method and system based on machine vision
CN115131334B (en) Machine learning-based aerial small hole type identification and automatic sequencing method
US20240078654A1 (en) System and method for inspection of a wind turbine blade shell part
US20240169510A1 (en) Surface defect detection model training method, and surface defect detection method and system
CN113837184B (en) Mosquito detection method, device and storage medium
CN116188374A (en) Socket detection method, device, computer equipment and storage medium
CN113222947B (en) Intelligent detection method and system for welding defects of non-metallic materials
CN116307300B (en) Production process optimization method and system for plastic pipeline
US20230316547A1 (en) Information processing apparatus, method and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant