CN116958739A - Attention mechanism-based carbon fiber channel real-time dynamic numbering method - Google Patents

Attention mechanism-based carbon fiber channel real-time dynamic numbering method

Info

Publication number
CN116958739A
Authority
CN
China
Prior art keywords
carbon fiber
network
value
channel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310752546.1A
Other languages
Chinese (zh)
Other versions
CN116958739B (en)
Inventor
古玲
武继杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Jushi Technology Co ltd
Original Assignee
Nanjing Jushi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Jushi Technology Co ltd filed Critical Nanjing Jushi Technology Co ltd
Priority to CN202310752546.1A priority Critical patent/CN116958739B/en
Publication of CN116958739A publication Critical patent/CN116958739A/en
Application granted granted Critical
Publication of CN116958739B publication Critical patent/CN116958739B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

A carbon fiber channel real-time dynamic numbering method based on an attention mechanism, relating to the technical field of image processing. Sampled data are obtained with an industrial camera; the sampled data are preprocessed, where preprocessing includes training a channel prediction network and labeling carbon fiber channel numbers in actual scenes; and the processing result is output to obtain the label numbers of the carbon fiber yarn channels. The invention uses a vision Transformer network to process carbon fiber images acquired by a line-scan industrial camera and dynamically numbers the carbon fiber yarns in the image data in real time, in order from top to bottom. Cases such as doubled or broken yarns, which lack obvious distinguishing features, are handled better.

Description

Attention mechanism-based carbon fiber channel real-time dynamic numbering method
Technical Field
The invention relates to the technical field of image processing, and in particular to an attention mechanism-based real-time dynamic numbering method for carbon fiber yarn channels.
Background
In the production of carbon fiber yarn, surface defects (broken yarn, joints, and the like) are a prominent problem and seriously threaten the quality of carbon fiber products (such as carbon fiber composite materials and carbon-fiber-reinforced concrete structures). Such defects arise because yarn that solidifies slowly retains a certain surface tackiness, or because uneven impregnation of the oiling agent causes yarns to double together; the resulting defects are difficult to locate and trace, and they seriously affect the quality and service life of carbon fiber products.
Because production environments are complex and varied, the quality of the carbon fiber yarn images acquired by a line-scan industrial camera is uneven, and the features that distinguish yarn channels are not obvious. In addition, carbon fiber yarn in a real environment changes constantly and is prone to breakage, shifts in yarn position, and the like. Traditional image processing algorithms cannot cope with such complex data, so the accuracy of channel prediction cannot be guaranteed.
Because of doubling, broken filaments, and similar problems, the algorithms on the market based on traditional image processing are currently not applicable. That is, the image data contain multiple carbon filaments that overlap or appear discontinuous in the middle; as captured by an industrial camera, these filaments often cannot be distinguished visually and show no obvious distinguishing features in the image. Conventional algorithms have difficulty processing such data. How to number doubled or broken yarns more accurately in the absence of obvious distinguishing features is therefore a problem worth discussing.
Disclosure of Invention
The invention provides a real-time carbon fiber yarn channel numbering algorithm based on an attention mechanism, which uses a Transformer to process carbon fiber images acquired by a line-scan industrial camera, outputs predictions of carbon fiber yarn and background in the images, and thereby determines and numbers the carbon fiber yarn channels.
A carbon fiber channel real-time dynamic numbering method based on an attention mechanism comprises the following steps:
Step S1: obtain sampled data with an industrial camera;
Step S2: preprocess the sampled data;
Step S3: output the processing result to obtain the label numbers of the carbon fiber yarn channels.
Preferably, the preprocessing of the sampled data in step S2 of the present invention includes step S21, training the channel prediction network, which specifically includes the following steps:
Step S211: preprocess the data; specifically:
Divide the sampled data $D = \{(x_i, y_i)\}$ into a training set $D_{train}$ and a validation set $D_{val}$; where $x_i$ is a $3 \times h \times w$ carbon fiber original image and $y_i$ is a $1 \times w$ yarn channel label whose value is 1 at yarn positions and 0 at background positions;
For the training set, apply the preprocessing operations Resize, normalization, random enhancement of color brightness and saturation, and random vertical flipping to each carbon fiber original image; each yarn channel label must be adjusted to the width of the carbon fiber original image after Resize; a bilinear interpolation algorithm is used for the Resize of the carbon fiber original image, and a bicubic interpolation algorithm is used for adjusting the label;
For the validation set, each carbon fiber original image only needs the Resize and normalization preprocessing operations, and each yarn channel label must be adjusted according to the image width after Resize;
The image sizes after Resize are the same for the training set and the validation set;
Step S212: output the classification predicted value and the position predicted value;
Step S213: calculate the loss;
Step S214: optimize the prediction network parameters;
Step S215: output the optimal prediction model.
Preferably, step S212 of the present invention, outputting the classification predicted value and the position predicted value, proceeds as follows:
Initialize the parameters, including the Linear mapping layer, the Transformer encoder network, the classification network, and the channel numbering network; the Adam optimizer; and the labeled carbon fiber data set;
Starting from the $i$-th position, take an image of size $[3, h, w]$ as the input sample, i.e. the rows from $[3, i, w]$ to $[3, i+h, w]$;
Then divide the $[3, h, w]$ input sample with a grid of size $[h, w_p]$ to obtain $k = w/w_p$ image blocks; flatten the first three dimensions to obtain a $d \times k$ input; let $3 \times h \times w_p = d$, giving the input sample at the $i$-th position $x_i^{(0)} \in R^{d \times k}$;
Linearly map the input sample, $x_i = \mathrm{Linear}(x_i^{(0)})$, and concatenate the position embedding vector $e_{pos}$ to obtain the Transformer network input $x_i \in R^{(d+1) \times k}$;
Input $x_i$ into the Transformer encoder network to obtain the encoded output $o_i = \mathrm{Transformer}(x_i) \in R^{(d+1) \times k}$;
Feed $o_i$ into the classification network $f_{cls}$ and the channel numbering network $f_{loc}$ to obtain the classification prediction $\hat{y}_{cls} \in R^{1 \times k}$ for the $k$ input blocks and the position prediction $\hat{y}_{loc} \in R^{1 \times k}$.
Preferably, the loss calculation of step S213 of the present invention proceeds as follows:
Calculate the classification prediction loss $l_{cls} = \mathrm{MSELoss}(\hat{y}_{cls}, y_{cls})$, where MSELoss is the mean square error loss and $y_{cls} \in R^{1 \times k}$ holds the $k$ ground-truth class values, each 0 or 1: the value is 1 when the area occupied by carbon fiber in the corresponding $k$-th input block exceeds a threshold (0.5), and 0 otherwise;
Calculate the position prediction loss $l_{loc} = \mathrm{MSELoss}(\hat{y}_{loc}, y_{loc})$, where $y_{loc} \in R^{1 \times k}$ is the ground-truth label of the channel numbers;
here the ground-truth value for the $k$-th image block is the index $j$ ($j = 1, \dots, N$) of the channel it belongs to, with $N$ the total number of channels;
Calculate the total loss $l = l_{cls} + l_{loc}$ and update the network parameters with the Adam optimizer, including the Linear mapping layer, the Transformer encoder network, the classification network $f_{cls}$, and the channel numbering network $f_{loc}$.
Preferably, step S215 of the present invention, outputting the optimal prediction model, proceeds as follows:
Optimize all network parameters with the Adam optimizer;
Increase the value of $i$ by 1 and repeat the whole process from step 1; when $i = w - w_p$, reset $i$ to 0, switch to the next picture, and again repeat the whole process from step 1;
Calculate the loss on the validation set, select the model with the minimum loss as the optimal prediction network, and output the final channel prediction network.
Preferably, the preprocessing of the sampled data in step S2 of the present invention further includes labeling the carbon fiber channel numbers in the actual scene, as follows:
For the test data, apply the Resize and normalization operations to each original picture;
Input the processed test image into the final channel prediction network obtained in step S215 to obtain the output prediction vector $\hat{y}_{loc}$, from which the carbon fiber channel numbers are assigned.
The method trains a vision Transformer on previously collected images and annotation data of carbon fiber yarn, and uses it to predict the channel numbers in carbon fiber yarn images. The model has few parameters, computes quickly and efficiently, and requires little post-processing; it achieves real-time dynamic numbering of carbon fiber yarn channels and has a certain generalization capability and stability.
Drawings
FIG. 1 is a flow chart of the real-time dynamic numbering method of the present invention.
FIG. 2 is a flow chart of the training phase of the present invention;
FIG. 3 is a flow chart of the prediction phase of the present invention;
FIG. 4 is a network configuration diagram of the present invention.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
as shown in fig. 1, a method for dynamically numbering carbon fiber channels in real time based on an attention mechanism comprises the following steps:
step S1: obtaining sampling data by using an industrial camera;
step S2: preprocessing the sampled data;
step S3: and outputting the processing result to obtain the marking number of the carbon fiber yarn channel.
As shown in fig. 2, the preprocessing of the sampled data in step S2 of the present invention includes step S21, training the channel prediction network, which specifically includes the following steps:
Step S211: preprocess the data; specifically:
Divide the sampled data $D = \{(x_i, y_i)\}$ into a training set $D_{train}$ and a validation set $D_{val}$; where $x_i$ is a $3 \times h \times w$ carbon fiber original image, $y_i$ is a $1 \times w$ carbon fiber yarn channel label whose value is 1 at yarn channel positions and 0 at background positions, and $N_{train}$ and $N_{val}$ are the total numbers of samples in the training set and validation set respectively;
For the training set, apply the preprocessing operations Resize, normalization, color jitter, and random vertical flipping to each carbon fiber original image; each yarn channel label must be adjusted to the width of the carbon fiber original image after Resize; a bilinear interpolation algorithm is used for the Resize of the carbon fiber original image, and a bicubic interpolation algorithm is used for adjusting the label;
Normalization is computed as $\hat{x} = (x - \mu)/\sigma$, where $\mu$ is the sample mean and $\sigma$ is the sample standard deviation.
The color jitter operates as follows: for a pixel $(r, g, b)$, assume the value of each channel lies in $[0, 1]$. ColorJitter perturbs each channel value $C_i$ by a random amount scaled by the adjustment intensity $\alpha$, typically a fraction in $[0, 1]$, using $\mathrm{rand}(-1, 1)$, a random number uniformly distributed in $[-1, 1]$, to produce the new value $\hat{C}_i$. After this operation the new color is obtained as $\mathrm{trunc}(255 \cdot \hat{C}_i)$, where $\mathrm{trunc}(\cdot)$ rounds the calculation to an integer, converting it to a pixel value on the image.
Random vertical flip: the current image sample $x_i$ is flipped vertically at random with probability $p$; here $p$ is set to 0.5.
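For illustration, a minimal numpy sketch of these three preprocessing operations follows. The multiplicative jitter form and the helper names are assumptions, since the exact jitter formula is not fully recoverable from the text above.

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """(x - mu) / sigma with the sample mean and standard deviation."""
    return (x - x.mean()) / (x.std() + 1e-8)

def color_jitter(x: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Perturb each channel of a [3, H, W] image whose values lie in [0, 1]."""
    out = x.copy()
    for c in range(out.shape[0]):
        out[c] *= 1.0 + alpha * np.random.uniform(-1.0, 1.0)  # assumed form
    out = np.clip(out, 0.0, 1.0)
    return np.trunc(out * 255.0)        # trunc() back to integer pixel values

def random_vflip(x: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Flip top-to-bottom with probability p; the 1 x w channel label indexes
    the width axis, so it is unchanged by a vertical flip."""
    return x[:, ::-1, :].copy() if np.random.rand() < p else x
```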
Resize: suppose an image of size $M \times N$ is to be resized to $m \times n$ ($m < M$, $n < N$). First compute, for each pixel of the resized image, the corresponding position in the original image: if the $(i, j)$-th pixel of the resized image corresponds to position $(x, y)$ in the original image, then $x = i \times M/m$ and $y = j \times N/n$. The bilinear interpolation algorithm and the bicubic interpolation algorithm used in Resize are respectively:
(1) Bilinear interpolation algorithm:
1. For each position $(i, j)$ of the image after Resize, calculate the coordinates of the four nearest-neighbor pixels of the corresponding position $(x, y)$ in the original image: $(x_1, y_1)$, $(x_1, y_2)$, $(x_2, y_1)$ and $(x_2, y_2)$, where $x_1$ and $y_1$ are the largest integers satisfying $x_1 \le x$ and $y_1 \le y$, and $x_2$ and $y_2$ are the smallest integers satisfying $x_2 \ge x$ and $y_2 \ge y$.
2. For each coordinate position $(i, j)$ in the resized image, the pixel value $f(i, j)$ is calculated as follows:
$f(i, j) = (1-w)(1-h) f(x_1, y_1) + w(1-h) f(x_2, y_1) + (1-w) h\, f(x_1, y_2) + w h\, f(x_2, y_2)$,
where $w = (x - x_1)/(x_2 - x_1)$ and $h = (y - y_1)/(y_2 - y_1)$ are weight coefficients.
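A direct single-channel implementation of this formula might look as follows (a sketch with explicit loops for clarity; the function name is hypothetical):

```python
import numpy as np

def resize_bilinear(img: np.ndarray, m: int, n: int) -> np.ndarray:
    """Resize an [M, N] single-channel image to [m, n] with the formula above."""
    M, N = img.shape
    out = np.empty((m, n), dtype=np.float64)
    for i in range(m):
        for j in range(n):
            x, y = i * M / m, j * N / n            # position in the original
            x1, y1 = int(x), int(y)                # largest integers <= x, y
            x2, y2 = min(x1 + 1, M - 1), min(y1 + 1, N - 1)
            w, h = x - x1, y - y1                  # x2 - x1 = 1 between pixels
            out[i, j] = ((1 - w) * (1 - h) * img[x1, y1]
                         + w * (1 - h) * img[x2, y1]
                         + (1 - w) * h * img[x1, y2]
                         + w * h * img[x2, y2])
    return out
```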
(2) Bicubic interpolation algorithm:
The pixel value $f(i, j)$ of the scaled-down image is computed from the 16 pixels around $(x, y)$. Specifically, for the $(i, j)$-th pixel:
$f(i, j) = \sum_{k=a}^{a+3} \sum_{l=b}^{b+3} g(k, l)\, w_{k-a,\, l-b}$,
where $(a, b)$ is the position of the upper-left pixel of the 16 pixels adjacent to $(i, j)$, $g(k, l)$ is the pixel value in the $k$-th row and $l$-th column, and $w_{k-a, l-b}$ is a weighting coefficient computed from the offsets of $(x, y)$ relative to $(a, b)$. Specifically, the weighting coefficient is
$w_{k-a,\, l-b} = w(s) \cdot w(t)$,
where $s = k - x + 1$ and $t = l - y + 1$, and $w(s)$ and $w(t)$ are the weighting coefficients in the $s$ and $t$ directions respectively, both computed with the same function $w(z)$.
The coefficient $w(z)$ is called the bicubic interpolation weight function; it controls the smoothness of the interpolation. When $z$ is 0, $w(z)$ is largest and the gray value of the corresponding pixel has the greatest influence on the interpolation result; as $z$ moves away from 0, $w(z)$ gradually decreases and its influence on the interpolation result becomes smaller and smaller.
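The formula for $w(z)$ is described only qualitatively above; the widely used cubic convolution kernel (with $a = -0.5$) is one concrete choice that matches this description, sketched here under that assumption:

```python
def bicubic_weight(z: float, a: float = -0.5) -> float:
    """Cubic convolution kernel: largest at z = 0 and decaying to 0 by |z| = 2,
    matching the behaviour described in the text."""
    z = abs(z)
    if z <= 1.0:
        return (a + 2.0) * z**3 - (a + 3.0) * z**2 + 1.0
    if z < 2.0:
        return a * z**3 - 5.0 * a * z**2 + 8.0 * a * z - 4.0 * a
    return 0.0
```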
For the validation set data $D_{val}$, each carbon fiber original image only needs the Resize (bilinear interpolation algorithm, as above) and normalization preprocessing operations, and each yarn channel label must be resized (bicubic interpolation algorithm, as above) according to the width of the image after Resize;
The image sizes after Resize are the same for the training set and the validation set;
Step S212: output the classification predicted value and the position predicted value; the specific process is as follows:
Initialize the parameters, including the Linear mapping layer, the Transformer encoder network, the classification network, and the channel numbering network; the Adam optimizer; and the carbon fiber original images with their corresponding yarn channel labels; as shown in fig. 4,
Starting from the $i$-th position, take an image of size $[3, h, w]$ as the input sample, i.e. the rows from $[3, i, w]$ to $[3, i+h, w]$;
Then divide the $[3, h, w]$ input sample with a grid of size $[h, w_p]$ to obtain $k = w/w_p$ image blocks; flatten the first three dimensions to obtain a $d \times k$ input; let $3 \times h \times w_p = d$, giving the input sample at the $i$-th position $x_i^{(0)} \in R^{d \times k}$, where $k$ is the total number of image blocks and $d$ is the dimension of each image block after flattening; $R^{d \times k}$ denotes the real vector space of $d$-by-$k$ matrices. If the width $w_p$ of each image block is set to 5, labeling one picture yields $k = w/w_p$ training samples, so the total number of pictures that must be labeled can be kept within 100.
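A short PyTorch sketch of this grid partition follows (the helper name is hypothetical):

```python
import torch

def partition(sample: torch.Tensor, w_p: int = 5) -> torch.Tensor:
    """Split a [3, h, w] window into k = w // w_p blocks of size [h, w_p] per
    channel and flatten each block to d = 3*h*w_p values, giving a [d, k] input."""
    c, h, w = sample.shape
    k = w // w_p
    blocks = sample[:, :, : k * w_p].reshape(c, h, k, w_p)  # grid along the width
    return blocks.permute(2, 0, 1, 3).reshape(k, -1).T      # [d, k]
```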
Linearly map the input sample, $x_i = \mathrm{Linear}(x_i^{(0)})$, and concatenate the position embedding vector $e_{pos}$ to obtain the Transformer network input $x_i \in R^{(d+1) \times k}$;
Input $x_i$ into the Transformer encoder network to obtain the encoded output $o_i = \mathrm{Transformer}(x_i) \in R^{(d+1) \times k}$;
Feed the encoded output $o_i$ into the classification network $f_{cls}$ and the channel numbering network $f_{loc}$ to obtain the classification prediction for the $k$ input blocks and the position prediction. Specifically, $\hat{y}_{cls} = f_{cls}(o_i) = \sigma(W_{cls}\, o_i + b_{cls})$ and $\hat{y}_{loc} = f_{loc}(o_i) = \sigma(W_{loc}\, o_i + b_{loc})$, where $W_{cls}$ and $b_{cls}$ are the weight parameters and bias of the classification network $f_{cls}$, $W_{loc}$ and $b_{loc}$ are the weight parameters and bias of the channel numbering network $f_{loc}$, and $\sigma(x)$ is the ReLU activation function, calculated as $\sigma(x) = \max(0, x)$.
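A minimal PyTorch sketch of this forward pass follows. The class name, the zero-initialized position row, the single attention head, and the depth of 2 are illustrative assumptions; the ReLU heads and the $(d+1) \times k$ dimensions follow the formulas above.

```python
import torch
import torch.nn as nn

class ChannelPredictor(nn.Module):
    """Sketch of the prediction network: Linear mapping, a position embedding
    row concatenated to give (d+1) x k, a shallow Transformer encoder, and the
    two heads f_cls / f_loc."""
    def __init__(self, d: int, k: int, depth: int = 2):
        super().__init__()
        self.linear = nn.Linear(d, d)                   # Linear mapping layer
        self.e_pos = nn.Parameter(torch.zeros(1, k))    # position embedding row
        layer = nn.TransformerEncoderLayer(d_model=d + 1, nhead=1)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.f_cls = nn.Linear(d + 1, 1)                # classification head
        self.f_loc = nn.Linear(d + 1, 1)                # channel numbering head

    def forward(self, x0: torch.Tensor):
        # x0: [d, k] flattened image blocks at position i
        x = self.linear(x0.T).T                         # Linear over the d axis
        x = torch.cat([x, self.e_pos], dim=0)           # [(d+1), k]
        o = self.encoder(x.T.unsqueeze(1)).squeeze(1)   # k tokens of width d+1
        y_cls = torch.relu(self.f_cls(o)).squeeze(-1)   # [k], sigma = ReLU
        y_loc = torch.relu(self.f_loc(o)).squeeze(-1)   # [k]
        return y_cls, y_loc
```

Here `model = ChannelPredictor(d=3*h*w_p, k=w//w_p)` would mirror the dimensions defined above.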
Step S213: calculate the loss; the specific process is as follows:
Calculate the classification prediction loss $l_{cls} = \mathrm{MSELoss}(\hat{y}_{cls}, y_{cls})$, where MSELoss is the mean square error loss and $y_{cls} \in R^{1 \times k}$ holds the ground-truth class values of the $k$ image blocks, each 0 or 1: the value is 1 when the area occupied by carbon fiber channels in the corresponding $k$-th input block exceeds a threshold (0.5), and 0 otherwise;
Calculate the position prediction loss $l_{loc} = \mathrm{MSELoss}(\hat{y}_{loc}, y_{loc})$, where $y_{loc} \in R^{1 \times k}$ is the ground-truth label of the channel numbers;
here the ground-truth value for each image block is the index $j$ ($j = 1, \dots, N$) of the channel it belongs to, with $j$ counted from left to right and $N$ the total number of channels. Calculate the total loss $l = l_{cls} + l_{loc}$ and update the network parameters with the Adam optimizer, including the Linear mapping layer, the Transformer encoder network, the classification network $f_{cls}$, and the channel numbering network $f_{loc}$. The classification network serves to assist learning; in the test stage only the channel numbering network is used for channel prediction.
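As a sketch, one Adam update step mirroring $l = l_{cls} + l_{loc}$ might look as follows (the function name is hypothetical and reuses the model sketched above):

```python
import torch

def training_step(model, optimizer, x0, y_cls, y_loc):
    """One optimization step on a single window: total loss = l_cls + l_loc."""
    mse = torch.nn.MSELoss()
    pred_cls, pred_loc = model(x0)
    loss = mse(pred_cls, y_cls) + mse(pred_loc, y_loc)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# optimizer = torch.optim.Adam(model.parameters())  # optimizer named in the text
```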
Step S214: optimize the prediction network parameters;
Step S215: output the optimal prediction model; the specific process is as follows:
Optimize all network parameters with the Adam optimizer;
Increase the value of $i$ by 1 and repeat the whole process from step 1; when $i = w - w_p$, reset $i$ to 0, switch to the next picture, and again repeat the whole process from step 1;
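A sketch of this sliding-window loop, reusing the partition() and training_step() helpers above (the stop index $w - w_p$ is quoted from the text as-is):

```python
def train_on_picture(model, optimizer, pic, y_cls, y_loc, h, w_p):
    """Step the window start i by 1, training on each [3, i:i+h, :] crop;
    after the stop index, i is reset to 0 for the next picture."""
    w = pic.shape[2]
    for i in range(0, w - w_p):
        window = pic[:, i : i + h, :]      # input sample at position i
        x0 = partition(window, w_p)
        training_step(model, optimizer, x0, y_cls, y_loc)
```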
Calculate the loss on the validation set, select the model with the minimum loss as the optimal prediction network, and output the final channel prediction network.
As shown in fig. 3, the preprocessing of the sampled data in step S2 of the present invention further includes labeling the carbon fiber channel numbers in the actual scene; the specific process is as follows:
For the test data $D_{test}$, apply the Resize and normalization operations to each original picture;
Input the processed test image into the final channel prediction network obtained in step S215 to obtain the output prediction vector $\hat{y}_{loc}$. Scanning from left to right, whenever one or more consecutive values exceed the threshold (0.5), the current position or contiguous region is judged to be a new carbon fiber yarn and the yarn number is increased by 1 (initial value 0). The carbon fiber channel numbers are assigned accordingly.
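A minimal sketch of this left-to-right numbering rule (the function name is hypothetical):

```python
import torch

def number_channels(y_loc_hat: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
    """Scan the 1 x k prediction vector from left to right; each maximal run of
    consecutive values above the threshold is judged to be one new carbon fiber
    yarn and receives the next number (initial value 0, so the first yarn is 1)."""
    numbers = torch.zeros(y_loc_hat.numel(), dtype=torch.long)
    current, inside = 0, False
    for idx, v in enumerate(y_loc_hat.flatten().tolist()):
        if v > thresh:
            if not inside:          # a new contiguous region begins here
                current += 1
                inside = True
            numbers[idx] = current
        else:
            inside = False
    return numbers                   # 0 marks background columns
```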
The invention uses a line-scan industrial camera, and trains a vision Transformer on previously collected images and annotation data of carbon fiber yarn to predict the channel numbers in carbon fiber yarn images. The model has few parameters, computes quickly and efficiently, and requires little post-processing; it achieves real-time dynamic numbering of carbon fiber yarn channels and has a certain generalization capability and stability. With only a shallow vision Transformer structure, the invention can number the yarn channels of an image directly; the overall model has few parameters, low hardware cost, and high computational efficiency, and can number the carbon fiber yarn channels in real time.

Claims (6)

1. A carbon fiber channel real-time dynamic numbering method based on an attention mechanism, characterized by comprising the following steps:
Step S1: obtaining sampled data with an industrial camera;
Step S2: preprocessing the sampled data;
Step S3: outputting the processing result to obtain the label numbers of the carbon fiber yarn channels.
2. The attention mechanism-based carbon fiber channel real-time dynamic numbering method according to claim 1, wherein the preprocessing of the sampled data in step S2 includes step S21, training the channel prediction network; the specific process is as follows:
Step S211: preprocess the data; specifically:
Divide the sampled data $D = \{(x_i, y_i)\}$ into a training set $D_{train}$ and a validation set $D_{val}$; where $x_i$ is a $3 \times h \times w$ carbon fiber original image and $y_i$ is a $1 \times w$ yarn channel label whose value is 1 at yarn positions and 0 at background positions;
For the training set, apply the preprocessing operations Resize, normalization, random enhancement of color brightness and saturation, and random vertical flipping to each carbon fiber original image; each yarn channel label must be adjusted to the width of the carbon fiber original image after Resize; a bilinear interpolation algorithm is used for the Resize of the carbon fiber original image, and a bicubic interpolation algorithm is used for adjusting the label;
For the validation set, each carbon fiber original image only needs the Resize and normalization preprocessing operations, and each yarn channel label must be adjusted according to the image width after Resize;
The image sizes after Resize are the same for the training set and the validation set;
Step S212: output the classification predicted value and the position predicted value;
Step S213: calculate the loss;
Step S214: optimize the prediction network parameters;
Step S215: output the optimal prediction model.
3. The attention mechanism-based carbon fiber channel real-time dynamic numbering method according to claim 2, wherein step S212 outputs the classification predicted value and the position predicted value; the specific process is as follows:
Initialize the parameters, including the Linear mapping layer, the Transformer encoder network, the classification network, and the channel numbering network; the Adam optimizer; and the labeled carbon fiber data set;
Starting from the $i$-th position, take an image of size $[3, h, w]$ as the input sample, i.e. the rows from $[3, i, w]$ to $[3, i+h, w]$;
Then divide the $[3, h, w]$ input sample with a grid of size $[h, w_p]$ to obtain $k = w/w_p$ image blocks; flatten the first three dimensions to obtain a $d \times k$ input; let $3 \times h \times w_p = d$, giving the input sample at the $i$-th position $x_i^{(0)} \in R^{d \times k}$;
Linearly map the input sample, $x_i = \mathrm{Linear}(x_i^{(0)})$, and concatenate the position embedding vector $e_{pos}$ to obtain the Transformer network input $x_i \in R^{(d+1) \times k}$;
Input $x_i$ into the Transformer encoder network to obtain the encoded output $o_i = \mathrm{Transformer}(x_i) \in R^{(d+1) \times k}$;
Feed $o_i$ into the classification network $f_{cls}$ and the channel numbering network $f_{loc}$ to obtain the classification prediction $\hat{y}_{cls} \in R^{1 \times k}$ for the $k$ input blocks and the position prediction $\hat{y}_{loc} \in R^{1 \times k}$.
4. The attention mechanism-based carbon fiber channel real-time dynamic numbering method according to claim 3, wherein the loss calculation of step S213 proceeds as follows:
Calculate the classification prediction loss $l_{cls} = \mathrm{MSELoss}(\hat{y}_{cls}, y_{cls})$, where MSELoss is the mean square error loss and $y_{cls} \in R^{1 \times k}$ holds the $k$ ground-truth class values, each 0 or 1: the value is 1 when the area occupied by carbon fiber in the corresponding $k$-th input block exceeds a threshold (0.5), and 0 otherwise;
Calculate the position prediction loss $l_{loc} = \mathrm{MSELoss}(\hat{y}_{loc}, y_{loc})$, where $y_{loc} \in R^{1 \times k}$ is the ground-truth label of the channel numbers;
here the ground-truth value for the $k$-th image block is the index $j$ ($j = 1, \dots, N$) of the channel it belongs to, with $N$ the total number of channels;
Calculate the total loss $l = l_{cls} + l_{loc}$ and update the network parameters with the Adam optimizer, including the Linear mapping layer, the Transformer encoder network, the classification network $f_{cls}$, and the channel numbering network $f_{loc}$.
5. The attention mechanism-based carbon fiber channel real-time dynamic numbering method according to claim 4, wherein step S215, outputting the optimal prediction model, proceeds as follows:
Optimize all network parameters with the Adam optimizer;
Increase the value of $i$ by 1 and repeat the whole process from step 1; when $i = w - w_p$, reset $i$ to 0, switch to the next picture, and again repeat the whole process from step 1;
Calculate the loss on the validation set, select the model with the minimum loss as the optimal prediction network, and output the final channel prediction network.
6. The attention mechanism-based carbon fiber channel real-time dynamic numbering method according to claim 5, wherein the preprocessing of the sampled data in step S2 further includes labeling the carbon fiber channel numbers in the actual scene; the specific process is as follows:
For the test data $D_{test}$, apply the Resize and normalization operations to each original picture;
Input the processed test image into the final channel prediction network obtained in step S215 to obtain the output prediction vector $\hat{y}_{loc}$, from which the carbon fiber channel numbers are assigned.
CN202310752546.1A 2023-06-25 2023-06-25 Attention mechanism-based carbon fiber channel real-time dynamic numbering method Active CN116958739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310752546.1A CN116958739B (en) 2023-06-25 2023-06-25 Attention mechanism-based carbon fiber channel real-time dynamic numbering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310752546.1A CN116958739B (en) 2023-06-25 2023-06-25 Attention mechanism-based carbon fiber channel real-time dynamic numbering method

Publications (2)

Publication Number Publication Date
CN116958739A true CN116958739A (en) 2023-10-27
CN116958739B CN116958739B (en) 2024-06-21

Family

ID=88445312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310752546.1A Active CN116958739B (en) 2023-06-25 2023-06-25 Attention mechanism-based carbon fiber channel real-time dynamic numbering method

Country Status (1)

Country Link
CN (1) CN116958739B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6375875B1 (en) * 2000-01-27 2002-04-23 Ut-Battelle, Llc Diagnostic monitor for carbon fiber processing
CN110175985A (en) * 2019-04-22 2019-08-27 国网江苏省电力有限公司电力科学研究院 Carbon fiber composite core wire damage detecting method, device and computer storage medium
CN110781729A (en) * 2019-09-16 2020-02-11 长安大学 Evaluation model and evaluation method for fiber dispersibility of carbon fiber reinforced cement-based material
US20210166350A1 (en) * 2018-07-17 2021-06-03 Xi'an Jiaotong University Fusion network-based method for image super-resolution and non-uniform motion deblurring
CN113673489A (en) * 2021-10-21 2021-11-19 之江实验室 Video group behavior identification method based on cascade Transformer
WO2022009596A1 (en) * 2020-07-08 2022-01-13 国立大学法人愛媛大学 Device for inspecting composite material, method for inspecting composite material, and program for inspecting composite material
CN114140687A (en) * 2021-11-22 2022-03-04 浙江省轻工业品质量检验研究院 Wool and cashmere fiber identification method based on improved Mask R-CNN neural network
CN114387269A (en) * 2022-03-22 2022-04-22 南京矩视科技有限公司 Fiber yarn defect detection method based on laser
JP2022155690A (en) * 2021-03-31 2022-10-14 キヤノン株式会社 Image processing device, image processing method, and program
CN115294077A (en) * 2022-08-10 2022-11-04 东华大学 Textile fiber nondestructive testing method, device and storage medium
CN115358977A (en) * 2022-08-11 2022-11-18 南京耘瞳科技有限公司 Carbon filament surface defect detection method based on deep learning
CN115830323A (en) * 2022-12-08 2023-03-21 浙江理工大学 Deep learning segmentation method for carbon fiber composite material data set
CN115984215A (en) * 2022-12-29 2023-04-18 南京矩视科技有限公司 Fiber bundle defect detection method based on twin network
CN116012344A (en) * 2023-01-29 2023-04-25 东北林业大学 Cardiac magnetic resonance image registration method based on mask self-encoder CNN-transducer
CN116051410A (en) * 2023-01-18 2023-05-02 内蒙古工业大学 Wool cashmere fiber surface morphology structure diagram identification method based on image enhancement
CN116091830A (en) * 2023-02-07 2023-05-09 广东技术师范大学 Multi-view image classification method based on global filtering module
CN116189139A (en) * 2022-12-16 2023-05-30 重庆邮电大学 Traffic sign detection method based on Transformer
WO2023092813A1 (en) * 2021-11-25 2023-06-01 苏州大学 Swin-transformer image denoising method and system based on channel attention


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ALEXEY DOSOVITSKIY ET AL.: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", arXiv:2010.11929v2, 3 June 2021, pages 1-22 *
MARTIN SZARSKI ET AL.: "An unsupervised defect detection model for a dry carbon fiber textile", Journal of Intelligent Manufacturing, vol. 33, 6 June 2022, pages 2075-2092 *
XU BAOTENG ET AL.: "Fiber bundle image restoration using conditional generative adversarial network", Conference on AI in Optics and Photonics, 30 November 2020, pages 1-5, XP060135113, DOI: 10.1117/12.2579861 *
YA-LAN TAN ET AL.: "Image processing technology in broken viscose filament automatic detection system", 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
LI SHUXIANG ET AL.: "Interpretation of national carbon fiber standards and considerations on product serialization" (in Chinese), New Material Industry, no. 2, 15 May 2012, pages 15-19 *
SHEN DING: "Research on a machine-vision-based method for detecting the angle of carbon fiber tows" (in Chinese), China Master's Theses Full-text Database (Engineering Science and Technology I), no. 3, pages 016-1117
ZHAO LINKUN ET AL.: "Defect detection of carbon fiber woven fabric based on improved Faster RCNN" (in Chinese), Cotton Textile Technology, no. 2, 28 February 2023, pages 48-54 *

Also Published As

Publication number Publication date
CN116958739B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN109919032B (en) Video abnormal behavior detection method based on motion prediction
CN111968150B (en) Weak surveillance video target segmentation method based on full convolution neural network
CN112381788B (en) Part surface defect increment detection method based on double-branch matching network
CN111654698B (en) Fast CU partition decision method for H.266/VVC
CN112669324B (en) Rapid video target segmentation method based on time sequence feature aggregation and conditional convolution
CN112291562A (en) Fast CU partition and intra mode decision method for H.266/VVC
CN112215859B (en) Texture boundary detection method based on deep learning and adjacency constraint
CN111666852A (en) Micro-expression double-flow network identification method based on convolutional neural network
CN117237279A (en) Blind quality evaluation method and system for non-uniform distortion panoramic image
CN116030361A (en) CIM-T architecture-based high-resolution image change detection method
CN117115152B (en) Steel strand production monitoring method based on image processing
CN109308709B (en) Vibe moving target detection algorithm based on image segmentation
CN116958739B (en) Attention mechanism-based carbon fiber channel real-time dynamic numbering method
CN116152699B (en) Real-time moving target detection method for hydropower plant video monitoring system
CN109583584B (en) Method and system for enabling CNN with full connection layer to accept indefinite shape input
CN112446245A (en) Efficient motion characterization method and device based on small displacement of motion boundary
CN114373109B (en) Natural image matting method and matting device based on deep learning
CN114596433A (en) Insulator identification method
CN112070851B (en) Index map prediction method based on genetic algorithm and BP neural network
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image
CN114565764A (en) Port panorama sensing system based on ship instance segmentation
CN113570611A (en) Mineral real-time segmentation method based on multi-feature fusion decoder
CN116503615B (en) Convolutional neural network-based carbon fiber channel identification and numbering method
CN111432208A (en) Method for determining intra-frame prediction mode by using neural network
CN111488907A (en) Robust image identification method based on dense PCANet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant