CN111402320A - Fiber section diameter detection method based on deep learning - Google Patents

Fiber section diameter detection method based on deep learning Download PDF

Info

Publication number
CN111402320A
CN111402320A (application CN202010183578.0A)
Authority
CN
China
Prior art keywords
fiber
neural network
mask
network model
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010183578.0A
Other languages
Chinese (zh)
Inventor
徐运海
董兰兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing United Vision Technology Co ltd
Original Assignee
Beijing United Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing United Vision Technology Co ltd filed Critical Beijing United Vision Technology Co ltd
Priority to CN202010183578.0A priority Critical patent/CN111402320A/en
Publication of CN111402320A publication Critical patent/CN111402320A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Abstract

The invention provides a fiber cross-section diameter detection method based on deep learning. The method comprises: creating a convolutional neural network model using pre-trained parameters, where the model is a Mask R-CNN model whose convolution kernels are rectangular kernels with unequal side lengths; obtaining a training set and a validation set of fiber images and training the model on them to obtain a trained model; determining a fiber image to be detected and identifying it with the trained model to obtain a shape mask for each fiber cross-section in the image, where the training, validation, and to-be-identified images are all 1024 × 1024 fiber images; and calculating the parameters of each cross-section mask contour from the shape masks, the parameters comprising the area, perimeter, and diameter of each contour.

Description

Fiber section diameter detection method based on deep learning
Technical Field
The invention belongs to the technical field of fiber detection, and particularly relates to a fiber section diameter detection method based on deep learning.
Background
In the current prior art, the fiber bundle to be measured is cut into millimeter-scale segments, the segments are scattered onto a glass slide by a dispersing device, and the slide is placed under a microscope objective and imaged by a camera. The diameters of a large number of fiber segments in the image are then measured by classical digital image methods; in the textile field this process is conventionally called longitudinal measurement. A critical problem is that when many fiber segments are scattered on the slide there are numerous crossings, bends, and partial overlaps, which classical image algorithms handle poorly. The result is many repeated measurements (a crossing fiber is split into segments that are not necessarily regrouped as one fiber) and merged measurements (two overlapping fibers counted as one), which are in effect spurious data. Data from such a system therefore carry a degree of randomness, and the error grows for samples with greater fiber-diameter dispersion. Furthermore, in scattered-fiber acquisition only about 10 fibers on average can be measured per field of view, so a large-capacity test of thousands of fibers requires moving the stage across hundreds of fields, which is inefficient. In addition, for fibers whose cross-section is not close to circular, the measured diameter also carries a certain bias.
Therefore, how to accurately and efficiently detect the diameters of a large number of fibers is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide, in view of the defects of the prior art, a fiber cross-section diameter detection method based on deep learning, which uses a Mask R-CNN model to automatically learn fiber features and accurately calculate the fiber cross-section diameter (area).
The embodiment of the invention provides a fiber section diameter detection method based on deep learning, which comprises the following steps:
step 1, creating a convolutional neural network model by using pre-trained parameters; the convolutional neural network model is a Mask R-CNN model, and its convolution kernels are rectangular convolution kernels with unequal side lengths;
step 2, acquiring a training set and a verification set of the fiber picture, and training the convolutional neural network model through the training set and the verification set of the fiber picture to obtain a trained convolutional neural network model;
step 3, determining a fiber picture to be detected, and identifying the fiber picture to be detected with the trained convolutional neural network model to obtain a shape mask of each fiber cross-section in the picture, wherein the pictures in the training set, the validation set, and to be identified are all 1024 × 1024 fiber pictures;
step 4, calculating the parameters of the mask contour of each cross section based on the shape mask of the fiber cross section; the parameters include area, perimeter, diameter of the cross-section mask outline.
Compared with the prior art, the invention has the beneficial effects that:
1. the cross-sectional area of the fiber can be accurately obtained regardless of the cross-sectional shape of the fiber, so that the accurate equivalent diameter can be calculated.
2. In cross-section mode, the fibers in the field of view do not cross, so the problem of repeated measurement does not arise.
3. The number density of fibers in a field of view is high: usually several hundred fibers in one field of view are available for measurement, a great efficiency gain over the longitudinal method.
4. Based on an AI neural network algorithm, the method solves the problem, difficult for traditional digital image algorithms, of accurately segmenting adjacent cross-sections, laying the foundation for cross-section computation.
Drawings
FIG. 1 is a flow chart of a method for detecting fiber section diameter based on deep learning according to the present invention;
FIG. 2 is a simplified diagram of the Mask R-CNN structure of the present invention.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Referring to fig. 1, the present embodiment provides a method for detecting a fiber section diameter based on deep learning, including:
step S1, creating a convolutional neural network model by using parameters pre-trained on ImageNet; the convolutional neural network model is a Mask R-CNN model, and its convolution kernels are rectangular convolution kernels with unequal side lengths;
step S2, acquiring a training set and a verification set of the fiber picture, and training the convolutional neural network model through the training set and the verification set of the fiber picture to obtain a trained convolutional neural network model;
step S3, determining a fiber picture to be detected, and identifying the fiber picture to be detected with the trained convolutional neural network model to obtain a shape mask of each fiber cross-section in the picture, wherein the pictures in the training set, the validation set, and to be identified are all 1024 × 1024 fiber pictures;
step S4, calculating the parameters of each cross section mask outline based on the shape mask of the fiber cross section; the parameters include area, perimeter, diameter of the cross-section mask outline.
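As a concrete illustration of step S4, the contour parameters can be computed directly from a binary cross-section mask. The sketch below is a minimal pure-NumPy version; the `um_per_px` scale factor and the boundary-pixel perimeter estimate are illustrative assumptions, not part of the patent, and a production system would typically use a contour-based image library instead:

```python
import numpy as np

def mask_metrics(mask: np.ndarray, um_per_px: float = 1.0):
    """Area, rough perimeter, and equivalent diameter of a binary
    cross-section mask (1 = fiber, 0 = background)."""
    mask = mask.astype(bool)
    area_px = int(mask.sum())
    # Rough perimeter: count foreground pixels with at least one
    # 4-connected background neighbour (i.e. boundary pixels).
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter_px = int((mask & ~interior).sum())
    # Equivalent diameter: diameter of the circle with the same area.
    d_eq = 2.0 * np.sqrt(area_px / np.pi) * um_per_px
    return area_px * um_per_px ** 2, perimeter_px * um_per_px, d_eq
```

The equivalent diameter is what links the "diameter" and "area" outputs used interchangeably throughout the description: it is well defined even for non-circular cross-sections.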
The Mask R-CNN model consists of 5 parts: a feature extraction network, a feature combination network, a region proposal network (RPN), a region feature aggregation layer (RoIAlign), and the functional (head) networks, as shown in fig. 2.
The feature extraction network is the backbone of the deep neural network and accounts for most of the model's computation. Different feature extraction networks can be chosen according to the application requirements. Taking ResNet50 as an example, the outputs of its 4 residual blocks give 4 feature maps, denoted C2, C3, C4, C5, representing features at different depths of the image. The feature combination network recombines features of different depths, so that each newly generated feature map contains feature information from several depths. Mask R-CNN uses an FPN to combine the features C2, C3, C4, C5 into new feature maps P2, P3, P4, P5, P6. With U6 = 0, for i = 5, 4, 3, 2 the combination process is given by formula (1):

P'i = sum(conv(Ci), Ui+1),   Ui = upsample(P'i),
Pi = conv(P'i) (i = 2, …, 5),   P6 = pooling(P5)      (1)
Wherein: conv denotes the convolution operation, sum denotes element-wise summation, upsample denotes the upsampling operation that doubles the feature's length and width, and pooling denotes max pooling with a stride of 2.
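The top-down feature combination can be sketched as follows; for brevity the learned conv stages are replaced by identity (an illustrative assumption — in the real model each stage is a learned convolution over multi-channel maps):

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling: length and width both become 2x.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def maxpool2(x):
    # 2x2 max pooling with stride 2.
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def fpn_combine(C):
    """C = {2: C2, 3: C3, 4: C4, 5: C5} backbone maps, each half the
    size of the previous; returns the combined maps P2..P6."""
    P = {5: C[5]}
    for i in (4, 3, 2):                      # top-down pathway
        P[i] = C[i] + upsample2x(P[i + 1])   # sum(conv(Ci), upsample of Pi+1)
    P[6] = maxpool2(P[5])                    # P6 = pooling(P5)
    return P
```

Each output map thus mixes its own depth's features with everything above it, which is what lets small and large cross-sections share one detector.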
The function of the region proposal network is to compute, from the feature maps, candidate boxes that can represent object positions in the image; the anchor technique is adopted to implement region proposal. The RPN in Mask R-CNN regresses, for every feature vector in each of the 5 feature maps P2, P3, P4, P5, P6, a 5n-dimensional vector describing correction values for n anchors, where each anchor's correction values comprise Δx, Δy, Δh, Δw, and p, with p the foreground confidence. An anchor is a preset box: for every point of P2, P3, P4, P5, P6, several anchors of different widths and heights are preset, centered on that point's coordinates. The center, width, and height of each anchor are then corrected using the correction values obtained from the RPN regression, yielding a new box. Formula (2) gives the anchor correction calculation.
x' = x + w·Δx,   y' = y + h·Δy,   w' = w·exp(Δw),   h' = h·exp(Δh)      (2)
Wherein: x and y represent the coordinates of the anchor center, and w and h represent the anchor's width and height, respectively; the primed values are the corrected box.
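A minimal sketch of the anchor correction step, using the standard Mask R-CNN parameterization (center shifted proportionally to the anchor size, width and height scaled exponentially — consistent with the correction values Δx, Δy, Δw, Δh named above):

```python
import numpy as np

def refine_anchor(anchor, delta):
    """Apply RPN regression deltas (dx, dy, dw, dh) to an anchor (x, y, w, h),
    returning the corrected box center and size."""
    x, y, w, h = anchor
    dx, dy, dw, dh = delta
    return (x + w * dx,          # shift center proportionally to anchor size
            y + h * dy,
            w * np.exp(dw),      # scale width/height exponentially,
            h * np.exp(dh))      # so predicted sizes stay positive
```

The exponential keeps corrected widths and heights strictly positive regardless of what the regressor outputs.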
After anchor correction a large number of boxes have been generated; more accurate candidate boxes are then selected by non-maximum suppression (NMS) according to each box's p value.
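The NMS selection can be sketched as the standard greedy IoU suppression (the 0.5 IoU threshold below is a common default, not a value stated in this patent):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of x1, y1, x2, y2; returns indices of kept boxes,
    highest-scoring first, suppressing boxes that overlap a kept box too much."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]   # drop heavily overlapping boxes
    return keep
```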
After obtaining the candidate boxes, the conventional approach would crop the corresponding region from the original image according to each candidate box's position and then classify and segment that region. However, considering that the inputs required by the functional networks are all derived from the feature maps P2, P3, P4, P5, P6, the RoIAlign algorithm is adopted instead: it crops the features at each candidate box's position directly from the feature map, then applies bilinear interpolation and pooling to convert them to a uniform size. RoIAlign can be viewed as a pooling process that introduces bilinear interpolation, turning the originally discrete pooling into a continuous one.
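The bilinear interpolation at the heart of RoIAlign can be sketched as follows (single-channel and illustrative; the real layer samples several such points per output bin and pools them):

```python
import numpy as np

def bilinear(feat, y, x):
    """Sample feature map `feat` at a continuous (y, x) location --
    the interpolation RoIAlign uses instead of rounding to a grid cell."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feat.shape[0] - 1)
    x1 = min(x0 + 1, feat.shape[1] - 1)
    wy, wx = y - y0, x - x0
    # Weighted average of the four surrounding grid values.
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])
```

Because the sample location never gets rounded, the cropped features stay aligned with the candidate box to sub-pixel precision, which matters for the pixel-accurate masks computed later.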
After obtaining uniform-size features for the region of each candidate box, these features are used as the input of the functional networks, called heads, for the subsequent computation. For the second-stage correction of the candidate boxes, Mask R-CNN regresses a 5-dimensional correction vector for each category; the correction process is consistent with formula (2).
The Mask R-CNN network in this embodiment has two main parts.
The first is the region proposal network, which generates approximately 2000 region proposals per image. During training, each of these proposals (RoIs) passes through the second part, the object detection and mask prediction network. Since the mask prediction branch runs in parallel with the class and box prediction branches, for each given RoI the network predicts masks for all classes.
The second is that, during inference, the region proposals go through non-maximum suppression and the mask prediction branch processes only the 1000 highest-scoring detection boxes. Thus, with 1000 RoIs and 2 object classes, the mask prediction part of the network outputs a 4D tensor of size 1000 × 2 × 28 × 28, each mask being 28 × 28.
It should be noted that, since the fibers to be detected in the present scheme are mainly pile (down) fibers, other fiber types may show certain differences when the invention is applied to them.
In the scheme, the fiber picture size is set to 1024 × 1024, so the pictures in the training set, in the validation set, and to be identified are all 1024 × 1024; this guarantees that the picture side length is divisible by 32. Because the picture is large and cashmere cross-sections may appear anywhere in it, the anchor sizes are set to [64, 128, 256, 512, 1024], and the maximum number of detections per fiber picture is set to 1000. Moreover, during training the pile foreground occupies more than 90% of the fiber picture, causing a severe imbalance between positive and negative samples, so the model's loss function must be adjusted: the Focal Loss function is used to reduce the loss caused by the positive/negative sample imbalance.
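A minimal sketch of the binary Focal Loss used to counter this imbalance; the α = 0.25 and γ = 2 defaults are the standard values from the Focal Loss literature, not values stated in this patent:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for predicted foreground probability p and label y.
    The (1 - pt)^gamma factor shrinks the loss of well-classified
    (easy, usually majority-class) examples."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)        # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha) # class balancing weight
    return -w * (1 - pt) ** gamma * np.log(pt)
```

With γ = 0 and α = 0.5 it reduces to half the ordinary cross-entropy, so the two extra hyperparameters only reweight, never replace, the usual objective.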
The specific process is as follows. A fiber picture is acquired by a dedicated device. The Mask R-CNN segmentation network is an FCN that takes the 14 × 14 feature map output by RoIAlign as input, applies 4 convolutional layers of 3 × 3 (keeping the 14 × 14 size unchanged), then 1 deconvolution layer of 2 × 2 that upsamples the output to 28 × 28, and finally a 1 × 1 convolutional layer with a sigmoid activation to obtain a 28 × 28 output, in which each point represents the foreground/background confidence of a candidate box's shape for a given class. Finally, the object shape mask is obtained using 0.5 as the confidence threshold.
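The mask branch above can be sketched shape-for-shape in NumPy; single-channel layers and hand-supplied kernels stand in for the learned multi-channel convolutions (an illustrative simplification, not the trained network):

```python
import numpy as np

def conv3x3_same(x, k):
    """3x3 'same' convolution (single channel): output size equals input size."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def deconv2x2(x, k):
    """2x2 stride-2 transposed convolution: doubles height and width."""
    h, w = x.shape
    out = np.zeros((2 * h, 2 * w))
    for dy in range(2):
        for dx in range(2):
            out[dy::2, dx::2] = k[dy, dx] * x
    return out

def mask_head(feat14, kernels, deconv_k, k1x1, thresh=0.5):
    """14x14 RoIAlign feature -> 4 conv 3x3 -> deconv to 28x28
    -> 1x1 conv + sigmoid -> thresholded binary shape mask."""
    x = feat14
    for k in kernels:                     # 4 conv layers, size stays 14x14
        x = np.maximum(conv3x3_same(x, k), 0)   # ReLU
    x = deconv2x2(x, deconv_k)            # upsample to 28x28
    logits = k1x1 * x                     # 1x1 conv (single channel: a scalar)
    prob = 1.0 / (1.0 + np.exp(-logits))  # sigmoid confidence per pixel
    return prob > thresh                  # 0.5 confidence threshold
```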
For a given image, the network must detect multiple objects. For each object it outputs an array containing the predicted class score (the probability that the object belongs to the predicted class) and the left, top, right, and bottom positions of the detected object's bounding box. The class id from this array is used to extract the corresponding mask from the output of the mask prediction branch.
In the scheme, Mask R-CNN is based on the ResNet50 model structure and contains 50 computation blocks consisting of conv and BatchNorm layers. After training is completed, a network model used only for forward computation contains some redundant calculation steps, which can be carried out in advance by merging parameters. In addition, the model's structure, parameter count, and parameter values are adjusted to give a model better suited to fiber cross-section diameter (area) detection.
The model input selected in the scheme is a 1024 × 1024 3-channel color image; the outputs are the categories and the regressed boxes and masks corresponding to them, from which the cross-section diameter (area) of each pile fiber in the image is calculated. Finally, Mask R-CNN applies the FPN technique: each input image, after FPN feature combination, yields several feature maps of different depths and scales, and Mask R-CNN selects one feature map for the RoIAlign operation according to the size of the candidate box, the principle being that larger candidate boxes select deeper feature maps.
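This level-selection principle is commonly implemented with the heuristic from the FPN literature; the canonical size 224 and base level k0 = 4 below are assumptions taken from that literature, not values given in this patent:

```python
import numpy as np

def fpn_level(w, h, k0=4, canonical=224):
    """Pick which feature map P2..P5 a RoI of width w and height h is
    pooled from: larger boxes map to deeper (coarser) levels."""
    k = int(np.floor(k0 + np.log2(np.sqrt(w * h) / canonical)))
    return min(max(k, 2), 5)   # clamp to the available levels
```

A 224 × 224 box lands on P4, halving the box size drops one level, and the clamp keeps very small or very large cross-sections on the nearest existing map.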
The input of the system constructed in this embodiment is an image of pile fiber cross-sections under an optical microscope; the output is a shape mask for each fiber cross-section in the image, from which parameters such as the area, perimeter, and diameter of each cross-section mask contour are calculated by classical image algorithms. Mask R-CNN performs a forward computation on the image to obtain the specific information of the main objects in it, and the fiber diameters are then calculated from the fiber cross-section masks output by the model; no human involvement is needed at any step. Compared with traditional cross-section calculation methods, both precision and speed are greatly improved. Applying deep learning to fiber cross-section diameter (area) testing makes the detection simple: a large number of features no longer need to be designed manually, since the fiber features are learned autonomously. This enables efficient analysis, raising efficiency while greatly reducing operating costs.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (1)

1. A fiber section diameter detection method based on deep learning is characterized by comprising the following steps:
step 1, creating a convolutional neural network model by using pre-trained parameters; the convolutional neural network model is a Mask R-CNN model, and its convolution kernels are rectangular convolution kernels with unequal side lengths;
step 2, acquiring a training set and a verification set of the fiber picture, and training the convolutional neural network model through the training set and the verification set of the fiber picture to obtain a trained convolutional neural network model;
step 3, determining a fiber picture to be detected, and identifying the fiber picture to be detected with the trained convolutional neural network model to obtain a shape mask of each fiber cross-section in the picture, wherein the pictures in the training set, the validation set, and to be identified are all 1024 × 1024 fiber pictures;
step 4, calculating the parameters of the mask contour of each cross section based on the shape mask of the fiber cross section; the parameters include area, perimeter, diameter of the cross-section mask outline.
CN202010183578.0A 2020-03-17 2020-03-17 Fiber section diameter detection method based on deep learning Pending CN111402320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183578.0A CN111402320A (en) 2020-03-17 2020-03-17 Fiber section diameter detection method based on deep learning


Publications (1)

Publication Number Publication Date
CN111402320A true CN111402320A (en) 2020-07-10

Family

ID=71430893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183578.0A Pending CN111402320A (en) 2020-03-17 2020-03-17 Fiber section diameter detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN111402320A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180276825A1 (en) * 2017-03-23 2018-09-27 Petuum, Inc. Structure Correcting Adversarial Network for Chest X-Rays Organ Segmentation
CN109102502A (en) * 2018-08-03 2018-12-28 西北工业大学 Pulmonary nodule detection method based on Three dimensional convolution neural network
CN109165645A (en) * 2018-08-01 2019-01-08 腾讯科技(深圳)有限公司 A kind of image processing method, device and relevant device
CN109886307A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image detecting method and system based on convolutional neural networks
CN109948712A (en) * 2019-03-20 2019-06-28 天津工业大学 A kind of nanoparticle size measurement method based on improved Mask R-CNN
WO2019178561A2 (en) * 2018-03-16 2019-09-19 The United States Of America, As Represented By The Secretary, Department Of Health & Human Services Using machine learning and/or neural networks to validate stem cells and their derivatives for use in cell therapy, drug discovery, and diagnostics
CN110866365A (en) * 2019-11-22 2020-03-06 北京航空航天大学 Mechanical equipment intelligent fault diagnosis method based on partial migration convolutional network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhu Youchan; Wang Wenyao: "Insulator target recognition method based on improved Mask R-CNN", Microelectronics & Computer, no. 02 *
Chen Xiaojuan; Bu Leping; Li Qixiu: "Research on open-flame fire detection based on image processing", Journal of Naval University of Engineering, no. 03 *

Similar Documents

Publication Publication Date Title
CN111027547B (en) Automatic detection method for multi-scale polymorphic target in two-dimensional image
EP4280153A1 (en) Defect detection method, apparatus and system
KR102114357B1 (en) Method and device for constructing a table including information on a pooling type and testing method and testing device using the same
WO2021093451A1 (en) Pathological section image processing method, apparatus, system, and storage medium
CN110070531B (en) Model training method for detecting fundus picture, and fundus picture detection method and device
US11373309B2 (en) Image analysis in pathology
CN109284779A (en) Object detecting method based on the full convolutional network of depth
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN113240623B (en) Pavement disease detection method and device
CN113269737B (en) Fundus retina artery and vein vessel diameter calculation method and system
CN110503623A (en) A method of Bird's Nest defect on the identification transmission line of electricity based on convolutional neural networks
CN110930414A (en) Lung region shadow marking method and device of medical image, server and storage medium
CN115546605A (en) Training method and device based on image labeling and segmentation model
US20090070089A1 (en) Method of Analyzing Cell or the Like Having Linear Shape, Method of Analyzing Nerve Cell and Apparatus and Program for Performing These Methods
CN117495735B (en) Automatic building elevation texture repairing method and system based on structure guidance
CN113096080B (en) Image analysis method and system
CN112861958A (en) Method and device for identifying and classifying kidney disease immunofluorescence pictures
CN109829879B (en) Method and device for detecting vascular bundle
CN111402320A (en) Fiber section diameter detection method based on deep learning
CN111951271A (en) Method and device for identifying cancer cells in pathological image
CN116385717A (en) Foliar disease identification method, foliar disease identification device, electronic equipment, storage medium and product
CN113096079B (en) Image analysis system and construction method thereof
CN112949614B (en) Face detection method and device for automatically allocating candidate areas and electronic equipment
CN107527365A (en) A kind of method and device of dynamic calculation class circle object morphology diameter
Zehra et al. Dr-net: Cnn model to automate diabetic retinopathy stage diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination