CN113313000B - Gas-liquid two-phase flow intelligent identification method based on optical image - Google Patents

Info

Publication number: CN113313000B (granted patent; published as application CN113313000A)
Application number: CN202110546145.1A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active
Prior art keywords: phase flow, gas, liquid, training, picture
Inventors: 沈继红, 郭春雨, 谭思超, 关昊夫, 张康慧, 王宇晴, 王淑娟, 戴运桃, 乔守旭, 韩阳
Current and original assignee: Harbin Engineering University
Application filed by Harbin Engineering University; priority to CN202110546145.1A


Classifications

    • G06V 20/00 Scenes; scene-specific elements
    • G06F 18/24 Pattern recognition; classification techniques
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06N 3/048 Neural network architectures; activation functions
    • G06N 3/08 Neural networks; learning methods
    • G06V 10/267 Image segmentation by operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/40 Extraction of image or video features


Abstract

The invention provides an optical-image-based intelligent identification method for gas-liquid two-phase flow, which comprises: preparing a training data set and a test data set; constructing a full convolution network model (FCN); and identifying bubbles in the gas-liquid two-phase flow with the trained FCN model: a picture of the gas-liquid two-phase flow to be identified is input to the trained model, the network identifies the bubbles in the picture with high accuracy, and the accuracy of bubble identification is calculated. The FCN method, based on deeply supervised learning and data extraction, is introduced into gas-liquid two-phase flow identification: multilayer convolution operations automatically extract information at the pixel level to form abstract semantic concepts, and an up-sampling layer together with multi-scale fusion further optimizes the result, so that the high-level subnet fuses the features of the low-level subnet multiple times and keeps extremely high resolution, thereby improving the accuracy of bubble identification.

Description

Gas-liquid two-phase flow intelligent identification method based on optical image
Technical Field
The invention relates to an intelligent identification method for gas-liquid two-phase flow based on optical images, in particular to a gas-liquid two-phase flow identification method based on an improved full convolution network model (FCN) that addresses the problem of bubble identification in liquid, and belongs to the field of gas-liquid two-phase flow identification and classification.
Background
Both natural and industrial processes often involve multiphase flow problems, with gas-liquid two-phase flow being the most common. Because of the complex hydrodynamic behaviour of bubbles in numerous fields such as chemical engineering, biopharmaceuticals, geophysics, and wastewater management, the interaction between the two phases must be better understood through experimental studies. Basic experimental research supports the development of various engineering fields by establishing closed experimental models, and detailed experimental results can be used to compare and verify the accuracy of these models. In experiments, it is most important to measure parameters such as bubble size and shape, velocity, interfacial area concentration, and porosity in order to assess accuracy and develop models.
Experimental techniques for characterizing bubble parameters fall into two broad categories: probe-based invasive (contact) methods and non-invasive (non-contact) methods. Non-invasive methods differ fundamentally from probe-based methods in that they do not interfere with the flow being studied; they avoid most of the disadvantages of invasive methods and therefore generally offer higher spatial resolution. Typical non-invasive methods use laser Doppler anemometers and image processing techniques. To determine parameters of the gas-liquid two-phase flow (such as the mean velocity of the continuous phase, the local gas concentration, and the flow characteristics of the dispersed phase and their fluctuations), some model-based feature extraction method is required to recover the exact spatial location and size of the bubbles. These features not only capture the bubble characteristics in the two-phase flow accurately but also play an important role in subsequent research such as bubble tracking. Specifically, image processing methods are used to identify individual bubbles in a two-phase flow and calculate the relevant parameters. Conventional image processing identifies bubbles in an image through a series of filtering and manipulation steps, such as image type conversion, image filtering, de-noising and enhancement, and image filling. After these steps, the geometric features of the image become visible in the final image. Finally, a discriminator algorithm performs edge detection and any further parameter operations required, and outputs the specific geometric characteristics of the image.
However, because feature selection in conventional image processing is subjective and manual, and feature extraction is not comprehensive, intelligent selection of image features remains an open problem in the field.
In recent years, deep-learning-based methods have become able to process images intelligently and are widely applied to image recognition, classification, and processing. For example, image processing methods based on the Convolutional Neural Network (CNN) are favored by developers for their robustness and versatility. Accordingly, some authors have proposed using deep learning for bubble identification in gas-liquid two-phase flow, which can deliver results similar to or better than classical image processing methods. However, underwater bubble-flow data are highly complex, the robustness of existing models is poor, and most industrial experiments demand extremely high bubble-identification accuracy. For the bubble-identification problem in the field of gas-liquid two-phase flow identification, an improved deep learning model that identifies bubbles accurately therefore has high application value.
Disclosure of Invention
In view of the prior art, the technical problem to be solved by the invention is to provide an optical-image-based intelligent identification method for gas-liquid two-phase flow aimed at bubble identification: by training an improved full convolution network (FCN) model, a model capable of identifying bubbles in gas-liquid two-phase flow is established, and high accuracy can be achieved.
The purpose of the invention is realized as follows:
Step 1: A training data set and a test data set are prepared. For an existing gas-liquid two-phase flow video, the picture of each frame is extracted with Python, and each frame is annotated with Labelme. The label of each picture X contains two pixel classes: background pixels are 0 and bubble pixels are 1, and the ratio of background pixels to bubble pixels may be unbalanced. Each picture and its corresponding label are then built into a Dataset and preprocessed (reading, decoding, normalization, standardization); 80% of the pictures and their labels are randomly taken as the training data set and the remaining 20% as the test data set.
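The random 80/20 split described in step 1 can be sketched in plain Python (a minimal illustration; the function name and the pairing of pictures with labels are assumptions, and in practice the Dataset would hold decoded image tensors):

```python
import random

def split_dataset(pictures, labels, train_ratio=0.8, seed=0):
    """Randomly split paired (picture, label) samples into a training
    set and a test set, as in step 1 (80% training, 20% test)."""
    assert len(pictures) == len(labels)
    idx = list(range(len(pictures)))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    n_train = int(len(idx) * train_ratio)
    train = [(pictures[i], labels[i]) for i in idx[:n_train]]
    test = [(pictures[i], labels[i]) for i in idx[n_train:]]
    return train, test
```

Every (picture, label) pair lands in exactly one of the two sets, so the test set never overlaps the training set.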
Step 2: the full convolution network model FCN is constructed by firstly utilizing a VGG16 network module to perform migration learning, using a convolution base part in a VGG16 network, removing a full connection layer, and using weights pre-trained on an ImageNet data set to perform training. Because semantic segmentation needs to classify each pixel on an image, and the process of convolution kernel pooling is a 'down-sampling' process, the length and width of the image are made smaller and smaller, so that a method of deconvolution up-sampling needs to be used to restore the finally obtained output to the size of an original image, and then Sigmoid is used to activate and output a value, so that classification is realized. In the update of the parameters of the FCN of the network model,
Figure BDA0003073740030000021
for each output signature of the convolutional layer, E is the loss function, Dice _ loss. The specific updating mode is
Figure BDA0003073740030000022
Figure BDA0003073740030000023
Figure BDA0003073740030000024
Wherein M isjInput feature map combination, k, representing selectionijIs a convolution kernel for the connection between the input i-th feature map and the output j-th feature map, bjIs the bias and sensitivity corresponding to the j-th characteristic diagram
Figure BDA0003073740030000031
(u, v) represents the element positions in the sensitivity matrix,
Figure BDA0003073740030000032
is that
Figure BDA0003073740030000033
When making convolution, with kijEvery patch for convolution is made.
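The convolutional forward rule above can be sketched with NumPy (a minimal illustration, assuming ReLU for the activation f and M_j containing all input maps; like deep-learning layers, it actually computes cross-correlation):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation of one feature map with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            # the patch of x multiplied element-wise by k
            out[u, v] = np.sum(x[u:u + kh, v:v + kw] * k)
    return out

def conv_layer_forward(xs, kernels, biases):
    """x_j = f(sum_i x_i * k_ij + b_j) with f = ReLU.

    xs      : list of input feature maps x_i
    kernels : kernels[i][j] is k_ij
    biases  : biases[j] is b_j
    """
    outs = []
    for j, b in enumerate(biases):
        z = sum(conv2d_valid(x, kernels[i][j]) for i, x in enumerate(xs)) + b
        outs.append(np.maximum(z, 0.0))       # ReLU activation
    return outs
```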
Step 3: The constructed FCN network model is trained.
Step 3.1: Initialize parameters: the batch_size of the training data, the number of training iterations epoch, the hyper-parameter γ (learning rate), and the buffer_size.
Step 3.2: Set the loss function Dice_loss. Considering one sample, the loss function of the $n$th sample is

$$d_n = 1 - \frac{2\sum_{k=1}^{c} t_n^k\, y_n^k}{\sum_{k=1}^{c} t_n^k + \sum_{k=1}^{c} y_n^k}$$

where $c$ is the dimension of the label (for the classification problem, the samples can be classified into $c$ classes), $t_n^k$ denotes the $k$th dimension of the label $t_n$ of the $n$th sample, and $y_n^k$ is the $k$th dimension of the network output (predicted label) for the $n$th sample.
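A per-sample Dice loss matching the description above can be sketched as follows (a minimal NumPy version; the smoothing term eps is an added assumption that avoids division by zero on empty masks):

```python
import numpy as np

def dice_loss(t, y, eps=1e-7):
    """Dice loss for one sample: 1 - 2*sum(t*y) / (sum(t) + sum(y)).

    t : label vector (t_n^k), y : predicted vector (y_n^k).
    Returns ~0 for a perfect fit and ~1 for fully disjoint masks."""
    t = np.asarray(t, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    return 1.0 - (2.0 * np.sum(t * y) + eps) / (np.sum(t) + np.sum(y) + eps)
```

Because the loss is computed over the whole mask rather than per pixel, it is insensitive to the background/bubble pixel imbalance that hampers binary cross entropy.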
The optimizer updates the weight parameters of the network with the Adam optimization algorithm, where $m_t$ and $n_t$ are respectively the first- and second-moment estimates of the gradient, $\hat m_t$ and $\hat n_t$ are their bias-corrected values, and $\theta_{t+1}$ is the updated parameter. The specific update is

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t$$

$$n_t = \beta_2 n_{t-1} + (1-\beta_2)\, g_t^2$$

$$\hat m_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat n_t = \frac{n_t}{1-\beta_2^t}$$

$$\theta_{t+1} = \theta_t - \frac{\eta\, \hat m_t}{\sqrt{\hat n_t} + \epsilon}$$

where the defaults are $\eta = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$; $\beta_1$ and $\beta_2$ are numbers close to 1, $\epsilon$ prevents division by 0, and $g_t$ denotes the gradient.
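One Adam update with the default hyper-parameters above can be sketched as (a minimal NumPy version for a single parameter tensor; the function name is illustrative):

```python
import numpy as np

def adam_step(theta, g, m, n, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new (theta, m, n) for step t >= 1."""
    m = beta1 * m + (1.0 - beta1) * g            # first-moment estimate m_t
    n = beta2 * n + (1.0 - beta2) * g * g        # second-moment estimate n_t
    m_hat = m / (1.0 - beta1 ** t)               # bias corrections
    n_hat = n / (1.0 - beta2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(n_hat) + eps)
    return theta, m, n
```

On the first step the bias corrections give m_hat = g and n_hat = g*g, so the update magnitude is approximately the learning rate regardless of the gradient's scale.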
Step 3.3: precision, Recall, of the network is calculated and output, and accuracy of bubble identification is calculated separately. The Precision is embodied as
Figure BDA00030737400300000312
The Recall rate Recall is specifically expressed as
Figure BDA0003073740030000041
Where TP is an example where the classifier considers positive samples and is indeed positive samples, FP is an example where the classifier considers positive samples but is not actually positive samples, and FN is an example where the classifier considers negative samples but is not actually negative samples.
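The three counts and the two metrics can be computed directly from flattened binary masks (a minimal sketch, assuming 1 = bubble and 0 = background):

```python
def precision_recall(pred, truth):
    """Pixel-wise Precision and Recall for binary masks (1 = positive)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```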
Step 4: Bubbles in the gas-liquid two-phase flow are identified with the trained FCN full convolution network model: a picture of the gas-liquid two-phase flow to be identified is input to the trained model, the network identifies the bubbles in the picture with high accuracy, and the accuracy of bubble identification is calculated.
Compared with the prior art, the invention has the following beneficial effects:
Aiming at the problem of identifying bubbles in liquid from optical images, the invention introduces an improved FCN full convolution network and establishes an intelligent optical-image-based gas-liquid two-phase flow identification model. The advantages of the method are: (1) for the difficulty of labeling a large number of bubble training samples, Labelme is used for intelligent data annotation, which removes noise from the original images and divides them into two classes (background and bubbles) to facilitate network training. (2) VGG16 is used for transfer learning, making full use of the similarity between models and improving the stability, generalization and learning efficiency of the model. (3) Because the distribution of the two pixel classes (bubbles and background) in the acquired image data is unbalanced, a binary cross entropy loss function would bias the training result toward the class with more pixels in the image; therefore Dice_loss is used as the loss function for training. (4) An FCN method based on deeply supervised learning and data extraction is introduced into gas-liquid two-phase flow identification, and multilayer convolution operations automatically extract information at the pixel level to form abstract semantic concepts. (5) An up-sampling layer and multi-scale fusion further optimize the result, so that the high-level subnet fuses the features of the low-level subnet multiple times and keeps extremely high resolution, thereby improving the accuracy of bubble identification.
Drawings
FIG. 1 is a schematic diagram of the improved FCN full convolution network framework of the present invention, which uses VGG16 for transfer learning;
FIG. 2 shows the Labelme data annotation of gas-liquid two-phase flow data according to the present invention;
FIGS. 3(a) through 3(c) show the convergence, precision and recall results of the improved FCN model on gas-liquid two-phase flow data;
FIG. 4 shows the individual-bubble identification accuracy of the improved FCN model on gas-liquid two-phase flow data;
FIG. 5 shows the identification results of the improved FCN model on gas-liquid two-phase flow data.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Based on the full convolution network model FCN from semantic segmentation, the invention provides an intelligent optical-image-based identification technique for gas-liquid two-phase flow image sequence data. In the model structure, the classical network VGG16 is first introduced for transfer learning: the fully connected layers are removed, the convolutional base of the network is retained, and training starts from weights pre-trained on the ImageNet data set. Next, the core ideas of semantic segmentation are introduced: deconvolution up-sampling and the skip structure. Because the network must classify each pixel of the image, while convolution and pooling are down-sampling processes that make the image length and width smaller and smaller, "deconvolution", the inverse process of convolution, restores the image length and width and retains a high-resolution result. A skip-fusion structure is then introduced to compensate for the features lost in earlier convolutional and pooling layers: results from layers of different depths are combined by up-sampling the outputs of different pooling layers and merging them to optimize the output. This ensures robustness and accuracy and repairs the restored image. As for the loss function, a binary cross entropy loss function would bias the training result toward the class with more pixels in the image, so Dice_loss is used as the loss function instead. These improvements in model structure and loss function make the model converge faster and yield higher-quality results. Training a gas-liquid two-phase flow image sequence data set with the improved FCN model yields accurate bubble identification and improves the accuracy of the gas-liquid two-phase flow identification model.
Examples
The invention provides an intelligent identification method for gas-liquid two-phase flow based on optical images and an improved full convolution network FCN, which comprises the following steps:
Step one: A gas-liquid two-phase flow video source is used as the original data; the picture of each frame in the video is extracted with Python, and each frame is annotated with Labelme. The label of each picture X contains two pixel classes: background pixels are 0 and bubble pixels are 1, and the proportions of background and bubble pixels are unbalanced. Each picture and its corresponding label are then built into a Dataset and preprocessed (reading, decoding, normalization, standardization); 80% of the pictures and their labels are used as the training data set and the remaining 20% as the test data set.
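The normalization and standardization preprocessing mentioned in step one can be sketched as follows (a minimal NumPy version; the division by 255 assumes 8-bit grayscale input, and the small epsilon guarding the division is an added assumption):

```python
import numpy as np

def preprocess(img):
    """Normalize an 8-bit image to [0, 1], then standardize it to
    zero mean and unit variance, as in the Dataset preprocessing."""
    x = np.asarray(img, dtype=np.float64) / 255.0   # normalization
    return (x - x.mean()) / (x.std() + 1e-7)        # standardization
```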
Step two: first, we construct a migration learning based on the full convolution network FCN model using the full convolution part of the VGG16 network module as the convolution base, remove the full connection layer, and train with weights pre-trained on the ImageNet dataset. Since semantic segmentation requires classification of individual pixels on an image, while the process of convolution kernel pooling is a "down-sampling" process,the length and width of the image are smaller and smaller, the finally obtained output is up-sampled to the size of the original image by using a deconvolution up-sampling method, and then the Sigmoid is used for activation and outputting a value, so that classification is realized. In improving FCN network model parameter updates, xjFor each output signature of the convolutional layer, E is the loss function, Dice _ loss.
The specific updating mode is
Figure BDA0003073740030000051
Figure BDA0003073740030000061
Figure BDA0003073740030000062
Wherein M isjInput feature map combination, k, representing selectionijIs a convolution kernel for the connection between the input i-th feature map and the output j-th feature map, bjIs the bias and sensitivity corresponding to the j-th characteristic diagram
Figure BDA0003073740030000063
(u, v) represents the element positions in the sensitivity matrix,
Figure BDA0003073740030000064
is that
Figure BDA0003073740030000065
When making convolution, with kijEvery patch for convolution is made.
Step three: a loss function is calculated. If the binary cross entropy loss function binarycross entropy is used, under the condition that the number of bubble pixels of each picture in the data set is too small and the number of background pixels is too large, the training result is heavier than the background category, so that the network cannot well identify the bubbles. Therefore, we use here the Loss function Dice Loss of medical image segmentation, expressed as
Figure BDA0003073740030000066
In the above formula, c is the dimension of label, and for the classification problem, it means that these samples can be classified into c types.
Figure BDA0003073740030000067
Label t representing the nth samplenThe (c) th dimension of (a),
Figure BDA0003073740030000068
is the kth dimension of the nth sample net output (predict label).
According to the formula, if the model is completely fitted, the value of the Loss function is infinitely close to 0, whether the model is converged can be judged according to the value of the Loss function in the training process, and if the Dice Loss function value is not reduced, the network model is converged, and the training is completed.
Step four: an improved FCN full convolution network model is trained. As shown in fig. 1, firstly, inputting a gas-liquid two-phase flow picture and a picture labeled by Labelme in a pair into a network to extract features through convolution, and pooling to change the length, width and channel of the picture; then, the high-level sub-network and the previous low-level sub-network are subjected to feature fusion through upsampling, the finally obtained output is restored to the size of the original image, the output is optimized by combining the results, and meanwhile, the robustness and the accuracy are ensured; and then calculating a loss function of the network, updating weight parameters according to an Adam optimization algorithm, and iteratively training parameters of the network according to the mode in a circulating mode until the loss function value is not reduced or kept stable any more, then converging the network model, finishing the training, inputting a gas-liquid two-phase flow to-be-identified picture, and generating a result after semantic segmentation by using the network model.
Step five: the trained model inputs a picture to be identified of gas-liquid two-phase flow, wherein each picture X comprises two pixel types, the pixel corresponding to the background is 0, the pixel corresponding to the bubble is 1, and the proportion of the background pixel and the bubble pixel is not balanced.
In a specific embodiment, the data come from a two-phase flow experiment performed on a simulated closed loop. Vertical rising or vertical falling of the gas-liquid two-phase flow can be selected by adjusting a pipeline valve. For the vertically falling two-phase flow, a water pump drives the deionized water in the water tank through a filter and splits it into two paths, into the bubble generator and the vertical test section respectively; after gas-water separation in the gas-water separator, the water returns to the tank. An integrated air compressor generates compressed air that is stored in a 300 L compressed-air tank; the air enters the bubble generator through a solenoid valve, passes through the experimental section and the gas-water separator, and is finally discharged to the atmosphere. The experimental section, about 3.7 m long in total, is assembled from organic-glass pipe sections of different lengths, each with an inner diameter of 50.8 mm, and the acquisition window; the planar design permits high-speed photography. By adjusting the relative positions of the pipe sections and the acquisition window, two-phase flow parameters can be measured at different positions along the flow direction. A two-phase flow picture data set is obtained by capturing each frame of the video and used as the training set for network model training; the data are normalized and standardized before training.
Analysis of the bubble-identification results of the simulated closed-loop two-phase flow experiment:
the experimental data set is 9351 gas-liquid two-phase flow pictures intercepted by a video source obtained by simulating a two-phase flow experiment of a closed loop, wherein 80% of the pictures are randomly extracted as a training set, the rest 20% of the pictures are taken as a test set, training is carried out according to a constructed improved FCN network model and a training mode, and Table 1 shows that under the condition that network parameters are configured, all result indexes obtained by using the model are 4 indexes for judging whether a network is fitted or not, namely the number of the training sets and the test set, a loss function value, Precision, Recall rate Recall and accuracy rate of bubble identification test on the 9351 pictures. Lower values of the loss function indicate better classification. The higher the accuracy, the higher the proportion of the classifier considered as positive classes, and the part that is actually positive classes accounts for all the classifiers considered as positive classes. The higher the recall rate, the higher the proportion of all positive classes that the classifier considers to be positive classes and that the positive classes are true. Fig. 3(a) to 3(c) are a convergence value curve, an accuracy curve and a recall ratio curve of the model, and it can be seen that as the number of iterations increases, the convergence and the stability of the model are remarkably improved.
Based on the improved full convolution network FCN model, 7481 gas-liquid two-phase flow pictures are used as the training data set and 1870 as the test data set, and both are input into the network for training. Fig. 4 shows the final accuracy curve of individual bubble identification computed by the network; the accuracy reaches 98.37%, with low computational complexity and a small amount of computation. Fig. 5 shows the network's identification results for bubbles in gas-liquid two-phase flow pictures. From both the test results and the picture predictions we conclude that the network achieves extremely high accuracy for gas-liquid two-phase flow identification, especially for bubble identification.
TABLE 1 final output of training set and test set data of the invention

Claims (1)

1. An intelligent identification method of gas-liquid two-phase flow based on optical images is characterized by comprising the following steps:
step 1: preparing a training data set and a testing data set: extracting a picture of each frame in a gas-liquid two-phase flow video by using Python, performing data labeling on the picture of each frame by using Labelme, wherein a label of each picture X comprises two pixel types, a pixel corresponding to a background is 0, a pixel corresponding to a bubble is 1, then constructing each picture and a label corresponding to the picture into a Dataset, performing reading, decoding, normalization and standardization preprocessing operations, randomly taking 80% of the pictures and the labels corresponding to the pictures as training datasets, and taking the rest 20% of the pictures and the labels corresponding to the pictures as testing datasets;
step 2: constructing a full convolution network model FCN: first performing transfer learning with a VGG16 network module, using the convolutional base of the VGG16 network, removing the fully connected layers, and training with weights pre-trained on the ImageNet data set; because semantic segmentation must classify every pixel of the image, while convolution and pooling are "down-sampling" processes that make the image length and width smaller and smaller, the final output is restored to the original image size by deconvolution up-sampling and then activated with Sigmoid to output a value, thereby realizing the classification; in the parameter update of the network model FCN, $x_j$ is an output feature map of the convolutional layer and $E$ is the loss function Dice_loss, the specific update being

$$x_j = f\Big(\sum_{i \in M_j} x_i * k_{ij} + b_j\Big)$$

$$\frac{\partial E}{\partial b_j} = \sum_{u,v} (\delta_j)_{uv}$$

$$\frac{\partial E}{\partial k_{ij}} = \sum_{u,v} (\delta_j)_{uv}\,(p_i)_{uv}$$

wherein $M_j$ represents the selected combination of input feature maps, $k_{ij}$ is the convolution kernel connecting the $i$th input feature map and the $j$th output feature map, $b_j$ is the bias corresponding to the $j$th feature map, $\delta_j$ is the sensitivity corresponding to the $j$th feature map, $(u, v)$ denotes the element position in the sensitivity matrix, and $(p_i)_{uv}$ is the patch of $x_i$ multiplied by $k_{ij}$ during the convolution;
step 3: training the constructed FCN network model, specifically comprising
step 3.1: initializing parameters: the batch_size of the input training data, the number of training iterations epoch, the learning-rate hyper-parameter γ, and the buffer size buffer_size;
step 3.2: setting the loss function Dice_loss: considering a single sample, the loss function of the n-th sample is

E_n = 1 − ( 2 Σ_{k=1}^{c} t_n^{(k)} y_n^{(k)} ) / ( Σ_{k=1}^{c} t_n^{(k)} + Σ_{k=1}^{c} y_n^{(k)} )

wherein c is the dimension of the label (for the classification problem it means the samples can be divided into c classes), t_n^{(k)} represents the k-th dimension of the label t_n of the n-th sample, and y_n^{(k)} is the k-th dimension of the output (predicted label) of the network for the n-th sample;
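A pure-Python sketch of the per-sample Dice loss of step 3.2 (the small smoothing term `eps` is an illustrative addition to avoid division by zero and is not stated in the patent):

```python
def dice_loss(t, y, eps=1e-7):
    """Per-sample Dice loss: 1 - 2*(t . y) / (sum(t) + sum(y)).

    t -- label vector (t_n^k for k = 1..c)
    y -- network output vector (predicted label, same length as t)
    """
    intersection = sum(tk * yk for tk, yk in zip(t, y))
    return 1.0 - 2.0 * intersection / (sum(t) + sum(y) + eps)
```

A perfect prediction drives the loss toward 0, while a completely disjoint prediction gives a loss of 1, which is why minimizing Dice_loss maximizes the overlap between predicted bubble pixels and labeled bubble pixels.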
the optimizer updates the weight parameters of the network using the Adam optimization algorithm, wherein m_t and n_t are respectively the first-moment and second-moment estimates of the gradient, m̂_t and n̂_t are the bias-corrected versions of m_t and n_t, and θ_{t+1} is the updated parameter; the update is specifically

m_t = β_1 m_{t−1} + (1 − β_1) g_t
n_t = β_2 n_{t−1} + (1 − β_2) g_t²
m̂_t = m_t / (1 − β_1^t)
n̂_t = n_t / (1 − β_2^t)
θ_{t+1} = θ_t − η m̂_t / (√n̂_t + ε)

wherein the defaults are η = 0.001, β_1 = 0.9, β_2 = 0.999 and ε = 10⁻⁸; β_1 and β_2 are both numbers close to 1, ε prevents division by 0, and g_t represents the gradient;
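The Adam update above, written out for a single scalar parameter (a sketch of the standard algorithm with the patent's default constants, not the patent's actual training code):

```python
def adam_step(theta, g, m, n, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of scalar parameter theta given gradient g at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * g      # first-moment estimate m_t
    n = beta2 * n + (1 - beta2) * g * g  # second-moment estimate n_t
    m_hat = m / (1 - beta1 ** t)         # bias-corrected first moment
    n_hat = n / (1 - beta2 ** t)         # bias-corrected second moment
    theta = theta - eta * m_hat / (n_hat ** 0.5 + eps)
    return theta, m, n
```

On the first step the bias corrections make m̂_1 = g_1 and n̂_1 = g_1², so the parameter moves by approximately η in the direction opposite the gradient's sign regardless of the gradient's magnitude.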
step 3.3: calculating the Precision and Recall of the network output, and separately calculating the accuracy of bubble identification; the Precision is specifically expressed as

Precision = TP / (TP + FP)

and the Recall rate is specifically expressed as

Recall = TP / (TP + FN)

wherein TP is the number of examples the classifier considers positive samples that are indeed positive samples, FP is the number of examples the classifier considers positive samples that are actually negative samples, and FN is the number of examples the classifier considers negative samples that are actually positive samples;
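The Precision and Recall of step 3.3 over binary pixel masks, as a pure-Python sketch (assuming 1 marks a bubble pixel; the zero-division fallbacks are an illustrative choice):

```python
def precision_recall(pred, truth):
    """Pixel-wise Precision and Recall from binary predictions and labels (1 = bubble)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```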
step 4: identifying bubbles in the gas-liquid two-phase flow with the trained FCN full convolution network model: a picture of the gas-liquid two-phase flow to be identified is input into the trained model, the network identifies the bubbles in the picture with high accuracy, and the accuracy of the bubble identification is calculated and output.
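The inference step can be sketched as binarizing the Sigmoid output map and scoring it against a label mask (an illustrative post-processing sketch; the 0.5 threshold and helper names are assumptions, not stated in the patent):

```python
def to_mask(probs, threshold=0.5):
    """Binarize the Sigmoid output map: pixels above the threshold are bubbles (1)."""
    return [1 if p > threshold else 0 for p in probs]

def pixel_accuracy(pred, label):
    """Fraction of pixels whose predicted class matches the label."""
    return sum(1 for p, l in zip(pred, label) if p == l) / len(label)
```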
CN202110546145.1A 2021-05-19 2021-05-19 Gas-liquid two-phase flow intelligent identification method based on optical image Active CN113313000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110546145.1A CN113313000B (en) 2021-05-19 2021-05-19 Gas-liquid two-phase flow intelligent identification method based on optical image

Publications (2)

Publication Number Publication Date
CN113313000A CN113313000A (en) 2021-08-27
CN113313000B true CN113313000B (en) 2022-04-29

Family

ID=77373867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110546145.1A Active CN113313000B (en) 2021-05-19 2021-05-19 Gas-liquid two-phase flow intelligent identification method based on optical image

Country Status (1)

Country Link
CN (1) CN113313000B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821072B (en) * 2022-06-08 2023-04-18 四川大学 Method, device, equipment and medium for extracting bubbles from dynamic ice image
CN115861751B (en) * 2022-12-06 2024-04-16 常熟理工学院 Oil-water two-phase flow multi-oil drop identification method based on integrated characteristics
CN117611844A (en) * 2023-11-08 2024-02-27 中移互联网有限公司 Training method and device for image similarity recognition model and image similarity recognition method

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102073037A (en) * 2011-01-05 2011-05-25 哈尔滨工程大学 Iterative current inversion method based on adaptive threshold selection technique
CN105426889A (en) * 2015-11-13 2016-03-23 浙江大学 PCA mixed feature fusion based gas-liquid two-phase flow type identification method
CN108304770A (en) * 2017-12-18 2018-07-20 中国计量大学 A method of the flow pattern of gas-liquid two-phase flow based on time frequency analysis algorithm combination deep learning theory
CN111028217A (en) * 2019-12-10 2020-04-17 南京航空航天大学 Image crack segmentation method based on full convolution neural network
CN111553373A (en) * 2020-04-30 2020-08-18 上海理工大学 CNN + SVM-based pressure bubble image recognition algorithm
CN111882579A (en) * 2020-07-03 2020-11-03 湖南爱米家智能科技有限公司 Large infusion foreign matter detection method, system, medium and equipment based on deep learning and target tracking

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10713540B2 (en) * 2017-03-07 2020-07-14 Board Of Trustees Of Michigan State University Deep learning system for recognizing pills in images


Non-Patent Citations (3)

Title
Multi-Function Radar Signal Sorting Based on Complex Network; Kun Chi et al.; published online at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9292936; 2020-12-14; pp. 1-5 *
Hierarchical fuzzy identification of gas-liquid two-phase flow patterns based on a high-speed photography sensor; Chang Diankang et al.; Transducer and Microsystem Technologies; 2016-12-31; Vol. 35, No. 11; pp. 58-60 *
Flow pattern identification of gas-liquid two-phase flow of nanofluids in microchannels based on K-means clustering; Xiao Jian et al.; Transactions of the Chinese Society for Agricultural Machinery; 2017-01-17; Vol. 47, No. 12; pp. 385-390 *


Similar Documents

Publication Publication Date Title
CN113313000B (en) Gas-liquid two-phase flow intelligent identification method based on optical image
WO2023077816A1 (en) Boundary-optimized remote sensing image semantic segmentation method and apparatus, and device and medium
CN111598881B (en) Image anomaly detection method based on variational self-encoder
CN110781924B (en) Side-scan sonar image feature extraction method based on full convolution neural network
WO2018028255A1 (en) Image saliency detection method based on adversarial network
CN108090906B (en) Cervical image processing method and device based on region nomination
CN110276402B (en) Salt body identification method based on deep learning semantic boundary enhancement
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN111340046A (en) Visual saliency detection method based on feature pyramid network and channel attention
CN115294038A (en) Defect detection method based on joint optimization and mixed attention feature fusion
CN110659601B (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN113705655B (en) Three-dimensional point cloud full-automatic classification method and deep neural network model
CN112232371A (en) American license plate recognition method based on YOLOv3 and text recognition
CN110969121A (en) High-resolution radar target recognition algorithm based on deep learning
CN114511710A (en) Image target detection method based on convolutional neural network
CN112507114A (en) Multi-input LSTM-CNN text classification method and system based on word attention mechanism
CN112861915A (en) Anchor-frame-free non-cooperative target detection method based on high-level semantic features
CN109472733A (en) Image latent writing analysis method based on convolutional neural networks
WO2024060416A1 (en) End-to-end weakly supervised semantic segmentation and labeling method for pathological image
CN114782798A (en) Underwater target detection method based on attention fusion
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN112700450A (en) Image segmentation method and system based on ensemble learning
Kajabad et al. YOLOv4 for urban object detection: Case of electronic inventory in St. Petersburg
CN115620068A (en) Rock lithology automatic identification and classification method under deep learning mode
CN116721291A (en) Metal surface defect detection method based on improved YOLOv7 model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant