CN111414938A - Target detection method for bubbles in plate heat exchanger - Google Patents

Target detection method for bubbles in plate heat exchanger

Info

Publication number
CN111414938A
CN111414938A (application CN202010144170.2A)
Authority
CN
China
Prior art keywords
image
frame
picture
bubble
heat exchanger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010144170.2A
Other languages
Chinese (zh)
Other versions
CN111414938B (en)
Inventor
李孝禄
汪迁文
许沧粟
李运堂
陈源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN202010144170.2A priority Critical patent/CN111414938B/en
Publication of CN111414938A publication Critical patent/CN111414938A/en
Application granted granted Critical
Publication of CN111414938B publication Critical patent/CN111414938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention discloses a target detection method for bubbles in a plate heat exchanger, which aims to solve the technical problems of the traditional detection methods: a complex detection process, a heavy computational load, and poor detection performance.

Description

Target detection method for bubbles in plate heat exchanger
Technical Field
The invention relates to the technical field of target information detection, in particular to a target detection method for bubbles in a plate heat exchanger.
Background
With the growing demand for energy conservation and environmental protection in recent years, improving the heat-exchange efficiency of heat-exchange equipment has become a key concern. The state of the two-phase flow strongly influences the heat-exchange performance of the exchanger, so visual study of the two-phase flow inside the heat exchanger has become very important.
Bubble target detection is a key technology for studying two-phase flow and has long attracted wide attention from scholars at home and abroad. Target detection is an important component of computer vision; with a target detection method, the two-phase flow in a flow channel can be visualized, making it convenient to mine the data in the channel further.
Scholars at home and abroad have made many attempts that combine traditional image processing with machine-learning methods, such as HOG (histogram of oriented gradients) features with an SVM (support vector machine) classifier, to recognize bubbles. These methods are limited by the need for hand-crafted features, and the effect of feature quality on the performance of the whole detection system cannot be judged directly, so researchers must study the problem deeply to design robust features. Moreover, the traditional methods place extremely high demands on the technician's experience, require a huge amount of computation, and follow a complex workflow.
In addition, although some scholars perform bubble target detection with deep learning, using a convolutional neural network combined with a sliding-window search, and this improves feature extraction over the earlier methods, the sliding-window approach still requires a large amount of computation and detects slowly.
Disclosure of Invention
The present invention aims to provide a target detection method for bubbles in a plate heat exchanger that solves one or more of the above technical problems.
To achieve this purpose, the technical scheme provided by the invention is as follows:
A target detection method for bubbles in a plate heat exchanger comprises the following steps:
s1, acquiring a two-phase flow bubble picture data set in the plate heat exchanger by using a high-speed camera;
s2, building a convolutional neural network framework using the picture data set from S1 and determining a feature extractor, so as to reduce the computation time of feature extraction;
s3, for each frame, regarding a single bubble as a normal bubble and two or more connected bubbles as an abnormal bubble, so that the number of classes C is set to 2 and the number of filters F = (C + 5) × 3 = 21; the target detection and classification problem in the picture is thereby converted into a regression prediction problem within the convolutional neural network framework;
s4, improving the recognition rate of the picture to be detected in S1 by adopting an improved three-frame difference method;
and S5, comparing the outputs of S3 and S4 with an IoU score screening algorithm and removing the repeated candidate frames from the comparison result to obtain the final detection result.
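The filter count in step S3 follows the standard YOLOv3 convention: each of the 3 anchor boxes per scale predicts C class scores plus 4 box coordinates plus 1 objectness score. A minimal sketch of the arithmetic (the function name is illustrative, not from the patent):

```python
# YOLOv3 output-filter rule: (C + 5) values per anchor, 3 anchors per scale.
def yolo_filters(num_classes: int, anchors_per_scale: int = 3) -> int:
    return (num_classes + 5) * anchors_per_scale

# C = 2 here: "normal bubble" and "abnormal bubble" (connected bubbles).
F = yolo_filters(2)  # (2 + 5) * 3 = 21, matching step S3
```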
Compared with traditional manual feature extraction, the method adopts a feature-extraction model, which lowers the difficulty of feature extraction and shortens the computation time. By converting the target detection problem into a regression prediction problem, the amount of calculation is greatly reduced and the detection speed is improved.
Further, the acquisition of the picture data set in step S1 is performed by the following steps:
s11, shooting a two-phase flow video in the plate heat exchanger by using a high-speed camera;
s12, decomposing the video into single-frame pictures;
s13 makes the single-frame picture in the previous step into a bubble data set.
Further, step S2 comprises the following processing: a convolutional neural network framework is built on the Darknet learning framework using the YOLO algorithm to obtain a YOLOv3-tiny model; the parameters before the fully connected layer are frozen through transfer learning; the resulting model is finally used as the feature extractor, greatly reducing the computation time of feature extraction.
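The patent freezes the pre-fully-connected parameters of YOLOv3-tiny inside the Darknet framework. As an illustration only, the same transfer-learning idea can be sketched in PyTorch with a hypothetical stand-in network; none of the names or layer sizes below come from the patent:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a tiny detector: a convolutional "backbone"
# (the pretrained feature extractor) followed by a 1x1 detection head.
class TinyDetector(nn.Module):
    def __init__(self, num_filters=21):  # F = (C + 5) * 3 with C = 2
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.LeakyReLU(0.1),
        )
        self.head = nn.Conv2d(32, num_filters, 1)  # detection output layer

    def forward(self, x):
        return self.head(self.backbone(x))

def freeze_backbone(model: TinyDetector):
    """Transfer learning: freeze every parameter before the head,
    so only the detection head is fine-tuned."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    return [p for p in model.parameters() if p.requires_grad]

model = TinyDetector()
trainable = freeze_backbone(model)  # only the head's weight and bias remain
```

In training, only `trainable` would be handed to the optimizer, which is what makes the frozen backbone act as a fixed feature extractor.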
Further, the step S4 includes the following processing steps:
s41, selecting three adjacent pictures of a video, labeled P1, P2 and P3 in time order, and applying a Gaussian blur to the three pictures to obtain pictures P1′, P2′ and P3′ respectively;
s42, subtracting the previous-frame picture P1′ from the intermediate-frame picture P2′ of S41 to obtain a frame-difference image P21, and subtracting the next-frame picture P3′ from P2′ to obtain a frame-difference image P23;
s43, applying a binary segmentation with Otsu's method to the frame-difference image P21 from S42 to obtain a binary image P21′, and to the frame-difference image P23 from S42 to obtain a binary image P23′;
s44, applying an image AND operation to the binary images P21′ and P23′ from S43 to obtain an image P21′P23′;
s45, applying an image-enhancement algorithm to the frame-difference image P23 from S42 to raise its brightness and contrast, obtaining an image P23″; then applying the Canny edge detection algorithm to P23″ to obtain an image P23‴;
s46, after applying a median blur to the intermediate-frame picture P2 of S41, segmenting the foreground bubbles with background subtraction to obtain an image P2″; removing the noise of P2″ with an image opening operation to obtain an image P2‴;
s47, applying an image OR operation to the images P21′P23′, P23‴ and P2‴ to obtain the final image PLast, from which the contour information and position information of the bubbles are obtained.
The invention mainly concerns a detection algorithm based on the apparent characteristics of bubble targets, i.e., detecting and locating every bubble in an actual two-phase flow video. The difficulty is that the bubble shapes change constantly owing to lighting, shooting angle, and the rapidly changing state of the two-phase flow in the channel; therefore the improved three-frame difference method is used to raise the overall recognition precision of the bubbles' contour and position information.
The improved three-frame difference method adds background subtraction based on a Gaussian mixture model and the Canny edge detection algorithm on top of the traditional three-frame difference method, and combines the three resulting pictures with an OR operation, greatly improving the precision of the pictures.
Further, step S5 comprises the following processing steps:
s51, comparing the prediction result obtained in step S3 and the result obtained in step S4 with the IoU score formula to obtain an IoU score matrix; the IoU score formula is:
S = ((x^(1) + w^(1) − x^(2))(y^(1) + h^(1) − y^(2))) / (w^(1)h^(1) + w^(2)h^(2) − (x^(1) + w^(1) − x^(2))(y^(1) + h^(1) − y^(2)))
where x^(1), y^(1), w^(1), h^(1) are, respectively, the x-axis coordinate and y-axis coordinate of the top-left vertex of a candidate frame predicted in step S3 and the width and height of that frame, and x^(2), y^(2), w^(2), h^(2) are the corresponding quantities for a candidate frame predicted in step S4;
s52, setting any element S_ij of the IoU score matrix obtained in S51 that is less than the set threshold to zero; if a nonzero value exists in the i-th column vector of the IoU score matrix, the i-th candidate box is removed, and the data of the remaining candidate boxes are then output.
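A sketch of the S51-S52 screening, using the standard IoU expression in place of the patent's simplified overlap formula; the box layout (YOLO boxes index rows, three-frame-difference boxes index columns) and the 0.5 threshold are assumptions for illustration:

```python
import numpy as np

def iou(box_a, box_b):
    """Standard IoU of two boxes given as (x, y, w, h) with top-left origin."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def screen_candidates(yolo_boxes, diff_boxes, threshold=0.5):
    """S51-S52: build the score matrix, zero sub-threshold entries, then drop
    any three-frame-difference box whose column contains a nonzero score
    (i.e. it duplicates a YOLO box); the survivors supplement the YOLO output."""
    m, n = len(yolo_boxes), len(diff_boxes)
    S = np.zeros((m, n))
    for j, db in enumerate(diff_boxes):
        for i, yb in enumerate(yolo_boxes):
            s = iou(yb, db)
            S[i, j] = s if s >= threshold else 0.0  # S52: zero small entries
    kept = [db for j, db in enumerate(diff_boxes) if not np.any(S[:, j] > 0)]
    return list(yolo_boxes) + kept
```

This reproduces the intent stated below: YOLO boxes are the main result, and only the non-overlapping three-frame-difference boxes (typically small bubbles YOLO missed) are appended.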
The main advantages of step S5 are the following two points:
(a) The YOLO algorithm detects small targets poorly, while the improved three-frame difference method is very sensitive to changes of bubble shape, and small bubbles change shape little. The output of the YOLO model is therefore used as the main result and the output of the improved three-frame difference method as a supplementary result, improving the recognition rate of the network.
(b) Any element S_ij of the IoU score matrix smaller than the set threshold is set to zero; if a nonzero value exists in the i-th column vector of the IoU score matrix, the i-th candidate box is removed, and the data of the remaining candidate boxes are then output.
The invention has the technical effects that:
1. the invention uses the transfer learning technology to freeze all parameters before the full connection layer, thereby reducing the training cost.
2. The invention converts the detection and classification problem directly into a regression prediction problem, reducing the calculation flow and improving the detection speed.
3. The invention adds an improved three-frame difference method, improves the detection effect of the bubble detection network on the small target, and improves the overall identification precision.
4. The invention has the advantages of small calculated amount, simple operation flow, high detection and identification rate and comprehensive detection effect.
The bubble target detection method provided by the invention effectively solves the problems of large calculated amount and complex operation flow of the traditional two-phase flow bubble detection method.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
In the drawings:
FIG. 1 is an overall frame diagram of the present invention;
FIG. 2 is a structure diagram of the YOLOv3-tiny network;
FIG. 3 is a network training flow diagram;
FIG. 4 is a flow chart of an improved three frame difference method;
FIG. 5 is an IoU screening flow chart;
FIG. 6 is a partial training set image;
FIG. 7 is a diagram showing the detection effect of the conventional detection method (using convolutional neural network in combination with sliding window detection);
FIG. 8 is a second diagram of the detection effect of the improved three-frame difference method;
FIG. 9 is a diagram showing the detection effect of the detection method of the present invention.
Detailed Description
The present invention will now be described in detail with reference to the drawings and specific embodiments, wherein the exemplary embodiments and descriptions are provided only for the purpose of illustrating the present invention and are not to be construed as unduly limiting the invention.
A target detection method for bubbles in a plate heat exchanger is disclosed. Referring to FIG. 1, a high-speed camera first shoots a video of two-phase flow in a specific flow channel. After the video is decomposed into single-frame pictures, suitable pictures are selected to make a bubble data set; the training set and validation set of the data set are then input into the YOLOv3-tiny network model for training until the network converges. Next, the video to be detected is input into the trained network model to obtain a prediction result, while the improved three-frame difference method is used to detect the bubbles in the same video. Finally, an IoU screening algorithm removes the repeated candidate frames from the results of the two methods to obtain the final result.
In this embodiment, when making the training and validation sets, the data-set-format tool required by the YOLO network is used to turn the pictures from the video into a data set in the format the YOLO network requires. The specific steps are as follows: S11, shoot a two-phase flow video in a specific flow field with a high-speed camera; S12, decompose the video into single-frame pictures; S13, make the single-frame pictures from the previous step into a bubble data set.
In building the convolutional neural network framework, the model is configured with the YOLOv3 algorithm; its basic structure, shown in FIG. 2, has the following 4 parts:
(1) Conv layers: multiple sets of convolution, activation and pooling layers extract a feature map of the image (Last in FIG. 2), which is shared in the subsequent fully connected layers;
(2) Concatenate layer: adds the corresponding dimensions of the input feature map and the output feature map;
(3) Upsample layer: generates a large-size image from a small-size feature map through interpolation and similar methods;
(4) YOLO layer: performs regression to predict the position and class of the target, outputting the position candidate frame and the class confidence of the target.
Fig. 3 is the training flowchart of the network model: after the network model is built, the training set and the validation set are input to train the network until it converges; the weight file is then output and saved, completing the training process.
This embodiment adds the improved three-frame difference method to the bubble detection algorithm; fig. 4 shows the flowchart of the whole improved three-frame difference method. As the figure shows, the improved three-frame difference method consists mainly of three parts:
(1) The traditional three-frame difference method: difference the intermediate frame with the previous frame and with the next frame, binarize the two frame-difference images, and AND them to obtain the result.
(2) The edge detection part: after image enhancement of the frame-difference image obtained from the intermediate frame and the next frame, the Canny operator detects the edges of the image.
(3) The background subtraction part: after median filtering of the intermediate-frame image, background subtraction segments the foreground from the background, and an image opening operation on the foreground image removes noise.
The pictures obtained from these three main parts are combined with an image OR operation to obtain the final image to be detected.
In this example, the improved three-frame difference method is added to the model, and the target video to be detected is input to obtain both the prediction result of the neural network and the detection result of the improved three-frame difference method.
Owing to the spatial limitations of YOLOv3 predictions, the model without the improved three-frame difference method detects small bubbles poorly. The detection performance of the improved three-frame difference method, however, is strongly affected by changes in bubble state: bubbles whose shape changes are detected poorly, which means that many adjacent bubbles cannot be separated. Therefore, an IoU score screening algorithm combines the two results so that the improved three-frame difference method complements the model's detection of small targets. The IoU algorithm flowchart is shown in FIG. 5, where T^(1)_{m,4} is the candidate-box position information matrix output in step S3 and T^(2)_{n,4} is the candidate-box position information matrix output in step S4. The IoU screening algorithm removes the overlap between the two results, enabling the model to enhance its detection of small bubbles.
As shown in fig. 6, which presents some of the pictures in the bubble data set produced in this example, to further improve the accuracy and comprehensiveness of detection, a single bubble in a picture is regarded as a normal bubble and several connected bubbles as an abnormal bubble.
In this embodiment, bubble picture 000004 is detected under three conditions: the convolutional neural network combined with sliding-window detection, the improved three-frame difference method alone, and the detection method of the present invention. Compared with the initial model, the model with the improved three-frame difference method detects small bubbles more strongly. As fig. 7 and fig. 8 show, the model before improvement fails to detect all the bubbles in the picture, especially those of small volume; as fig. 9 shows, with the detection method of the present invention the detection of bubbles is more accurate and more comprehensive.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A target detection method for bubbles in a plate heat exchanger, characterized in that the method comprises the following steps:
s1, acquiring a two-phase flow bubble picture data set in the plate heat exchanger by using a high-speed camera;
s2, building a convolutional neural network framework using the picture data set from S1 and determining a feature extractor, so as to reduce the computation time of feature extraction;
s3, for each frame, regarding a single bubble as a normal bubble and two or more connected bubbles as an abnormal bubble, so that the number of classes C is set to 2 and the number of filters F = (C + 5) × 3 = 21; the target detection and classification problem in the picture is thereby converted into a regression prediction problem within the convolutional neural network framework;
s4, improving the recognition rate of the picture to be detected in S1 by adopting an improved three-frame difference method;
and S5, comparing the outputs of S3 and S4 with an IoU score screening algorithm and removing the repeated candidate frames from the comparison result to obtain the final detection result.
2. The method for detecting the object of the bubble in the plate heat exchanger according to claim 1, wherein the step S1 is implemented by the steps of:
s11, shooting a two-phase flow video in the plate heat exchanger by using a high-speed camera;
s12, decomposing the video into single-frame pictures;
s13 makes the single-frame picture in the previous step into a bubble data set.
3. The method for detecting the target of bubbles in a plate heat exchanger according to claim 1, characterized in that step S2 comprises the following processing steps: a convolutional neural network framework is built on the Darknet learning framework using the YOLO algorithm to obtain a YOLOv3-tiny model; the parameters before the fully connected layer are frozen through transfer learning; the resulting model is finally used as the feature extractor to reduce the computation time of feature extraction.
4. The method for detecting the object of the air bubble in the plate heat exchanger according to claim 1, wherein the step S4 includes the following processing steps:
s41, selecting three adjacent pictures of a video, labeled P1, P2 and P3 in time order, and applying a Gaussian blur to the three pictures to obtain pictures P1′, P2′ and P3′ respectively;
s42, subtracting the previous-frame picture P1′ from the intermediate-frame picture P2′ of S41 to obtain a frame-difference image P21, and subtracting the next-frame picture P3′ from P2′ to obtain a frame-difference image P23;
s43, applying a binary segmentation with Otsu's method to the frame-difference image P21 from S42 to obtain a binary image P21′, and to the frame-difference image P23 from S42 to obtain a binary image P23′;
s44, applying an image AND operation to the binary images P21′ and P23′ from S43 to obtain an image P21′P23′;
s45, applying an image-enhancement algorithm to the frame-difference image P23 from S42 to raise its brightness and contrast, obtaining an image P23″; then applying the Canny edge detection algorithm to P23″ to obtain an image P23‴;
s46, after applying a median blur to the intermediate-frame picture P2 of S41, segmenting the foreground bubbles with background subtraction to obtain an image P2″; then removing the noise of P2″ with an image opening operation to obtain an image P2‴;
s47, applying an image OR operation to the images P21′P23′, P23‴ and P2‴ to obtain the final image PLast, from which the bubble contour information and position information are obtained.
5. The method for detecting the target of bubbles in a plate heat exchanger according to claim 1, characterized in that step S5 comprises the following processing steps:
s51, comparing the prediction result obtained in step S3 and the result obtained in step S4 with the IoU score formula to obtain an IoU score matrix; the IoU score formula is:
S = ((x^(1) + w^(1) − x^(2))(y^(1) + h^(1) − y^(2))) / (w^(1)h^(1) + w^(2)h^(2) − (x^(1) + w^(1) − x^(2))(y^(1) + h^(1) − y^(2)))
where x^(1), y^(1), w^(1), h^(1) are, respectively, the x-axis coordinate and y-axis coordinate of the top-left vertex of a candidate frame predicted in step S3 and the width and height of that frame, and x^(2), y^(2), w^(2), h^(2) are the corresponding quantities for a candidate frame predicted in step S4;
s52, setting any element S_ij of the IoU score matrix obtained in S51 that is less than the set threshold to zero; if a nonzero value exists in the i-th column vector of the IoU score matrix, the i-th candidate box is removed, and the data of the remaining candidate boxes are then output.
CN202010144170.2A 2020-03-04 2020-03-04 Target detection method for bubbles in plate heat exchanger Active CN111414938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010144170.2A CN111414938B (en) 2020-03-04 2020-03-04 Target detection method for bubbles in plate heat exchanger

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010144170.2A CN111414938B (en) 2020-03-04 2020-03-04 Target detection method for bubbles in plate heat exchanger

Publications (2)

Publication Number Publication Date
CN111414938A true CN111414938A (en) 2020-07-14
CN111414938B CN111414938B (en) 2023-06-20

Family

ID=71490889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010144170.2A Active CN111414938B (en) 2020-03-04 2020-03-04 Target detection method for bubbles in plate heat exchanger

Country Status (1)

Country Link
CN (1) CN111414938B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112557396A (en) * 2020-12-23 2021-03-26 东软睿驰汽车技术(沈阳)有限公司 Detection method and related equipment
CN114707015A (en) * 2022-03-14 2022-07-05 同盾科技有限公司 Trademark labeling method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013088827A1 (en) * 2011-12-16 2013-06-20 旭硝子株式会社 Video image analysis device, video image analysis method, and video image analysis program
CN103471502A (en) * 2013-08-22 2013-12-25 中国电子科技集团公司第四十八研究所 Device and method for detecting volume of gas-liquid two-phase flow bubbles
CN105389814A (en) * 2015-11-03 2016-03-09 浙江工业大学 Air bubble detection method for air tightness test
JP2018013986A (en) * 2016-07-21 2018-01-25 株式会社メガチップス Image processing apparatus
WO2018165753A1 (en) * 2017-03-14 2018-09-20 University Of Manitoba Structure defect detection using machine learning algorithms
US20190114833A1 (en) * 2018-12-05 2019-04-18 Intel Corporation Surface reconstruction for interactive augmented reality
CN110580709A (en) * 2019-07-29 2019-12-17 浙江工业大学 Target detection method based on ViBe and three-frame differential fusion
CN110766726A (en) * 2019-10-17 2020-02-07 重庆大学 Visual positioning and dynamic tracking method for moving target of large bell jar container under complex background


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PATTIRA UMYAI; PINIT KUMHOM; KOSIN CHAMNONGTHAI: "Air Bubbles Detecting on Ribbed Smoked Sheets Based On Fractal Dimension" *
DU Jianwei: "Bubble edge detection in a three-phase fluidized bed based on two-dimensional wavelet transform" *
GUO Hongchen; XU Sixiang; SUN Jie; ZHAI Ze: "Image detection of the first bubble in magnesium melt based on BP neural network" *


Also Published As

Publication number Publication date
CN111414938B (en) 2023-06-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant