CN110889418A - Gas contour identification method - Google Patents

Gas contour identification method

Info

Publication number
CN110889418A
Authority
CN
China
Prior art keywords
gas
neural network
detection
network model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911067035.6A
Other languages
Chinese (zh)
Inventor
付泽强 (Fu Zeqiang)
张树峰 (Zhang Shufeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Order Of Magnitude Shanghai Information Technology Co Ltd
Original Assignee
Order Of Magnitude Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Order Of Magnitude Shanghai Information Technology Co Ltd filed Critical Order Of Magnitude Shanghai Information Technology Co Ltd
Priority to CN201911067035.6A priority Critical patent/CN110889418A/en
Publication of CN110889418A publication Critical patent/CN110889418A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a gas contour recognition method in which training data in a training data set are input into a neural network model to form an overall feature map; a plurality of first detection frames are set in the feature maps of preset layers and screened to obtain a plurality of candidate detection frames, from which local feature maps of the same size are then extracted; a classification error and a detection error of the candidate detection frames are obtained by processing the local feature maps with the full connection layers of the neural network model; binary segmentation of the candidate detection frames yields a segmentation error; an output value of the loss function of the neural network model is then calculated from the classification error, the detection error and the segmentation error. These steps are executed in a loop so that the neural network model is iteratively trained on the training data set until the output value of the loss function falls within a preset range and training is complete. The invention has the beneficial effect of improving the accuracy of gas contour identification.

Description

Gas contour identification method
Technical Field
The invention relates to the technical field of image processing, in particular to a gas contour identification method.
Background
In recent years, infrared hyperspectral imaging technology has developed rapidly and has made on-site detection of industrial gas feasible: a gas target can be detected directly from its infrared absorption spectrum against the background infrared radiation spectrum of the atmosphere, without providing an artificial infrared light source. This technique has great advantages over traditional chemical sampling methods: it enables remote, real-time monitoring of gas over a large area and improves detection efficiency.
Because the data volume of a hyperspectral infrared image is very large, how to extract useful information from the massive data and measure the type, concentration and range of the gas in a detection area in the field is a problem that urgently needs to be solved. In the prior art, image-processing methods are applied to the hyperspectral infrared image of gas detection, for example using the OpenCV library for image morphological processing or for thresholding. However, because gas images vary widely and field environments are complex and diverse, these discrimination methods are relatively rough, poorly robust and low in detection accuracy, which makes the detection process time-consuming and labor-intensive and limits generalization.
Disclosure of Invention
In view of the above problems in the prior art, a method for identifying a gas contour is provided to effectively improve the accuracy of gas contour identification.
The specific technical scheme is as follows:
a gas contour identification method comprises the following steps:
before the construction process of the identification model is executed, firstly, a plurality of gas sample images are collected, image features of each gas sample image are labeled to form a corresponding labeled image, the labeled images and the corresponding gas sample images are associated to form training data, and finally a training data set is formed;
the construction process of the recognition model specifically comprises the following steps:
step A1, inputting training data in a training data set into a neural network model, and extracting first image feature information of the gas sample image in the training data through the convolution layers of the neural network model to form an overall feature map, wherein the neural network model is implemented as a Mask R-CNN neural network;
step A2, acquiring feature maps of a plurality of preset layers in the process of forming an overall feature map, setting a plurality of first detection frames in the feature maps of the preset layers by combining the FPN technology, and screening all the first detection frames to obtain a plurality of candidate detection frames;
step A3, extracting the features of all candidate detection frames to obtain local feature maps with the same size;
step A4, obtaining classification errors and detection errors of candidate detection frames by adopting full connection layer processing in a neural network model according to the local feature map;
step A5, performing binary segmentation processing on the candidate detection frame according to the local feature map to obtain a segmentation error;
step A6, calculating to obtain the output value of the loss function in the neural network model during the training according to the classification error, the detection error and the segmentation error, and then returning to the step A1;
circularly executing the steps A1-A6 to iteratively train the neural network model by adopting the training data set until the output value of the loss function is reduced to a preset range, and then outputting the neural network model as an identification model;
the method also comprises a process of identifying the gas contour by adopting the identification model:
step B1, acquiring a gas image to be identified;
and step B2, inputting the gas image to be identified into the identification model to obtain a gas contour identification result of the gas image to be identified.
Preferably, the identification method, wherein the step a2 specifically includes the following steps:
step A21, acquiring feature maps of a plurality of preset layers in the process of forming an overall feature map, and scanning pixel points on the feature maps of the preset layers by combining with the FPN technology so as to set a plurality of first detection frames on each pixel point;
step A22, classifying all the first detection frames to obtain a first type detection frame as a positive sample and a second type detection frame as a negative sample;
step A23, performing initial classification error and initial detection error training on an RPN network model in the neural network model by adopting a first preset number of first-type detection frames and a second preset number of second-type detection frames;
step A24, inputting all the first detection boxes into the trained RPN network model, so as to perform primary screening on all the first detection boxes to obtain a plurality of initial candidate detection boxes;
and step A25, adopting a non-maximum suppression algorithm to perform secondary screening on all the initial candidate detection frames to obtain a plurality of candidate detection frames.
Preferably, in the identification method, in step a1, a gas sample image is acquired by taking a gas within a visible range of a viewing angle of an infrared camera as a target gas.
Preferably, the identification method, wherein,
the labeling of the image features comprises:
adopting a marking frame to mark the target gas in the gas sample image as a target gas feature;
outlining the moving object in the gas sample image as the moving object characteristic; and
and marking other pixel points except the gas position of the target gas and the outline of the moving object in the gas sample image as image background features.
Preferably, in the identification method, before step A1 is performed for the first time, an initial neural network model is obtained by training a convolutional neural network with weight sharing.
Preferably, in the identification method, in step a3, the feature extraction is performed on the candidate detection frame by using a ROI Align processing method, which includes the following steps:
step A31, mapping the candidate detection frame of the gas sample image onto the global feature map to form a mapping feature map corresponding to the candidate detection frame on the global feature map, and obtaining the coordinates of the mapping feature map;
step A32, dividing the mapping feature map into a plurality of units;
step a33, performing a maximum pooling operation in each cell to extract a plurality of local feature maps with the same size.
Preferably, the identification method, wherein,
before the construction process of the identification model is executed, marking each marked image by adopting a real detection frame;
step a4 specifically includes the following steps:
obtaining classification errors of candidate detection frames for representing the class errors of the candidate detection frames by adopting a full connection layer according to the local feature map and the real detection frames; and
and obtaining a detection error between the coordinates of the candidate detection frame and the coordinates of the real detection frame corresponding to the candidate detection frame by adopting the full connection layer according to the local feature map and the real detection frame.
Preferably, the identification method is characterized in that a conventional activation function ReLU in the convolutional neural network is replaced by an activation function Leaky ReLU in the construction process of the identification model.
Preferably, the identification method, wherein the step a5 specifically includes the following steps:
step A51, inputting the candidate detection box into the mask branch in the neural network model;
step A52, carrying out semantic segmentation on the candidate detection box by the mask branch to obtain a mask matrix;
step A53, calculating the cross entropy of each pixel in the mask matrix by using an activation function;
and step A54, calculating the sum of the cross entropies of all the pixels, and taking the sum of the cross entropies as a segmentation error.
Preferably, the identification method, wherein the step B2 specifically includes the following steps:
step B21, inputting the gas image to be identified into the identification model to obtain a gas contour dividing result;
and step B22, performing thinning and dividing treatment on the gas contour dividing result to obtain a gas contour identification result.
Preferably, the identification method, wherein the step B22 specifically includes the following steps:
step B221, performing gray threshold division on the gas contour division result to obtain a gray threshold division result;
and step B222, fusing the gas contour division result and the gray threshold division result to obtain a gas contour identification result.
The technical scheme has the following advantages or beneficial effects:
firstly, feature extraction is performed by the Mask R-CNN neural network model, which overcomes the traditional methods' difficulty in selecting suitable features or in handling inconspicuous features, and improves training robustness;
secondly, the anti-interference capability in complex environments and under changes of the monitoring angle is improved, which further improves the detection precision of the gas contour identification result.
Drawings
Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings. The drawings are, however, to be regarded as illustrative and explanatory only and are not restrictive of the scope of the invention.
FIG. 1 is a schematic structural diagram of an identification model according to an embodiment of the identification method of the present invention;
FIG. 2 is a flow chart of a process for constructing a recognition model according to an embodiment of the recognition method of the present invention;
FIG. 3 is a flow chart of a process for identifying a gas profile using a recognition model according to an embodiment of the recognition method of the present invention;
FIG. 4 is a flowchart of step A2 of an embodiment of the identification method of the present invention;
FIG. 5 is a flowchart of a process of classifying all candidate detection frames using a positioning accuracy evaluation function according to an embodiment of the identification method of the present invention;
FIG. 6 is a flowchart of step A3 of an embodiment of the identification method of the present invention;
FIG. 7 is a flowchart of step A5 of an embodiment of the identification method of the present invention;
FIG. 8 is a flowchart of step B2 of an embodiment of the identification method of the present invention;
FIG. 9 is a flowchart of step B22 of an embodiment of the identification method of the present invention;
FIGS. 10A-10C are schematic views of gas images to be identified in accordance with an embodiment of the identification method of the present invention;
FIGS. 11A-11C are schematic views of gas images of the gas profile recognition result according to the embodiment of the recognition method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The invention relates to a gas contour identification method, which comprises the following steps:
before the construction process of the identification model is executed, firstly, a plurality of gas sample images are collected, image features of each gas sample image are labeled to form a corresponding labeled image, the labeled images and the corresponding gas sample images are associated to form training data, and finally a training data set is formed;
as shown in fig. 2, the construction process of the recognition model specifically includes:
step A1, inputting training data in a training data set into a neural network model, and extracting first image feature information of the gas sample image in the training data through the convolution layers of the neural network model to form an overall feature map, wherein the neural network model is implemented as a Mask R-CNN neural network;
step A2, acquiring feature maps of a plurality of preset layers in the process of forming an overall feature map, setting a plurality of first detection frames in the feature maps of the preset layers by combining the FPN technology, and screening all the first detection frames to obtain a plurality of candidate detection frames;
step A3, extracting the features of all candidate detection frames to obtain local feature maps with the same size;
step A4, obtaining classification errors and detection errors of candidate detection frames by adopting full connection layer processing in a neural network model according to the local feature map;
step A5, performing binary segmentation processing on the candidate detection frame according to the local feature map to obtain a segmentation error;
step A6, calculating to obtain the output value of the loss function in the neural network model during the training according to the classification error, the detection error and the segmentation error, and then returning to the step A1;
circularly executing the steps A1-A6 to iteratively train the neural network model by using the training data set until the training is completed when the output value of the loss function is reduced to a preset range, and then outputting the neural network model as the recognition model, as shown in FIG. 1;
as shown in fig. 3, a process of identifying the gas profile using the identification model is further included:
step B1, acquiring a gas image to be identified;
and step B2, inputting the gas image to be identified into the identification model to obtain a gas contour identification result of the gas image to be identified.
In this embodiment, each labeled image is associated with its corresponding gas sample image to form training data, which together constitute a training data set; the training data are then input into a neural network model implemented as a Mask R-CNN neural network, the model is iteratively trained on the training data set with the Mask R-CNN infrared gas segmentation method, and the trained neural network model is finally used as the recognition model. This overcomes the traditional methods' difficulty in selecting suitable features or in handling inconspicuous features, and thereby improves training robustness;
the recognition model is then used to identify the gas image to be identified, so that the image is repeatedly convolved and pooled inside the recognition model and the extracted feature information is analyzed to produce a recognition result; this effectively improves the anti-interference capability in complex environments and under changes of the monitoring angle, and improves the detection and recognition precision of the gas contour identification result.
Further, as a preferred embodiment, the process of acquiring the gas sample image before the process of constructing the recognition model, and the process of recognizing the gas profile by using the recognition model can be performed in the graphics card, so that the training speed and the recognition speed are increased.
Further, as a preferred embodiment, steps a1-a6 are executed in a loop to iteratively train the neural network model with the training data set until the output value of the loss function is reduced to a preset range and the loss curve of the loss function tends to be horizontal, so as to determine that the neural network model converges well, that is, the neural network model is trained and output, and then the output neural network model is used as the recognition model.
Further, in the above embodiment, as shown in fig. 4, step a2 specifically includes the following steps:
step A21, acquiring feature maps of a plurality of preset layers in the process of forming an overall feature map, and scanning pixel points on the feature maps of the preset layers by combining with the FPN technology so as to set a plurality of first detection frames on each pixel point;
step A22, classifying all the first detection frames to obtain a first type detection frame as a positive sample and a second type detection frame as a negative sample;
step A23, performing initial classification error and initial detection error training on an RPN network model in the neural network model by adopting a first preset number of first-type detection frames and a second preset number of second-type detection frames; the first preset number and the second preset number can be the same or different, and both can be set according to the requirements of users.
Step A24, inputting all the first-class detection boxes into a trained rpn network model, and performing primary screening on all the first-class detection boxes to obtain a plurality of initial candidate detection boxes;
and step A25, adopting a non-maximum suppression algorithm to perform secondary screening on all the initial candidate detection frames to obtain a plurality of candidate detection frames.
Further, as a preferred embodiment, as shown in fig. 1:
firstly, training data in the training data set are input into the ResNet backbone network in the neural network model, so that feature extraction is performed on the gas sample images in the training data through the ResNet network to form an overall feature map;
in the course of forming the overall feature map, feature maps are produced at different layers, and the feature maps of these different layers are fused;
secondly, acquiring feature maps of a plurality of preset layers in the process of forming the overall feature map, and scanning pixel points on the feature maps of the preset layers by combining an FPN technology so as to set a plurality of first detection frames on each pixel point, and classifying all the first detection frames to obtain a first type detection frame serving as a positive sample and a second type detection frame serving as a negative sample;
it should be noted that the characteristic diagrams of the plurality of preset layers are as follows: a feature map of a portion of the layers in the process of forming the global feature map.
The FPN technique is applied through the FPN structure in the neural network model. The FPN structure is a feature pyramid comprising a bottom-up pathway, a top-down pathway and lateral connections, which fuse the features of each level of the pyramid and thereby improve the placement accuracy of the first detection frames.
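For illustration only, the following is a minimal sketch of such FPN-style fusion, written in PyTorch; the class name SimpleFPN, the backbone channel counts and the use of nearest-neighbour upsampling are assumptions for this sketch, not details disclosed by the embodiment:

```python
# Minimal FPN sketch: lateral 1x1 convolutions plus a top-down pathway.
# Channel counts and names are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # Lateral connections: align every bottom-up level to one channel count
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 convolutions smooth each fused level
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):  # feats: bottom-up maps [c2, c3, c4, c5], finest first
        laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
        # Top-down pathway: upsample the coarser level and add the lateral connection
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [conv(p) for conv, p in zip(self.smooth, laterals)]  # [p2..p5]
```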
Then, initial classification error and initial detection error training is performed on the RPN network model in the neural network model with the first-type and second-type detection frames; all first detection frames are screened by the trained RPN network model to obtain initial candidate detection frames, and all the initial candidate detection frames are screened again with a non-maximum suppression algorithm to obtain a plurality of candidate detection frames;
then, extracting features of all candidate detection frames by using ROI align to obtain local feature maps with the same size;
then, processing by adopting a full connection layer in a neural network model according to the local feature map to obtain a classification error and a detection error of the candidate detection frame; and
carrying out segmentation processing on the candidate detection frame to obtain a segmentation error;
and finally, calculating to obtain an output value of the loss function in the neural network model during the training according to the classification error, the detection error and the segmentation error.
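The loop over steps A1-A6 can be sketched as follows; this is a minimal illustration assuming a PyTorch-style model whose forward pass returns the three error terms, and the names model, loader, optimizer and the stopping threshold are hypothetical:

```python
# Minimal training-loop sketch for steps A1-A6 (names and threshold are illustrative).
def train_recognition_model(model, loader, optimizer, threshold=0.05, max_epochs=100):
    for epoch in range(max_epochs):
        running = 0.0
        for images, targets in loader:
            # Forward pass is assumed to return the three error terms of steps A4-A5
            cls_err, det_err, seg_err = model(images, targets)
            loss = cls_err + det_err + seg_err  # output value of the loss function (step A6)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item()
        # Training finishes once the loss output falls into the preset range
        if running / len(loader) < threshold:
            break
    return model  # the trained model is output as the recognition model
```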
As a preferred embodiment, in step a21, 3 first detection frames may be set on each pixel point.
In step a22 in the above embodiment, all the first detection frames may be classified by using the positioning accuracy evaluation function IOU to obtain the first type detection frame as the positive sample and the second type detection frame as the negative sample.
The positive sample is the image feature of the gas contour to be identified in the training data set of the present embodiment, and the negative sample is the image feature of the non-gas contour associated with the gas contour.
Further, as a preferred embodiment, as shown in fig. 5, the process of classifying all the first detection frames by using the positioning accuracy evaluation function specifically includes the following steps:
step C1, calculating according to the positioning precision evaluation function to obtain the detection value of each first detection frame;
step C2, determining whether the detection value is greater than a first preset threshold value;
if so, setting the first detection frame as a first type detection frame;
step C3, determining whether the detection value is less than a second preset threshold value;
if yes, setting the first detection frame as a second type detection frame.
As a preferred embodiment, before the construction process of the recognition model is executed, each tagged image may be marked by using a real detection frame; and then calculating a detection value of each first detection frame according to the size of an overlapped part between each first detection frame and the candidate detection frame according to the positioning accuracy evaluation function, wherein the detection value is an IOU value, the first preset threshold value can be 0.7, and the second preset threshold value can be 0.3.
In the above preferred embodiment, the process of classifying all the first detection frames by using the positioning accuracy evaluation function specifically includes:
firstly, calculating according to a positioning precision evaluation function to obtain an IOU value of each first detection frame relative to a real detection frame;
then judging whether the detection value is larger than 0.7;
if yes, setting the first detection frame as a first type detection frame;
if not, then judging whether the detection value is less than 0.3;
if yes, setting the first detection frame as a second type detection frame.
It should be noted that the positioning accuracy evaluation function IOU defines the overlapping degree of the two detection frames.
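A minimal sketch of such an IOU computation, together with the thresholded labelling of steps C1-C3, is given below; boxes are assumed to be (x1, y1, x2, y2) corner coordinates and the helper names are illustrative:

```python
# IOU of two detection frames, and positive/negative labelling (thresholds 0.7 / 0.3).
def iou(box_a, box_b):
    # Intersection rectangle of the two frames
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def classify_frame(frame, real_frame, hi=0.7, lo=0.3):
    v = iou(frame, real_frame)  # detection value of the first detection frame (step C1)
    if v > hi:
        return "first-type"     # positive sample (step C2)
    if v < lo:
        return "second-type"    # negative sample (step C3)
    return "ignored"            # between the thresholds: not used as a sample
```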
Further, in the above embodiment, in step a24, the scores of all the first-type detection frames may be ranked from high to low, and a preset number of ranked initial candidate detection frames are obtained; the preset number can be 3000, and the preset number can be set in a user-defined mode according to the requirements of users.
In step a25 in the above embodiment, a non-maximum suppression algorithm is used to perform re-screening on all initial candidate detection boxes, so that a candidate detection box with the highest score can be obtained from all initial candidate detection boxes, and redundant initial candidate detection boxes with different sizes are removed.
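The secondary screening can be sketched as follows, assuming numpy arrays of boxes and scores and the iou() helper sketched above; the overlap threshold is illustrative:

```python
# Minimal non-maximum suppression sketch for step A25 (threshold is illustrative).
import numpy as np

def nms(boxes, scores, iou_threshold=0.7):
    order = np.argsort(scores)[::-1]  # highest-scoring candidates first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))        # keep the highest-scoring box
        # Drop the remaining boxes that overlap the kept box too strongly
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_threshold], dtype=int)
    return keep                        # indices of the retained candidate detection frames
```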
Further, in the above embodiment, in step a1, the gas sample image is acquired by taking the gas within the visible range of the viewing angle of the infrared camera as the target gas.
The infrared camera can be used for acquiring better gas sample images in a complex environment.
As a preferred embodiment, the acquired gas sample image may be subjected to data expansion processing;
the data expansion processing steps, sketched in code after the following list, may be:
performing rotation processing on the gas sample image;
adding noise processing to the gas sample image;
and performing enhancement processing on the gas sample image.
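A minimal sketch of the three expansion operations listed above, assuming OpenCV and a grayscale image array; the rotation angle, noise level and contrast parameters are illustrative:

```python
# Data expansion sketch: rotation, added noise, enhancement (parameters illustrative).
import cv2
import numpy as np

def expand(image):
    h, w = image.shape[:2]
    # Rotation processing
    m = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)
    rotated = cv2.warpAffine(image, m, (w, h))
    # Added-noise processing (Gaussian noise)
    noise = np.random.normal(0, 10, image.shape)
    noisy = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # Enhancement processing (simple contrast/brightness stretch)
    enhanced = cv2.convertScaleAbs(image, alpha=1.2, beta=10)
    return [rotated, noisy, enhanced]
```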
Further, in the above embodiment, the labeling of the image feature includes:
adopting a marking frame to mark the target gas in the gas sample image as a target gas feature;
outlining the moving object in the gas sample image as the moving object characteristic; and
and marking other pixel points except the gas position of the target gas and the outline of the moving object in the gas sample image as image background features.
Further, in the above embodiment, before the first execution of step a1, an initial Neural network model is obtained by first training with a Convolutional Neural Network (CNN) with weight sharing.
Therefore, the convolutional neural network training based on weight sharing is realized to obtain an initial neural network model, and the network training is carried out on the initial neural network model to obtain an identification model, so that the process of obtaining the initial neural network model is simplified, and the speed of constructing the identification model is increased.
Further, in the above embodiment, as shown in fig. 6, in step a3, feature extraction is performed on the candidate detection frame by using a ROI Align processing method, specifically including the following steps:
step A31, mapping the candidate detection frame of the gas sample image onto the global feature map to form a mapping feature map corresponding to the candidate detection frame on the global feature map, and obtaining the coordinates of the mapping feature map;
step A32, dividing the mapping feature map into a plurality of units;
step a33, performing a maximum pooling operation in each cell to extract a plurality of local feature maps with the same size.
In the above embodiment, the coordinates of the mapping feature map obtained in step a31 may be kept as a decimal, and the floating point boundary of the mapping feature map is not quantized;
then, the step a33 specifically includes:
uniformly taking points in each unit;
then obtaining the numerical value of the point through bilinear interpolation;
then, performing maximum pooling operation to extract local feature maps with the same size, so as to reduce neighborhood errors and mean shift errors;
as a preferred embodiment, each mapping feature map may be divided into four units in step a32, the floating point number boundary of each unit is not quantized, four points are uniformly taken in each unit in step a33, and the numerical value of each point is obtained by bilinear interpolation.
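A minimal ROI Align sketch following steps A31-A33 and the four-unit, four-point configuration above; it assumes a 2-D numpy feature map and a candidate box given in un-quantized floating-point feature-map coordinates:

```python
# ROI Align sketch: float boundaries, bilinear sampling, maximum pooling per unit.
import numpy as np

def bilinear(feat, y, x):
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feat.shape[0] - 1), min(x0 + 1, feat.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) + feat[y0, x1] * (1 - dy) * dx
            + feat[y1, x0] * dy * (1 - dx) + feat[y1, x1] * dy * dx)

def roi_align(feat, box, out=2, samples=2):   # 2x2 units, 2x2 points per unit
    x1, y1, x2, y2 = box                      # floating-point boundary, not quantized
    cell_h, cell_w = (y2 - y1) / out, (x2 - x1) / out
    pooled = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            vals = []
            for si in range(samples):          # uniformly take points in each unit
                for sj in range(samples):
                    y = y1 + (i + (si + 0.5) / samples) * cell_h
                    x = x1 + (j + (sj + 0.5) / samples) * cell_w
                    vals.append(bilinear(feat, y, x))  # value by bilinear interpolation
            pooled[i, j] = max(vals)           # maximum pooling over the sampled values
    return pooled
```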
Further, in the above embodiment, before the construction process of the recognition model is performed, each annotation image is marked by using a real detection frame;
step a4 specifically includes the following steps:
obtaining classification errors of candidate detection frames for representing the class errors of the candidate detection frames by adopting a full connection layer according to the local feature map and the real detection frames; and
and obtaining a detection error between the coordinates of the candidate detection frame and the coordinates of the real detection frame corresponding to the candidate detection frame by adopting the full connection layer according to the local feature map and the real detection frame.
Further, in the above embodiment, during the construction of the recognition model, the conventional activation function ReLU in the convolutional neural network is replaced with the activation function Leaky ReLU.
The activation function Leaky ReLU may be used to process the weights of the fully convolutional neural network model.
Replacing the conventional activation function ReLU with the faster-converging activation function Leaky ReLU improves the accuracy of gas contour identification and avoids the problems of gradient explosion and vanishing gradients.
The activation function Leaky ReLU can be expressed by the following formula:

y_i = x_i        (x_i ≥ 0)
y_i = x_i / a_i  (x_i < 0)          (1)

wherein, in the above formula (1):
a_i represents a fixed parameter within the (1, +∞) interval;
i represents the index of a pixel point in the feature map output by the convolution layer;
x_i represents the value of pixel point i in the feature map output by the convolution layer;
y_i represents the output value after the activation function operation.
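For illustration, a minimal elementwise implementation of formula (1), assuming the fixed parameter a_i is shared across all pixels:

```python
# Leaky ReLU of formula (1): y = x for x >= 0, y = x / a for x < 0.
import numpy as np

def leaky_relu(x, a=5.0):  # a is a fixed parameter in the (1, +inf) interval
    return np.where(x >= 0, x, x / a)
```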
Further, in the above embodiment, as shown in fig. 7, step a5 specifically includes the following steps:
step A51, inputting the candidate detection box into the mask branch in the neural function network model;
step A52, the mask branch carries out semantic segmentation (namely binary segmentation processing) on the candidate detection box according to the local feature map to obtain a mask matrix;
step A53, calculating the cross entropy of each pixel in the mask matrix by using an activation function;
and step A54, calculating the sum of the cross entropies of all the pixels, and taking the sum of the cross entropies as a segmentation error.
In a preferred embodiment, the output dimension of the mask branch is K×m×m, that is, one binary mask of size m×m for each of the K classes;
the mask branch may adopt an FCN (fully convolutional network) structure to segment the local feature map and obtain a K-layer mask matrix.
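A minimal sketch of the segmentation error of steps A51-A54, assuming a PyTorch mask branch that outputs K masks of size m×m and, as in Mask R-CNN, that only the mask of the ground-truth class contributes to the error:

```python
# Per-pixel cross entropy of the mask matrix, summed over all pixels (step A54).
import torch.nn.functional as F

def segmentation_error(mask_logits, gt_mask, gt_class):
    # mask_logits: (K, m, m) raw outputs of the mask branch
    # gt_mask: (m, m) binary float tensor of the true contour
    logits = mask_logits[gt_class]  # only the mask of the ground-truth class is used
    return F.binary_cross_entropy_with_logits(logits, gt_mask, reduction="sum")
```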
Further, in the above embodiment, as shown in fig. 8, step B2 specifically includes the following steps:
step B21, inputting the gas image to be identified into the identification model to obtain a gas contour division result, wherein the gas contour division result is a rough gas contour division result of the gas image to be identified;
and step B22, performing thinning and dividing processing on the gas contour dividing result to obtain a gas contour identification result, wherein the gas contour identification result is an accurate gas contour identification result of the gas image to be identified.
In this embodiment, a coarse-to-fine gas contour identification strategy is implemented: coarse identification first filters out part of the image features, and fine identification then further refines the gas contour division result, which improves identification efficiency while preserving the accuracy of the identification result.
Further, in the above embodiment, as shown in fig. 9, step B22 specifically includes the following steps:
step B221, performing gray threshold division on the gas contour division result to obtain a gray threshold division result;
and step B222, fusing the gas contour division result and the gray threshold division result to obtain a gas contour identification result.
Further, as a preferred embodiment, before step B221, a minimum rectangular frame may be found according to the gas contour dividing result by using an image morphology technique, and the gas contour dividing result is circumscribed by the minimum rectangular frame;
in step B221, a gray threshold may be set, and the image in the minimum rectangular frame is divided according to the gray threshold, so as to obtain a gray threshold division result;
when the gray value of the image is smaller than the gray threshold value, determining that the image is a gas image;
in step B223, the gas contour division result and the grayscale threshold division result are fused, and the gas contour region where the two contours are repeated is extracted to obtain a gas contour identification result; namely, the gas contour region where two contours are repeated is the gas contour recognition result.
In addition, the embodiment combines a threshold segmentation technology on the basis of the identification model obtained by the neural network model, supplements the gas contour division result obtained by the identification model, realizes a progressive gas contour identification strategy from coarse to fine, filters dissimilar images by coarse identification, and further improves the gas contour identification result by fine identification, thereby improving the accuracy and generalization capability of gas contour identification and providing more accurate information for further gas analysis.
Further, as a preferred embodiment, an infrared camera is used to acquire a first number of gas sample images and a second number of gas images to be identified;
the first number and the second number may be the same, and the gas sample image and the gas image to be identified in the present embodiment may be both 250, which may be specifically set according to the needs of the user;
the gas to be identified in the present embodiment is methane gas;
wherein the gas image to be identified is shown in fig. 10A-10C;
inputting a training data set corresponding to 250 gas sample images into a neural network model for network training to obtain a recognition model;
then inputting 250 gas images to be identified into the identification model to obtain a gas contour division result, and then performing thinning division processing on the gas contour division result to obtain a gas contour identification result;
wherein gas images corresponding to the gas contour recognition results of the gas images to be recognized shown in fig. 10A to 10C are shown in fig. 11A to 11C;
the results of the gas profile identification obtained are shown in the following table:
[Table: gas contour recognition results for the 250 gas images to be identified]
That is, the correct recognition rate in the gas contour recognition results reaches 99%, and the false recognition rate is 1%.
Therefore, the gas profile identification method in the embodiment can improve the accuracy of the gas profile identification result.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (11)

1. A method for identifying a gas contour comprises the following steps:
before the construction process of the identification model is executed, firstly, a plurality of gas sample images are collected, image features of each gas sample image are labeled to form a corresponding labeled image, the labeled images and the corresponding gas sample images are associated to form training data, and finally a training data set is formed;
the construction process of the identification model specifically comprises the following steps:
step A1, inputting the training data in the training data set into a neural network model, and extracting first image feature information of the gas sample image in the training data from a convolution layer of the neural network model to form an overall feature map, wherein the neural network model is implemented by adopting a mask-rcnn neural network;
step A2, acquiring feature maps of a plurality of preset layers in the process of forming the overall feature map, setting a plurality of first detection frames in the feature maps of the preset layers by combining with the FPN technology, and screening all the first detection frames to obtain a plurality of candidate detection frames;
step A3, extracting features of all the candidate detection frames to obtain local feature maps with the same size;
step A4, processing by adopting a full connection layer in the neural network model according to the local feature map to obtain a classification error and a detection error of the candidate detection frame;
step A5, performing binary segmentation processing on the candidate detection frame according to the local feature map to obtain a segmentation error;
step A6, calculating an output value of a loss function in the neural network model during the training according to the classification error, the detection error and the segmentation error, and then returning to the step A1;
performing the steps A1-A6 in a loop, so as to perform iterative training on the neural network model by using the training data set until the output value of the loss function is reduced to be within a preset range, and then outputting the neural network model as the recognition model;
the method also comprises a process of identifying the gas contour by adopting the identification model:
step B1, acquiring a gas image to be identified;
step B2, inputting the gas image to be identified into the identification model to obtain a gas contour identification result of the gas image to be identified.
2. The identification method according to claim 1, wherein the step a2 specifically comprises the steps of:
step A21, acquiring a plurality of characteristic diagrams of the preset layer in the process of forming the overall characteristic diagram, and performing pixel point scanning on the characteristic diagram of the preset layer by combining with an FPN technology so as to set a plurality of first detection frames on each pixel point;
step A22, classifying all the first detection frames to obtain a first type detection frame as a positive sample and a second type detection frame as a negative sample;
step A23, performing initial classification error and initial detection error training on an RPN network model in the neural network model by adopting a first preset number of the first type of detection frames and a second preset number of the second type of detection frames;
step A24, inputting all the first-class detection boxes into the trained RPN network model, so as to perform primary screening on all the first-class detection boxes to obtain a plurality of initial candidate detection boxes;
and step A25, adopting a non-maximum suppression algorithm to perform secondary screening on all the initial candidate detection frames to obtain a plurality of candidate detection frames.
3. The identification method according to claim 1, wherein in step A1, the gas sample image is acquired by taking the gas within the visible range of the viewing angle of an infrared camera as the target gas.
4. The identification method of claim 1,
the labeling of the image features comprises:
marking the target gas in the gas sample image as a target gas characteristic by using a marking frame;
outlining the outline of the moving object in the gas sample image as a moving object characteristic; and
marking other pixel points except the gas position of the target gas and the outline of the moving object in the gas sample image as image background features.
5. The method of claim 1, wherein before the first performing of the step a1, an initial neural network model is obtained by using a convolutional neural network training with weight sharing.
6. The identification method according to claim 1, wherein in step A3, an ROI Align processing method is adopted to perform feature extraction on the candidate detection frame, specifically comprising the following steps:
step A31, mapping the candidate detection frame of the gas sample image onto the global feature map to form a mapping feature map corresponding to the candidate detection frame on the global feature map, and obtaining the coordinates of the mapping feature map;
a step a32 of dividing the map feature map into a plurality of cells;
step a33, performing a maximal pooling operation in each of the units to extract a plurality of the local feature maps having the same size.
7. The identification method of claim 6,
before the construction process of the identification model is executed, marking each marked image by adopting a real detection frame;
the step a4 specifically includes the following steps:
obtaining a classification error of the candidate detection frame for representing the category error by adopting a full connection layer according to the local feature map and the real detection frame; and
and obtaining a detection error between the coordinates of the candidate detection frame and the coordinates of the real detection frame corresponding to the candidate detection frame by adopting a full connection layer according to the local feature map and the real detection frame.
8. The recognition method of claim 1, wherein during the construction of the recognition model, the conventional activation function ReLU in the convolutional neural network is replaced with the activation function Leaky ReLU.
9. The identification method according to claim 6, wherein the step A5 specifically comprises the steps of:
step A51, inputting the candidate detection box into a mask branch in the neural network model;
step A52, the mask branch carries out semantic segmentation on the candidate detection box to obtain a mask matrix;
step A53, calculating the cross entropy of each pixel in the mask matrix by adopting an activation function;
and step A54, calculating the sum of the cross entropies of all the pixels, and taking the sum of the cross entropies as a segmentation error.
10. The identification method according to claim 1, wherein the step B2 specifically comprises the steps of:
step B21, inputting the gas image to be identified into the identification model to obtain a gas contour dividing result;
and step B22, performing thinning and dividing processing on the gas contour dividing result to obtain the gas contour identification result.
11. The identification method according to claim 1, wherein the step B22 specifically comprises the steps of:
step B221, performing gray threshold division on the gas contour division result to obtain a gray threshold division result;
and step B222, fusing the gas contour division result and the gray threshold division result to obtain the gas contour identification result.
CN201911067035.6A 2019-11-04 2019-11-04 Gas contour identification method Pending CN110889418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911067035.6A CN110889418A (en) 2019-11-04 2019-11-04 Gas contour identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911067035.6A CN110889418A (en) 2019-11-04 2019-11-04 Gas contour identification method

Publications (1)

Publication Number Publication Date
CN110889418A true CN110889418A (en) 2020-03-17

Family

ID=69746848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911067035.6A Pending CN110889418A (en) 2019-11-04 2019-11-04 Gas contour identification method

Country Status (1)

Country Link
CN (1) CN110889418A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527351A (en) * 2017-08-31 2017-12-29 华南农业大学 A kind of fusion FCN and Threshold segmentation milking sow image partition method
CN109325395A (en) * 2018-04-28 2019-02-12 二十世纪空间技术应用股份有限公司 The recognition methods of image, convolutional neural networks model training method and device
CN110097068A (en) * 2019-01-17 2019-08-06 北京航空航天大学 The recognition methods of similar vehicle and device
CN109919012A (en) * 2019-01-28 2019-06-21 北控水务(中国)投资有限公司 A kind of indicative microorganism image-recognizing method of sewage treatment based on convolutional neural networks
CN110111416A (en) * 2019-05-07 2019-08-09 西安科技大学 Mine internal model based on HoloLens glasses acquires method for building up

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贺嘉琪 (He Jiaqi): "Research and application of automatic recognition of pointer instrument readings based on deep learning" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084899A (en) * 2020-08-25 2020-12-15 广东工业大学 Fall event detection method and system based on deep learning
CN116563769A (en) * 2023-07-07 2023-08-08 南昌工程学院 Video target identification tracking method, system, computer and storage medium
CN116563769B (en) * 2023-07-07 2023-10-20 南昌工程学院 Video target identification tracking method, system, computer and storage medium

Similar Documents

Publication Publication Date Title
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN110175982B (en) Defect detection method based on target detection
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN112418117B (en) Small target detection method based on unmanned aerial vehicle image
CN112215128B (en) FCOS-fused R-CNN urban road environment recognition method and device
CN113378686B (en) Two-stage remote sensing target detection method based on target center point estimation
CN112200045A (en) Remote sensing image target detection model establishing method based on context enhancement and application
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN113298809B (en) Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN113221956A (en) Target identification method and device based on improved multi-scale depth model
CN110889418A (en) Gas contour identification method
CN110992301A (en) Gas contour identification method
CN112364687A (en) Improved Faster R-CNN gas station electrostatic sign identification method and system
CN116630301A (en) Strip steel surface small target defect detection method and system based on super resolution and YOLOv8
CN116052110A (en) Intelligent positioning method and system for pavement marking defects
CN113657196B (en) SAR image target detection method, SAR image target detection device, electronic equipment and storage medium
CN116912670A (en) Deep sea fish identification method based on improved YOLO model
CN114927236A (en) Detection method and system for multiple target images
CN112199984B (en) Target rapid detection method for large-scale remote sensing image
CN113012167A (en) Combined segmentation method for cell nucleus and cytoplasm
CN113673534B (en) RGB-D image fruit detection method based on FASTER RCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination