CN114626445B - Dam termite video identification method based on optical flow network and Gaussian background modeling - Google Patents

Dam termite video identification method based on optical flow network and Gaussian background modeling

Info

Publication number
CN114626445B
CN114626445B (application CN202210187820.0A)
Authority
CN
China
Prior art keywords
images
optical flow
termite
image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210187820.0A
Other languages
Chinese (zh)
Other versions
CN114626445A (en)
Inventor
卢鑫
龙艺
刘双美
刘建明
麻泽龙
肖翔
阚飞
邢志
惠健
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HYDRAULIC SCIENCE RESEARCH INSTITUTE OF SICHUAN PROVINCE
Original Assignee
HYDRAULIC SCIENCE RESEARCH INSTITUTE OF SICHUAN PROVINCE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HYDRAULIC SCIENCE RESEARCH INSTITUTE OF SICHUAN PROVINCE filed Critical HYDRAULIC SCIENCE RESEARCH INSTITUTE OF SICHUAN PROVINCE
Priority to CN202210187820.0A priority Critical patent/CN114626445B/en
Publication of CN114626445A publication Critical patent/CN114626445A/en
Application granted granted Critical
Publication of CN114626445B publication Critical patent/CN114626445B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Abstract

The invention discloses a dam termite video identification method based on an optical flow network and Gaussian background modeling. The method acquires a video sequence monitoring termite activity on a reservoir dam, reads the images frame by frame and preprocesses them; outputs N optical-flow-estimated termite probability value images using an optical flow network model; outputs N frames of moving-target probability value images using a Gaussian mixture background model; post-processes the two sets of probability value images, fuses them, filters out targets with confidence less than a threshold T, and obtains identification images through binarization and connected-region analysis. Relative to Gaussian background modeling alone, the invention recovers targets missed when moving-target pixels are close to the background; relative to using the optical flow method directly, it eliminates false detections caused by spurious optical flow. Monitoring is completed automatically, which improves the accuracy of termite count and activity-state identification, reduces the workload of manual inspection, improves the timeliness of termite monitoring and treatment, and provides a new method for monitoring termite hazards on reservoir dams.

Description

Dam termite video identification method based on optical flow network and Gaussian background modeling
Technical Field
The invention relates to a termite identification method, in particular to a dam termite video identification method based on an optical flow network and Gaussian background modeling.
Background
As the proverb says, "a dike of a thousand li collapses from an ant hole": even a long dam can ultimately be destroyed by the burrowing of insects, and the "ants" referred to here are termites. For a reservoir dam, hidden termite damage seriously affects operational safety. Soil-dwelling termites in particular like to colonize, forage and nest in earth dams, and the main nest and satellite nests of a single colony can number in the hundreds, severely damaging the interior of the earth dam. Once the flood season arrives and the water level rises, water seeps into the termite tunnels and cavities and forms piping; as the water pressure increases, the high-pressure flow carries out large amounts of sediment, the tunnels continuously widen, and landslide or collapse ultimately results. Termite control on reservoir dams is therefore of great importance, and with the development of society, adopting efficient, low-toxicity termite control strategies on reservoir dams is the inevitable trend of termite control.
At present, termite control and monitoring mainly rely on manually sampling selected areas to judge termite activity in the reservoir dam. The cost is high, the workload, difficulty and time demands of manual inspection are large, and the monitoring results are not necessarily accurate.
Regarding the optical flow estimation network: two frames of images containing targets are stacked as input, and the optical-flow-estimated target probability value image corresponding to the two frames is output.
Regarding Gaussian background modeling: video image data are modeled with a Gaussian mixture model, and each pixel of the input image is classified as a background pixel or a moving-target pixel according to its pixel-value characteristics, thereby separating the background from moving targets and extracting the moving targets.
Disclosure of Invention
The invention aims to provide a dam termite video identification method based on an optical flow network and Gaussian background modeling, which solves the above problems, greatly reduces the workload of manual inspection, improves the timeliness of termite monitoring and treatment, and accurately estimates the number and activity state of termites.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a dam termite video identification method based on an optical flow network and Gaussian background modeling, comprising the following steps:
(1) Acquiring a video sequence monitoring termite activity on a reservoir dam, reading the images frame by frame and preprocessing them to obtain N+1 frames of monitoring images, wherein N is a positive integer;
(2) Constructing an optical flow estimation network and training it to obtain an optical flow network model, wherein the optical flow estimation network adopts the FlowNet2 network structure, takes a stack of two frames containing targets as input, and outputs the optical-flow-estimated target probability value image corresponding to the two frames;
(3) With termites as the target, sequentially inputting the N+1 frames of monitoring images into the optical flow network model to obtain N optical-flow-estimated termite probability value images, sequentially marked P1~PN;
(4) Constructing a Gaussian mixture background model, inputting all monitoring images into it for training to obtain a background image, and sequentially extracting the moving targets of each image to obtain N+1 frames of moving-target probability value images, of which N frames are selected and sequentially marked Q1~QN;
(5) Image post-processing: performing morphological filtering and noise filtering on P1~PN and Q1~QN respectively and filling missing parts, to obtain images A1~AN corresponding to P1~PN and images B1~BN corresponding to Q1~QN;
(6) Fusion:
(61) taking images A1~AN as group A and B1~BN as group B, and setting the weights of group A and group B;
(62) performing weighted data fusion of Ai and Bi to obtain a fused image Ci, i = 1~N;
(63) setting a threshold T, filtering out targets in Ci whose confidence is less than T, and binarizing to obtain 1 binarized image;
(7) Performing connected-region analysis on the binarized image, treating each connected region as one termite target, calculating the minimum circumscribed rectangle of each connected region, and marking the termite target positions to obtain 1 identification image;
(8) Repeating steps (62)-(7) to process all images of group A and group B, obtaining N identification images arranged in sequence.
As preferable: in step (1), the image preprocessing specifically comprises preprocessing each image by filtering denoising or histogram enhancement.
As preferable: in step (2), the optical flow network model is obtained through training as follows: the optical flow estimation network is a three-layer stacked network, the first layer being a FlowNetC network and the second and third layers FlowNetS networks; the network is pre-trained with the FlyingChairs dataset, and the network weights are then fine-tuned with the FlyingThings3D dataset until the model converges.
The optical flow estimation network takes a stack of two frames containing targets as input and outputs the optical-flow-estimated target probability value image corresponding to the two frames; with termites as the target, inputting a stacked pair of frames from the video sequence monitoring termite activity on the reservoir dam causes the trained model to output a termite probability value image.
All images are input into the Gaussian mixture background model for training; the background image is obtained through continuous optimization, and the moving targets of each frame are extracted against this background image. N frames of the moving-target probability value images output by the Gaussian mixture background model are taken, in one-to-one correspondence with the optical-flow-estimated termite probability value images.
The optical-flow-estimated termite probability value images and the moving-target probability value images of the Gaussian mixture model are fused by weighting.
Compared with the prior art, the invention has the following advantages: after processing the termite monitoring video of the reservoir dam, termite targets are preliminarily identified and extracted by the optical flow estimation network and the Gaussian background modeling method respectively, and the results of the two algorithms are then weighted and fused to obtain the final termite identification result. Monitoring is completed automatically, which greatly reduces the workload of manual inspection, improves the timeliness of termite monitoring and treatment, and provides a new method for monitoring termite damage on reservoir dams.
Relative to Gaussian background modeling alone, the recognition method based on fusing the optical flow method with Gaussian background modeling can recover targets missed when moving-target pixels are close to the background; relative to using the optical flow method directly, it can eliminate false detections caused by spurious optical flow, greatly improving the accuracy of termite count and activity-state identification and providing effective clues and a basis for termite control.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of the optical flow estimation network of the present invention, which employs FlowNet2;
FIG. 3a is a first frame of monitoring image randomly extracted from the video sequence monitoring termite activity on the reservoir dam;
FIG. 3b shows the recognition result of FIG. 3a using the optical flow estimation network;
FIG. 3c shows the recognition result of FIG. 3a using the Gaussian mixture background model;
FIG. 3d shows the result of the method of the present invention applied to FIG. 3a;
FIG. 4a is a second frame of monitoring image randomly extracted from the video sequence monitoring termite activity on the reservoir dam;
FIG. 4b shows the recognition result of FIG. 4a using the optical flow estimation network;
FIG. 4c shows the recognition result of FIG. 4a using the Gaussian mixture background model;
FIG. 4d shows the result of the method of the present invention applied to FIG. 4a;
FIG. 5a is a third frame of monitoring image randomly extracted from the video sequence monitoring termite activity on the reservoir dam;
FIG. 5b shows the recognition result of FIG. 5a using the optical flow estimation network;
FIG. 5c shows the recognition result of FIG. 5a using the Gaussian mixture background model;
FIG. 5d shows the result of the method of the present invention applied to FIG. 5a.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
Example 1: referring to fig. 1-2, a dam termite video identification method based on optical flow network and gaussian background modeling comprises the following steps of;
(1) Acquiring a video sequence of termite movement monitored by a reservoir dam, reading images frame by frame and preprocessing the images to obtain N+1 frames of monitoring images, wherein N is a positive integer;
(2) Constructing an optical flow estimation network, and training to obtain an optical flow network model, wherein the optical flow estimation network adopts a FlowNet2 network structure, inputs two-frame image stacks containing targets, and outputs an optical flow estimation target probability value image corresponding to the two-frame images;
(3) With termite as a target, sequentially inputting N+1 frames of monitoring images into an optical flow network model to obtain N optical flow estimated termite probability value images, and sequentially marking the images as P 1 ~P N
(4) Constructing a mixed Gaussian background model, inputting all monitoring images into the mixed Gaussian background model for training to obtain background images, sequentially extracting moving targets of all images to obtain an N+1 frame moving target probability value image, and sequentially marking N frames as Q after selection 1 ~Q N
(5) Image post-processing: will P 1 ~P N 、Q 1 ~Q N Respectively performing morphological filtering and noise filtering on the images, and filling the missing part to obtain a P-and-P-type image 1 ~P N Corresponding image A 1 ~A N And Q 1 ~Q N Corresponding image B 1 ~B N
(6) Fusing;
(61) Image A 1 ~A N As group A, B 1 ~B N As group B, weights of group a and group B are set;
(62) Will A i And B i Performing weighted data fusion to obtain a fused image C i ,i=1~N;
(63) Setting a threshold T and filtering C i The target with the middle confidence coefficient smaller than T is subjected to binarization processing to obtain 1 binarized image;
(7) Carrying out communication area analysis on the binarized images, considering each communication area to correspond to one termite target, calculating the minimum circumscribed rectangle of each communication area, and marking the positions of the termite targets to obtain 1 identification image;
(8) Repeating the steps (62) - (7), processing all the images of the A group and the B group to obtain N identification images, and arranging the N identification images in sequence.
In this embodiment, the image preprocessing in step (1) specifically comprises preprocessing each image by filtering denoising or histogram enhancement.
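The two preprocessing options named above can be sketched in NumPy as follows. This is an illustrative sketch only; the function names and the choice of a 3x3 mean filter for denoising are assumptions, not part of the patented method.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram enhancement: map 8-bit grayscale intensities through the
    normalized cumulative histogram so the frame spans the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first occupied intensity bin
    if cdf[-1] == cdf_min:                 # constant image: nothing to stretch
        return img.copy()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]

def denoise_mean3(img):
    """Filtering denoising: a simple 3x3 mean filter built from shifted sums."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    acc = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (acc / 9.0).astype(np.uint8)
```

Either function (or both in sequence) would be applied to each of the N+1 monitoring frames before they enter the two branches.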
In step (2), the optical flow network model is obtained through training as follows:
the optical flow estimation network is a three-layer stacked network, the first layer being a FlowNetC network and the second and third layers FlowNetS networks; the network is pre-trained with the FlyingChairs dataset, and the network weights are then fine-tuned with the FlyingThings3D dataset until the model converges.
Regarding the optical flow estimation network:
the first step: in the working of the step (2) of the present invention, the architecture of the optical flow estimation network is determined first, and the present invention specifically adopts a FlowNet2 architecture, and the network includes three types of modules, namely FlowNetC, flowNetS and FlowSD, and is composed of four parts, see fig. 2.
(a) FlowNetS stacks two adjacent frames of images as input.
In the encoding phase, which may also be referred to as the puncturing phase, the input is formed by a plurality of successive convolution modules, each comprising a convolution layer, a standard batch layer and corresponding activation functions, resulting in progressively puncturing feature maps having different levels of abstraction and different resolutions.
In the decoding stage, which may also be called as an expansion stage, the feature maps with different abstract levels and different resolutions are then fused with the output of the decoding module of the previous layer by the definition module to obtain the optical flow prediction result with higher resolution, and the optical flow prediction result is decoded layer by layer until the final output. The integration of the definition module consists of three parts, namely an up-sampling value of the low-resolution optical flow prediction result, a characteristic diagram output by the coding module corresponding to the decoding module and the output of the decoding module of the upper layer, and is specifically shown in the lower right part of fig. 2.
(b) The FlowNet C and FlowNet S have similar results, the difference is that the two-channel input is provided, the front and rear two frames of images are respectively input into the first three independent convolution modules to obtain feature diagrams, the correlate is used for carrying out correlation calculation on the two groups of feature diagrams, the result is input into the subsequent continuous convolution modules as a new feature diagram, and the same coding result process as the FlowNet S network is repeated.
(c) The FlowNet-SD module has a structure consistent with FlowNet s, but the cores of the first three convolution modules are replaced by 7x7, 5x5 with 3x3 so that the module can be used to handle small deformations.
The second step: train the optical flow estimation network to obtain the optical flow network model, specifically divided into the following step11-step15;
step11: construct the FlowNetS and FlowNetC modules of the optical flow estimation network.
step12: pre-train the optical flow network using published optical flow data: pre-train with the FlyingChairs dataset, then fine-tune the network weights with the FlyingThings3D dataset until the module converges. If all optical flow modules have been stacked into the optical flow network, proceed to step13; otherwise return to step11.
step13: train the FlowNet2-CSS module using a mixed dataset of ChairsSDHom and FlyingThings3D.
step14: construct the FlowNet-SD module, add it to the optical flow estimation network, and train to obtain the trained model.
step15: sequentially input the N+1 frames obtained in step (1) from the video sequence monitoring termite activity on the reservoir dam into the optical flow network model in consecutive pairs: the 1st and 2nd frames yield the 1st optical-flow-estimated termite probability value image, the 2nd and 3rd frames yield the 2nd, and so on, so that the N+1 frames finally yield N optical-flow-estimated termite probability value images.
Regarding the Gaussian mixture background model: video image data are modeled with a Gaussian mixture model, and each pixel of the input image is classified as a background pixel or a moving-target pixel according to its pixel-value characteristics, thereby separating the background from the moving targets and extracting the moving targets. The algorithm establishes a mixture of K Gaussian distributions for each pixel in the video, where in practice K takes a value between 3 and 5, expressed by formula (1):
P(x_t) = Σ_{k=1..K} ω_{k,t} · η(x_t; μ_{k,t}, Σ_{k,t})    (1)
where x_t is the value of the pixel at time t, ω_{k,t} is the weight of the k-th Gaussian component at time t, and η(x_t; μ_{k,t}, Σ_{k,t}) is a Gaussian density with mean μ_{k,t} and covariance Σ_{k,t}; P(x_t) models the probability of each pixel taking its value at the corresponding moment, simulating the probability that the pixel belongs to the background. The parameters of the Gaussian mixture background model are re-estimated after each frame of video is input. The algorithm considers that Gaussian distributions with more supporting data and smaller variance are more likely to represent the background model. In general, when a background object remains stationary for a long time, the Gaussian distribution generated on its surface represents the background distribution: the data of that distribution keep accumulating over time and its variance gradually decreases. When a new object occludes the original background object, a new distribution is generated or the variance of the original distribution increases. That is, the accumulated weight of a distribution, together with the size of its variance, determines whether it is a background distribution.
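The per-pixel update described above can be sketched for a single grayscale pixel as follows. This is a simplified single-channel sketch under stated assumptions: the learning rate, the 2.5-standard-deviation match threshold, and the initialization of a replaced component are illustrative values, not parameters fixed by the method.

```python
import numpy as np

def update_pixel_mixture(x, means, variances, weights,
                         alpha=0.01, match_thresh=2.5):
    """One update step of a K-component Gaussian mixture for a single pixel.

    A new sample matches a component if it lies within match_thresh standard
    deviations of its mean; matched components gain weight and are pulled
    toward the sample, unmatched ones decay, and when nothing matches the
    weakest component is replaced by a new one centred on the sample."""
    matched = np.abs(x - means) < match_thresh * np.sqrt(variances)
    weights = (1 - alpha) * weights + alpha * matched   # support accumulates
    if matched.any():
        k = int(np.argmax(matched / (np.sqrt(variances) + 1e-9)))
        means[k] = (1 - alpha) * means[k] + alpha * x
        variances[k] = (1 - alpha) * variances[k] + alpha * (x - means[k]) ** 2
    else:
        k = int(np.argmin(weights))                     # replace weakest
        means[k], variances[k], weights[k] = x, 100.0, 0.05
    weights /= weights.sum()
    # High weight and small variance indicate a background distribution.
    background_score = weights / np.sqrt(variances)
    return means, variances, weights, background_score
```

Feeding a stable pixel value repeatedly drives one component's weight toward 1 and its variance down, which is exactly the "more support, smaller variance" background criterion described above.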
Example 2: referring to fig. 1-2, the present embodiment includes the steps of:
(1) The method comprises the steps of (1) reading images frame by frame and preprocessing the images according to a video sequence of termite activity monitored by a reservoir dam to obtain 121-frame monitoring images;
(2) Step (2) is the same as in example 1;
(3) With termite as a target, 121 frames of monitoring images are sequentially input into an optical flow network model to obtain 120 optical flow estimated termite probability value images, and the images are sequentially marked as P 1 ~P 120
(4) Step (4) of example 1 is repeated to obtain 121 frames of motion target probability value images, and 120 frames after selection are marked as Q in sequence 1 ~Q 120
(5) Image post-processing: will P 1 ~P 120 、Q 1 ~Q 120 Respectively performing morphological filtering and noise filtering on the images, and filling the missing part to obtain a P-and-P-type image 1 ~P 120 Corresponding image A 1 ~A 120 And Q 1 ~Q 120 Corresponding image B 1 ~B 120
(6) Fusing;
(61) Image A 1 -A 120 As group A, B 1 -B 120 As a group B, setting weights a and B of the group A and the group B, wherein the weights are set according to empirical values, and the values are between 0 and 1;
(62) With A 1 And B 1 And (3) carrying out weighted data fusion to obtain C i I.e. C 1 =A 1 ×a+B 1 ×b;
(63) Setting a threshold t=0.75, filtering out C i The target with the middle confidence coefficient smaller than T is subjected to binarization processing to obtain 1 binarized image, and of course, T is not limited to 0.75;
(64) Carrying out connected region analysis on the binarized image, considering each connected region to correspond to a termite target, calculating the minimum circumscribed rectangle of each connected region, and identifying the position of the termite target to obtain 1 identification image, wherein the identification image and C 1 Corresponding to the above;
(64) Repeating steps (62) - (64) to obtain the product C 1 ~C 120 The corresponding 120 identification images are arranged in sequence.
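The fusion, thresholding and connected-region steps above can be sketched end to end as follows. The weights a = 0.6, b = 0.4 are placeholders for the empirical values, 4-connectivity is an illustrative choice, and the axis-aligned bounding box stands in for the minimum circumscribed rectangle of each region.

```python
import numpy as np

def fuse_and_label(A, B, a=0.6, b=0.4, T=0.75):
    """Steps (62)-(64) as a NumPy sketch.

    A, B: probability maps in [0, 1] from the optical-flow branch and the
    Gaussian-background branch. Returns the binary mask and one bounding
    box (x0, y0, x1, y1) per connected region (one per termite target)."""
    C = a * A + b * B                      # step (62): weighted fusion
    mask = C >= T                          # step (63): threshold and binarize
    labels = np.zeros(mask.shape, dtype=int)
    boxes, current = [], 0
    for sy, sx in zip(*np.nonzero(mask)):  # step (64): connected regions
        if labels[sy, sx]:
            continue                       # pixel already belongs to a region
        current += 1
        stack, ys, xs = [(sy, sx)], [], []
        while stack:                       # 4-connected flood fill
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = current
                ys.append(y); xs.append(x)
                stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return mask, boxes
```

Applying this to each fused pair (Ai, Bi) and drawing `boxes` on the corresponding monitoring frame yields the sequence of identification images.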
Example 3: referring to fig. 1-5 d, to illustrate the effects of the present invention, we use optical flow network estimation, gaussian background modeling, and the method of the present invention to process video sequences of termite activity monitoring in a reservoir dam, respectively, the three methods being labeled method 1, method 2, and method 3 in that order. Then three frames of monitoring images are extracted from the video sequence, and the three methods are used for analysis as shown in fig. 3a, 4a and 5a respectively.
Figs. 3b-3d show the results of processing fig. 3a with methods 1, 2 and 3 respectively; analysing figs. 3b-3d gives table 1 below:
Table 1. Comparison of the recognition results for fig. 3a using the three methods
Method | Identified | Correct | Errors | Visual count | Accuracy | Recall
Method 1 | 9 | 5 | 13 | 20 | 55.56% | 25%
Method 2 | 22 | 15 | 7 | 20 | 68.18% | 75%
Method 3 | 24 | 19 | 5 | 20 | 79.17% | 95%
In table 1, accuracy = correct count / identified count × 100%; recall = correct count / visual count × 100%.
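These definitions can be checked directly; for example, method 1 in Table 1 has 5 correct detections out of 9 identified (55.56% accuracy) and 5 out of 20 visually counted termites (25% recall). The function name below is an illustrative choice.

```python
def accuracy_recall(identified, correct, visual_count):
    """Accuracy = correct/identified × 100%; recall = correct/visual count
    × 100%, as defined for Tables 1-3 (rounded to two decimals)."""
    return (round(correct / identified * 100.0, 2),
            round(correct / visual_count * 100.0, 2))
```

`accuracy_recall(9, 5, 20)` reproduces the (55.56%, 25%) row for method 1, and `accuracy_recall(24, 19, 20)` the (79.17%, 95%) row for method 3.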
Figs. 4b-4d show the results of processing fig. 4a with methods 1, 2 and 3 respectively; analysing figs. 4b-4d gives table 2 below:
Table 2. Comparison of the recognition results for fig. 4a using the three methods
Method | Identified | Correct | Errors | Visual count | Accuracy | Recall
Method 1 | 18 | 5 | 13 | 24 | 27.78% | 20.83%
Method 2 | 27 | 20 | 7 | 24 | 74.07% | 83.33%
Method 3 | 29 | 24 | 5 | 24 | 82.76% | 100%
Figs. 5b-5d show the results of processing fig. 5a with methods 1, 2 and 3 respectively; analysing figs. 5b-5d gives table 3 below:
Table 3. Comparison of the recognition results for fig. 5a using the three methods
Method | Identified | Correct | Errors | Visual count | Accuracy | Recall
Method 1 | 9 | 4 | 5 | 19 | 44.44% | 21.05%
Method 2 | 25 | 19 | 6 | 19 | 76% | 100%
Method 3 | 23 | 19 | 4 | 19 | 82.61% | 100%
As can be seen from tables 1-3, the method of the present patent is superior in both accuracy and recall to using the optical flow network or Gaussian background modeling alone.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (3)

1. A dam termite video identification method based on an optical flow network and Gaussian background modeling, characterized by comprising the following steps:
(1) Acquiring a video sequence monitoring termite activity on a reservoir dam, reading the images frame by frame and preprocessing them to obtain N+1 frames of monitoring images, wherein N is a positive integer;
(2) Constructing an optical flow estimation network and training it to obtain an optical flow network model, wherein the optical flow estimation network adopts the FlowNet2 network structure, takes a stack of two frames containing targets as input, and outputs the optical-flow-estimated target probability value image corresponding to the two frames;
(3) With termites as the target, sequentially inputting the N+1 frames of monitoring images into the optical flow network model to obtain N optical-flow-estimated termite probability value images, sequentially marked P1~PN;
(4) Constructing a Gaussian mixture background model, inputting all monitoring images into it for training to obtain a background image, and sequentially extracting the moving targets of each image to obtain N+1 frames of moving-target probability value images, of which N frames are selected and sequentially marked Q1~QN;
(5) Image post-processing: performing morphological filtering and noise filtering on P1~PN and Q1~QN respectively and filling missing parts, to obtain images A1~AN corresponding to P1~PN and images B1~BN corresponding to Q1~QN;
(6) Fusion:
(61) taking images A1~AN as group A and B1~BN as group B, and setting the weights of group A and group B;
(62) performing weighted data fusion of Ai and Bi to obtain a fused image Ci, i = 1~N;
(63) setting a threshold T, filtering out targets in Ci whose confidence is less than T, and binarizing to obtain 1 binarized image;
(7) Performing connected-region analysis on the binarized image, treating each connected region as one termite target, calculating the minimum circumscribed rectangle of each connected region, and marking the termite target positions to obtain 1 identification image;
(8) Repeating steps (62)-(7) to process all images of group A and group B, obtaining N identification images arranged in sequence.
2. The dam termite video identification method based on an optical flow network and Gaussian background modeling according to claim 1, characterized in that: in step (1), the image preprocessing specifically comprises preprocessing each image by filtering denoising or histogram enhancement.
3. The dam termite video identification method based on optical flow network and Gaussian background modeling according to claim 1, characterized in that: in step (2), the optical flow network model is obtained by training as follows:
the optical flow estimation network is a three-layer stacked network, in which the first layer is a FlowNetC network and the second and third layers are FlowNetS networks; the network is pre-trained on the FlyingChairs dataset, and the resulting network weights are then fine-tuned on the FlyingThings3D dataset until the model converges.
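The stacking in claim 3 follows the FlowNet 2.0 pattern: the first network predicts an initial flow, and each subsequent network sees the first image, the second image warped by the current estimate, and the estimate itself, then outputs a refined flow. The sketch below shows only this dataflow with placeholder stage functions; the actual FlowNetC/FlowNetS stages are learned CNNs, and the nearest-neighbor warping here is a simplification of the usual bilinear warp.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp img by flow (H, W, 2) with nearest-neighbor sampling."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def stacked_flow(img1, img2, stages):
    """Claim-3 dataflow: stage 1 (FlowNetC role) predicts an initial flow;
    stages 2-3 (FlowNetS role) each receive img1, img2 warped by the current
    estimate, and the estimate itself, and output a refined flow."""
    flow = stages[0](img1, img2)            # FlowNetC stand-in
    for refine in stages[1:]:               # FlowNetS stand-ins
        flow = refine(img1, warp(img2, flow), flow)
    return flow
```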
CN202210187820.0A 2022-02-28 2022-02-28 Dam termite video identification method based on optical flow network and Gaussian background modeling Active CN114626445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210187820.0A CN114626445B (en) 2022-02-28 2022-02-28 Dam termite video identification method based on optical flow network and Gaussian background modeling

Publications (2)

Publication Number Publication Date
CN114626445A CN114626445A (en) 2022-06-14
CN114626445B true CN114626445B (en) 2024-04-09

Family

ID=81899580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210187820.0A Active CN114626445B (en) 2022-02-28 2022-02-28 Dam termite video identification method based on optical flow network and Gaussian background modeling

Country Status (1)

Country Link
CN (1) CN114626445B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797875B (en) * 2023-02-07 2023-05-09 四川省水利科学研究院 Termite monitoring system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065931A (en) * 2014-06-30 2014-09-24 谢声 Termite video remote monitoring and trapping-killing system
CN110853074A (en) * 2019-10-09 2020-02-28 天津大学 Video target detection network system for enhancing target by utilizing optical flow
CN112381043A (en) * 2020-11-27 2021-02-19 华南理工大学 Flag detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7271706B2 (en) * 2002-10-09 2007-09-18 The University Of Mississippi Termite acoustic detection
CN111539879B (en) * 2020-04-15 2023-04-14 清华大学深圳国际研究生院 Video blind denoising method and device based on deep learning

Also Published As

Publication number Publication date
CN114626445A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN110348376B (en) Pedestrian real-time detection method based on neural network
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN109934200B (en) RGB color remote sensing image cloud detection method and system based on improved M-Net
CN112232349A (en) Model training method, image segmentation method and device
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN110866455B (en) Pavement water body detection method
CN110503610B (en) GAN network-based image rain and snow trace removing method
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN112488046B (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN114219984B (en) Tiny plant diseases and insect pests detection system and method based on improved YOLOv3
CN110991444A (en) Complex scene-oriented license plate recognition method and device
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
CN114626445B (en) Dam termite video identification method based on optical flow network and Gaussian background modeling
CN112949493A (en) Lane line detection method and system combining semantic segmentation and attention mechanism
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN113313031A (en) Deep learning-based lane line detection and vehicle transverse positioning method
CN115239672A (en) Defect detection method and device, equipment and storage medium
CN110458019B (en) Water surface target detection method for eliminating reflection interference under scarce cognitive sample condition
CN111199255A (en) Small target detection network model and detection method based on dark net53 network
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
CN114299383A (en) Remote sensing image target detection method based on integration of density map and attention mechanism
CN111126185B (en) Deep learning vehicle target recognition method for road gate scene
CN112395990B (en) Method, device, equipment and storage medium for detecting weak and small targets of multi-frame infrared images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant