AU2021277762B2 - Water level measurement method based on deep convolutional network and random field - Google Patents
Water level measurement method based on deep convolutional network and random field Download PDFInfo
- Publication number
- AU2021277762B2 AU2021277762A
- Authority
- AU
- Australia
- Prior art keywords
- pixel
- water surface
- image
- upsampling
- downsampling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- XLYOFNOQVPJJNP-UHFFFAOYSA-N water Substances O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 title claims abstract description 191
- 238000000691 measurement method Methods 0.000 title claims abstract description 12
- 230000011218 segmentation Effects 0.000 claims abstract description 36
- 238000009826 distribution Methods 0.000 claims abstract description 34
- 238000000034 method Methods 0.000 claims abstract description 20
- 238000012549 training Methods 0.000 claims abstract description 10
- 239000003550 marker Substances 0.000 claims abstract description 5
- 230000000087 stabilizing effect Effects 0.000 claims abstract description 5
- 238000013527 convolutional neural network Methods 0.000 claims description 29
- 238000012544 monitoring process Methods 0.000 claims description 29
- 238000005457 optimization Methods 0.000 claims description 28
- 238000011176 pooling Methods 0.000 claims description 9
- 238000012545 processing Methods 0.000 claims description 7
- 238000005192 partition Methods 0.000 claims description 4
- 238000012937 correction Methods 0.000 abstract description 3
- 238000005259 measurement Methods 0.000 abstract description 3
- 230000008901 benefit Effects 0.000 abstract description 2
- 238000013528 artificial neural network Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 230000000873 masking effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 239000013589 supplement Substances 0.000 description 1
Classifications
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/30—Assessment of water resources
Abstract
The present invention discloses a water level measurement method based on a deep
convolutional network and a random field. The method includes: first, training a provided
deep network structure by using a constructed water surface data set; selecting an appropriate
observation position, recording coordinates of a feature pixel and a corresponding elevation
for a stable marker within an observation range, and constructing a pixel-elevation
interpolation function by using an interpolation method; and denoising and stabilizing an
observed image and then inputting the denoised and stabilized image into the trained deep
convolutional network for prediction, and performing initial segmentation according to a
prediction result and then constructing a conditional probability field model; and minimizing
a KL divergence between an approximate distribution and a target distribution by using mean
field approximation, to obtain optimal water surface segmentation, and obtaining an elevation,
i.e., a water level of pixels of a water surface region in the optimal segmentation according to
the pixels of the water surface region and the elevation interpolation function. The present invention has the advantage that a non-contact method is used to automatically monitor the water level in real time, with low device deployment costs and an anomaly correction capability compared with other contact-based measurement methods.
FIG. 1
FIG. 2 (convolution, upsample, concatenate)
FIG. 3 (category flag X corresponding to pixel)
Description
The present invention relates to the field of water level monitoring technologies, and in particular, to a water level measurement method based on a deep convolutional network and a random field.
Hydrological survey is important foundational work for any country. Water level measurement, as an important part of hydrological survey, plays an important role in aspects such as the planning and management of water resources, flood control, and drought relief. With
the improvement of informatization, water level measurement is developing toward automation and intelligence, and there is a need for a method for acquiring water level data during long periods of unattended operation. For water level monitoring, commonly used means include arranging a vertical water gauge for human observation, arranging a pressure sensor, and the like. Currently, the pressure sensor is widely used for automatic monitoring, but it entails high deployment costs and does not support the correction of abnormal values. In view of the wide deployment of monitoring cameras, it is of great significance to use acquired image information as an early warning method for refined water level monitoring.
Embodiments as described herein may provide a water level measurement method based on a deep convolutional network and a random field, to estimate the water level from image data. According to a first aspect, there is provided a water level measurement method based on a deep convolutional network and a random field, and the method specifically includes the
following steps:
step 1: constructing a water surface data set;
step 2: constructing a deep convolutional neural network, using the water surface data set as an input training data set, and performing optimization on a loss function with reference to the deep convolutional neural network, to obtain an optimized deep convolutional neural network;
step 3: setting up a camera at an appropriate observation position selected in a water level monitoring site and fixing the camera, selecting a stable marker within an observation field of view, acquiring an image of the water level monitoring site by using the camera, recording coordinates of a feature pixel in the image of the water level monitoring site and recording an elevation corresponding to the feature pixel, constructing a pixel-elevation data set, and further constructing a pixel-elevation interpolation function by using an interpolation method;
step 4: denoising and stabilizing the image of the water level monitoring site, to obtain a denoised and preprocessed image;
step 5: inputting the denoised and preprocessed image into the optimized deep convolutional neural network for prediction, to obtain a pixel-level probability distribution with the same size as the denoised and preprocessed image, and determining a classification threshold according to quality of the image for initial segmentation, to obtain an initial segmentation image;
step 6: constructing a probability field model according to the initial segmentation image, representing the probability field model as a conditional probability model by using a Gibbs distribution, approximating the conditional probability model as a mean field model, and performing further optimization by using a divergence between the conditional probability model and the mean field model as an optimization target, to obtain a finally distributed segmented image; and
step 7: further processing pixels belonging to a water surface region in the finally distributed segmented image according to the elevation interpolation function in step 3, to obtain an elevation, i.e., a water level, of the pixels belonging to the water surface region in the finally distributed segmented image.
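The loss optimized in step 2 is, as given in the description, a per-pixel binary cross-entropy. A minimal NumPy sketch is shown below; the function name and averaging over pixels are illustrative choices, not the patented training code:

```python
import numpy as np

def bce_loss(flag, flag_pred, eps=1e-12):
    """Per-pixel binary cross-entropy
    L = -(flag * log(flag*) + (1 - flag) * log(1 - flag*)),
    averaged over all pixels of one sample. `eps` guards against log(0)."""
    flag_pred = np.clip(flag_pred, eps, 1.0 - eps)
    return float(np.mean(-(flag * np.log(flag_pred)
                           + (1.0 - flag) * np.log(1.0 - flag_pred))))

# Example: a 2x2 ground-truth mask and a fairly confident prediction.
flag = np.array([[1.0, 0.0], [1.0, 0.0]])
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
loss = bce_loss(flag, pred)
```

A perfect prediction drives the loss toward zero, which is what the gradient-descent optimization of step 2 exploits.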
In a class of this embodiment, step 1 comprises: acquiring a plurality of images containing a water surface in various scenes as a plurality of original water surface images; and sequentially marking each original water surface image, to obtain a marked water surface image, wherein the marking means that a pixel corresponding to a water surface region in each original
water surface image is marked as 1, and a pixel corresponding to a non-water surface region is marked as 0; and the water surface data set in step 1 is:
(data_k(x, y), flag_k(x, y)),
x ∈ [1, X], y ∈ [1, Y], k ∈ [1, N], wherein
X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set; data_k(x, y) represents a pixel in an xth row and a yth column of a kth marked water surface image in the water surface data set; flag_k(x, y) represents a pixel flag in the xth row and the yth column of the kth marked water surface image in the water surface data set; flag_k(x, y) = 1 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set belongs to a water surface; and flag_k(x, y) = 0 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set does not belong to the water surface.
In a class of this embodiment, the deep convolutional network in step 2 is formed by connecting a downsampling module and an upsampling module;
the downsampling module is formed by cascading a first downsampling convolution layer,
a first downsampling pooling layer, a second downsampling convolution layer, a second downsampling pooling layer, ..., a Kth downsampling convolution layer, a Kth downsampling pooling layer, a (K+1)th downsampling convolution layer, and a fully connected layer;
the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer respectively have convolution kernels with different scales;
parameters and biases of the convolution kernels of the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer are to-be-optimized parameters;
a convolution operation is performed on a feature map of each of the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer;
the upsampling module is formed by cascading a first upsampling convolution layer, a first upsampling deconvolutional layer, a second upsampling convolution layer, a second upsampling deconvolutional layer, ..., a Kth upsampling convolution layer, a Kth upsampling deconvolutional layer, and a (K+1)th upsampling convolution layer;
the first upsampling convolution layer, the second upsampling convolution layer, ..., and the (K+1)th upsampling convolution layer respectively have convolution kernels with different scales;
parameters and biases of the convolution kernels of the first upsampling convolution layer, the second upsampling convolution layer, ..., and the (K+1)th upsampling convolution layer are to-be-optimized parameters;
the fully connected layer in the downsampling module is connected to the first upsampling convolution layer in the upsampling module;
upsampling is performed on a feature map of each of the first upsampling deconvolutional layer, the second upsampling deconvolutional layer, ..., and the (K+1)th upsampling convolution layer;
the feature map of the Kth downsampling convolution layer obtained after convolution processing and the feature map of the Kth upsampling deconvolutional layer obtained after upsampling are fused;
step 2 of using the water surface data set as an input training data set comprises:
using each sample, i.e., (data_k(x, y), flag_k(x, y)), in the water surface data set in step 1 as input data of the deep convolutional neural network, wherein
flag_k(x, y) is used as a real flag of a pixel in an xth row and a yth column in a kth marked water surface image in the water surface data set; and
flag*_k(x, y) is used as a predicted flag, i.e., a flag outputted by the deep convolutional neural network through prediction, of the pixel in the xth row and the yth column in the kth marked water surface image in the water surface data set, wherein
x ∈ [1, X], y ∈ [1, Y], and k ∈ [1, N], X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, and N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set;
step 2 of performing optimization on a loss function defined on a kth training sample and having coordinates of [x, y] with reference to the deep
convolutional neural network is:
L = −(flag_k log(flag*_k) + (1 − flag_k) log(1 − flag*_k)), wherein
a water surface segmentation problem is considered as a binary classification problem, different image features are extracted by using convolution kernels with different sizes, and information with different scales is transmitted in a downsampling/upsampling manner; and
optimization is performed on the parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and the parameters and biases of the convolution kernels in the plurality of downsampling convolution layers through gradient descent and based on the loss function L, and the optimized parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and in the plurality of downsampling convolution layers are used to construct the optimized deep convolutional neural network in step 2.
Step 3 of recording coordinates of a feature pixel in the image of the water level monitoring site acquired by using the camera and recording an elevation corresponding to the feature pixel comprises: acquiring n feature pixels defined as:
data_i = (x_i, y_i), i ∈ {1, 2, ..., n}, wherein
(x_i, y_i) represents a pixel in a y_i-th row and an x_i-th column in the image of the water level monitoring site;
the pixel-elevation data set in step 3 is:
{(x_i, y_i), h_i}, i ∈ {1, 2, ..., n}, wherein
(x_i, y_i) represents a pixel of ith pixel-elevation data in the pixel-elevation data set, h_i represents an elevation corresponding to the ith pixel (x_i, y_i) in the pixel-elevation data set, and n is a quantity of pixels; and
constructing a pixel-elevation interpolation function by using an interpolation method comprises:
recording d as a Euclidean distance function between coordinates of pixels:
d(x_i, y_i, x_j, y_j) = √((x_i − x_j)² + (y_i − y_j)²);
coordinates of a to-be-measured pixel are recorded as (x_r, y_r), and pixels (x_u, y_u), (x_d, y_d) adjacent to the to-be-measured pixel are found in the pixel-elevation data set and meet:
min d(x_u, y_u, x_r, y_r) + d(x_d, y_d, x_r, y_r)
s.t. x_u < x_r < x_d, y_u < y_r < y_d, wherein
an elevation h_r corresponding to the coordinates (x_r, y_r) of the to-be-measured pixel is:
h_r = (h(x_u, y_u) − h(x_d, y_d)) · d(x_r, y_r, x_d, y_d) / d(x_u, y_u, x_d, y_d)
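As an illustrative sketch (not the patented implementation), the pixel-elevation interpolation can be realized in NumPy roughly as follows. The helper names are hypothetical; the neighbour search simplifies the bracketing constraint to nearest points on either side, and the code adds a base elevation term h(x_d, y_d), which we assume is intended (the printed formula is garbled) so that the result reduces to standard linear interpolation between the two neighbouring elevations:

```python
import numpy as np

def dist(p, q):
    """Euclidean distance d between two pixel coordinates (x, y)."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def interpolate_elevation(pixels, elevations, target):
    """Estimate the elevation h_r at pixel `target` from the pixel-elevation
    data set {(x_i, y_i), h_i}. We pick the recorded pixel nearest to the
    target on each side (approximating the bracketing constraint
    x_u < x_r < x_d, y_u < y_r < y_d) and interpolate linearly between the
    two elevations."""
    upper = [i for i, p in enumerate(pixels) if p[0] < target[0] and p[1] < target[1]]
    lower = [i for i, p in enumerate(pixels) if p[0] > target[0] and p[1] > target[1]]
    u = min(upper, key=lambda i: dist(pixels[i], target))
    d_ = min(lower, key=lambda i: dist(pixels[i], target))
    h_u, h_d = elevations[u], elevations[d_]
    ratio = dist(target, pixels[d_]) / dist(pixels[u], pixels[d_])
    return h_d + (h_u - h_d) * ratio  # base term h_d is an assumption

# Hypothetical gauge markings: pixel coordinates along a staff, elevations in metres.
pixels = [(10, 10), (20, 20), (30, 30)]
elevations = [5.0, 4.0, 3.0]
h = interpolate_elevation(pixels, elevations, (15, 15))
```

A target midway between two recorded pixels yields the midpoint of their elevations, matching the intent of the interpolation step.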
The pixel-level probability distribution with the same size as the denoised and preprocessed image in step 5 is:
p(u, v), wherein
p(u, v) is a pixel probability in a uth row and a vth column predicted by the optimized deep convolutional neural network, u ∈ [1, X], v ∈ [1, Y], X is a quantity of rows of the denoised and preprocessed image, and Y is a quantity of columns of the denoised and preprocessed image; and
step 5 of determining a classification threshold according to quality of the image for initial segmentation is:
class(u, v) = 1 if p(u, v) > θ, and class(u, v) = 0 if p(u, v) ≤ θ, wherein
class(u, v) represents the initial segmentation image, class(u, v) = 1 represents that a pixel in the uth row and the vth column belongs to a water area, class(u, v) = 0 represents that the pixel in the uth row and the vth column does not belong to the water area, and θ is the classification threshold.
In a class of this embodiment, step 6 of constructing a probability field model according to the initial segmentation image is specifically that:
for the probability field model, I represents a pixel, and X represents a real category corresponding to the pixel, i.e., the initial segmentation result in step 5, wherein X = {0, 1}, X = 1 represents that the pixel belongs to a water area, and X = 0 represents that the pixel does not belong to the water area;
step 6 of representing the probability field model as a conditional probability model by
using a Gibbs distribution is:
P(X | I) = exp(−Σ_{c∈C_G} φ_c(X_c | I)) / Z(I), wherein
Z(I) is the partition function used for normalizing the probability distribution, C_G represents the set of all cliques in the graph, the function φ_c(·) represents a potential function defined on clique c, and P(X | I) represents a target random field;
step 6 of approximating the conditional probability model as a mean field model comprises:
performing approximation on P(X | I) by using the mean field model, i.e., Q(X), wherein it is assumed that Q(X) is represented by a product of several independent distributions:
Q(X) = Π_i Q(X_i), wherein
Q(X_i) represents an ith independent distribution of Q(X) for approximating P(X | I); and
step 6 of using a divergence between the conditional probability model and the mean field model as an optimization target comprises:
using minimization of the KL divergence between Q(X) and P(X | I) as the optimization target, which is specifically as follows:
min_Q KL(Q ‖ P) = ∫ Q(X) log (Q(X) / P(X | I)) dX, wherein
KL(Q ‖ P) represents the KL divergence between Q(X) and P(X | I); and
gradient descent is performed on a likelihood function of Q, to obtain a target optimal approximate distribution, so as to obtain the finally distributed segmented image.
The present invention has the advantage of achieving real-time automatic water level monitoring by a non-contact method. Compared with other contact measuring equipment, the present invention is lower in deployment costs and has an anomaly correction capability.
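As a toy illustration of the mean-field step (hypothetical code, not the patented optimizer, and using standard coordinate-ascent updates rather than the gradient descent named in the text): for a small discrete distribution P over two binary pixels, a factorized Q(X) = Q(X_1)Q(X_2) is fitted so that KL(Q ‖ P) decreases from its initial value.

```python
import numpy as np

def kl(q1, q2, P):
    """KL(Q || P) for Q(x1, x2) = q1[x1] * q2[x2] against a 2x2 joint table P."""
    Q = np.outer(q1, q2)
    return float(np.sum(Q * np.log(Q / P)))

def mean_field(P, iters=50):
    """Coordinate-ascent mean field: log q_i(x_i) is set proportional to
    E_{q_-i}[log P(x)]; each update cannot increase KL(Q || P)."""
    q1 = np.array([0.5, 0.5])
    q2 = np.array([0.5, 0.5])
    logP = np.log(P)
    for _ in range(iters):
        q1 = np.exp(logP @ q2); q1 /= q1.sum()   # expectation over x2
        q2 = np.exp(q1 @ logP); q2 /= q2.sum()   # expectation over x1
    return q1, q2

# A joint distribution favouring the "both pixels are water" configuration (1, 1).
P = np.array([[0.05, 0.15],
              [0.15, 0.65]])
q1, q2 = mean_field(P)
```

The fitted marginals concentrate on the dominant configuration of P, mirroring how the refined segmentation concentrates on the most probable water/non-water labelling.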
FIG. 1 is a pixel-elevation feature function.
FIG. 2 is a deep convolutional network structure for water area recognition.
FIG. 3 is a random field model according to Embodiment 1.
FIG. 4 is refined segmentation of a random field according to Embodiment 1.
FIG. 5 is a flowchart of a method according to the present invention.
FIG. 6 is an original water surface image according to Embodiment 2.
FIG. 7 is an optimized segmentation diagram of a random field according to Embodiment 2.
FIG. 8 is a probability graph outputted through a neural network according to Embodiment 2.
FIG. 9 is a mask image according to Embodiment 2.
FIG. 10 is a diagram of a continuously observed water level change according to Embodiment 2.
To help a person skilled in the art better understand and implement the present invention, the present invention is further described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described herein are merely used to describe and explain the present invention but are not intended to limit the present invention.
Embodiment 1
A specific implementation of the present invention, described below with reference to FIG. 1 to FIG. 5, is a water level measurement method based on a deep convolutional network and a random field, and the method specifically includes the following steps:
step 1: constructing a water surface data set;
acquiring a plurality of images containing a water surface in various scenes as a plurality of original water surface images; and
sequentially marking each original water surface image, to obtain a marked water surface image, wherein the marking means that a pixel corresponding to a water surface region in each original water surface image is marked as 1, and a pixel corresponding to a non-water surface region is marked as 0; and
the water surface data set in step 1 is:
(data_k(x, y), flag_k(x, y)),
x ∈ [1, X], y ∈ [1, Y], and k ∈ [1, N],
wherein X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set, data_k(x, y) represents a pixel in an xth row and a yth column of a kth marked water surface image in the water surface data set, flag_k(x, y) represents a pixel flag in the xth row and the yth column of the kth marked water surface image in the water surface data set, flag_k(x, y) = 1 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set belongs to a water surface, and flag_k(x, y) = 0 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set does not belong to the water surface;
step 2: constructing a deep convolutional neural network, using the water surface data set as an input training data set, and performing optimization on a loss function with reference to the deep convolutional neural network, to obtain an optimized deep convolutional neural network;
the deep convolutional network in step 2 is formed by connecting a downsampling module and an upsampling module, as shown in FIG. 2;
the downsampling module is formed by cascading a first downsampling convolution layer, a first downsampling pooling layer, a second downsampling convolution layer, a second downsampling pooling layer, ..., a Kth downsampling convolution layer, a Kth downsampling pooling layer, a (K+1)th downsampling convolution layer, and a fully connected layer;
the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer respectively have convolution kernels with different scales;
parameters and biases of the convolution kernels of the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer are to-be-optimized parameters;
a convolution operation is performed on a feature map of each of the first downsampling convolution layer, the second downsampling convolution layer, ..., and the (K+1)th downsampling convolution layer;
the upsampling module is formed by cascading a first upsampling convolution layer, a first upsampling deconvolutional layer, a second upsampling convolution layer, a second upsampling deconvolutional layer, ..., a Kth upsampling convolution layer, a Kth upsampling deconvolutional layer, and a (K+1)th upsampling convolution layer;
the first upsampling convolution layer, the second upsampling convolution layer, ..., and the (K+1)th upsampling convolution layer respectively have convolution kernels with different scales;
parameters and biases of the convolution kernels of the first upsampling convolution layer, the second upsampling convolution layer, ..., and the (K+1)th upsampling convolution layer are to-be-optimized parameters;
the fully connected layer in the downsampling module is connected to the first upsampling convolution layer in the upsampling module;
upsampling is performed on a feature map of each of the first upsampling deconvolutional layer, the second upsampling deconvolutional layer, ..., and the (K+1)th upsampling convolution layer;
the feature map of the Kth downsampling convolution layer obtained after convolution processing and the feature map of the Kth upsampling deconvolutional layer obtained after upsampling are fused;
step 2 of using the water surface data set as an input training data set comprises:
using each sample, i.e., (data_k(x, y), flag_k(x, y)), in the water surface data set in step 1 as input data of the deep convolutional neural network, wherein
flag_k(x, y) is used as a real flag of a pixel in an xth row and a yth column in a kth marked water surface image in the water surface data set; and
flag*_k(x, y) is used as a predicted flag, i.e., a flag outputted by the deep convolutional neural network through prediction, of the pixel in the xth row and the yth column in the kth marked water surface image in the water surface data set, wherein
x ∈ [1, X], y ∈ [1, Y], and k ∈ [1, N], X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, and N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set;
step 2 of performing optimization on a loss function defined on a kth training sample and having coordinates of [x, y] with reference to the deep convolutional neural network is:
L = −(flag_k log(flag*_k) + (1 − flag_k) log(1 − flag*_k)), wherein
a water surface segmentation problem is considered as a binary classification problem, different image features are extracted by using convolution kernels with different sizes, and information with different scales is transmitted in a downsampling/upsampling manner; and
optimization is performed on the parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and the parameters and biases of the convolution kernels in the plurality of downsampling convolution layers through gradient descent and based on the loss function L, and the optimized parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and in the plurality of downsampling convolution layers are used to construct the optimized deep convolutional neural network in step 2;
step 3: setting up a camera at an appropriate observation position selected in a water
level monitoring site and fixing the camera, selecting a stable marker within an observation field of view, acquiring an image of the water level monitoring site by using the camera, recording coordinates of a feature pixel in the image of the water level monitoring site and recording an elevation corresponding to the feature pixel, constructing a pixel-elevation data set, and further constructing a pixel-elevation interpolation function by using an interpolation method;
step 3 of recording coordinates of a feature pixel in the image of the water level monitoring site acquired by using the camera and recording an elevation corresponding to the feature pixel is that, as shown in FIG. 1:
acquired n feature pixels are specifically defined as:
data_i = (x_i, y_i), i ∈ {1, 2, ..., n}, wherein
(x_i, y_i) represents a pixel in a y_i-th row and an x_i-th column in the image of the water level monitoring site;
the pixel-elevation data set in step 3 is:
{(x_i, y_i), h_i}, i ∈ {1, 2, ..., n}, wherein
(x_i, y_i) represents a pixel of ith pixel-elevation data in the pixel-elevation data set, h_i represents an elevation corresponding to the ith pixel (x_i, y_i) in the pixel-elevation data set, and n is a quantity of pixels; and
step 3 of constructing a pixel-elevation interpolation function by using an interpolation method is that:
d is recorded as a Euclidean distance function between coordinates of pixels:
d(x_i, y_i, x_j, y_j) = √((x_i − x_j)² + (y_i − y_j)²);
coordinates of a to-be-measured pixel are recorded as (x_r, y_r), and pixels (x_u, y_u), (x_d, y_d) adjacent to the to-be-measured pixel are found in the pixel-elevation data set and meet:
min d(x_u, y_u, x_r, y_r) + d(x_d, y_d, x_r, y_r)
s.t. x_u < x_r < x_d, y_u < y_r < y_d, wherein
an elevation h_r corresponding to the coordinates (x_r, y_r) of the to-be-measured pixel is:
h_r = (h(x_u, y_u) − h(x_d, y_d)) · d(x_r, y_r, x_d, y_d) / d(x_u, y_u, x_d, y_d);
step 4: denoising and stabilizing the image of the water level monitoring site, to obtain a denoised and preprocessed image;
step 5: inputting the denoised and preprocessed image into the optimized deep convolutional neural network for prediction, to obtain a pixel-level probability distribution with the same size as the denoised and preprocessed image, and determining a classification threshold according to quality of the image for initial segmentation, to obtain an initial segmentation image;
the pixel-level probability distribution with the same size as the denoised and preprocessed image in step 5 is:
p(u, v), wherein
p(u, v) is a pixel probability in a uth row and a vth column predicted by the optimized deep convolutional neural network, u ∈ [1, X], v ∈ [1, Y], X is a quantity of rows of the denoised and preprocessed image, and Y is a quantity of columns of the denoised and preprocessed image; and
step 5 of determining a classification threshold according to quality of the image for
initial segmentation is:
class(u, v) = 1 if p(u, v) > θ, and class(u, v) = 0 if p(u, v) ≤ θ, wherein
class(u, v) represents the initial segmentation image, class(u, v) = 1 represents that a pixel in the uth row and the vth column belongs to a water area, class(u, v) = 0 represents that the pixel in the uth row and the vth column does not belong to the water area, and θ is the classification threshold;
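The thresholding of step 5 is straightforward; a minimal NumPy sketch (with illustrative variable names, not the patented code) is:

```python
import numpy as np

def initial_segmentation(p, theta):
    """class(u, v) = 1 where p(u, v) > theta, else 0."""
    return (p > theta).astype(np.uint8)

# A 2x2 probability map and a mid-range threshold theta = 0.5.
p = np.array([[0.9, 0.3],
              [0.7, 0.1]])
seg = initial_segmentation(p, theta=0.5)  # -> [[1, 0], [1, 0]]
```

In practice θ would be tuned to the quality of the observed image, as the description notes.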
step 6: constructing a probability field model according to the initial segmentation image, representing the probability field model as a conditional probability model by using a Gibbs distribution, approximating the conditional probability model as a mean field model, and performing further optimization by using a divergence between the conditional probability model and the mean field model as an optimization target, to obtain a finally distributed
segmented image; As shown in FIG. 3, step 6 of constructing a probability field model according to the initial segmentation image is specifically that: for the probability field model, I represents a pixel, and X represents a real category corresponding to the pixel, i.e., the initial segmentation result in step 4, wherein X={0,1},
X=1 represents that the pixel belongs to a water area, and X=0 represents that the pixel does not belong to the water area; step 6 of representing the probability field model as a conditional probability model by using a Gibbs distribution is: exp( #c0"(X" I1I))) P(X I) = ,ECG, - , wherein Z(J)
Z(I) is a regularization constant used for normalizing the probability distribution, CG represents all cliques in the graph, the φ_c(·) function represents a potential function defined on clique c, and P(X | I) represents the target random field;
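As a toy illustration of this Gibbs form, the snippet below builds P(X | I) for a three-pixel chain by exhaustive enumeration. The pairwise agreement potential and the chain structure are illustrative assumptions (the text does not specify the potentials), and enumerating Z(I) is only feasible for tiny fields:

```python
import itertools
import math

# Toy 3-pixel chain; cliques are adjacent pixel pairs.
CLIQUES = [(0, 1), (1, 2)]

def phi(c, x):
    """Potential of clique c under labeling x: 0 if labels agree, 1 otherwise
    (a Potts-style choice made for illustration)."""
    i, j = c
    return 0.0 if x[i] == x[j] else 1.0

def gibbs_weight(x):
    """Unnormalized Gibbs weight exp(-sum_c phi_c(x))."""
    return math.exp(-sum(phi(c, x) for c in CLIQUES))

# Z(I): sum of weights over all 2^3 labelings, normalizing the distribution.
Z = sum(gibbs_weight(x) for x in itertools.product((0, 1), repeat=3))
P = {x: gibbs_weight(x) / Z for x in itertools.product((0, 1), repeat=3)}

# Label agreement is favored: an all-water labeling is more probable
# than an alternating one.
print(P[(1, 1, 1)] > P[(1, 0, 1)])  # → True
```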
step 6 of approximating the conditional probability model as a mean field model comprises: performing approximation on P(X | I) by using the mean field model, i.e., Q(X), wherein it is assumed that Q(X) is represented by several independent distributions:

Q(X) = ∏_i Q(X_i), wherein

Q(X_i) represents an ith independent distribution of Q(X) for approximating P(X | I); and

step 6 of using a divergence between the conditional probability model and the mean
field model as an optimization target comprises:
using minimizing a KL divergence between Q(X) and P(X | I) as the optimization target, which is specifically as follows:

min_Q KL(Q ‖ P) = ∫ Q(X) log( Q(X) / P(X | I) ) dX, wherein
KL(Q ‖ P) represents the KL divergence between Q(X) and P(X | I); and gradient descent is performed on a likelihood function of Q, to obtain a target optimal approximate distribution, so as to obtain the finally distributed segmented image, as shown in FIG. 4.

Step 7: further processing pixels belonging to a water surface region in the finally distributed segmented image according to the elevation interpolation function in step 3, to obtain an elevation, i.e., a water depth, of the pixels belonging to the water surface region in
the finally distributed segmented image.

Embodiment 2

Step 1 and step 2 in Embodiment 2 are the same as step 1 and step 2 in Embodiment 1.

Step 3: setting up a camera at an appropriate observation position selected in a water level monitoring site and fixing the camera, selecting a stable marker within an observation field of view, and acquiring an image of the water level monitoring site by using the camera. A picture photographed by the camera is processed into a three-channel RGB image with a height h of 400 pixels and a width w of 360 pixels, as shown in FIG. 6. The processed image is inputted into a neural network, and a probability graph with the same size as the inputted image is outputted, where the probability graph has a height of 400 pixels and a width of 360 pixels. Coordinates of a feature pixel are recorded in the image of the water level monitoring site, an elevation corresponding to the feature pixel is recorded, a pixel-elevation data set is constructed, and a pixel-elevation interpolation function is further constructed by using an interpolation method.
Step 3 of recording coordinates of a feature pixel in the image of the water level monitoring site acquired by using the camera and recording an elevation corresponding to the feature pixel is that: acquired n feature pixels are specifically defined as:

data_i = (x_i, y_i), i ∈ {1, 2, …, n}, wherein
(x_i, y_i) represents a pixel in a y_i th row and an x_i th column in the image of the water level monitoring site; the pixel-elevation data set in step 3 is:

{(x_i, y_i), h_i}, i ∈ {1, 2, …, n}, wherein
(x_i, y_i) represents a pixel of ith pixel-elevation data in the pixel-elevation data set, h_i represents an elevation corresponding to the ith pixel (x_i, y_i) in the pixel-elevation data set, and n is a quantity of pixels; and

step 3 of constructing a pixel-elevation interpolation function by using an interpolation
method is that: d is recorded as a Euclidean distance function between coordinates of pixels:

d(x_i, y_i, x_j, y_j) = √((x_i − x_j)² + (y_i − y_j)²);
coordinates of a to-be-measured pixel are recorded as (x_r, y_r), and pixels (x_u, y_u), (x_d, y_d) adjacent to the to-be-measured pixel are found in the pixel-elevation set and meet:

min d(x_u, y_u, x_r, y_r) + d(x_d, y_d, x_r, y_r)

s.t. y_u < y_r < y_d, wherein
an elevation h_r corresponding to the coordinates (x_r, y_r) of the to-be-measured pixel
is:
h_r = (h(x_u, y_u) − h(x_d, y_d)) · d(x_r, y_r, x_d, y_d) / d(x_u, y_u, x_d, y_d)
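A sketch of the distance function d and this interpolation step, under the assumption that the two reference entries are passed as `((x, y), h)` pairs (an illustrative argument structure, not prescribed by the text); note that the formula as printed yields the elevation relative to the lower reference point:

```python
import math

def euclid(x1, y1, x2, y2):
    """Euclidean distance d between two pixel coordinates."""
    return math.hypot(x1 - x2, y1 - y2)

def interpolate_elevation(xr, yr, up, down):
    """Linear pixel-elevation interpolation as in step 3.

    up, down : ((x, y), h) entries from the pixel-elevation data set,
               chosen with y_u < y_r < y_d so as to minimize
               d(x_u, y_u, x_r, y_r) + d(x_d, y_d, x_r, y_r).
    Returns h_r = (h_u - h_d) * d(x_r, y_r, x_d, y_d) / d(x_u, y_u, x_d, y_d).
    """
    (xu, yu), hu = up
    (xd, yd), hd = down
    return (hu - hd) * euclid(xr, yr, xd, yd) / euclid(xu, yu, xd, yd)

# Hypothetical references: 80 cm at (x=100, y=50) and 0 cm at (x=100, y=150);
# the pixel at (100, 100) lies halfway between them.
print(interpolate_elevation(100, 100, ((100, 50), 80.0), ((100, 150), 0.0)))  # → 40.0
```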
FIG. 7 is a case of a linear change of an elevation according to Embodiment 2. A pixel position at the top is the 76th column and the 243rd row, and a corresponding elevation is 80 cm. A pixel position at the bottom is the 221st row and the 336th column, and a corresponding elevation is 4 cm. In different embodiments, corresponding elevations, pixels, and image sizes may be different. Elevations corresponding to pixels between the top and the bottom may be obtained through interpolation. A water level elevation corresponding to the image may be obtained by using the cut-off pixel-elevation relationship obtained through masking.

Step 4: denoising and stabilizing the image of the water level monitoring site, to obtain a denoised and preprocessed image.

Step 5: inputting the denoised and preprocessed image into the optimized deep convolutional neural network for prediction, to obtain a pixel-level probability distribution with the same size as the denoised and preprocessed image, and determining a classification threshold according to quality of the image for initial segmentation, to obtain an initial segmentation image;

the pixel-level probability distribution with the same size as the denoised and
preprocessed image in step 5 is: p(u, v), wherein p(u, v) is a pixel probability in a uth row and a vth column predicted by the optimized deep convolutional neural network, u ∈ [1, X], v ∈ [1, Y], X is a quantity of rows of the denoised and preprocessed image, and Y is a quantity of columns of the denoised and preprocessed image; and step 5 of determining a classification threshold according to quality of the image for initial segmentation is:
class(u, v) = {1, p(u, v) > θ; 0, p(u, v) ≤ θ}, wherein
class(u, v) represents the initial segmentation image, class(u, v) = 1 represents that a pixel in the uth row and the vth column belongs to a water area, class(u, v) = 0 represents that the pixel in the uth row and the vth column does not belong to the water area, and θ is the classification threshold.

A picture photographed by the camera is processed into a three-channel RGB image with a height h of 400 pixels and a width w of 360 pixels. The processed image is inputted into the neural network, and a probability graph with the same size as the inputted image is outputted, where the probability graph has a height of 400 pixels and a width of 360 pixels. As shown in FIG. 8, a value of a pixel in a vth row and a uth column in the probability graph represents a probability (see probability density below) that the pixel belongs to a water surface, and the value ranges from 0 to 1.

Step 6: constructing a probability field model according to the initial segmentation image, representing the probability field model as a conditional probability model by using a Gibbs distribution, approximating the conditional probability model as a mean field model, and performing further optimization by using a divergence between the conditional probability model and the mean field model as an optimization target, to obtain a finally distributed segmented image;

step 6 of constructing a probability field model according to the initial segmentation image is specifically that: for the probability field model, I represents a pixel, and X represents a real category corresponding to the pixel, i.e., the initial segmentation result in step 4, wherein X = {0, 1}, X = 1 represents that the pixel belongs to a water area, and X = 0 represents that the pixel does not belong to the water area;

step 6 of representing the probability field model as a conditional probability model by using a Gibbs distribution is:

P(X | I) = exp(−Σ_{c∈CG} φ_c(X_c | I)) / Z(I), wherein

Z(I) is a regularization constant used for normalizing the probability distribution, CG represents all cliques in the graph, the φ_c(·) function represents a potential function defined on clique c, and P(X | I) represents the target random field;
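The KL-divergence optimization target used in step 6 can be checked by brute force on a toy field. The fully factorized Bernoulli Q and the two-pixel joint P below are illustrative assumptions; the method itself minimizes the divergence by gradient descent rather than enumeration:

```python
import itertools
import math

def kl_factorized_vs_joint(q_probs, p_joint):
    """KL(Q || P) for a mean-field (fully factorized) Bernoulli Q against a
    joint distribution P over binary labelings X of n pixels.

    q_probs : list of q_i = Q(X_i = 1), one mean-field factor per pixel.
    p_joint : dict mapping a tuple of 0/1 labels to P(X | I).
    """
    n = len(q_probs)
    kl = 0.0
    for x in itertools.product((0, 1), repeat=n):
        q = 1.0
        for xi, qi in zip(x, q_probs):
            q *= qi if xi == 1 else (1.0 - qi)
        if q > 0:
            kl += q * math.log(q / p_joint[x])
    return kl

# Two-pixel toy example: an independent P with marginals 0.7 and 0.4,
# so the matching factorized Q attains KL = 0.
p = {}
for x in itertools.product((0, 1), repeat=2):
    p[x] = (0.7 if x[0] else 0.3) * (0.4 if x[1] else 0.6)
print(kl_factorized_vs_joint([0.7, 0.4], p))  # ≈ 0.0 (Q matches P)
```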
step 6 of approximating the conditional probability model as a mean field model comprises: performing approximation on P(X | I) by using the mean field model, i.e., Q(X), wherein it is assumed that Q(X) is represented by several independent distributions:

Q(X) = ∏_i Q(X_i), wherein

Q(X_i) represents an ith independent distribution of Q(X) for approximating P(X | I); and

step 6 of using a divergence between the conditional probability model and the mean field model as an optimization target comprises: using minimizing a KL divergence between Q(X) and P(X | I) as the optimization target, which is specifically as follows:
min_Q KL(Q ‖ P) = ∫ Q(X) log( Q(X) / P(X | I) ) dX, wherein

KL(Q ‖ P) represents the KL divergence between Q(X) and P(X | I); and gradient descent is performed on a likelihood function of Q, to obtain a target optimal
approximate distribution, so as to obtain the finally distributed segmented image.

As shown in FIG. 9, the probability graph is combined with the original image and optimized by using a random field, to obtain a mask image with the same size as the inputted image, where the mask image has a height of 400 pixels and a width of 360 pixels. If the value of the pixel in the vth row and the uth column is 1 or 0, it indicates that the pixel belongs or does not belong to a water surface, respectively.

Step 7: further processing pixels belonging to a water surface region in the finally distributed segmented image according to the elevation interpolation function in step 3, to obtain an elevation, i.e., a water depth, of the pixels belonging to the water surface region in the finally distributed segmented image. A continuously observed water level change can be obtained through processing frame by frame, as shown in FIG. 10.

The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that such prior art forms part of the common general knowledge.

It will be understood that the terms "comprise" and "include" and any of their derivatives
(e.g. comprises, comprising, includes, including) as used in this specification, and the claims that follow, are to be taken to be inclusive of the features to which the terms refer, and are not meant to exclude the presence of any additional features unless otherwise stated or implied.

The specific embodiments described in this specification are merely examples to describe the spirit of the present invention. A person skilled in the art of the present invention can make
various modifications or supplements or use a similar method for replacement, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
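The frame-by-frame water level reading of step 7 can be sketched as follows. Reading the waterline as the topmost water pixel in each column of the final mask and averaging its interpolated elevation is an illustrative assumption not spelled out in the text, and `elevation_of_pixel` stands in for the step-3 interpolation function:

```python
import numpy as np

def water_level_from_mask(mask: np.ndarray, elevation_of_pixel) -> float:
    """Estimate a water level from a binary water mask (step 7 sketch).

    mask               : (H, W) array, 1 = water surface pixel, 0 = not water.
    elevation_of_pixel : callable (x, y) -> elevation, standing in for the
                         pixel-elevation interpolation function of step 3.
    Averages the interpolated elevation of the topmost water pixel per
    column (the waterline); applied frame by frame, this yields a
    continuously observed water level change.
    """
    levels = []
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            levels.append(elevation_of_pixel(col, int(rows[0])))
    return float(np.mean(levels)) if levels else float("nan")

# Toy 4x3 mask: water occupies the bottom two rows, so the waterline is row 2;
# the hypothetical elevation function decreases 10 cm per image row.
m = np.zeros((4, 3), dtype=np.uint8)
m[2:, :] = 1
level = water_level_from_mask(m, lambda x, y: 100 - 10 * y)
print(level)  # → 80.0
```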
Claims (5)
1. A water level measurement method based on a deep convolutional network and a random field, wherein the method comprises the following steps: step 1: constructing a water surface data set; step 2: constructing a deep convolutional neural network, using the water surface data set as an input training data set, and performing optimization on a loss function with reference to the deep convolutional neural network, to obtain an optimized deep convolutional neural network; step 3: setting up a camera at an appropriate observation position selected in a water level monitoring site and fixing the camera, selecting a stable marker within an observation field of view, acquiring an image of the water level monitoring site by using the camera, recording coordinates of a feature pixel in the image of the water level monitoring site and recording an elevation corresponding to the feature pixel, constructing a pixel-elevation data set, and further constructing a pixel-elevation interpolation function by using an interpolation method; step 4: denoising and stabilizing the image of the water level monitoring site, to obtain a denoised and preprocessed image; step 5: inputting the denoised and preprocessed image into the optimized deep convolutional neural network for prediction, to obtain a pixel-level probability distribution with the same size as the denoised and preprocessed image, and determining a classification threshold according to quality of the image for initial segmentation, to obtain an initial segmentation image; step 6: constructing a probability field model according to the initial segmentation image, representing the probability field model as a conditional probability model by using a Gibbs distribution, approximating the conditional probability model as a mean field model, and performing further optimization by using a divergence between the conditional probability model and the mean field model as an optimization target, to obtain a finally distributed segmented
image; and step 7: further processing pixels belonging to a water surface region in the finally distributed segmented image according to the elevation interpolation function in step 3, to obtain an elevation, i.e., a water depth, of the pixels belonging to the water surface region in the finally distributed segmented image; wherein, in step 3, recording coordinates of a feature pixel in the image of the water level monitoring site acquired by using the camera and recording an elevation corresponding to the feature pixel comprises: acquiring n feature pixels defined as:

data_i = (x_i, y_i), i ∈ {1, 2, …, n}, wherein
(x_i, y_i) represents a pixel in a y_i th row and an x_i th column in the image of the water level monitoring site; the pixel-elevation data set in step 3 is:

{(x_i, y_i), h_i}, i ∈ {1, 2, …, n}, wherein
(x_i, y_i) represents a pixel of ith pixel-elevation data in the pixel-elevation data set, h_i represents an elevation corresponding to the ith pixel (x_i, y_i) in the pixel-elevation data set, and n is a quantity of pixels; and

constructing a pixel-elevation interpolation function by using an interpolation method comprises: recording d as a Euclidean distance function between coordinates of pixels:
d(x_i, y_i, x_j, y_j) = √((x_i − x_j)² + (y_i − y_j)²);
coordinates of a to-be-measured pixel are recorded as (x_r, y_r), and pixels (x_u, y_u), (x_d, y_d) adjacent to the to-be-measured pixel are found in the pixel-elevation set and meet:

min d(x_u, y_u, x_r, y_r) + d(x_d, y_d, x_r, y_r)

s.t. y_u < y_r < y_d, wherein
an elevation h_r corresponding to the coordinates (x_r, y_r) of the to-be-measured pixel is:

h_r = (h(x_u, y_u) − h(x_d, y_d)) · d(x_r, y_r, x_d, y_d) / d(x_u, y_u, x_d, y_d).
2. The water level measurement method based on a deep convolutional network and a random field according to claim 1, wherein step 1 comprises: acquiring a plurality of images containing a water surface in various scenes as a plurality of original water surface images; and sequentially marking each original water surface image, to obtain a marked water surface image, wherein the marking means that a pixel corresponding to a water surface region in each original water surface image is marked as 1, and a pixel corresponding to a non-water surface region is marked as 0; and

the water surface data set in step 1 is: (data_k(x, y), flag_k(x, y)), x ∈ [1, X], y ∈ [1, Y], and k ∈ [1, N], wherein

X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set, data_k(x, y) represents a pixel in an xth row and a yth column of a kth marked water surface image in the water surface data set, flag_k(x, y) represents a pixel flag in the xth row and the yth column of the kth marked water surface image in the water surface data set, flag_k(x, y) = 1 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set belongs to a water surface, and flag_k(x, y) = 0 represents that the pixel in the xth row and the yth column of the kth marked water surface image in the water surface data set does not belong to the water surface.
3. The water level measurement method based on a deep convolutional network and a random field according to claim 1, wherein the deep convolutional network in step 2 is formed by connecting a downsampling module and an upsampling module; the downsampling module is formed by cascading a first downsampling convolution layer, a first downsampling pooling layer, a second downsampling convolution layer, a second downsampling pooling layer, ... , a Kth downsampling convolution layer, a Kth downsampling pooling layer, a (K+1)th downsampling convolution layer, and a fully connected layer; the first downsampling convolution layer, the second downsampling convolution layer, ... , and the (K+1)th downsampling convolution layer respectively have convolution kernels with different scales; parameters and biases of the convolution kernels of the first downsampling convolution layer, the second downsampling convolution layer, ... , and the (K+1)th downsampling convolution layer are to-be-optimized parameters; a convolution operation is performed on a feature map of each of the first downsampling convolution layer, the second downsampling convolution layer, ... , and the (K+1)th downsampling convolution layer; the upsampling module is formed by cascading a first upsampling convolution layer, a first upsampling deconvolutional layer, a second upsampling convolution layer, a second upsampling deconvolutional layer, ... , a Kth upsampling convolution layer, a Kth upsampling deconvolutional layer, and a (K+1)th upsampling convolution layer; the first upsampling convolution layer, the second upsampling convolution layer, ... , and the (K+1)th upsampling convolution layer respectively have convolution kernels with different scales; parameters and biases of the convolution kernels of the first upsampling convolution layer, the second upsampling convolution layer, ...
, and the (K+1)th upsampling convolution layer are to-be-optimized parameters; the fully connected layer in the downsampling module is connected to the first upsampling convolution layer in the upsampling module; upsampling is performed on a feature map of each of the first upsampling deconvolutional layer, the second upsampling deconvolutional layer, ... , and the (K+1)th upsampling convolution layer; the feature map of the Kth downsampling convolution layer obtained after convolution processing and the feature map of the Kth upsampling deconvolutional layer obtained after upsampling are fused;

step 2 of using the water surface data set as an input training data set comprises: using each sample, i.e., (data_k(x, y), flag_k(x, y)), in the water surface data set in step 1 as input data of the deep convolutional neural network, wherein flag_k(x, y) is used as a real flag of a pixel in an xth row and a yth column in a kth marked water surface image in the water surface data set; and flag*_k(x, y) is used as a predicted flag, i.e., a flag outputted by the deep convolutional neural network through prediction, of the pixel in the xth row and the yth column in the kth marked water surface image in the water surface data set, wherein x ∈ [1, X], y ∈ [1, Y], and k ∈ [1, N], X is a quantity of rows of a marked water surface image, Y is a quantity of columns of the marked water surface image, and N is a quantity of marked water surface images, i.e., a quantity of samples, in the water surface data set;

step 2 of performing optimization on a loss function defined on a kth training sample and having coordinates of [x, y] with reference to the deep convolutional neural network is:

L = −(flag_k · log(flag*_k) + (1 − flag_k) · log(1 − flag*_k)), wherein

a water surface segmentation problem is considered as a binary classification problem, different image features are extracted by using convolution kernels with different sizes, and information with different scales is transmitted in a downsampling/upsampling
manner; and optimization is performed on the parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and the parameters and biases of the convolution kernels in the plurality of downsampling convolution layers through gradient descent and based on a loss function L, and optimization is performed by using the optimized parameters and biases of the convolution kernels in the plurality of upsampling convolution layers and the optimized parameters and biases of the convolution kernels in the plurality of downsampling convolution layers, to construct the optimized deep convolutional neural network in step 2.
4. The water level measurement method based on a deep convolutional network and a random field according to claim 1, wherein the pixel-level probability distribution with the same size as the denoised and preprocessed image in step 5 is: p(u, v), wherein p(u, v) is a pixel probability in a uth row and a vth column predicted by the optimized deep convolutional neural network, u ∈ [1, X], v ∈ [1, Y], X is a quantity of rows of the denoised and preprocessed image, and Y is a quantity of columns of the denoised and preprocessed image; and

step 5 of determining a classification threshold according to quality of the image for initial segmentation is:

class(u, v) = {1, p(u, v) > θ; 0, p(u, v) ≤ θ}, wherein

class(u, v) represents the initial segmentation image, class(u, v) = 1 represents that a pixel in the uth row and the vth column belongs to a water area, class(u, v) = 0 represents that the pixel in the uth row and the vth column does not belong to the water area, and θ is the classification threshold.
5. The water level measurement method based on a deep convolutional network and a random field according to claim 1, wherein step 6 of constructing a probability field model according to the initial segmentation image is specifically that: for the probability field model, I represents a pixel, and X represents a real category corresponding to the pixel, i.e., the initial segmentation result in step 4, wherein X = {0, 1}, X = 1 represents that the pixel belongs to a water area, and X = 0 represents that the pixel does not belong to the water area;

step 6 of representing the probability field model as a conditional probability model by using a Gibbs distribution is:

P(X | I) = exp(−Σ_{c∈CG} φ_c(X_c | I)) / Z(I), wherein
Z(I) is a regularization constant used for normalizing the probability distribution, CG represents all cliques in the graph, the φ_c(·) function represents a potential function defined on clique c, and P(X | I) represents the target random field;
step 6 of approximating the conditional probability model as a mean field model comprises: performing approximation on P(X | I) by using the mean field model, i.e., Q(X), wherein it is assumed that Q(X) is represented by several independent distributions:

Q(X) = ∏_i Q(X_i), wherein

Q(X_i) represents an ith independent distribution of Q(X) for approximating P(X | I); and
step 6 of using a divergence between the conditional probability model and the mean field model as an optimization target comprises: using minimizing a KL divergence between Q(X) and P(X | I) as the optimization target, which is specifically as follows:
min_Q KL(Q ‖ P) = ∫ Q(X) log( Q(X) / P(X | I) ) dX, wherein

KL(Q ‖ P) represents the KL divergence between Q(X) and P(X | I); and

gradient descent is performed on a likelihood function of Q, to obtain a target optimal approximate distribution, so as to obtain the finally distributed segmented image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011409785.X | 2020-12-04 | ||
CN202011409785.XA CN112508986B (en) | 2020-12-04 | 2020-12-04 | Water level measurement method based on deep convolutional network and random field |
Publications (2)
Publication Number | Publication Date |
---|---|
AU2021277762A1 AU2021277762A1 (en) | 2022-06-23 |
AU2021277762B2 true AU2021277762B2 (en) | 2023-05-25 |
Family
ID=74971800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021277762A Active AU2021277762B2 (en) | 2020-12-04 | 2021-12-03 | Water level measurement method based on deep convolutional network and random field |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112508986B (en) |
AU (1) | AU2021277762B2 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109815865A (en) * | 2019-01-11 | 2019-05-28 | 江河瑞通(北京)技术有限公司 | A kind of water level recognition methods and system based on virtual water gauge |
CN110543872A (en) * | 2019-09-12 | 2019-12-06 | 云南省水利水电勘测设计研究院 | unmanned aerial vehicle image building roof extraction method based on full convolution neural network |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11715001B2 (en) * | 2018-04-02 | 2023-08-01 | International Business Machines Corporation | Water quality prediction |
CN108985238B (en) * | 2018-07-23 | 2021-10-22 | 武汉大学 | Impervious surface extraction method and system combining deep learning and semantic probability |
CN110223341A (en) * | 2019-06-14 | 2019-09-10 | 北京国信华源科技有限公司 | A kind of Intelligent water level monitoring method based on image recognition |
CN111104889B (en) * | 2019-12-04 | 2023-09-05 | 山东科技大学 | U-net-based water remote sensing identification method |
CN111473818B (en) * | 2020-04-27 | 2021-05-11 | 河海大学 | Artificial beach multi-source monitoring data integration analysis method |
CN111598098B (en) * | 2020-05-09 | 2022-07-29 | 河海大学 | Water gauge water line detection and effectiveness identification method based on full convolution neural network |
CN111767801B (en) * | 2020-06-03 | 2023-06-16 | 中国地质大学(武汉) | Remote sensing image water area automatic extraction method and system based on deep learning |
CN111998910B (en) * | 2020-08-26 | 2021-09-24 | 河海大学 | Visual measurement method and system for water level of multi-stage water gauge |
Also Published As
Publication number | Publication date |
---|---|
CN112508986B (en) | 2022-07-05 |
AU2021277762A1 (en) | 2022-06-23 |
CN112508986A (en) | 2021-03-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) |