CN114494979B - Method for video recognition of ecological flow discharge - Google Patents

Method for video recognition of ecological flow discharge

Info

Publication number
CN114494979B
CN114494979B (application CN202210315650.XA)
Authority
CN
China
Prior art keywords
video
threshold value
sub
bleeding
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210315650.XA
Other languages
Chinese (zh)
Other versions
CN114494979A
Inventor
廖佳庆
邱志章
胡琳琳
陈洪飞
蒋元中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dingchuan Information Technology Co ltd
Original Assignee
Hangzhou Dingchuan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dingchuan Information Technology Co ltd filed Critical Hangzhou Dingchuan Information Technology Co ltd
Priority to CN202210315650.XA priority Critical patent/CN114494979B/en
Publication of CN114494979A publication Critical patent/CN114494979A/en
Application granted granted Critical
Publication of CN114494979B publication Critical patent/CN114494979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a method for video recognition of ecological flow discharge. A video segment covering the ecological flow discharge is extracted and decomposed into n frame images; consecutive frames are then compared to obtain a difference index between the previous and subsequent frames, and this index is used to judge whether the video contains continuously moving water flow, and hence whether ecological flow is being discharged. The main algorithm is highly general, accurate and robust, and a blur check function is added that reliably rules out abnormal conditions such as shadows, dam overflow, water mist, direct sunlight and video packet loss. For the few special scenes to which the main algorithm does not apply, such as weed interference or water-ripple interference, a sub-model can be trained with a convolutional neural network and used to recognize ecological flow discharge. Combining the two recognition methods, main algorithm and sub-model, identifies whether ecological flow is being discharged while guaranteeing both the generality and usability of the recognition and the accuracy of the result.

Description

Method for video recognition of ecological flow discharge
Technical Field
The invention relates to the technical fields of video recognition and of ecological flow discharge in water conservancy, and in particular to a method for video recognition of ecological flow discharge.
Background
Ecological flow is the flow required to maintain the ecological environment in a watercourse. A minimum ecological flow is generally required when a hydropower station is built; it is the minimum water flow needed to maintain the ecological balance on which downstream organisms depend. Supervision and early warning of ecological flow discharge is therefore particularly important.
Video recognition is a technology based on image recognition: frames are extracted from a video to obtain n images, and the previous and subsequent frame images are processed, analyzed and compared to recognize various targets and objects in different ways.
At present, ecological flow is generally monitored with instruments such as water-level gauges and flow meters, supplemented by video cameras that provide a visual means of checking whether water is being discharged normally. However, the discharge environments of small hydropower stations vary widely. For reasons of site conditions and cost, most small hydropower stations cannot install flow-measuring sensors, and where sensors are installed, impurities in the water often foul the instruments, causing false and missed alarms; since it is hard to judge intuitively whether the equipment is working, often only a camera can be installed to observe remotely whether water is being discharged. Observation by camera is intuitive, but regular manual observation consumes a great deal of labor, round-the-clock observation is difficult, and manual observation is demanding.
To solve the above problems, a common solution is to collect images of ecological flow discharge, train a model with a convolutional neural network, and recognize the discharge with that model. However, such a model is generally trained for a single point-location scene, the workload is large, the data set is difficult to collect, training is time-consuming, and the generality is poor.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method for video recognition of ecological flow discharge. The main algorithm is highly general, accurate and robust; a blur check function is added that reliably rules out abnormal conditions such as shadows, dam overflow, water mist, direct sunlight and video packet loss. For the few special scenes to which the main algorithm does not apply, for example weed interference or water-ripple interference, a sub-model can be trained with a convolutional neural network and used to recognize ecological flow discharge. Combining the two recognition modes, main algorithm and sub-model, identifies whether ecological flow is being discharged while guaranteeing both the generality and usability of the recognition and the accuracy of the result.
The technical scheme adopted by the invention is as follows:
A method for video recognition of ecological flow discharge comprises the following steps:
S1, scene selection: a discharge feature position is determined in the camera scene and framed on the video image with a rectangular box; the discharge features generally include splash, discharged water spray and flowing water.
S2, parameter setting: the duration of the extracted video is set; considering the video playing bandwidth and the accuracy required by the comparison calculation, 1 second is a suitable duration, so the extracted video duration is set to 1 second. The extraction time of the camera scene, the AI identification method, the contrast change rate threshold, the AI threshold and the early-warning recipient information are also set.
Further, the extraction time of the camera scene supports two modes, timed extraction and random extraction: timed extraction extracts video at fixed time intervals within a given time range of the day; random extraction extracts video at random moments within a given time range of the day, for a set number of extractions.
Furthermore, the AI identification method includes a main-algorithm mode and a sub-model mode; the main algorithm uses a contrast change rate threshold and an AI threshold, and the sub-model uses an AI threshold. The AI identification method defaults to the main algorithm, the contrast change rate threshold of the main algorithm defaults to 0.8, the AI threshold of the main algorithm defaults to 0.001, and the AI threshold of the sub-model is set according to the training result. All thresholds range between 0 and 1.
Further, the information of the early warning receiver comprises the name and the mobile phone number of the early warning receiver.
S3, main algorithm identification: the extracted video V is converted to a frame rate of 25 frames/second to obtain video file V1. All images are extracted from V1 to obtain the full image data set D1. The previous-frame and subsequent-frame image data are processed in turn by Gaussian filtering, graying, particlization, calculation of the mean square error of the corresponding particles, and normalization, giving the particle threshold array L1; the values in L1 are compared in turn with the set AI threshold to obtain the minimum threshold minthreshold, the maximum threshold maxthreshold and the contrast change rate X of all images calculated for the scene. The change rate X is compared with the contrast change rate threshold ER to obtain the preliminary ecological flow discharge state D(X), and a recognition-result video V2 is generated for convenient observation.
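As an illustration only (not part of the patent text), the frame-rate conversion and frame extraction described above could look like the following minimal sketch, assuming OpenCV is available; the 25 fps target comes from the description, while the file name and sampling logic are illustrative assumptions.

    import cv2

    def extract_frames(video_path: str, target_fps: int = 25):
        """Sample the extracted 1-second clip V at roughly 25 fps and return the frames (data set D1)."""
        cap = cv2.VideoCapture(video_path)
        src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
        step = max(src_fps / target_fps, 1.0)     # keep every `step`-th source frame
        frames, idx, next_keep = [], 0, 0.0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx >= next_keep:                  # frame falls on the target-rate grid
                frames.append(frame)
                next_keep += step
            idx += 1
        cap.release()
        return frames

    # usage (hypothetical file name): D1 = extract_frames("scene_clip.mp4")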
The preliminary ecological flow discharge state D(X) comprises: 1 for normal discharge and 0 for a discharge warning;
when the identification result D(X) is normal discharge, the preliminary ecological flow discharge identification is finished; when the identification result D(X) is a discharge warning, a video blur check function is added, where video blur covers: shadows, dam overflow, water mist, direct sunlight and video packet loss. All images in the video are in turn extracted, cropped, grayed, blur-calculated and averaged to obtain the blur degree s of the video, and the final ecological flow discharge state D(x, s) is obtained by judgment, where 2 is video blur, 1 is normal discharge and 0 is a discharge warning.
The Gaussian filtering uses a 3×3 Gaussian template; the filtered image is obtained by convolving the image with the Gaussian template.
The graying calculation obtains the gray value of each pixel as a weighted sum of the R, G and B components of the filtered image, giving the gray values of all pixels of the whole image. The graying formula is shown in formula (1):
f(x,y)=0.3R(x,y)+0.59G(x,y)+0.11B(x,y) (1)
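As a non-authoritative illustration of the 3×3 Gaussian filtering and the graying of formula (1), the following sketch uses OpenCV and NumPy; the exact Gaussian kernel coefficients are not given in the patent, so cv2.GaussianBlur with a 3×3 kernel is assumed here.

    import cv2
    import numpy as np

    def preprocess(frame_bgr):
        """3x3 Gaussian filtering followed by the weighted graying of formula (1)."""
        blurred = cv2.GaussianBlur(frame_bgr, (3, 3), 0)       # 3x3 Gaussian template
        b, g, r = cv2.split(blurred.astype(np.float32))
        gray = 0.3 * r + 0.59 * g + 0.11 * b                   # f(x,y) = 0.3R + 0.59G + 0.11B
        return gray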
the particles are square particles, and the edge position is not enough for removing the particles. The side length a of the square particles is calculated according to the pixel resolution width w and the height h of the monitoring area, and the calculation is shown in formula (2):
Figure DEST_PATH_IMAGE001
(2)
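To make the particlization concrete, here is a minimal sketch that tiles a grayscale image into a×a squares and drops incomplete tiles at the edges. Because formula (2) is not recoverable from this text, the side length a is left as a parameter rather than computed from w and h.

    import numpy as np

    def particlize(gray, a):
        """Split a grayscale image into a x a square particles, discarding partial edge tiles."""
        h, w = gray.shape
        particles = []
        for y in range(0, h - a + 1, a):          # incomplete rows/columns at the edges are skipped
            for x in range(0, w - a + 1, a):
                particles.append(gray[y:y + a, x:x + a])
        return particles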
The particle threshold array L1 is obtained from the corresponding previous-frame particle x = {x1, x2, ..., xn} and subsequent-frame particle y = {y1, y2, ..., yn}: the mean square error MSE of each particle pair is computed, the array LMSE of mean square errors over all particles is assembled, and L1 is obtained by normalizing LMSE. The mean square error is calculated by formula (3), where n is the number of pixels in a particle, and the normalization is given by formula (4):
MSE = (1/n) * [(x1 - y1)² + (x2 - y2)² + ... + (xn - yn)²]   (3)
[Formula (4) appears only as an equation image in the source and could not be recovered; it normalizes the values of LMSE to the range 0 to 1 to give the particle threshold array L1.]
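The per-particle MSE of formula (3) is straightforward to sketch; because the normalization of formula (4) is not recoverable from the text, dividing by the maximum possible MSE of 8-bit images (255²) is assumed below purely for illustration.

    import numpy as np

    def particle_threshold_array(prev_particles, next_particles):
        """Per-particle MSE between consecutive frames (formula (3)), then normalization to [0, 1]."""
        lmse = np.array([
            np.mean((p.astype(np.float64) - q.astype(np.float64)) ** 2)
            for p, q in zip(prev_particles, next_particles)
        ])
        return lmse / (255.0 ** 2)   # assumed normalization (formula (4) not recoverable); values in [0, 1]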
the calculation of the initial ecological flow bleeding state D (x) is that in the total image quantity C1 extracted from the video, the process of comparison of front and back frames is carried out, the process is circulated for C1-1 times, each comparison utilizes the particle value in the particle threshold value array L1 to compare with the set AI threshold value in turn, if the particle value in the particle threshold value array L1 is greater than the AI threshold value, the image comparison is greater than the AI threshold value; obtaining the comparison number C2 greater than an AI threshold value through circular calculation, further obtaining the comparison change rate X of all images by dividing the comparison times C1-1 of all the images by C2, wherein the comparison change rate is calculated according to a formula (5), and finally obtaining a primary ecological flow discharge state D (X) by comparing the comparison change rate with a comparison change rate threshold value ER, wherein the primary ecological flow discharge state D (X) comprises the following steps: 1 is normal discharge, 0 is discharge early warning, and the preliminary ecological flow discharge state is calculated according to the formula (6):
X = C2 / (C1 - 1)   (5)
D(X) = 1 (normal discharge) if X >= ER; D(X) = 0 (discharge warning) if X < ER   (6)
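The loop over frame pairs can be sketched as follows, reusing the helper sketches above (preprocess, particlize, particle_threshold_array); the reading that any particle above the AI threshold marks the comparison as changed, and the illustrative particle side of 16, are assumptions rather than statements of the patent.

    def discharge_state(frames, ai_threshold=0.001, er=0.8, particle_side=16):
        """Contrast change rate X (formula (5)) and preliminary state D(X) (formula (6))."""
        grays = [preprocess(f) for f in frames]
        c1 = len(grays)
        c2 = 0
        for prev, nxt in zip(grays[:-1], grays[1:]):           # C1 - 1 comparisons
            l1 = particle_threshold_array(particlize(prev, particle_side),
                                          particlize(nxt, particle_side))
            if (l1 > ai_threshold).any():                      # comparison exceeds the AI threshold
                c2 += 1
        x = c2 / (c1 - 1)
        return (1 if x >= er else 0), x                        # 1 = normal discharge, 0 = warning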
The blur calculation in the blur check function evaluates image blur with the Laplacian operator. All images in the video are cropped and grayed in turn to obtain grayscale images; the blur calculation uses a 3×3 Laplacian template, and convolving each grayscale image with the Laplacian template gives an array x = {x1, x2, ..., xm}, where m is the number of pixels in the image. The mean of the array is computed by formula (7), and the variance of the array, taken as the blur degree var of the image, is computed by formula (8).
Further, the blur degrees var of all images are averaged to obtain the blur degree s of the video, formula (9); finally the blur degree s of the video is compared with the numerical parameter 20 to obtain the final ecological flow discharge state D(x, s), formula (10).
Further, the numerical parameter 20 was obtained from tests on a large number of blurred images.
x̄ = (1/m) * (x1 + x2 + ... + xm)   (7)
var = (1/m) * [(x1 - x̄)² + (x2 - x̄)² + ... + (xm - x̄)²]   (8)
s = (var1 + var2 + ... + varK) / K, where K is the number of images extracted from the video   (9)
D(x, s) = 2 (video blur) if s < 20; otherwise D(x, s) = D(x)   (10)
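A minimal sketch of the Laplacian blur check follows, assuming OpenCV's cv2.Laplacian for the 3×3 template and the "s below 20 means blur" reading of formula (10); cropping to the monitored region is omitted, so treat the details as illustrative.

    import cv2
    import numpy as np

    def video_blur_degree(frames):
        """Blur degree s of the video: mean over frames of the Laplacian variance (formulas (7)-(9))."""
        blur_vars = []
        for f in frames:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)     # 3x3 Laplacian template
            blur_vars.append(lap.var())                        # per-image blur degree var
        return float(np.mean(blur_vars))

    def final_state(preliminary_dx, frames, blur_param=20.0):
        """Final state D(x, s) of formula (10): 2 = video blur, otherwise keep D(x)."""
        s = video_blur_degree(frames)
        return 2 if s < blur_param else preliminary_dx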
S4, recalibrating parameters: when the check of the main-algorithm identification result is abnormal, the parameters are recalibrated. The default AI threshold is 0.001 and the default contrast change rate threshold ER is 0.8.
Further, an abnormal check of the identification result means that, over 100 video extractions for a single scene with the AI threshold of 0.001 and the contrast change rate threshold ER of 0.8, the number of main-algorithm identification results that agree with manual review is less than 95.
Further, the AI threshold of 0.001 and the contrast change rate threshold ER of 0.8 were obtained from tests on a large number of normal-discharge and discharge-warning images. These two thresholds normally need no adjustment; when a special scene is misrecognized, the problem can be resolved by recalibrating the parameters. Special cases and the corresponding parameter-adjustment solutions include, but are not limited to, the following:
Case 1: the discharge position is far away and the water flow is small. The discharge feature occupies a small proportion of the image, the discharge state is not obvious, the video contrast is weak, and the accuracy of the identification result suffers: ecological flow discharge is identified as no discharge. The camera focal length can be adjusted; if it cannot, the contrast change rate threshold ER and the AI threshold are lowered accordingly, guided by the minimum threshold minthreshold, the maximum threshold maxthreshold and the contrast change rate X of the recognition result, or the special case can be recognized with a sub-model.
Case 2: weeds block the view or the water surface is rippled. The weeds and the water surface move in the wind, the accuracy of the identification result suffers, and no ecological flow discharge is identified as discharge. The owner can be asked to remove the weeds, or the contrast change rate threshold ER and the AI threshold are raised accordingly, guided by minthreshold, maxthreshold and the contrast change rate X of the recognition result, to filter out the influence of weeds and water ripples; the special case can also be recognized with a sub-model.
Case 3: video blur. In mountainous areas water mist can blur the video, so the video contrast is weak. For ordinary blur, the contrast change rate threshold ER and the AI threshold can be lowered accordingly, guided by minthreshold, maxthreshold and the contrast change rate X of the recognition result. For severe blur, the blur check function is used; the blur check covers shadows, dam overflow, water mist, direct sunlight, video packet loss, camera occlusion, dirt on the camera, and so on.
Case 4: heavy rainfall and snowfall. Such weather affects the identification result and discharge may be identified; this happens to match the actual state of the ecological flow at the time.
S5, sub-model identification: if, after the main-algorithm parameters have been recalibrated, the check of the main-algorithm identification result is still abnormal, the sub-model is used for identification.
Further, an abnormal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is less than 95.
Sub-model recognition is completed through sample collection, sub-model training, sub-model parameter setting and sub-model identification.
Sample collection: frames are extracted from the camera's video stream to obtain images of the scene during the supervision period; images from different times of day and images with ecological flow discharge in different weather are selected and imported into a training data set of at least 40 images; the discharge features of the scene are framed and labeled to complete the discharge-feature annotation, giving the sub-model data set D0 and the discharge-feature annotation set A0.
Sub-model training: a convolutional neural network is trained on the sub-model data set D0 and the discharge-feature annotation set A0 to obtain the ecological flow discharge sub-model P1.
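The patent only states that a convolutional neural network is trained on the framed discharge features. As one possible concretization, not the patent's prescribed implementation, the sketch below trains a torchvision Faster R-CNN detector with a single "discharge feature" class; the data_loader yielding (images, targets) pairs in torchvision's detection format is assumed to exist.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    def train_submodel(data_loader, epochs=10, device="cpu"):
        """Train a detection CNN on (D0, A0); class 1 = discharge feature, class 0 = background."""
        model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2).to(device)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for images, targets in data_loader:     # targets: dicts with "boxes" and "labels"
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                losses = model(images, targets)     # training mode returns a dict of losses
                loss = sum(losses.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model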
Sub-model parameter setting: after the sub-model P1 has been created, the sub-model parameters are set and the AI identification method is changed to sub-model P1. The Precision, Recall and F-Measure of the training result are consulted and a corresponding AI threshold is set.
Sub-model identification: the video is extracted at the scheduled time to obtain an image I1, and the sub-model P1 performs target-feature recognition on I1 to obtain the identification result. If the identification result contains the discharge feature, the state is judged as 1, normal discharge; if not, the state is judged as 0, a discharge warning. Since the sub-model uses a convolutional neural network, if the identification result disagrees with manual review, the misrecognized pictures are re-labeled and retrained and a new sub-model is generated, iterating until the identification result agrees with manual review.
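A sketch of the sub-model identification step, reusing the hypothetical detector from the training sketch above; taking the AI threshold as the detection-confidence cut-off is one plausible reading of the text, not a stated fact.

    import torch
    import torchvision.transforms.functional as TF

    @torch.no_grad()
    def submodel_identify(model, image_bgr, ai_threshold=0.8):
        """Return 1 (normal discharge) if a discharge feature is detected above the AI threshold, else 0."""
        model.eval()
        tensor = TF.to_tensor(image_bgr[:, :, ::-1].copy())    # BGR -> RGB, HWC -> CHW in [0, 1]
        pred = model([tensor])[0]                              # dict with boxes, labels, scores
        has_feature = bool((pred["scores"] >= ai_threshold).any())
        return 1 if has_feature else 0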
S6, early warning and audit: when the check of the main-algorithm or sub-model identification result is normal, the early warning and audit step is started.
Further, a normal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is greater than or equal to 95.
The warning information produced by the identified warning results is audited manually, and the audited warning information is then issued and handled.
S7, operation and maintenance inspection: inspection and verification are carried out every month against the requirement of more than 95% identification accuracy per scene; scenes that do not meet the accuracy requirement undergo main-algorithm parameter recalibration or sub-model training, ensuring the identification accuracy.
The invention has the beneficial effects that:
1. The method identifies ecological flow discharge by video recognition, effectively removing the high personnel cost of monitoring ecological flow manually.
2. The main algorithm of the invention is highly general and can be used without training, saving the labor cost of training and avoiding the difficulty and long lead time of collecting training data.
3. The main algorithm has a blur check function that can detect abnormal videos such as shadows, dam overflow, water mist, direct sunlight and video packet loss, further guaranteeing the usability of the algorithm.
4. The invention combines the main algorithm and the sub-model into a complete solution that is highly general and practical and is suitable for recognizing ecological flow discharge in widely varying scenes.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of the ecological flow discharge of the present invention;
FIG. 3 is a schematic diagram of framing the discharge feature in the present invention;
FIG. 4 is a diagram illustrating the identification result of the present invention;
FIG. 5 is a comparison of the parameter adjustment results of the present invention;
FIG. 6 is a diagram illustrating the sub-model training results of the present invention;
FIG. 7 is a schematic diagram of a blurred image according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and drawings, but the embodiments of the present invention are not limited thereto.
The basic principle of the method is to acquire video of the ecological flow scene with a camera and, after setting parameters, identify whether the ecological flow is being discharged with the main algorithm. A few special cases are handled by recalibrating parameters or by a sub-model, which is obtained by labeling discharge features on collected samples and training a convolutional neural network. The method combines the main-algorithm mode and the sub-model mode to identify all ecological flow discharge states. It is highly general: the main algorithm can be used without training, saving a large amount of data collection, labeling and training time and avoiding the difficulty of obtaining training data, while recognition accuracy is still guaranteed.
As shown in fig. 1 to 7, the method for video identification of ecological flow discharge of the present invention includes the following steps:
s1, selecting scene:
S1.1, video access: the video is accessed through the Chinese national standard GB/T 28181 protocol, a camera RTSP pull-stream proxy, or similar means.
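As an illustrative sketch only, the RTSP pull-stream access could look like the following with OpenCV; the URL is a placeholder, and GB/T 28181 access would go through a dedicated gateway not shown here.

    import cv2

    def open_rtsp_stream(rtsp_url):
        """Open an RTSP pull stream and grab a single frame to verify the connection."""
        cap = cv2.VideoCapture(rtsp_url)     # e.g. "rtsp://user:pass@host:554/stream" (placeholder)
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("failed to read from RTSP stream")
        return cap, frame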
S1.2, scene selection: the position of the ecological flow discharge feature is confirmed within the camera's monitoring range; by moving the camera and zooming the focal length in and out, it is ensured that a human eye can clearly see through the video whether ecological flow is being discharged, and occlusion of the discharge feature by the natural environment is avoided as far as possible.
Further, the camera angle and resolution must not be changed after the scene has been selected. Because small hydropower stations are generally in mountainous areas and are constrained by terrain, construction difficulty and cost, the ecological flow monitoring point is not necessarily directly opposite the discharge outlet and may be downstream of it, so the discharge features include splash, discharged water spray, flowing water and similar features.
S2, setting parameters:
The duration of the extracted video is set; considering the video playing bandwidth and the accuracy required by the comparison calculation, 1 second is a suitable duration, so the extracted video duration is set to 1 second. The extraction time of the camera scene, the AI identification method, the contrast change rate threshold, the AI threshold and the early-warning recipient information are also set. The AI identification method, the contrast change rate threshold and the AI threshold are recognition-model parameters. The scene parameters are set as follows (a scheduling sketch is given after this list):
Content 1: extraction time. The video extraction mode is set to random extraction, the number of extractions to 1, and the extraction time range to 09:00-15:00. Because some observation points are too distant for fill lights to illuminate, the ecological flow cannot be seen at all at night; this setting meets the supervision requirement while saving bandwidth and computing resources.
When the recognition result D(x, s) is the video-blur state, one compensating extraction and recognition can be taken at a random time at least 2 hours later. If the compensating extraction is also blurred, the state is treated as normal. The recognition mainly aims at accurate warning detection, and blur is a minority case; among blur cases, water mist from dam overflow is the most common and corresponds to normal discharge. Shadows, direct sunlight and video packet loss make it impossible to judge whether discharge is normal, and they are rarer still among blur cases, since even a human eye cannot tell them apart. The event that both extractions are blurred because of shadow, sunlight or packet loss while ecological flow is actually not being discharged is a small-probability event and, by the principle that small-probability events practically do not occur, can be disregarded; if it does occur there must be a specific cause, and two consecutive blurred results are traced to their cause during later operation and maintenance inspection.
Content 2: recognition model parameters. The AI identification method is set to the main algorithm for recognizing ecological flow; for the main algorithm, the default contrast change rate threshold ER is 0.8 and the default AI threshold is 0.001. If a sub-model is used instead, the sub-model corresponding to the scene is selected and its AI threshold is set according to the training result.
Content 3: warning recipient information. The warning recipient information comprises the recipient's name and mobile phone number; warnings are delivered by SMS.
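The random extraction schedule referred to above can be sketched as follows; picking a compensation time at least 2 hours later follows the description, while the exact spread of the compensation offset is an illustrative assumption.

    import random
    from datetime import datetime, timedelta

    def pick_extraction_times(day: datetime):
        """Pick one random extraction time in 09:00-15:00 and a compensation time at least 2 h later."""
        start = day.replace(hour=9, minute=0, second=0, microsecond=0)
        first = start + timedelta(seconds=random.randint(0, 6 * 3600))        # within 09:00-15:00
        compensation = first + timedelta(hours=2, seconds=random.randint(0, 3600))
        return first, compensation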
S3, the main algorithm identifies:
by performing frame rate conversion on the extracted video V, a frame rate video file V1 of 25 frames/sec is obtained. Extracting all images from the video file V1 to obtain an all-image data set D1; sequentially carrying out Gaussian filtering, graying, particalization, calculation of corresponding particle mean square error and normalization on the image data of the front frame and the back frame to obtain a particle threshold value array L1, and sequentially comparing the numerical values in the particle threshold value array L1 with a set AI threshold value threshold to obtain a minimum threshold value minthreshold and a maximum threshold value maxthreshold calculated in the scene; obtaining a preliminary ecological flow discharge state D (x) by judgment; and a recognition result video V2 is generated for convenient observation.
The preliminary ecological flow bleeding state d (x) includes: 1 is normal discharge, and 0 is discharge early warning;
when the identification result D (x) is that the bleeding is normal, finishing initial ecological flow bleeding identification; when the identification result D (x) is a bleeding early warning, a video fuzzy check function is added, wherein the video fuzzy comprises the following steps: shadow, dam overflow, water mist, sunlight irradiation, video packet loss; sequentially extracting, cutting, graying, fuzzy calculating and average calculating all images in the video to obtain a fuzzy degree s of the video, and judging to obtain a final ecological flow discharge state D (x, s), wherein the final ecological flow discharge state D (x, s) comprises the following steps: and 2 is video fuzzy, 1 is normal discharge, and 0 is discharge early warning.
After setting parameters, the verification function can be clicked, the current scene video can be immediately extracted, the ecological flow state of the current scene is identified, and table 1 is a partial result statistic for 68 point verification:
table 1 statistical table of partial scene verification identification results
Serial number | Scene name | Change rate | Maximum threshold | Minimum threshold | Recognition result | Manual review result
1 | Power station 1 | 0.94 | 0.0562 | 0.0003 | 1 (normal discharge) | Correct
2 | Power station 2 | 0.96 | 0.024 | 0.0008 | 1 (normal discharge) | Correct
3 | Power station 3 | 1 | 0.2873 | 0.0621 | 1 (normal discharge) | Correct
4 | Power station 4 | 1 | 0.2344 | 0.0655 | 1 (normal discharge) | Correct
5 | Power station 5 | 1 | 0.0292 | 0.0084 | 1 (normal discharge) | Correct
6 | Power station 6 | 1 | 0.0919 | 0.0025 | 1 (normal discharge) | Correct
7 | Power station 7 | 0 | 0.0002 | 0 | 0 (discharge warning) | Correct
8 | Power station 8 | 0 | 0.0001 | 0 | 0 (discharge warning) | Correct
9 | Power station 9 | 0.74 | 0.0072 | 0.0002 | 0 (discharge warning) | Incorrect: distant discharge position, small water flow
10 | Power station 10 | 0.98 | 0.024 | 0.0005 | 1 (normal discharge) | Incorrect: weed interference
The recognition results basically meet expectations; the small number of special cases are handled by parameter recalibration and sub-model identification.
S4, recalibrating parameters:
When the check of the main-algorithm identification result is abnormal, the parameters are recalibrated.
Further, an abnormal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is less than 95.
According to the identification and verification results, the misrecognized scene "Power station 9" was analyzed: the cause of the error is that the discharge position is far away and the water flow is small, which is a special scene, and such special-scene recognition errors can be resolved by recalibrating the parameters. The parameters were recalibrated, the contrast change rate threshold ER changed to 0.5 and the AI threshold to 0.0003; after recalibration, the main algorithm was used for identification and verification, and the result was: change rate 0.96, maximum threshold 0.0072, minimum threshold 0.0002, recognition result 1 (normal discharge), verified correct.
S5, sub-model identification:
S5.1, sub-model creation: if, after the main-algorithm parameters have been recalibrated, the check of the main-algorithm identification result is still abnormal, the sub-model is used for identification.
Further, an abnormal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is less than 95.
The scene of "Power station 10" is disturbed by weeds, so the recognition is wrong and a discharge warning is recognized as normal discharge; it is a special-case scene, parameter adjustment and verification did not work well, and sub-model identification is adopted for this scene.
Frames are extracted from the camera's video stream to obtain images of the scene during the supervision period; images from different times of day and images with ecological flow discharge in different weather are selected and imported into a training data set of at least 40 images; the discharge features of the scene are framed and labeled to complete the discharge-feature annotation, giving the data set Dtsq and the discharge-feature annotation set Atsq of the "Power station 10" sub-model.
A convolutional neural network is trained on the data set Dtsq and the discharge-feature annotation set Atsq to obtain the ecological flow discharge sub-model Ptsq of "Power station 10", completing the creation of the sub-model.
S5.2, sub-model parameter setting: after the sub-model has been created, the sub-model parameters are set and the AI identification method is changed to sub-model Ptsq. The Precision, Recall and F-Measure of the training result are consulted and the AI threshold is set to 0.8.
S5.3, verification of the sub-model identification result: the video is extracted at the scheduled time to obtain an image Itsq, and the sub-model Ptsq performs target-feature recognition on Itsq to obtain the identification result. If the identification result contains the discharge feature, the state is judged as 1, normal discharge; if not, the state is judged as 0, a discharge warning. Since the sub-model uses a convolutional neural network, if the identification result disagrees with manual review, the misrecognized pictures are re-labeled and retrained and a new sub-model is generated, iterating until the identification result agrees with manual review.
Comparing the "Power station 10" sub-model identification results, and excluding cases of screen corruption caused by video packet loss, the accuracy is 100%. See Table 2:
Table 2: Statistics of the "Power station 10" sub-model identification results
Serial number | Manual judgment | Sub-model identification | Image time | Confidence (%)
1 | 1 (normal discharge) | 1 (normal discharge) | 2021-12-12 09:10:00 | 91.65
2 | 1 (normal discharge) | 1 (normal discharge) | 2021-12-12 11:20:00 | 84.13
3 | 0 (discharge warning) | 0 (discharge warning) | 2021-1-12 07:40:00 | 47.03
4 | 0 (discharge warning) | 0 (discharge warning) | 2021-12-12 07:50:00 | 43.24
5 | 0 (discharge warning) | 0 (discharge warning) | 2021-12-12 08:00:11 | 42.03
6 | 1 (normal discharge) | 1 (normal discharge) | 2022-02-14 13:01:00 | 92.56
7 | 1 (normal discharge) | 1 (normal discharge) | 2022-02-14 15:01:08 | 91.57
8 | 1 (normal discharge) | 1 (normal discharge) | 2022-02-14 16:01:07 | 87.09
9 | 1 (normal discharge) | 1 (normal discharge) | 2022-02-14 17:01:00 | 93.51
10 | 1 (normal discharge) | 1 (normal discharge) | 2022-02-14 11:01:00 | 91.52
S6, early warning and auditing:
S6.1, warning audit: when the check of the main-algorithm or sub-model identification result is normal, the early warning and audit step is started.
Further, a normal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is greater than or equal to 95.
The identification results of all scenes for the previous day are compiled at 01:00 every morning; since recognition is set to run once per day, warning information is generated whenever a single scene's recognition produces a warning. The warning information produced by the identified warning results is audited manually. The audit mainly ensures that the issued warning information is 100% correct; at the same time, the AI recognition filters out a large amount of non-warning information, so only a small amount of warning information needs to be audited.
S6.2, warning issuing: after manual audit, the warning information is sent by SMS to the mobile phone of the warning recipient for the scene, completing the push of the warning information.
S6.3, warning handling: after receiving the information, the warning recipient deals with the situation in which ecological flow is not being discharged.
S7, operation and maintenance inspection:
S7.1, result statistics: statistics are compiled from the AI identification results and the manual audit results, and the accuracy of the AI identification is counted. After a period of parameter optimization, and excluding video-blur cases, the recognition results of 68 scenes over a period of time were compiled; 67 scenes use the main algorithm and 1 uses a sub-model, and the average recognition accuracy exceeds 99%. Partial statistics are given in Table 3:
TABLE 3 statistical table of partial scene recognition results
Serial number | Scene name | Audit total | Audits consistent | Audits inconsistent | Accuracy (%)
1 | Power station 11 | 69 | 69 | 0 | 100%
2 | Power station 12 | 67 | 67 | 0 | 100%
3 | Power station 13 | 68 | 68 | 0 | 100%
4 | Power station 14 | 70 | 70 | 0 | 100%
5 | Power station 15 | 57 | 57 | 0 | 100%
6 | Power station 16 | 34 | 34 | 0 | 100%
7 | Power station 17 | 70 | 70 | 0 | 100%
8 | Power station 18 | 65 | 64 | 1 | 98.5%
9 | Power station 19 | 60 | 59 | 1 | 98.3%
10 | Power station 20 | 70 | 68 | 2 | 97.1%
11 | Summary of 68 scenes | 4179 | 4143 | 36 | 99.1%
S7.2, operation and maintenance inspection: misrecognition results are inspected and verified every month, the scene distribution and proportion of abnormal cases are determined, and solutions are analyzed according to the actual situation. Against the requirement of more than 95% identification accuracy per scene, parameters are adjusted where needed, and if parameter adjustment is not sufficient, sub-model training is started. Excluding video-blur cases, the abnormal cases generally found are listed in Table 4:
table 4 statistical table for abnormal operation and maintenance inspection
[The content of Table 4 is provided only as an image in the original publication and is not recoverable from the source text.]

Claims (9)

1. A method for video recognition of ecological flow discharge, characterized in that it comprises the following steps:
S1, scene selection: a discharge feature position is determined in the camera scene and framed on the video image with a rectangular box; the discharge features generally include splash, discharged water spray and flowing water;
S2, parameter setting: the duration of the extracted video is set; the extraction time of the camera scene, the AI identification method, the contrast change rate threshold, the AI threshold and the warning recipient information are also set; the AI identification method comprises a main algorithm and a sub-model, the main algorithm using a contrast change rate threshold and an AI threshold and the sub-model using an AI threshold; the AI identification method defaults to the main algorithm;
S3, main algorithm identification: the extracted video V is converted to a frame rate of 25 frames/second to obtain video file V1; all images are extracted from V1 to obtain the full image data set D1; the previous-frame and subsequent-frame image data are processed in turn by Gaussian filtering, graying, particlization, calculation of the mean square error of the corresponding particles, and normalization, giving the particle threshold array L1; the values in L1 are compared in turn with the set AI threshold to obtain the minimum threshold minthreshold, the maximum threshold maxthreshold and the contrast change rate X of all images calculated for the scene; the change rate X is compared with the contrast change rate threshold ER to obtain the preliminary ecological flow discharge state D(X); at the same time a recognition-result video V2 is generated for convenient observation; the preliminary ecological flow discharge state D(X) comprises: 1 for normal discharge and 0 for a discharge warning;
when the identification result D(X) is normal discharge, the preliminary ecological flow discharge identification is finished; when the identification result D(X) is a discharge warning, a video blur check function is added, where video blur covers: shadows, dam overflow, water mist, direct sunlight and video packet loss; all images in the video are in turn extracted, cropped, grayed, blur-calculated and averaged to obtain the blur degree s of the video, and the final ecological flow discharge state D(x, s) is obtained by judgment, where 2 is video blur, 1 is normal discharge and 0 is a discharge warning;
the particles are square, and incomplete particles at the image edges, where the remaining region is not large enough to form a full square, are discarded; the side length a of the square particles is calculated from the pixel-resolution width w and height h of the monitoring area, as shown in formula (2):
[Formula (2) appears only as an equation image in the source and could not be recovered; it defines the particle side length a as a function of w and h.]
the particle threshold array L1 is obtained from the corresponding previous-frame particle x = {x1, x2, ..., xn} and subsequent-frame particle y = {y1, y2, ..., yn}: the MSE of each particle pair is obtained, the mean square error array LMSE over all particles is computed, and L1 is obtained by normalizing LMSE; the mean square error is calculated by formula (3) and the normalization by formula (4):
MSE = (1/n) * [(x1 - y1)² + (x2 - y2)² + ... + (xn - yn)²]   (3)
in the formula:
n is the total number of pixels in a square image particle;
xi is the gray value of the ith pixel of the previous-frame particle;
yi is the gray value of the ith pixel of the subsequent-frame particle;
[Formula (4) appears only as an equation image in the source and could not be recovered; it normalizes the values of LMSE to the range 0 to 1 to give the particle threshold array L1.]
S4, recalibrating parameters: when the check of the main-algorithm identification result is abnormal, the parameters are recalibrated; the default AI threshold is 0.001 and the default contrast change rate threshold ER is 0.8;
S5, sub-model identification: if, after the main-algorithm parameters have been recalibrated, the check of the main-algorithm identification result is still abnormal, the sub-model is used for identification; sub-model recognition is completed through sample collection, sub-model training, sub-model parameter setting and sub-model identification;
S6, early warning and audit: when the check of the main-algorithm or sub-model identification result is normal, the early warning and audit step is started; the warning information produced by the identified warning results is audited manually, and the audited warning information is then issued and handled;
S7, operation and maintenance inspection: inspection and verification are carried out every month against the requirement of more than 95% identification accuracy per scene; scenes that do not meet the accuracy requirement undergo main-algorithm parameter recalibration or sub-model training, ensuring the identification accuracy.
2. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S2, considering the video playing bandwidth and the accuracy required by the comparison calculation, the duration of the extracted video is set to 1 second; the extraction time of the camera scene supports two modes, timed extraction and random extraction: timed extraction extracts video at fixed time intervals within a given time range of the day, and random extraction extracts video at random moments within a given time range of the day, for a set number of extractions.
3. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S2, the contrast change rate threshold of the main algorithm is set to 0.8 by default, the AI threshold of the main algorithm is set to 0.001 by default, and the AI threshold of the sub-model is set according to the training result; all thresholds range between 0 and 1.
4. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S2, the warning recipient information comprises the recipient's name and mobile phone number.
5. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S3, the Gaussian filtering uses a 3×3 Gaussian template, and the filtered image is obtained by convolving the image with the Gaussian template;
the graying calculation obtains the gray value of each pixel as a weighted sum of the R, G and B components of the filtered image, giving the gray values of all pixels of the whole image; the graying formula is shown in formula (1):
f(x,y)=0.3R(x,y)+0.59G(x,y)+0.11B(x,y) (1)
the preliminary ecological flow discharge state D(X) is calculated as follows: for the total number C1 of images extracted from the video, previous and subsequent frames are compared, giving C1-1 comparisons; in each comparison the particle values in the particle threshold array L1 are compared in turn with the set AI threshold, and if a particle value in L1 exceeds the AI threshold, that image comparison is counted as exceeding the AI threshold; the number C2 of comparisons exceeding the AI threshold is accumulated over the loop, the contrast change rate X of all images is obtained by dividing C2 by the number of comparisons C1-1, as in formula (5), and the preliminary ecological flow discharge state D(X) is obtained by comparing X with the contrast change rate threshold ER, where 1 is normal discharge and 0 is a discharge warning, as in formula (6):
X = C2 / (C1 - 1)   (5)
D(X) = 1 (normal discharge) if X >= ER; D(X) = 0 (discharge warning) if X < ER   (6)
the blur calculation in the blur check function evaluates image blur with the Laplacian operator; all images in the video are cropped and grayed in turn to obtain grayscale images; the blur calculation uses a 3×3 Laplacian template, and convolving each grayscale image with the Laplacian template gives an array x = {x1, x2, ..., xm}, where m is the total number of pixels in the image; the array x is averaged to obtain the mean value x̄, as in formula (7), and the variance of the array x gives the blur degree var of the image, as in formula (8);
the blur degrees var of all images are averaged to obtain the blur degree s of the video, as in formula (9), and finally the blur degree s of the video is compared with the numerical parameter 20 to obtain the final ecological flow discharge state D(x, s), as in formula (10):
x̄ = (1/m) * (x1 + x2 + ... + xm)   (7)
var = (1/m) * [(x1 - x̄)² + (x2 - x̄)² + ... + (xm - x̄)²]   (8)
s = (var1 + var2 + ... + varK) / K, where K is the total number of images extracted from the video file   (9)
D(x, s) = 2 (video blur) if s < 20; otherwise D(x, s) = D(x)   (10)
6. The method for video recognition of ecological flow discharge according to claim 5, characterized in that: the numerical parameter 20 is derived from tests on a large number of blurred images.
7. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S4, an abnormal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is less than 95;
the AI threshold of 0.001 and the contrast change rate threshold ER of 0.8 are obtained from tests on a large number of normal-discharge and discharge-warning images; the AI threshold and the contrast change rate threshold ER normally need no adjustment; other special scenes are resolved by recalibrating the parameters, and the special cases and corresponding parameter-adjustment solutions include the following:
case 1: the discharge position is far away and the water flow is small: the discharge outlet occupies a small proportion of the image, the discharge state is not obvious, the video contrast is weak, the accuracy of the identification result suffers, and ecological flow discharge is identified as no discharge; the camera focal length is adjusted, and if it cannot be adjusted, the contrast change rate threshold ER and the AI threshold are lowered accordingly, guided by the minimum threshold minthreshold, the maximum threshold maxthreshold and the contrast change rate X of the recognition result, or the special case is recognized with a sub-model;
case 2: weeds block the view or the water surface is rippled: the weeds and the water surface move in the wind, the accuracy of the identification result suffers, and no ecological flow discharge is identified as discharge; the owner is asked to remove the weeds, or the contrast change rate threshold ER and the AI threshold are raised accordingly, guided by minthreshold, maxthreshold and the contrast change rate X of the recognition result, to filter out the influence of weeds and water ripples, or the special case is recognized with a sub-model;
case 3: video blur: in mountainous areas water mist can blur the video, so the video contrast is weak; for ordinary blur, the contrast change rate threshold ER and the AI threshold are lowered accordingly, guided by minthreshold, maxthreshold and the contrast change rate X of the recognition result; for severe blur, the blur check function is used, and the blur check covers: shadows, dam overflow, water mist, direct sunlight, video packet loss, camera occlusion and dirt on the camera;
case 4: heavy rainfall and snowfall: such weather affects the identification result and discharge may be identified, which happens to match the actual state of the ecological flow at the time.
8. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S5, an abnormal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is less than 95;
the sample collection is: frames are extracted from the camera's video stream to obtain images of the scene during the supervision period; images from different times of day and images with ecological flow discharge in different weather are selected and imported into a training data set of at least 40 images; the discharge features of the scene are framed and labeled to complete the discharge-feature annotation, giving the sub-model data set D0 and the discharge-feature annotation set A0;
the sub-model training is: a convolutional neural network is trained on the sub-model data set D0 and the discharge-feature annotation set A0 to obtain the ecological flow discharge sub-model P1;
the sub-model parameter setting is: after the sub-model P1 has been created, the sub-model parameters are set, the AI identification method is changed to sub-model P1, the Precision, Recall and F-Measure of the training result are consulted, and a corresponding AI threshold is set;
the sub-model identification is: the video is extracted at the scheduled time to obtain an image I1, and the sub-model P1 performs target-feature recognition on I1 to obtain the identification result; if the identification result contains the discharge feature, the state is judged as 1, normal discharge; if not, the state is judged as 0, a discharge warning; since the sub-model uses a convolutional neural network, if the identification result disagrees with manual review, the misrecognized pictures are re-labeled and retrained and a new sub-model is generated, iterating until the identification result agrees with manual review.
9. The method for video recognition of ecological flow discharge according to claim 1, characterized in that: in step S6, a normal check of the identification result means that, over 100 video extractions for a single scene, the number of main-algorithm identification results that agree with manual review is greater than or equal to 95.
CN202210315650.XA 2022-03-29 2022-03-29 Method for video recognition of ecological flow discharge Active CN114494979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210315650.XA CN114494979B (en) 2022-03-29 2022-03-29 Method for video recognition of ecological flow discharge

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210315650.XA CN114494979B (en) 2022-03-29 2022-03-29 Method for video recognition of ecological flow discharge

Publications (2)

Publication Number Publication Date
CN114494979A CN114494979A (en) 2022-05-13
CN114494979B true CN114494979B (en) 2022-07-22

Family

ID=81489176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210315650.XA Active CN114494979B (en) 2022-03-29 2022-03-29 Method for video recognition of ecological flow discharge

Country Status (1)

Country Link
CN (1) CN114494979B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697453A (en) * 2018-09-30 2019-04-30 中科劲点(北京)科技有限公司 Semi-supervised scene classification recognition methods, system and device based on multimodality fusion
CN112364802A (en) * 2020-11-19 2021-02-12 中国地质调查局水文地质环境地质调查中心 Deformation monitoring method for collapse landslide disaster body

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017051327A1 (en) * 2015-09-22 2017-03-30 Imageprovision Technology Pvt. Ltd. Method and system for detection and classification of particles based on processing of microphotographic images
CN108171001A (en) * 2017-11-29 2018-06-15 中国电建集团成都勘测设计研究院有限公司 It is a kind of to assess the method that effect is let out under hydraulic and hydroelectric engineering ecological flow
CN109299687A (en) * 2018-09-18 2019-02-01 成都网阔信息技术股份有限公司 A kind of fuzzy anomalous video recognition methods based on CNN
WO2020118533A1 (en) * 2018-12-11 2020-06-18 中广核工程有限公司 Nuclear power plant leakage monitoring alarm method and alarm system
CN112884198B (en) * 2021-01-13 2023-06-09 西安理工大学 Method for predicting dam crest settlement of panel dam by combining threshold regression and improved support vector machine
CN113487249B (en) * 2021-09-07 2021-12-07 长江水利委员会水文局 Self-adaptive hydropower station intelligent ecological regulation and control method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697453A (en) * 2018-09-30 2019-04-30 中科劲点(北京)科技有限公司 Semi-supervised scene classification recognition methods, system and device based on multimodality fusion
CN112364802A (en) * 2020-11-19 2021-02-12 中国地质调查局水文地质环境地质调查中心 Deformation monitoring method for collapse landslide disaster body

Also Published As

Publication number Publication date
CN114494979A (en) 2022-05-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant