CN112418124A - Intelligent fish monitoring method based on video images - Google Patents
- Publication number
- CN112418124A (application CN202011369081.4A)
- Authority
- CN
- China
- Prior art keywords
- fish
- frame
- images
- frame images
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06F18/2415 — Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/30 — Noise filtering
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention discloses an intelligent fish monitoring method based on video images, which comprises: obtaining a fish-passing video, performing frame extraction and noise reduction, and inputting the frames into a trained Faster-RCNN for identification; judging, from the distance between the center positions of fish of the same type in two adjacent frame images, whether the same fish appears in both images; and, when the same fish appears in both adjacent frame images, determining the change in the number of each type of fish in the fishway from the relation between the fish's center positions and the reference line and from the displacement of the center position. The scheme can thus achieve an accurate count of the number of each type of fish.
Description
Technical Field
The invention relates to a method for monitoring underwater targets, in particular to a method for intelligently monitoring fishes based on video images.
Background
In recent years, a large number of hydropower stations of various kinds have been built in China. Their dam bodies hinder the free migration of fish; seasonal and physiological migration in particular is disrupted, threatening the survival and reproduction of fish populations. To solve this problem, new dams are specially designed with a fishway for fish to pass through, and light, sound, and biochemical fish-luring technologies are used to guide fish to the fishway entrance and help them pass through the fishway.
The fishway is a fish-passing structure arranged at a gate, dam, or natural obstacle to connect fish migration channels. It is one of the evaluation indexes of river ecosystem health, and an important index of ecological protection in the environmental impact assessment of water conservancy and hydropower projects. At present, fishway monitoring at home and abroad mainly concerns the species and number of fish passing through, in order to verify the effectiveness of the fishway and guide its improvement. A traditional method of fish-type detection in fishways combines grating data with Hu invariant moments to identify fish; however, if a fish does not pass the grating in a fixed posture, the acquired information varies greatly, making accurate identification of fish species and numbers difficult.
Disclosure of Invention
Aiming at the above defects in the prior art, the intelligent fish monitoring method based on video images solves the problem that the number of each type of fish cannot be counted accurately with the Hu-invariant-moment approach of the prior art.
To achieve the above purpose, the invention adopts the following technical scheme:
S1, obtaining a fish-passing video for time period t at the fishway entrance, and performing frame extraction on the video to obtain a plurality of frame images;
S2, performing noise reduction on all frame images, and then inputting them into the trained Faster-RCNN in chronological order to obtain identification information for each fish in the frame images; the identification information comprises the length and width of the box containing the fish, its center position, and the probability of the fish belonging to each type;
S3, calculating, from the identification information of each fish in two adjacent frame images i and i+1, the distance between the center position of each fish in frame image i and the center positions of all fish of the same type in frame image i+1;
S4, judging whether any distance between fish j in frame image i and a fish of the same type in frame image i+1 is smaller than a preset distance; if yes, proceeding to step S5, otherwise proceeding to step S7;
S5, treating fish j as present in both frame images i and i+1, and then judging whether the center positions of fish j in frame images i and i+1 lie on opposite sides of the reference line; if yes, proceeding to step S6, otherwise proceeding to step S7;
S6, judging whether the change in the center position of fish j from frame image i to frame image i+1 is positive; if yes, adding 1 to the count of fish of the type of fish j and proceeding to step S7; otherwise, subtracting 1 from that count and proceeding to step S7;
S7, judging whether all fish in frame image i have been processed; if yes, proceeding to step S8; otherwise, letting j = j+1 and returning to step S4;
S8, judging whether all frame images extracted in time period t have been processed; if yes, letting t = t+1 and returning to step S1; otherwise, letting i = i+1 and returning to step S3.
The beneficial effects of the invention are as follows: the scheme first identifies the fish type in each frame image with a neural network, which overcomes the inaccurate type determination of prior-art grating counting when the fish posture is not standard; the same fish is then identified from the center positions of two detections of the same type in adjacent frames, so the same fish in two adjacent frame images can be determined accurately, ensuring the accuracy of the subsequent per-type count.
The direction of movement of a fish can be determined accurately from the displacement of its center position, and hence whether a fish at the fishway entrance is entering or leaving the fishway, which guarantees an accurate count of each type of fish.
The number of each type of fish passing through the fishway helps managers understand how suitable the fishway is for each type, prompting improvements so that the fishway suits as many fish types in the basin as possible.
Drawings
Fig. 1 is a flow chart of a fish intelligent monitoring method based on video images.
Fig. 2 is a comparison of different types of filters for denoising the same samples.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments: to those skilled in the art, various changes are apparent that do not depart from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
Referring to fig. 1, fig. 1 illustrates the flow of the intelligent fish monitoring method based on video images; as shown in fig. 1, the method comprises steps S1 to S8.
In step S1, a fish-passing video for time period t is obtained at the fishway entrance, and frames are extracted from it to obtain a plurality of frame images; to ensure the accuracy of detection, the scheme preferably extracts 30 frames per second of video.
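The patent does not spell out how the 30-frames-per-second extraction is performed; a minimal pure-Python sketch of the implied frame-index sampling, with the helper name and arguments chosen here for illustration, is:

```python
def sample_frame_indices(total_frames, src_fps, target_fps=30.0):
    """Indices of the frames to keep so that the extracted sequence
    approximates `target_fps` frames per second of video."""
    if target_fps >= src_fps:
        # Source is already at or below the target rate: keep every frame.
        return list(range(total_frames))
    step = src_fps / target_fps  # e.g. a 60 fps source keeps every 2nd frame
    return [int(k * step) for k in range(int(total_frames / step))]
```

For a 60 fps source this keeps every second frame; in practice these indices would select frames while decoding the video.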
In this scheme, the shooting angle of the fish-passing video is perpendicular to the central axis of the fishway entrance. Setting the camera angle in this way avoids deformed fish images caused by an improper shooting angle, which would affect the accuracy of subsequent fish species identification.
In step S2, noise reduction is performed on all frame images, which are then input into the trained Faster-RCNN in chronological order to obtain identification information for each fish in the frame images; the identification information comprises the length and width of the box containing the fish, its center position, and the probability of the fish belonging to each type.
In this scheme, a Gaussian low-pass filter is preferably adopted for the noise reduction of the frame images. To verify its filtering effect, it is compared with common filters of the prior art:
25 samples were obtained and filtered with an ideal low-pass filter, a Butterworth low-pass filter, a Gaussian low-pass filter, an ideal high-pass filter, a Butterworth high-pass filter, and a Gaussian high-pass filter respectively; the processing results of each filter are compared with the unprocessed samples in figure 2, and the accuracy of the 25 processed samples is shown in Table 1.
Table 1: Accuracy of the 25 processed samples for each filter
As can be seen from Table 1, the Gaussian low-pass filter adopted in the present application has the best filtering effect.
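The Gaussian low-pass filtering itself is not detailed in the text; the following sketch shows the principle in one dimension (function names and parameters are illustrative, and a real implementation would apply a 2-D Gaussian kernel, or the same 1-D kernel separably along rows and then columns, to each frame image):

```python
import math

def gaussian_kernel(sigma, radius):
    # Sampled Gaussian, normalised so the weights sum to 1.
    raw = [math.exp(-(x * x) / (2 * sigma * sigma))
           for x in range(-radius, radius + 1)]
    s = sum(raw)
    return [w / s for w in raw]

def smooth(signal, sigma=1.0, radius=2):
    # 1-D convolution with edge replication; high-frequency noise is
    # attenuated while the low-frequency fish silhouette is preserved.
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)  # replicate edges
            acc += w * signal[idx]
        out.append(acc)
    return out
```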
In step S3, the distance between the center position of each fish in frame image i and the center positions of all fish of the same type in frame image i+1 is calculated from the identification information of each fish in the two adjacent frame images i and i+1.
In step S4, it is judged whether any distance between fish j in frame image i and a fish of the same type in frame image i+1 is smaller than a preset distance; if yes, the process proceeds to step S5, otherwise it proceeds to step S7.
In step S5, fish j is treated as present in both frame images i and i+1, and it is then judged whether the center positions of fish j in frame images i and i+1 lie on opposite sides of the reference line; if yes, the process proceeds to step S6, otherwise it proceeds to step S7.
In step S6, it is judged whether the change in the center position of fish j from frame image i to frame image i+1 is positive; if yes, 1 is added to the count of fish of the type of fish j and the process proceeds to step S7; otherwise, 1 is subtracted from that count and the process proceeds to step S7.
In step S7, it is judged whether all fish in frame image i have been processed; if yes, the process proceeds to step S8; otherwise, j = j+1 and the process returns to step S4.
In step S8, it is judged whether all frame images extracted in time period t have been processed; if yes, t = t+1 and the process returns to step S1; otherwise, i = i+1 and the process returns to step S3.
In one embodiment of the invention, the Faster-RCNN is trained as follows:
acquiring a fish-passing video collected over a set time period in the fishway, and performing frame extraction on the video to obtain a plurality of frame images A;
marking the fish in all frame images A, performing noise reduction on the marked frame images A, and then resizing each frame image A to a picture of 720 × 406 pixels;
and dividing all the resized pictures into a training set and a testing set according to a set proportion, and then training the Faster-RCNN with the training set and the testing set to obtain the trained Faster-RCNN.
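The "set proportion" of the split is not stated in the patent; a minimal sketch of such a split, assuming an illustrative 80/20 ratio and a fixed shuffle seed, is:

```python
import random

def split_dataset(items, train_ratio=0.8, seed=42):
    """Shuffle and split annotated pictures into a training and a testing set."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```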
The method for training the Faster-RCNN with the training set comprises:
inputting the pictures in the training set into a convolutional neural network for feature extraction to obtain a feature map;
generating anchor boxes with the RPN, then clipping and filtering them, and judging foreground versus background with a softmax classifier;
mapping the proposal windows, after bounding-box regression correction of the anchor boxes, onto the last feature map of the convolutional neural network, and generating a feature map of preset size for each ROI through an ROI pooling layer;
and finally performing joint training of the detection classification probability and the detection bounding-box regression.
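The foreground/background judgement in the RPN is driven by the overlap between anchor boxes and ground-truth boxes; a sketch of the standard intersection-over-union measure used for this follows (the 0.7/0.3 thresholds in the comment come from the original Faster-RCNN paper, not from this patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).

    In Faster-RCNN training the RPN labels an anchor as foreground or
    background by thresholding its IoU with the ground-truth boxes
    (commonly >= 0.7 foreground, <= 0.3 background).
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```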
Before frame images are input into the Faster-RCNN, the scheme also resizes each of them to 720 × 406 pixels. This choice of picture size not only improves the speed at which the computer processes the data and saves memory, but also improves the accuracy of the neural network.
The accuracy of the fish counting of this scheme is illustrated with the following test example:
Three types of fish with large differences in body shape were selected and placed in an aquarium with length, width, and height of 300 mm, 150 mm, and 20 mm respectively, illuminated with a 10-watt LED lamp. One minute of video was recorded with a 40-megapixel camera, and frames were extracted at 30 frames per second to obtain picture information. Six tests were performed, with 10, 20, 12, 17, 20, and 10 fish placed in the tank each time; after the video images were obtained, the scheme was used to count the fish, and the identification results are shown in Table 2.
Table 2: Detection accuracy of the 6 tests
As can be seen from Table 2, the intelligent fish monitoring method of this scheme achieves high accuracy in counting fish.
Claims (6)
1. An intelligent fish monitoring method based on video images, characterized by comprising the following steps:
S1, obtaining a fish-passing video for time period t at the fishway entrance, and performing frame extraction on the video to obtain a plurality of frame images;
S2, performing noise reduction on all frame images, and then inputting them into the trained Faster-RCNN in chronological order to obtain identification information for each fish in the frame images; the identification information comprises the length and width of the box containing the fish, its center position, and the probability of the fish belonging to each type;
S3, calculating, from the identification information of each fish in two adjacent frame images i and i+1, the distance between the center position of each fish in frame image i and the center positions of all fish of the same type in frame image i+1;
S4, judging whether any distance between fish j in frame image i and a fish of the same type in frame image i+1 is smaller than a preset distance; if yes, proceeding to step S5, otherwise proceeding to step S7;
S5, treating fish j as present in both frame images i and i+1, and then judging whether the center positions of fish j in frame images i and i+1 lie on opposite sides of the reference line; if yes, proceeding to step S6, otherwise proceeding to step S7;
S6, judging whether the change in the center position of fish j from frame image i to frame image i+1 is positive; if yes, adding 1 to the count of fish of the type of fish j and proceeding to step S7; otherwise, subtracting 1 from that count and proceeding to step S7;
S7, judging whether all fish in frame image i have been processed; if yes, proceeding to step S8; otherwise, letting j = j+1 and returning to step S4;
S8, judging whether all frame images extracted in time period t have been processed; if yes, letting t = t+1 and returning to step S1; otherwise, letting i = i+1 and returning to step S3.
2. The intelligent fish monitoring method based on video images as claimed in claim 1, wherein the Faster-RCNN is trained by:
acquiring a fish-passing video collected over a set time period in the fishway, and performing frame extraction on the video to obtain a plurality of frame images A;
marking the fish in all frame images A, performing noise reduction on the marked frame images A, and then resizing each frame image A to a picture of 720 × 406 pixels;
and dividing all the resized pictures into a training set and a testing set according to a set proportion, and then training the Faster-RCNN with the training set and the testing set to obtain the trained Faster-RCNN.
3. The intelligent fish monitoring method based on video images as claimed in claim 2, wherein the Faster-RCNN is trained with the training set by:
inputting the pictures in the training set into a convolutional neural network for feature extraction to obtain a feature map;
generating anchor boxes with the RPN, then clipping and filtering them, and judging foreground versus background with a softmax classifier;
mapping the proposal windows, after bounding-box regression correction of the anchor boxes, onto the last feature map of the convolutional neural network, and generating a feature map of preset size for each ROI through an ROI pooling layer;
and finally performing joint training of the detection classification probability and the detection bounding-box regression.
4. The intelligent fish monitoring method according to claim 1, wherein, before the frame images are input into the Faster-RCNN, each frame image is further resized to a picture of 720 × 406 pixels.
5. The intelligent fish monitoring method based on video images as claimed in claim 1 or 2, wherein the frame images are filtered with a Gaussian low-pass filter during the noise reduction processing.
6. The intelligent fish monitoring method based on video images as claimed in any one of claims 1 to 4, wherein the shooting angle of the fish passing video is perpendicular to the central axis of the fishway inlet.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011369081.4A CN112418124A (en) | 2020-11-30 | 2020-11-30 | Intelligent fish monitoring method based on video images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112418124A | 2021-02-26
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113038670A (en) * | 2021-05-26 | 2021-06-25 | 武汉中科瑞华生态科技股份有限公司 | Light source control method and light source control device |
CN113204990A (en) * | 2021-03-22 | 2021-08-03 | 深圳市众凌汇科技有限公司 | Machine learning method and device based on intelligent fishing rod |
TWI778762B (en) * | 2021-08-24 | 2022-09-21 | 國立臺灣海洋大學 | Systems and methods for intelligent aquaculture estimation of the number of fish |
CN115170535A (en) * | 2022-07-20 | 2022-10-11 | 水电水利规划设计总院有限公司 | Hydroelectric engineering fishway fish passing counting method and system based on image recognition |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5692064A (en) * | 1993-11-01 | 1997-11-25 | Hitachi, Ltd. | Method and apparatus for counting underwater objects using an ultrasonic wave |
JP2007206843A (en) * | 2006-01-31 | 2007-08-16 | Central Res Inst Of Electric Power Ind | Method and device for counting moving body underwater or on water surface and its program |
CN106022459A (en) * | 2016-05-23 | 2016-10-12 | 三峡大学 | Automatic counting system for fish passing amount of fish passage based on underwater videos |
CN108734694A (en) * | 2018-04-09 | 2018-11-02 | 华南农业大学 | Thyroid tumors ultrasonoscopy automatic identifying method based on faster r-cnn |
CN109241871A (en) * | 2018-08-16 | 2019-01-18 | 北京此时此地信息科技有限公司 | A kind of public domain stream of people's tracking based on video data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112418124A (en) | Intelligent fish monitoring method based on video images | |
CN108492319B (en) | Moving target detection method based on deep full convolution neural network | |
CN113971779B (en) | Water gauge automatic reading method based on deep learning | |
CN106778659B (en) | License plate recognition method and device | |
CN109740485B (en) | Reservoir or small reservoir identification method based on spectral analysis and deep convolutional neural network | |
CN107480607A (en) | A kind of method that standing Face datection positions in intelligent recording and broadcasting system | |
CN114049477A (en) | Fish passing fishway system and dynamic identification and tracking method for fish quantity and fish type | |
CN109271868B (en) | Dense connection convolution network hypersphere embedding-based target re-identification method | |
CN110084129A (en) | A kind of river drifting substances real-time detection method based on machine vision | |
CN110020658A (en) | A kind of well-marked target detection method based on multitask deep learning | |
CN114549446A (en) | Cylinder sleeve defect mark detection method based on deep learning | |
CN110853041A (en) | Underwater pier component segmentation method based on deep learning and sonar imaging | |
CN113420619A (en) | Remote sensing image building extraction method | |
CN114639064B (en) | Water level identification method and device | |
CN110321944A (en) | A kind of construction method of the deep neural network model based on contact net image quality evaluation | |
CN107103608A (en) | A kind of conspicuousness detection method based on region candidate samples selection | |
CN114596316A (en) | Road image detail capturing method based on semantic segmentation | |
CN115457277A (en) | Intelligent pavement disease identification and detection method and system | |
CN116721391A (en) | Method for detecting separation effect of raw oil based on computer vision | |
CN113920421B (en) | Full convolution neural network model capable of achieving rapid classification | |
CN114782875B (en) | Fish fine granularity information acquisition method based on fishway construction | |
CN115830514B (en) | Whole river reach surface flow velocity calculation method and system suitable for curved river channel | |
CN111161264A (en) | Method for segmenting TFT circuit image with defects | |
CN112101455B (en) | Tea lesser leafhopper identification and counting method based on convolutional neural network | |
CN115170535A (en) | Hydroelectric engineering fishway fish passing counting method and system based on image recognition |
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- CB02 — Change of applicant information
  - Address after: 614700 No.1, section 4, Binhe Road, Jinkouhe District, Leshan City, Sichuan Province
  - Applicant after: Guoneng Dadu River Zhentouba Power Generation Co.,Ltd.
  - Applicant after: CHENGDU DAHUI WULIAN TECHNOLOGY Co.,Ltd.
  - Address before: 614700 No.1, section 4, Binhe Road, Jinkouhe District, Leshan City, Sichuan Province
  - Applicant before: GUODIAN DADU RIVER ZHENTOUBA POWER GENERATION Co.,Ltd.
  - Applicant before: CHENGDU DAHUI WULIAN TECHNOLOGY Co.,Ltd.