CN110335245A - Cage netting damage monitoring method and system based on monocular space and time continuous image - Google Patents
Cage netting damage monitoring method and system based on monocular space and time continuous image
- Publication number
- CN110335245A (application CN201910425601.XA)
- Authority
- CN
- China
- Prior art keywords
- netting
- image
- monitoring method
- obtains
- spliced
- Prior art date: 2019-05-21
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/02—Agriculture; Fishing; Forestry; Mining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention discloses a cage netting damage monitoring method and system based on spatio-temporally continuous monocular images. The method includes an unsupervised-learning network model training step and a netting image processing step. The netting image processing step comprises: (1) scanning the netting with a monocular camera to obtain several spatially continuous local netting images; (2) feeding the local netting images into the unsupervised-learning network model to obtain preliminary netting images; (3) repairing and stitching the occluded regions of the preliminary netting images to obtain a whole netting image; (4) performing damage detection on the whole netting image. The netting damage monitoring method of the invention detects and analyzes automatically over the whole process, which effectively reduces the manpower consumed in netting damage detection and detects netting breakage in real time with high precision. Moreover, the hardware configuration the method requires is simple and inexpensive compared with an underwater robot.
Description
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a cage netting damage monitoring method and system based on spatio-temporally continuous monocular images.
Background art
Fishery is an important component of China's agriculture and national economy. As the population grows and living standards rise, the demand for fishery products keeps increasing, and the demand for high-quality marine aquatic products in particular is growing rapidly. This has driven the rapid development of offshore fish cage aquaculture in China, and the scale of deep-water cages in particular keeps expanding. Large-scale, industrialized deep-water cage farming is an inevitable trend. However, because breakage of a sea-cage net is difficult to discover in time, farmed fish can easily escape in large numbers and cause huge losses to farmers. Netting safety monitoring has therefore become one of the important and difficult problems that must be solved in the key technologies of deep- and far-sea cage aquaculture and their popularization.
At present, netting breakage monitoring mainly relies on divers, or on manual observation through underwater cameras. Such methods consume considerable manpower, are affected by environmental factors and are inefficient. With the development of electronic information technology, several netting monitoring concepts have been proposed, including crawling along the netting with a camera-equipped underwater robot, circling the netting with an autonomous underwater vehicle (AUV) for inspection, passive monitoring based on sonar, and weaving wire-break sensors into the netting. However, these concepts do not account for occlusion by fish shoals, implementation cost, operating time and similar problems, and they can hardly provide all-weather, year-round real-time netting monitoring.
In view of the above problems, the present invention provides a deep-water cage netting damage monitoring system and method based on spatio-temporally continuous monocular images: preliminary netting images are obtained with an optical monocular camera, foreground occlusions are removed using unsupervised learning, the images are then stitched and fused into a complete netting image, and breakage in the complete netting image is detected using an undecimated discrete wavelet transform, finally achieving high-precision real-time monitoring of netting breakage.
Summary of the invention
Aiming at the technical problems in the prior art that inspecting cage netting breakage with underwater robots is costly, and that occlusion by fish shoals easily lowers the breakage monitoring accuracy, the present invention proposes a cage netting damage monitoring method based on spatio-temporally continuous monocular images that solves the above problems.
To achieve the above object, the present invention adopts the following technical solution:
A cage netting damage monitoring method based on spatio-temporally continuous monocular images, comprising:
an unsupervised-learning network model training step, in which an unsupervised-learning network model is obtained by training; and
a netting image processing step, comprising:
(1) scanning the netting with a monocular camera to obtain several spatially continuous local netting images;
(2) feeding each local netting image into the unsupervised-learning network model to obtain the depths of the different semantic regions in the image, separating the pixels whose semantics is netting, and removing the occluded pixels, so as to obtain a preliminary netting image;
(3) using the shooting overlap between spatially continuous local netting images, repairing and stitching the occluded regions of the preliminary netting images to obtain a whole netting image;
(4) performing damage detection on the whole netting image, comprising:
(41) applying a multi-level undecimated wavelet decomposition to the whole netting image and fusing the wavelet coefficients to obtain a feature fusion matrix;
(42) dividing the feature fusion matrix into multiple regions, modeling the data distribution of each region with a Gumbel distribution model, and constructing the log-likelihood map of each rectangular region;
(43) binarizing the log-likelihood map of each rectangular region, thereby segmenting damaged netting regions from undamaged regions and obtaining the netting damage detection result.
Further, the unsupervised-learning network model training step includes:
(101) shooting the netting with a binocular camera to obtain a group of left source images I_l and a group of right source images I_r;
(102) inputting one of the two groups of images into the unsupervised-learning network model for convolutional calculation to generate two groups of corresponding disparity maps, namely a left disparity map d_l and a right disparity map d_r;
(103) applying a bilinear sampler to d_l and d_r respectively to reversely generate a reconstructed left input image Î_l and a reconstructed right input image Î_r;
(104) taking the error between I_l and Î_l together with the error between I_r and Î_r as the objective function, training the unsupervised-learning network model, and determining the model parameters.
Further, in step (1) the scanning of the netting includes horizontal scanning and vertical scanning, so as to obtain several spatially continuous local netting images.
Further, in step (3), before the occluded regions of the preliminary netting images are repaired and stitched, the method further includes:
(30) applying geometric rotation correction and brightness equalization to each preliminary netting image.
Further, in step (3) the repairing and stitching of the occluded regions of the preliminary netting images includes a horizontal repair-and-stitch step and a vertical stitch step, where the horizontal repair-and-stitch step includes:
(31) selecting several images that contain the same scene, and taking the most complete image of the scene as the benchmark image;
(32) taking, with the benchmark image at the center, the images adjacent to it on both sides, comparing the completeness of the netting pixels in the regions where the benchmark image overlaps with each adjacent image, and filling into the benchmark image the netting pixels that appear in the adjacent images but not in the benchmark image, thereby obtaining a new benchmark image;
(32) continuing to take, on both sides of the benchmark image, two images separated from it by an interval, comparing the completeness of the overlap regions between the taken images and the current benchmark image, and filling into the benchmark image the netting pixels that appear in the taken images but not in the benchmark image, until all images that overlap with the benchmark image have been searched and filled;
(33) continuing to select the images of the next scene until the images of all scenes have been repaired, and stitching the images of all scenes to obtain the horizontally complete netting image at the depth of the benchmark image;
and the vertical stitch step includes stitching all horizontally complete netting images in the vertical direction to obtain the whole netting image.
Further, step (33) further includes a step of eliminating the stitching seam artifacts produced by image mosaicing.
Further, the method for eliminating the stitching seam artifacts is:
(331) extracting the feature points at the junction of the images to be stitched;
(332) obtaining the perspective matrix of the images to be stitched, the perspective matrix reflecting the projection relationship between the images to be stitched;
(333) fitting, according to the perspective matrix, the images of the same object in the stitching region as seen from the viewpoint of each image to be stitched;
(334) applying a SIFT transform to the object images to obtain the pixel values at the stitching seam.
Further, after step (334) the method further includes (335) eliminating the abnormal points at the stitching seam using the RANSAC method.
The present invention also proposes a cage netting damage monitoring system based on spatio-temporally continuous monocular images, comprising:
a central shaft arranged vertically at the center of the netting;
a crossbeam arranged horizontally and rotatably connected to the central shaft;
a first driving mechanism for driving the crossbeam to rotate around the central shaft;
a telescopic arm, one end of which is fixed to the end of the crossbeam close to the netting, and whose length can extend and retract in the vertical direction;
a monocular camera fixed to the free end of the telescopic arm for shooting the netting; and
a control module that receives the image information sent by the monocular camera and monitors the breakage of the netting according to the monitoring method of any one of claims 1-8.
Further, the system includes a second driving mechanism for driving the telescopic arm to extend and retract in the vertical direction.
Compared with the prior art, the advantages and positive effects of the present invention are as follows. In the cage netting damage monitoring method based on spatio-temporally continuous monocular images, preliminary netting images are obtained with an optical monocular camera, foreground occlusions are removed based on unsupervised learning, the images are then stitched and fused into a complete netting image, and breakage in the complete netting target image is detected with an undecimated discrete wavelet transform, finally achieving high-precision real-time monitoring of netting breakage. The method detects and analyzes automatically over the whole process, which effectively reduces the manpower consumed in netting damage detection and detects netting breakage in real time with high precision. Moreover, the hardware configuration the method requires is simple and inexpensive compared with an underwater robot.
Other features and advantages of the invention will become clearer after the detailed description of the invention is read in conjunction with the drawings.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of an embodiment of the cage netting damage monitoring method based on spatio-temporally continuous monocular images proposed by the invention;
Fig. 2 is the netting model diagram in an embodiment of the method;
Fig. 3 is a training schematic of the unsupervised-learning network model in an embodiment of the method;
Fig. 4 is a schematic diagram of scanning the netting in an embodiment of the method;
Fig. 5 is a schematic diagram of obtaining the preliminary netting images in an embodiment of the method;
Fig. 6 is a schematic diagram of the horizontal image difference between two adjacent netting photographs in an embodiment of the method;
Fig. 7 is a schematic diagram of the image pixel coordinates in an embodiment of the method;
Fig. 8 is a schematic diagram of segmenting netting breakage in an embodiment of the method;
Fig. 9 is a structural schematic of an embodiment of the cage netting damage monitoring system based on spatio-temporally continuous monocular images proposed by the invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further described in detail below with reference to the drawings and embodiments.
It should be noted that, in the description of the present invention, terms indicating directions or positional relationships such as "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings. They are used only for convenience of description and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore they are not to be construed as limiting the invention. In addition, the terms "first" and "second" are used only for description and cannot be understood as indicating or implying relative importance.
Embodiment one. The invention proposes a cage netting damage monitoring method based on spatio-temporally continuous monocular images, including an unsupervised-learning network model training step and a netting image processing step, where the unsupervised-learning network model training step obtains an unsupervised-learning network model by training.
The netting image processing step, as shown in Fig. 1, comprises:
S1, scanning the netting with a monocular camera to obtain several spatially continuous local netting images; Fig. 2 shows the netting model — the netting is assumed to be a standard cylinder, and the monocular camera scans and photographs along the surface of the netting;
S2, feeding each local netting image into the unsupervised-learning network model to obtain the depths of the different semantic regions in the image, separating the pixels whose semantics is netting, and removing the occluded pixels, so as to obtain a preliminary netting image;
S3, using the shooting overlap between spatially continuous local netting images, repairing and stitching the occluded regions of the preliminary netting images to obtain a whole netting image;
S4, performing damage detection on the whole netting image, comprising:
S41, applying a multi-level undecimated wavelet decomposition to the whole netting image and fusing the wavelet coefficients to obtain a feature fusion matrix;
S42, dividing the feature fusion matrix into multiple regions, modeling the data distribution of each region with a Gumbel distribution model, and constructing the log-likelihood map of each rectangular region;
S43, binarizing the log-likelihood map of each rectangular region, thereby segmenting damaged netting regions from undamaged regions and obtaining the netting damage detection result.
In the cage netting damage monitoring method based on spatio-temporally continuous monocular images of this embodiment, preliminary netting images are obtained with an optical monocular camera, and foreground occlusions are removed based on unsupervised learning: several spatio-temporally adjacent monocular netting images are used to generate equivalent binocular images, the depths of the different semantic regions are estimated, and the preliminary netting image is extracted according to the different depths while foreground occlusions are removed. The requirement on the image acquisition device is therefore low — only a monocular camera is needed, so the cost is correspondingly low — because the depth information is obtained by image processing, after which the occlusions are removed to yield the preliminary netting images. The images are then stitched and fused into a complete netting image, and breakage in the complete netting target image is monitored with an undecimated discrete wavelet transform, finally realizing netting breakage monitoring. The method monitors the netting automatically, around the clock and in real time, saving labor cost. By controlling the shooting frequency, the collected frames are spatio-temporally adjacent and have overlapping shooting regions, so a high-precision netting image can be stitched, which in turn enables high-precision real-time monitoring of the underwater netting.
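As a non-limiting illustration of step S2, the following is a minimal NumPy sketch of keeping only the pixels lying at the netting depth; the depth-map input, the 2 m netting distance and the 0.2 m tolerance are illustrative assumptions, and in the method the depth itself comes from the unsupervised-learning network.

```python
# Minimal sketch: keep only pixels whose estimated depth lies on the netting
# plane, discarding foreground occluders such as fish. Inputs are illustrative.
import numpy as np

def extract_netting_layer(image, depth_m, net_distance_m=2.0, tol_m=0.2):
    """image: HxWx3 uint8 frame; depth_m: HxW float depth map in meters."""
    netting_mask = np.abs(depth_m - net_distance_m) < tol_m
    preliminary = image.copy()
    preliminary[~netting_mask] = 0      # blank out occluded (non-netting) pixels
    return preliminary, netting_mask
```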
An unsupervised-learning network, also called an unsupervised deep-learning network, can divide samples into several categories; by training an unsupervised-learning network model, the depth information in a monocular image can be obtained. The unsupervised-learning network model training step includes:
S101, shooting the netting with a binocular camera to obtain a group of left source images I_l and a group of right source images I_r;
S102, inputting one of the two groups of images into the unsupervised-learning network model for convolutional calculation to generate two groups of corresponding disparity maps, namely a left disparity map d_l and a right disparity map d_r;
S103, applying a bilinear sampler to d_l and d_r respectively to reversely generate a reconstructed left input image Î_l and a reconstructed right input image Î_r;
S104, taking the error between I_l and Î_l together with the error between I_r and Î_r as the objective function, training the unsupervised-learning network model, and determining the model parameters.
In the network training stage, the training data of this scheme are pairs of left and right binocular images of the same scene: a binocular camera is needed to obtain the two images of the same scene, with one view serving as the labeled sample (ground truth) for learning. After training, in the autonomous working stage of the network, a single monocular camera is enough to estimate depth, and the other camera used for training can be removed.
Fig. 3 is a schematic of the model training stage (after training, the right-view camera I_r is no longer needed; the network model without the right view I_r is the network model in the actual working state). The training model comprises five parts in total and works as follows. Several consecutive (here, three) single-frame left-view netting scan images are used as the left source images and fed into the convolutional layers one by one, with the subscript x denoting the horizontal coordinate position at which the left camera obtained the image during horizontal scanning. Similarly, the right-view camera obtains the corresponding frames of the same scenes as the target reference images (ground truth), but they are not used as input. Only the left-view images pass through the convolutional layers, and the corresponding left and right disparity maps are generated through the feature-mapping model; a bilinear sampler is then applied to each disparity map to reversely generate the approximate left and right planar input images Î_l and Î_r. Finally, the error between the generated left image Î_l and the true left view I_l, together with the error between the generated right view Î_r and the true right view I_r (ground truth), is used as the objective function that guides the training of the deep-learning network. This left-right consistency cost function reinforces the consistency between the two disparity maps, yielding more accurate disparities and, in turn, higher-precision depth estimates.
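As a non-limiting illustration of this left-right photometric objective, the following PyTorch sketch warps one view with the predicted disparity of the other and penalizes the reconstruction error (in the spirit of Godard et al., cited below); the warping sign convention, the L1 photometric term and all tensor shapes are assumptions, not the exact network of the invention.

```python
# Hedged sketch of a left-right photometric consistency loss.
import torch
import torch.nn.functional as F

def warp_with_disparity(src, disp):
    """Sample `src` (N,C,H,W) at horizontal offsets given by `disp` (N,1,H,W),
    disparity expressed as a fraction of image width."""
    n, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(src)
    grid = base.clone()
    grid[..., 0] = grid[..., 0] + 2.0 * disp.squeeze(1)   # shift x coordinates
    return F.grid_sample(src, grid, align_corners=True)

def lr_photometric_loss(I_l, I_r, disp_l, disp_r):
    """Reconstruct each view from the other and penalize the L1 error."""
    I_l_hat = warp_with_disparity(I_r, -disp_l)   # rebuild left from right
    I_r_hat = warp_with_disparity(I_l, disp_r)    # rebuild right from left
    return (I_l - I_l_hat).abs().mean() + (I_r - I_r_hat).abs().mean()
```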
From the generated equivalent left- and right-view planar images Î_l, Î_r and their disparity maps d_l, d_r, the depth can be estimated using the binocular depth-estimation principle Z = b·f/d, where b is the distance between the two optical centers of the binocular camera, f is the focal length of the camera, d is the disparity, and Z is the depth. Assuming the shortest distance between the camera and the netting plane is known to be 2 m, the netting target image at a depth of 2 m is extracted from the image according to the estimated depth data, images at other depths are removed, and the interference of occluders at other depths is thereby eliminated, yielding the preliminary netting image of the single netting semantic target.
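As a non-limiting illustration, the formula Z = b·f/d can be applied per pixel as follows; the baseline and focal-length values are placeholders, not parameters of the invention.

```python
# Depth from disparity, Z = b*f/d; used together with the layer extraction
# above to pick out pixels at the known camera-to-netting distance.
import numpy as np

def disparity_to_depth(disparity_px, baseline_m=0.12, focal_px=800.0):
    d = np.maximum(disparity_px, 1e-6)      # avoid division by zero
    return baseline_m * focal_px / d        # depth map in meters
```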
Fig. 8 shows a schematic of the netting to be monitored by the present invention. The netting is assumed to be a standard cylinder with an upper and lower circle diameter of 20 m and a net depth of 10 m. Clearly, limited by the field of view and the shooting distance, the scanning of the netting in step S1 needs to include horizontal scanning and vertical scanning in order to obtain several spatially continuous local netting images. Fig. 4 is a schematic of obtaining single-frame netting scan images: the camera scans the netting while staying in a plane parallel to and equidistant from the netting. Assume the camera is 2 m from the netting; according to the geometric imaging principle, the camera focal length is adjusted so that each shot covers a netting area of size A x B, where A is the length of netting in the captured image and B is its height. The camera is controlled to rotate clockwise or counterclockwise in the horizontal plane around the outer surface of the netting to perform the horizontal scan. One netting scan image is taken every 0.573·A degrees of rotation (a step of A/10 along the netting), so the horizontal direction yields 628/A (rounded up) spatio-temporally adjacent single-frame netting scan images. After the netting has been scanned for a full circle at a given depth, the camera is moved in the vertical direction to perform the vertical scan. The vertical step is B/10 m per image, so the vertical direction yields 100/B (rounded up) spatio-temporally adjacent single-frame netting scan images. The acquired images need noise-reduction preprocessing to filter out noise in the photographs, and the image sizes are unified to facilitate subsequent processing.
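The scan-plan arithmetic of this paragraph can be reproduced directly; the following helper is a sketch that uses the example figures from the description (20 m diameter, 10 m net depth, a step of one tenth of the frame footprint), which are examples rather than fixed parameters of the method.

```python
# Reproduce the example scan plan: ceil(628/A) horizontal and ceil(100/B)
# vertical frames for a 20 m diameter, 10 m deep cylindrical net.
import math

def scan_plan(frame_width_A, frame_height_B, cage_diameter=20.0, net_depth=10.0):
    circumference = math.pi * cage_diameter                       # ~62.8 m
    step_deg = (frame_width_A / 10.0) / circumference * 360.0     # ~0.573*A degrees
    n_horizontal = math.ceil(360.0 / step_deg)                    # ~= ceil(628/A)
    n_vertical = math.ceil(net_depth / (frame_height_B / 10.0))   # ~= ceil(100/B)
    return step_deg, n_horizontal, n_vertical
```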
Because the camera slightly changes its shooting angle while scanning the netting, the same feature point is captured from different viewing angles (the shooting process is shown in Fig. 4), so geometric rotation correction must be applied before stitching. In addition, since these images are not captured under the same exposure at the same moment, illumination variations can cause brightness differences between the captured images, so brightness equalization must be applied to them. Surging seawater also makes the fishing net sway, which can cause motion blur in the images, so image restoration preprocessing is needed. Therefore, in step S3, before the occluded regions of the preliminary netting images are repaired and stitched, the method further includes:
S30, applying geometric rotation correction and brightness equalization to each preliminary netting image.
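As a non-limiting sketch of step S30, the rotation correction and brightness equalization could be implemented with OpenCV as follows; the source of the per-frame angle and the use of CLAHE on the HSV value channel are assumptions, not requirements of the invention.

```python
# Hedged sketch of S30: rotate each preliminary image back by its scan angle
# and equalize brightness before stitching.
import cv2
import numpy as np

def correct_frame(img_bgr, roll_angle_deg):
    h, w = img_bgr.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), roll_angle_deg, 1.0)
    img = cv2.warpAffine(img_bgr, rot, (w, h))            # geometric rotation correction
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    hsv[:, :, 2] = clahe.apply(hsv[:, :, 2])              # brightness equalization
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```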
According to the spatio-temporal continuity of the netting image capture, the several netting images acquired at the same depth are connected and recovered. Factors considered during stitching include the shooting angle of each image, the spatio-temporal continuity of the shooting, the netting depth, the shooting range of the camera, the image sharpness, and so on.
In step S3, the repairing and stitching of the occluded regions of the preliminary netting images includes a horizontal repair-and-stitch step and a vertical stitch step. The whole process is shown in Fig. 5, in which the numbers 1-5 denote five spatio-temporally adjacent local netting images, the irregular missing parts are the parts from which occluders have been removed, and the elliptical parts are netting breakages. In Fig. 5, the image labeled M in brackets (e.g. 3 (M)) is stitched as the benchmark image, M being the field-of-view image of the netting to be recovered by stitching, and C1, C2, C4, C5 are the reference images for stitching: 1 (C1) and 2 (C2) are the reference images for the field of view to the left of the center image 3 (M), and 4 (C4) and 5 (C5) are the reference images to its right. First, 2 (C2) and 4 (C4) are used as the reference images for the first stitch: the parts of 2 (C2) and 4 (C4) that overlap with 3 (M) are cropped and compared with the same regions of 3 (M) to see whether they contain complete netting. If, at a given position, any one of 3 (M), 2 (C2) or 4 (C4) contains complete netting, that position is judged to have complete netting, and the complete netting image there is added into 3 (M). After 2 (C2) and 4 (C4) have been added into 3 (M), the supplemented image is used as the new benchmark image 3 (M) and stitching continues; 1 (C1) and 5 (C5) become the reference images for the next netting stitch, and the steps of the first stitch are repeated, and so on, until the complete netting image of the field-of-view part M is finally obtained.
The horizontal repair-and-stitch step includes:
S31, selecting several images that contain the same scene, and taking the most complete image of the scene as the benchmark image;
S32, taking, with the benchmark image at the center, the images adjacent to it on both sides, comparing the completeness of the netting pixels in the regions where the benchmark image overlaps with each adjacent image, and filling into the benchmark image the netting pixels that appear in the adjacent images but not in the benchmark image, thereby obtaining a new benchmark image;
S32, continuing to take, on both sides of the benchmark image, two images separated from it by an interval, comparing the completeness of the overlap regions between the taken images and the current benchmark image, and filling into the benchmark image the netting pixels that appear in the taken images but not in the benchmark image, until all images that overlap with the benchmark image have been searched and filled;
S33, continuing to select the images of the next scene until the images of all scenes have been repaired, and stitching the images of all scenes to obtain the horizontally complete netting image at the depth of the benchmark image.
Specifically, when filling the benchmark image, the horizontal image difference between two adjacent netting photographs is computed directly from the change of the camera's shooting angle, the depth of the netting and the camera's field of view; this difference is Δx, and its meaning is illustrated in Fig. 6. The filling is stitched directly according to this difference, as in the following formula:
p_n(x_{i,j}) = p_{n-1}(x_{i+Δx,j}) | p_n(x_{i,j}) | p_{n+1}(x_{i-Δx,j})
where p_n(x_{i,j}) denotes the pixel gray value of the n-th netting image at point x_{i,j}, x_{i,j} denotes the point with abscissa i and ordinate j, and the symbol | denotes "or". Since the abscissa difference between two adjacent images is Δx, when computing the pixel at the same netting position, Δx is added to the abscissa of the image to the left of this position and subtracted from the abscissa of the image to the right; the pixel coordinates of the image are shown in Fig. 7.
The vertical stitch step includes stitching all horizontally complete netting images in the vertical direction to obtain the whole netting image.
Since the stitching of the images is a rigid process that does not specifically consider image content but stitches directly according to the covering angle and the difference in scene coverage between images, the stitched image can show mismatches at the joints, i.e. image fractures. If these fractures are not handled, they may later be mistaken for breakage of the netting itself, causing a large error in netting damage detection. To solve this problem, step S33 further includes a step of eliminating the stitching seam artifacts produced by image mosaicing.
Preferably, the method for eliminating the stitching seam artifacts is:
S331, extracting the feature points at the junction of the images to be stitched;
S332, obtaining the perspective matrix of the images to be stitched, where the perspective matrix reflects the projection relationship between the images to be stitched;
S333, fitting, according to the perspective matrix, the images of the same object in the stitching region as seen from the viewpoint of each image to be stitched;
S334, applying a SIFT transform to the object images to obtain the pixel values at the stitching seam.
The feature points at the junction of the original images to be stitched are extracted, and the parts of the same object seen from different viewpoints are then fitted according to the perspective matrix. SIFT (scale-invariant feature transform) is applied to these parts for feature-point detection and matching. Taking into account the influence of measurement error in the alignment process, the homography matrix is computed from the symmetric transfer error of the feature point pairs, which can be expressed as
E(H) = Σ_i [ d(x_i, H⁻¹·x'_i)² + d(x'_i, H·x_i)² ]
where the first term expresses the transfer error of the current image, the second term the transfer error of the previous image, d(·,·) denotes the distance between two image points, x'_i and x_i denote the i-th feature point pair of the previous image and the current image, and H denotes the projective transformation between the two images. A median filter is then used to smooth the image, giving a netting image with a smooth seam edge.
After step S334, the method further includes S335, eliminating the abnormal points at the stitching seam using the RANSAC (Random Sample Consensus) method.
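As a non-limiting illustration of steps S331-S335, the sketch below detects SIFT keypoints, estimates the perspective (homography) matrix with RANSAC to discard abnormal matches, and warps one image onto the other; the matcher settings and the Lowe ratio threshold are assumptions.

```python
# Hedged sketch of seam correction: SIFT features + RANSAC homography.
import cv2
import numpy as np

def align_seam(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC drops outliers
    h, w = img_b.shape[:2]
    return cv2.warpPerspective(img_a, H, (w, h)), H
```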
An extreme case may be encountered during stitching: the same scene position is occluded by fish in every captured netting image, while the netting at this occluded position is actually not broken. After the occlusions are removed, this scene position has a missing part, the stitched netting image still retains this missing part, and the netting there will be misjudged as broken. Considering this extreme case, there are two ways to resolve such misjudgements. The first is to have the camera photograph the netting parts judged as broken again and precisely: the camera is moved to observe the occluded position at close range, and the acquired images are processed again. The second is manual inspection: when the monitoring system raises a netting-damage alarm, a diver is sent to inspect the damaged part under water and determine whether it is actually broken.
In step S41, an undecimated discrete wavelet transform is applied to every stitched netting image to generate decomposition images in multiple directions: the undecimated discrete wavelet transform is realized by upsampling the high-pass and low-pass wavelet filters applied to the netting image. At each decomposition level, filter taps of equal length are retained and the decimator is omitted from the decomposition. Assuming the resolution of the image f(x, y) is N x N, one level of the undecimated wavelet transform generates four sub-images of size N x N, of which one low-pass image corresponds to the first approximation of the original image, and three high-pass images correspond to the detail images in the horizontal, vertical and diagonal directions respectively. The decomposition is then repeated on the approximation image of each level. The wavelet decomposition generally proceeds to the fourth level and produces, in order, the approximation image and the wavelet coefficients of the detail images of each level. On the basis of the approximation-image and detail-image wavelet coefficients computed for each level, a data-fusion scheme is used to construct the feature map fused from the multi-level wavelet coefficients. To extract a group of texture features with good discriminative power, the wavelet coefficients of each decomposition level need to be extracted from the wavelet domain, but this would produce a relatively high-dimensional feature space and a high computational burden; to achieve dimensionality reduction, the present invention fuses the image group from different scales and directions into one feature matrix. The intensity-contrast information of the defect region mostly comes from the approximation images of the various levels, while the local directional-contrast information of the defect region is effectively preserved in the detail sub-bands.
In particular, the intensity contrast of the defect region can be highlighted in the morphological gradient of the large-scale approximation image, computed as the approximation image after morphological dilation minus the image after erosion:
M_l(x, y) = (W^A_J ⊕ S)(x, y) − (W^A_J Θ S)(x, y)
where W^A_J is the wavelet-domain coefficient matrix, with A denoting the approximation image and J the decomposition level, ⊕ and Θ denote the dilation and erosion operators respectively, and S is an S x S square structuring element.
The local directional-contrast information of the defect region can be effectively captured by the differences between the wavelet detail images of two consecutive decomposition levels. Specifically, for each detail channel d ∈ {H, V, D}, the difference map is computed as
M^d_j(x, y) = |W^d_{j+1}(x, y) − W^d_j(x, y)|, with j = 1, 2, ..., J−1.
The sum of the differences over all channels is
M_h(x, y) = N( Σ_{d∈{H,V,D}} Σ_{j=1}^{J−1} M^d_j(x, y) )
where N(·) is a normalization operator. This map enhances, on the whole, the mapping of defect regions characterized by a small number of strong high-frequency components, while suppressing the mapping of the regular striped texture features that dominate the image content.
The feature maps M_l(x, y) and M_h(x, y) are then combined to produce the fused feature map M_f(x, y), defined by the following formula, which enhances defect regions of different sizes while attenuating the background texture:
M_f(x, y) = N(M_l(x, y)) + N(M_h(x, y))
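As a non-limiting illustration of step S41 and the fusion above, the sketch below performs a stationary (undecimated) wavelet decomposition, builds M_l from morphological gradients of the approximation images, M_h from detail differences between consecutive levels, and fuses them as N(M_l) + N(M_h); the 'haar' wavelet, the 5x5 structuring element and summing the gradient over all levels are assumptions.

```python
# Hedged sketch of the fused feature map M_f = N(M_l) + N(M_h).
# Note: pywt.swt2 requires each image side to be divisible by 2**levels.
import numpy as np
import pywt
from scipy import ndimage

def normalize(a):
    a = a.astype(np.float64)
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def fused_feature_map(image, wavelet="haar", levels=4, struct=5):
    coeffs = pywt.swt2(image.astype(np.float64), wavelet, level=levels)
    approxs = [c[0] for c in coeffs]
    details = [c[1] for c in coeffs]            # (cH, cV, cD) per level
    # M_l: morphological gradient (dilation minus erosion) of approximation images
    m_l = sum(ndimage.grey_dilation(a, size=(struct, struct)) -
              ndimage.grey_erosion(a, size=(struct, struct)) for a in approxs)
    # M_h: sum over channels of |detail_{j+1} - detail_j| between consecutive levels
    m_h = np.zeros_like(m_l)
    for j in range(levels - 1):
        for ch in range(3):                     # H, V, D channels
            m_h += np.abs(details[j + 1][ch] - details[j][ch])
    return normalize(m_l) + normalize(m_h)      # M_f(x, y)
```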
In step S42, to describe the numerical characteristics of M_f(x, y), a Gumbel distribution is adopted to model its data distribution: the fused feature map obtained above is composed of differences of groups of wavelet coefficients, and the commonly used Gaussian model, which suits symmetric data, does not fit the numerical characteristics of M_f(x, y) here, so the Gumbel distribution is borrowed to model M_f(x, y).
The probability density function of the Gumbel (maximum) distribution is defined as
f(x; μ, β) = (1/β) · exp( −(x−μ)/β − exp(−(x−μ)/β) )
where x is the random variable, μ is the location parameter of the tail, and β is the scale parameter. The parameters can be obtained by the usual maximum-likelihood estimation (MLE). Specifically, the matrix M_f(x, y) is divided into multiple small rectangular regions that do not overlap each other, and the MLE algorithm is executed for each small rectangle to estimate the μ and β parameters of the corresponding Gumbel distribution.
For each small rectangular region, the log-likelihood estimate is computed from the Gumbel parameters of that region, generating the log-likelihood map LLM of the small rectangular regions. Specifically, for observations x_1, x_2, ..., x_n in a rectangular cell that follow a Gumbel distribution, the log-likelihood function is defined as
ln L(μ, β) = −n·ln β − Σ_{i=1}^{n} (x_i−μ)/β − Σ_{i=1}^{n} exp(−(x_i−μ)/β)
From this, the log-likelihood map LLM of the entire matrix M_f(x, y) can be computed.
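As a non-limiting illustration of step S42, the sketch below splits M_f into non-overlapping rectangular blocks, fits a Gumbel (maximum) distribution to each block by maximum likelihood, and records each block's log-likelihood to build the map LLM; the 16x16 block size is an assumption.

```python
# Hedged sketch of the block-wise Gumbel fit and log-likelihood map LLM.
import numpy as np
from scipy.stats import gumbel_r

def log_likelihood_map(m_f, block=16):
    h, w = m_f.shape
    llm = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            patch = m_f[bi * block:(bi + 1) * block,
                        bj * block:(bj + 1) * block].ravel()
            mu, beta = gumbel_r.fit(patch)                    # MLE of location/scale
            llm[bi, bj] = gumbel_r.logpdf(patch, mu, beta).sum()
    return llm
```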
In step S43, the obtained LLM is binarized to realize the segmentation of the netting breakage. Because M_f(x, y) of the netting image suppresses the background regions with regular texture and enhances the irregular features of apparent defect regions, after the log-likelihood mapping the log-likelihood of flawless netting regions is high and flat, while defect regions are clearly highlighted. The LLM generated from the netting image is therefore binarized by a decision rule that marks a region as defective (1) when its log-likelihood deviates from the mean by more than a threshold determined by λ and σ_L, and as flawless (0) otherwise, where P_k denotes the k-th small rectangular region, LLM(P_k) denotes the log-likelihood of P_k, m_L and σ_L denote the mean and variance of the log-likelihood over all small rectangular regions respectively, and λ is an empirical value that can be determined with the ROC-curve method. Binarization is then executed pixel by pixel on the adjusted LLM, completing the accurate segmentation of the defects, as shown in Fig. 8, and realizing high-precision active real-time monitoring of netting breakage.
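As a non-limiting illustration of step S43, the sketch below marks blocks whose log-likelihood deviates from the mean of all blocks by more than λ standard deviations as damaged; the two-sided rule and λ = 2.5 are assumptions, since the patent selects λ from an ROC curve.

```python
# Hedged sketch of the binarization of the log-likelihood map LLM.
import numpy as np

def binarize_llm(llm, lam=2.5):
    m_l, s_l = llm.mean(), llm.std()
    return (np.abs(llm - m_l) > lam * s_l).astype(np.uint8)   # 1 = damaged block
```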
Embodiment two. This embodiment proposes a cage netting damage monitoring system based on spatio-temporally continuous monocular images, as shown in Fig. 9, comprising:
a central shaft 11 arranged vertically at the center of the netting 12;
a crossbeam 13 arranged horizontally and rotatably connected to the central shaft 11;
a first driving mechanism (not shown in the figure), controlled by the control module, for driving the crossbeam 13 to rotate around the central shaft 11;
a telescopic arm 14, one end of which is fixed to the end of the crossbeam 13 close to the netting 12, and whose length can extend and retract in the vertical direction;
a monocular camera 15 fixed to the free end of the telescopic arm 14 for shooting the netting 12;
a control module 16 that receives the image information sent by the monocular camera 15 and monitors the breakage of the netting according to the monitoring method described in embodiment one; for the detailed monitoring method, see embodiment one, which is not repeated here.
To allow the monocular camera 15 to scan in the vertical direction, the system further includes a second driving mechanism (not shown in the figure), controlled by the control module 16, for driving the telescopic arm 14 to extend and retract in the vertical direction.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been explained in detail with reference to the foregoing embodiments, a person of ordinary skill in the art can still modify the technical solutions described in the foregoing embodiments or replace some of their technical features by equivalents; such modifications and replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the claimed technical solutions of the invention.
Claims (10)
1. A cage netting damage monitoring method based on spatio-temporally continuous monocular images, characterized by comprising:
an unsupervised-learning network model training step, in which an unsupervised-learning network model is obtained by training; and
a netting image processing step, comprising:
(1) scanning the netting with a monocular camera to obtain several spatially continuous local netting images;
(2) feeding each local netting image into the unsupervised-learning network model to obtain the depths of the different semantic regions in the image, separating the pixels whose semantics is netting, and removing the occluded pixels, so as to obtain a preliminary netting image;
(3) using the shooting overlap between spatially continuous local netting images, repairing and stitching the occluded regions of the preliminary netting images to obtain a whole netting image;
(4) performing damage detection on the whole netting image, comprising:
(41) applying a multi-level undecimated wavelet decomposition to the whole netting image and fusing the wavelet coefficients to obtain a feature fusion matrix;
(42) dividing the feature fusion matrix into multiple regions, modeling the data distribution of each region with a Gumbel distribution model, and constructing the log-likelihood map of each rectangular region;
(43) binarizing the log-likelihood map of each rectangular region, thereby segmenting damaged netting regions from undamaged regions and obtaining the netting damage detection result.
2. The monitoring method according to claim 1, characterized in that the unsupervised-learning network model training step includes:
(101) shooting the netting with a binocular camera to obtain a group of left source images I_l and a group of right source images I_r;
(102) inputting one of the two groups of images into the unsupervised-learning network model for convolutional calculation to generate two groups of corresponding disparity maps, namely a left disparity map d_l and a right disparity map d_r;
(103) applying a bilinear sampler to d_l and d_r respectively to reversely generate a reconstructed left input image Î_l and a reconstructed right input image Î_r;
(104) taking the error between I_l and Î_l together with the error between I_r and Î_r as the objective function, training the unsupervised-learning network model, and determining the model parameters.
3. The monitoring method according to claim 1, characterized in that in step (1) the scanning of the netting includes horizontal scanning and vertical scanning, so as to obtain several spatially continuous local netting images.
4. The monitoring method according to claim 1, characterized in that in step (3), before the occluded regions of the preliminary netting images are repaired and stitched, the method further includes:
(30) applying geometric rotation correction and brightness equalization to each preliminary netting image.
5. The monitoring method according to claim 1, characterized in that in step (3) the repairing and stitching of the occluded regions of the preliminary netting images includes a horizontal repair-and-stitch step and a vertical stitch step, wherein the horizontal repair-and-stitch step includes:
(31) selecting several images that contain the same scene, and taking the most complete image of the scene as the benchmark image;
(32) taking, with the benchmark image at the center, the images adjacent to it on both sides, comparing the completeness of the netting pixels in the regions where the benchmark image overlaps with each adjacent image, and filling into the benchmark image the netting pixels that appear in the adjacent images but not in the benchmark image, thereby obtaining a new benchmark image;
(32) continuing to take, on both sides of the benchmark image, two images separated from it by an interval, comparing the completeness of the overlap regions between the taken images and the current benchmark image, and filling into the benchmark image the netting pixels that appear in the taken images but not in the benchmark image, until all images that overlap with the benchmark image have been searched and filled;
(33) continuing to select the images of the next scene until the images of all scenes have been repaired, and stitching the images of all scenes to obtain the horizontally complete netting image at the depth of the benchmark image;
and wherein the vertical stitch step includes stitching all horizontally complete netting images in the vertical direction to obtain the whole netting image.
6. The monitoring method according to claim 5, characterized in that step (33) further includes a step of eliminating the stitching seam artifacts produced by image mosaicing.
7. The monitoring method according to claim 6, characterized in that the method for eliminating the stitching seam artifacts is:
(331) extracting the feature points at the junction of the images to be stitched;
(332) obtaining the perspective matrix of the images to be stitched, the perspective matrix reflecting the projection relationship between the images to be stitched;
(333) fitting, according to the perspective matrix, the images of the same object in the stitching region as seen from the viewpoint of each image to be stitched;
(334) applying a SIFT transform to the object images to obtain the pixel values at the stitching seam.
8. The monitoring method according to claim 7, characterized in that after step (334) the method further includes (335) eliminating the abnormal points at the stitching seam using the RANSAC method.
9. A cage netting damage monitoring system based on spatio-temporally continuous monocular images, characterized by comprising:
a central shaft arranged vertically at the center of the netting;
a crossbeam arranged horizontally and rotatably connected to the central shaft;
a first driving mechanism for driving the crossbeam to rotate around the central shaft;
a telescopic arm, one end of which is fixed to the end of the crossbeam close to the netting, and whose length can extend and retract in the vertical direction;
a monocular camera fixed to the free end of the telescopic arm for shooting the netting; and
a control module that receives the image information sent by the monocular camera and monitors the breakage of the netting according to the monitoring method of any one of claims 1-8.
10. The monitoring system according to claim 9, characterized by further comprising a second driving mechanism for driving the telescopic arm to extend and retract in the vertical direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425601.XA CN110335245A (en) | 2019-05-21 | 2019-05-21 | Cage netting damage monitoring method and system based on monocular space and time continuous image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425601.XA CN110335245A (en) | 2019-05-21 | 2019-05-21 | Cage netting damage monitoring method and system based on monocular space and time continuous image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110335245A true CN110335245A (en) | 2019-10-15 |
Family
ID=68139062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910425601.XA Pending CN110335245A (en) | 2019-05-21 | 2019-05-21 | Cage netting damage monitoring method and system based on monocular space and time continuous image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110335245A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105957015A (en) * | 2016-06-15 | 2016-09-21 | 武汉理工大学 | Thread bucket interior wall image 360 DEG panorama mosaicing method and system |
CN106709868A (en) * | 2016-12-14 | 2017-05-24 | 云南电网有限责任公司电力科学研究院 | Image stitching method and apparatus |
CN108399614A (en) * | 2018-01-17 | 2018-08-14 | 华南理工大学 | It is a kind of based on the fabric defect detection method without sampling small echo and Gumbel distribution |
CN108564587A (en) * | 2018-03-07 | 2018-09-21 | 浙江大学 | A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks |
CN108961327A (en) * | 2018-05-22 | 2018-12-07 | 深圳市商汤科技有限公司 | A kind of monocular depth estimation method and its device, equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Clément Godard et al.: "Unsupervised Monocular Depth Estimation with Left-Right Consistency", IEEE *
Xu Dong et al.: "Machine-vision-based online detection system for camber of hot-rolled intermediate slabs", Journal of Central South University (Science and Technology) *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111047583B (en) * | 2019-12-23 | 2022-11-18 | 大连理工大学 | Underwater netting system damage detection method based on machine vision |
CN111047583A (en) * | 2019-12-23 | 2020-04-21 | 大连理工大学 | Underwater netting system damage detection method based on machine vision |
CN111522012A (en) * | 2020-04-21 | 2020-08-11 | 中国水产科学研究院东海水产研究所 | Method for detecting damage of netting for fence |
CN111882555A (en) * | 2020-08-07 | 2020-11-03 | 中国农业大学 | Net detection method, device, equipment and storage medium based on deep learning |
CN111882555B (en) * | 2020-08-07 | 2024-03-12 | 中国农业大学 | Deep learning-based netting detection method, device, equipment and storage medium |
WO2022062242A1 (en) * | 2020-09-27 | 2022-03-31 | 广东海洋大学 | Deep learning-based underwater imaging and fishing net damage identification method and system |
US11516997B2 (en) | 2020-11-24 | 2022-12-06 | X Development Llc | Escape detection and mitigation for aquaculture |
WO2022115142A1 (en) * | 2020-11-24 | 2022-06-02 | X Development Llc | Escape detection and mitigation for aquaculture |
US11778991B2 (en) | 2020-11-24 | 2023-10-10 | X Development Llc | Escape detection and mitigation for aquaculture |
JP7494392B2 (en) | 2020-11-24 | 2024-06-03 | エックス デベロップメント エルエルシー | Escape detection and mitigation for aquaculture |
CN114155685A (en) * | 2021-12-21 | 2022-03-08 | 中国华能集团清洁能源技术研究院有限公司 | Underwater monitoring device and method for marine ranching |
CN114299130A (en) * | 2021-12-23 | 2022-04-08 | 大连理工大学 | Underwater binocular depth estimation method based on unsupervised adaptive network |
CN114972508A (en) * | 2022-05-24 | 2022-08-30 | 中国船舶重工集团公司第七一五研究所 | Sonar-based net cage fish school escape alarm method |
GR1010693B (en) * | 2023-07-06 | 2024-05-20 | Πανεπιστημιο Αιγαιου/Ειδικος Λογαριασμος Κονδυλιων Ερευνας, | Underwater fisheries net monitoring system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110335245A (en) | Cage netting damage monitoring method and system based on monocular space and time continuous image | |
CN114241031B (en) | Fish body ruler measurement and weight prediction method and device based on double-view fusion | |
CN109741257A (en) | Panorama sketch automatically shoots, splicing system and method | |
CN114067197B (en) | Pipeline defect identification and positioning method based on target detection and binocular vision | |
CN110288623B (en) | Data compression method for unmanned aerial vehicle maritime net cage culture inspection image | |
US20240029347A1 (en) | Generating three-dimensional skeleton representations of aquatic animals using machine learning | |
Troisi et al. | 3D models comparison of complex shell in underwater and dry environments | |
CN113538702A (en) | Method for generating underwater scene panoramic image of marine culture area | |
CN113160053A (en) | Pose information-based underwater video image restoration and splicing method | |
CN116778310A (en) | Acoustic-optical image fusion monitoring method and system for aquaculture | |
CN111753693B (en) | Target detection method under static scene | |
Meline et al. | A camcorder for 3D underwater reconstruction of archeological objects | |
CN114596584A (en) | Intelligent detection and identification method for marine organisms | |
CN115619623A (en) | Parallel fisheye camera image splicing method based on moving least square transformation | |
CN116311218A (en) | Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion | |
CN114359282A (en) | Multi-view-angle-fused power transmission line bird nest defect identification method and device | |
CN112883969A (en) | Rainfall intensity detection method based on convolutional neural network | |
CN114677859B (en) | Unmanned aerial vehicle route automatic correction method and device | |
CN114332682B (en) | Marine panorama defogging target identification method | |
Too et al. | A feasibility study on novel view synthesis of underwater structures using neural radiance fields | |
CN114648707A (en) | Coastline typical garbage rapid positioning and checking method based on unmanned aerial vehicle aerial photography technology | |
Li et al. | A method for identifying transmission line faults based on deep learning | |
Onmek et al. | Evaluation of underwater 3D reconstruction methods for Archaeological Objects: Case study of Anchor at Mediterranean Sea | |
Wei et al. | Underwater Object Detection of an UVMS Based on WGAN | |
Ding et al. | WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191015 |