CN109543585A - Underwater optical object detection and recognition method based on convolutional neural networks - Google Patents

Underwater optical object detection and recognition method based on convolutional neural networks

Info

Publication number
CN109543585A
CN109543585A (application CN201811365290.4A)
Authority
CN
China
Prior art keywords
image
target
detection
indicate
value
Prior art date
Legal status
Pending
Application number
CN201811365290.4A
Other languages
Chinese (zh)
Inventor
李学龙
王琦
宋春彪
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201811365290.4A
Publication of CN109543585A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an underwater optical object detection and recognition method based on convolutional neural networks. First, each acquired underwater image is enhanced by a linear gray-level transformation and histogram equalization, which reduces the influence of the underwater environment on the original image and improves the robustness and detection accuracy of the method. Then a multilayer deep convolutional neural network computes multilayer deep features of the image, and the output layer regresses the detected positions and categories of the different targets; the network is trained by supervised learning. The method achieves better classification results than hand-crafted features and higher recognition accuracy.

Description

Underwater optical object detection and recognition method based on convolutional neural networks
Technical field
The invention belongs to the technical fields of computer vision and computer-assisted image processing, and specifically relates to an underwater-image object detection and recognition method based on convolutional neural networks.
Background art
With the progress of deep learning technology, the field of object detection has developed rapidly in recent years. In 2013, scholars at New York University were the first to apply deep learning to target detection, achieving a large improvement over previous detection methods. Ross Girshick subsequently published a series of region-based methods built on convolutional neural network features: a region-proposal algorithm first extracts regions likely to contain targets, a convolutional network then computes convolutional features for each extracted region, and finally a support vector machine classifies each region. More recent research achieves better results and higher speed by using multi-scale convolutional feature maps. However, object detection and recognition in underwater scenes faces two problems. First, underwater light suffers severe attenuation and scattering, so underwater images exhibit scattering blur, color degradation, and similar effects, which prevents color and texture features from serving as detection features for underwater optical targets. Second, real underwater scenes are complex, so traditional segmentation-based target detection is no longer applicable. Current research on underwater target detection and recognition mostly relies on hand-crafted features and can be divided into two kinds.
The first kind uses a visual attention mechanism to form a saliency map of possible target regions and then segments the targets in the image with an active contour method. Christian et al., in "A fully automated method to detect and segment a manufactured object in an underwater color image", Advances in Signal Processing for Maritime Applications, 0(0):1-10, 2010, use three kinds of feature information (color, brightness, and orientation) and obtain the final saliency map by fusing 44 different feature maps. However, the feature computation of this method is time-consuming, its runtime is hard to improve, and it struggles to detect targets whose color is close to the background. The second kind is the color-feature-based object recognition of Bazeille et al. in "Color-based underwater object recognition using water light attenuation", Intelligent Service Robotics, 5(2):109-118, 2012, which reduces the influence of scale changes and compensates color information through active lighting and prior knowledge of light attenuation, reducing the detection error caused by underwater light attenuation.
These methods have certain limitations. Because detection mostly relies on traditional low-level features such as color and brightness, detection accuracy is low, classification accuracy for different targets is poor, the methods are not target-specific, and they cannot cope with complex and varied underwater environments. The detection process is easily disturbed by factors such as lighting, so robustness to environmental change is poor. Methods based on unsupervised learning are time-consuming to train, and their complexity makes results hard to reproduce.
Summary of the invention
To overcome the inability of existing object detection methods based on traditional feature detection to accurately detect and recognize underwater optical targets, the present invention provides an underwater optical object detection and recognition method based on convolutional neural networks. The method takes captured underwater images as network input and exploits the strong representational power of neural networks for optical targets: it computes deep features of the underwater optical images, and the output layer regresses the detected positions and categories of the different targets.
An underwater optical object detection and recognition method based on convolutional neural networks, characterized by comprising the following steps:
Step 1: apply the following piecewise linear gray-level transformation to each acquired underwater image to obtain the transformed image:

g(x, y) = (c/a) f(x, y), 0 ≤ f(x, y) < a
g(x, y) = ((d - c)/(b - a)) (f(x, y) - a) + c, a ≤ f(x, y) ≤ b
g(x, y) = ((F - 1 - d)/(F - 1 - b)) (f(x, y) - b) + d, b < f(x, y) ≤ F - 1

where (x, y) are the image pixel coordinates, f denotes the originally acquired image and g the gray-transformed image, f(x, y) and g(x, y) are the pixel values of f and g at (x, y), [a, b] is the gray-value range of image f, [c, d] is the gray-value range of image g with c and d taking values in [0, 255] and c < d, and F is the number of gray levels of the originally acquired image.
Step 2: apply histogram equalization to each gray-transformed image to obtain the enhanced images, and randomly divide all enhanced images into a training data set and a test data set, where the ratio of the number of images in the training data set to that in the test data set is 3:1.
Step 3: flip each enhanced image horizontally; all enhanced images and their flipped versions together form a new data set. Images are randomly cropped from the new data set, with interpolation, to obtain an input image matrix data set of size 300*300*3;
Step 4: with the VGG16 network as the initial convolutional feature extraction network, input each matrix of the input image matrix data set obtained in step 3 to the network and repeat the following steps until the Loss value falls below 0.1, which yields the trained network:
Step a: input the image matrix to the six convolutional predictor layers of the VGG16 network; each layer produces a two-dimensional output matrix, and all output matrices are summed to obtain the image feature matrix. The image feature matrix contains the detection boxes of the different targets and the predicted probability that each detection box belongs to each category; the convolution kernel size of the convolutional predictors is 3*3*3;
Step b: compute the Jaccard coefficient between each detected target position and the true target position in the annotated image:

J(A_p, B_i^p) = |A_p ∩ B_i^p| / |A_p ∪ B_i^p|

where A_p denotes the position box of the p-th target in the annotated image, B_i^p denotes the i-th detected position box at that target's position in the corresponding image feature matrix, |A_p ∩ B_i^p| is the number of pixels whose attribute is 1 in both A_p and B_i^p, and |A_p ∪ B_i^p| is the number of pixels whose attribute is 1 in A_p or in B_i^p; a pixel's attribute is 1 when its value is not 0. Here p = 1, ..., P and i = 1, ..., J_i, where P is the total number of targets and J_i is the number of detected position boxes at the position of target p in the image feature matrix;
Step c: use the objective function

Loss = (1/N) (L_conf + α L_loc)

to compute the Loss value between the detected target positions and the true target positions in the annotated image, where N is the number of detected positions whose Jaccard coefficient exceeds a threshold with value range [0.4, 1], α is a weight coefficient with α = 1, L_conf is the confidence loss between detection boxes and annotation boxes, and L_loc is the localization loss between detection boxes and annotation boxes. They are computed as

L_conf = - Σ_{i∈Pos} x_ij^p log(c_i^p)

L_loc = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_ij^p smooth_L1(l_i^m - g_j^m)

where Pos is the set of all detection boxes in the image feature matrix, M is the set of all annotation boxes in the annotated image, x_ij^p indicates whether the i-th detection box matches the j-th annotation box of target p (x_ij^p = 1 if they match and 0 otherwise), c_i^p is the predicted probability that detection box i belongs to category p, cx and cy are the x-axis and y-axis deviations of the detection boxes of target p from the annotation-box center, w and h are the width and height of the annotation box, smooth_L1 denotes the smooth L1 norm, and l_i^m and g_j^m denote coordinate m of the i-th detection box and of the j-th annotation box, respectively.
Step 5: input the images of the test data set obtained in step 2 to the trained network; the network outputs all target positions in each image and their categories, i.e., the final recognition result.
The beneficial effects of the present invention are as follows. Because a multilayer deep convolutional neural network computes multilayer deep features of the image, the method achieves, starting from a pre-trained model, better classification results and higher recognition accuracy than hand-crafted features. Because the network is trained by supervised learning, it learns spatial structure features that are more effective than the original image, so the detection accuracy of target positions is higher. Because the acquired underwater images are enhanced in advance by a linear gray-level transformation and histogram equalization, the influence of the underwater environment on the original images is reduced, which improves the robustness and detection accuracy of the method.
Brief description of the drawings
Fig. 1 is a flowchart of the underwater optical object detection and recognition method based on convolutional neural networks of the present invention.
Fig. 2 shows the result curves obtained when the method of the present invention classifies different kinds of objects.
Specific embodiment
The present invention is further explained below with reference to the accompanying drawings and embodiments; the present invention includes, but is not limited to, the following embodiments.
As shown in Fig. 1, the present invention provides an underwater optical object detection and recognition method based on convolutional neural networks; its basic procedure is as follows:
1. Piecewise linear gray-level transformation
Dim underwater ambient light, exposure problems, and the acquisition equipment itself all reduce the contrast of underwater images during imaging and make image details indistinct. Therefore, a piecewise linear gray-level transformation is first applied to each acquired underwater image to expand the dynamic range of the image and improve the contrast of the original image. The linear transformation is:

g(x, y) = (c/a) f(x, y), 0 ≤ f(x, y) < a
g(x, y) = ((d - c)/(b - a)) (f(x, y) - a) + c, a ≤ f(x, y) ≤ b
g(x, y) = ((F - 1 - d)/(F - 1 - b)) (f(x, y) - b) + d, b < f(x, y) ≤ F - 1

where (x, y) are the image pixel coordinates, f denotes the originally acquired image and g the gray-transformed image, f(x, y) and g(x, y) are the pixel values of f and g at (x, y), [a, b] is the gray-value range of image f, [c, d] is the gray-value range of image g with c and d taking values in [0, 255] and c < d, and F is the number of gray levels of the originally acquired image.
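As a concrete illustration, the following is a minimal NumPy sketch of such a piecewise linear stretch; the break points a, b and the target range c, d are free parameters, and the example values in the comment are assumptions rather than values taken from the patent:

```python
import numpy as np

def piecewise_linear_stretch(f, a, b, c, d, F=256):
    """Map [0, a) to [0, c), [a, b] to [c, d], (b, F-1] to (d, F-1] linearly."""
    f = f.astype(np.float64)
    g = np.where(f < a, f * (c / a),
        np.where(f <= b, (f - a) * (d - c) / (b - a) + c,
                 (f - b) * (F - 1 - d) / (F - 1 - b) + d))
    return np.clip(g, 0, F - 1).astype(np.uint8)

# example: stretch the mid-range [60, 180] of a dark underwater frame onto [30, 220]
# out = piecewise_linear_stretch(gray_img, a=60, b=180, c=30, d=220)
```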
2. Histogram equalization
Histogram equalization is applied to each gray-transformed image; it makes image details clearer and improves the visual effect. Let r ∈ [0, 1] be the normalized gray value before the transformation, P_r(r) the non-uniform probability density function of the original image, and T(r) the gray-scale mapping function; let s = T(r) denote the transformed gray value, also normalized to s ∈ [0, 1], and let P_s(s) be the probability density function after equalization. Probability theory gives:

P_s(s) = P_r(r) dr/ds (5)

Histogram equalization requires P_s(s) = 1, so the formula above yields:

ds = P_r(r) dr (6)

Integrating both sides of formula (6) gives the transform function of histogram equalization:

s = T(r) = ∫_0^r P_r(w) dw (7)

For a digital image with n pixels in total, where the k-th gray level r_k occurs n_k times, the discrete form of histogram equalization is:

s_k = T(r_k) = Σ_{j=0}^{k} n_j / n (8)

where s_k is the final gray value obtained for the k-th gray level after discretization.
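A minimal NumPy sketch of this discrete mapping follows; it is equivalent in spirit to OpenCV's cv2.equalizeHist for 8-bit images:

```python
import numpy as np

def equalize_hist(img, F=256):
    """Discrete histogram equalization: s_k = sum over j <= k of n_j / n."""
    hist = np.bincount(img.ravel(), minlength=F)    # n_j: count of gray level j
    cdf = np.cumsum(hist) / img.size                # cumulative frequency in [0, 1]
    lut = np.round(cdf * (F - 1)).astype(np.uint8)  # de-normalized mapping T(r_k)
    return lut[img]                                 # apply the mapping per pixel
```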
After histogram equalization, the enhanced images are obtained and randomly divided into a training data set and a test data set, where the ratio of the number of images in the training data set to that in the test data set is 3:1.
3. Horizontal flip
Using the method provided by the open-source computer vision library OpenCV, each enhanced image is flipped horizontally; all enhanced images and their flipped versions together form a new data set. Images are randomly cropped from the new data set, with interpolation, to obtain the input image matrix data set of size 300*300*3.
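The patent fixes only the horizontal flip and the 300*300*3 output size; the crop ratio and interpolation mode in the sketch below are assumptions:

```python
import cv2
import numpy as np

def flip_and_random_crop(img, out_size=300, crop_ratio=0.8):
    """Return one random out_size x out_size crop of the image and one of its mirror."""
    samples = []
    for im in (img, cv2.flip(img, 1)):     # flipCode=1 flips about the vertical axis
        h, w = im.shape[:2]
        ch, cw = int(h * crop_ratio), int(w * crop_ratio)  # assumed crop window
        y = np.random.randint(0, h - ch + 1)
        x = np.random.randint(0, w - cw + 1)
        crop = im[y:y + ch, x:x + cw]
        samples.append(cv2.resize(crop, (out_size, out_size),
                                  interpolation=cv2.INTER_LINEAR))
    return samples
```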
4. Network training
The present invention uses the VGG16 network as the initial convolutional feature extraction network. The VGG16 structure and its layer parameters are described in K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv preprint arXiv:1409.1556, 2014.
For each matrix in the input image matrix data set obtained in step 3, the following steps are repeated to train the VGG16 network. The training objective is the Loss value; training is considered converged when the curve of Loss value versus iteration count no longer fluctuates noticeably. Concretely, the convergence threshold of the objective Loss can be set to 0.1, or the number of iterations can be fixed at 20,000.
(1) Input the image matrix to the six convolutional predictor layers of the VGG16 network; each layer produces a two-dimensional output matrix, and all output matrices are summed to obtain the image feature matrix. The image feature matrix contains the detection boxes of the different targets and the predicted probability that each detection box belongs to each category; the convolution kernel size of the convolutional predictors is 3*3*3.
(2) Compute the Jaccard coefficient between each detected target position and the true target position in the annotated image:

J(A_p, B_i^p) = |A_p ∩ B_i^p| / |A_p ∪ B_i^p|

where A_p denotes the position box of the p-th target in the annotated image, B_i^p denotes the i-th detected position box at that target's position in the corresponding image feature matrix, |A_p ∩ B_i^p| is the number of pixels whose attribute is 1 in both A_p and B_i^p, and |A_p ∪ B_i^p| is the number of pixels whose attribute is 1 in A_p or in B_i^p; a pixel's attribute is 1 when its value is not 0. Here p = 1, ..., P and i = 1, ..., J_i, where P is the total number of targets and J_i is the number of detected position boxes at the position of target p in the image feature matrix.
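On binary masks, where a pixel's attribute is 1 exactly when its value is nonzero, the coefficient can be computed directly as defined; a minimal sketch:

```python
import numpy as np

def jaccard(A_p, B_ip):
    """|A_p ∩ B_i^p| / |A_p ∪ B_i^p| over pixels with attribute 1 (value != 0)."""
    a, b = A_p != 0, B_ip != 0
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / union if union else 0.0
```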
(3) Use the objective function

Loss = (1/N) (L_conf + α L_loc)

to compute the Loss value between the detected target positions and the true target positions in the annotated image, where N is the number of detected positions whose Jaccard coefficient exceeds a threshold with value range [0.4, 1], α is a weight coefficient with α = 1, L_conf is the confidence loss between detection boxes and annotation boxes, and L_loc is the localization loss between detection boxes and annotation boxes. They are computed as

L_conf = - Σ_{i∈Pos} x_ij^p log(c_i^p)

L_loc = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_ij^p smooth_L1(l_i^m - g_j^m)

where Pos is the set of all detection boxes in the image feature matrix, M is the set of all annotation boxes in the annotated image, x_ij^p indicates whether the i-th detection box matches the j-th annotation box of target p (x_ij^p = 1 if they match and 0 otherwise), c_i^p is the predicted probability that detection box i belongs to category p, cx and cy are the x-axis and y-axis deviations of the detection boxes of target p from the annotation-box center, w and h are the width and height of the annotation box, smooth_L1 denotes the smooth L1 norm, and l_i^m and g_j^m denote coordinate m of the i-th detection box and of the j-th annotation box, respectively.
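A NumPy sketch of the two loss terms in the form given above; the matching indicators x_ij^p, predicted class probabilities, and (cx, cy, w, h) boxes are assumed to be precomputed, so this illustrates the objective rather than the patent's exact implementation:

```python
import numpy as np

def smooth_l1(z):
    """smooth_L1(z) = 0.5 z^2 if |z| < 1, else |z| - 0.5."""
    z = np.abs(z)
    return np.where(z < 1, 0.5 * z ** 2, z - 0.5)

def detection_loss(x, c_prob, l_boxes, g_boxes, alpha=1.0):
    """Loss = (1/N) (L_conf + alpha * L_loc) over matched detection/annotation pairs.

    x       : (num_det, num_gt) 0/1 matching indicators x_ij for one category
    c_prob  : (num_det,) predicted probability of that category per detection box
    l_boxes : (num_det, 4) predicted (cx, cy, w, h)
    g_boxes : (num_gt, 4) annotated (cx, cy, w, h)
    """
    det_idx, gt_idx = np.nonzero(x)          # the matched pairs, i.e. the set Pos
    N = max(len(det_idx), 1)
    L_conf = -np.log(np.clip(c_prob[det_idx], 1e-12, 1.0)).sum()
    L_loc = smooth_l1(l_boxes[det_idx] - g_boxes[gt_idx]).sum()
    return (L_conf + alpha * L_loc) / N
```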
The training process optimizes the above objective function by stochastic gradient descent. Based on the experimental results in the present invention, to obtain more stable results and higher training speed, the number of iterations can be set to 20,000, with the learning rate set to 0.01 for the first 50% of the iterations and to 0.001 for the last 50%, the weight decay set to 0.0005, and each batch containing 64 training images.
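Expressed as a runnable PyTorch-style sketch of this schedule (the one-layer net and the random batch below are stand-ins for the detection network and the data loader, which the patent does not specify at this level):

```python
import torch
from torch import nn

net = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # stand-in for the detection network
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=0.0005)

for it in range(20000):
    if it == 10000:                         # after the first 50% of the iterations
        for group in optimizer.param_groups:
            group["lr"] = 0.001             # lower the learning rate for the rest
    images = torch.randn(64, 3, 300, 300)   # stand-in batch of 64 training images
    loss = net(images).pow(2).mean()        # stand-in for the detection Loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```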
5. Model evaluation
Using the trained network, the present invention processes the input image with the VGG16-based feed-forward convolutional network and obtains a set of fixed-size detection boxes together with the confidence that the target in each box belongs to each category. Recall is used as the evaluation index to comprehensively assess how accurately the final trained model classifies the data. It is computed as Recall = TP / (TP + FP), where TP is the number of correct classification results and FP is the number of classification errors.
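In code the metric is a one-liner; the counts below are hypothetical:

```python
def recall(tp, fp):
    """Recall = TP / (TP + FP): correct results over all classified results."""
    return tp / (tp + fp)

print(recall(870, 130))  # e.g. 870 correct vs. 130 erroneous results gives 0.87
```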
This embodiment was simulated in MATLAB on a machine with an Intel i5-4590 3.2 GHz CPU and 16 GB of memory running the WINDOWS 10 operating system. The data used is the public data set PASCAL VOC2007. Fig. 2 shows the classification results of the method of the present invention on this data set for the animal, plant, and stone classes. The horizontal axis is the number of detections, in thousands; the vertical axis is the percentage accounted for by each kind. Fig. 2(a) is the result curve for animals, Fig. 2(b) for stones, and Fig. 2(c) for plants. From bottom to top, the regions show how the final result is affected by the accumulated proportions of correct detections (Cor), localization errors (Loc), similar-category errors (Sim), other errors (Oth), and background errors (BG). The solid line is the curve of recall when the Jaccard coefficient threshold is 0.7; the dashed line is the curve of recall when it is 0.5. The results show that the invention classifies a variety of different objects effectively: most classification results have high confidence, the recall for the different kinds lies roughly between 85% and 90%, and if the Jaccard coefficient threshold is set to a lower value, the classification precision of the invention improves further.

Claims (1)

1. An underwater optical object detection and recognition method based on convolutional neural networks, characterized by comprising the following steps:
Step 1: apply the following piecewise linear gray-level transformation to each acquired underwater image to obtain the transformed image:

g(x, y) = (c/a) f(x, y), 0 ≤ f(x, y) < a
g(x, y) = ((d - c)/(b - a)) (f(x, y) - a) + c, a ≤ f(x, y) ≤ b
g(x, y) = ((F - 1 - d)/(F - 1 - b)) (f(x, y) - b) + d, b < f(x, y) ≤ F - 1

where (x, y) are the image pixel coordinates, f denotes the originally acquired image and g the gray-transformed image, f(x, y) and g(x, y) are the pixel values of f and g at (x, y), [a, b] is the gray-value range of image f, [c, d] is the gray-value range of image g with c and d taking values in [0, 255] and c < d, and F is the number of gray levels of the originally acquired image;
Step 2: apply histogram equalization to each gray-transformed image to obtain the enhanced images, and randomly divide all enhanced images into a training data set and a test data set, where the ratio of the number of images in the training data set to that in the test data set is 3:1;
Step 3: flip each enhanced image horizontally; all enhanced images and their flipped versions together form a new data set, and images are randomly cropped from the new data set, with interpolation, to obtain an input image matrix data set of size 300*300*3;
Step 4: with the VGG16 network as the initial convolutional feature extraction network, input each matrix of the input image matrix data set obtained in step 3 to the network and repeat the following steps until the Loss value falls below 0.1, which yields the trained network:
Step a: input the image matrix to the six convolutional predictor layers of the VGG16 network; each layer produces a two-dimensional output matrix, and all output matrices are summed to obtain the image feature matrix. The image feature matrix contains the detection boxes of the different targets and the predicted probability that each detection box belongs to each category; the convolution kernel size of the convolutional predictors is 3*3*3;
Step b: compute the Jaccard coefficient between each detected target position and the true target position in the annotated image:

J(A_p, B_i^p) = |A_p ∩ B_i^p| / |A_p ∪ B_i^p|

where A_p denotes the position box of the p-th target in the annotated image, B_i^p denotes the i-th detected position box at that target's position in the corresponding image feature matrix, |A_p ∩ B_i^p| is the number of pixels whose attribute is 1 in both A_p and B_i^p, and |A_p ∪ B_i^p| is the number of pixels whose attribute is 1 in A_p or in B_i^p; a pixel's attribute is 1 when its value is not 0. Here p = 1, ..., P and i = 1, ..., J_i, where P is the total number of targets and J_i is the number of detected position boxes at the position of target p in the image feature matrix;
Step c: use the objective function

Loss = (1/N) (L_conf + α L_loc)

to compute the Loss value between the detected target positions and the true target positions in the annotated image, where N is the number of detected positions whose Jaccard coefficient exceeds a threshold with value range [0.4, 1], α is a weight coefficient with α = 1, L_conf is the confidence loss between detection boxes and annotation boxes, and L_loc is the localization loss between detection boxes and annotation boxes. They are computed as

L_conf = - Σ_{i∈Pos} x_ij^p log(c_i^p)

L_loc = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_ij^p smooth_L1(l_i^m - g_j^m)

where Pos is the set of all detection boxes in the image feature matrix, M is the set of all annotation boxes in the annotated image, x_ij^p indicates whether the i-th detection box matches the j-th annotation box of target p (x_ij^p = 1 if they match and 0 otherwise), c_i^p is the predicted probability that detection box i belongs to category p, cx and cy are the x-axis and y-axis deviations of the detection boxes of target p from the annotation-box center, w and h are the width and height of the annotation box, smooth_L1 denotes the smooth L1 norm, and l_i^m and g_j^m denote coordinate m of the i-th detection box and of the j-th annotation box, respectively;
Step 5: input the images of the test data set obtained in step 2 to the trained network; the network outputs all target positions in each image and their categories, i.e., the final recognition result.
CN201811365290.4A 2018-11-16 2018-11-16 Underwater optical object detection and recognition method based on convolutional neural networks Pending CN109543585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811365290.4A CN109543585A (en) 2018-11-16 2018-11-16 Underwater optical object detection and recognition method based on convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811365290.4A CN109543585A (en) 2018-11-16 2018-11-16 Underwater optical object detection and recognition method based on convolutional neural networks

Publications (1)

Publication Number Publication Date
CN109543585A true CN109543585A (en) 2019-03-29

Family

ID=65847738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811365290.4A Pending CN109543585A (en) 2018-11-16 2018-11-16 Underwater optics object detection and recognition method based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN109543585A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116740A (en) * 2013-01-24 2013-05-22 中国科学院声学研究所 Method and device for identifying underwater targets
CN107730473A (en) * 2017-11-03 2018-02-23 中国矿业大学 A kind of underground coal mine image processing method based on deep neural network
CN108564065A (en) * 2018-04-28 2018-09-21 广东电网有限责任公司 A kind of cable tunnel open fire recognition methods based on SSD

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245602A (en) * 2019-06-12 2019-09-17 哈尔滨工程大学 A kind of underwater quiet target identification method based on depth convolution feature
CN110706291A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Visual measurement method suitable for three-dimensional trajectory of moving object in pool experiment
CN111445496A (en) * 2020-02-26 2020-07-24 沈阳大学 Underwater image recognition tracking system and method
CN111445496B (en) * 2020-02-26 2023-06-30 沈阳大学 Underwater image recognition tracking system and method
CN112597906A (en) * 2020-12-25 2021-04-02 杭州电子科技大学 Underwater target detection method based on degradation prior
CN112597906B (en) * 2020-12-25 2024-02-02 杭州电子科技大学 Underwater target detection method based on degradation priori
CN112926383A (en) * 2021-01-08 2021-06-08 浙江大学 Automatic target identification system based on underwater laser image
CN112927222A (en) * 2021-03-29 2021-06-08 福州大学 Method for realizing multi-type photovoltaic array hot spot detection based on hybrid improved Faster R-CNN
CN114092793A (en) * 2021-11-12 2022-02-25 杭州电子科技大学 End-to-end biological target detection method suitable for complex underwater environment
CN114092793B (en) * 2021-11-12 2024-05-17 杭州电子科技大学 End-to-end biological target detection method suitable for complex underwater environment

Similar Documents

Publication Publication Date Title
CN109543585A (en) Underwater optical object detection and recognition method based on convolutional neural networks
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN110472627B (en) End-to-end SAR image recognition method, device and storage medium
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN108427920B (en) Edge-sea defense target detection method based on deep learning
EP3614308B1 (en) Joint deep learning for land cover and land use classification
WO2019140767A1 (en) Recognition system for security check and control method thereof
CN103049763B (en) Context-constraint-based target identification method
CN111445488B (en) Method for automatically identifying and dividing salt body by weak supervision learning
Karimpouli et al. Coal cleat/fracture segmentation using convolutional neural networks
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN110211048B (en) Complex archive image tilt correction method based on convolutional neural network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN109840483B (en) Landslide crack detection and identification method and device
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN110991257B (en) Polarized SAR oil spill detection method based on feature fusion and SVM
CN112613350A (en) High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN112749621A (en) Remote sensing image cloud layer detection method based on deep convolutional neural network
Yaohua et al. A SAR oil spill image recognition method based on densenet convolutional neural network
Sun et al. Image recognition technology in texture identification of marine sediment sonar image
CN115861409A (en) Soybean leaf area measuring and calculating method, system, computer equipment and storage medium
CN107529647B (en) Cloud picture cloud amount calculation method based on multilayer unsupervised sparse learning network
CN117789037A (en) Crop growth period prediction method and device
CN113469097A (en) SSD (solid State disk) network-based real-time detection method for water surface floating object multiple cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190329)