CN106372648B - Plankton image classification method based on multi-feature fusion convolutional neural network - Google Patents

Plankton image classification method based on multi-feature fusion convolutional neural network

Info

Publication number
CN106372648B
Authority
CN
China
Prior art keywords
neural network
convolutional neural
image
plankton
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610912684.1A
Other languages
Chinese (zh)
Other versions
CN106372648A (en)
Inventor
郑海永
王超
俞智斌
戴嘉伦
郑冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN201610912684.1A priority Critical patent/CN106372648B/en
Publication of CN106372648A publication Critical patent/CN106372648A/en
Application granted granted Critical
Publication of CN106372648B publication Critical patent/CN106372648B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a plankton image classification method based on a multi-feature fusion convolutional neural network. The method first collects a large number of clear plankton images and constructs a large-scale, multi-class plankton image data set. Global and local features are then extracted using an image transformation and an edge extraction algorithm, and the original feature images, global feature images, and local feature images are fed into a deep-learning multi-feature fusion convolutional neural network for training to obtain a multi-feature fusion convolutional neural network model. Finally, plankton images are input into the model and classified according to the output probability scores. The invention combines biological morphology, computer vision methods, and deep learning, and achieves particularly high classification accuracy on large-scale, multi-class plankton image sets.

Description

Plankton image classification method based on multi-feature fusion convolutional neural network
Technical Field
The invention relates to the technical fields of biological morphology analysis, computer vision, and deep learning, and in particular to a plankton image classification method based on a multi-feature fusion convolutional neural network.
Background
Because plankton play an important role in the ecosystem, the processing and analysis of plankton images is becoming increasingly important. However, the number of plankton species is enormous, and their morphological characteristics vary greatly across species. In plankton images, individuals of the same species may differ considerably in shape, while individuals of different species may look very similar. This combination of large intra-class variation and high inter-class similarity makes plankton image classification difficult. Traditional image classification methods combine hand-crafted feature extraction with classifier design, but generic feature extraction methods are poorly suited to complex plankton images, and designing specialized features requires a great deal of time and effort while still failing to achieve good results on large-scale, multi-class plankton image classification.
Disclosure of Invention
The application provides a plankton image classification method based on a multi-feature fusion convolutional neural network, in order to solve the technical problem in the prior art that large-scale, multi-class plankton images are difficult to classify.
In order to solve the technical problems, the application adopts the following technical scheme:
a plankton image classification method based on a multi-feature fusion convolutional neural network comprises the following steps:
S1: collect clear plankton images and construct a large-scale, multi-class plankton image data set; the plankton images in the data set serve as the original feature images;
S2: process the original feature images to extract the global features of the plankton and obtain the global feature images, with the following specific steps:
S21: transform the original feature image using the Scharr operator; the transformed image contains both global and local features;
S22: remove the local features from the transformed image using bilateral filtering;
S23: enhance the contrast to highlight the global features in the transformed image;
S3: process the original feature image with the Canny edge detection algorithm from computer vision to extract the edge and texture features of the plankton, i.e. its local features, and obtain the local feature image;
S4: construct a multi-feature fusion convolutional neural network model based on the original, global, and local features, wherein the multi-feature fusion convolutional neural network comprises three mutually independent basic sub-networks, which are trained on the original feature images, the global feature images, and the local feature images, respectively; layers 1 to 5 of the multi-feature fusion convolutional neural network are convolutional layers, and layers 6 to 8 are fully connected layers;
S5: input all the original feature images, global feature images, and local feature images obtained in steps S1, S2, and S3 into the multi-feature fusion convolutional neural network model for training, finally obtaining an optimized multi-feature fusion convolutional neural network model:
S51: first set the initial state information, including the number of iterations, the learning rate, and the initialization mode;
S52: perform forward propagation and backward propagation on the multi-feature fusion convolutional neural network model, so that the model is trained on the input plankton images;
S53: output the loss function value and the accuracy;
S54: improve the performance of the multi-feature fusion convolutional neural network model by reducing the loss function value;
S55: judge whether the set number of iterations has been reached; if so, finish training to obtain the optimized multi-feature fusion convolutional neural network model; otherwise, jump back to step S52;
S6: input the plankton images to be classified into the optimized multi-feature fusion convolutional neural network model, and determine the category of each plankton image according to the final output probability scores.
Further, depending on the actual situation and requirements, the basic sub-network may use any one of the AlexNet, VGGNet, or GoogLeNet convolutional neural networks. The final classification accuracy of the multi-feature fusion convolutional neural network model increases progressively with the choice of basic sub-network (in the order listed), and the time cost of model training increases correspondingly.
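As an illustration only (the patent does not prescribe any software framework), the choice of basic sub-network could be expressed as a small selection helper; the use of torchvision model constructors below is an assumption, not part of the original disclosure.

```python
import torchvision.models as models

def build_backbone(name: str):
    """Return a convolutional feature extractor for one basic sub-network."""
    if name == "alexnet":
        return models.alexnet(weights=None).features
    if name == "vggnet":
        return models.vgg16(weights=None).features
    if name == "googlenet":
        # GoogLeNet exposes no separate `.features` module; the whole network
        # (with its classifier later replaced) would serve as the backbone.
        return models.googlenet(weights=None, aux_logits=False)
    raise ValueError(f"unknown backbone: {name}")
```

Deeper backbones (VGGNet, GoogLeNet) would be expected to raise accuracy at the cost of longer training, consistent with the trade-off described above.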
In the prior art, multiple feature maps are usually merged directly. In order to fuse the three kinds of features more effectively and fully exploit high-dimensional, hierarchical information, a preferred technical solution fuses the feature maps obtained by training the three sub-networks in the fully connected layers using a fully connected cross-mixing method.
Considering the large difference between the global feature images and the local feature images, this fully connected cross-fusion method, compared with the direct fusion of an ordinary fully connected layer, effectively reduces the error introduced when fusing the global and local feature images, achieves a full fusion of the multiple features, and improves the accuracy of plankton image classification.
Compared with the prior art, the technical solution provided by this application has the following technical effects and advantages: the method builds on biological morphology and computer vision methods and combines them with deep learning technology, thereby realizing the classification of large-scale, multi-class plankton images and greatly improving the classification accuracy.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a diagram of a multi-feature fusion convolutional neural network model according to the present invention.
Detailed Description
The embodiment of the application provides a plankton image classification method based on a multi-feature fusion convolutional neural network, in order to solve the technical problem in the prior art that large-scale, multi-class plankton images are difficult to classify.
In order to better understand the technical solutions, the technical solutions will be described in detail below with reference to the drawings and specific embodiments.
Examples
A plankton image classification method based on a multi-feature fusion convolutional neural network is disclosed, as shown in FIG. 1, and comprises the following steps:
S1: collect clear plankton images and construct a large-scale, multi-class plankton image data set; the plankton images in the data set serve as the original feature images; approximately 30,000 to 90,000 images are collected, covering approximately 30 to 50 plankton classes;
S2: process the original feature images to extract the global features of the plankton and obtain the global feature images, with the following specific steps:
S21: transform the original feature image using the Scharr operator; the transformed image contains both global and local features;
S22: remove the local features from the transformed image using bilateral filtering;
S23: enhance the contrast to highlight the global features in the transformed image;
S3: process the original feature image with the Canny edge detection algorithm from computer vision to extract the edge and texture features of the plankton, i.e. its local features, and obtain the local feature image (an illustrative sketch of the preprocessing in steps S2 and S3 is given after step S4);
S4: construct a multi-feature fusion convolutional neural network model based on the original, global, and local features. The multi-feature fusion convolutional neural network comprises three mutually independent basic sub-networks, which are trained on the original feature images, the global feature images, and the local feature images, respectively. Depending on the actual situation and requirements, any one of the AlexNet, VGGNet, or GoogLeNet convolutional neural networks may be used as the basic sub-network; the final classification accuracy of the multi-feature fusion convolutional neural network model increases progressively with this choice (in the order listed), and the time cost of model training increases correspondingly. In this embodiment, the basic sub-networks adopt the AlexNet convolutional neural network. Layers 1 to 5 of the multi-feature fusion convolutional neural network are convolutional layers, and layers 6 to 8 are fully connected layers;
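The following sketch illustrates the preprocessing of steps S2 and S3 using OpenCV; the bilateral filter parameters, Canny thresholds, and the use of histogram equalization for contrast enhancement are illustrative assumptions, since the embodiment does not fix these values.

```python
import cv2

def extract_global_feature(gray_img):
    """Step S2: Scharr transform, bilateral filtering, contrast enhancement."""
    # S21: Scharr gradient transform; the result still contains both global
    # and local structure.
    gx = cv2.Scharr(gray_img, cv2.CV_32F, 1, 0)
    gy = cv2.Scharr(gray_img, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # S22: bilateral filtering suppresses fine local detail while preserving
    # the overall contours (global features).
    smoothed = cv2.bilateralFilter(grad, d=9, sigmaColor=75, sigmaSpace=75)
    # S23: contrast enhancement (histogram equalization here) to highlight
    # the remaining global features.
    return cv2.equalizeHist(smoothed)

def extract_local_feature(gray_img):
    """Step S3: Canny edge detection for edge/texture (local) features."""
    return cv2.Canny(gray_img, threshold1=50, threshold2=150)

# Example usage on one original feature image:
# gray = cv2.imread("plankton.png", cv2.IMREAD_GRAYSCALE)
# global_img = extract_global_feature(gray)
# local_img = extract_local_feature(gray)
```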
As shown in FIG. 2, each basic sub-network in the model is independent of the others, each is a convolutional neural network, and all three share the same structural configuration: each basic sub-network contains 5 convolutional layers, whose convolutional kernels become smaller from layer to layer while their number generally increases; the kernel sizes of the five layers are 11x11, 11x11, 5x5, 3x3, and 3x3, respectively, and the numbers of kernels are 96, 96, 384, 384, and 256, respectively.
In the invention, in order to fuse the three kinds of features more effectively and fully exploit high-dimensional, hierarchical information, a preferred technical solution performs cross fusion in the fully connected layers after the three basic sub-networks: the feature maps obtained by training the three basic sub-networks are fused in the fully connected layers using a fully connected cross-mixing method. There are 3 fully connected layers in total, arranged in a pyramid shape in which the number of connections decreases layer by layer, and each fully connected layer contains 2048 neurons, as sketched below.
Considering the large difference between the global feature images and the local feature images, this fully connected cross-fusion method, compared with the direct fusion of an ordinary fully connected layer, effectively reduces the error introduced when fusing the global and local feature images, achieves a full fusion of the multiple features, and improves the accuracy of plankton image classification.
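The following is a minimal PyTorch sketch of the three-branch network and fully connected fusion described above. The exact wiring of the fully connected cross-mixing method is not fully specified in the text, so the sketch simply concatenates the flattened branch outputs before three fully connected layers of 2048 neurons each; the grayscale input channels, pooling placement, and number of classes are likewise assumptions.

```python
import torch
import torch.nn as nn

def conv_branch():
    """One basic sub-network: 5 convolutional layers with kernel sizes
    11, 11, 5, 3, 3 and kernel counts 96, 96, 384, 384, 256 (as stated above).
    Pooling placement and the 1-channel (grayscale) input are assumptions."""
    return nn.Sequential(
        nn.Conv2d(1, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(96, 96, kernel_size=11, padding=5), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(96, 384, kernel_size=5, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d((6, 6)),
    )

class MultiFeatureFusionNet(nn.Module):
    """Three independent branches (original / global / local feature images)
    fused in fully connected layers of 2048 neurons each."""

    def __init__(self, num_classes=40):
        super().__init__()
        self.branches = nn.ModuleList([conv_branch() for _ in range(3)])
        fused_dim = 3 * 256 * 6 * 6
        self.fusion = nn.Sequential(
            nn.Linear(fused_dim, 2048), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(2048, 2048), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(2048, 2048), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(2048, num_classes)

    def forward(self, original, global_img, local_img):
        feats = [branch(x).flatten(1)
                 for branch, x in zip(self.branches, (original, global_img, local_img))]
        fused = torch.cat(feats, dim=1)              # mix the three branches
        return self.classifier(self.fusion(fused))   # class scores (logits)
```

Each branch produces a 256x6x6 feature map, so the concatenated fusion input has 3 x 9,216 = 27,648 dimensions before the first 2048-neuron layer.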
S5: inputting all the original feature images, the global feature images and the local feature images obtained in the steps S1, S2 and S3 into the multi-feature fusion convolutional neural network model for training, and finally obtaining an optimized multi-feature fusion convolutional neural network model:
s51: firstly, setting initial state information including iteration times, learning rate and initialization mode;
s52: carrying out forward transmission and backward feedback on the multi-feature fusion convolutional neural network model, so that the multi-feature fusion convolutional neural network model is trained and learned according to the input plankton image;
s53: outputting a loss function value and an accuracy rate;
s54: the performance of the multi-feature fusion convolutional neural network model is improved by reducing the loss function value;
s54: judging whether the set iteration times is reached, if so, finishing training to obtain an optimized multi-feature fusion convolutional neural network model, otherwise, continuing to jump to execute the step S52;
s6: inputting the plankton images to be classified into the optimized multi-feature fusion convolutional neural network model, and judging the corresponding categories of the plankton images according to the final output probability scores.
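A hedged sketch of the training and classification procedure of steps S5 and S6, assuming the PyTorch model sketched above and a data loader yielding (original, global, local, label) batches; the SGD optimizer, iteration count, and learning rate are placeholder choices, not values fixed by the embodiment.

```python
import torch
import torch.nn as nn

def train_and_classify(model, train_loader, test_batch, max_iters=10000, lr=0.01):
    """S5: iterative training; S6: classification by output probability score."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # S51: initial settings
    model.train()
    it = 0
    while it < max_iters:
        for orig, glob, loc, label in train_loader:
            logits = model(orig, glob, loc)          # S52: forward propagation
            loss = criterion(logits, label)
            optimizer.zero_grad()
            loss.backward()                          # S52: backward propagation
            optimizer.step()                         # S54: reduce the loss value
            acc = (logits.argmax(dim=1) == label).float().mean()
            print(f"iter {it}: loss={loss.item():.4f} acc={acc.item():.4f}")  # S53
            it += 1
            if it >= max_iters:                      # S55: stop at the set iteration count
                break
    model.eval()
    with torch.no_grad():                            # S6: probability scores -> predicted class
        probs = torch.softmax(model(*test_batch), dim=1)
        return probs.argmax(dim=1)
```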
Given a sufficiently large amount of training data, the classification accuracy on these multi-class plankton images reaches 95%.
In the above embodiment of the application, a plankton image classification method based on a multi-feature fusion convolutional neural network is provided. A large number of clear plankton images are collected to construct a large-scale, multi-class plankton image data set; global and local features are then extracted using an image transformation and an edge extraction algorithm; the original, global, and local feature images are fed together into a deep-learning multi-feature fusion convolutional neural network for training to obtain a multi-feature fusion convolutional neural network model; finally, plankton images are input into the model and classified according to the final output probability scores. The invention combines biological morphology, computer vision methods, and deep learning, and achieves particularly high classification accuracy on large-scale, multi-class plankton images.
It should be noted that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make variations, modifications, additions or substitutions within the spirit and scope of the present invention.

Claims (2)

1. A plankton image classification method based on a multi-feature fusion convolutional neural network is characterized by comprising the following steps:
S1: collect clear plankton images and construct a large-scale, multi-class plankton image data set; the plankton images in the data set serve as the original feature images;
S2: process the original feature images to extract the global features of the plankton and obtain the global feature images, with the following specific steps:
S21: transform the original feature image using the Scharr operator; the transformed image contains both global and local features;
S22: remove the local features from the transformed image using bilateral filtering;
S23: enhance the contrast to highlight the global features in the transformed image;
S3: process the original feature image with the Canny edge detection algorithm from computer vision to extract the edge and texture features of the plankton, i.e. its local features, and obtain the local feature image;
S4: construct a multi-feature fusion convolutional neural network model based on the original, global, and local features, wherein the multi-feature fusion convolutional neural network comprises three mutually independent basic sub-networks, which are trained on the original feature images, the global feature images, and the local feature images, respectively; layers 1 to 5 of the multi-feature fusion convolutional neural network are convolutional layers, layers 6 to 8 are fully connected layers, and the feature maps obtained by training the three basic sub-networks are fused in the fully connected layers using a fully connected cross-mixing method;
S5: input all the original feature images, global feature images, and local feature images obtained in steps S1, S2, and S3 into the multi-feature fusion convolutional neural network for training, finally obtaining an optimized multi-feature fusion convolutional neural network model:
S51: first set the initial state information, including the number of iterations, the learning rate, and the initialization mode;
S52: perform forward propagation and backward propagation on the multi-feature fusion convolutional neural network model, so that the model is trained on the input plankton images;
S53: output the loss function value and the accuracy;
S54: improve the performance of the multi-feature fusion convolutional neural network model by reducing the loss function value;
S55: judge whether the set number of iterations has been reached; if so, finish training to obtain the optimized multi-feature fusion convolutional neural network model; otherwise, jump back to step S52;
S6: input the plankton images to be classified into the optimized multi-feature fusion convolutional neural network model, and determine the category of each plankton image according to the final output probability scores.
2. The plankton image classification method based on the multi-feature fusion convolutional neural network of claim 1, wherein the basic sub-network uses any one of AlexNet, VGGNet, or GoogLeNet.
CN201610912684.1A 2016-10-20 2016-10-20 Plankton image classification method based on multi-feature fusion convolutional neural network Active CN106372648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610912684.1A CN106372648B (en) 2016-10-20 2016-10-20 Plankton image classification method based on multi-feature fusion convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610912684.1A CN106372648B (en) 2016-10-20 2016-10-20 Plankton image classification method based on multi-feature fusion convolutional neural network

Publications (2)

Publication Number Publication Date
CN106372648A CN106372648A (en) 2017-02-01
CN106372648B (en) 2020-03-13

Family

ID=57895026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610912684.1A Active CN106372648B (en) 2016-10-20 2016-10-20 Plankton image classification method based on multi-feature fusion convolutional neural network

Country Status (1)

Country Link
CN (1) CN106372648B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427957B (en) * 2017-02-15 2021-12-21 中国科学院深圳先进技术研究院 Image classification method and system
CN106709478A (en) * 2017-02-22 2017-05-24 桂林电子科技大学 Pedestrian image feature classification method and system
CN106991666B (en) * 2017-02-24 2019-06-07 中国科学院合肥物质科学研究院 A kind of disease geo-radar image recognition methods suitable for more size pictorial informations
CN107292250A (en) * 2017-05-31 2017-10-24 西安科技大学 A kind of gait recognition method based on deep neural network
CN107403430B (en) * 2017-06-15 2020-08-07 中山大学 RGBD image semantic segmentation method
CN107506786B (en) * 2017-07-21 2020-06-02 华中科技大学 Deep learning-based attribute classification identification method
CN107610129B (en) * 2017-08-14 2020-04-03 四川大学 CNN-based multi-modal nasopharyngeal tumor joint segmentation method
CN107633258B (en) * 2017-08-21 2020-04-10 北京精密机电控制设备研究所 Deep learning identification system and method based on feedforward feature extraction
CN107610141B (en) * 2017-09-05 2020-04-03 华南理工大学 Remote sensing image semantic segmentation method based on deep learning
WO2019084560A1 (en) * 2017-10-27 2019-05-02 Google Llc Neural architecture search
CN108229341B (en) * 2017-12-15 2021-08-06 北京市商汤科技开发有限公司 Classification method and device, electronic equipment and computer storage medium
CN108038459A (en) * 2017-12-20 2018-05-15 深圳先进技术研究院 A kind of detection recognition method of aquatic organism, terminal device and storage medium
CN108154509B (en) * 2018-01-12 2022-11-11 平安科技(深圳)有限公司 Cancer identification method, device and storage medium
CN108171276B (en) * 2018-01-17 2019-07-23 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108491880B (en) * 2018-03-23 2021-09-03 西安电子科技大学 Object classification and pose estimation method based on neural network
CN108805181B (en) * 2018-05-25 2021-11-23 深圳大学 Image classification device and method based on multi-classification model
CN109190640A (en) * 2018-08-20 2019-01-11 贵州省生物研究所 A kind of the intercept type acquisition method and acquisition system of the planktonic organism based on big data
CN109190695B (en) * 2018-08-28 2021-08-03 中国海洋大学 Fish image classification method based on deep convolutional neural network
CN109711343A (en) * 2018-12-27 2019-05-03 北京思图场景数据科技服务有限公司 Behavioral structure method based on the tracking of expression, gesture recognition and expression in the eyes
CN109886933B (en) * 2019-01-25 2021-11-02 腾讯科技(深圳)有限公司 Medical image recognition method and device and storage medium
CN109993201B (en) * 2019-02-14 2024-07-16 平安科技(深圳)有限公司 Image processing method, device and readable storage medium
CN109871905A (en) * 2019-03-14 2019-06-11 同济大学 A kind of plant leaf identification method based on attention mechanism depth model
CN110188794B (en) * 2019-04-23 2023-02-28 深圳大学 Deep learning model training method, device, equipment and storage medium
CN110287990A (en) * 2019-05-21 2019-09-27 山东大学 Microalgae image classification method, system, equipment and storage medium
CN110825381A (en) * 2019-09-29 2020-02-21 南京大学 CNN-based bug positioning method combining source code semantics and grammatical features
CN111274860B (en) * 2019-11-08 2023-08-22 杭州安脉盛智能技术有限公司 Recognition method for online automatic tobacco grade sorting based on machine vision
US11682111B2 (en) 2020-03-18 2023-06-20 International Business Machines Corporation Semi-supervised classification of microorganism
CN111723714B (en) * 2020-06-10 2023-11-03 上海商汤智能科技有限公司 Method, device and medium for identifying authenticity of face image
CN111899241B (en) * 2020-07-28 2022-03-18 华中科技大学 Quantitative on-line detection method and system for defects of PCB (printed Circuit Board) patches in front of furnace
CN111898677A (en) * 2020-07-30 2020-11-06 大连海事大学 Plankton automatic detection method based on deep learning
CN112069958A (en) * 2020-08-27 2020-12-11 广西柳工机械股份有限公司 Material identification method, device, equipment and storage medium
CN112016574B (en) * 2020-10-22 2021-02-12 北京科技大学 Image classification method based on feature fusion
CN112488170B (en) * 2020-11-24 2024-04-05 杭州电子科技大学 Multi-feature fusion image classification method based on deep learning
CN112652032B (en) * 2021-01-14 2023-05-30 深圳科亚医疗科技有限公司 Modeling method for organ, image classification device, and storage medium
CN113205039B (en) * 2021-04-29 2023-07-28 广东电网有限责任公司东莞供电局 Power equipment fault image recognition disaster investigation system and method based on multiple DCNN networks
CN113837267A (en) * 2021-06-30 2021-12-24 山东易华录信息技术有限公司 Plankton image classification method based on different number of samples
CN114842510A (en) * 2022-05-27 2022-08-02 澜途集思生态科技集团有限公司 Ecological organism identification method based on ScatchDet algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488515A (en) * 2014-09-17 2016-04-13 富士通株式会社 Method for training convolutional neural network classifier and image processing device
CN105825235A (en) * 2016-03-16 2016-08-03 博康智能网络科技股份有限公司 Image identification method based on deep learning of multiple characteristic graphs

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488515A (en) * 2014-09-17 2016-04-13 富士通株式会社 Method for training convolutional neural network classifier and image processing device
CN105825235A (en) * 2016-03-16 2016-08-03 博康智能网络科技股份有限公司 Image identification method based on deep learning of multiple characteristic graphs

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Performance Evaluation of Hybrid CNN for SIPPER Plankton Image Classification; Hussein A. Al-Barazanchi et al.; 《2015 Third International Conference on Image Information Processing》; 20151231; pages 551-556 *
ZooplanktoNet: Deep Convolutional Network for Zooplankton Classification; Jialun Dai et al.; 《IEEE》; 20160609; pages 1-6 *

Also Published As

Publication number Publication date
CN106372648A (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN106372648B (en) Plankton image classification method based on multi-feature fusion convolutional neural network
Liu et al. Multi-scale patch aggregation (mpa) for simultaneous detection and segmentation
Raza et al. Appearance based pedestrians’ head pose and body orientation estimation using deep learning
US10140522B2 (en) Fully convolutional pyramid networks for pedestrian detection
CN105631426B (en) The method and device of text detection is carried out to picture
CN107239759B (en) High-spatial-resolution remote sensing image transfer learning method based on depth features
KR20180037192A (en) Detection of unknown classes and initialization of classifiers for unknown classes
Shuai et al. Integrating parametric and non-parametric models for scene labeling
Song et al. Joint multi-feature spatial context for scene recognition on the semantic manifold
JP6107531B2 (en) Feature extraction program and information processing apparatus
Yoo et al. Fast training of convolutional neural network classifiers through extreme learning machines
CN105095836A (en) Skin texture detecting method and apparatus based on Gabor features
CN102136074B (en) Man-machine interface (MMI) based wood image texture analyzing and identifying method
Zhang et al. Transland: An adversarial transfer learning approach for migratable urban land usage classification using remote sensing
Petrovai et al. Multi-task network for panoptic segmentation in automated driving
CN105956610B (en) A kind of remote sensing images classification of landform method based on multi-layer coding structure
Yuan et al. Few-shot scene classification with multi-attention deepemd network in remote sensing
CN113168509A (en) Coordinate estimation on N-spheres by sphere regression
Ruusuvuori et al. Image segmentation using sparse logistic regression with spatial prior
Zhou et al. Superpixel attention guided network for accurate and real-time salient object detection
Lu et al. An efficient fine-grained vehicle recognition method based on part-level feature optimization
CN103295026A (en) Spatial local clustering description vector based image classification method
Deepan et al. Road recognition from remote sensing imagery using machine learning
CN108960246A (en) A kind of binary conversion treatment device and method for image recognition
Ramesh et al. Scalable scene understanding via saliency consensus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant