CN112613425B - Target identification system for small sample underwater image


Info

Publication number
CN112613425B
Authority
CN
China
Prior art keywords
unit
module
imaging
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011573123.6A
Other languages
Chinese (zh)
Other versions
CN112613425A (en)
Inventor
Yu Changli (于昌利)
Zhou Xiaoteng (周晓滕)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Ship Technology Research Institute
Harbin Institute of Technology Weihai
Original Assignee
Shandong Ship Technology Research Institute
Harbin Institute of Technology Weihai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Ship Technology Research Institute and Harbin Institute of Technology Weihai
Priority to CN202011573123.6A
Publication of CN112613425A
Application granted
Publication of CN112613425B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a target recognition system for small-sample underwater images, comprising an imaging unit for imaging the water area environment, a preprocessing unit for preprocessing the generated pictures, and a target recognition unit that selects a learning mode and a deep learning framework for training, stores the model training results, and evaluates them. The system integrates a comprehensive set of image processing operations and deep learning models, can help users train and recognize underwater targets with different combinations in different modes, compare prediction effects, and store effective combination schemes for reference in follow-up research, and effectively solves the problems in target recognition research on small-sample underwater images that no established scheme or method can be followed and personal professional experience is limited.

Description

Target identification system for small sample underwater image
Technical Field
The invention relates to the technical field of underwater target identification, and in particular to a target identification system for small-sample underwater images.
Background
In underwater target recognition, optical imaging is hindered by environmental limitations, so at long range target searching can usually only be carried out by acoustic imaging, i.e., by means of sonar equipment. However, because the underwater acoustic medium is variable, the signal transmission process is easily disturbed, so sonar images of targets have low quality, high noise and indistinct features.
Because the underwater environment is unknown, most targets are small, rare samples, so very little target image data can be acquired. When processing such target images, researchers can only operate from experience: there is no mature processing scheme and no systematic model for targeted training on underwater images. When underwater targets are analyzed, the image processing flow is complex and the correlation between modules is weak, and unknown small-sample underwater images cannot draw on any mature processing method, wasting large amounts of human and time resources. For unknown small-sample data, the image processing method can only be chosen by trial and error, verification is insufficient, and target recognition efficiency is seriously affected.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned particularities of the underwater environment and the problem that no mature processing solution is currently available.
Therefore, the technical problems solved by the invention are as follows: the image processing and identification flow is complex, the correlation between modules is weak, and there is no mature processing method to draw on for unknown small-sample underwater images.
To this end, a target recognition system for small-sample underwater images is provided, in which the user can select a suitable preprocessing unit and training model.
In order to solve the above technical problems, the invention provides the following technical scheme: a target recognition system for small-sample underwater images comprises:
an imaging module for imaging an underwater target area, comprising an imaging mode selection unit that allows selection between optical imaging and acoustic imaging;
a picture preprocessing module, connected with the imaging module, for preprocessing the pictures output by the imaging module,
the picture preprocessing module comprising a grayscale conversion unit, an image binarization unit, an ROI (region of interest) division unit, a contour drawing unit, an image denoising and defogging unit, a morphological transformation unit and a feature extraction unit;
a deep learning module, connected with the picture preprocessing module, which uses different deep learning frameworks and learning modes according to the user's selection and trains and monitors the model throughout,
the deep learning module comprising a learning mode selection unit, a deep learning framework selection unit and a training monitoring unit, wherein the learning mode selection unit offers brand-new training (i.e., training from scratch), transfer learning and joint learning, and the deep learning framework selection unit offers TensorFlow and PyTorch;
an evaluation module, connected with the deep learning module, for storing and evaluating the model and the data generated by the deep learning module,
the evaluation module comprising a model training result storage unit and an evaluation unit, wherein the storage unit stores prediction-error data and classification data, the evaluation unit evaluates the training results, and the evaluation indexes comprise F1-score, Recall, mAP and Precision.
The invention has the beneficial effects that: the system integrates a comprehensive set of image processing operations and deep learning models, can help users train and recognize underwater targets with different combinations in different modes, compare prediction effects, and store effective combination schemes for reference in follow-up research, and effectively solves the problems in target recognition research on small-sample underwater images that no established scheme or method can be followed and personal professional experience is limited.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
FIG. 1 is a schematic flow chart of a target identification method for a small sample underwater image according to the present invention;
FIG. 2 is a schematic view of a work flow of a target recognition system for a small sample underwater image according to the present invention;
FIG. 3 is a schematic diagram of the distribution of the module structure of a small sample underwater image-oriented target recognition system according to the present invention;
FIG. 4 is a main interface diagram of a target recognition system for a small sample underwater image according to the present invention;
FIG. 5 is a functional block diagram of a target recognition system for small sample underwater images according to the present invention;
FIG. 6 is a functional effect diagram of a target identification system for a small sample underwater image according to the present invention;
FIG. 7 is a model parameter setting diagram of a small sample underwater image-oriented target recognition system according to the present invention;
FIG. 8 is a diagram of a model training result of a small sample underwater image-oriented target recognition system according to the present invention;
fig. 9 is a model recognition effect diagram of a small sample underwater image-oriented target recognition system according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted" and "connected" in the present invention are to be understood broadly unless otherwise explicitly specified or limited; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
Example 1
Referring to figs. 1 to 9, a first embodiment of the present invention provides a target recognition system for small-sample underwater images, comprising: an imaging module 100, a picture preprocessing module 200, a deep learning module 300 and an evaluation module 400. The imaging module 100 is used for imaging an underwater target region and includes an imaging mode selection unit 101, which allows selection between optical imaging and acoustic imaging.
The picture preprocessing module 200 is connected to the imaging module 100 and preprocesses the pictures output by it; it comprises a grayscale conversion unit 201, an image binarization unit 202, an ROI division unit 203, a contour drawing unit 204, an image denoising and defogging unit 205, a morphological transformation unit 206 and a feature extraction unit 207. Grayscale conversion: serves as a preprocessing step that prepares for subsequent higher-level operations such as image segmentation, image recognition and image analysis. Image binarization: facilitates further processing of the grayscale image; the set properties of the image then depend only on the positions of points whose pixel value is 0 or 255 and no longer on multi-level pixel values, so processing is simple and the amount of data to process and compress is small. ROI division: the region of interest (ROI) is the image region selected as the focus of analysis and delineated for further processing; delineating the target the user wants to read with an ROI reduces processing time and increases precision. Minimum contour drawing: draws the contours of meaningful target points in the underwater image and highlights them. Image denoising/defogging: applies denoising/defogging to the underwater target image to reduce interference noise. Morphological transformation: typically boundary extraction, skeleton extraction, hole filling, corner extraction and image reconstruction on a binary image. Feature extraction: basic statistical features, including simple region descriptors, histograms and their statistics, and the gray-level co-occurrence matrix. Random data display: randomly extracts sample data after image preprocessing so that processing effects can be compared.
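Purely as an illustration of how these units chain together — not an implementation prescribed by the invention — a minimal Python sketch using OpenCV and NumPy might look as follows; the file path, ROI coordinates and parameter values are placeholders.

```python
import cv2
import numpy as np

def preprocess(path):
    """Sketch of the chain: grayscale -> denoise -> binarize -> morphology
    -> ROI -> contours -> histogram features (units 201-207)."""
    img = cv2.imread(path)                                   # sonar/optical frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # grayscale conversion (201)
    den = cv2.fastNlMeansDenoising(gray, None, h=10)         # denoising (205)
    _, binary = cv2.threshold(den, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization (202)
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)       # morphology (206)
    roi = opened[100:400, 150:500]                           # ROI division (203), placeholder box
    contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # contour drawing (204)
    vis = cv2.cvtColor(roi, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(vis, contours, -1, (0, 255, 0), 1)      # highlight target points
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])  # histogram feature (207)
    return vis, hist
```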
The deep learning module 300 is connected to the picture preprocessing module 200 and, according to the user's selection, uses different deep learning frameworks and learning modes to train and monitor the model throughout; it comprises a learning mode selection unit 301, a deep learning framework selection unit 302 and a training monitoring unit 303, wherein the learning mode selection unit 301 offers brand-new training, transfer learning and joint learning, and the deep learning framework selection unit 302 offers TensorFlow, PyTorch and other domestic frameworks.
The evaluation module 400 is connected to the deep learning module 300 and is configured to store and evaluate the model and the data generated by the deep learning module 300; it comprises a model training result storage unit 401 and an evaluation unit 402, where the storage unit 401 stores prediction-error data and classification data, the evaluation unit 402 evaluates the training results, and the evaluation indexes include F1-score, Recall, mAP and Precision.
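For illustration, the Precision, Recall and F1-score indexes named above can be computed from per-class detection counts as in the sketch below (plain Python, no particular library assumed); mAP would additionally average precision over recall levels and over classes.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, Recall and F1-score from true positives, false positives
    and false negatives, as used by the evaluation unit 402."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# e.g. 42 correct detections, 8 false alarms, 6 missed targets
p, r, f1 = precision_recall_f1(42, 8, 6)
print(f"Precision={p:.3f} Recall={r:.3f} F1={f1:.3f}")  # Precision=0.840 Recall=0.875 F1=0.857
```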
The embodiment of the invention is illustrated by a main system display interface, a schematic of the main system functions, sub-function effect diagrams, a model training effect diagram and a sample recognition effect diagram (figs. 4 to 9).
In this embodiment, the system integrates a comprehensive set of image processing operations and deep learning models, can help users train and recognize underwater targets with different combinations in different modes, compare prediction effects, and store effective combination schemes for reference in follow-up research, effectively solving the problems in target recognition research on small-sample underwater images that no established scheme or method can be followed and personal professional experience is limited.
As shown in fig. 1 and 2, the system further provides a target identification method for the small sample underwater image, which includes:
s1: imaging the water area environment, and preprocessing the generated picture. In which it is to be noted that,
the imaging of the water area environment comprises optical imaging with an optical camera or acoustic imaging with sonar, and the preprocessing comprises graying, image denoising or defogging, binarization, morphological transformation, ROI division, feature extraction and minimum contour drawing; these operations are as described for the units of the picture preprocessing module 200 above.
S2: selecting a learning mode and a deep learning framework for training, wherein it should be noted that
the learning modes comprise brand-new learning, transfer learning and joint learning, and the deep learning frameworks include TensorFlow, PyTorch and other domestic frameworks.
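As one hypothetical realization of the transfer-learning mode only (the framework, the ResNet-18 backbone and NUM_CLASSES are illustrative choices, not part of the invention), a PyTorch sketch could freeze a pretrained backbone and retrain just the classification head on the small underwater data set:

```python
import torch
import torchvision

NUM_CLASSES = 5  # assumed number of underwater target classes
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                    # freeze all backbone weights
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)  # fresh head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)   # train the head only
criterion = torch.nn.CrossEntropyLoss()
```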
S3: storing the model training results and evaluating them, wherein it should be noted that
the models used in model training comprise SSD, YOLOv3, YOLOv4 and F-CNN, and the evaluation indexes include F1-score, Recall, mAP and Precision.
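To illustrate only how one of the listed detector families might be invoked (the invention does not tie itself to this library), torchvision's SSD implementation can be loaded and run on a preprocessed frame; the confidence threshold is an arbitrary example value.

```python
import torch
import torchvision

model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")  # COCO-pretrained SSD
model.eval()
frame = torch.rand(3, 300, 300)       # stand-in for a preprocessed underwater frame
with torch.no_grad():
    pred = model([frame])[0]          # dict with 'boxes', 'labels', 'scores'
keep = pred["scores"] > 0.5           # simple confidence filter
print(pred["boxes"][keep])
```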
The method in this embodiment proceeds as follows: input the water area environment model parameters, set the imaging mode, set the picture size and type according to the requirements of the engineering task, select the image preprocessing operations to be used, set the data storage format, calculate the magnitude of the current data sample, and expand the data set; retrieve the system and hardware information of the current platform, list the deep learning frameworks that can be set up, select a learning mode, install the deep learning framework, choose whether to accelerate, set the model training parameters, and run the model; start whole-course training monitoring, store the model training results, evaluate them with indexes such as Recall and Precision, store and record prediction-error data, predict underwater targets in real time, and compare image recognition effects.
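The data-set expansion step for small samples is commonly realized by random augmentation; the sketch below (torchvision transforms over PIL images, with illustrative parameters — the patent does not specify the transforms) shows one hypothetical way to multiply a small sample set:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

def expand(images, copies_per_image=10):
    """Each source image yields several randomly perturbed variants."""
    return [augment(img) for img in images for _ in range(copies_per_image)]
```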
In this embodiment, the auxiliary functions of the method include: data annotation: target labeling of the preprocessed images using dedicated software, with the labeled images stored for training; camera calibration: to apply target images captured by a camera more accurately in subsequent fields such as industrial measurement, visual monitoring and robot hand-eye coordination, basic camera calibration work must be completed; image distortion correction: in an underwater vision engineering task, the first step is to correct the camera's distortion; coordinate system conversion: between the world coordinate system, camera coordinate system, image physical coordinate system and pixel coordinate system.
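As a hedged sketch of the camera calibration and distortion correction functions (the chessboard size and file names are hypothetical; the patent names the functions but not the method), the standard OpenCV workflow estimates the intrinsic matrix and distortion coefficients from a calibration target and then undistorts new frames:

```python
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of an assumed chessboard target
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_01.png", "calib_02.png"]:   # hypothetical calibration shots
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)       # world coordinates of the corners
        img_points.append(corners)    # their pixel coordinates

# intrinsics K and distortion coefficients link world and pixel coordinates
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("target.png"), K, dist)  # distortion correction
```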
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (1)

1. A target recognition system for small-sample underwater images, characterized by comprising:
an imaging module (100) for imaging an underwater target area, comprising an imaging mode selection unit (101) that allows selection between optical imaging and acoustic imaging;
a picture preprocessing module (200), connected with the imaging module (100), for preprocessing the pictures output by the imaging module (100),
the picture preprocessing module (200) comprising a grayscale conversion unit (201), an image binarization unit (202), an ROI division unit (203), a contour drawing unit (204), an image denoising and defogging unit (205), a morphological transformation unit (206) and a feature extraction unit (207);
a deep learning module (300), connected with the picture preprocessing module (200), which uses different deep learning frameworks and learning modes according to the user's selection and trains and monitors the model throughout,
the deep learning module (300) comprising a learning mode selection unit (301), a deep learning framework selection unit (302) and a training monitoring unit (303), wherein the learning mode selection unit (301) offers brand-new training, transfer learning and joint learning, and the deep learning framework selection unit (302) offers TensorFlow and PyTorch;
an evaluation module (400), connected with the deep learning module (300), for storing and evaluating the model and the data generated by the deep learning module (300),
the evaluation module (400) comprising a model training result storage unit (401) and an evaluation unit (402), wherein the storage unit (401) stores prediction-error data and classification data, the evaluation unit (402) evaluates the training results, and the evaluation indexes comprise F1-score, Recall, mAP and Precision.
CN202011573123.6A 2020-12-24 2020-12-24 Target identification system for small sample underwater image Active CN112613425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011573123.6A CN112613425B (en) 2020-12-24 2020-12-24 Target identification system for small sample underwater image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011573123.6A CN112613425B (en) 2020-12-24 2020-12-24 Target identification system for small sample underwater image

Publications (2)

Publication Number Publication Date
CN112613425A CN112613425A (en) 2021-04-06
CN112613425B (en) 2022-03-22

Family

ID=75248050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011573123.6A Active CN112613425B (en) 2020-12-24 2020-12-24 Target identification system for small sample underwater image

Country Status (1)

Country Link
CN (1) CN112613425B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763408A (en) * 2021-09-29 2021-12-07 Shanghai Maritime University Method for rapidly identifying aquatic weeds in water through images during the sailing of an unmanned ship
CN113895575B (en) * 2021-11-15 2023-09-15 Zhejiang Sci-Tech University Water surface cleaning robot salvaging system based on Aliyun cloud and a convolutional neural network algorithm

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165658A (en) * 2018-08-28 2019-01-08 Harbin Institute of Technology (Weihai) Strong negative sample underwater target detection method based on Faster-RCNN
CN109784343A (en) * 2019-01-25 2019-05-21 Shanghai Shenyao Intelligent Technology Co., Ltd. Resource allocation method and terminal based on a deep learning model
CN109948527A (en) * 2019-03-18 2019-06-28 Xidian University Small sample terahertz image foreign matter detection method based on integrated deep learning
CN110826612A (en) * 2019-10-31 2020-02-21 Shanghai Faluyuan Medical Device Co., Ltd. Training and identifying method for deep learning
CN111612058A (en) * 2020-05-19 2020-09-01 Jiangsu Vocational Institute of Architectural Technology Artificial intelligence learning method based on deep learning
US10832087B1 (en) * 2020-02-05 2020-11-10 Sas Institute Inc. Advanced training of machine-learning models usable in control systems and other systems

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6492278B2 (en) * 2015-03-19 2019-04-03 本多電子株式会社 Fish finder
CN106529428A (en) * 2016-10-31 2017-03-22 西北工业大学 Underwater target recognition method based on deep learning
CN107808161B (en) * 2017-10-26 2020-11-24 江苏科技大学 Underwater target identification method based on optical vision
CN110807365B (en) * 2019-09-29 2022-02-11 浙江大学 Underwater target identification method based on fusion of GRU and one-dimensional CNN neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Underwater target recognition methods based on the framework of deep learning: A survey; Bowen Teng et al.; International Journal of Advanced Robotic Systems; 2020-12-16; pp. 1-12 *
Application of heterogeneous multimodal deep learning methods to underwater target recognition; Zeng Sai et al.; Technical Acoustics; 2018-12-31; Vol. 37, No. 6; pp. 239-240 *
Application of electronic information image processing in ship target recognition; Zhou Xiaoteng; Digital Design (I); 2019-01-10; No. 9; p. 110 *

Also Published As

Publication number Publication date
CN112613425A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN110059694B (en) Intelligent identification method for character data in complex scene of power industry
JP6837597B2 (en) Active learning systems and methods
CN110827236B (en) Brain tissue layering method, device and computer equipment based on neural network
CN111652225B (en) Non-invasive camera shooting and reading method and system based on deep learning
CN112613425B (en) Target identification system for small sample underwater image
CN112132265B (en) Model training method, cup-disk ratio determining method, device, equipment and storage medium
CN111900694B (en) Relay protection equipment information acquisition method and system based on automatic identification
CN110910445B (en) Object size detection method, device, detection equipment and storage medium
CN112419202B (en) Automatic wild animal image recognition system based on big data and deep learning
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN113780201B (en) Hand image processing method and device, equipment and medium
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN116052831B (en) Data information processing method and device for orthopedics spine
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN110378241B (en) Crop growth state monitoring method and device, computer equipment and storage medium
CN115019396A (en) Learning state monitoring method, device, equipment and medium
CN113378921A (en) Data screening method and device and electronic equipment
KR20220122862A (en) Apparatus and Method for Providing a Surgical Environment based on a Virtual Reality
CN110956130A (en) Method and system for four-level face detection and key point regression
CN111882544B (en) Medical image display method and related device based on artificial intelligence
CN111242047A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110472601B (en) Remote sensing image target object identification method, device and storage medium
CN115114950B (en) Method, device and equipment for testing decision uncertainty
CN112257769B (en) Multilayer nuclear magnetic image classification method and system based on reinforcement learning type brain reading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant