CN116453032B - Marine ecology detecting system - Google Patents


Info

Publication number
CN116453032B
CN116453032B (application CN202310718988.4A)
Authority
CN
China
Prior art keywords
region
detection
generates
module
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310718988.4A
Other languages
Chinese (zh)
Other versions
CN116453032A (en)
Inventor
张振昌
李小林
舒兆港
林甲祥
陈宏方
林清波
方艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Agriculture and Forestry University
Original Assignee
Fujian Agriculture and Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Agriculture and Forestry University filed Critical Fujian Agriculture and Forestry University
Priority to CN202310718988.4A priority Critical patent/CN116453032B/en
Publication of CN116453032A publication Critical patent/CN116453032A/en
Application granted granted Critical
Publication of CN116453032B publication Critical patent/CN116453032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Business, Economics & Management (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of marine environment detection and discloses a marine ecological detection system comprising: a graph model generation module that generates a vertex set V based on the detected objects, an edge set E based on the retained paired features, and global information U based on the image information of the detection region; and a graph model processing module that inputs the generated V, E and U into a GNN neural network model to output updated features, then inputs the updated vertex features into a classifier A, each classification label of whose classification space represents an iteration number of a target biological population. By processing the image information of the detection region, the invention obtains the iteration information and the population-count information of the biological populations in the region, and thereby detects and evaluates the ecology of the detection region accurately.

Description

Marine ecology detecting system
Technical Field
The invention relates to the technical field of marine environment detection, in particular to a marine ecological detection system.
Background
Existing marine ecology detection relies on manual sampling followed by statistical estimation of the biological populations in a detection area. Because manual sampling is biased, the detected population information can be incomplete and skewed, making the marine ecology detection result inaccurate.
Disclosure of Invention
The invention provides a marine ecological detection system, which addresses the technical problems in the related art that deviation of manual sampling leads to incomplete and biased information on the detected biological populations and thus to inaccurate marine ecology detection results.
The invention provides a marine ecological detection system, comprising:
an image extraction module for extracting image information of the detection area;
a first region dividing module that obtains N divided first regions based on image information of the detection region;
a second region dividing module that generates an image feature for each first region based on the image information, inputs the image feature into a target identifier, and outputs whether the first region contains a detection target together with the offset parameters of the first region; the target identifier is a CNN model whose output is connected to an SVM classifier and three offset parameter classifiers; the activation function of the SVM classifier is a Softmax function and outputs probability values; the classification space of the SVM classifier comprises a "yes" classification label and a "no" classification label, and whether the first region contains a detection target is determined by comparing the output probability values of the two classes; the classification spaces of the three offset parameter classifiers are Θ, D_x and D_y respectively, wherein one label of Θ corresponds to one discretized value of the included angle between the X axis and the line connecting the centers of the first region and the prediction region; one label of D_x corresponds to one discretized value of the offset between the first region and the prediction region along the X axis; one label of D_y corresponds to one discretized value of the offset between the first region and the prediction region along the Y axis; a prediction region generation module that deletes each first region not containing a detection target and generates a prediction region based on each remaining first region and its corresponding offset parameters;
an object generation module that generates an object based on the prediction region, an attribute vector of the object being generated by image information of the prediction region;
a paired feature generation module for generating paired features between objects, the paired feature between two objects being generated based on the image information of an image band along the line connecting the centers of the prediction regions corresponding to the two objects; the centerline of the image band is that connecting line;
the feature recognition module is used for inputting the paired features into the CNN model to judge whether the generated paired features are reserved or not; the CNN model outputs two classifications, wherein two classification labels respectively represent reserved and unreserved, and deletion processing is carried out on paired features classified as unreserved;
a graph model generation module that generates a vertex set V based on the objects, generates an edge set E based on the retained paired features, and generates global information U based on the image information of the detection region; a graph model processing module that inputs the generated V, E and U into the GNN neural network model to output the updated V′, E′ and U′, and then inputs the features of the vertices in V′ into classifier A, the classification space of classifier A being expressed as {1, 2, …, K}, wherein each classification label represents an iteration number of the target biological population; and an ecological detection module for judging whether the marine ecology is unbalanced based on the number of target organism populations and the iteration number; the judgment criterion is to set corresponding thresholds, and marine ecological imbalance is determined when both the number of target organism populations and the iteration number exceed their thresholds.
Further, the first region dividing module generates the first region through a Selective Search algorithm.
Further, the width of the image band is set to H.
The invention has the beneficial effects that:
the invention obtains the iteration information and the quantity information of the biological population in the detection area through processing the image information of the detection area, thereby accurately detecting and judging the ecology of the detection area.
Drawings
FIG. 1 is a schematic block diagram of a marine ecology detection system of the present invention.
In the figure: an image extraction module 101, a first region division module 102, a second region division module 103, a prediction region generation module 104, an object generation module 105, a paired feature generation module 106, a feature recognition module 107, a graph model generation module 108, and a graph model processing module 109.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1, a marine ecology detection system includes:
an image extraction module 101 for extracting image information of a detection area;
a first region dividing module 102 that obtains N divided first regions based on image information of the detection region;
the first region dividing module generates a first region through a Selective Search algorithm;
a second region dividing module 103 that generates an image feature for each first region based on the image information, inputs the image feature into a target identifier, and outputs whether the first region contains a detection target (a target organism) together with the offset parameters of the first region;
in one embodiment of the invention, the target identifier is a first neural network model, the output of the first neural network model is connected with an SVM classifier and three offset parameter classifiers, the activation function of the SVM classifier is a Softmax function, and a probability value is output;
The classification space of the SVM classifier comprises two classification labels ("yes" and "no"), and whether the first region contains a detection target is determined by comparing the output probability values of the two classes. The classification spaces of the three offset parameter classifiers are Θ, D_x and D_y respectively: one label of Θ corresponds to one discretized value of the included angle between the X axis and the line connecting the centers of the first region and the prediction region; one label of D_x corresponds to one discretized value of the offset between the first region and the prediction region along the X axis; one label of D_y corresponds to one discretized value of the offset between the first region and the prediction region along the Y axis. A prediction region generation module 104 deletes each first region that does not contain a detection target and generates a prediction region based on each remaining first region and its corresponding offset parameters.
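As a hedged illustration of how module 104 might turn the three discretized offset labels into a prediction region: the patent does not specify the discretization, so the bin tables, the (cx, cy, w, h) region representation, and the angle consistency check below are assumptions for illustration.

```python
# Hypothetical decoding of the offset-parameter labels (module 104).
import math

ANGLE_BINS = [i * math.pi / 8 for i in range(16)]   # assumed 16 angle bins
OFFSET_BINS = list(range(-32, 33, 8))               # assumed offset bins, in pixels

def predict_region(first_region, angle_label, dx_label, dy_label):
    """first_region = (cx, cy, w, h); returns the predicted region."""
    cx, cy, w, h = first_region
    dx = OFFSET_BINS[dx_label]
    dy = OFFSET_BINS[dy_label]
    theta = ANGLE_BINS[angle_label]
    # sanity check: the discretized angle label should roughly agree with
    # the direction of (dx, dy) between the two region centres
    if dx or dy:
        assert abs(math.atan2(dy, dx) % math.pi - theta % math.pi) < math.pi / 8 + 1e-9
    return (cx + dx, cy + dy, w, h)
```

For example, with dx_label pointing at the +8 px bin and dy_label at the 0 px bin, a first region centred at (10, 10) yields a prediction region centred at (18, 10) with unchanged size.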
an object generation module 105 that generates an object based on the prediction region, and an attribute vector of the object is generated from image information of the prediction region.
A paired feature generation module 106 for generating paired features between objects, the paired features between two objects being generated based on image information of image bands on a line connecting centers of prediction areas corresponding to the two objects;
the width of the image band is set to be H, and the middle part of the image band is a connecting line of the centers of the prediction areas corresponding to the two objects. H may be set according to the total width of the initial image acquired, measured across the pixels of the image, e.g. 10000, representing 10000 pixels arranged across the width.
A feature recognition module 107 inputs the paired features into the reduced neural network model to determine whether to retain the generated paired features;
the reduced neural network model outputs two classes whose labels represent retained and not retained, and paired features classified as not retained are deleted;
in one embodiment of the invention, both the first neural network model and the reduced neural network model may be CNN models.
A graph model generation module 108 generates a vertex set V based on the objects, an edge set E based on the retained paired features, and global information U based on the image information of the detection region; a graph model processing module 109 inputs the generated V, E and U into the GNN neural network model to output the updated V′, E′ and U′, and then inputs the features of the vertices in V′ into classifier A, the classification space of classifier A being expressed as {1, 2, …, K}, wherein each classification label represents an iteration number of the target biological population; for example, the label 1 represents that the target organism population has undergone 1 iteration. In one embodiment of the invention, the marine ecology detection system further comprises an ecology detection module for determining whether the marine ecology is unbalanced based on the number of target organism populations and the number of iterations.
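One hedged way to picture the V, E, U update performed by the GNN model is a single full graph-network block, sketched below: edges are updated from their endpoint vertices and U, vertices from their aggregated incident edges and U, and U from the pooled updated vertices and edges. The patent does not specify the architecture, so every dimension, linear map W_*, and the tanh nonlinearity here are assumptions standing in for learned components.

```python
# Hypothetical single graph-network block updating (V, E, U) -> (V', E', U').
import numpy as np

def gn_block(V, E, senders, receivers, U, W_e, W_v, W_u):
    # edge update: concat(sender, receiver, edge, U) -> new edge feature
    E_in = np.concatenate([V[senders], V[receivers], E,
                           np.repeat(U[None, :], len(E), axis=0)], axis=1)
    E_new = np.tanh(E_in @ W_e)
    # vertex update: mean of incoming updated edges, own feature, U
    agg = np.zeros((len(V), E_new.shape[1]))
    np.add.at(agg, receivers, E_new)
    counts = np.maximum(np.bincount(receivers, minlength=len(V)), 1)
    V_in = np.concatenate([V, agg / counts[:, None],
                           np.repeat(U[None, :], len(V), axis=0)], axis=1)
    V_new = np.tanh(V_in @ W_v)
    # global update: pooled updated vertices and edges plus old U
    U_in = np.concatenate([V_new.mean(0), E_new.mean(0), U])
    U_new = np.tanh(U_in @ W_u)
    return V_new, E_new, U_new
```

With a feature dimension of 4 throughout, W_e has shape (16, 4) and W_v, W_u have shape (12, 4); stacking such blocks gives the updated V′ whose vertex features feed classifier A.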
The criterion for the determination may be to set corresponding thresholds: marine ecological imbalance is determined when both the number of target organism populations and the number of iterations exceed their thresholds.
The number of a biological population may be taken as the number of prediction regions of that population.
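The decision rule of the ecological detection module thus reduces to a conjunction of two threshold tests; the sketch below makes that explicit, with purely illustrative threshold values (the patent sets no concrete numbers).

```python
# Hypothetical final decision rule of the ecological detection module:
# imbalance is flagged only when BOTH the population count and the
# iteration number exceed their thresholds. Threshold values are
# illustrative assumptions, not values from the patent.
def is_ecology_unbalanced(population_count, iteration_number,
                          population_threshold=50, iteration_threshold=3):
    return (population_count > population_threshold
            and iteration_number > iteration_threshold)
```

For example, a population count of 100 with 5 iterations flags imbalance under these thresholds, while failing either test does not.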
The embodiments have been described above by way of example, but the invention is not limited to the specific implementations described, which are merely illustrative rather than restrictive. Those of ordinary skill in the art, given the benefit of this disclosure, may derive many other forms that remain within the scope of the invention.

Claims (3)

1. A marine ecology detection system, comprising: an image extraction module for extracting image information of a detection area; a first region dividing module that obtains N divided first regions based on the image information of the detection region; a second region dividing module that generates an image feature for each first region based on the image information, inputs the image feature into a target identifier, and outputs whether the first region contains a detection target together with the offset parameters of the first region; the target identifier is a CNN model whose output is connected to an SVM classifier and three offset parameter classifiers; the activation function of the SVM classifier is a Softmax function and outputs probability values; the classification space of the SVM classifier comprises a "yes" classification label and a "no" classification label, and whether the first region contains a detection target is determined by comparing the output probability values of the two classes; the classification spaces of the three offset parameter classifiers are Θ, D_x and D_y respectively;
wherein one label of Θ corresponds to one discretized value of the included angle between the X axis and the line connecting the centers of the first region and the prediction region; one label of D_x corresponds to one discretized value of the offset between the first region and the prediction region along the X axis; one label of D_y corresponds to one discretized value of the offset between the first region and the prediction region along the Y axis; a prediction region generation module that deletes each first region not containing a detection target and generates a prediction region based on each remaining first region and its corresponding offset parameters;
an object generation module that generates an object based on the prediction region, an attribute vector of the object being generated by image information of the prediction region;
a paired feature generation module for generating paired features between objects, the paired feature between two objects being generated based on the image information of an image band along the line connecting the centers of the prediction regions corresponding to the two objects; the centerline of the image band is that connecting line;
the feature recognition module is used for inputting the paired features into the CNN model to judge whether the generated paired features are reserved or not; the CNN model outputs two classifications, wherein two classification labels respectively represent reserved and unreserved, and deletion processing is carried out on paired features classified as unreserved;
a graph model generation module that generates a vertex set V based on the objects, generates an edge set E based on the retained paired features, and generates global information U based on the image information of the detection region; a graph model processing module that inputs the generated V, E and U into the GNN neural network model to output the updated V′, E′ and U′, and then inputs the features of the vertices in V′ into classifier A, the classification space of classifier A being expressed as {1, 2, …, K}, wherein each classification label represents an iteration number of the target biological population; and an ecological detection module for judging whether the marine ecology is unbalanced based on the number of target organism populations and the iteration number; the judgment criterion is to set corresponding thresholds, and marine ecological imbalance is determined when both the number of target organism populations and the iteration number exceed their thresholds.
2. The marine ecology detection system of claim 1 wherein the first region partitioning module generates the first region by a Selective Search algorithm.
3. The marine ecology detection system according to claim 1, wherein the width of the image band is set to H.
CN202310718988.4A 2023-06-16 2023-06-16 Marine ecology detecting system Active CN116453032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310718988.4A CN116453032B (en) 2023-06-16 2023-06-16 Marine ecology detecting system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310718988.4A CN116453032B (en) 2023-06-16 2023-06-16 Marine ecology detecting system

Publications (2)

Publication Number Publication Date
CN116453032A CN116453032A (en) 2023-07-18
CN116453032B (en) 2023-08-25

Family

ID=87134211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310718988.4A Active CN116453032B (en) 2023-06-16 2023-06-16 Marine ecology detecting system

Country Status (1)

Country Link
CN (1) CN116453032B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673482A (en) * 2021-09-03 2021-11-19 四川大学 Cell antinuclear antibody fluorescence recognition method and system based on dynamic label distribution
CN114187183A (en) * 2021-11-23 2022-03-15 成都星亿年智慧科技有限公司 Fine-grained insect image classification method
CN114548291A (en) * 2022-02-24 2022-05-27 澜途集思(深圳)数字科技有限公司 Ecological biological identification method based on MR-CNN algorithm
CN114898436A (en) * 2022-05-23 2022-08-12 华东师范大学 Rare disease classification method based on prototype graph neural network and small sample learning
WO2023087558A1 (en) * 2021-11-22 2023-05-25 重庆邮电大学 Small sample remote sensing image scene classification method based on embedding smoothing graph neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11403488B2 (en) * 2020-03-19 2022-08-02 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for recognizing image-based content presented in a structured layout


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Compound-protein interaction prediction with end-to-end learning of neural networks for graphs and sequences; Masashi Tsubaki et al.; Bioinformatics; pp. 309-318 *

Also Published As

Publication number Publication date
CN116453032A (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US10318848B2 (en) Methods for object localization and image classification
US8675974B2 (en) Image processing apparatus and image processing method
US7512273B2 (en) Digital ink labeling
CN107506786B (en) Deep learning-based attribute classification identification method
CN111615702B (en) Method, device and equipment for extracting structured data from image
US20170032247A1 (en) Media classification
CN107683469A (en) A kind of product classification method and device based on deep learning
CN109685065B (en) Layout analysis method and system for automatically classifying test paper contents
CN109977895B (en) Wild animal video target detection method based on multi-feature map fusion
US11600088B2 (en) Utilizing machine learning and image filtering techniques to detect and analyze handwritten text
CN114333040B (en) Multi-level target detection method and system
CN112836735A (en) Optimized random forest processing unbalanced data set method
Mohd-Isa et al. Detection of Malaysian traffic signs via modified YOLOv3 algorithm
CN112699858B (en) Unmanned platform smoke fog sensing method and system, computer equipment and storage medium
CN111881906A (en) LOGO identification method based on attention mechanism image retrieval
CN111815582A (en) Two-dimensional code area detection method for improving background prior and foreground prior
CN113011528B (en) Remote sensing image small target detection method based on context and cascade structure
CN111488400B (en) Data classification method, device and computer readable storage medium
CN116453032B (en) Marine ecology detecting system
CN111783088A (en) Malicious code family clustering method and device and computer equipment
CN113221929A (en) Image processing method and related equipment
CN114494441B (en) Grape and picking point synchronous identification and positioning method and device based on deep learning
Farfan-Escobedo et al. Towards accurate building recognition using convolutional neural networks
CN113903025A (en) Scene text detection method, device and model, and training method and training device thereof
CN114266971A (en) Weak supervision remote sensing image airplane detection method based on key points

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant