CN112464742A - Method and device for automatically identifying red tide image - Google Patents

Method and device for automatically identifying red tide image

Info

Publication number
CN112464742A
CN112464742A
Authority
CN
China
Prior art keywords
red tide
image
pixel
pixel point
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011228637.8A
Other languages
Chinese (zh)
Inventor
张振昌
张少涵
马锦山
薛弘晖
林默想
陈日清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Agriculture and Forestry University
Original Assignee
Fujian Agriculture and Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Agriculture and Forestry University
Priority to CN202011228637.8A
Publication of CN112464742A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention provides a method and a device for automatically identifying a red tide image. Feature extraction, red tide pixel labeling, red tide feature selection and red tide feature weight training are performed on existing red tide images, and the features and weights associated with red tide are determined by deep learning. Pixel-level feature analysis and image processing are then applied to monitored ocean images, so that red tide information in the ocean images is identified automatically.

Description

Method and device for automatically identifying red tide image
Technical Field
The invention relates to the field of image recognition, in particular to a method and a device for automatically recognizing a red tide image.
Background
Red tide is a phenomenon in which the color of seawater becomes abnormal because plankton in the ocean suddenly proliferate in large numbers. A red tide is a signal of marine pollution: it seriously damages marine fisheries and aquatic resources and seriously threatens human health and life safety. As ocean pollution worsens, red tides occur more and more frequently, and the problem is most prominent in bays and coastal waters. In order to prevent and identify red tides in a timely manner, red tide organisms in the ocean must be monitored and observed continuously.
Red tides caused by different species of plankton have different colors. The traditional red tide monitoring approach is to sample the seawater and examine the water sample under a microscope to determine the species and density of the plankton; this method is time-consuming and labor-intensive and places high professional demands on the staff.
Red tide monitoring based on image recognition is the direction of the latest research, but because red tide organisms are of many kinds and a dedicated database and image recognition system are lacking, how to accurately determine red tide information from an ocean image is a problem that urgently needs to be solved. A method for automatically identifying red tide images is therefore needed, one that can accurately identify red tide information in captured ocean monitoring pictures, accurately determine the position of the red tide, and effectively save the manpower and material resources spent on red tide monitoring.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for automatically identifying a red tide image, so as to solve the prior-art problem that red tide cannot be accurately identified from ocean pictures and therefore cannot be monitored effectively and in time.
In order to solve the above technical problems, the proposed solution is as follows:
a method for automatically identifying red tide images comprises the following steps:
(1) a characteristic training stage;
(1.1) acquiring pixel points of the red tide image and local images at different positions, and extracting multiple features of the pixel points;
(1.2) classifying and labeling the pixel points into two types, namely red tide pixel points and non-red tide pixel points;
(1.3) clustering each feature of different types of pixel points respectively, and selecting at least one feature with discrimination to form a red tide pixel point feature set;
(1.4) training the weight of each red tide pixel point feature in the red tide pixel point feature set based on the local image;
(2) image recognition phase
(2.1) acquiring a new red tide image, and calculating the red tide pixel point characteristics of each pixel point of the red tide image based on the red tide pixel point characteristic set;
(2.2) determining the type of each pixel point based on the weight of the set red tide pixel point characteristics;
(2.3) carrying out edge detection on the red tide image, and marking an edge detection point;
(2.4) segmenting the red tide image based on the edge detection points to form a plurality of groups of segmented pixel point sets, and marking all pixel points in each group of segmented pixel point sets as the same type;
and (2.5) outputting a red tide image recognition result.
Preferably, the multiple features of the pixel point specifically include at least one of:
color features, color aggregation vectors, local color histograms, and/or local color matrices.
Preferably, training the weight of each red tide pixel point feature in the red tide pixel point feature set based on the local image specifically includes:
acquiring each red tide pixel point characteristic and pixel point type of each pixel point of the local image; linearly combining the characteristics of each red tide pixel point;
and dynamically adjusting the weight coefficient of each red tide pixel point characteristic, and determining the weight coefficient of each red tide pixel point characteristic matched with each pixel point type.
Preferably, adjusting the weight coefficient of each red tide pixel point feature so that, for each pixel point, the combination of red tide pixel point features matches the features of the local image specifically includes:
and (3) learning and training the weight coefficient by adopting a deep learning method, and extracting the spatial correlation information of the red tide image by selecting different convolution functions.
Preferably, acquiring a new red tide image and calculating the red tide pixel point features of each pixel point of the red tide image based on the red tide pixel point feature set specifically includes:
acquiring a new red tide image, and performing image preprocessing on the red tide image, wherein the image preprocessing comprises image enhancement and/or image denoising;
and calculating the characteristics in the red tide pixel point characteristic set aiming at each pixel point in the preprocessed red tide image.
Preferably, the red tide image is segmented based on the edge detection point to form a plurality of groups of segmented pixel point sets, and all the pixel points in each group of segmented pixel point sets are labeled to be of the same type, which specifically includes:
dividing each row of pixels in the red tide image into m row segmentation blocks and each column of pixels into n column segmentation blocks according to the edge detection point, wherein m is greater than 0, and n is greater than 0;
calculating the number of each type of pixel points in each row partition block, and uniformly marking all the pixel points as the types of the pixel points with the dominant number;
calculating the number of each type of pixel points in each column segmentation block, and uniformly marking all the pixel points as the types of the pixel points with the dominant number;
and repeating the calculation steps of the row segmentation blocks and the column segmentation blocks until the pixel points in each row segmentation block and each column segmentation block are of the same type.
An automatic red tide image recognition device comprises: a feature training unit and an image recognition unit;
the feature training unit includes:
the characteristic acquisition module is used for acquiring pixel points of the red tide image and local images at different positions and extracting multiple characteristics of the pixel points and the local images;
the characteristic labeling module is used for classifying and labeling the pixel points into two types, namely red tide pixel points and non-red tide pixel points;
the characteristic clustering module is used for respectively clustering each characteristic of different types of pixel points and selecting at least one characteristic with discrimination to form a red tide pixel point characteristic set;
the weight training module is used for setting the weight of each red tide pixel point feature in the red tide pixel point feature set based on the features of the local image;
the image recognition unit includes:
the image acquisition module is used for inputting a new red tide image and calculating the red tide pixel point characteristics of each pixel point of the red tide image based on the red tide pixel point characteristic set;
the pixel classification module is used for determining the type of each pixel point based on the weight of the set red tide pixel point characteristics;
the edge detection module is used for carrying out edge detection on the red tide image and marking an edge detection point;
the pixel identification module is used for segmenting the red tide image based on the edge detection point to form a plurality of groups of segmented pixel point sets, and marking all pixel points in each group of segmented pixel point sets as the same type;
and the result output module is used for outputting the red tide image recognition result.
Preferably, the pixel identification module specifically includes:
the pixel segmentation module is used for dividing each row of pixels in the red tide image into m row segmentation blocks and dividing each column of pixels into n column segmentation blocks according to the edge detection point, wherein m is greater than 0, and n is greater than 0;
the line processing module is used for calculating the number of the pixel points of each type in each line segmentation block and marking all the pixel points as the types of the pixel points with the dominant number in a unified manner;
the column processing module is used for calculating the number of each type of pixel points in each column segmentation block and marking all the pixel points as the types of the pixel points with the dominant number in a unified manner;
and the cycle control module is used for repeating the calculation steps of the row segmentation blocks and the column segmentation blocks until the pixel points in each row segmentation block and each column segmentation block are of the same type.
An apparatus for automatically recognizing red tide images, comprising a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor implements the method steps of automatically recognizing red tide images when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of the automatic recognition of red tide images.
A system for automatically identifying red tide images comprises a red tide image automatic identification device, a mobile/fixed base, an energy power supply device, an optical/hyperspectral camera, an identification algorithm chip, a data storage chip, a communication chip and an attitude control device;
the mobile/fixed base is used for providing a stable base for the automatic identification device and realizing the long-time detection of the automatic identification device on the state of the seawater;
the energy power supply device is used for providing power supply support for the automatic identification device and supporting self battery voltage detection;
the optical/hyperspectral camera is used for acquiring a marine red tide image;
the recognition algorithm chip is used for embedding (solidifying) the automatic recognition algorithm in the single-chip microcomputer system and detecting red tide images in real time;
the data storage chip is used for storing the red tide images acquired by the optical/hyperspectral camera, which facilitates later processing of the data;
the communication chip is used for providing network communication capability for the automatic identification device, and can timely send out early warning after an alarm event occurs, wherein the alarm event comprises the occurrence of a red tide event and/or insufficient power supply of a battery;
the attitude control device is used for controlling the shooting angle of the optical/hyperspectral camera so as to acquire the best shot image.
According to the technical scheme, the method for automatically identifying the red tide image performs feature extraction, red tide pixel labeling, red tide feature selection and red tide feature weight training on existing red tide images, determines the features and weights associated with red tide by deep learning, and then performs pixel point feature analysis and image processing on the ocean images obtained by monitoring so as to automatically identify the red tide information in them. This solves the prior-art problem that red tide cannot be accurately identified from ocean images and therefore cannot be monitored in a timely and effective manner, and achieves timely red tide monitoring with high precision and low cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an automatic red tide image recognition method according to the present invention.
Fig. 2 is a second flowchart of the red tide image automatic identification method of the present invention.
Fig. 3 is a third flowchart of the red tide image automatic identification method of the present invention.
Fig. 4 is a schematic structural diagram of an automatic red tide image recognition device according to the present invention.
Fig. 5 is a second schematic structural diagram of the red tide image automatic identification device of the present invention.
Fig. 6 is a third schematic structural diagram of the red tide image automatic identification device of the present invention.
Fig. 7 is a schematic structural diagram of an automatic red tide image recognition system according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a method for automatically identifying a red tide image, which is suitable for the field of image identification, and particularly relates to the automatic identification of red tide information of an ocean image.
Red tide is a phenomenon of abnormal seawater color caused by the sudden and rapid proliferation of plankton in the ocean, and is one of the common major disasters in offshore areas. Plankton of different species give the seawater different colors, so red tide is in fact a general term for tides of various colors. Excessive nutrient elements, which cause algae to propagate in large quantities and deplete the oxygen in the water, are the main cause of red tide. As ocean pollution worsens, red tides occur more and more frequently, and the problem is most prominent in bays and coastal waters. Red tide is a signal of ocean pollution: during a red tide, large numbers of fish, shrimp, crabs and shellfish die, aquatic resources are heavily damaged, and the sediments that form seriously affect harbor construction.
Red tide monitoring is an important part of marine ecological environment monitoring: throughout the stages in which a red tide forms, develops and disappears, the red tide species, the affected area, the physical and chemical conditions of the local seawater and other related environmental factors are monitored. The traditional red tide monitoring approach is to sample the seawater and examine the water sample under a microscope to determine the species and density of the plankton; this method is time-consuming and labor-intensive and places high professional demands on the staff.
Hyperspectral imaging is an image-data technology based on many narrow spectral bands. It combines imaging with spectroscopy to capture both the two-dimensional geometric space and the one-dimensional spectral information of a target, yielding continuous, narrow-band image data with hyperspectral resolution.
Image recognition refers to the technology of using a computer to process, analyze and understand images in order to recognize targets and objects in various patterns; it is a practical application of deep learning algorithms. The traditional image recognition process is divided into four steps: image acquisition → image preprocessing → feature extraction → image recognition.
At present, the main research directions in image recognition are face recognition and object recognition. In red tide monitoring, image-recognition-based methods are the latest research direction, but because red tide organisms are of many kinds and a dedicated database and image recognition system are lacking, how to accurately determine red tide information from an ocean image is a problem that urgently needs to be solved.
As shown in fig. 1, the method for automatically identifying a red tide image provided by the present invention specifically includes:
Step 100: in the feature training stage, existing red tide images are selected, red tide points are labeled, deep learning training is performed, and the image features and feature weights specifically associated with red tide are obtained.
Step 101, obtaining pixel points of the red tide image and local images at different positions, and extracting multiple features of the pixel points.
Existing red tide images are selected, and local images at different positions in each image as well as multiple features of all pixel points are extracted. The extracted features specifically include the following:
color features, color aggregation vectors, local color histograms, and/or local color matrices.
The color histogram reflects the color composition of the image, that is, the proportion of different colors in the whole image, and is particularly suitable for describing images that are difficult to segment automatically.
The color aggregation vector (also known as a color coherence vector) is a more elaborate refinement of the color histogram: each color bin of the histogram is divided into an aggregated (coherent) part and a non-aggregated part. When comparing the similarity of images, the two parts are compared separately, and the results are then weighed together to obtain a single similarity value.
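As an illustration only (the patent does not give formulas or parameters for these features), the following Python sketch shows one way a local color histogram could be computed for the neighborhood around a pixel point; the window size, bin count and normalization are assumptions.

```python
# Illustrative sketch only: a simple per-pixel local color histogram feature.
# Window size and bin count are assumptions; the patent does not specify them.
import numpy as np

def local_color_histogram(image, row, col, window=7, bins=8):
    """Return a normalized RGB histogram of the window centered on (row, col).

    image: H x W x 3 uint8 array; the result is a bins*3 feature vector.
    """
    half = window // 2
    h, w, _ = image.shape
    r0, r1 = max(0, row - half), min(h, row + half + 1)
    c0, c1 = max(0, col - half), min(w, col + half + 1)
    patch = image[r0:r1, c0:c1].reshape(-1, 3)

    feats = []
    for ch in range(3):                                   # one histogram per color channel
        hist, _ = np.histogram(patch[:, ch], bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))           # normalize to proportions
    return np.concatenate(feats)

# Example: feature vector for the pixel at (120, 200) of a dummy image.
dummy = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(local_color_histogram(dummy, 120, 200).shape)       # (24,)
```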
Step 102: classifying and labeling the pixel points into two types, namely red tide pixel points and non-red tide pixel points.
All pixel points in the red tide image are labeled into the two types using automatic labeling followed by manual correction.
Step 103: clustering each feature of the different types of pixel points respectively, and selecting at least one discriminative feature to form the red tide pixel point feature set.
Because the acquired red tide image is usually a hyperspectral image, red tide has a specific spectrum in the hyperspectral image that clearly distinguishes it from other marine contaminants such as oil slicks. These discriminative features therefore need to be extracted as the distinctive features of red tide.
Specifically, based on the image features and the pixel point labels, cluster analysis is performed separately on the features of the red tide pixel points and the non-red tide pixel points, the features that distinguish red tide pixel points are selected, and irrelevant and redundant features are removed. This effectively reduces the dimensionality of the feature space and eliminates the correlation between features.
The feature extraction method can be principal component analysis, linear discriminant analysis, a kernel method, manifold learning and the like.
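The patent names these techniques without giving a procedure. As one hedged illustration of selecting discriminative features, the sketch below ranks candidate features by a simple Fisher-style separability score between red tide and non-red tide pixel points; the scoring rule and the number of features kept are assumptions, not part of the disclosure.

```python
# Illustrative sketch: rank per-pixel features by a Fisher-style separability
# score between red tide (label 1) and non-red tide (label 0) pixels, then keep
# the top-k features. The scoring rule and k are assumptions, not the patent's.
import numpy as np

def select_discriminative_features(X, y, top_k=4):
    """X: (n_pixels, n_features) feature matrix, y: (n_pixels,) 0/1 labels."""
    red, other = X[y == 1], X[y == 0]
    between = (red.mean(axis=0) - other.mean(axis=0)) ** 2   # class separation
    within = red.var(axis=0) + other.var(axis=0) + 1e-12     # class spread
    score = between / within
    keep = np.argsort(score)[::-1][:top_k]                   # best features first
    return keep, score

# Example with random data: 1000 pixels, 10 candidate features.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)
keep, score = select_discriminative_features(X, y)
print("selected feature indices:", keep)
```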
Step 104: training the weight of each red tide pixel point feature in the red tide pixel point feature set based on the local images.
Specifically, as shown in fig. 2:
step 1041, acquiring each red tide pixel point characteristic and pixel point type of each pixel point of the local image;
step 1042, linearly combining each red tide pixel point characteristic;
step 1043, dynamically adjusting the weight coefficients of each red tide pixel point feature, and determining the weight coefficients of each red tide pixel point feature matched with each pixel point type.
A deep learning method can be used to learn and train the weight coefficients, and the spatial correlation information of the red tide image can be extracted by selecting different convolution functions (kernels).
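As a minimal sketch of the linear-combination core described above (the convolutional deep-learning variant is not shown), the code below learns one weight coefficient per selected feature by logistic-regression-style gradient descent; the learning rate, iteration count and loss are assumptions.

```python
# Minimal sketch of the linear-combination weight training: a logistic
# regression trained by gradient descent. The convolutional deep-learning
# variant mentioned in the text is not shown; hyperparameters are assumptions.
import numpy as np

def train_pixel_weights(X, y, lr=0.1, epochs=500):
    """X: (n_pixels, n_features) selected features, y: 0/1 pixel labels.

    Returns (weights, bias) such that sigmoid(X @ weights + bias) estimates
    the probability that a pixel is a red tide pixel.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
        grad_w = X.T @ (p - y) / n               # dynamic weight adjustment
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Example run on synthetic data.
X = np.random.rand(2000, 4)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(float)
w, b = train_pixel_weights(X, y)
print("learned weights:", np.round(w, 3), "bias:", round(b, 3))
```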
Step 200: in the image recognition stage, a new ocean image is acquired from the monitoring camera; edge detection, image segmentation and pixel point clustering are performed on the new image based on the trained red tide features and weights, and the recognition result of the red tide image is output.
Step 201, obtaining a new red tide image, and calculating the red tide pixel point characteristics of each pixel point of the red tide image based on the red tide pixel point characteristic set.
A new red tide image is acquired and preprocessed; the image preprocessing mainly comprises image enhancement and image denoising.
According to the training result, the features related to the characteristics of red tide are selected, and the features in the red tide pixel point feature set are calculated for each pixel point in the preprocessed red tide image.
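The disclosure only states that preprocessing mainly consists of enhancement and denoising. One possible OpenCV-based realization, assumed rather than specified by the patent, is sketched below: non-local-means denoising followed by CLAHE contrast enhancement on the luminance channel.

```python
# Possible preprocessing sketch (assumed operators, not specified by the patent):
# non-local-means denoising plus CLAHE contrast enhancement on the L channel.
import cv2

def preprocess(image_bgr):
    # Denoise first; h / hColor control filter strength (assumed values).
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)

    # Enhance contrast of the luminance channel only, keeping colors stable.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Usage: enhanced = preprocess(cv2.imread("ocean_frame.png"))  # hypothetical file
```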
Step 202, determining the type of each pixel point based on the set weight of the red tide pixel point characteristics.
Based on the weight coefficients of the features obtained by training, the red tide features of each pixel point are linearly combined, and the type that each pixel point matches is calculated.
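Continuing the hypothetical training sketch above, classifying a pixel point then reduces to thresholding the weighted linear combination of its features; the 0.5 threshold is an assumed choice.

```python
# Applying the weights from the earlier sketch to classify pixels
# (the 0.5 threshold is an assumed choice, not from the patent).
import numpy as np

def classify_pixels(X, w, b, threshold=0.5):
    """Return 1 for pixels classified as red tide, 0 otherwise."""
    prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return (prob >= threshold).astype(np.uint8)
```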
Step 203: performing edge detection on the red tide image and labeling the edge detection points.
Edge detection can be performed on the red tide image using the Canny edge detection algorithm, holistically nested edge detection and the like, and the pixel points at edge positions in the image are marked.
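For the Canny option named here, a minimal OpenCV call is shown below; the file name and the two hysteresis thresholds are illustrative assumptions that would need tuning for real ocean imagery.

```python
# Canny edge detection on the (preprocessed) red tide image.
# The hysteresis thresholds 60/150 and the file name are assumed values.
import cv2

image = cv2.imread("ocean_frame.png")              # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 60, 150)                   # non-zero pixels mark edge positions
edge_points = cv2.findNonZero(edges)               # coordinates of the labeled edge pixels
```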
Step 204: segmenting the red tide image based on the edge detection points to form a plurality of groups of segmented pixel point sets, and marking the pixel points in each group of segmented pixel point sets as the same type (an illustrative code sketch follows the sub-steps below).
Specifically, as shown in fig. 3:
step 2041, dividing each row of pixels in the red tide image into m row divided blocks according to the edge detection point, and dividing each column of pixels into n column divided blocks, wherein m is greater than 0, and n is greater than 0;
step 2042, calculating the number of each type of pixel points in each row partition block, and marking all the pixel points as the types of the pixel points with the dominant number;
step 2043, calculating the number of each type of pixel points in each column division block, and marking all the pixel points as the types of the pixel points with the dominant number;
step 2044, repeat the calculation steps of the row partition block and the column partition block until the pixel points in each row partition block and each column partition block are of the same type.
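A minimal sketch of steps 2041 to 2044, under the assumption that the pixel type map and the edge mask are NumPy arrays of the same shape: each row, then each column, is split at the marked edge pixels, every segment is relabeled with its dominant type, and the two passes repeat until no label changes (the iteration cap is an assumption).

```python
# Sketch of the row/column block relabeling of steps 2041-2044 (assumptions:
# labels is an H x W array of pixel types, edges is an H x W 0/1 edge mask).
import numpy as np

def majority_relabel_1d(line_labels, line_edges):
    """Split one row/column at edge pixels and give each segment its dominant label."""
    cuts = np.flatnonzero(line_edges)                       # segment boundaries
    bounds = [0, *list(cuts + 1), len(line_labels)]
    out = line_labels.copy()
    for s, e in zip(bounds[:-1], bounds[1:]):
        if e > s:
            values, counts = np.unique(out[s:e], return_counts=True)
            out[s:e] = values[np.argmax(counts)]            # dominant type wins
    return out

def relabel_blocks(labels, edges, max_iters=20):
    labels = labels.copy()
    for _ in range(max_iters):
        before = labels.copy()
        for r in range(labels.shape[0]):                    # row segmentation blocks
            labels[r, :] = majority_relabel_1d(labels[r, :], edges[r, :])
        for c in range(labels.shape[1]):                    # column segmentation blocks
            labels[:, c] = majority_relabel_1d(labels[:, c], edges[:, c])
        if np.array_equal(labels, before):                  # every block is now uniform
            break
    return labels
```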
Step 205: outputting the red tide image recognition result.
Based on the same concept of the red tide image automatic identification method provided in the foregoing of the present invention, the present invention further provides an automatic red tide image identification device, as shown in fig. 4, the device includes: a feature training unit 100 and an image recognition unit 200.
The feature training unit 100 includes:
the characteristic obtaining module 101 is configured to obtain pixel points of the red tide image and local images at different positions, and extract multiple characteristics of the pixel points and the local images.
The feature labeling module 102 is configured to label the pixel points as two types, namely red tide pixel points and non-red tide pixel points.
The feature clustering module 103 is configured to cluster each feature of the different types of pixel points, and select at least one feature with a discrimination to form a red tide pixel point feature set.
And the weight training module 104 is configured to set the weight of each red tide pixel point feature in the red tide pixel point feature set based on the feature of the local image.
The image recognition unit 200 includes:
the image acquisition module 201 is configured to input a new red tide image, and calculate a red tide pixel point feature of each pixel point of the red tide image based on the red tide pixel point feature set.
The pixel classification module 202 is configured to determine a type of each pixel point based on the set weight of the red tide pixel point feature.
And the edge detection module 203 is configured to perform edge detection on the red tide image and mark an edge detection point.
The pixel identification module 204 is configured to segment the red tide image based on the edge detection point to form a plurality of groups of segmented pixel point sets, and label all the pixel points in each group of segmented pixel point sets as the same type.
And the result output module 205 is used for outputting the red tide image recognition result.
As shown in fig. 5, the pixel identification module 204 specifically includes:
the pixel segmentation module 2041 is configured to divide each row of pixels in the red tide image into m row segments and each column of pixels into n column segments according to the edge detection point, where m is greater than 0 and n is greater than 0.
The line processing module 2042 is configured to calculate the number of each type of pixel point in each line partition block, and label all the pixel points as the type of the pixel point with the dominant number.
The column processing module 2043 is configured to calculate the number of each type of pixel in each column division block, and label all the pixels as the type of the pixel with the dominant number.
And the cycle control module 2044 is configured to repeat the calculation steps of the row partition block and the column partition block until the pixel points in each of the row partition block and the column partition block are of the same type.
Based on the same concept as the red tide image automatic identification method provided above, the present invention further provides an automatic red tide image identification device, as shown in fig. 6. The device includes: a memory 101, a processor 102, and a computer program stored in the memory and executable on the processor 102. The processor 102 implements the method steps of the automatic red tide image recognition when executing the computer program.
Based on the same concept of the red tide image automatic identification method provided in the foregoing of the present invention, the present invention further provides an automatic red tide image identification system, as shown in fig. 7, the system includes: the red tide image recognition system comprises a red tide image automatic recognition device 1, a mobile/fixed base 2, an energy power supply device 3, an optical/hyperspectral camera 4, a recognition algorithm chip 5, a data storage chip 6, a communication chip 7 and an attitude control device 8.
And the movable/fixed base 2 is used for providing a stable base for the automatic identification device 1 and realizing the long-time detection of the automatic identification device 1 on the state of the seawater.
And the energy power supply device 3 is used for providing power supply support for the automatic identification device 1 and simultaneously supporting self battery voltage detection.
The optical/hyperspectral camera 4 is used for acquiring a marine red tide image.
The identification algorithm chip 5 is used for embedding (solidifying) the automatic identification algorithm in the single-chip microcomputer system and detecting red tide images in real time.
The data storage chip 6 is used for storing the red tide images acquired by the optical/hyperspectral camera 4, which facilitates later processing of the data.
The communication chip 7 is used for providing network communication capability for the automatic identification device 1, and can timely send out early warning after an alarm event occurs, wherein the alarm event comprises a red tide event and/or insufficient battery power supply;
and the attitude control device 8 is used for controlling the shooting angle of the optical/hyperspectral camera 4 so as to acquire the best shot image.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart and block diagrams may represent a module, segment, or portion of code, which comprises one or more computer-executable instructions for implementing the logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. It will also be noted that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (11)

1. A method for automatically recognizing red tide images is characterized by comprising the following steps:
(1) a characteristic training stage;
(1.1) acquiring pixel points of the red tide image and local images at different positions, and extracting multiple features of the pixel points;
(1.2) classifying and labeling the pixel points into two types, namely red tide pixel points and non-red tide pixel points;
(1.3) clustering each feature of different types of pixel points respectively, and selecting at least one feature with discrimination to form a red tide pixel point feature set;
(1.4) training the weight of each red tide pixel point feature in the red tide pixel point feature set based on the local image;
(2) image recognition phase
(2.1) acquiring a new red tide image, and calculating the red tide pixel point characteristics of each pixel point of the red tide image based on the red tide pixel point characteristic set;
(2.2) determining the type of each pixel point based on the weight of the set red tide pixel point characteristics;
(2.3) carrying out edge detection on the red tide image, and marking an edge detection point;
(2.4) segmenting the red tide image based on the edge detection points to form a plurality of groups of segmented pixel point sets, and marking all pixel points in each group of segmented pixel point sets as the same type;
and (2.5) outputting a red tide image recognition result.
2. The method according to claim 1, wherein the plurality of characteristics of the pixel points specifically include at least one of:
color features, color aggregation vectors, local color histograms, and/or local color matrices.
3. The method of claim 2, wherein the training of the weights of the red tide pixel point features in the red tide pixel point feature set based on the local image specifically comprises:
acquiring each red tide pixel point characteristic and pixel point type of each pixel point of the local image;
linearly combining the characteristics of each red tide pixel point;
and dynamically adjusting the weight coefficient of each red tide pixel point characteristic, and determining the weight coefficient of each red tide pixel point characteristic matched with each pixel point type.
4. The method as claimed in claim 3, wherein the adjusting the weight coefficients of the red tide pixel features to match the combination of the red tide pixel features of each pixel with the features of the local image comprises:
and (3) learning and training the weight coefficient by adopting a deep learning method, and extracting the spatial correlation information of the red tide image by selecting different convolution functions.
5. The method of claim 1, wherein the obtaining of the new red tide image and the calculating of the red tide pixel point feature of each pixel point of the red tide image based on the red tide pixel point feature set specifically comprise:
acquiring a new red tide image, and performing image preprocessing on the red tide image, wherein the image preprocessing comprises image enhancement and/or image denoising;
and calculating the characteristics in the red tide pixel point characteristic set aiming at each pixel point in the preprocessed red tide image.
6. The method of claim 1, wherein the segmenting the red tide image based on the edge detection points to form a plurality of sets of segmented pixel point sets, and labeling all the pixel points in each set of segmented pixel point sets as the same type, specifically comprises:
dividing each row of pixels in the red tide image into m row segmentation blocks and each column of pixels into n column segmentation blocks according to the edge detection point, wherein m is greater than 0, and n is greater than 0;
calculating the number of each type of pixel points in each row partition block, and uniformly marking all the pixel points as the types of the pixel points with the dominant number;
calculating the number of each type of pixel points in each column segmentation block, and uniformly marking all the pixel points as the types of the pixel points with the dominant number;
and repeating the calculation steps of the row segmentation blocks and the column segmentation blocks until the pixel points in each row segmentation block and each column segmentation block are of the same type.
7. An automatic red tide image recognition device, characterized in that the device comprises: a feature training unit and an image recognition unit;
the feature training unit includes:
the characteristic acquisition module is used for acquiring pixel points of the red tide image and local images at different positions and extracting multiple characteristics of the pixel points and the local images;
the characteristic labeling module is used for classifying and labeling the pixel points into two types, namely red tide pixel points and non-red tide pixel points;
the characteristic clustering module is used for respectively clustering each characteristic of different types of pixel points and selecting at least one characteristic with discrimination to form a red tide pixel point characteristic set;
the weight training module is used for setting the weight of each red tide pixel point feature in the red tide pixel point feature set based on the features of the local image;
the image recognition unit includes:
the image acquisition module is used for inputting a new red tide image and calculating the red tide pixel point characteristics of each pixel point of the red tide image based on the red tide pixel point characteristic set;
the pixel classification module is used for determining the type of each pixel point based on the weight of the set red tide pixel point characteristics;
the edge detection module is used for carrying out edge detection on the red tide image and marking an edge detection point;
the pixel identification module is used for segmenting the red tide image based on the edge detection point to form a plurality of groups of segmented pixel point sets, and marking all pixel points in each group of segmented pixel point sets as the same type;
and the result output module is used for outputting the red tide image recognition result.
8. The apparatus according to claim 7, wherein the pixel identification module specifically comprises:
the pixel segmentation module is used for dividing each row of pixels in the red tide image into m row segmentation blocks and each column of pixels into n column segmentation blocks according to the edge detection point, wherein m is greater than 0, and n is greater than 0;
the line processing module is used for calculating the number of the pixel points of each type in each line segmentation block and marking all the pixel points as the types of the pixel points with the dominant number in a unified manner;
the column processing module is used for calculating the number of each type of pixel points in each column segmentation block and marking all the pixel points as the types of the pixel points with the dominant number in a unified manner;
and the cycle control module is used for repeating the calculation steps of the row segmentation blocks and the column segmentation blocks until the pixel points in each row segmentation block and each column segmentation block are of the same type.
9. An apparatus for automatic red tide image recognition, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any of claims 1-6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
11. A system for automatically recognizing red tide images, which comprises the automatic recognition device of claim 6, a mobile/fixed base, an energy supply device, an optical/hyperspectral camera, a recognition algorithm chip, a data storage chip, a communication chip and an attitude control device;
the mobile/fixed base is used for providing a stable base for the automatic identification device and realizing the long-time detection of the automatic identification device on the state of the seawater;
the energy power supply device is used for providing power supply support for the automatic identification device and supporting self battery voltage detection;
the optical/hyperspectral camera is used for acquiring a marine red tide image;
the recognition algorithm chip is used for solidifying the automatic recognition algorithm in the single chip microcomputer system and detecting the red tide image in real time;
the data storage chip is used for storing the red tide image acquired by the optical/hyperspectral camera, so that the processing of post data is facilitated;
the communication chip is used for providing network communication capability for the automatic identification device, and can timely send out early warning after an alarm event occurs, wherein the alarm event comprises the occurrence of a red tide event and/or insufficient power supply of a battery;
the attitude control device is used for controlling the shooting angle of the optical/hyperspectral camera so as to acquire the best shot image.
CN202011228637.8A 2021-01-29 2021-01-29 Method and device for automatically identifying red tide image Pending CN112464742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011228637.8A CN112464742A (en) 2021-01-29 2021-01-29 Method and device for automatically identifying red tide image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011228637.8A CN112464742A (en) 2021-01-29 2021-01-29 Method and device for automatically identifying red tide image

Publications (1)

Publication Number Publication Date
CN112464742A true CN112464742A (en) 2021-03-09

Family

ID=74825844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011228637.8A Pending CN112464742A (en) 2021-01-29 2021-01-29 Method and device for automatically identifying red tide image

Country Status (1)

Country Link
CN (1) CN112464742A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120136568A (en) * 2011-06-09 2012-12-20 목포대학교산학협력단 Red tide image recognition method using semantic features
WO2017092431A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Human hand detection method and device based on skin colour
CN107229907A (en) * 2017-05-09 2017-10-03 宁波大红鹰学院 A kind of method using unmanned machine testing marine red tide occurring area
CN107358242A (en) * 2017-07-11 2017-11-17 浙江宇视科技有限公司 Target area color identification method, device and monitor terminal
CN108961301A (en) * 2018-07-12 2018-12-07 中国海洋大学 It is a kind of based on the unsupervised Chaetoceros image partition method classified pixel-by-pixel
WO2020103892A1 (en) * 2018-11-21 2020-05-28 北京市商汤科技开发有限公司 Lane line detection method and apparatus, electronic device, and readable storage medium
CN110598752A (en) * 2019-08-16 2019-12-20 深圳宇骏视觉智能科技有限公司 Image classification model training method and system for automatically generating training data set
CN110781956A (en) * 2019-10-24 2020-02-11 精硕科技(北京)股份有限公司 Target detection method and device, electronic equipment and readable storage medium
CN112181270A (en) * 2020-09-29 2021-01-05 南方科技大学 Image segmentation labeling method, model training method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LEE MIN-SUN et al.: "Red tide detection using deep learning and high-spatial resolution optical satellite imagery", International Journal of Remote Sensing, 17 March 2019 (2019-03-17) *
余肖翰; 曾松福; 曹宇峰; 陈瑶; 谢杰镇; 郑少平: "Preliminary study on red tide algae recognition and analysis based on flow cytometric imaging (FlowCAM)" (基于流式细胞摄像技术(FlowCAM)的赤潮藻类识别分析初探), Advances in Marine Science (海洋科学进展), no. 04, 15 October 2013 (2013-10-15) *
施国武; 邢宽平; 张俊贤; 李长平; 李霞: "Automatic building extraction from UAV images based on deep learning" (基于深度学习的无人机影像建筑物自动提取), Surveying and Mapping of Geology and Mineral Resources (地矿测绘), no. 01, 25 March 2020 (2020-03-25) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170139A (en) * 2021-11-09 2022-03-11 深圳市衡兴安全检测技术有限公司 Offshore sea area ecological disaster early warning method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Labao et al. Cascaded deep network systems with linked ensemble components for underwater fish detection in the wild
CN109145830B (en) Intelligent water gauge identification method
Hu et al. Automatic plankton image recognition with co-occurrence matrices and support vector machine
CN109117802A (en) Ship Detection towards large scene high score remote sensing image
CN115100512A (en) Monitoring, identifying and catching method and system for marine economic species and storage medium
CN106570485B (en) A kind of raft culture remote sensing images scene mask method based on deep learning
CN108388916B (en) Method and system for automatically identifying water floater based on artificial intelligence
CN108921099A (en) Moving ship object detection method in a kind of navigation channel based on deep learning
Osterloff et al. A computer vision approach for monitoring the spatial and temporal shrimp distribution at the LoVe observatory
Andrew et al. Semi‐automated detection of eagle nests: an application of very high‐resolution image data and advanced image analyses to wildlife surveys
CN113591592B (en) Overwater target identification method and device, terminal equipment and storage medium
Lee et al. Contour matching for fish species recognition and migration monitoring
Ditria et al. Automating the analysis of fish grazing behaviour from videos using image classification and optical flow
CN112990085A (en) Method and device for detecting change of culture pond and computer readable storage medium
CN112418028A (en) Satellite image ship identification and segmentation method based on deep learning
Wang et al. Vision-based in situ monitoring of plankton size spectra via a convolutional neural network
Guo et al. Real-time automated identification of algal bloom species for fisheries management in subtropical coastal waters
CN112464742A (en) Method and device for automatically identifying red tide image
Dawkins et al. Automatic scallop detection in benthic environments
Gobi Towards generalized benthic species recognition and quantification using computer vision
Xu et al. Detection of bluefin tuna by cascade classifier and deep learning for monitoring fish resources
CN116416523A (en) Machine learning-based rice growth stage identification system and method
Horak et al. Water quality assessment by image processing
Shimura et al. Fishing spot detection using sea water temperature pattern by nonlinear clustering
Soetedjo et al. Leaf Segmentation in Outdoor Environment Using A Low-Cost Infrared Camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination