CN116228757B - Deep sea cage and netting detection method based on image processing algorithm - Google Patents

Deep sea cage and netting detection method based on image processing algorithm

Info

Publication number
CN116228757B
CN116228757B CN202310504538.5A
Authority
CN
China
Prior art keywords
image
cage
pixel
edge
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310504538.5A
Other languages
Chinese (zh)
Other versions
CN116228757A (en)
Inventor
张永波
李振
张丛
马哲
常琳
王言哲
王继业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Academy Of Marine Sciences Qingdao National Marine Science Research Center
Original Assignee
Shandong Academy Of Marine Sciences Qingdao National Marine Science Research Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Academy Of Marine Sciences Qingdao National Marine Science Research Center filed Critical Shandong Academy Of Marine Sciences Qingdao National Marine Science Research Center
Priority to CN202310504538.5A priority Critical patent/CN116228757B/en
Publication of CN116228757A publication Critical patent/CN116228757A/en
Application granted granted Critical
Publication of CN116228757B publication Critical patent/CN116228757B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V 10/147 Details of sensors, e.g. sensor lenses
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82 Arrangements using pattern recognition or machine learning, using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/80 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A 40/81 Aquaculture, e.g. of fish

Abstract

The invention discloses a deep sea cage netting detection method based on an image processing algorithm, relating to the technical fields of computer image processing and intelligent recognition. The method comprises the following steps: S1: acquiring a color image collected by a sensor; S2: preprocessing the color image; S3: performing target detection on the cage netting in the image; S4: extracting edge information of the cage netting; S5: performing line-segment processing on the cage netting edge information; S6: analyzing the cage netting line-segment information; S7: visualizing the netting defect detection results. By acquiring color images of the deep sea cage netting and applying image preprocessing, target detection, edge detection, line-segment processing, defect detection and result display, the invention detects whole and partial breaks and defects in the netting rapidly and accurately, predicts and prevents the fishery losses caused by netting damage, and improves the safety and efficiency of deep sea cage culture.

Description

Deep sea cage and netting detection method based on image processing algorithm
Technical Field
The invention relates to the technical field of image processing, in particular to a deep sea cage and netting detection method based on an image processing algorithm.
Background
Deep sea farming is a new aquaculture method still under active development, in which deep sea net cages play an important role as one of its key pieces of equipment. As deep sea cages are applied more widely, inspection and maintenance of the cage netting becomes increasingly important. Traditional detection methods fall into two categories: visual inspection, either by divers underwater or by lifting the net for examination; and instrument-based detection. Visual inspection is simple and intuitive, but it depends heavily on subjective human judgment and is inefficient, time-consuming and labor-intensive. Instrument-based detection can operate unattended, but the equipment is expensive, hard to adapt to cages and nets of different specifications, and its results carry errors that fall short of accurate detection requirements.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides a deep sea cage and netting detection method based on an image processing algorithm.
The technical scheme adopted to solve the technical problem is as follows: a deep sea cage netting detection method based on an image processing algorithm, comprising the following steps: S1, acquiring a color image of the deep sea cage netting collected by a sensor;
S2, preprocessing the color image obtained in step S1;
S3, performing target detection on the cage netting in the image obtained in S2;
S4, extracting edge information of the cage netting;
S5, performing line-segment processing on the cage netting edge information;
S6, analyzing the cage netting line-segment information;
S7, visualizing the netting defect detection results;
in step S7, visualizing the netting defect detection results further comprises the following steps:
S71, using Python's OpenCV library to overlay and fuse the analysis results with the original image, generating a visualized image or video;
S72, marking defect regions with rectangular boxes, polygon fills, text labels or color maps chosen according to the defect type and position, improving the visual effect;
S73, displaying the image or video on a screen or device through a graphical interface or dynamic interaction, or accessing and sharing it remotely via a network or storage medium;
S74, saving the visualization results and defect information in standard picture and video formats.
In the above deep sea cage netting detection method based on an image processing algorithm, preprocessing the color image in step S2 further comprises the following steps:
S21, removing point, line and plane noise from the color image using median filtering or mean filtering;
S22, converting the color image into a grayscale image using convolution or a weighted-average method;
S23, improving image contrast in the deep sea environment using histogram equalization;
S24, applying brightness compensation or color balance to the image using a Laplacian enhancement method;
S25, correcting the position of the cage netting by adjusting the image angle.
In the above deep sea cage netting detection method based on an image processing algorithm, performing target detection on the cage netting in the image in step S3 further comprises the following steps:
S31, extracting key features from the image with the SIFT, SURF or ORB algorithm;
S32, generating candidate regions containing the detection target using image region segmentation and selective search;
S33, classifying and localizing the candidate regions with a convolutional neural network to judge whether they contain the detection target, and outputting the bounding-box coordinates of the detection target;
S34, filtering overlapping detection boxes with a non-maximum suppression algorithm and removing low-confidence detection boxes;
S35, drawing boxes and labels in the image and displaying the confidence of each detection target.
In the above deep sea cage netting detection method based on an image processing algorithm, extracting edge information of the cage netting in step S4 further comprises the following steps:
S41, removing noise from the image with Gaussian filtering;
S42, determining edge positions in the image using the Sobel operator and a gradient threshold;
S43, refining the edge positions with a non-maximum suppression algorithm, keeping the pixel with the largest gradient and filtering out pixels that are not adjacent to it or fall below the gradient threshold;
S44, applying high- and low-threshold processing to the edge pixels: pixels above the high threshold are kept, pixels below the low threshold are filtered out, and pixels in between are kept when adjacent to a high-threshold pixel and filtered out otherwise;
S45, connecting the retained edge pixels into an unbroken curve with a curve fitting method to represent the edge of the cage netting.
In the above deep sea cage netting detection method based on an image processing algorithm, step S44 specifically comprises:
S441, thresholding the edge pixels refined by the non-maximum suppression algorithm, using a high threshold and a low threshold to distinguish edge pixels of different intensities;
S442, for each pixel, if its intensity value is greater than the high threshold, marking it as an edge pixel and setting its value to 1, indicating that it lies on the netting of the deep sea cage; if its value is less than the low threshold, marking it as a non-edge pixel and setting its value to 0, indicating that it lies outside the netting;
S443, for edge pixels between the high and low thresholds, marking them as edge pixels with value 1 if they are adjacent to a high-threshold pixel, where adjacency means that at least one of the 8 neighboring pixels exceeds the high threshold; if not adjacent to any high-threshold pixel, marking them as non-edge pixels with value 0, indicating that they lie outside the netting;
S444, through this thresholding, accurately identifying the deep sea cage netting region in the image for further analysis, localization and control operations.
In the above deep sea cage netting detection method based on an image processing algorithm, performing line-segment processing on the cage netting edge information in step S5 further comprises the following steps:
S51, performing edge detection on the image with the Canny algorithm and extracting the edges of the cage netting;
S52, connecting the detected edges with feature-point matching, fitting or tracking methods to form continuous, closed edges;
S53, detecting line segments on the edges with the Hough transform or a maximum suppression method, and filtering out noise and irrelevant lines;
S54, fitting curved edges into straight lines or polyline segments with a line fitting algorithm;
S55, converting the line-segment processing results into data and images.
In the above deep sea cage netting detection method based on an image processing algorithm, analyzing the cage netting line-segment information in step S6 further comprises the following steps:
S61, separating specific structures of the cage netting from the image with image segmentation techniques;
S62, extracting representative features of each specific structure with the SIFT, SURF or ORB algorithm;
S63, training a classification model with supervised learning, treating defect detection as a classification problem to distinguish normal parts of the netting from defective parts;
S64, jointly considering defect size, shape and texture, and detecting and localizing defects with morphological operations and texture feature analysis;
S65, analyzing the type, number and positions of the netting defects.
The beneficial effect of the invention is that it provides a deep sea cage netting detection method based on an image processing algorithm, applying image processing to netting quality inspection. Its implementation rests on deep learning: netting images are collected and processed to refine image features, and a trained machine learning model then accurately recognizes defects such as holes, broken lines and attached debris, safeguarding netting quality and aquaculture yield. By collecting netting images, analyzing them and extracting their features, the invention achieves accurate detection and analysis of the cage netting and provides strong technical support for safe production and improved efficiency in the deep sea aquaculture industry.
Drawings
The invention will be further described with reference to the drawings and examples.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of preprocessing the color image in the present invention;
FIG. 3 is a flow chart of target detection of the cage netting in an image according to the present invention;
FIG. 4 is a flow chart of extracting edge information of the cage netting according to the present invention;
FIG. 5 is a flow chart of line-segment processing of the cage netting edge information according to the present invention;
FIG. 6 is a flow chart of analyzing the cage netting line-segment information according to the present invention;
FIG. 7 is a flow chart of visualizing the netting defect detection results according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and specific embodiments, to enable those skilled in the art to better understand its technical scheme.
As shown in fig. 1, this embodiment discloses a deep sea cage netting detection method based on an image processing algorithm, comprising the following steps:
S1, acquiring a color image collected by a sensor:
In this embodiment, a color image of the deep sea cage netting is acquired through an optical sensor.
S2, preprocessing the color image:
In a deep sea environment, reflection and scattering of light introduce considerable noise, degrading image quality and making accurate recognition and analysis difficult. Preprocessing the color image helps remove the color distortion and blurring of the deep sea environment, so the quality and structure of the cage netting can be analyzed more accurately. A flow chart of the color image preprocessing is shown in fig. 2.
S21, removing point, line and plane noise from the color image using median filtering or mean filtering:
The color image is first split into its red, green and blue components. Each component is then filtered with a median or mean filter. For median filtering, a fixed window size is defined, the pixels within the window are sorted, and the median value is taken as the value of the current pixel; the process repeats until the whole image has been traversed. For mean filtering, a fixed window size is likewise defined and the average of the pixels in the window is taken as the current pixel value, again repeating until the whole image has been traversed. Although they handle noise differently, both median and mean filtering effectively remove point, line and plane noise from the color image.
S22, converting the color image into a grayscale image using convolution or a weighted-average method:
There are two main ways to convert a color image to grayscale: a weighted-average method and a convolution method. Specifically:
1. Weighted-average method: the R, G, B components of the color image are combined linearly with fixed weights to produce a gray value, typically using the formula:
Gray = 0.299×R + 0.587×G + 0.114×B, where the component weights derive from human luminance perception.
2. Convolution method: following the idea of average grayness, a 3×3 or 5×5 convolution kernel converts the RGB color space into gray space. The procedure is:
1) Smooth the original color image to remove high-frequency noise. 2) Convolve the smoothed image with the kernel and take the weighted average of the R, G, B components to obtain the gray value. 3) Process the image pixel by pixel to obtain the converted grayscale image.
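The weighted-average conversion can be sketched as follows; the weights match the formula above, and OpenCV's built-in cv2.cvtColor applies the same standard weights.

```python
import cv2
import numpy as np

def to_gray_weighted(img_bgr):
    """Convert BGR to grayscale with luminance weights 0.299R + 0.587G + 0.114B."""
    b, g, r = cv2.split(img_bgr.astype(np.float32))
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return gray.astype(np.uint8)

img = cv2.imread("cage_netting.jpg")  # hypothetical input image
gray = to_gray_weighted(img)
# Equivalent built-in call:
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```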
S23, improving image contrast in the deep sea environment using histogram equalization:
In the deep sea cage netting detection method, histogram equalization improves image contrast as follows:
1. Compute the histogram of the grayscale image to obtain the frequency distribution of its pixel values. 2. Equalize the histogram. 3. Recompute the pixel values from the equalized histogram and map them back onto the image, producing an enhanced image with markedly improved contrast.
Histogram equalization effectively improves image contrast in the deep sea environment, and thereby improves the accuracy and reliability of the detection method.
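A minimal sketch of S23 with OpenCV follows; the CLAHE variant is an assumption added for locally uneven deep sea illumination, not a step required by the method.

```python
import cv2

gray = cv2.imread("cage_netting_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Global histogram equalization (steps 1-3 above in one call).
equalized = cv2.equalizeHist(gray)

# Optional alternative: contrast-limited adaptive histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized_local = clahe.apply(gray)
```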
S24, applying brightness compensation or color balance to the image using a Laplacian enhancement method:
In the deep sea cage netting detection method, Laplacian enhancement performs brightness compensation or color balance as follows:
1. Apply the Laplacian transform to the image to obtain its high-frequency detail. 2. Adjust the transformed image to change brightness, contrast and saturation, achieving brightness compensation or color balance. 3. From the adjusted image, detect the state and position of the deep sea cage netting.
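One way to realize the Laplacian enhancement of S24 is classic Laplacian sharpening, sketched below under that assumption; the 0.7 detail weight is an illustrative parameter.

```python
import cv2
import numpy as np

gray = cv2.imread("cage_netting_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

lap = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)  # high-frequency detail of the image
# Subtracting the (negative-center) Laplacian adds the detail back, sharpening edges.
sharpened = np.clip(gray.astype(np.float32) - 0.7 * lap, 0, 255).astype(np.uint8)
```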
S25, correcting the position of the cage netting by adjusting the image angle:
The invention uses rotation operations from image processing to adjust the image angle so that the cage netting sits horizontally or vertically, making its position more precise. An edge detection or feature-point detection algorithm locates the netting in the image, after which the image angle is adjusted to correct the netting's position. Note that the correction should be decided case by case to achieve the best effect.
S3, performing target detection on the cage netting in the image:
In the deep sea cage netting detection method based on an image processing algorithm, the main purpose of target detection is to identify and localize the cage netting in the image for further detection and analysis.
Specifically, a target detection algorithm recognizes the cage and netting targets in the image, enabling automatic detection and localization in the deep sea environment and improving detection accuracy and efficiency. The detected targets are then analyzed further with image processing techniques, for example to compute the size, shape and number of the cage and netting, helping deep sea fishery workers understand the marine environment and target objects more comprehensively while effectively protecting and managing deep sea ecosystems and fishery resources. A flow chart of target detection of the cage netting in an image is shown in fig. 3.
S31, extracting key features from the image with the SIFT, SURF or ORB algorithm:
SIFT, SURF and ORB are common feature extraction algorithms, all based on local features. The steps are:
1. Scale-space construction: build a scale space of the image with Gaussian filtering to detect keypoints at different scales; 2. Keypoint detection: within the scale space, determine keypoints by searching for extrema and for points meeting conditions such as local extremum and edge response; 3. Local orientation estimation: estimate an orientation from the gradient directions of pixels around each keypoint to ensure rotation invariance; 4. Local feature description: extract a region centered on each keypoint and compute feature vectors within it, such as gradient magnitude and direction, to generate the keypoint's local feature descriptor.
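A minimal sketch of S31 using ORB follows; SIFT is shown as the alternative named in the text, while SURF is patented and only available in opencv-contrib builds.

```python
import cv2

gray = cv2.imread("cage_netting_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

orb = cv2.ORB_create(nfeatures=1000)  # keypoint budget is an illustrative choice
keypoints, descriptors = orb.detectAndCompute(gray, None)

# SIFT alternative (bundled with recent OpenCV releases):
# sift = cv2.SIFT_create()
# keypoints, descriptors = sift.detectAndCompute(gray, None)
```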
S32, generating candidate regions containing the detection target using image region segmentation and selective search:
The invention generates candidate regions containing the detection target with image region segmentation and selective search, as follows:
1. Region segmentation: segment the image into regions, each representing a possible target region, using a segmentation algorithm such as one based on color and texture, on edges, or on region growing. 2. Candidate region generation: for each region, generate candidate regions of different scales and shapes with selective search to cover every case that might contain the detection target. Selective search is generally based on multi-scale segmentation and jointly considers color, texture and shape at every scale to produce a series of highly similar candidate regions. 3. Region screening: classify each candidate region to judge whether it contains a target, and screen out the final target regions.
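A minimal sketch of the selective search step, assuming the opencv-contrib-python package (which provides cv2.ximgproc) is installed:

```python
import cv2

img = cv2.imread("cage_netting.jpg")  # hypothetical input image

ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()  # fast mode trades some recall for speed
rects = ss.process()              # candidate regions as (x, y, w, h)
candidates = rects[:200]          # keep the top proposals (count is illustrative)
```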
S33, classifying and localizing the candidate regions with a convolutional neural network to judge whether they contain the detection target, and outputting the bounding-box coordinates of the detection target:
The invention classifies and localizes the candidate regions with a convolutional neural network, judging whether each contains the detection target and outputting its bounding-box coordinates. The implementation steps are: 1. Data preparation: prepare and preprocess the training and test data. 2. Model construction: build a convolutional neural network comprising several convolutional layers, pooling layers and fully connected layers. 3. Model training: train the network on the prepared training data to classify and localize target regions. 4. Candidate classification and localization: run each candidate region through the trained network. 5. Target screening: screen the candidates by their classification and localization results and select the final target regions, completing detection and recognition of the deep sea cage netting.
S34, filtering overlapping detection boxes with a non-maximum suppression algorithm and removing low-confidence detection boxes:
The invention filters overlapping detection boxes with non-maximum suppression and removes low-confidence boxes. The implementation steps are:
1. Sort all detection boxes by confidence from high to low and select the highest-confidence box as an output result. 2. For each remaining box, judge from its IoU value whether it overlaps the highest-confidence box. 3. Repeat steps 1 and 2 until every box has been processed. 4. Remove boxes whose confidence falls below a set threshold with confidence-threshold filtering.
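A self-contained sketch of the IoU-based suppression described above; the 0.5 IoU threshold and 0.3 confidence floor are illustrative values.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, iou_thr=0.5, conf_thr=0.3):
    keep_conf = scores >= conf_thr                 # step 4: confidence filtering
    boxes, scores = boxes[keep_conf], scores[keep_conf]
    order = scores.argsort()[::-1]                 # step 1: sort high to low
    keep = []
    while order.size:
        best = order[0]
        keep.append(best)                          # indices into the filtered arrays
        rest = order[1:]                           # step 2: drop overlapping boxes
        order = rest[[iou(boxes[best], boxes[r]) < iou_thr for r in rest]]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 58, 62], [100, 100, 150, 160]])
kept = nms(boxes, np.array([0.9, 0.7, 0.8]))       # keeps boxes 0 and 2
```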
S35, drawing boxes and labels in the image and displaying the confidence of each detection target:
The method displays the position and category of each detection target quickly and intuitively by drawing boxes and labels, and shows the detection confidence, further improving the visualization. The implementation steps are: 1. For each detected target, compute its bounding-box coordinates. 2. Draw the box in the image. 3. Mark the target's label beside the box. 4. Display the detection confidence beside the label, usually as a percentage.
S4, extracting edge information of the cage netting:
In the deep sea cage netting detection method based on an image processing algorithm, edge information is extracted to detect the position and shape of the netting, enabling its accurate identification and classification. A flow chart of extracting the cage netting edge information is shown in fig. 4.
S41, removing noise from the image with Gaussian filtering:
The invention removes noise with Gaussian filtering to improve the accuracy and reliability of netting detection. The implementation steps are: 1. Filter the image with a Gaussian filter to remove noise and smooth the image. 2. Choose a suitable Gaussian kernel size and standard deviation; in general these must be tuned for the application scenario and noise level. 3. Adjust the kernel size and standard deviation dynamically with an adaptive Gaussian filtering algorithm.
S42, determining edge positions in the image using the Sobel operator and a gradient threshold:
The invention determines edge positions with the Sobel operator and a gradient threshold, specifically: 1. Compute the image's gradient magnitude G with the Sobel operator: convolve the image with the Sobel kernels to obtain the x- and y-direction gradients Gx and Gy. 2. Binarize the gradient magnitude G(x, y) against the gradient threshold T, setting pixels with G(x, y) < T to 0. 3. Judge whether each pixel (x, y) belongs to the deep sea cage netting from its position and the binarization result of the surrounding pixels.
S43, refining the edge positions with a non-maximum suppression algorithm, keeping the pixel with the largest gradient and filtering out pixels that are not adjacent to it or fall below the gradient threshold:
Non-maximum suppression refines the edges so that only maximum-gradient pixels remain. The specific steps are: 1. Divide the orientation circle into 8 sectors of 45° each. 2. For each pixel at an edge position, convert the angle of its gradient direction from the global coordinate system into an angle relative to the corresponding sector. 3. From that relative angle, determine the two neighboring sectors the pixel lies between and their weights. 4. For each pixel, compare its gradient value with those of its neighbors in the two adjacent sectors, and keep the pixel if its gradient is the largest; filter it out if its gradient falls below the set threshold, and also filter it out if it is not adjacent to a pixel in a neighboring sector, which is one of the important characteristics of non-maximum suppression. 5. Take the pixels that survive non-maximum suppression as the final edge positions.
S44, applying high- and low-threshold processing to the edge pixels, keeping pixels above the high threshold, filtering out pixels below the low threshold, and keeping in-between pixels only when adjacent to a high-threshold pixel:
In the invention, the edge pixels are processed with a high threshold and a low threshold to detect the deep sea cage netting accurately. The specific steps are: 1. Threshold the edge pixels refined by non-maximum suppression, using the high and low thresholds to distinguish edge pixels of different intensities. 2. For each pixel, if its intensity exceeds the high threshold, mark it as an edge pixel and set its value to 1, indicating it lies on the netting of the deep sea cage. If its value is below the low threshold, mark it as a non-edge pixel and set its value to 0, indicating it lies outside the netting. 3. For an edge pixel between the two thresholds, mark it as an edge pixel with value 1 if it is adjacent to a high-threshold pixel, adjacency meaning that at least one of its 8 neighbors exceeds the high threshold; otherwise mark it as a non-edge pixel with value 0, indicating it lies outside the netting. 4. Finally, this thresholding accurately identifies the deep sea cage netting region in the image for further analysis, localization and control.
S45, connecting the retained edge pixels into an unbroken curve with a curve fitting method to represent the edge of the cage netting:
In this patent, the retained edge pixels are connected into an unbroken curve by curve fitting to represent the netting edge. The specific steps are: 1. Feed the retained edge pixels into a curve fitting algorithm as discrete points on the curve. 2. Choose a fitting method, such as least-squares or least-squares quadratic curve fitting, according to the requirements and the desired accuracy. 3. Connect the edge pixels into a curve through the fitting algorithm to represent the edge of the deep sea cage netting. 4. For incomplete or broken runs of edge pixels, use interpolation or similar techniques to fill or repair them and keep the curve intact.
S5, performing line-segment processing on the cage netting edge information:
In the invention, the purpose of line-segment processing is to represent the edge information of the deep sea cage netting as a set of line segments, eliminating noise and interference in the edge information and improving the accuracy and robustness of the detection algorithm. A flow chart of the line-segment processing of the cage netting edge information is shown in fig. 5.
S51, performing edge detection on the image with the Canny algorithm and extracting the edges of the cage netting:
First, the pixels of the image are divided into edge pixels and non-edge pixels. Next, the edge pixels are connected into continuous edge lines. Finally, non-maximum edge pixels are filtered out so that the edge lines become clearer and finer.
S52, connecting the detected edges with feature-point matching, fitting or tracking methods to form continuous, closed edges:
First, matching can be performed with different methods, such as the SIFT, SURF or ORB algorithms. Next, matching the feature points yields edge lines that are continuous in shape but not yet closed. Finally, a fitting algorithm converts these discontinuous edge lines into smooth, continuous curves.
S53, detecting line segments on the edges with the Hough transform or a maximum suppression method, and filtering out noise and irrelevant lines:
First, all straight lines in the edge image are detected by the Hough transform. Next, the inclination angle and length of every detected line are computed and unnecessary ones are filtered out. Finally, similar segments are merged into one by maximum suppression, the segments matching the template in size and shape are retained, and irrelevant lines are filtered out.
S54, fitting curved edges into straight lines or polyline segments with a line fitting algorithm:
Common line fitting algorithms include the least-squares method and the RANSAC algorithm; in practice, whichever suits the application can be chosen. The resulting straight or polyline segments are matched against a template to extract the shape and structure of the deep sea cage.
S55, converting the line-segment processing results into data and images:
Each segment can be represented by the coordinates of its start and end points, or by parameters such as its length, direction and position. In deep sea cage netting detection, the detected segments must be converted into images matching the shape of the deep sea cage.
S6, analyzing the cage netting line-segment information:
The invention analyzes the netting's line-segment information to detect and identify the shape and structure of the deep sea cage netting quickly and accurately, providing safer, more efficient and more user-friendly deep sea farming and underwater inspection services. A flow chart of analyzing the cage netting line-segment information is shown in fig. 6.
S61, separating specific structures of the cage netting from the image with image segmentation techniques:
Image segmentation techniques include region-based, edge-based and threshold-based methods. Region-based segmentation suits netting images with some spatial continuity and simple texture; edge-based segmentation suits netting images with distinct edges; and threshold-based segmentation suits netting images with a narrow distribution of pixel values.
S62, extracting representative features of each specific structure with the SIFT, SURF or ORB algorithm:
SIFT, SURF and ORB are commonly used feature-point extraction algorithms for detecting local features in images. In the deep sea cage netting detection method based on image processing algorithms, they can be used to extract representative features of each specific structure.
S63, training a classification model with supervised learning, treating defect detection as a classification problem to distinguish normal parts of the netting from defective parts:
To separate defective parts from normal parts of the deep sea cage netting, a classification model is trained with supervised learning. During training, pictures of the netting serve as input and labels marking defective and normal parts serve as output. Algorithms suited to classification, such as support vector machines (SVMs), decision trees or random forests, can be selected for model training.
S64, jointly considering defect size, shape and texture, and detecting and localizing defects with morphological operations and texture feature analysis:
The invention separates defective regions from normal regions by setting a suitable threshold, then isolates the connected components of the defective regions with morphological operations. Texture features of each defective region, such as LBP, are then computed to localize the defect and quantify its size and shape.
S65, analyzing the type, number and positions of the netting defects:
The defect type is determined by analyzing texture, shape and color features of the defective region; the defect situation across the netting is assessed by counting the defective regions; and the netting area containing each defect is determined by detecting the region's position.
S7, visualizing the netting defect detection results:
Visualizing the defect detection results aims to provide intuitive, clear results and targeted maintenance guidance, so users can understand the netting's defect condition more deeply and act on it. It also facilitates follow-up analysis and research and helps improve the design and use of deep sea cages. A flow chart of visualizing the netting defect detection results is shown in fig. 7.
S71, using Python's OpenCV library to overlay and fuse the analysis results with the original image, generating a visualized image or video:
The analysis results are overlaid on the original image with the OpenCV library to generate a visualized image or video, as follows: 1. Load the image with OpenCV's cv2.imread() function, then resize it with cv2.resize(). 2. Preprocess with Gaussian filtering, thresholding, morphological operations and so on, then extract and compute features with suitable algorithms such as edge detection, contour detection and binary morphology. 3. Fuse the two images with OpenCV's cv2.addWeighted() function. 4. Save the images to files and stitch multiple images into a video with OpenCV's VideoWriter class.
S72, marking defect regions with rectangular boxes, polygon fills, text labels or color maps chosen according to the defect type and position, improving the visual effect:
Annotation is implemented with functions from an image processing library such as OpenCV. For example, cv2.rectangle() draws a box around a defective region, cv2.fillPoly() draws a polygon fill inside it, cv2.putText() adds a text label to it, and cv2.applyColorMap() applies a color map to it.
S73, displaying the image or video on a screen or device through a graphical interface or dynamic interaction, or accessing and sharing it remotely via a network or storage medium:
In the invention, to display images or video and let the user interact with them, a GUI is built with a Python GUI library (e.g., Tkinter, wxPython or PyQt) that shows the image or video in a window. Images and video are loaded with OpenCV's cv2.imread() and cv2.VideoCapture() functions and then displayed on screen with GUI components such as labels and canvases. For remote access and sharing, the images or video are published over a network protocol (e.g., HTTP or FTP) or saved to files in a suitable format (e.g., jpg, png or mp4).
S74, saving the visualization results and defect information in standard picture and video formats:
The visualization results are saved in a standard picture format with OpenCV's cv2.imwrite() function and in a standard video format with OpenCV's cv2.VideoWriter class.
The above embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, the scope of which is defined by the claims. Various modifications and equivalent arrangements of this invention will occur to those skilled in the art, and are intended to be within the spirit and scope of the invention.

Claims (3)

1. A deep sea cage netting detection method based on an image processing algorithm, characterized by comprising the following steps: S1, acquiring a color image of the deep sea cage netting collected by a sensor;
S2, preprocessing the color image obtained in step S1;
S3, performing target detection on the cage netting in the image obtained in S2;
S4, extracting edge information of the cage netting;
S5, performing line-segment processing on the cage netting edge information;
S6, analyzing the cage netting line-segment information;
S7, visualizing the netting defect detection results;
in step S7, visualizing the netting defect detection results further comprises the following steps:
S71, using Python's OpenCV library to overlay and fuse the analysis results with the original image, generating a visualized image or video;
S72, marking defect regions with rectangular boxes, polygon fills, text labels or color maps chosen according to the defect type and position, improving the visual effect;
S73, displaying the image or video on a screen or device through a graphical interface or dynamic interaction, or accessing and sharing it remotely via a network or storage medium;
S74, saving the visualization results and defect information in standard picture and video formats;
in step S4, extracting edge information of the cage netting further comprises the following steps:
S41, removing noise from the image with Gaussian filtering;
S42, determining edge positions in the image using the Sobel operator and a gradient threshold;
S43, refining the edge positions with a non-maximum suppression algorithm, keeping the pixel with the largest gradient and filtering out pixels that are not adjacent to it or fall below the gradient threshold;
S44, applying high- and low-threshold processing to the edge pixels: pixels above the high threshold are kept, pixels below the low threshold are filtered out, and pixels in between are kept when adjacent to a high-threshold pixel and filtered out otherwise;
S441, thresholding the edge pixels refined by the non-maximum suppression algorithm, using a high threshold and a low threshold to distinguish edge pixels of different intensities;
S442, for each pixel, if its intensity value is greater than the high threshold, marking it as an edge pixel and setting its value to 1, indicating that it lies on the netting of the deep sea cage; if its value is less than the low threshold, marking it as a non-edge pixel and setting its value to 0, indicating that it lies outside the netting;
S443, for edge pixels between the high and low thresholds, marking them as edge pixels with value 1 if they are adjacent to a high-threshold pixel, where adjacency means that at least one of the 8 neighboring pixels exceeds the high threshold; if not adjacent to any high-threshold pixel, marking them as non-edge pixels with value 0, indicating that they lie outside the netting;
S444, through this thresholding, accurately identifying the deep sea cage netting region in the image for further analysis, localization and control operations;
S45, connecting the retained edge pixels into an unbroken curve with a curve fitting method to represent the edge of the cage netting;
in step S5, performing line-segment processing on the cage netting edge information further comprises the following steps:
S51, performing edge detection on the image with the Canny algorithm and extracting the edges of the cage netting;
S52, connecting the detected edges with feature-point matching, fitting or tracking methods to form continuous, closed edges;
S53, detecting line segments on the edges with the Hough transform or a maximum suppression method, and filtering out noise and irrelevant lines;
S54, fitting curved edges into straight lines or polyline segments with a line fitting algorithm;
S55, converting the line-segment processing results into data and images;
in step S6, analyzing the cage netting line-segment information further comprises the following steps:
S61, separating specific structures of the cage netting from the image with image segmentation techniques;
S62, extracting representative features of each specific structure with the SIFT, SURF or ORB algorithm;
S63, training a classification model with supervised learning, treating defect detection as a classification problem to distinguish normal parts of the netting from defective parts;
S64, jointly considering defect size, shape and texture, and detecting and localizing defects with morphological operations and texture feature analysis;
S65, analyzing the type, number and positions of the netting defects.
2. The deep sea cage netting detection method based on an image processing algorithm according to claim 1, wherein preprocessing the color image in step S2 further comprises the following steps:
S21, removing point, line and plane noise from the color image using median filtering or mean filtering;
S22, converting the color image into a grayscale image using convolution or a weighted-average method;
S23, improving image contrast in the deep sea environment using histogram equalization;
S24, applying brightness compensation or color balance to the image using a Laplacian enhancement method;
S25, correcting the position of the cage netting by adjusting the image angle.
3. The deep sea cage netting detection method based on an image processing algorithm according to claim 1, wherein performing target detection on the cage netting in the image in step S3 further comprises the following steps:
S31, extracting key features from the image with the SIFT, SURF or ORB algorithm;
S32, generating candidate regions containing the detection target using image region segmentation and selective search;
S33, classifying and localizing the candidate regions with a convolutional neural network to judge whether they contain the detection target, and outputting the bounding-box coordinates of the detection target;
S34, filtering overlapping detection boxes with a non-maximum suppression algorithm and removing low-confidence detection boxes;
S35, drawing boxes and labels in the image and displaying the confidence of each detection target.
CN202310504538.5A 2023-05-08 2023-05-08 Deep sea cage and netting detection method based on image processing algorithm Active CN116228757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310504538.5A CN116228757B (en) 2023-05-08 2023-05-08 Deep sea cage and netting detection method based on image processing algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310504538.5A CN116228757B (en) 2023-05-08 2023-05-08 Deep sea cage and netting detection method based on image processing algorithm

Publications (2)

Publication Number Publication Date
CN116228757A CN116228757A (en) 2023-06-06
CN116228757B 2023-08-29

Family

ID=86589544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310504538.5A Active CN116228757B (en) 2023-05-08 2023-05-08 Deep sea cage and netting detection method based on image processing algorithm

Country Status (1)

Country Link
CN (1) CN116228757B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117474912A (en) * 2023-12-27 2024-01-30 浪潮软件科技有限公司 Road section gap analysis method and model based on computer vision


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9195901B2 (en) * 2012-04-10 2015-11-24 Victor KAISER-PENDERGRAST System and method for detecting target rectangles in an image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047583A (en) * 2019-12-23 2020-04-21 大连理工大学 Underwater netting system damage detection method based on machine vision
CN111882555A (en) * 2020-08-07 2020-11-03 中国农业大学 Net detection method, device, equipment and storage medium based on deep learning
CN112163517A (en) * 2020-09-27 2021-01-01 广东海洋大学 Underwater imaging fish net damage identification method and system based on deep learning
CN112529853A (en) * 2020-11-30 2021-03-19 南京工程学院 Method and device for detecting damage of netting of underwater aquaculture net cage
CN114419533A (en) * 2021-12-09 2022-04-29 南方海洋科学与工程广东省实验室(湛江) Deepwater netting damage identification method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on application techniques of image tracking and analysis in mariculture experiments; Jiang Dandan; China Master's Theses Full-text Database (electronic journal); Vol. 2015, No. 9; full text *

Also Published As

Publication number Publication date
CN116228757A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
US20220172348A1 (en) Information processing device, information processing method, and storage medium
CN113781402B (en) Method and device for detecting scratch defects on chip surface and computer equipment
CN113592845A (en) Defect detection method and device for battery coating and storage medium
US20080193020A1 (en) Method for Facial Features Detection
CN107103320B (en) Embedded medical data image identification and integration method
CN108596102B (en) RGB-D-based indoor scene object segmentation classifier construction method
CN108009554A (en) A kind of image processing method and device
US20100008576A1 (en) System and method for segmentation of an image into tuned multi-scaled regions
CN110717489A (en) Method and device for identifying character area of OSD (on screen display) and storage medium
CN111008961B (en) Transmission line equipment defect detection method and system, equipment and medium thereof
CN108229342B (en) Automatic sea surface ship target detection method
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN109190456B (en) Multi-feature fusion overlook pedestrian detection method based on aggregated channel features and gray level co-occurrence matrix
CN116228757B (en) Deep sea cage and netting detection method based on image processing algorithm
WO2019204577A1 (en) System and method for multimedia analytic processing and display
CN112883881B (en) Unordered sorting method and unordered sorting device for strip-shaped agricultural products
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
Petraglia et al. Pipeline tracking and event classification for an automatic inspection vision system
Zhu et al. Automatic object detection and segmentation from underwater images via saliency-based region merging
CN112633274A (en) Sonar image target detection method and device and electronic equipment
KR20190059083A (en) Apparatus and method for recognition marine situation based image division
CN104268550A (en) Feature extraction method and device
CN115131355B (en) Intelligent method for detecting waterproof cloth abnormity by using electronic equipment data
Dulecha et al. Crack detection in single-and multi-light images of painted surfaces using convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant