CN116721303B - Unmanned aerial vehicle fish culture method and system based on artificial intelligence - Google Patents

Unmanned aerial vehicle fish culture method and system based on artificial intelligence

Info

Publication number
CN116721303B
CN116721303B (application CN202311007901.9A)
Authority
CN
China
Prior art keywords
fish
formula
image
calculated
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311007901.9A
Other languages
Chinese (zh)
Other versions
CN116721303A (en)
Inventor
侯昕昊
杨森
王莹莹
王瑞雪
周骏
孙欣怡
裘伟豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Technology filed Critical Tianjin University of Technology
Priority to CN202311007901.9A
Publication of CN116721303A
Application granted
Publication of CN116721303B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/54: Extraction of image or video features relating to texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/05: Underwater scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/17: Terrestrial scenes taken from planes or by drones
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81: Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle (UAV) fish farming method and system based on artificial intelligence. The method comprises the following steps: data acquisition, texture feature extraction, color feature extraction, shape feature extraction, input parameter determination, and multi-species fish state classification. The invention belongs to the technical field of intelligent aquaculture. By improving the calculation formulas of the first and second edge detection operators, the formula of the final edge detection operator is improved and edge detection becomes more accurate, which raises the quality of the extracted shape features. Grey relational analysis is used to determine the input parameters, strengthening the correlation between the input parameters and the experimental results and improving the convergence speed and prediction accuracy of the model. By continually adjusting the inertia weight, the particles move toward better search regions, avoiding the problem of being trapped in local minima and failing to find a better global solution.

Description

An artificial-intelligence-based unmanned aerial vehicle (UAV) fish farming method and system

Technical Field

The invention belongs to the technical field of intelligent aquaculture, and specifically relates to a UAV fish farming method and system based on artificial intelligence.

Background

Image-based classification models generally extract feature information with a combination of edge detection and corner extraction, and the extracted features are then used to build the classifier. Existing image processing methods, however, suffer from three technical problems: image edges are located inaccurately during shape feature extraction; an excess of input parameters causes the classification model to overfit; and the classification algorithm easily falls into local minima and cannot find a better global solution.

Summary of the Invention

In view of the above, to overcome the defects of the prior art, the present invention provides a UAV fish farming method and system based on artificial intelligence. To address the inaccurate localization of image edges during shape feature extraction, the invention improves the calculation formulas of the first and second edge detection operators, and thereby the formula of the final edge detection operator, so that edges are located more accurately and edge detection accuracy rises, improving the quality of the extracted shape features. To address overfitting caused by too many input parameters, the invention determines the input parameters with grey relational analysis, strengthening the correlation between the input parameters and the experimental results and improving the convergence speed and prediction accuracy of the model. To address the tendency of the classification algorithm to fall into local minima and miss a better global solution, the invention continually adjusts the inertia weight so that the optimization parameters move toward better search regions.

The technical solution adopted by the present invention is as follows. The invention provides a UAV fish farming method based on artificial intelligence, the method comprising the following steps:

Step S1: data acquisition. Fish images and their corresponding labels are collected, the labels being the fish species and growth state; the collected images serve as the fish images for the subsequent steps;

Step S2: texture feature extraction. Grayscale values are computed from the pixel values of the R, G and B channels of the fish image; the gray-level co-occurrence matrix and the probability of each pixel pair are then computed; finally, the texture features are obtained by computing contrast, energy, entropy and uniformity;

Step S3: color feature extraction. The fish image is converted to the HSV color space, the HSV space is divided into a number of intervals, the color histogram is computed, and the color features are obtained by computing the mean, variance, median and standard deviation;

Step S4: shape feature extraction. An improved edge detection operator is computed from a first and a second edge detection operator; the final image edge is obtained by combining the small-scale and large-scale image edges; polygon fitting is then performed, and the shape features are obtained by computing contour length, contour area, center distance and eccentricity;

Step S5: input parameter determination. A classification data set is constructed; a comparison matrix is built by setting a reference sequence; a non-dimensionalized matrix is obtained by non-dimensionalizing the data; the grey relational degree is computed from the grey relational coefficients; and the input parameters are finally determined;

Step S6: multi-species fish state classification. A training data set and a test data set are constructed; the positions and velocities of the optimization parameters are initialized; the parameters of the multi-species fish state classification model are generated and the model is trained. The individual best positions and the global best position are initialized; the velocities, positions and fitness values of the optimization parameters are updated; the inertia weight and the individual and global best positions are then updated; and the final multi-species fish state classification model is determined from an evaluation threshold. The UAV collects fish images in real time, and feeding is carried out according to the fish species and growth state output by the model.

Further, in step S2, extracting texture features specifically comprises the following steps:

Step S21: compute grayscale values. The grayscale value corresponding to the R, G and B channel values of each pixel of the fish image is computed and assigned to that pixel, yielding the grayscale image. The formula used is:

A = 0.299*R + 0.587*G + 0.114*B;

where A is the grayscale value of each pixel; R, G and B are the pixel values of the red, green and blue channels; and 0.299, 0.587 and 0.114 are the corresponding weighting coefficients;
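As an illustration, a minimal NumPy sketch of this weighted grayscale conversion (the function and variable names are ours, not the patent's):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale with A = 0.299R + 0.587G + 0.114B."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights
```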

Step S22: compute the gray-level co-occurrence matrix, using the formula:

G(i, j, δr, δc) = ∑(m=1..Nr) ∑(n=1..Nc) 1{ I(m, n) = i and I(m+δr, n+δc) = j };

where G(i, j, δr, δc) is the gray-level co-occurrence matrix; i and j are gray levels; δr and δc are the offsets of the neighborhood pixel in the row and column directions; Nr and Nc are the numbers of rows and columns of the grayscale image; I(m, n) is the grayscale value of the pixel in row m, column n of the grayscale image; and 1{·} equals 1 when the condition holds and 0 otherwise;

Step S23: compute the probabilities, using the formula:

P(i, j) = Nji / N1;

where P(i, j) is the probability in the gray-level co-occurrence matrix of gray level i being the neighborhood pixel with gray level j as the center pixel, Npq is the number of occurrences with p as the center pixel and q as the neighborhood pixel, and N1 is the sum of all elements of the gray-level co-occurrence matrix;
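A compact sketch of the co-occurrence counting and normalization under the definitions above (the default offset and gray-level count are illustrative assumptions):

```python
import numpy as np

def glcm_probability(gray: np.ndarray, dr: int = 0, dc: int = 1, levels: int = 256) -> np.ndarray:
    """Count co-occurring gray-level pairs at offset (dr, dc), then normalize to P(i, j)."""
    img = gray.astype(np.int64)
    G = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for m in range(rows):
        for n in range(cols):
            mm, nn = m + dr, n + dc
            if 0 <= mm < rows and 0 <= nn < cols:
                G[img[m, n], img[mm, nn]] += 1  # (center level, neighbor level) pair
    return G / G.sum()  # P = counts divided by N1, the sum of all elements
```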

Step S24: compute the texture features, as follows:

Step S241: compute the contrast, using the formula:

C = ∑i ∑j (i − j)^2 * P(i, j);

where C is the contrast between pixels in the image;

Step S242: compute the energy, using the formula:

D = ∑i ∑j P(i, j)^2;

where D is the energy;

Step S243: compute the entropy, using the formula:

E = −∑i ∑j P(i, j) * log(P(i, j));

where E is the entropy;

Step S244: compute the uniformity, using the formula:

F = ∑i ∑j P(i, j) / (1 + |i − j|);

where F is the uniformity.
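The four statistics read directly off P; a sketch (reusing the glcm_probability helper above):

```python
import numpy as np

def texture_features(P: np.ndarray) -> dict:
    """Contrast, energy, entropy and uniformity of a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    nz = P > 0  # avoid log(0) in the entropy term
    return {
        "contrast":   float(np.sum((i - j) ** 2 * P)),
        "energy":     float(np.sum(P ** 2)),
        "entropy":    float(-np.sum(P[nz] * np.log(P[nz]))),
        "uniformity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }
```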

Further, in step S3, extracting color features specifically comprises the following steps:

Step S31: convert the fish image to the HSV color space, as follows:

Step S311: normalization. The RGB values of the RGB color image are normalized to [0, 1];

Step S312: compute the hue, using the formula:

H = 0°, if Lmax = Lmin;
H = 60° * (G − B) / (Lmax − Lmin) mod 360°, if Lmax = R;
H = 60° * (B − R) / (Lmax − Lmin) + 120°, if Lmax = G;
H = 60° * (R − G) / (Lmax − Lmin) + 240°, if Lmax = B;

where H is the hue with value range [0°, 360°], and Lmax and Lmin are the maximum and minimum of the three color channels R, G and B;

Step S313: compute the saturation, using the formula:

S = 0 if Lmax = 0, otherwise S = (Lmax − Lmin) / Lmax;

where S is the saturation with value range [0, 1];

Step S314: compute the value (brightness), using the formula:

V = Lmax;

where V is the brightness with value range [0, 1];
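A direct transcription of this conversion as a sketch (in practice colorsys or OpenCV would be used instead):

```python
def rgb_to_hsv(r: float, g: float, b: float) -> tuple:
    """Convert normalized RGB in [0, 1] to (H in degrees, S, V) per the formulas above."""
    lmax, lmin = max(r, g, b), min(r, g, b)
    delta = lmax - lmin
    if delta == 0:
        h = 0.0
    elif lmax == r:
        h = (60.0 * (g - b) / delta) % 360.0
    elif lmax == g:
        h = 60.0 * (b - r) / delta + 120.0
    else:
        h = 60.0 * (r - g) / delta + 240.0
    s = 0.0 if lmax == 0 else delta / lmax
    return h, s, lmax  # V = Lmax
```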

Step S32: divide the HSV color space into intervals. The hue H is divided evenly into 24 intervals, and the saturation S and the value V are each divided evenly into 10 intervals;

Step S33: compute the color histogram. Every pixel of the image is traversed and the count of the color-space interval it falls into is accumulated, yielding the color histogram;
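A sketch of the 24 x 10 x 10 binning (the bin layout follows the text; the flattening order is our choice):

```python
import numpy as np

def hsv_histogram(hsv: np.ndarray) -> np.ndarray:
    """Accumulate an H x W x 3 HSV image (H in degrees, S and V in [0, 1])
    into a flattened 24*10*10 color histogram."""
    h_bin = np.clip((hsv[..., 0] / 360.0 * 24).astype(int), 0, 23)
    s_bin = np.clip((hsv[..., 1] * 10).astype(int), 0, 9)
    v_bin = np.clip((hsv[..., 2] * 10).astype(int), 0, 9)
    hist = np.zeros((24, 10, 10), dtype=np.int64)
    np.add.at(hist, (h_bin, s_bin, v_bin), 1)  # count pixels per interval
    return hist.ravel()
```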

Step S34: compute the color features, as follows:

Step S341: compute the mean, using the formula:

μ = (1/N2) ∑(i=1..N2) Mi;

where μ is the mean, Mi is the occurrence frequency of the i-th pixel value in the color histogram, and N2 is the total number of pixel values in the color histogram;

Step S342: compute the variance, using the formula:

σ^2 = (1/N2) ∑(i=1..N2) (Mi − μ)^2;

where σ^2 is the variance;

Step S343: compute the median. The pixel values in the color histogram are sorted in ascending order and the value in the middle position is taken as the median;

Step S344: compute the standard deviation, using the formula:

σ = sqrt(σ^2);

where σ is the standard deviation.
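These statistics over the histogram entries are one-liners; a sketch:

```python
import numpy as np

def color_features(hist: np.ndarray) -> dict:
    """Mean, variance, median and standard deviation of the color histogram entries."""
    return {
        "mean":     float(hist.mean()),
        "variance": float(hist.var()),
        "median":   float(np.median(hist)),
        "std":      float(hist.std()),
    }
```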

Further, in step S4, extracting shape features specifically comprises the following steps:

Step S41: image denoising. The grayscale image obtained in step S21 is denoised, using the formula:

g(x, y) = (1/k) ∑(i=−r..r) ∑(j=−r..r) ωij * f(x+i, y+j);

where g(x, y) is the denoised grayscale image, f(x, y) is the original grayscale image, k is the normalization coefficient (the sum of the filter weights), r is the radius of the Gaussian filter, and ωij are the weights of the Gaussian filter;
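A sketch of this normalized Gaussian smoothing (the kernel construction is the standard one; σ is an arbitrary choice):

```python
import numpy as np

def gaussian_denoise(f: np.ndarray, r: int = 2, sigma: float = 1.0) -> np.ndarray:
    """Smooth image f with a (2r+1) x (2r+1) Gaussian kernel normalized by k = sum of weights."""
    ax = np.arange(-r, r + 1)
    ii, jj = np.meshgrid(ax, ax, indexing="ij")
    w = np.exp(-(ii ** 2 + jj ** 2) / (2 * sigma ** 2))
    w /= w.sum()  # the 1/k normalization
    padded = np.pad(f.astype(np.float64), r, mode="edge")
    g = np.zeros_like(f, dtype=np.float64)
    for di in range(2 * r + 1):          # accumulate the weighted shifted copies
        for dj in range(2 * r + 1):
            g += w[di, dj] * padded[di:di + f.shape[0], dj:dj + f.shape[1]]
    return g
```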

Step S42: compute the first edge detection operator, using the formula:

Y1i = (g(x, y) ⊕ bi) • bi − g(x, y);

where Y1i is the first edge detection operator, bi are structuring elements in different directions with i = 1, 2, …, 8, ⊕ is the XOR operator, and • is the dot-product operator between two vectors;

Step S43: compute the second edge detection operator, using the formula:

Y2i = g(x, y) − (g(x, y) Θ bi) ⊙ bi;

where Y2i is the second edge detection operator, Θ is the logical OR operator, and ⊙ is the bitwise product operator between two vectors;

Step S44: compute the improved edge detection operator, using the formula:

Yi = Y1i + Y2i + Yimin;

where Yi is the improved edge detection operator and Yimin = min{Y1i, Y2i} is the edge minimum;
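The ⊕/Θ notation mirrors morphological dilation and erosion with structuring elements; a sketch under that reading (an interpretation of the operators, not the patent's literal definition):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def improved_edge(g: np.ndarray, structure: np.ndarray) -> np.ndarray:
    """Improved operator Y = Y1 + Y2 + min(Y1, Y2), reading the first and second
    operators as dilation and erosion residues of the grayscale image g."""
    y1 = grey_dilation(g, footprint=structure) - g   # first edge detection operator
    y2 = g - grey_erosion(g, footprint=structure)    # second edge detection operator
    return y1 + y2 + np.minimum(y1, y2)
```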

Step S45: compute the small-scale image edge. Edge detection is performed with the 3*3 structuring elements bi (i = 1, 2, 3, 4), using the formula:

Q1 = (1/4) ∑(i=1..4) Yi;

where Q1 is the small-scale image edge;

Step S46: compute the large-scale image edge. Edge detection is performed with the 5*5 structuring elements bi (i = 5, 6, 7, 8), using the formula:

Q2 = (1/4) ∑(i=5..8) Yi;

where Q2 is the large-scale image edge;

Step S47: compute the final image edge to obtain the edge information of the shape region, using the formula:

Q = (Q1 + Q2) / 2;

where Q is the final image edge;
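Continuing the sketch with the multi-scale fusion, under the equal-weight averaging assumed above (reusing improved_edge from the previous sketch):

```python
import numpy as np

def fused_edges(g: np.ndarray, small_elems: list, large_elems: list) -> np.ndarray:
    """Average the improved-operator responses over four 3x3 and four 5x5
    structuring elements, then fuse the two scales (equal weights assumed)."""
    q1 = np.mean([improved_edge(g, b) for b in small_elems], axis=0)  # small-scale edge Q1
    q2 = np.mean([improved_edge(g, b) for b in large_elems], axis=0)  # large-scale edge Q2
    return (q1 + q2) / 2.0                                            # final edge Q
```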

Step S48: polygon fitting. From the edge information obtained in step S47, the contour of the shape region is found and converted into a polygon by polygon fitting;

Step S49: compute the shape features, as follows:

Step S491: compute the contour length, using the formula:

R = ∑(i=1..n) di;

where R is the contour length, n is the number of polygon edges, and di is the length of the i-th edge;

Step S492: compute the contour area, using the formula:

S = (1/2) |∑(i=1..n) (xi * y(i+1) − x(i+1) * yi)|, with (x(n+1), y(n+1)) = (x1, y1);

where S is the contour area and (xi, yi) are the coordinates of the i-th vertex of the polygon;

Step S493: compute the center distance, using the formula:

O = sqrt((xc − xm)^2 + (yc − ym)^2);

where O is the center distance, (xc, yc) are the coordinates of the contour centroid, and (xm, ym) are the coordinates of the contour point closest to the centroid;

Step S494: compute the eccentricity, using the formula:

U = sqrt(1 − (z/e)^2);

where U is the eccentricity, e is the major-axis length of the minimum circumscribed ellipse of the fish image contour, and z is the minor-axis length of that ellipse.
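A sketch of these four shape statistics on a fitted polygon (the ellipse axes would come from something like OpenCV's fitEllipse; here they are passed in, and the vertex mean is used as an approximate centroid):

```python
import numpy as np

def shape_features(poly: np.ndarray, major: float, minor: float) -> dict:
    """Contour length, shoelace area, centroid-to-nearest-point distance and
    eccentricity for an n x 2 array of polygon vertices."""
    nxt = np.roll(poly, -1, axis=0)  # vertex i+1, wrapping back to vertex 1
    length = float(np.sum(np.linalg.norm(nxt - poly, axis=1)))
    area = 0.5 * abs(float(np.sum(poly[:, 0] * nxt[:, 1] - nxt[:, 0] * poly[:, 1])))
    centroid = poly.mean(axis=0)
    center_dist = float(np.min(np.linalg.norm(poly - centroid, axis=1)))
    ecc = float(np.sqrt(1.0 - (minor / major) ** 2))
    return {"length": length, "area": area, "center_dist": center_dist, "eccentricity": ecc}
```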

Further, in step S5, determining the input parameters specifically comprises the following steps:

Step S51: construct the classification data set from the texture features computed in step S2, the color features computed in step S3, the shape features computed in step S4, and the fish images collected in step S1. The feature variables of the texture features are contrast, energy, entropy and uniformity; those of the color features are mean, variance, median and standard deviation; those of the shape features are contour length, contour area, center distance and eccentricity;

Step S52: set the reference sequence. n standard data are selected in advance from the classification data set as evaluation parameters, using the formula:

X0 = (x0(1), x0(2), …, x0(n));

where X0 is the reference sequence and n is the number of evaluation parameters;

Step S53: construct the comparison matrix. Based on the sample data in the classification data set, the comparison order is set, using the formula:

X = [ x1(1) x2(1) … xm(1)
      x1(2) x2(2) … xm(2)
      …
      x1(n) x2(n) … xm(n) ];

where X is the comparison matrix and m is the number of sample data in the classification data set;

Step S54: non-dimensionalize the data, using the formula:

x'p(q) = (xp(q) − xmin) / (xmax − xmin);

where xp(q) is the original datum in column p, row q of the comparison matrix X; x'p(q) is the non-dimensionalized value of xp(q); xmin is the minimum of column p of X; and xmax is the maximum of column p of X;

Step S55: form the non-dimensionalized matrix:

X' = [ x'1(1) x'2(1) … x'm(1)
       x'1(2) x'2(2) … x'm(2)
       …
       x'1(n) x'2(n) … x'm(n) ];

where X' is the non-dimensionalized matrix;

Step S56: compute the grey relational coefficients between the corresponding elements of every sample data sequence and the reference sequence, using the formula:

εp(q) = (Δmin + ρ*Δmax) / (Δp(q) + ρ*Δmax), with Δp(q) = |x0(q) − x'p(q)|, Δmin = min over p, q of Δp(q), and Δmax = max over p, q of Δp(q);

where εp(q) is the grey relational coefficient between the p-th sample data sequence and the reference sequence at the q-th evaluation parameter, and ρ is the resolution coefficient with 0 < ρ < 1;

Step S57: compute the grey relational degree, using the formula:

rp = (1/n) ∑(q=1..n) εp(q);

where rp is the grey relational degree between the p-th sample data sequence and the reference sequence over all evaluation parameters;

Step S58: determine the input parameters. A grey relational degree threshold is preset, and the feature variables whose grey relational degree exceeds the threshold are taken as the input parameters.
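A sketch of the grey relational analysis pipeline (ρ = 0.5 is the conventional choice; the reference sequence x0 is assumed to be already non-dimensionalized, and the selection threshold is illustrative):

```python
import numpy as np

def grey_relational_degree(X: np.ndarray, x0: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Grey relational degree of each column (sample sequence) of X against reference x0.
    X has shape (n_params, m_samples); x0 has shape (n_params,)."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    Xn = (X - xmin) / (xmax - xmin)                   # per-column non-dimensionalization
    delta = np.abs(x0[:, None] - Xn)                  # Δp(q)
    dmin, dmax = delta.min(), delta.max()
    eps = (dmin + rho * dmax) / (delta + rho * dmax)  # grey relational coefficients εp(q)
    return eps.mean(axis=0)                           # rp: average over evaluation parameters

# Illustrative selection: keep feature variables whose degree exceeds a preset threshold.
# selected = [name for name, r in zip(feature_names, grey_relational_degree(X, x0)) if r > 0.6]
```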

Further, in step S6, the multi-species fish state classification specifically comprises the following steps:

Step S61: construct the training data set and the test data set. The dimensions of the feature variables whose grey relational degree does not exceed the threshold are deleted from the classification data set, and the corresponding labels (those collected in step S1) are attached to obtain the sample data; 70% of the sample data are randomly selected as the training set and the remaining 30% as the test set;

Step S62: initialize the optimization parameter positions. A random initial position is generated for each optimization parameter, using the formula:

Y(i, j) = rand(0, 1) * (U(j) − L(j)) + L(j);

where i is the index of the optimization parameter, j is the dimension of Y, Y(i, j) is the position of the i-th optimization parameter in dimension j, rand(0, 1) generates a random number between 0 and 1, U(j) is the upper bound of dimension j, and L(j) is the lower bound of dimension j;

Step S63: initialize the optimization parameter velocities. A random initial velocity is generated for each optimization parameter, using the formula:

V(i, j) = rand(0, 1) * (Vmax(j) − Vmin(j)) + Vmin(j);

where V(i, j) is the velocity of the i-th optimization parameter in dimension j, Vmax(j) is the velocity upper bound of dimension j, and Vmin(j) is the velocity lower bound of dimension j;

Step S64: generate the parameters of the multi-species fish state classification model. For each optimization parameter, a set of model parameters, consisting of one penalty factor and one kernel function parameter, is generated from its current position, using the formulas:

C(i) = 2^Y(i,1);

G(i) = 2^Y(i,2);

where C(i) is the penalty factor of the multi-species fish state classification model and G(i) is its kernel function parameter;

Step S65: train the multi-species fish state classification model. Based on the model parameters determined in step S64 and the training data set constructed in step S61, the model is trained and its weight vectors and bias values are computed, using the formulas:

wi = ∑ ai * ci * ei;

ti = ei − ∑(j=1..n) aj * ej * g(cj, ci);

where wi is the weight vector of the i-th binary classifier, ai is the Lagrange multiplier of the i-th sample, ci is the feature vector of the i-th sample, ei is the corresponding label of the i-th sample, ti is the bias value of the i-th binary classifier, n is the number of training samples, g(ci, cj) is the kernel function, and cj is the feature vector of the j-th sample;
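The w/t formulas are the standard SVM dual-to-primal recovery; a sketch given dual multipliers a, labels e and feature vectors c (an RBF kernel parameterized by G is an assumption consistent with step S64):

```python
import numpy as np

def rbf(ci: np.ndarray, cj: np.ndarray, gamma: float) -> float:
    """Kernel g(ci, cj) = exp(-gamma * ||ci - cj||^2)."""
    return float(np.exp(-gamma * np.sum((ci - cj) ** 2)))

def weight_and_bias(a: np.ndarray, c: np.ndarray, e: np.ndarray, gamma: float, i: int):
    """Recover w = sum_j a_j * e_j * c_j and the bias t_i = e_i - sum_j a_j * e_j * g(c_j, c_i)."""
    w = (a * e) @ c  # weight vector in feature space
    t_i = e[i] - sum(a[j] * e[j] * rbf(c[j], c[i], gamma) for j in range(len(a)))
    return w, t_i
```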

Step S66: compute the fitness value of each optimization parameter. The multi-species fish state classification model trained in step S65 predicts the test data set constructed in step S61, and the fitness value is computed using the formula:

f(i) = −(1/k) ∑(j=1..k) (1/Nj) ∑(l=1..Nj) yjl * log(pjl(i));

where f(i) is the fitness value of the i-th optimization parameter, k is the number of label classes, Nj is the number of samples with the j-th label, yjl is the true label of the l-th sample of the j-th class, and pjl(i) is the probability that the model under the i-th optimization parameter predicts for the l-th sample of the j-th class;

Step S67: initialize the individual best positions and the global best position. The initial position of each optimization parameter from step S62 is taken as its individual best position, and the individual best position of the optimization parameter with the lowest fitness value is taken as the global best position;

Step S68: update the optimization parameter velocities, using the formula:

V(i, j) = h*V(i, j) + d1*rand(0, 1)*(T1(i, j) − Y(i, j)) + d2*rand(0, 1)*(T2(j) − Y(i, j));

where h is the inertia weight, T1(i, j) is the individual best position of the i-th optimization parameter in dimension j, T2(j) is the value of the global best position in dimension j, d1 and d2 are learning factors, and rand(0, 1) generates a random number between 0 and 1;

Step S69: update the optimization parameter positions, using the formula:

Y(i, j) = Y(i, j) + V(i, j);

where Y(i, j) is the position of the i-th optimization parameter in dimension j;

Step S610: update the optimization parameter fitness values;

Step S611: update the inertia weight, using the formula:

h = hmin + (hmax − hmin) * (f − fmin) / (favg − fmin), if f ≤ favg;
h = hmax, if f > favg;

where hmin is the minimum inertia weight, hmax is the maximum inertia weight, f is the current fitness value, fmin is the minimum fitness value, and favg is the average fitness value over all optimization parameters;
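A sketch of one particle-swarm update step with the fitness-adaptive inertia weight above (the learning factors and weight limits are illustrative defaults):

```python
import numpy as np

def pso_step(Y, V, T1, T2, f, f_min, f_avg, d1=2.0, d2=2.0, h_min=0.4, h_max=0.9):
    """One velocity/position update with the fitness-adaptive inertia weight.
    Y, V, T1: (n_particles, n_dims); T2: (n_dims,); f: (n_particles,)."""
    # Particles with below-average fitness (better solutions) get a smaller inertia
    # weight, refining locally; the rest keep h_max and keep exploring.
    h = np.where(f <= f_avg,
                 h_min + (h_max - h_min) * (f - f_min) / (f_avg - f_min + 1e-12),
                 h_max)[:, None]
    r1 = np.random.rand(*Y.shape)
    r2 = np.random.rand(*Y.shape)
    V = h * V + d1 * r1 * (T1 - Y) + d2 * r2 * (T2 - Y)
    return Y + V, V
```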

Step S612: update the individual best positions and the global best position. The individual best position of each optimization parameter is updated according to its fitness value, and the global best position is updated from the individual best positions of all optimization parameters;

Step S613: model determination. An evaluation threshold and a maximum number of iterations are preset. If the fitness value of an optimization parameter falls below the evaluation threshold, the multi-species fish state classification model is built from the current parameters and the method proceeds to step S614; if the maximum number of iterations is reached, the method returns to step S62; otherwise it returns to step S68;

Step S614: classification. The UAV collects fish images in real time and feeds them into the multi-species fish state classification model, and feeding is carried out according to the fish species and growth state output by the model.

The invention further provides a UAV fish farming system based on artificial intelligence, comprising a data acquisition module, a texture feature acquisition module, a color feature acquisition module, a shape feature acquisition module, an input parameter determination module and a multi-species fish state classification module;

the data acquisition module collects fish images and their corresponding labels under various production states and sends the fish images to the texture feature acquisition module and the color feature acquisition module;

the texture feature acquisition module and the color feature acquisition module receive the fish images sent by the data acquisition module, extract the texture features with the gray-level co-occurrence matrix and the color features with the color histogram respectively, and send the extracted features to the input parameter determination module; the texture feature acquisition module also sends the grayscale image to the shape feature acquisition module;

the shape feature acquisition module receives the grayscale image sent by the texture feature acquisition module, improves the final edge detection operator by improving the calculation formulas of the first and second edge detection operators to raise the quality of shape feature extraction, and sends the extracted shape features to the input parameter determination module;

the input parameter determination module receives the texture features, color features and shape features sent by the three feature acquisition modules, determines the input parameters by grey relational analysis, and sends them to the multi-species fish state classification module;

the multi-species fish state classification module receives the input parameters sent by the input parameter determination module and, by continually adjusting the inertia weight, finally builds the multi-species fish state classification model.

The beneficial effects achieved by the present invention with the above scheme are as follows:

(1) To address the inaccurate localization of image edges during shape feature extraction, the invention improves the calculation formulas of the first and second edge detection operators, and thereby the formula of the final edge detection operator, so that image edges are located more accurately and edge detection accuracy rises, improving the quality of the extracted shape features.

(2) To address the overfitting of the classification model caused by too many input parameters, the invention determines the input parameters with grey relational analysis, strengthening the correlation between the input parameters and the experimental results and improving the convergence speed and prediction accuracy of the model.

(3) To address the tendency of the classification algorithm to fall into local minima and miss a better global solution, the invention continually adjusts the inertia weight so that the optimization parameters move toward better search regions.

Brief Description of the Drawings

Fig. 1 is a flow chart of the artificial-intelligence-based UAV fish farming method provided by the present invention;

Fig. 2 is a schematic diagram of the artificial-intelligence-based UAV fish farming system provided by the present invention;

Fig. 3 is a flow chart of step S2;

Fig. 4 is a flow chart of step S3;

Fig. 5 is a flow chart of step S4;

Fig. 6 is a flow chart of step S5;

Fig. 7 is a flow chart of step S6;

Fig. 8 is a schematic diagram of the optimization parameter search positions;

Fig. 9 is a plot of the optimization parameter search curves.

The drawings are provided for further understanding of the present invention and constitute a part of the specification. Together with the embodiments they serve to explain the invention, and they do not limit the invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.

In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings. They are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the invention.

Embodiment 1. Referring to Fig. 1, the present invention provides a UAV fish farming method based on artificial intelligence, comprising the following steps:

Step S1: data acquisition. Fish images and their corresponding labels are collected, the labels being the fish species and growth state; the collected images serve as the fish images for the subsequent steps;

Step S2: texture feature extraction. Grayscale values are computed from the pixel values of the R, G and B channels of the fish image; the gray-level co-occurrence matrix and the probability of each pixel pair are then computed; finally, the texture features are obtained by computing contrast, energy, entropy and uniformity;

Step S3: color feature extraction. The fish image is converted to the HSV color space, the HSV space is divided into a number of intervals, the color histogram is computed, and the color features are obtained by computing the mean, variance, median and standard deviation;

Step S4: shape feature extraction. An improved edge detection operator is computed from a first and a second edge detection operator; the final image edge is obtained by combining the small-scale and large-scale image edges; polygon fitting is then performed, and the shape features are obtained by computing contour length, contour area, center distance and eccentricity;

Step S5: input parameter determination. A classification data set is constructed; a comparison matrix is built by setting a reference sequence; a non-dimensionalized matrix is obtained by non-dimensionalizing the data; the grey relational degree is computed from the grey relational coefficients; and the input parameters are finally determined;

Step S6: multi-species fish state classification. A training data set and a test data set are constructed; the positions and velocities of the optimization parameters are initialized; the parameters of the multi-species fish state classification model are generated and the model is trained. The individual best positions and the global best position are initialized; the velocities, positions and fitness values of the optimization parameters are updated; the inertia weight and the individual and global best positions are then updated; and the final multi-species fish state classification model is determined from an evaluation threshold. The UAV collects fish images in real time, and feeding is carried out according to the fish species and growth state output by the model.

Embodiment 2. Referring to Fig. 1 and Fig. 3, this embodiment is based on the above embodiment. In step S2, extracting texture features specifically comprises the following steps:

Step S21: compute grayscale values. The grayscale value corresponding to the R, G and B channel values of each pixel of the fish image is computed and assigned to that pixel, yielding the grayscale image. The formula used is:

A = 0.299*R + 0.587*G + 0.114*B;

where A is the grayscale value of each pixel; R, G and B are the pixel values of the red, green and blue channels; and 0.299, 0.587 and 0.114 are the corresponding weighting coefficients;

Step S22: compute the gray-level co-occurrence matrix, using the formula:

G(i, j, δr, δc) = ∑(m=1..Nr) ∑(n=1..Nc) 1{ I(m, n) = i and I(m+δr, n+δc) = j };

where G(i, j, δr, δc) is the gray-level co-occurrence matrix; i and j are gray levels; δr and δc are the offsets of the neighborhood pixel in the row and column directions; Nr and Nc are the numbers of rows and columns of the grayscale image; I(m, n) is the grayscale value of the pixel in row m, column n of the grayscale image; and 1{·} equals 1 when the condition holds and 0 otherwise;

Step S23: compute the probabilities, using the formula:

P(i, j) = Nji / N1;

where P(i, j) is the probability in the gray-level co-occurrence matrix of gray level i being the neighborhood pixel with gray level j as the center pixel, Npq is the number of occurrences with p as the center pixel and q as the neighborhood pixel, and N1 is the sum of all elements of the gray-level co-occurrence matrix;

Step S24: compute the texture features, as follows:

Step S241: compute the contrast, using the formula:

C = ∑i ∑j (i − j)^2 * P(i, j);

where C is the contrast between pixels in the image;

Step S242: compute the energy, using the formula:

D = ∑i ∑j P(i, j)^2;

where D is the energy;

Step S243: compute the entropy, using the formula:

E = −∑i ∑j P(i, j) * log(P(i, j));

where E is the entropy;

Step S244: compute the uniformity, using the formula:

F = ∑i ∑j P(i, j) / (1 + |i − j|);

where F is the uniformity.

Embodiment 3. Referring to Fig. 1 and Fig. 4, this embodiment is based on the above embodiments. In step S3, extracting color features specifically comprises the following steps:

Step S31: convert the fish image to the HSV color space, as follows:

Step S311: normalization. The RGB values of the RGB color image are normalized to [0, 1];

Step S312: compute the hue, using the formula:

H = 0°, if Lmax = Lmin;
H = 60° * (G − B) / (Lmax − Lmin) mod 360°, if Lmax = R;
H = 60° * (B − R) / (Lmax − Lmin) + 120°, if Lmax = G;
H = 60° * (R − G) / (Lmax − Lmin) + 240°, if Lmax = B;

where H is the hue with value range [0°, 360°], and Lmax and Lmin are the maximum and minimum of the three color channels R, G and B;

Step S313: compute the saturation, using the formula:

S = 0 if Lmax = 0, otherwise S = (Lmax − Lmin) / Lmax;

where S is the saturation with value range [0, 1];

Step S314: compute the value (brightness), using the formula:

V = Lmax;

where V is the brightness with value range [0, 1];

Step S32: divide the HSV color space into intervals. The hue H is divided evenly into 24 intervals, and the saturation S and the value V are each divided evenly into 10 intervals;

Step S33: compute the color histogram. Every pixel of the image is traversed and the count of the color-space interval it falls into is accumulated, yielding the color histogram;

Step S34: compute the color features, as follows:

Step S341: compute the mean, using the formula:

μ = (1/N2) ∑(i=1..N2) Mi;

where μ is the mean, Mi is the occurrence frequency of the i-th pixel value in the color histogram, and N2 is the total number of pixel values in the color histogram;

Step S342: compute the variance, using the formula:

σ^2 = (1/N2) ∑(i=1..N2) (Mi − μ)^2;

where σ^2 is the variance;

Step S343: compute the median. The pixel values in the color histogram are sorted in ascending order and the value in the middle position is taken as the median;

Step S344: compute the standard deviation, using the formula:

σ = sqrt(σ^2);

where σ is the standard deviation.

Embodiment 4. Referring to Fig. 1 and Fig. 5, this embodiment is based on the above embodiments. In step S4, extracting shape features specifically comprises the following steps:

Step S41: image denoising. The grayscale image obtained in step S21 is denoised, using the formula:

g(x, y) = (1/k) ∑(i=−r..r) ∑(j=−r..r) ωij * f(x+i, y+j);

where g(x, y) is the denoised grayscale image, f(x, y) is the original grayscale image, k is the normalization coefficient (the sum of the filter weights), r is the radius of the Gaussian filter, and ωij are the weights of the Gaussian filter;

Step S42: compute the first edge detection operator, using the formula:

Y1i = (g(x, y) ⊕ bi) • bi − g(x, y);

where Y1i is the first edge detection operator, bi are structuring elements in different directions with i = 1, 2, …, 8, ⊕ is the XOR operator, and • is the dot-product operator between two vectors;

Step S43: compute the second edge detection operator, using the formula:

Y2i = g(x, y) − (g(x, y) Θ bi) ⊙ bi;

where Y2i is the second edge detection operator, Θ is the logical OR operator, and ⊙ is the bitwise product operator between two vectors;

Step S44: compute the improved edge detection operator, using the formula:

Yi = Y1i + Y2i + Yimin;

where Yi is the improved edge detection operator and Yimin = min{Y1i, Y2i} is the edge minimum;

Step S45: compute the small-scale image edge. Edge detection is performed with the 3*3 structuring elements bi (i = 1, 2, 3, 4), using the formula:

Q1 = (1/4) ∑(i=1..4) Yi;

where Q1 is the small-scale image edge;

Step S46: compute the large-scale image edge. Edge detection is performed with the 5*5 structuring elements bi (i = 5, 6, 7, 8), using the formula:

Q2 = (1/4) ∑(i=5..8) Yi;

where Q2 is the large-scale image edge;

Step S47: compute the final image edge to obtain the edge information of the shape region, using the formula:

Q = (Q1 + Q2) / 2;

where Q is the final image edge;

Step S48: polygon fitting. From the edge information obtained in step S47, the contour of the shape region is found and converted into a polygon by polygon fitting;

Step S49: compute the shape features, as follows:

Step S491: compute the contour length, using the formula:

R = ∑(i=1..n) di;

where R is the contour length, n is the number of polygon edges, and di is the length of the i-th edge;

Step S492: compute the contour area, using the formula:

S = (1/2) |∑(i=1..n) (xi * y(i+1) − x(i+1) * yi)|, with (x(n+1), y(n+1)) = (x1, y1);

where S is the contour area and (xi, yi) are the coordinates of the i-th vertex of the polygon;

Step S493: compute the center distance, using the formula:

O = sqrt((xc − xm)^2 + (yc − ym)^2);

where O is the center distance, (xc, yc) are the coordinates of the contour centroid, and (xm, ym) are the coordinates of the contour point closest to the centroid;

Step S494: compute the eccentricity, using the formula:

U = sqrt(1 − (z/e)^2);

where U is the eccentricity, e is the major-axis length of the minimum circumscribed ellipse of the fish image contour, and z is the minor-axis length of that ellipse.

Through the above operations, to address the inaccurate localization of image edges during shape feature extraction, the invention improves the calculation formulas of the first and second edge detection operators, and thereby the formula of the final edge detection operator, so that image edges are located more accurately and edge detection accuracy rises, improving the quality of the extracted shape features.

Embodiment 5. Referring to Fig. 1 and Fig. 6, this embodiment is based on the above embodiments. In step S5, determining the input parameters specifically comprises the following steps:

Step S51: construct the classification data set from the texture features computed in step S2, the color features computed in step S3, the shape features computed in step S4, and the fish images collected in step S1. The feature variables of the texture features are contrast, energy, entropy and uniformity; those of the color features are mean, variance, median and standard deviation; those of the shape features are contour length, contour area, center distance and eccentricity;

Step S52: set the reference sequence. n standard data are selected in advance from the classification data set as evaluation parameters, using the formula:

X0 = (x0(1), x0(2), …, x0(n));

where X0 is the reference sequence and n is the number of evaluation parameters;

Step S53: construct the comparison matrix. Based on the sample data in the classification data set, the comparison order is set, using the formula:

X = [ x1(1) x2(1) … xm(1)
      x1(2) x2(2) … xm(2)
      …
      x1(n) x2(n) … xm(n) ];

where X is the comparison matrix and m is the number of sample data in the classification data set;

Step S54: non-dimensionalize the data, using the formula:

x'p(q) = (xp(q) − xmin) / (xmax − xmin);

where xp(q) is the original datum in column p, row q of the comparison matrix X; x'p(q) is the non-dimensionalized value of xp(q); xmin is the minimum of column p of X; and xmax is the maximum of column p of X;

Step S55: form the non-dimensionalized matrix:

X' = [ x'1(1) x'2(1) … x'm(1)
       x'1(2) x'2(2) … x'm(2)
       …
       x'1(n) x'2(n) … x'm(n) ];

where X' is the non-dimensionalized matrix;

Step S56: compute the grey relational coefficients between the corresponding elements of every sample data sequence and the reference sequence, using the formula:

εp(q) = (Δmin + ρ*Δmax) / (Δp(q) + ρ*Δmax), with Δp(q) = |x0(q) − x'p(q)|, Δmin = min over p, q of Δp(q), and Δmax = max over p, q of Δp(q);

where εp(q) is the grey relational coefficient between the p-th sample data sequence and the reference sequence at the q-th evaluation parameter, and ρ is the resolution coefficient with 0 < ρ < 1;

Step S57: compute the grey relational degree, using the formula:

rp = (1/n) ∑(q=1..n) εp(q);

where rp is the grey relational degree between the p-th sample data sequence and the reference sequence over all evaluation parameters;

Step S58: determine the input parameters. A grey relational degree threshold is preset, and the feature variables whose grey relational degree exceeds the threshold are taken as the input parameters.

Through the above operations, to address the overfitting of the classification model caused by too many input parameters, the invention determines the input parameters with grey relational analysis, strengthening the correlation between the input parameters and the experimental results and improving the convergence speed and prediction accuracy of the model.

实施例六,参阅图1和图7,该实施例基于上述实施例,在步骤S6中,多品种鱼类状态分类具体包括以下步骤:Embodiment 6. Refer to Figures 1 and 7. This embodiment is based on the above embodiment. In step S6, the classification of the status of multiple species of fish specifically includes the following steps:

步骤S61:构建训练数据集和测试数据集，将分类数据集中不大于灰色相关度阈值的特征变量的维度信息删除，并获取数据对应标签，得到样本数据，所述对应标签是步骤S1采集的标签，随机选取70%的样本数据作为训练数据集，其余30%的样本数据作为测试数据集；Step S61: Construct a training data set and a test data set. Delete the dimension information of the feature variables in the classification data set that do not exceed the gray correlation threshold, and obtain the corresponding labels of the data to get the sample data, where the corresponding labels are the labels collected in step S1; randomly select 70% of the sample data as the training data set and the remaining 30% as the test data set;

步骤S62:初始化优化参数位置,对每个优化参数随机生成一个初始位置,所用公式如下:Step S62: Initialize the optimization parameter positions and randomly generate an initial position for each optimization parameter. The formula used is as follows:

Y(i,j)=rand(0,1)*(U(j)-L(j))+L(j);

式中，i是优化参数的编号，j是Y的维度，Y(i,j)是第i个优化参数在第j维的位置，rand(0,1)是生成0到1之间的随机数，U(j)是第j维的上界限制，L(j)是第j维的下界限制；In the formula, i is the number of the optimization parameter, j is the dimension of Y, Y(i,j) is the position of the i-th optimization parameter in the j-th dimension, rand(0,1) is a random number generated between 0 and 1, U(j) is the upper bound of the j-th dimension, and L(j) is the lower bound of the j-th dimension;

步骤S63:初始化优化参数速度,对每个优化参数随机生成一个初始速度,所用公式如下:Step S63: Initialize the optimization parameter speed, and randomly generate an initial speed for each optimization parameter. The formula used is as follows:

V(i,j)=rand(0,1)*(Vmax(j)-Vmin(j))+Vmin(j);

式中,V(i,j)是第i个优化参数在第j维的速度,Vmax(j)是第j维的速度上限,Vmin(j)是第j维的速度下限;In the formula, V (i, j) is the speed of the i-th optimization parameter in the j-th dimension, Vmax (j) is the upper speed limit of the j-th dimension, and Vmin (j) is the lower speed limit of the j-th dimension;
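
A minimal initialization sketch for steps S62 and S63 follows (Python; the particle count, search bounds and velocity bounds are assumptions for illustration, since each position encodes the two hyperparameter exponents used in step S64):

import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 20, 2                               # illustrative sizes
L = np.array([-5.0, -5.0])                             # assumed lower bounds
U = np.array([5.0, 5.0])                               # assumed upper bounds
Vmin, Vmax = -0.5 * (U - L), 0.5 * (U - L)             # assumed velocity bounds

Y = rng.random((n_particles, dim)) * (U - L) + L            # step S62
V = rng.random((n_particles, dim)) * (Vmax - Vmin) + Vmin   # step S63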

步骤S64:生成多品种鱼类状态分类模型参数，对于每个优化参数，根据当前位置生成一组多品种鱼类状态分类模型参数，一组多品种鱼类状态分类模型参数由一个惩罚因子和一个核函数参数组成，所用公式如下：Step S64: Generate multi-species fish status classification model parameters. For each optimization parameter, generate a set of multi-species fish status classification model parameters based on its current position; each set consists of one penalty factor and one kernel function parameter. The formulas used are as follows:

C(i)=2^Y(i,1);

G(i)=2^Y(i,2);

式中,C(i)是多品种鱼类状态分类模型的惩罚因子,G(i)是多品种鱼类状态分类模型的核函数参数;In the formula, C(i) is the penalty factor of the multi-species fish status classification model, and G(i) is the kernel function parameter of the multi-species fish status classification model;
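
Continuing the sketch above, the step S64 mapping from a particle position to one SVM parameter pair is a one-liner per parameter:

C = 2.0 ** Y[:, 0]   # C(i) = 2^Y(i,1), penalty factor
G = 2.0 ** Y[:, 1]   # G(i) = 2^Y(i,2), kernel function parameter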

步骤S65:训练多品种鱼类状态分类模型，基于步骤S64确定的多品种鱼类状态分类模型参数以及步骤S61构建的训练数据集，对多品种鱼类状态分类模型进行训练，计算多品种鱼类状态分类模型的权重向量和偏置值，所用公式如下：Step S65: Train the multi-species fish status classification model. Based on the model parameters determined in step S64 and the training data set constructed in step S61, train the multi-species fish status classification model and calculate its weight vector and bias value. The formulas used are as follows:

wi=∑ai*ci*ei;

ti=ei-∑(j=1 to n)aj*ej*g(ci,cj);

式中,wi是第i个二分类器的权重向量,ai是第i个样本的拉格朗日乘子,ci是第i个样本的特征向量,ei是第i个样本的对应标签,ti是第i个二分类器的偏置值,n是训练样本的数量,g(ci,cj)是核函数,cj是第j个样本的特征向量;In the formula, wi is the weight vector of the i-th binary classifier, ai is the Lagrange multiplier of the i-th sample, ci is the feature vector of the i-th sample, ei is the corresponding label of the i-th sample, ti is the bias value of the i-th binary classifier, n is the number of training samples, g (ci, cj) is the kernel function, and cj is the feature vector of the j-th sample;
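
In practice the dual problem of step S65 need not be solved by hand; the sketch below (an assumption, not the patent's implementation) delegates it to scikit-learn's SVC, whose fitted dual coefficients play the role of the Lagrange multipliers ai above:

from sklearn.svm import SVC

def train_classifier(C_i, G_i, X_train, y_train):
    # RBF-kernel SVM with the penalty factor and kernel parameter of step S64;
    # probability=True enables the class probabilities needed in step S66.
    model = SVC(C=C_i, gamma=G_i, kernel="rbf", probability=True)
    model.fit(X_train, y_train)
    return model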

步骤S66:计算优化参数适应度值,利用步骤S65训练好的多品种鱼类状态分类模型,对步骤S61构建的测试数据集进行预测,并计算适应度值,所用公式如下:Step S66: Calculate the fitness value of the optimization parameters, use the multi-species fish status classification model trained in step S65 to predict the test data set constructed in step S61, and calculate the fitness value. The formula used is as follows:

f(i)=-(1/k)*∑(j=1 to k)[(1/Nj)*∑(l=1 to Nj)yjl*log(pjl(i))];

式中，f(i)是第i个优化参数的适应度值，k是对应标签的种类数量，Nj是第j个对应标签的样本量，yjl是第j个对应标签中第l个样本的真实对应标签，pjl(i)是第j个对应标签中第l个样本在第i个优化参数上的多品种鱼类状态分类模型的预测概率；In the formula, f(i) is the fitness value of the i-th optimization parameter, k is the number of label classes, Nj is the number of samples with the j-th label, yjl is the true label of the l-th sample under the j-th label, and pjl(i) is the probability predicted by the multi-species fish status classification model for the l-th sample under the j-th label at the i-th optimization parameter;
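
A sketch of the step S66 fitness follows, using the cross-entropy reconstruction given above (the per-class averaging is an assumption consistent with the variables Nj and k; model comes from the training sketch after step S65):

import numpy as np

def fitness(model, X_test, y_test):
    proba = model.predict_proba(X_test)        # p_jl(i) for every class j
    per_class = []
    for j, cls in enumerate(model.classes_):
        idx = np.where(y_test == cls)[0]       # samples whose true label is j
        if idx.size == 0:
            continue
        p = np.clip(proba[idx, j], 1e-12, 1.0) # guard against log(0)
        per_class.append(-np.log(p).mean())    # (1/Nj) * sum of -log p_jl
    return float(np.mean(per_class))           # average over the k classes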

步骤S67:初始化个体最优位置和全局最优位置，将步骤S62初始化的每个优化参数的初始位置作为对应优化参数的个体最优位置，将所有优化参数中适应度值最低的优化参数的个体最优位置作为全局最优位置；Step S67: Initialize the individual optimal positions and the global optimal position. Take the initial position of each optimization parameter initialized in step S62 as that parameter's individual optimal position, and take the individual optimal position of the optimization parameter with the lowest fitness value among all optimization parameters as the global optimal position;

步骤S68:更新优化参数速度,所用公式如下:Step S68: Update the optimization parameter speed, the formula used is as follows:

V(i,j)=h*V(i,j)+d1*rand(0,1)*(T1(i,j)-Y(i,j))+d2*rand(0,1)*(T2(j)-Y(i,j));

式中,h是惯性权重,T1(i,j)是第i个优化参数在第j个维度上的个体最优位置,T2(j)是全局最优位置在第j个维度上的值,d1和d2是学习因子,rand(0,1)是生成0到1之间的随机数;In the formula, h is the inertia weight, T1(i, j) is the individual optimal position of the i-th optimization parameter in the j-th dimension, T2(j) is the value of the global optimal position in the j-th dimension, d1 and d2 are learning factors, and rand(0,1) generates a random number between 0 and 1;

步骤S69:更新优化参数位置,所用公式如下:Step S69: Update the optimization parameter position, the formula used is as follows:

Y(i,j)=Y(i,j)+V(i,j);

式中,Y(i,j)是第i个优化参数在第j个维度上的位置;In the formula, Y (i, j) is the position of the i-th optimization parameter in the j-th dimension;
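
Continuing the initialization sketch after step S63 (Y, V, rng, Vmin and Vmax as defined there), steps S68 and S69 update every particle at once; d1, d2 and h are assumed values, and T1/T2 stand in for the step S67 best positions:

T1 = Y.copy()                 # individual best positions (step S67)
T2 = Y[0].copy()              # stand-in global best; in a full run, the
                              # lowest-fitness particle's position (step S67)
d1 = d2 = 2.0                 # common learning-factor choice (assumption)
h = 0.7                       # inertia weight; step S611 adapts it
r1, r2 = rng.random(Y.shape), rng.random(Y.shape)
V = np.clip(h * V + d1 * r1 * (T1 - Y) + d2 * r2 * (T2 - Y), Vmin, Vmax)  # S68
Y = Y + V                                                                 # S69

The np.clip call is an added safeguard keeping velocities inside the step S63 bounds; the patent text does not state a clamping rule.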

步骤S610:更新优化参数适应度值;Step S610: Update the optimization parameter fitness value;

步骤S611:更新惯性权重,所用公式如下:Step S611: Update the inertia weight, the formula used is as follows:

h=hmin+(hmax-hmin)*(f-fmin)/(favg-fmin), if f≤favg; h=hmax, if f>favg;

式中,hmin是最小惯性权重,hmax是最大惯性权重,f是当前的适应度值,fmin是最小适应度值,favg是所有优化参数适应度的平均值;In the formula, hmin is the minimum inertia weight, hmax is the maximum inertia weight, f is the current fitness value, fmin is the minimum fitness value, and favg is the average fitness value of all optimization parameters;
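
A sketch of the step S611 rule follows, using the piecewise reconstruction above (the hmin and hmax defaults are illustrative): particles at or below the average fitness get a smaller, fine-search weight, while worse particles keep the maximum, exploratory weight.

def update_inertia(f, f_min, f_avg, h_min=0.4, h_max=0.9):
    # f <= favg: interpolate between hmin and hmax by relative fitness.
    if f <= f_avg and f_avg > f_min:
        return h_min + (h_max - h_min) * (f - f_min) / (f_avg - f_min)
    # f > favg (or degenerate favg == fmin): keep the maximum inertia weight.
    return h_max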

步骤S612:更新个体最优位置和全局最优位置,根据优化参数的适应度值更新优化参数的个体最优位置,并根据所有优化参数的个体最优位置更新全局最优位置;Step S612: Update the individual optimal position and the global optimal position, update the individual optimal position of the optimization parameter according to the fitness value of the optimization parameter, and update the global optimal position according to the individual optimal position of all optimization parameters;

步骤S613:模型确定，预先设有评估阈值和最大迭代次数，当优化参数的适应度值低于评估阈值则基于当前参数建立多品种鱼类状态分类模型并转至步骤S614；若达到最大迭代次数，则转至步骤S62；否则转至步骤S68；Step S613: Determine the model. An evaluation threshold and a maximum number of iterations are preset. When the fitness value of the optimization parameters falls below the evaluation threshold, establish the multi-species fish status classification model based on the current parameters and go to step S614; if the maximum number of iterations is reached, go to step S62; otherwise, go to step S68;

步骤S614:分类,无人机实时采集鱼类图像并输入至多品种鱼类状态分类模型中,基于模型输出的鱼类品种及生长状态进行喂食。Step S614: Classification, the drone collects fish images in real time and inputs them into the multi-species fish status classification model, and feeds based on the fish species and growth status output by the model.

通过执行上述操作，针对因分类算法容易陷入局部极小值而无法找到更优的全局解的技术问题，本发明通过不断调整惯性权重的大小，使优化参数向更好的搜索区域靠拢，避免陷入局部极小值而无法找到更优的全局解的问题。By performing the above operations, in view of the technical problem that the classification algorithm easily falls into local minima and cannot find a better global solution, the present invention continuously adjusts the magnitude of the inertia weight so that the optimization parameters move toward a better search region, avoiding the problem of being trapped in local minima and failing to find a better global solution.

实施例七，参阅图8和图9，该实施例基于上述实施例，在图8中，展示优化参数不断更新所在位置，直到找到全局最优位置的过程；在图9中，纵坐标是优化参数最优解的位置，横坐标是迭代次数，展示随着迭代次数的变化优化参数的位置不断趋于最优解位置的变化过程，使优化参数向更好的搜索区域靠拢，避免陷入局部极小值而无法找到更优的全局解的问题。Embodiment 7. Refer to Figures 8 and 9. This embodiment is based on the above embodiments. Figure 8 shows the process by which the positions of the optimization parameters are continuously updated until the global optimal position is found. In Figure 9, the ordinate is the position of the optimal solution of the optimization parameters and the abscissa is the number of iterations; the figure shows how, as the iterations proceed, the positions of the optimization parameters keep approaching the optimal solution, moving toward a better search region and avoiding being trapped in local minima where a better global solution cannot be found.

实施例八，参阅图2，该实施例基于上述实施例，本发明提供的一种基于人工智能的无人机鱼类养殖系统，包括数据采集模块、获取纹理特征模块、获取颜色特征模块、获取形状特征模块、确定输入参数模块和多品种鱼类状态分类模块；Embodiment 8. Refer to Figure 2. This embodiment is based on the above embodiments. The artificial-intelligence-based unmanned aerial vehicle fish farming system provided by the present invention comprises a data acquisition module, a texture feature acquisition module, a color feature acquisition module, a shape feature acquisition module, an input parameter determination module and a multi-species fish status classification module;

以鱼类图像为例，所述数据采集模块采集各种生产状态下鱼类图像及对应标签，将采集的鱼类图像作为鱼类图像，并将鱼类图像发送至获取纹理特征模块和获取颜色特征模块；Taking fish images as an example, the data acquisition module collects fish images and their corresponding labels under various production states, takes the collected images as the fish images, and sends the fish images to the texture feature acquisition module and the color feature acquisition module;

所述获取纹理特征模块和获取颜色特征模块接收数据采集模块发送的鱼类图像，分别利用灰度共生矩阵和颜色直方图提取纹理特征和颜色特征，并将提取得到的纹理特征和颜色特征发送至确定输入参数模块，并且获取纹理特征模块将灰度化图像发送至获取形状特征模块；The texture feature acquisition module and the color feature acquisition module receive the fish images sent by the data acquisition module, extract texture features and color features using the gray level co-occurrence matrix and the color histogram respectively, and send the extracted texture features and color features to the input parameter determination module; the texture feature acquisition module also sends the grayscale image to the shape feature acquisition module;

所述获取形状特征模块接收获取纹理特征模块发送的灰度化图像，通过改进第一、第二边缘检测算子的计算公式来改进最终的边缘检测算子，提高形状特征的提取质量，并将提取得到的形状特征发送至确定输入参数模块；The shape feature acquisition module receives the grayscale image sent by the texture feature acquisition module, improves the final edge detection operator by improving the calculation formulas of the first and second edge detection operators, thereby improving the quality of shape feature extraction, and sends the extracted shape features to the input parameter determination module;

所述确定输入参数模块接收获取纹理特征模块发送的纹理特征、获取颜色特征模块发送的颜色特征和获取形状特征模块发送的形状特征，采用灰色关系分析方法确定输入参数，并将确定的输入参数发送至多品种鱼类状态分类模块；The input parameter determination module receives the texture features sent by the texture feature acquisition module, the color features sent by the color feature acquisition module and the shape features sent by the shape feature acquisition module, determines the input parameters using the gray relational analysis method, and sends the determined input parameters to the multi-species fish status classification module;

所述多品种鱼类状态分类模块接收确定输入参数模块发送的输入参数,通过不断调整惯性权重的大小,最终建立多品种鱼类状态分类模型。The multi-species fish status classification module receives the input parameters sent by the input parameter determination module, and finally establishes a multi-species fish status classification model by continuously adjusting the size of the inertia weight.

需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations. Moreover, the terms "comprise", "include" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device.

尽管已经示出和描述了本发明的实施例，对于本领域的普通技术人员而言，可以理解在不脱离本发明的原理和精神的情况下可以对这些实施例进行多种变化、修改、替换和变型，本发明的范围由所附权利要求及其等同物限定。Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

以上对本发明及其实施方式进行了描述，这种描述没有限制性，附图中所示的也只是本发明的实施方式之一，实际的结构并不局限于此。总而言之如果本领域的普通技术人员受其启示，在不脱离本发明创造宗旨的情况下，不经创造性的设计出与该技术方案相似的结构方式及实施例，均应属于本发明的保护范围。The present invention and its embodiments have been described above. This description is not limiting; what is shown in the drawings is only one embodiment of the present invention, and the actual structure is not limited thereto. In short, if a person of ordinary skill in the art, inspired by the invention and without departing from its inventive purpose, devises, without inventive effort, structures and embodiments similar to this technical solution, they shall all fall within the protection scope of the invention.

Claims (7)

1. An unmanned aerial vehicle fish culture method based on artificial intelligence is characterized in that: the method comprises the following steps:
step S1: collecting data, namely collecting fish images and corresponding tags, wherein the tags are the variety and growth state of fish, and taking the collected fish images as fish images;
step S2: extracting texture features, and calculating contrast, energy, entropy and uniformity to obtain the texture features;
Step S3: extracting color characteristics, and calculating the mean, variance, median and standard deviation to obtain the color characteristics;
step S4: extracting shape characteristics, calculating an improved edge detection operator by calculating a first edge detection operator and calculating a second edge detection operator, calculating a final image edge by calculating a small-scale image edge and calculating a large-scale image edge, performing polygon fitting, and finally obtaining shape characteristics by calculating contour length, contour area, center distance and eccentricity;
step S5: determining input parameters, firstly constructing a classification data set, then constructing a comparison matrix by setting a reference sequence, obtaining a non-dimensionality matrix by non-dimensionality data, calculating gray correlation degree by calculating gray correlation coefficient, and finally determining the input parameters;
step S6: classifying the states of the multiple fish species, which comprises the steps of firstly constructing a training data set and a testing data set, initializing optimized parameter positions and speeds, generating multi-variety fish state classification model parameters and training the multi-variety fish state classification model, initializing individual optimal positions and the global optimal position, updating optimized parameter speeds, positions and fitness values, updating the inertia weight, the individual optimal positions and the global optimal position, determining a final multi-variety fish state classification model based on an evaluation threshold, and finally acquiring fish images in real time with the unmanned aerial vehicle and feeding according to the fish variety and growth state output by the model;
In step S4, the extracting the shape feature specifically includes the steps of:
step S41: image denoising, namely performing image denoising based on the gray-scale image obtained in the step S21, wherein the following formula is used:
wherein g (x, y) is the denoised gray image, f (x, y) is the original gray image, k is a normalization coefficient, r is the radius of the Gaussian filter, ωij is a weight of the Gaussian filter;
step S42: the first edge detection operator is calculated using the following formula:
Y1i=(g(x,y)⊕bi)•bi-g(x,y);
wherein Y1i is the first edge detection operator, bi is a structural element in different directions and i=1, 2, …, 8, ⊕ is an exclusive or operator, • is a dot product operator between two vectors;
step S43: the second edge detection operator is calculated using the following formula:
where Y2i is the second edge detection operator, Θ is the logical OR operator,is a bitwise product operator between two vectors;
step S44: the improved edge detection operator is calculated using the following formula:
Yi=Y1i+Y2i+Ymin;
where Yi is the improved edge detection operator, Ymin is the edge minimum and Ymin = min { Y1i, Y2i };
step S45: the edges of the small-scale image were calculated and edge detection was performed using the structural element bi (i=1, 2,3, 4) of 3*3 using the following formula:
Wherein Q1 is a small scale image edge;
step S46: the edges of the large-scale image were calculated and edge detection was performed using the structural element bi (i=5, 6,7, 8) of 5*5 using the following formula:
wherein Q2 is the edge of the large-scale image;
step S47: calculating the final image edge to obtain the edge information of the graphic area, wherein the formula is as follows:
where Q is the final image edge;
step S48: polygon fitting, namely, finding out the outline of the graphic area based on the edge information obtained in the step S47, and converting the outline into a polygon by using the polygon fitting;
step S49: calculating shape characteristics, wherein the steps are as follows:
step S491: the contour length is calculated using the following formula:
wherein R is the contour length, n is the number of sides of the polygon, di is the length of the ith side;
step S492: the contour area is calculated using the following formula:
where S is the area of the contour, (xi, yi) is the coordinates of the ith vertex of the polygon;
step S493: the center distance is calculated using the following formula:
wherein O is the center distance, (xc, yc) is the coordinates of the center of gravity of the contour, (xm, ym) is the coordinates of the contour point closest to the center of gravity;
step S494: the eccentricity was calculated using the following formula:
Where U is the eccentricity, e is the length of the major axis of the smallest circumscribing ellipse of the fish image profile, and z is the length of the minor axis of the smallest circumscribing ellipse of the fish image profile.
2. The unmanned aerial vehicle fish farming method based on artificial intelligence according to claim 1, wherein: in step S6, the classification of the states of the multiple fish species specifically includes the following steps:
step S61: constructing a training data set and a test data set, deleting dimension information of feature variables which are not more than a gray correlation threshold in the classification data set, and acquiring data corresponding labels to obtain sample data, wherein the corresponding labels are labels acquired in the step S1, 70% of the sample data are randomly selected as the training data set, and the rest 30% of the sample data are selected as the test data set;
step S62: initializing the positions of the optimized parameters, and randomly generating an initial position for each optimized parameter by using the following formula:
Y(i,j)=rand(0,1)*(U(j)-L(j))+L(j);
where i is the number of the optimization parameter, j is the dimension of Y, Y (i, j) is the position of the ith optimization parameter in the jth dimension, rand (0, 1) is a random number generated between 0 and 1, U (j) is the upper bound limit of the jth dimension, and L (j) is the lower bound limit of the jth dimension;
Step S63: initializing the speed of optimized parameters, and randomly generating an initial speed for each optimized parameter by using the following formula:
V(i,j)=rand(0,1)*(Vmax(j)-Vmin(j))+Vmin(j);
where V (i, j) is the speed of the ith optimization parameter in the jth dimension, Vmax (j) is the upper limit of the speed in the jth dimension, and Vmin (j) is the lower limit of the speed in the jth dimension;
step S64: generating multiple-variety fish state classification model parameters, and generating a group of multiple-variety fish state classification model parameters according to the current position for each optimization parameter, wherein the group of multiple-variety fish state classification model parameters consists of a punishment factor and a kernel function parameter, and the formula is as follows:
C(i)=2^Y(i,1)
G(i)=2^Y(i,2)
wherein, C (i) is a punishment factor of the state classification model of the multi-variety fish, and G (i) is a kernel function parameter of the state classification model of the multi-variety fish;
step S65: training a multi-variety fish state classification model, training the multi-variety fish state classification model based on the multi-variety fish state classification model parameters determined in the step S64 and the training data set constructed in the step S61, and calculating weight vectors and bias values of the multi-variety fish state classification model, wherein the formula is as follows:
wi=∑ai*ci*ei;
where wi is the weight vector of the ith classifier, ai is the Lagrangian multiplier of the ith sample, ci is the eigenvector of the ith sample, ei is the corresponding label of the ith sample, ti is the bias value of the ith classifier, n is the number of training samples, g (ci, cj) is the kernel function, cj is the eigenvector of the jth sample;
Step S66: calculating an optimization parameter fitness value, predicting the test data set constructed in the step S61 by using the multi-variety fish state classification model trained in the step S65, and calculating the fitness value by using the following formula:
wherein f (i) is the fitness value of the ith optimization parameter, k is the number of label classes, Nj is the sample size of the jth label, yjl is the true label of the lth sample under the jth label, and pjl (i) is the prediction probability of the multiple-variety fish state classification model for the lth sample under the jth label on the ith optimization parameter;
step S67: initializing an individual optimal position and a global optimal position, taking the initial position of each optimization parameter initialized in the step S62 as the individual optimal position of the corresponding optimization parameter, and taking the individual optimal position of the optimization parameter with the lowest fitness value in all the optimization parameters as the global optimal position;
step S68: updating the speed of the optimized parameters by the following formula:
V(i,j)=h*V(i,j)+d1*rand(0,1)*(T1(i,j)-Y(i,j))+d2*rand(0,1)*(T2(j)-Y(i,j));
where h is the inertial weight, T1 (i, j) is the individual optimal position of the ith optimization parameter in the jth dimension, T2 (j) is the value of the global optimal position in the jth dimension, d1 and d2 are learning factors, and rand (0, 1) is a random number generated between 0 and 1;
Step S69: updating the optimized parameter position by the following formula:
Y(i,j)=Y(i,j)+V(i,j);
where Y (i, j) is the position of the ith optimization parameter in the jth dimension;
step S610: updating the fitness value of the optimization parameter;
step S611: the inertial weights are updated using the following formula:
wherein hmin is the minimum inertial weight, hmax is the maximum inertial weight, f is the current fitness value, fmin is the minimum fitness value, favg is the average value of fitness of all optimization parameters;
step S612: updating the individual optimal position and the global optimal position, updating the individual optimal position of the optimization parameters according to the fitness value of the optimization parameters, and updating the global optimal position according to the individual optimal positions of all the optimization parameters;
step S613: determining a model, namely presetting an evaluation threshold value and the maximum iteration times, and establishing a multi-variety fish state classification model based on the current parameters when the fitness value of the optimized parameters is lower than the evaluation threshold value, and turning to step S614; if the maximum iteration number is reached, go to step S62; otherwise go to step S68;
step S614: classifying, wherein the unmanned aerial vehicle collects fish images in real time and inputs the fish images into a classification model of the states of the fishes of multiple varieties, and feeding is performed based on the fish varieties and the growth states output by the model.
3. The unmanned aerial vehicle fish farming method based on artificial intelligence according to claim 1, wherein: in step S5, the determining the input parameter specifically includes the following steps:
step S51: constructing a classification data set based on the texture features calculated in the step S2, the color features calculated in the step S3, the shape features calculated in the step S4 and the fish images acquired in the step S1, wherein the feature variables of the texture features comprise contrast, energy, entropy and uniformity, the feature variables of the color features comprise mean, variance, median and standard deviation, and the feature variables of the shape features comprise contour length, contour area, center distance and eccentricity;
step S52: setting a reference sequence, and selecting n standard data from the classified data set in advance as evaluation parameters by the following formula:
X0=(x0(1),x0(2),…,x0(n));
wherein X0 is a reference sequence and n is the number of evaluation parameters;
step S53: a comparison matrix is constructed, and a comparison sequence is set based on sample data in the classification data set, wherein the formula is as follows:
wherein X is a comparison matrix, and m is the number of sample data in the classified data set;
step S54: non-dimensionalized data using the formula:
wherein xp (q) is the original data of the pth column and qth row of the comparison matrix X, x'p (q) is the non-dimensionalized data of the original data xp (q) after the non-dimensionalization processing, xmin is the minimum value of the pth column of the comparison matrix X, and xmax is the maximum value of the pth column of the comparison matrix X;
step S55: a non-dimensionalized matrix using the formula:
wherein X' is a non-dimensionalized matrix;
step S56: the gray correlation coefficients are calculated, and the gray correlation coefficients between the corresponding elements of each sample data sequence and the reference sequence are calculated using the following formula:
where εp (q) is the gray correlation coefficient of the p-th sample data sequence and the reference sequence between the q-th evaluation parameters, ρ is the resolution coefficient and 0< ρ <1;
step S57: the gray correlation is calculated using the following formula:
where rp is the gray correlation of the p-th sample data sequence and the reference sequence over all evaluation parameters;
step S58: and determining an input parameter, presetting a gray correlation threshold, and determining a characteristic variable with gray correlation larger than the threshold as the input parameter.
4. The unmanned aerial vehicle fish farming method based on artificial intelligence according to claim 1, wherein: in step S2, the extracting texture features specifically includes the following steps:
Step S21: calculating gray values, calculating pixel values of three RGB channels of the fish image to obtain corresponding gray values, and assigning the obtained gray values to corresponding pixel points to obtain a gray image, wherein the formula is as follows:
A=0.299*R+0.587*G+0.114*B;
wherein, A is the gray value of each pixel point, R, G, B is the pixel value of red, green and blue channels, and 0.299, 0.587 and 0.114 are the weighting coefficients corresponding to R, G, B respectively;
step S22: the gray level co-occurrence matrix is calculated by the following formula:
wherein G (i, j, δr, δc) is the gray level co-occurrence matrix, i and j are gray levels, δr and δc are the offsets of the neighborhood pixel in the row and column directions, Nr and Nc are the number of rows and columns of the grayscale image, and I (m, n) is the gray value of the pixel in the mth row and nth column of the grayscale image;
step S23: the probability is calculated using the formula:
wherein P (i, j) is the probability of gray level i appearing as a neighborhood pixel and gray level j as a center pixel in the gray level co-occurrence matrix, Npq is the number of occurrences of the pair with gray level p as the neighborhood pixel and gray level q as the center pixel, and N1 is the sum of all elements in the gray level co-occurrence matrix;
step S24: calculating texture features, wherein the steps are as follows:
step S241: the contrast is calculated using the following formula:
C=∑i∑j(i-j)^2*P(i,j);
Wherein C is the contrast between pixels in the image;
step S242: the energy was calculated using the formula:
D=∑i∑jP(i,j)^2;
wherein D is energy;
step S243: the entropy is calculated using the formula:
E=-∑i∑jP(i,j)*log(P(i,j));
wherein E is entropy;
step S244: uniformity was calculated using the following formula:
wherein F is uniformity.
5. The unmanned aerial vehicle fish farming method based on artificial intelligence according to claim 1, wherein: in step S3, the extracting color features specifically includes the following steps:
step S31: the fish image is converted into HSV color space, and the steps are as follows:
step S311: normalizing, namely normalizing RGB values in the RGB color image to be [0,1];
step S312: the hue is calculated using the formula:
wherein H is the hue and the value range is [0 degrees, 360 degrees], and Lmax and Lmin are respectively the maximum and minimum values among the R, G, B color channels;
step S313: the saturation was calculated using the formula:
wherein S is saturation and has a value range of [0,1];
step S314: the brightness was calculated using the following formula:
V=Lmax;
wherein V is brightness and has a value range of [0,1];
step S32: dividing an HSV color space into a plurality of sections, uniformly dividing a tone H into 24 sections in the HSV color space, and uniformly dividing saturation S and brightness V into 10 sections respectively;
Step S33: calculating a color histogram, traversing each pixel in an image, and counting the number of the color space intervals to which the pixel belongs to obtain the color histogram;
step S34: the color characteristics are calculated as follows:
step S341: the mean was calculated using the formula:
wherein μ is a mean value, mi is the frequency of occurrence of the ith pixel value in the color histogram, and N2 is the total number of pixel values in the color histogram;
step S342: the variance is calculated using the formula:
where σ^2 is the variance;
step S343: calculating the median, sorting the pixel values in the color histogram according to ascending order, and taking the value arranged at the middle position as the median;
step S344: standard deviation was calculated using the following formula:
σ=sqrt(σ^2);
where σ is the standard deviation.
6. An artificial intelligence based unmanned aerial vehicle fish farming system for implementing an artificial intelligence based unmanned aerial vehicle fish farming method as defined in any one of claims 1-5, wherein: the system comprises a data acquisition module, a texture feature acquisition module, a color feature acquisition module, a shape feature acquisition module, an input parameter determination module and a multi-variety fish state classification module.
7. An artificial intelligence based unmanned aerial vehicle fish farming system according to claim 6, wherein: taking a fish image as an example, the data acquisition module acquires the fish image and corresponding tags in various production states, takes the acquired fish image as a fish image, and sends the fish image to the texture feature acquisition module and the color feature acquisition module;
The texture feature acquisition module and the color feature acquisition module receive the fish images sent by the data acquisition module, extract texture features and color features by using the gray level co-occurrence matrix and the color histogram respectively, send the extracted texture features and color features to the input parameter determination module, and send the gray level images to the shape feature acquisition module;
the shape feature acquisition module receives the graying image sent by the texture feature acquisition module, improves a final edge detection operator by improving a calculation formula of the first edge detection operator and the second edge detection operator, improves the extraction quality of the shape feature, and sends the extracted shape feature to the input parameter determination module;
the input parameter determining module receives the texture features sent by the texture feature obtaining module, the color features sent by the color feature obtaining module and the shape features sent by the shape feature obtaining module, determines input parameters by adopting a gray relation analysis method, and sends the determined input parameters to the multi-variety fish state classification module;
the multi-variety fish state classification module receives the input parameters sent by the input parameter determination module, and finally establishes the multi-variety fish state classification model by continuously adjusting the magnitude of the inertia weight.
CN202311007901.9A 2023-08-11 2023-08-11 Unmanned aerial vehicle fish culture method and system based on artificial intelligence Active CN116721303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311007901.9A CN116721303B (en) 2023-08-11 2023-08-11 Unmanned aerial vehicle fish culture method and system based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN116721303A CN116721303A (en) 2023-09-08
CN116721303B true CN116721303B (en) 2023-10-20

Family

ID=87866540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311007901.9A Active CN116721303B (en) 2023-08-11 2023-08-11 Unmanned aerial vehicle fish culture method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116721303B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069817A (en) * 2019-03-15 2019-07-30 温州大学 A method of prediction model is constructed based on California gray whale optimization algorithm is improved
CN110287896A (en) * 2019-06-27 2019-09-27 北京理工大学 A Human Action Recognition Method Based on Heterogeneous Hierarchical PSO and SVM
CN112862750A (en) * 2020-12-29 2021-05-28 深圳信息职业技术学院 Blood vessel image processing method and device based on multi-scale fusion and meta-heuristic optimization
CN116168392A (en) * 2022-12-28 2023-05-26 北京工业大学 Target labeling method and system based on optimal source domain of multidimensional spatial feature model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a moisture content detection system for cucumber leaves based on image processing technology; Li Changchang; China Master's Theses Full-text Database; D048-36 *
A new edge detection method based on gray relational degree and the Prewitt operator; Shi Juntao et al.; Microcomputer Information; 214-216 *

Also Published As

Publication number Publication date
CN116721303A (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN110516596B (en) Octave convolution-based spatial spectrum attention hyperspectral image classification method
CN111191732A (en) A target detection method based on fully automatic learning
CN108830188A (en) Vehicle checking method based on deep learning
CN109086792A (en) Based on the fine granularity image classification method for detecting and identifying the network architecture
CN108921201B (en) Dam defect identification and classification method based on feature combination and CNN
CN111833322B (en) A Garbage Multi-target Detection Method Based on Improved YOLOv3
CN109815979B (en) Weak label semantic segmentation calibration data generation method and system
CN106295124A (en) Utilize the method that multiple image detecting technique comprehensively analyzes gene polyadenylation signal figure likelihood probability amount
CN113658163A (en) High-score SAR image segmentation method based on multi-level collaboration to improve FCM
CN113743470B (en) AI algorithm-based garbage recognition precision improvement method for automatic bag breaking classification box
CN111611972B (en) Crop leaf type identification method based on multi-view multi-task integrated learning
CN106340016A (en) DNA quantitative analysis method based on cell microscope image
CN112233099B (en) Reusable spacecraft surface impact damage characteristic identification method
CN109409438B (en) Remote sensing image classification method based on IFCM clustering and variational inference
CN112509017B (en) Remote sensing image change detection method based on learnable differential algorithm
CN106056165B (en) A saliency detection method based on superpixel correlation-enhanced Adaboost classification learning
Sabri et al. Nutrient deficiency detection in maize (Zea mays L.) leaves using image processing
CN112183237A (en) Automatic white blood cell classification method based on color space adaptive threshold segmentation
CN111783885A (en) Millimeter wave image quality classification model construction method based on local enhancement
CN116071339A (en) A Product Defect Identification Method Based on Improved Whale Algorithm Optimizing SVM
Sabzi et al. The use of soft computing to classification of some weeds based on video processing
CN115761240B (en) Image semantic segmentation method and device for chaotic back propagation graph neural network
CN116721303B (en) Unmanned aerial vehicle fish culture method and system based on artificial intelligence
CN112037230A (en) Forest region image segmentation algorithm based on super-pixel and super-metric contour map
CN118840663A (en) Plant leaf identification method based on improved dung beetle optimization algorithm and integrated learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant