CN116630080B - Method and system for determining capacity of aquatic product intensive culture feed based on image recognition - Google Patents

Info

Publication number
CN116630080B
CN116630080B (granted publication of application CN202310913354.4A; earlier publication CN116630080A)
Authority
CN
China
Prior art keywords
data
training
ecological environment
fish
feeding amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310913354.4A
Other languages
Chinese (zh)
Other versions
CN116630080A (en)
Inventor
孙育平
陈晓瑛
黄文�
黄敏伟
赵吉臣
阮灼豪
鲁慧杰
邹伟华
郑艳芸
Current Assignee
Institute of Animal Science of Guangdong Academy of Agricultural Sciences
Original Assignee
Institute of Animal Science of Guangdong Academy of Agricultural Sciences
Priority date
Filing date
Publication date
Application filed by Institute of Animal Science of Guangdong Academy of Agricultural Sciences filed Critical Institute of Animal Science of Guangdong Academy of Agricultural Sciences
Priority to CN202310913354.4A priority Critical patent/CN116630080B/en
Publication of CN116630080A publication Critical patent/CN116630080A/en
Application granted granted Critical
Publication of CN116630080B publication Critical patent/CN116630080B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/05Underwater scenes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Agronomy & Crop Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Mining & Mineral Resources (AREA)
  • Primary Health Care (AREA)
  • Animal Husbandry (AREA)
  • General Business, Economics & Management (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for determining the feed capacity of intensive aquaculture based on image recognition, comprising the following steps: acquiring underwater fish-school data of the fish to be fed; performing target detection on the fish-school image data, taken as the input of a first distributed multi-agent, to obtain a target detection result; combining the historical feeding amount with the target detection results corresponding to the different culture areas to obtain high-dimensional data features, and taking those features as the input of a first long short-term memory network to extract time-series-related low-dimensional features; taking the low-dimensional features as the observation environment of a first reinforcement learning network, allocating the feeding amount in a distributed fashion dynamically and in real time, and dispensing feed in a distributed manner according to the resulting current feeding amount. The invention improves the precise control of the feeding amount in intensive aquaculture and reduces the resource waste and ecological pollution caused by feeding.

Description

Method and system for determining capacity of aquatic product intensive culture feed based on image recognition
Technical Field
The invention relates to the technical field of aquaculture, and in particular to a method and system for determining the feed capacity of intensive aquaculture based on image recognition.
Background
Feeding is a critical factor in determining aquaculture costs and water quality. For farmed aquatic animals, especially under intensive cultivation, precise feeding is an important component of optimizing aquaculture technology. In practice, feed is traditionally dispensed manually and subjectively; such manual feeding consumes labor and easily leads to over- or under-feeding. AI-assisted feeding by unmanned boat or drone is an effective means of reducing feed waste and optimizing feeding. However, because of the complexity of the farming environment and the uncertainty of the animals' behavior, accurately identifying that behavior for precise feeding remains a significant challenge. In the prior art, after image information is acquired by an unmanned boat or drone, a data-driven mathematical model of feed dispensing is established for the cultured fish. Building this model requires the weight of a single fish and the number of stocked fry; however, fish weight changes dynamically with growth, the ecological environments of different areas and the fish species affect feeding differently, the changing ecological environment is correlated over time, and some fry are inevitably lost, so the reference value of the initially stocked fry count gradually declines during feeding. In addition, drone-based feeding requires image correction, and a drone or unmanned boat can only observe the state of the fish school at the water surface, not underwater, leaving a blind zone.
Therefore, neither traditional manual feeding nor AI-based feeding by drone or unmanned boat can adapt to the time series of the ecological environment and the dynamic growth of the fish; precise feeding cannot be achieved, and applicability is low.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a method and system for determining the feed capacity of intensive aquaculture based on image recognition, which can accurately determine the culture capacity of a culture water body and thereby enable precise feed dispensing.
In a first aspect, the invention provides a method for determining the capacity of an aquatic product intensive culture feed based on image recognition, which comprises the following steps:
acquiring underwater fish-school data of the fish to be fed, wherein the fish-school data comprises: fish-school image data of different areas of the culture water body, the historical feeding amounts corresponding to those areas, historical ecological environment data, current ecological environment data, and species data of the fish to be fed;
taking the fish-school image data as the input of a trained first distributed multi-agent, and performing target detection on the fish-school image data with a trained first convolutional neural network to obtain a target detection result, wherein the first distributed multi-agent comprises: the trained first convolutional neural network, a trained first long short-term memory network, and a trained first reinforcement learning network;
combining the historical feeding amount with the historical ecological environment data, the current ecological environment data, the species data, and the target detection results corresponding to the different areas of the culture water body to obtain high-dimensional data features, and taking the high-dimensional data features as the input of the first long short-term memory network to extract time-series-related low-dimensional features, obtaining low-dimensional data features;
taking the low-dimensional data features as the observation environment of the first reinforcement learning network, allocating the feeding amount in a distributed fashion dynamically and in real time to obtain the current feeding amount finally output by the first distributed multi-agent, and dispensing feed in a distributed manner according to the current feeding amount.
According to the invention, target detection on underwater fish-school image data makes it possible to accurately determine the culture capacity of the culture water body. Combining this capacity with the historical and current ecological environment data and extracting time-series features from the resulting high-dimensional data captures how the temporal correlation of ecological indicators influences the feeding amount, enabling precise feed dispensing. In addition, adapting the feeding amount through reinforcement learning makes the method suitable for different fish schools, further improving feeding accuracy and adaptability. Finally, unlike the prior art, the current feeding amount does not depend on the number of stocked fry, so it is not tied to the initial stocking count and can adapt to different growth stages and to species whose numbers in the culture water body change dynamically.
Further, before performing target detection on the fish-school image data with the trained first convolutional neural network, the method comprises:
taking the training fish-school data of a plurality of underwater culture water areas as the training data of different agents, each agent executing in a distributed manner according to its training data to obtain a training feeding amount;
each agent storing its training fish-school data and training feeding amount in an experience playback data pool, so that a centralized training agent acquires experience data from the pool for centralized training.
Further, taking the training fish-school data of a plurality of underwater culture water areas as the training data of different agents, each agent executing in a distributed manner to obtain a training feeding amount, comprises:
taking the preset fish-school image data of the plurality of underwater culture water areas as the input of the second convolutional neural network of the corresponding initial agent, obtaining training target detection results in turn;
combining the training historical feeding amount with the training historical ecological environment data, the training current ecological environment data, the training species data, and the training target detection results in turn to obtain the corresponding training high-dimensional data features, and taking these as the input of the second long short-term memory network of the corresponding agent to obtain the corresponding training low-dimensional data features;
taking the training low-dimensional data features in turn as the observation environment of the second reinforcement learning network of the corresponding initial agent, and allocating the feeding amount in a distributed fashion to obtain the corresponding training feeding amount.
Further, each agent storing its training fish-school data and training feeding amount in the experience playback data pool, so that the centralized training agent acquires experience data from the pool for centralized training, comprises:
each agent storing its training fish-school data and training feeding amount in the experience playback data pool, so that after the centralized training agent has trained according to a training threshold, it transmits the parameters of the trained model to each agent, wherein the centralized training agent and each agent employ the same network framework.
Further, having the centralized training agent acquire experience data from the experience playback data pool for centralized training comprises:
weighting the first loss function of the third convolutional neural network, the second loss function of the third long short-term memory network, and the third loss function of the third reinforcement learning network of the centralized training agent to obtain the overall loss function of the centralized training agent;
updating the parameters of the centralized training agent according to the overall loss function, and transmitting the updated parameters to each agent.
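The centralized-training, distributed-execution scheme described above can be sketched with a minimal experience playback pool; the class name, the fixed capacity, and the tuple layout are illustrative assumptions, not the patent's implementation:

```python
import random
from collections import deque

class ReplayPool:
    """Shared experience playback data pool (illustrative sketch)."""
    def __init__(self, capacity=10000):
        # Old experiences are discarded automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def store(self, fish_data, feeding_amount):
        # Each distributed agent pushes its (observation, action) experience.
        self.buffer.append((fish_data, feeding_amount))

    def sample(self, batch_size):
        # The centralized training agent draws random mini-batches.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

pool = ReplayPool(capacity=100)
for i in range(5):                       # five distributed agents report in
    pool.store({"region": i}, 1.5 + i)
batch = pool.sample(3)
```

After centralized training on such batches, the updated parameters would be broadcast back to every agent, which all share the same network framework.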
Preferably, the overall loss function may be expressed as:

$$L = \alpha L_1 + \beta L_2 + \gamma L_3,\qquad L_1 = \sum_{i=1}^{N}\bigl(\lambda_{cls}\,\ell_{cls}^{(i)} + \lambda_{loc}\,\ell_{loc}^{(i)} + \lambda_{conf}\,\ell_{conf}^{(i)}\bigr),\qquad L_3 = \frac{1}{N}\sum_{i=1}^{N}\bigl(Q_i - \hat{Q}_i\bigr)^2$$

where $\alpha$, $\beta$ and $\gamma$ are the weights of the first, second and third loss functions respectively; $\lambda_{cls}$, $\lambda_{loc}$ and $\lambda_{conf}$ are the weights of the classification sub-loss function $\ell_{cls}^{(i)}$, the localization sub-loss function $\ell_{loc}^{(i)}$, and the confidence sub-loss function $\ell_{conf}^{(i)}$ of the first loss function for the $i$-th sample; $L_2$ is the loss of the long short-term memory network used as an encoder; and $Q_i$ and $\hat{Q}_i$ are the predicted and true Q values of the third reinforcement learning network for the $i$-th sample, whose temporal-difference error serves as the third loss function.
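As a numeric illustration of the weighted combination of the three sub-network losses, the following sketch uses arbitrary assumed weights and loss values, not values from the patent:

```python
def overall_loss(l_det, l_lstm, l_td, alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted sum of detection, LSTM-encoder, and RL losses
    (weights are assumptions for illustration)."""
    return alpha * l_det + beta * l_lstm + gamma * l_td

def td_error(q_pred, q_true):
    """Mean squared temporal-difference error for the RL head."""
    return sum((p - t) ** 2 for p, t in zip(q_pred, q_true)) / len(q_pred)

l3 = td_error([1.0, 2.0], [1.5, 1.0])   # ((0.5)**2 + (1.0)**2) / 2 = 0.625
total = overall_loss(0.8, 0.2, l3)
```

In training, the gradient of this scalar would drive the parameter update that is then broadcast to each agent.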
Further, acquiring the underwater fish-school data of the fish to be fed comprises:
dividing the underwater culture water body into areas, acquiring the fish-school image data of each culture water area in turn, and preprocessing each image in turn, wherein the preprocessing comprises: Gaussian filtering, image denoising, image enhancement, and removal of non-target images.
Further, acquiring the underwater fish-school data of the fish to be fed further comprises:
selecting, from the acquired annual ecological environment data, the several ecological environment features most strongly correlated with the feeding amount, and obtaining from them the historical ecological environment data of the last several days and the current ecological environment data, specifically comprising the following steps:
taking the feeding amount as the parent sequence and a plurality of ecological environment indexes in the annual ecological environment data as child sequences, wherein the ecological environment indexes include: dissolved oxygen, water temperature, minimum air temperature, average air temperature, maximum air temperature, precipitation, solar radiation, water vapor pressure, total precipitation, and stocking density;
preprocessing the values of each child sequence by season in turn to obtain the dimensionless values of the corresponding ecological environment index;
calculating the correlation coefficients between the dimensionless values and the parent sequence in turn, and obtaining the gray relational degree of each ecological environment index from the correlation coefficients;
selecting the ecological environment indexes ranked highest by gray relational degree as the ecological environment features, and selecting the data of those features from the historical data of the last several days as the historical ecological environment data;
collecting the current ecological environment data according to the ecological environment features.
According to the invention, statistical analysis of the ecological environment data yields the ecological factors that most influence the feeding amount, such as the degree to which dissolved oxygen in the water affects it. Factors with little influence are discarded while the influential ones are retained, which reduces the computation caused by redundant ecological factors, reduces the data loss caused by information asymmetry, and improves feeding precision.
Preferably, the first reinforcement learning network is an actor-critic network with continuous-valued output;
the reward function of the first reinforcement learning network is obtained by performing target detection on the post-feeding fish-school images with the corresponding first convolutional neural network to obtain the satiation state after feeding, and deriving the instant reward from that satiation state.
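The mapping from detected satiation state to instant reward can be sketched as follows; the patent does not specify the reward shape, so the linear under-feeding penalty, the heavier over-feeding penalty, and all parameter values below are assumptions:

```python
def satiation_reward(satiety, target=0.9, waste_penalty=2.0):
    """Instant reward from the detected post-feeding satiation state.
    Under-feeding is penalized linearly; over-feeding is penalized more
    heavily, since uneaten feed wastes resources and pollutes the water.
    The target level and penalty weights are illustrative assumptions."""
    if satiety <= target:
        return -(target - satiety)          # under-fed: linear penalty
    return -waste_penalty * (satiety - target)  # over-fed: heavier penalty

r_under = satiation_reward(0.8)   # slight under-feeding
r_over = satiation_reward(1.0)    # equal-sized over-feeding, larger penalty
```

The asymmetry encodes the invention's goal of reducing the resource waste and pollution caused by over-feeding while still discouraging under-feeding.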
In a second aspect, the invention also provides a system for determining the feed capacity of intensive aquaculture based on image recognition, comprising:
a data acquisition module for acquiring the underwater fish-school data of the fish to be fed, wherein the fish-school data comprises: fish-school image data of different areas of the culture water body, the historical feeding amounts corresponding to those areas, historical ecological environment data, current ecological environment data, and species data of the fish to be fed;
a target detection module for taking the fish-school image data as the input of the trained first distributed multi-agent and performing target detection on it with the trained first convolutional neural network to obtain a target detection result, wherein the first distributed multi-agent comprises: the trained first convolutional neural network, a trained first long short-term memory network, and a trained first reinforcement learning network;
a time-series feature acquisition module for combining the historical feeding amount with the historical ecological environment data, the current ecological environment data, the species data, and the target detection results corresponding to the different areas of the culture water body to obtain high-dimensional data features, and extracting time-series-related low-dimensional features by taking the high-dimensional data features as the input of the first long short-term memory network, obtaining low-dimensional data features;
a feeding module for taking the low-dimensional data features as the observation environment of the first reinforcement learning network, allocating the feeding amount in a distributed fashion dynamically and in real time to obtain the current feeding amount finally output by the first distributed multi-agent, and dispensing feed in a distributed manner according to the current feeding amount.
Drawings
FIG. 1 is a schematic flow chart of an aquaculture feed capacity determination method based on image recognition provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of acquiring feeding amount in real time by an agent in the method for determining aquaculture feed capacity based on image recognition according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a training process of an aquaculture feed capacity determination method based on image recognition according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an aquaculture feed capacity determining system based on image recognition.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings; the described embodiments are apparently only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to FIG. 1, a flow chart of a method for determining the feed capacity of intensive aquaculture based on image recognition according to an embodiment of the invention comprises steps S11 to S14, specifically:
S11, acquiring the underwater fish-school data of the fish to be fed, wherein the fish-school data comprises: fish-school image data of different areas of the culture water body, the historical feeding amounts corresponding to those areas, historical ecological environment data, current ecological environment data, and species data of the fish to be fed.
Acquiring the underwater fish-school data of the fish to be fed comprises: dividing the underwater culture water body into areas, acquiring the fish-school image data of each culture water area in turn, and preprocessing each image in turn, wherein the preprocessing comprises: Gaussian filtering, image denoising, image enhancement, and removal of non-target images.
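The Gaussian-filtering step of the preprocessing can be sketched in pure NumPy; in practice a vision library such as OpenCV would be used, and the kernel size and sigma below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Denoise a grayscale frame by separable convolution:
    filter each row, then each column."""
    k = gaussian_kernel(size, sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

# A noisy stand-in for one underwater fish-school frame.
frame = np.random.default_rng(0).random((32, 32))
smooth = gaussian_filter(frame)
```

Smoothing suppresses the pixel-level noise typical of turbid underwater imagery before the frame is passed to the detection network.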
It is worth noting that, because the changing ecological environment influences the fish feeding amount, a statistical method is used to identify the key ecological factors; these are taken as the ecological environment features influencing the feeding amount, and an adaptive feeding amount is obtained from their time series. In addition, analyzing the degree to which dissolved oxygen in the water influences the feeding amount further improves feeding precision.
Specifically, selecting from the acquired annual ecological environment data the several ecological environment features most strongly correlated with the feeding amount, and obtaining from them the historical ecological environment data of the last several days and the current ecological environment data, comprises substeps S111 to S115:
Substep S111, taking the feeding amount as the parent sequence and a plurality of ecological environment indexes in the annual ecological environment data as child sequences, wherein the ecological environment indexes include: dissolved oxygen, water temperature, minimum air temperature, average air temperature, maximum air temperature, precipitation, solar radiation, water vapor pressure, total precipitation, and stocking density.
Substep S112, preprocessing the values of each child sequence by season in turn to obtain the dimensionless values of the corresponding ecological environment index.
Substep S113, calculating the correlation coefficients between the dimensionless values and the parent sequence in turn, and obtaining the gray relational degree of each ecological environment index from the correlation coefficients.
Substep S114, selecting the ecological environment indexes ranked highest by gray relational degree as the ecological environment features, and selecting the data of those features from the historical data of the last several days as the historical ecological environment data.
Substep S115, collecting the current ecological environment data according to the ecological environment features.
Illustratively, the historical feeding amounts and historical ecological environment data are acquired and divided by season, giving monthly, weekly, or daily annual ecological environment data for each season. These observations, partitioned by quarter, form a matrix of quarterly annual ecological environment data, which is then normalized to obtain a standardized matrix. Specifically: the mean feeding amount is computed, and the ratio of each quarter's feeding amount to that mean is taken as the dimensionless value of the feeding amount; the same operation is applied in turn to each child sequence. For example, the annual mean water temperature is computed, and the ratio of each quarter's water temperature to that mean is the dimensionless value of the water temperature, and so on for all child sequences.
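The ratio-to-mean normalization just described can be sketched as follows; the quarterly water temperatures are made-up illustrative values:

```python
import numpy as np

def dimensionless(series):
    """Ratio-to-mean normalization: divide each quarterly observation
    by the series mean, removing the physical units."""
    series = np.asarray(series, dtype=float)
    return series / series.mean()

water_temp = [18.0, 24.0, 30.0, 20.0]   # quarterly averages (assumed values)
norm = dimensionless(water_temp)        # mean of the series is 23.0
```

Because every index is scaled by its own mean, sequences with different units (temperature, dissolved oxygen, precipitation) become directly comparable in the correlation step that follows.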
Based on the standardized matrix, the correlation coefficients between the dimensionless values and the parent sequence are calculated in turn; specifically, the two-pole minimum difference and maximum difference between each ecological environment index and the parent sequence are calculated, and the correlation coefficient matrix is obtained from them.
Preferably, the elements of the correlation coefficient matrix may be expressed as:

$$\xi_i(k) = \frac{\Delta_{\min} + \rho\,\Delta_{\max}}{\Delta_i(k) + \rho\,\Delta_{\max}},\qquad \Delta_i(k) = \bigl|x_0(k) - x_i(k)\bigr|$$

where $\xi_i(k)$ is an element of the correlation coefficient matrix; $\Delta_{\min}$ and $\Delta_{\max}$ are the two-pole minimum and maximum differences respectively; $\rho$ is the resolution coefficient; $x_0(k)$ is an element of the parent sequence in the standardized annual environment data matrix; and $x_i(k)$ is an element of a child sequence in that matrix. The resolution coefficient is a random variable on a standard normal distribution of the feeding amount.
From the correlation coefficient matrix, the gray relational degree of each child sequence's influence on the feeding amount is obtained by averaging its correlation coefficients; the child sequences are ranked by this mean from large to small, several are selected as the ecological environment features, the data of those features are selected from the historical data of the last several days as the historical ecological environment data, and the current ecological environment data are collected according to the ecological environment features.
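The whole gray relational analysis of substeps S113 and S114 can be sketched as below. The conventional default resolution coefficient 0.5 is used here for simplicity (the patent derives it differently), and the sequences are made-up normalized values:

```python
import numpy as np

def grey_relational_degree(parent, children, rho=0.5):
    """Gray relational analysis as described above.
    parent: normalized feeding-amount sequence, shape (T,).
    children: normalized ecological-index sequences, shape (n, T).
    rho: resolution coefficient (0.5 is the conventional default)."""
    parent = np.asarray(parent, dtype=float)
    children = np.asarray(children, dtype=float)
    delta = np.abs(children - parent)        # point-wise differences
    dmin, dmax = delta.min(), delta.max()    # two-pole min / max differences
    xi = (dmin + rho * dmax) / (delta + rho * dmax)  # coefficient matrix
    return xi.mean(axis=1)                   # gray relational degree per index

parent = [1.0, 1.1, 0.9, 1.2]                # dimensionless feeding amounts
children = [[1.0, 1.1, 0.9, 1.2],            # index identical to the parent
            [0.5, 1.6, 0.3, 1.9]]            # weakly related index
degrees = grey_relational_degree(parent, children)
ranking = np.argsort(degrees)[::-1]          # indexes ranked by relevance
```

An index that tracks the feeding amount exactly attains the maximum degree of 1.0, so the top of `ranking` gives the ecological environment features to keep.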
Step S12, taking the fish-school image data as the input of a trained first distributed multi-agent, and carrying out target detection on the fish-school image data according to a trained first convolutional neural network to obtain a target detection result; wherein the first distributed multi-agent comprises: the system comprises a trained first convolutional neural network, a trained first long-short-term memory network and a trained first reinforcement learning network.
Referring to fig. 2, which is a schematic diagram of an agent acquiring the feeding amount in real time in the method for determining aquaculture feed capacity based on image recognition according to the embodiment of the present invention: since all agents are configured with the same network structure, only the network framework of one agent is shown. Agent $A_i$ acquires the fish swarm image data of the $i$-th culture water body area as input of the first convolutional neural network, and a target detection result is obtained after target detection is carried out; wherein the first convolutional neural network comprises: a backbone network, a Neck network, a density prediction network, a coefficient prediction network, and an output network that yields the target detection result.
Preferably, the first convolutional neural network is a YOLOv5 (You Only Look Once version 5) neural network.
According to the invention, the YOLOv5 neural network can rapidly detect the fish swarm image data, improving the accuracy and efficiency of target detection on fish swarm images. The backbone network adopts a CSPNet network together with adaptive training, data enhancement and multi-scale training, so that the fish swarm image data can be trained efficiently, the features of the fish swarm images can be extracted conveniently, and a higher-precision target detection result can be obtained; the target detection result is then used for the distributed allocation of the feeding amount.
The target detection result is combined with the historical feeding amount, the current ecological environment data, the historical ecological environment data and the fish school type data corresponding to the culture water body area, and the obtained high-dimensional data feature is sent into the first long short-term memory network to obtain a low-dimensional data feature corresponding to the historical feeding amount, the current ecological environment data, the historical ecological environment data, the fish school type data and the target detection result. The first long short-term memory network adopts a two-layer structure comprising LSTM layer 1 and LSTM layer 2, and outputs through a dropout layer on the fully connected layer of LSTM layer 2; it extracts time-sequence-related low-dimensional features from the high-dimensional data features, and the dropout layer helps prevent overfitting of the multiple agents, thereby improving the accuracy of the distributed allocation of the feeding amount.
Preferably, the loss rate of the dropout layer is 0.3.
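As a from-scratch sketch of this idea, the snippet below runs a single numpy LSTM cell over a short feature sequence (the patent's network stacks two such layers) and applies inverted dropout at the stated 0.3 loss rate; the dimensions and random weights are assumptions, and a real implementation would use a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate from x and h."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def dropout(x, rate=0.3, training=True):
    """Inverted dropout with the 0.3 loss rate mentioned in the text."""
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

dim_in, dim_h = 16, 8                      # assumed: high-dim -> low-dim feature
W = rng.standard_normal((4 * dim_h, dim_in)) * 0.1
U = rng.standard_normal((4 * dim_h, dim_h)) * 0.1
b = np.zeros(4 * dim_h)

h, c = np.zeros(dim_h), np.zeros(dim_h)
for t in range(5):                         # a short time series of high-dim features
    x = rng.standard_normal(dim_in)
    h, c = lstm_step(x, h, c, W, U, b)
low_dim = dropout(h, rate=0.3)             # time-sequence-related low-dim feature
```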
The low-dimensional data feature is taken as the observation environment of the first reinforcement learning network, and real-time dynamic distributed allocation of the feeding amount is carried out, so that for the input fish swarm image data of the aquaculture water body area, each agent outputs a current feeding amount corresponding to the real-time dynamic allocation. Each agent then carries out feeding according to its obtained current feeding amount $a_i$; the reward function corresponding to the first reinforcement learning network is obtained by performing target detection on the fed fish swarm image with the corresponding first convolutional neural network to obtain the post-feeding satiety state, and the instant reward is obtained according to that satiety state.
Preferably, the first reinforcement learning network adopts an actor-critic network with continuous output values; the output of the first long short-term memory network is taken as the observation environment of the first reinforcement learning network, and the first reinforcement learning network outputs the distributed allocation result of the feeding amount in real time according to the dynamic environment.
Before the target detection is performed on the fish-shoal image data according to the trained first convolutional neural network, the method comprises the following steps: respectively taking training fish swarm data of a plurality of culture water body areas divided under water as training data of different intelligent agents, and performing distributed execution on each intelligent agent according to the training data to obtain training feeding amount; and storing the corresponding training fish school data and the corresponding training feeding amount into an experience playback data pool by each intelligent agent, so that the centralized training intelligent agent acquires experience data from the experience playback data pool for centralized training.
Specifically, taking training fish school data of a plurality of underwater-divided culture water body areas respectively as training data of different agents, each agent performing distributed execution according to the training data to obtain the training feeding amount, comprises: respectively taking preset fish swarm image data of the plurality of underwater-divided culture water body areas as input of the second convolutional neural network corresponding to each initial agent, and sequentially obtaining training target detection results; sequentially combining the training historical feeding amount with the training historical ecological environment data, the training current ecological environment data, the training type data and the training target detection results respectively to obtain corresponding training high-dimensional data features, and taking the training high-dimensional data features respectively as input of the second long short-term memory network corresponding to each agent to obtain corresponding training low-dimensional data features; and sequentially taking the training low-dimensional data features as the observation environment of the second reinforcement learning network corresponding to each initial agent, and carrying out distributed allocation of the feeding amount to obtain the corresponding training feeding amount.
Each agent storing the corresponding training fish school data and the corresponding training feeding amount into the experience playback data pool, so that the centralized training agent acquires experience data from the experience playback data pool for centralized training, comprises: each agent stores the corresponding training fish school data and the corresponding training feeding amount into the experience playback data pool, so that the centralized training agent, after centralized training according to the training threshold value, transmits the parameters of the trained model to each agent; wherein the centralized training agent and each agent employ the same network framework.
Specifically, the centralized training agent acquiring experience data from the experience playback data pool for centralized training comprises: weighting the first loss function of the third convolutional neural network, the second loss function of the third long short-term memory network and the third loss function of the third reinforcement learning network of the centralized training agent to obtain the overall loss function of the centralized training agent; and updating the parameters of the centralized training agent according to the overall loss function, and transmitting the obtained updated parameters to each agent.
Referring to fig. 3, which is a schematic diagram of the training process of the aquaculture feed capacity determining method based on image recognition according to an embodiment of the present invention: each agent processes the fish swarm image data and performs distributed allocation of the feeding amount, and the initial multiple agents are trained offline in a distributed-execution, centralized-training manner to obtain trained agents. Specifically, agent $A_i$ acquires the fish swarm image data of the $i$-th culture water body area as the input of its own network; the fish swarm image data are taken as input of the initial second convolutional neural network for target detection to obtain a training target detection result, and the training target detection result is combined with the training historical feeding amount, the training historical ecological environment data, the training current ecological environment data and the training type data corresponding to the culture water body area to obtain a training high-dimensional data feature. For convenience of representation, the combination process is omitted in the figure and the obtained training high-dimensional data feature is shown directly.
The training high-dimensional data feature is taken as the input of the initial second long short-term memory network to extract the time-sequence-related low-dimensional data feature; the obtained low-dimensional data feature has time-sequence correlation and is taken as the observation environment of the initial second reinforcement learning network to output the feeding amount of the training stage. Each agent carries out distributed feeding according to the current feeding amount it has obtained itself, then performs target detection on the fed fish swarm image with the corresponding second convolutional neural network to obtain the post-feeding satiety state, and the instant reward is obtained according to that satiety state.
Each agent stores the experience data of the successfully allocated feeding amount of the second reinforcement learning network into the experience playback data pool. It is worth noting that the experience playback data pool contains not only the feeding amount of each agent, but also the training fish school data, the instant reward, and the training fish school data of the next feeding corresponding to each agent. Each agent only calculates the feeding amount according to its own parameters and does not train; the parameters of each agent are updated after the centralized training agent performs centralized training.
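The experience playback data pool described above stores, per transition, the observation, the allocated feeding amount, the instant reward and the next observation; a minimal sketch (class and field names are assumptions of this sketch):

```python
import random
from collections import deque

class ReplayPool:
    """Experience playback data pool shared by the distributed agents:
    each entry is (fish-school observation, feeding amount, instant reward,
    next fish-school observation)."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted first

    def store(self, obs, feed_amount, reward, next_obs):
        self.buffer.append((obs, feed_amount, reward, next_obs))

    def sample(self, batch_size):
        # centralized training draws a random minibatch of past transitions
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

pool = ReplayPool(capacity=100)
for step in range(30):                       # e.g. transitions from several agents
    pool.store(obs=[step], feed_amount=1.0, reward=0.5, next_obs=[step + 1])
batch = pool.sample(8)
```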
When the preset training time is reached, the centralized training agent acquires experience data from the experience playback data pool for centralized training; the centralized training agent and each agent adopt the same network framework, and the second convolutional neural network, the second long short-term memory network and the second reinforcement learning network are updated synchronously according to the overall loss function of the third convolutional neural network, the third long short-term memory network and the third reinforcement learning network, so as to obtain a globally optimal result of the distributed allocation of the feeding amount. The process by which the centralized training agent obtains the feeding amount from the experience data is the same as the processing procedure of agent $A_i$ and will not be described in detail here. After the centralized training agent completes the updating of its own parameters according to the overall loss function, the obtained updated parameters are transmitted to all distributed agents, so that each agent calculates the next feeding amount according to the received updated parameters.
It is noted that the reward function of the initial second reinforcement learning network is obtained by performing target detection on the fed fish swarm image according to the corresponding initial second convolutional neural network, obtaining the post-feeding satiety state, and obtaining the instant reward according to the satiety state. Similarly, the reward function of the third reinforcement learning network is obtained by performing target detection on the fed fish swarm image according to the corresponding third convolutional neural network, obtaining the post-feeding satiety state, and obtaining the instant reward according to the satiety state.
Specifically, the reward function of the reinforcement learning network judges the satiety state of the fish school according to the activity of the fish school after feeding: if few fish are in a state of taking in and spitting out feed (i.e. few fish are oversaturated) and few fish are still in a foraging state, the instant reward is larger; otherwise the instant reward is smaller. The satiety state of the fish school can be obtained by performing target detection on the post-feeding image with the corresponding convolutional neural network.
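One hedged way to realize such a satiety-based instant reward is a linear penalty on the fractions of oversaturated and still-foraging fish; the linear form and the unit weights below are assumptions of this sketch, not the patent's formula:

```python
def instant_reward(n_fish, n_oversaturated, n_foraging, w_over=1.0, w_forage=1.0):
    """Larger reward when few fish spit out feed (oversaturated) and few
    fish are still foraging, as judged from post-feeding detections."""
    if n_fish == 0:
        return 0.0
    frac_over = n_oversaturated / n_fish
    frac_forage = n_foraging / n_fish
    return 1.0 - w_over * frac_over - w_forage * frac_forage

# Well-judged feeding: almost no fish spitting feed or still hunting for it.
good = instant_reward(n_fish=100, n_oversaturated=2, n_foraging=5)
# Overfeeding: many fish take feed and spit it out again.
bad = instant_reward(n_fish=100, n_oversaturated=40, n_foraging=5)
```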
Preferably, the overall loss function can be expressed as:

$$L=w_1 L_1+w_2 L_2+w_3 L_3,\qquad L_1=\frac{1}{N}\sum_{n=1}^{N}\left(\lambda_{cls}L_{cls}^{(n)}+\lambda_{loc}L_{loc}^{(n)}+\lambda_{conf}L_{conf}^{(n)}\right),\qquad L_3=\frac{1}{N}\sum_{n=1}^{N}\left(Q^{(n)}-\hat{Q}^{(n)}\right)^{2}$$

wherein $w_1$, $w_2$ and $w_3$ are respectively the weights of the first, second and third loss functions; $\lambda_{cls}$, $\lambda_{loc}$ and $\lambda_{conf}$ are respectively the weights of the classification sub-loss function $L_{cls}^{(n)}$, the localization sub-loss function $L_{loc}^{(n)}$ and the confidence sub-loss function $L_{conf}^{(n)}$ of the first loss function for the $n$-th sample; $L_2$ is the second loss function of the long short-term memory network used as an encoder; $Q^{(n)}$ and $\hat{Q}^{(n)}$ are respectively the estimated Q value and the true Q value of the third reinforcement learning network for the $n$-th sample, and their difference $Q^{(n)}-\hat{Q}^{(n)}$ is the time difference error used as the third loss function.
It should be noted that, since the reinforcement learning network contains multiple neural networks, including the actor network and the critic network, only the time difference error is processed when the overall loss function is calculated; both networks can update their parameters according to the time difference error, so the reinforcement learning network can update its multiple networks synchronously. The whole network framework is thus updated with a single set of overall parameters, which facilitates obtaining the optimal solution of the agent's entire network framework, thereby improving the accuracy of feeding the fish school.
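A minimal sketch of the weighted overall loss described above, with the detection loss split into its classification, localization and confidence sub-losses and the time difference error squared and averaged; all weight values and sample numbers here are assumptions of this sketch:

```python
import numpy as np

def overall_loss(cls_loss, loc_loss, conf_loss, lstm_loss, q_est, q_target,
                 w=(1.0, 1.0, 1.0), det_w=(1.0, 1.0, 1.0)):
    """Weighted sum of: detection loss (classification + localization +
    confidence), LSTM-encoder loss, and the mean squared time difference
    error of the Q estimates."""
    w1, w2, w3 = w
    lc, ll, lf = det_w
    l1 = lc * cls_loss + ll * loc_loss + lf * conf_loss
    l3 = float(np.mean((np.asarray(q_est) - np.asarray(q_target)) ** 2))
    return w1 * l1 + w2 * lstm_loss + w3 * l3

loss = overall_loss(cls_loss=0.2, loc_loss=0.1, conf_loss=0.05,
                    lstm_loss=0.3, q_est=[1.0, 0.5], q_target=[0.8, 0.7])
```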
Preferably, the first reinforcement learning network is an actor-critic network whose output value is a continuous value.
Preferably, the learning rates of the actor network and the critic network in the actor-critic network are 0.001 and 0.002, respectively.
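A toy actor-critic update using those two learning rates can be sketched with linear function approximators; the Gaussian exploration policy, the discount factor and all dimensions are assumptions of this sketch, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim = 4
theta_actor = rng.standard_normal(obs_dim) * 0.1  # actor: linear policy mean
theta_critic = np.zeros(obs_dim)                  # critic: linear value estimate
lr_actor, lr_critic = 0.001, 0.002                # learning rates from the text

def update(obs, reward, next_obs, gamma=0.99, sigma=0.1):
    """One actor-critic step: both networks are driven by the TD error."""
    global theta_actor, theta_critic
    v, v_next = theta_critic @ obs, theta_critic @ next_obs
    td_error = reward + gamma * v_next - v
    action = theta_actor @ obs + sigma * rng.standard_normal()  # continuous output
    # Critic moves its value estimate toward the TD target; the actor takes a
    # policy-gradient step scaled by the same TD error.
    theta_critic += lr_critic * td_error * obs
    theta_actor += lr_actor * td_error * (action - theta_actor @ obs) / sigma**2 * obs
    return action, td_error

obs = np.array([1.0, 0.5, -0.3, 0.2])
action, td = update(obs, reward=1.0, next_obs=obs)
```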
And S13, respectively combining the historical feeding amount with the historical ecological environment data, the current ecological environment data, the category data and target detection results corresponding to different areas of the culture water body to obtain high-dimension data characteristics, and taking the high-dimension data characteristics as the input of the first long-short-term memory network to extract time-sequence-related low-dimension characteristics to obtain low-dimension data characteristics.
And S14, taking the low-dimensional data characteristics as an observation environment of the first reinforcement learning network, performing distributed distribution of the feeding amount in real time and dynamically to obtain the current feeding amount finally output by the first distributed multi-agent, and performing distributed feeding according to the current feeding amount.
Referring to fig. 4, a schematic structural diagram of an aquaculture feed capacity determining system based on image recognition according to the present invention includes: a data acquisition module 41, a target detection module 42, a timing characteristic acquisition module 43 and a feeding module 44.
It should be noted that, the data obtaining module 41 mainly obtains fish swarm data of the fish to be fed underwater, and transmits the obtained fish swarm data to the target detecting module 42 for target detection; after the target detection module 42 acquires the fish swarm data, the fish swarm data is used as input of a convolutional neural network of the distributed multi-agent to perform target detection, and the obtained target detection result is transmitted to the time sequence feature acquisition module 43; after the time sequence feature obtaining module 43 obtains the target detection result, and combines the historical feeding amount, the historical ecological environment data, the current ecological environment data and the type data of the fish shoal in the corresponding area, obtains the low-dimensional data feature after combination, and transmits the low-dimensional data feature to the feeding module 44; after receiving the low-dimensional data feature, the feeding module 44 obtains a current feeding amount according to the reinforcement learning network, and performs distributed feeding according to the current feeding amount.
The data acquisition module 41 is used for acquiring underwater fish shoal data of the fish to be fed; wherein the fish school data comprises: the method comprises the steps of providing shoal image data of different areas of a culture water body, historical feeding amounts corresponding to the different areas of the culture water body, historical ecological environment data, current ecological environment data and type data of fishes to be fed.
The method for acquiring the fish shoal data of the underwater fish to be fed comprises the following steps: dividing the underwater aquaculture water into areas, sequentially obtaining fish swarm image data in each aquaculture water area, and sequentially preprocessing the fish swarm image data; wherein the preprocessing comprises: gaussian filtering, image denoising, image enhancement and removal of non-target images.
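The Gaussian-filtering and image-enhancement steps of this preprocessing can be sketched with plain numpy; the kernel size, sigma and the contrast-stretch enhancement are assumptions of this sketch, and the denoising and non-target-removal steps are omitted:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel; applied separably along rows then columns."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Separable Gaussian smoothing with edge padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def enhance(img):
    """Simple contrast-stretch enhancement to the [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

img = np.random.default_rng(2).random((32, 32))   # stand-in for a fish swarm frame
smoothed = gaussian_filter(img)
enhanced = enhance(smoothed)
```

In practice these steps would typically use an image library rather than hand-rolled convolutions; the sketch only shows the order of operations.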
Specifically, according to the acquired annual ecological environment data, a plurality of ecological environment features with the greatest degree of relevance to the feeding amount are selected, and according to these ecological environment features, the historical ecological environment data and the current ecological environment data of a plurality of days are obtained; this comprises sub-steps S111-S115, as follows:
Sub-step S111: taking the feeding amount as the parent sequence and a plurality of ecological environment indexes in the annual ecological environment data as the sub-sequences; wherein the ecological environment indexes include: dissolved oxygen, water temperature, minimum air temperature, average air temperature, maximum air temperature, precipitation, solar radiation, water vapor pressure, total precipitation amount and cultivation density.
Sub-step S112: sequentially preprocessing the values corresponding to the sub-sequences by season to obtain the dimensionless values of the corresponding ecological environment indexes.

Sub-step S113: sequentially calculating the correlation coefficients of the plurality of dimensionless values with the parent sequence, and obtaining the gray correlation degree of each ecological environment index according to the correlation coefficients.

Sub-step S114: selecting the ecological environment indexes corresponding to the top several gray correlation degrees, ordered from large to small, as the ecological environment features, and selecting the data of these ecological environment features from the historical data of the last several days as the historical ecological environment data.

Sub-step S115: collecting the current ecological environment data according to the ecological environment features.
The target detection module 42 is configured to take the fish-school image data as an input of a trained first distributed multi-agent, and perform target detection on the fish-school image data according to a trained first convolutional neural network, so as to obtain a target detection result; wherein the first distributed multi-agent comprises: the system comprises a trained first convolutional neural network, a trained first long-short-term memory network and a trained first reinforcement learning network.
Before the target detection is performed on the fish-shoal image data according to the trained first convolutional neural network, the method comprises the following steps: respectively taking training fish swarm data of a plurality of culture water body areas divided under water as training data of different intelligent agents, and performing distributed execution on each intelligent agent according to the training data to obtain training feeding amount; and storing the corresponding training fish school data and the corresponding training feeding amount into an experience playback data pool by each intelligent agent, so that the centralized training intelligent agent acquires experience data from the experience playback data pool for centralized training.
Specifically, taking training fish school data of a plurality of underwater-divided culture water body areas respectively as training data of different agents, each agent performing distributed execution according to the training data to obtain the training feeding amount, comprises: respectively taking preset fish swarm image data of the plurality of underwater-divided culture water body areas as input of the second convolutional neural network corresponding to each initial agent, and sequentially obtaining training target detection results; sequentially combining the training historical feeding amount with the training historical ecological environment data, the training current ecological environment data, the training type data and the training target detection results respectively to obtain corresponding training high-dimensional data features, and taking the training high-dimensional data features respectively as input of the second long short-term memory network corresponding to each agent to obtain corresponding training low-dimensional data features; and sequentially taking the training low-dimensional data features as the observation environment of the second reinforcement learning network corresponding to each initial agent, and carrying out distributed allocation of the feeding amount to obtain the corresponding training feeding amount.
Each agent storing the corresponding training fish school data and the corresponding training feeding amount into the experience playback data pool, so that the centralized training agent acquires experience data from the experience playback data pool for centralized training, comprises: each agent stores the corresponding training fish school data and the corresponding training feeding amount into the experience playback data pool, so that the centralized training agent, after centralized training according to the training threshold value, transmits the parameters of the trained model to each agent; wherein the centralized training agent and each agent employ the same network framework.
Specifically, the centralized training agent acquiring experience data from the experience playback data pool for centralized training comprises: weighting the first loss function of the third convolutional neural network, the second loss function of the third long short-term memory network and the third loss function of the third reinforcement learning network of the centralized training agent to obtain the overall loss function of the centralized training agent; and updating the parameters of the centralized training agent according to the overall loss function, and transmitting the obtained updated parameters to each agent.
Preferably, the overall loss function may be expressed as:

$$L=w_1 L_1+w_2 L_2+w_3 L_3,\qquad L_1=\frac{1}{N}\sum_{n=1}^{N}\left(\lambda_{cls}L_{cls}^{(n)}+\lambda_{loc}L_{loc}^{(n)}+\lambda_{conf}L_{conf}^{(n)}\right),\qquad L_3=\frac{1}{N}\sum_{n=1}^{N}\left(Q^{(n)}-\hat{Q}^{(n)}\right)^{2}$$

wherein $w_1$, $w_2$ and $w_3$ are respectively the weights of the first, second and third loss functions; $\lambda_{cls}$, $\lambda_{loc}$ and $\lambda_{conf}$ are respectively the weights of the classification sub-loss function $L_{cls}^{(n)}$, the localization sub-loss function $L_{loc}^{(n)}$ and the confidence sub-loss function $L_{conf}^{(n)}$ of the first loss function for the $n$-th sample; $L_2$ is the second loss function of the long short-term memory network used as an encoder; $Q^{(n)}$ and $\hat{Q}^{(n)}$ are respectively the estimated Q value and the true Q value of the third reinforcement learning network for the $n$-th sample, and their difference $Q^{(n)}-\hat{Q}^{(n)}$ is the time difference error used as the third loss function.
The time sequence feature obtaining module 43 is configured to combine the historical feeding amount with the historical ecological environment data, the current ecological environment data, the category data and the target detection results corresponding to different areas of the aquaculture water body respectively to obtain a high-dimensional data feature, and extract a low-dimensional feature related to time sequence by using the high-dimensional data feature as an input of the first long-short-term memory network to obtain a low-dimensional data feature.
And the feeding module 44 is configured to perform distributed allocation of feeding amounts dynamically in real time by using the low-dimensional data feature as an observation environment of the first reinforcement learning network, so as to obtain a current feeding amount finally output by the first distributed multi-agent, and perform distributed feeding according to the current feeding amount.
Preferably, the first reinforcement learning network is an actor-critic network with continuous output values; the reward function of the first reinforcement learning network is obtained by performing target detection on the fed fish swarm image according to the corresponding first convolutional neural network, obtaining the post-feeding satiety state, and obtaining the instant reward according to the satiety state.
According to the invention, performing target detection on underwater fish swarm image data makes it possible to accurately obtain the fish capacity in the culture water body; combining the fish swarm image data with the historical and current ecological environment data and extracting the time-sequence features of the resulting high-dimensional data captures the influence of time-correlated ecological environment indexes on the feeding amount, so that accurate feed feeding can be realized. In addition, the adaptive adjustment of the feeding amount through reinforcement learning allows the method to suit different fish schools, further improving the accuracy and adaptability of feed feeding. Moreover, compared with the prior art, the obtained feeding amount does not depend on the number of fed fries, so it is not limited to the number of fries fed in the initial stage; when the number of fish in the culture water body changes dynamically, the feeding amount can adapt to fish in different growth periods and with dynamically changing numbers.
It will be appreciated by those skilled in the art that embodiments of the present application may also provide a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (7)

1. The method for determining the capacity of the aquatic product intensive culture feed based on image recognition is characterized by comprising the following steps of:
acquiring underwater fish shoal data of fishes to be fed; wherein the fish school data comprises: the method comprises the steps of (1) shoal image data of different areas of a culture water body, historical feeding amounts corresponding to the different areas of the culture water body, historical ecological environment data, current ecological environment data and type data of fishes to be fed;
taking the fish-swarm image data as the input of a trained first distributed multi-agent, and carrying out target detection on the fish-swarm image data according to a trained first convolutional neural network to obtain a target detection result; wherein the first distributed multi-agent comprises: the system comprises a trained first convolutional neural network, a trained first long-term and short-term memory network and a trained first reinforcement learning network;
combining the historical feeding amount with the historical ecological environment data, the current ecological environment data, the type data and target detection results corresponding to different areas of the culture water body respectively to obtain high-dimensional data characteristics, and taking the high-dimensional data characteristics as the input of the first long-short-term memory network to extract time-sequence-related low-dimensional characteristics to obtain low-dimensional data characteristics;
Taking the low-dimensional data characteristic as an observation environment of the first reinforcement learning network, carrying out distributed distribution of feeding amount in real time and dynamically to obtain current feeding amount finally output by the first distributed multi-agent, and carrying out distributed feeding according to the current feeding amount;
before the target detection is carried out on the fish school image data according to the trained first convolutional neural network, the method comprises the following steps:
respectively taking the training fish school data of a plurality of underwater-divided culture water body areas as the training data of different agents, and having each agent perform distributed execution according to the training data to obtain a training feeding amount;
storing, by each agent, the corresponding training fish school data and the corresponding training feeding amount into an experience playback data pool, so that the centralized training agent acquires experience data from the experience playback data pool for centralized training;
the obtaining of the fish shoal data of the fish to be fed underwater comprises the following steps:
dividing the underwater culture water body into areas, sequentially obtaining fish school image data in each culture water body area, and sequentially preprocessing the fish school image data; wherein the preprocessing comprises: Gaussian filtering, image denoising, image enhancement and removal of non-target images;
The obtaining of the fish shoal data of the fish to be fed underwater further comprises:
according to the acquired annual ecological environment data, selecting a plurality of ecological environment characteristics with the greatest degree of correlation with the feeding amount, and according to the plurality of ecological environment characteristics, obtaining historical ecological environment data and current ecological environment data of a plurality of days, wherein the method specifically comprises the following steps of:
taking the feeding amount as a parent sequence and a plurality of ecological environment indexes in the annual ecological environment data as child sequences; wherein the ecological environment indexes include: dissolved oxygen, water temperature, minimum air temperature, average air temperature, maximum air temperature, precipitation, solar radiation, water vapor pressure and cultivation density;
sequentially preprocessing the values corresponding to the subsequences according to seasons to obtain dimensionless values of the corresponding ecological environment indexes;
sequentially calculating correlation coefficients of a plurality of dimensionless numbers and the parent sequence, and obtaining gray correlation degrees of each ecological environment index according to the correlation coefficients;
selecting the ecological environment indexes whose gray correlation degrees rank highest, sorted from largest to smallest, as the ecological environment characteristics, and selecting the data of the ecological environment characteristics from the historical data of a plurality of recent days as the historical ecological environment data;
And collecting current ecological environment data according to the ecological environment characteristics.
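For illustration, the gray relational screening described in the steps above can be sketched as follows; the sample data, the mean-based dimensionless preprocessing and the resolution coefficient `rho = 0.5` are assumptions of this sketch, not values taken from the patent:

```python
import numpy as np

def grey_relational_degree(parent, children, rho=0.5):
    """Deng's grey relational analysis: score each child sequence
    (ecological environment index) by its correlation with the
    parent sequence (feeding amount)."""
    parent = np.asarray(parent, dtype=float)
    children = np.asarray(children, dtype=float)
    # Dimensionless preprocessing: divide each sequence by its mean.
    p = parent / parent.mean()
    c = children / children.mean(axis=1, keepdims=True)
    delta = np.abs(c - p)                     # point-wise absolute differences
    dmin, dmax = delta.min(), delta.max()
    # Correlation coefficient at every point of every child sequence.
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    return xi.mean(axis=1)                    # grey relational degree per index

# Hypothetical data: feeding amount vs. three indicators over 5 days.
feed = [10.0, 12.0, 11.0, 13.0, 12.5]
indicators = [
    [6.1, 6.8, 6.4, 7.0, 6.9],        # dissolved oxygen
    [24.0, 25.5, 25.0, 26.0, 25.8],   # water temperature
    [3.0, 1.0, 4.0, 2.0, 5.0],        # precipitation (weakly related)
]
degrees = grey_relational_degree(feed, indicators)
ranked = np.argsort(degrees)[::-1]    # indexes from most to least related
```

Sorting the degrees in descending order and keeping the top-ranked indexes reproduces the selection of ecological environment characteristics.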
2. The method for determining the capacity of the intensive aquaculture feed based on image recognition according to claim 1, wherein respectively taking the training fish school data of a plurality of underwater-divided culture water body areas as the training data of different agents, and having each agent perform distributed execution according to the training data to obtain the training feeding amount, comprises the following steps:
respectively taking preset fish school image data of a plurality of underwater-divided culture water body areas as the input of the second convolutional neural network corresponding to each agent, and sequentially obtaining training target detection results;
sequentially combining the training historical feeding amount with the training historical ecological environment data, the training current ecological environment data, the training type data and the training target detection results respectively to obtain corresponding training high-dimensional data characteristics, and taking the training high-dimensional data characteristics as the input of the second long short-term memory network corresponding to each agent to obtain corresponding training low-dimensional data characteristics;
and sequentially taking the training low-dimensional data characteristics as the observation environment of the second reinforcement learning network corresponding to each agent, and carrying out distributed allocation of the feeding amount to obtain the corresponding training feeding amount.
3. The method for determining the capacity of the intensive aquaculture feed based on image recognition according to claim 1, wherein storing the corresponding training fish school data and the corresponding training feeding amount in the experience playback data pool by each agent, so that the centralized training agent acquires experience data from the experience playback data pool for centralized training, comprises:
each agent stores the corresponding training fish school data and the corresponding training feeding amount into the experience playback data pool, so that, after performing centralized training according to a training threshold value, the centralized training agent transmits the parameters of the trained model to each agent; wherein the centralized training agent and each agent employ the same network framework.
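A minimal sketch of this "distributed execution, centralized training" loop follows; the class names, the stubbed gradient update and the threshold value are hypothetical stand-ins, not taken from the patent:

```python
import random
from collections import deque

class FieldAgent:
    """Stand-in for one distributed executing agent."""
    def __init__(self):
        self.params = None          # filled by the trainer's broadcast

class ReplayPool:
    """Shared experience playback pool written to by every agent."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)
    def push(self, experience):
        self.buffer.append(experience)
    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class CentralTrainer:
    """Centralized trainer sharing one network framework with the agents:
    once the pool holds `train_threshold` experiences it performs an
    update (stubbed here) and broadcasts the parameters to every agent."""
    def __init__(self, agents, train_threshold=64):
        self.agents, self.train_threshold = agents, train_threshold
        self.params = {"update": 0}
    def maybe_train(self, pool):
        if len(pool.buffer) < self.train_threshold:
            return False
        batch = pool.sample(self.train_threshold)   # experience minibatch
        self.params["update"] += 1                  # stand-in for a gradient step
        for agent in self.agents:
            agent.params = dict(self.params)        # parameter broadcast
        return True

agents = [FieldAgent() for _ in range(3)]
pool = ReplayPool()
for step in range(100):                             # agents push experiences
    pool.push(("observation", "feeding_amount", "reward"))
trainer = CentralTrainer(agents)
trained = trainer.maybe_train(pool)
```

Because the trainer and the agents employ the same network framework, broadcasting reduces to a plain parameter copy.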
4. The method for determining the capacity of an intensive aquaculture feed based on image recognition according to claim 3, wherein enabling the centralized training agent to acquire experience data from the experience playback data pool for centralized training comprises:
weighting the first loss function of the third convolutional neural network, the second loss function of the third long short-term memory network and the third loss function of the third reinforcement learning network of the centralized training agent to obtain an overall loss function of the centralized training agent;
And updating the parameters of the centralized training agents according to the overall loss function, and transmitting the obtained updated parameters to each agent.
5. The method for determining the capacity of an aquaculture feed based on image recognition according to claim 4, wherein the overall loss function is expressed as:
$$L=\lambda_1 L_1+\lambda_2 L_2+\lambda_3 L_3,\qquad L_1=\frac{1}{N}\sum_{i=1}^{N}\left(\alpha L_{cls}^{(i)}+\beta L_{loc}^{(i)}+\gamma L_{conf}^{(i)}\right),\qquad L_3=\frac{1}{N}\sum_{i=1}^{N}\left(Q_i-\hat{Q}_i\right)^2$$
wherein $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the weights of the first, second and third loss functions respectively; $\alpha$, $\beta$ and $\gamma$ are respectively the weights of the classification sub-loss function $L_{cls}$, the localization sub-loss function $L_{loc}$ and the confidence sub-loss function $L_{conf}$ of the first loss function over the $N$ samples; $L_2$ is the loss of the long short-term memory network used as an encoder; $Q_i$ and $\hat{Q}_i$ are respectively the estimated Q value and the true Q value of the third reinforcement learning network for the $i$-th sample, and their time-difference error constitutes the third loss function.
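Numerically, the weighted combination of claim 5 can be sketched as follows; all weight values and sample losses here are illustrative assumptions:

```python
def detection_loss(samples, alpha=1.0, beta=0.5, gamma=0.5):
    """First loss: weighted classification / localization / confidence
    sub-losses, averaged over the N samples."""
    return sum(alpha * c + beta * l + gamma * f
               for c, l, f in samples) / len(samples)

def td_loss(q_pairs):
    """Third loss: mean squared time-difference error between the
    estimated Q value and the true Q value."""
    return sum((q - q_true) ** 2 for q, q_true in q_pairs) / len(q_pairs)

def overall_loss(l1, l2, l3, lam=(0.4, 0.3, 0.3)):
    """Overall loss: weighted sum of the three component losses."""
    return lam[0] * l1 + lam[1] * l2 + lam[2] * l3

# One detection sample with (classification, localization, confidence)
# losses, one Q-value pair, and an LSTM encoder loss of 0.1.
l1 = detection_loss([(1.0, 2.0, 0.5)])   # 1.0 + 1.0 + 0.25 = 2.25
l3 = td_loss([(1.5, 1.0)])               # 0.5 ** 2 = 0.25
total = overall_loss(l1, 0.1, l3)
```

Updating the centralized training agent against this single scalar lets one backward pass adjust the detector, the encoder and the critic together before the parameters are broadcast.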
6. The method for determining the capacity of the aquatic product intensive culture feed based on image recognition according to claim 1, wherein the first reinforcement learning network is an actor-critic network with continuous output values;
the reward function of the first reinforcement learning network is obtained by performing target detection on the post-feeding fish school images according to the corresponding first convolutional neural network to obtain the satiation state after feeding, and deriving an instant reward from the satiation state.
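As an illustration only, an instant reward of the kind claim 6 describes could map the detected satiation state to a scalar like this; the target ratio, the penalty slope and the piecewise shape are assumptions of this sketch:

```python
def instant_reward(satiated_ratio, target=0.9, waste_penalty=2.0):
    """Map the fraction of fish detected as satiated after feeding to an
    instant reward: under-feeding earns proportionally less than 1.0,
    over-feeding is penalised for the feed it wastes."""
    if satiated_ratio <= target:
        return satiated_ratio / target                    # ramps up to 1.0
    return 1.0 - waste_penalty * (satiated_ratio - target)  # wasted feed
```

The reward peaks when the detected satiation ratio equals the target, which steers the actor-critic network toward feeding amounts that satiate the shoal without excess feed.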
7. An aquatic product intensive culture feed capacity determining system based on image recognition, which is characterized by comprising:
the data acquisition module is used for acquiring underwater fish school data of the fish to be fed; wherein the fish school data comprises: fish school image data of different areas of the culture water body, historical feeding amounts corresponding to the different areas of the culture water body, historical ecological environment data, current ecological environment data and type data of the fish to be fed;
the target detection module is used for taking the fish school image data as the input of a trained first distributed multi-agent, and carrying out target detection on the fish school image data according to a trained first convolutional neural network to obtain a target detection result; wherein the first distributed multi-agent comprises: the trained first convolutional neural network, a trained first long short-term memory network and a trained first reinforcement learning network;
the time sequence feature acquisition module is used for respectively combining the historical feeding amount with the historical ecological environment data, the current ecological environment data, the type data and target detection results corresponding to different areas of the culture water body to obtain high-dimensional data features, and extracting time sequence-related low-dimensional features by taking the high-dimensional data features as the input of the first long-short-term memory network to obtain low-dimensional data features;
the feeding module is used for taking the low-dimensional data characteristics as the observation environment of the first reinforcement learning network, dynamically allocating the feeding amount in a distributed manner in real time to obtain the current feeding amount finally output by the first distributed multi-agent, and carrying out distributed feeding according to the current feeding amount;
before the target detection is carried out on the fish school image data according to the trained first convolutional neural network, the method comprises the following steps:
respectively taking the training fish school data of a plurality of underwater-divided culture water body areas as the training data of different agents, and having each agent perform distributed execution according to the training data to obtain a training feeding amount;
storing, by each agent, the corresponding training fish school data and the corresponding training feeding amount into an experience playback data pool, so that the centralized training agent acquires experience data from the experience playback data pool for centralized training;
the obtaining of the fish shoal data of the fish to be fed underwater comprises the following steps:
dividing the underwater culture water body into areas, sequentially obtaining fish school image data in each culture water body area, and sequentially preprocessing the fish school image data; wherein the preprocessing comprises: Gaussian filtering, image denoising, image enhancement and removal of non-target images;
The obtaining of the fish shoal data of the fish to be fed underwater further comprises:
according to the acquired annual ecological environment data, selecting a plurality of ecological environment characteristics with the greatest degree of correlation with the feeding amount, and according to the plurality of ecological environment characteristics, obtaining historical ecological environment data and current ecological environment data of a plurality of days, wherein the method specifically comprises the following steps of:
taking the feeding amount as a parent sequence and a plurality of ecological environment indexes in the annual ecological environment data as child sequences; wherein the ecological environment indexes include: dissolved oxygen, water temperature, minimum air temperature, average air temperature, maximum air temperature, precipitation, solar radiation, water vapor pressure and cultivation density;
sequentially preprocessing the values corresponding to the subsequences according to seasons to obtain dimensionless values of the corresponding ecological environment indexes;
sequentially calculating correlation coefficients of a plurality of dimensionless numbers and the parent sequence, and obtaining gray correlation degrees of each ecological environment index according to the correlation coefficients;
selecting the ecological environment indexes whose gray correlation degrees rank highest, sorted from largest to smallest, as the ecological environment characteristics, and selecting the data of the ecological environment characteristics from the historical data of a plurality of recent days as the historical ecological environment data;
And collecting current ecological environment data according to the ecological environment characteristics.
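The preprocessing chain named in the claims (Gaussian filtering, denoising, enhancement) can be sketched dependency-free as below; a production system would more likely use OpenCV, and the kernel size, sigma and min-max contrast stretch are assumptions of this sketch:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2-D Gaussian kernel for smoothing / denoising."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def preprocess(frame):
    """Gaussian filtering followed by a min-max contrast stretch, as a
    minimal stand-in for the claimed denoising + enhancement steps."""
    k = gaussian_kernel()
    pad = np.pad(frame.astype(float), 2, mode="edge")
    h, w = frame.shape
    # Direct 2-D convolution (slow but dependency-free).
    smooth = np.array([[np.sum(pad[i:i + 5, j:j + 5] * k)
                        for j in range(w)] for i in range(h)])
    lo, hi = smooth.min(), smooth.max()
    return (smooth - lo) / (hi - lo + 1e-8)   # enhanced, scaled to [0, 1]

rng = np.random.default_rng(0)
frame = rng.random((8, 8))      # stand-in for one grayscale fish-school frame
out = preprocess(frame)
```

Each culture water body area's frames would pass through this step before being fed to the first convolutional neural network for target detection.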
CN202310913354.4A 2023-07-25 2023-07-25 Method and system for determining capacity of aquatic product intensive culture feed based on image recognition Active CN116630080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310913354.4A CN116630080B (en) 2023-07-25 2023-07-25 Method and system for determining capacity of aquatic product intensive culture feed based on image recognition

Publications (2)

Publication Number Publication Date
CN116630080A CN116630080A (en) 2023-08-22
CN116630080B true CN116630080B (en) 2024-01-26

Family

ID=87603119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310913354.4A Active CN116630080B (en) 2023-07-25 2023-07-25 Method and system for determining capacity of aquatic product intensive culture feed based on image recognition

Country Status (1)

Country Link
CN (1) CN116630080B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843085B (en) * 2023-08-29 2023-12-01 深圳市明心数智科技有限公司 Freshwater fish growth monitoring method, device, equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111240200A (en) * 2020-01-16 2020-06-05 北京农业信息技术研究中心 Fish swarm feeding control method, fish swarm feeding control device and feeding boat
CN111476317A (en) * 2020-04-29 2020-07-31 中国科学院合肥物质科学研究院 Plant protection image non-dense pest detection method based on reinforcement learning technology
CN112352724A (en) * 2020-11-13 2021-02-12 湖北海洋工程装备研究院有限公司 Method and system for feeding feed in fishing ground
CN112400773A (en) * 2021-01-21 2021-02-26 南京农业大学 Greenhouse fry intelligent feeding device and method based on machine vision technology
CN112634202A (en) * 2020-12-04 2021-04-09 浙江省农业科学院 Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
JP2021114993A (en) * 2020-01-27 2021-08-10 Assest株式会社 Fry feeding amount proposing program
CN113487143A (en) * 2021-06-15 2021-10-08 中国农业大学 Fish shoal feeding decision method and device, electronic equipment and storage medium
CN113919482A (en) * 2021-09-22 2022-01-11 上海浦东发展银行股份有限公司 Intelligent agent training method and device, computer equipment and storage medium
CN115486391A (en) * 2022-09-13 2022-12-20 浙江大学 Method for accurately feeding and culturing pearl, gentian and grouper
CN115497026A (en) * 2022-09-27 2022-12-20 中国农业大学 Precision feeding method, system and device based on fish school feeding activity quantification
KR102528286B1 (en) * 2022-08-30 2023-05-03 (주)오투컴퍼니 Automatic feeding system and method for fish farms based on AI
CN116258949A (en) * 2023-01-19 2023-06-13 武汉理工大学 Precise fish feed throwing system based on convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201710372D0 (en) * 2017-06-28 2017-08-09 Observe Tech Ltd System and method of feeding aquatic animals



Similar Documents

Publication Publication Date Title
CN116630080B (en) Method and system for determining capacity of aquatic product intensive culture feed based on image recognition
CN113592896B (en) Fish feeding method, system, equipment and storage medium based on image processing
CN115067243B (en) Fishery monitoring and analyzing method, system and storage medium based on Internet of things technology
CN110583550A (en) Accurate feeding system and device are bred to fish shrimp sea cucumber based on target detection and tracking
NO20210919A1 (en) Systems and methods for predicting growth of a population of organisms
CN113349111A (en) Dynamic feeding method, system and storage medium for aquaculture
CN114724022A (en) Culture fish school detection method, system and medium fusing SKNet and YOLOv5
CN112949517A (en) Plant stomata density and opening degree identification method and system based on deep migration learning
CN115797844A (en) Fish body fish disease detection method and system based on neural network
CN115512215A (en) Underwater biological monitoring method and device and storage medium
CN114004433A (en) Method and device for regulating and controlling growth environment of cultured fishes
CN115578423A (en) Fish key point detection, individual tracking and biomass estimation method and system based on deep learning
CN116621409A (en) Livestock and poultry manure recycling treatment equipment and method
CN114419432B (en) Fish group ingestion intensity assessment method and device
CN109255200B (en) Soft measurement method and device for ammonia nitrogen in aquaculture water
CN114519538B (en) Multi-nutrition-layer culture system capable of effectively controlling recycling of nutrients
CN114937030A (en) Phenotypic parameter calculation method for intelligent agricultural planting of lettuce
Santhosh Kumar et al. Review on disease detection of plants using image processing and machine learning techniques
CN113160108A (en) Sequence query counting method for few-sample and multi-class baits
CN112380486A (en) Dynamic monitoring and evaluating method and system for cultivation state of seawater ornamental fish
CN116579508B (en) Fish prediction method, device, equipment and storage medium
CN111223574A (en) Penaeus vannamei boone enterohepatic sporulosis early warning method based on big data mining
CN117035164B (en) Ecological disaster monitoring method and system
CN117859687A (en) Eel breeding feed accurate feeding method, system, equipment and medium
CN115761296A (en) Insect classification counting method based on Bilinear CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant