CN111563439A - Aquatic organism disease detection method, device and equipment - Google Patents

Aquatic organism disease detection method, device and equipment

Info

Publication number
CN111563439A
Authority
CN
China
Prior art keywords
image
network layer
network
fusion
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010352073.2A
Other languages
Chinese (zh)
Other versions
CN111563439B (en)
Inventor
Zhang Weiming (张为明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyi Tongzhan Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd filed Critical Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN202010352073.2A priority Critical patent/CN111563439B/en
Publication of CN111563439A publication Critical patent/CN111563439A/en
Application granted granted Critical
Publication of CN111563439B publication Critical patent/CN111563439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133: Distances to prototypes
    • G06F18/24137: Distances to cluster centroïds
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02: Agriculture; Fishing; Forestry; Mining
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81: Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Animal Husbandry (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Mining & Mineral Resources (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Agronomy & Crop Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, and a device for detecting aquatic organism diseases, belonging to the technical field of data processing. The method comprises the following steps: photographing aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms; performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fusion image features of the image to be identified; and performing prediction on the fusion image features based on the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified. By adopting the technical scheme provided by the application, the efficiency of aquatic organism disease detection can be improved.

Description

Aquatic organism disease detection method, device and equipment
Technical Field
The application relates to the technical field of data processing, and in particular to a method, an apparatus, and a device for detecting aquatic organism diseases.
Background
In aquaculture, it is necessary to detect whether aquatic organisms suffer from diseases, so that affected organisms can be treated in time and economic losses avoided.
Normally, the body surface of an aquatic organism becomes abnormal when disease occurs; for example, a fish's eyeballs turn white under certain eye diseases, and its gills ulcerate when it suffers from gill rot. Based on this, disease detection is generally performed by netting some aquatic products out of the culture pond so that a professional can inspect them by eye and judge whether they are diseased.
However, the quantity of aquatic products in a culture pond is extremely large, and judging whether each organism is diseased one by one through human observation makes disease detection inefficient.
Disclosure of Invention
The embodiment of the application aims to provide a method, a device and equipment for detecting aquatic organism diseases, so as to solve the problem of low detection efficiency of aquatic organism diseases. The specific technical scheme is as follows:
in a first aspect, the present application provides a method for detecting disease in aquatic organisms, the method comprising:
shooting aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms;
performing feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized;
and performing prediction on the fusion image features based on the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
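Purely as an illustration (not part of the claimed subject matter), the three steps above can be sketched as a pipeline in which each stage is a placeholder callable standing in for the camera and the two halves of the disease identification model; every name and value here is hypothetical.

```python
def detect_disease(capture, extract_and_fuse, predict):
    """Run the three claimed steps in order: capture, fuse, classify."""
    image = capture()                 # step 1: photograph the culture pond
    fused = extract_and_fuse(image)   # step 2: feature extraction + fusion
    return predict(fused)             # step 3: predict the disease category

# Toy stand-ins for the camera and the model halves (illustrative only).
label = detect_disease(
    capture=lambda: "image-of-fish",
    extract_and_fuse=lambda img: ("fused", img),
    predict=lambda feats: "gill rot" if feats[1] == "image-of-fish" else "healthy",
)
print(label)  # → gill rot
```

In a real system each callable would be replaced by the underwater camera interface and the convolutional model described in the detailed description.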
Optionally, the disease identification model is a neural network including a plurality of network layers, and the obtaining of the fusion image feature of the image to be identified by performing feature extraction and feature fusion processing on the image to be identified through the disease identification model includes:
inputting the image to be identified into the network layer with the lowest network level, according to the sequence of the network levels of the plurality of network layers from low to high, and performing feature extraction through each network layer respectively to obtain the initial image features corresponding to that network layer;
for a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
if the first network layer is not the highest network layer, determining the fusion image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
Optionally, when the number of the second network layers is at least two, the determining the fusion image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to the second network layer includes:
determining a network level difference between the first network layer and the highest network layer;
performing feature fusion processing on the initial image features corresponding to each two adjacent network layers among the first network layer and the second network layers to obtain a hierarchical image feature corresponding to the lower network layer, wherein the lower network layer is the network layer with the lower network level of the two adjacent network layers, and the fusion level count of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
judging whether the fusion level count of the hierarchical image features reaches the network level difference;
if the fusion level count of the hierarchical image features reaches the network level difference, taking the hierarchical image features as the fusion image features of the first network layer;
and if the fusion level count of the hierarchical image features does not reach the network level difference, performing feature fusion processing on the hierarchical image features corresponding to each two adjacent network layers to obtain hierarchical image features with an updated fusion level count for the lower network layer, and returning to the step of judging whether the fusion level count of the hierarchical image features reaches the network level difference.
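As an illustrative sketch only, the iterative fusion procedure above can be modeled as repeatedly fusing each pair of adjacent levels until the number of fusion steps equals the level difference between the first network layer and the highest layer. Feature maps are stand-ins (plain lists of floats); a real model would use tensors and an upsample-then-add fusion, e.g. as in a feature-pyramid network. All names here are assumptions, not from the patent.

```python
def fuse(low_feat, high_feat):
    """Toy fusion: element-wise sum (a real model would upsample high_feat first)."""
    return [l + h for l, h in zip(low_feat, high_feat)]

def fused_feature_for_layer(initial_feats, layer_idx):
    """Return the fused image feature for the network layer at `layer_idx`.

    `initial_feats` lists each layer's initial feature, ordered from the
    lowest to the highest network level. The highest layer keeps its
    initial feature; every other layer fuses upward until the number of
    fusion steps equals its level difference from the highest layer.
    """
    highest = len(initial_feats) - 1
    if layer_idx == highest:
        return initial_feats[layer_idx]   # highest layer: no fusion needed
    level_diff = highest - layer_idx      # target fusion level count
    feats = initial_feats[layer_idx:]     # this layer and all higher ones
    fusion_steps = 0
    while fusion_steps < level_diff:
        # fuse each pair of adjacent levels; each result attaches to the lower level
        feats = [fuse(feats[i], feats[i + 1]) for i in range(len(feats) - 1)]
        fusion_steps += 1
    return feats[0]

feats = [[1.0, 1.0], [2.0, 2.0], [4.0, 4.0]]   # levels, low → high
print(fused_feature_for_layer(feats, 2))  # → [4.0, 4.0]
print(fused_feature_for_layer(feats, 0))  # → [9.0, 9.0]
```

Note how the lowest layer (level difference 2) undergoes two rounds of pairwise fusion, matching the "fusion level count equals the network level difference" stopping rule.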
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by performing feature extraction on the initial image feature of the highest network layer by the feature processing layer.
Optionally, the method further includes:
and performing regression prediction processing on the fusion image features through the disease identification model to obtain the position information of the diseased part in the image to be identified.
Optionally, the training process of the disease recognition model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image, and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image;
calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images;
and adjusting the parameters of the initial model according to the function value until the function value is smaller than a preset threshold, thereby obtaining the disease identification model.
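The training loop described above can be sketched as follows, with the model reduced to a single parameter and the loss to a mean squared error so that the stopping rule (iterate until the loss falls below a preset threshold) is easy to see. All names and values here are illustrative assumptions, not the patent's actual model or loss function.

```python
def train(samples, threshold=1e-4, lr=0.1):
    """Fit a one-parameter toy model. `samples` pairs an input with its
    target (a stand-in for the classification category plus position
    information used in the patent's loss function)."""
    w = 0.0  # initial model parameter
    while True:
        # forward pass: predict, then compute the loss over all samples
        loss = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if loss < threshold:
            return w  # loss below the preset threshold: training complete
        # backward pass: adjust the parameter against the gradient
        grad = sum(2 * x * (w * x - y) for x, y in samples) / len(samples)
        w -= lr * grad

model_w = train([(1.0, 2.0), (2.0, 4.0)])
print(round(model_w, 2))  # → 2.0
```

A real disease identification model would of course have millions of parameters updated by backpropagation, but the control flow (compute loss, compare with threshold, update parameters, repeat) is the same.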
In a second aspect, the present application also provides an aquatic organism disease detecting apparatus comprising an image pickup part and a processing part, wherein,
the image pickup component is arranged in the culture pond and is used for shooting aquatic organisms in the culture pond to obtain an image to be identified, wherein the image to be identified comprises the aquatic organisms;
the processing component is used for receiving the image to be identified captured by the image pickup component, performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fusion image features of the image to be identified, and performing prediction on the fusion image features based on the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, the processing component is configured to, when the disease identification model is a neural network including a plurality of network layers, input the image to be identified into the network layer with the lowest network level according to the sequence of the network levels of the plurality of network layers from low to high, and perform feature extraction through each network layer respectively to obtain the initial image features corresponding to that network layer;
for a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
if the first network layer is not the highest network layer, determining the fusion image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
Optionally, the processing unit is configured to determine a network level difference between the first network layer and the highest network layer when the number of second network layers is at least two;
performing feature fusion processing on the initial image features corresponding to each two adjacent network layers among the first network layer and the second network layers to obtain a hierarchical image feature corresponding to the lower network layer, wherein the lower network layer is the network layer with the lower network level of the two adjacent network layers, and the fusion level count of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
judging whether the fusion level count of the hierarchical image features reaches the network level difference;
if the fusion level count of the hierarchical image features reaches the network level difference, taking the hierarchical image features as the fusion image features of the first network layer;
and if the fusion level count of the hierarchical image features does not reach the network level difference, performing feature fusion processing on the hierarchical image features corresponding to each two adjacent network layers to obtain hierarchical image features with an updated fusion level count for the lower network layer, and returning to the step of judging whether the fusion level count of the hierarchical image features reaches the network level difference.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by performing feature extraction on the initial image feature of the highest network layer by the feature processing layer.
Optionally, the processing component is configured to perform regression prediction processing on the fusion image feature through the disease identification model to obtain position information of a disease part in the image to be identified.
Optionally, the training process of the disease recognition model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image, and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image;
calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images;
and adjusting the parameters of the initial model according to the function value until the function value is smaller than a preset threshold, thereby obtaining the disease identification model.
In a third aspect, the present application also provides an aquatic organism disease detection apparatus, comprising:
the shooting module is used for shooting aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the characteristic fusion processing module is used for carrying out characteristic extraction and characteristic fusion processing on the image to be recognized through a disease recognition model to obtain fusion image characteristics of the image to be recognized;
and the prediction module is used for performing prediction on the fusion image features based on the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, the disease identification model is a neural network including a plurality of network layers, and the feature fusion processing module includes:
the feature extraction submodule is used for inputting the image to be identified into the network layer with the lowest network level according to the sequence of the network levels of the plurality of network layers from low to high, and performing feature extraction through each network layer respectively to obtain the initial image features corresponding to that network layer;
the determining submodule is used for regarding a first network layer in the plurality of network layers, and when the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
the determining sub-module is further configured to determine, when the first network layer is not the highest network layer, a fused image feature of the first network layer according to an initial image feature corresponding to the first network layer and an initial image feature corresponding to a second network layer, where the network level of the second network layer is higher than the network level of the first network layer.
Optionally, when the number of the second network layers is at least two, the determining sub-module includes:
a first determining unit for determining a network level difference between the first network layer and the highest network layer;
the feature fusion processing unit is configured to perform feature fusion processing on the initial image features corresponding to each two adjacent network layers among the first network layer and the second network layers to obtain a hierarchical image feature corresponding to the lower network layer, where the lower network layer is the network layer with the lower network level of the two adjacent network layers, and the fusion level count of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
the judging unit is used for judging whether the fusion level count of the hierarchical image features reaches the network level difference;
a second determining unit configured to take the hierarchical image features as the fusion image features of the first network layer when their fusion level count reaches the network level difference;
and the second determining unit is further configured to, when the fusion level count of the hierarchical image features does not reach the network level difference, perform feature fusion processing on the hierarchical image features corresponding to each two adjacent network layers to obtain hierarchical image features with an updated fusion level count for the lower network layer, and to execute the step of judging whether the fusion level count of the hierarchical image features reaches the network level difference.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by performing feature extraction on the initial image feature of the highest network layer by the feature processing layer.
Optionally, the apparatus further comprises:
the prediction module is further configured to perform regression prediction processing on the fusion image features through the disease identification model to obtain position information of a disease part in the image to be identified.
Optionally, the apparatus further includes a training module, where the training module is configured to obtain a training image set, where the training image set includes a plurality of sample images, a classification category of a disease corresponding to each sample image, and position information of a disease portion in each sample image; performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image; predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image; calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images; and changing the parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
In a fourth aspect, the present application further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor adapted to perform the method steps of any of the first aspects when executing a program stored in the memory.
In a fifth aspect, the present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps of any of the first aspects.
In a sixth aspect, the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of any of the first aspects described above.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a method, a device and equipment for detecting diseases of aquatic organisms, wherein the aquatic organisms in a culture pond are shot to obtain an image to be identified comprising the aquatic organisms; performing feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized; and predicting the characteristics of the fused image based on a disease identification model to obtain the classification category of the aquatic organism diseases. Because the image to be identified is predicted through the disease identification model, and the classification category of the aquatic organism diseases is determined, the disease detection efficiency of the aquatic organisms can be improved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a method for detecting disease in aquatic organisms according to an embodiment of the present application;
FIG. 2 is a flow chart of another method for detecting disease in aquatic organisms according to an embodiment of the present application;
fig. 3a is a schematic structural diagram of a disease identification model provided in an embodiment of the present application;
fig. 3b is a schematic structural diagram of a network layer according to an embodiment of the present application;
FIG. 4 is a flow chart of another method for detecting disease in aquatic organisms according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an aquatic organism disease detection apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an aquatic organism disease detection method, which can be applied to electronic equipment, wherein the electronic equipment can have an image processing function, and for example, the electronic equipment can be a mobile phone, a computer and the like.
A disease identification model can be preset in the electronic equipment; the disease identification model can be, for example, an improved lightweight (slim) convolutional neural network. The electronic equipment can realize disease detection of aquatic organisms through the disease identification model.
The method for detecting disease of aquatic organisms provided in the embodiments of the present application will be described in detail with reference to the following specific embodiments, as shown in fig. 1, the specific steps are as follows:
step 101, shooting aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms.
In implementation, an underwater camera may be disposed in the culture pond; the camera may capture images of the culture pond and transmit each captured image to the electronic device. The electronic device may perform a preliminary identification of the received image: if the image contains aquatic life, the electronic device may treat the image as an image to be identified; if it does not, the electronic device may skip subsequent processing.
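As an illustrative sketch, the preliminary screening step above amounts to a filter that drops frames without aquatic life before disease recognition runs. `contains_aquatic_life` is a hypothetical stand-in for whatever detector performs the preliminary identification; the frame strings are made up for the example.

```python
def screen(frames, contains_aquatic_life):
    """Keep only the frames that pass the preliminary identification."""
    return [f for f in frames if contains_aquatic_life(f)]

frames = ["empty pond", "fish near camera", "sediment only", "two fish"]
# Toy predicate standing in for the real preliminary detector.
to_identify = screen(frames, lambda f: "fish" in f)
print(to_identify)  # → ['fish near camera', 'two fish']
```

Only the surviving frames would be forwarded to the disease identification model, which saves computation when the pond is often empty in view.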
When the aquaculture density of the aquatic organisms is high, the electronic equipment can also directly take the received image as the image to be identified.
In the embodiment of the application, the underwater camera can shoot aquatic organisms in the culture pond according to a preset detection period, and the underwater camera can also shoot the aquatic organisms in the culture pond after receiving a shooting instruction of the electronic equipment.
Optionally, the electronic device may include a camera component, and the electronic device may be disposed in the culture pond. The electronic equipment can shoot aquatic products in the culture pond through the camera shooting component to obtain images to be identified containing aquatic organisms, and disease detection is carried out on the basis of the images to be identified.
And 102, performing feature extraction and feature fusion processing on the image to be recognized through the disease recognition model to obtain fusion image features of the image to be recognized.
In implementation, the electronic device may perform feature extraction on the image to be recognized through the disease recognition model to obtain a plurality of image features. Then, the electronic device may perform feature fusion processing on the extracted multiple image features to obtain fused image features of the image to be recognized.
In the embodiment of the application, the disease identification model may include a plurality of network layers, each network layer has a corresponding network level, and the plurality of network layers perform feature extraction on the image to be identified in order of network level from low to high. As the network level rises, the image features extracted at that level become more high-level and comprehensive. The feature fusion processing fuses high-level image features extracted by network layers at high network levels with low-level image features extracted by network layers at low network levels. Low-level image features include, for example, edge features, color features, surface features, and texture features; high-level image features include, for example, shape features, semantic features, and object features.
By performing the feature fusion processing, fusion of high-level and low-level image features can be realized. Compared with the high-level image features alone, the fusion image features obtained by the feature fusion processing have stronger expressive power, and predicting disease classification categories based on the fusion image features can improve prediction accuracy.
And 103, performing prediction on the fusion image features based on the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
In implementation, the electronic device may perform classification prediction on the fused image features based on the disease identification model, and in the case that the aquatic organisms have a disease, the electronic device may determine a classification category corresponding to the fused image features based on the disease identification model to obtain the classification category of the aquatic product disease in the image to be identified.
The specific process by which the electronic device classifies the fused image features based on the disease identification model and determines the corresponding classification category may take various forms. In one feasible implementation, the disease identification model may store a correspondence between fused image features and classification categories in advance, and the electronic device may determine the classification category corresponding to the fused image features based on the disease identification model and this correspondence, to obtain the classification category of the aquatic product disease in the image to be recognized.
In another feasible implementation manner, for each classification category of the diseases, the electronic device may calculate the probability of the classification category based on the disease identification model and the fused image features, to obtain a plurality of probabilities. Then, the electronic device may take the classification category with the highest probability as the classification category of the aquatic product disease in the image to be recognized.
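As an illustrative sketch of this second implementation, assuming the model's final layer outputs one raw score per disease category, the probabilities and the winning category might be computed as below (the category names and the softmax normalization are assumptions for illustration; the patent does not fix them):

```python
import numpy as np

# Hypothetical disease categories; the real model's label set is fixed at training time.
CATEGORIES = ["healthy", "gill rot disease", "saprolegniasis", "cloudy-eye disease"]

def predict_category(class_scores):
    """Turn raw per-category scores into probabilities (softmax) and
    return the most probable classification category."""
    scores = np.asarray(class_scores, dtype=float)
    exp = np.exp(scores - scores.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    return CATEGORIES[int(np.argmax(probs))], probs

category, probs = predict_category([0.1, 2.3, 0.4, 0.2])
# the second score is highest, so category is "gill rot disease"
```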
Optionally, in the case that the aquatic organism has a disease, the electronic device may further determine a position of the disease portion of the aquatic organism in the image to be identified based on the disease identification model, and a detailed description will be given later on a specific processing procedure.
In the embodiment of the application, the electronic device can shoot aquatic organisms in the culture pond to obtain an image to be recognized including the aquatic organisms; perform feature extraction and feature fusion processing on the image to be recognized through a disease identification model to obtain the fused image features of the image to be recognized; and perform prediction on the fused image features based on the disease identification model to obtain the classification category of the aquatic product disease in the image to be recognized. Because the image to be recognized is predicted through the disease identification model and the classification category of the aquatic organism disease is determined automatically, the disease detection efficiency for aquatic organisms can be improved compared with manual disease detection.
By adopting the aquatic organism disease detection method provided by the embodiment of the application, macroscopically obvious diseases can be recognized in real time based on computer vision technology, so the disease detection efficiency can be improved. Moreover, no manual participation is required in the disease detection, so the labor cost of disease detection can be reduced. In addition, the disease detection causes no damage to the aquatic organisms, and the breeding personnel can discover diseases in time and take corresponding treatment measures, thereby reducing breeding losses.
Optionally, the disease recognition model is a neural network including a plurality of network layers, and an embodiment of the present application provides a specific processing procedure for performing feature extraction and feature fusion processing on an image to be recognized through the disease recognition model to obtain a fusion image feature of the image to be recognized, and as shown in fig. 2, the specific processing procedure includes the following steps:
Step 201, inputting the image to be recognized into the lowest network layer, that is, the network layer with the lowest network level, according to the order of network levels from low to high, and performing feature extraction through each network layer respectively to obtain the initial image features corresponding to that network layer.
In implementation, the electronic device may input the image to be recognized into the lowest network layer, that is, the network layer with the lowest network level, and perform feature extraction through the lowest network layer. The electronic device may then take the calculation result output by the lowest network layer as the initial image features corresponding to the lowest network layer.
The electronic device may take the lowest network layer as the previous network layer, determine the next network layer in order of network level from low to high, and use the calculation result output by the previous network layer as the input of the next network layer. The next network layer then performs feature extraction on the initial image features of the previous network layer to obtain the initial image features corresponding to the next network layer. The electronic device may then take the next network layer as the new previous network layer and repeat the step of determining the next network layer in order of network level from low to high until all network layers included in the disease identification model have been traversed.
For example, fig. 3a shows a disease identification model provided in an embodiment of the present application. The disease identification model includes 5 network layers whose network levels, in order from low to high, are conv1, conv2, conv3, conv4 and conv5.
The electronic device may input the image to be recognized into the lowest network layer conv1, which has the lowest network level, and perform feature extraction through conv1. The electronic device may then take the calculation result output by conv1 as the initial image features corresponding to conv1.
With the lowest network layer conv1 as the previous network layer, the electronic device may determine the next network layer conv2 in order of network level from low to high, and use the calculation result output by conv1 as the input of conv2. Then, the electronic device may perform feature extraction through conv2 on the initial image features of conv1 to obtain the initial image features corresponding to conv2. The electronic device may then take conv2 as the new previous network layer and repeat the step of determining the next network layer in order of network level from low to high until the 5 network layers included in the disease identification model have all been traversed.
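The layer-by-layer traversal above can be sketched as a simple forward loop. The placeholder layers below (stride-2 downsampling plus a fixed scaling) are assumptions for illustration only, since the patent does not fix the layer internals:

```python
import numpy as np

def make_layer(scale):
    # Placeholder for one convolutional network layer: halves the spatial
    # resolution (stride-2 downsampling) and applies a fixed scaling.
    def layer(x):
        return x[::2, ::2] * scale
    return layer

# conv1 .. conv5, ordered from the lowest to the highest network level.
layers = [make_layer(s) for s in (1.0, 0.9, 0.8, 0.7, 0.6)]

def extract_initial_features(image, layers):
    """Feed the image into the lowest layer, then pass each layer's output
    (its initial image features) on as the input of the next layer."""
    features = []
    x = image
    for layer in layers:
        x = layer(x)          # initial image features of this network layer
        features.append(x)
    return features

image = np.ones((64, 64))
feats = extract_initial_features(image, layers)
# five feature maps, with resolutions 32, 16, 8, 4 and 2
```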
Step 202, for a first network layer of the plurality of network layers, determining whether the first network layer is a highest network layer with a highest network level among the plurality of network layers.
In an implementation, an electronic device may determine a first network layer among a plurality of network layers. The electronic device may then determine whether the first network layer is a highest network layer of the plurality of network layers that is a highest network layer of a network hierarchy.
If the first network layer is the highest network layer with the highest network hierarchy among the plurality of network layers, the electronic device may perform step 203; if the first network layer is not the highest network layer, the electronic device can perform step 204.
In an embodiment of the present application, the electronic device may determine the first network layer in the multiple network layers in multiple ways, and in a feasible implementation manner, the electronic device may use each of the multiple network layers as the first network layer.
For example, as in the disease identification model shown in fig. 3a, the electronic device may use conv1, conv2, conv3, conv4, and conv5 as the first network layer, respectively.
In another possible implementation, the electronic device may be pre-provisioned with the first network layer.
Still taking the disease identification model shown in fig. 3a as an example, the preset first network layers may be conv3, conv4 and conv5.
Step 203, taking the initial image feature of the first network layer as the fusion image feature of the first network layer.
Step 204, determining the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to the second network layer.
Wherein the network level of the second network layer is higher than the network level of the first network layer.
In implementation, if the first network layer is not the highest network layer, the electronic device may determine a second network layer whose network level is higher than that of the first network layer. Then, the electronic device may determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to the second network layer; the specific processing procedure will be described in detail later.
In this embodiment of the present application, when the number of network layers higher than the first network layer is 1, the network layer higher than the first network layer is the second network layer.
Still taking the disease identification model shown in fig. 3a as an example, the first network layer may be conv4. The number of network layers with a network level higher than the first network layer is 1; that network layer is conv5, and thus the second network layer may be conv5.
The specific process by which the electronic device determines the fused image features of the first network layer from the initial image features corresponding to the first network layer and those corresponding to the second network layer differs according to the number of second network layers.
In a possible implementation manner, the number of second network layers may be 1. In this case, the second network layer may be any network layer with a network level higher than that of the first network layer. For example, the first network layer may be conv3 and the second network layer may be conv4 or conv5.
Alternatively, the second network layer may be a network layer adjacent to the first network layer and having a network level higher than that of the first network layer. Because the initial image features of the adjacent network layers are more relevant, the adjacent network layers are selected as the second network layers, and the electronic device can conveniently perform feature fusion processing based on the initial image features corresponding to the first network layers and the initial image features corresponding to the second network layers.
For example, if the first network layer is conv3, the second network layer may be conv4, the network layer adjacent to conv3 with a higher network level.
In another possible implementation manner, the number of the second network layers may be multiple; the second network layer may be a preset number of network layers having a network layer level higher than the first network layer.
For example, if the first network layer is conv1, the network layers with a network level higher than the first network layer are conv2, conv3, conv4 and conv5; with a preset number of 2, the second network layers may be conv2 and conv3.
Or, the second network layer is all network layers of a network level higher than the first network layer.
For example, if the first network layer is conv3, the network layers with a network level higher than it are conv4 and conv5, that is, the second network layers are conv4 and conv5.
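The three ways of choosing the second network layer(s) described above can be summarised in one small helper; the layer names and the `mode` parameter are illustrative assumptions, not part of the patent:

```python
LEVELS = ["conv1", "conv2", "conv3", "conv4", "conv5"]  # low -> high network level

def second_layers(first, mode="all", preset_count=2):
    """Return the second network layer(s) for a given first network layer."""
    higher = LEVELS[LEVELS.index(first) + 1:]  # all layers with a higher level
    if mode == "adjacent":                     # the single adjacent higher layer
        return higher[:1]
    if mode == "preset":                       # a preset number of higher layers
        return higher[:preset_count]
    return higher                              # all higher layers

# second_layers("conv3", "adjacent")  -> ["conv4"]
# second_layers("conv1", "preset", 2) -> ["conv2", "conv3"]
# second_layers("conv3")              -> ["conv4", "conv5"]
```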
In the embodiment of the application, the electronic device can input the image to be recognized into the lowest network layer, that is, the network layer with the lowest network level, according to the order of network levels from low to high, and perform feature extraction through each network layer respectively to obtain the initial image features corresponding to that network layer. Then, for a first network layer among the plurality of network layers: when the first network layer is the highest network layer among the plurality of network layers, the initial image features of the first network layer are taken as the fused image features of the first network layer; when the first network layer is not the highest network layer, the fused image features of the first network layer are determined according to the initial image features corresponding to the first network layer and the initial image features corresponding to the second network layer. In this way, feature extraction and feature fusion of the image to be recognized can be realized, and fused image features with stronger feature expression can be determined.
In the embodiment of the present application, the feature fusion processing may be implemented by convolution processing, for example, for the network layer conv5, the initial image feature may be extracted by deformable convolution, and then, the up-sampling of the initial image feature may be implemented by transposed convolution, so as to ensure that the matrix dimension of the initial image feature of the network layer conv5 is the same as the matrix dimension of the initial image feature of the previous network layer conv4, and improve the feature resolution of the image feature of the conv5 layer. Thereafter, the initial image features of conv5 and conv4 may be subjected to an addition process, thereby implementing feature fusion.
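A minimal sketch of this fuse-by-addition step is given below. Nearest-neighbour upsampling stands in for the learned transposed convolution, and ordinary feature maps stand in for deformable-convolution outputs, so this is an assumption-laden illustration of the shape-matching and addition only:

```python
import numpy as np

def upsample2x(feature):
    """Double the spatial resolution (stand-in for a learned transposed
    convolution) so the higher layer's map matches the lower layer's."""
    return feature.repeat(2, axis=0).repeat(2, axis=1)

def fuse(low_feature, high_feature):
    """Element-wise addition of the low-level feature map and the
    upsampled high-level feature map."""
    up = upsample2x(high_feature)
    assert up.shape == low_feature.shape  # matrix dimensions must agree
    return low_feature + up

conv4_feat = np.ones((8, 8))          # initial image features of conv4
conv5_feat = np.full((4, 4), 2.0)     # initial image features of conv5
fused = fuse(conv4_feat, conv5_feat)  # shape (8, 8), every value 3.0
```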
Optionally, an embodiment of the present application provides an implementation manner for determining a fusion image feature of a first network layer according to an initial image feature corresponding to the first network layer and an initial image feature corresponding to a second network layer when the second network layer is all network layers with a network level higher than the first network layer and the number of the second network layers is at least two, as shown in fig. 4, the implementation manner includes the following steps:
step 401, determining a network level difference between the first network layer and the highest network layer.
In an implementation, the electronic device may calculate a difference between a network level of the first network layer and a network level of the highest network layer, resulting in a network level difference.
For example, still taking the disease identification model shown in fig. 3a as an example, the first network layer may be conv3, the second network layers are conv4 and conv5, and the conv5 is the highest network layer. The electronic device may calculate a difference between network level 3 of the first network layer conv3 and network level 5 of the highest network layer conv5, resulting in 2, i.e. a network level difference of 2.
Step 402, for the first network layer and the second network layers, performing feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image features corresponding to the lower network layer.
The lower network layer is the network layer with the lower network level of the two adjacent network layers, and the fusion level number of the hierarchical image features is equal to the number of times feature fusion processing has been performed.
In an implementation, the electronic device may determine, in the first network layer and the second network layer, two adjacent network layers. Then, the electronic device may perform feature fusion processing on the initial image features corresponding to the two network layers for each two adjacent network layers, and then use the image features obtained through the feature fusion processing as the hierarchical image features corresponding to the low network layer.
For example, the electronic device may determine the pairs of adjacent network layers among the first network layer conv3 and the second network layers conv4 and conv5, obtaining the pairs conv3-conv4 and conv4-conv5. Then, for the adjacent pair conv3 and conv4, the electronic device may perform feature fusion processing on the initial image features corresponding to conv3 and the initial image features corresponding to conv4, and take the resulting image features as the hierarchical image features corresponding to the lower network layer conv3. Since feature fusion processing has been performed only once for conv3 and conv4, the fusion level number of the hierarchical image features corresponding to conv3 obtained at this point is 1. For ease of distinction, hierarchical image features with a fusion level number of 1 are referred to as first-level image features, hierarchical image features with a fusion level number of 2 as second-level image features, and so on for other fusion level numbers, which are not described again here.
Similarly, for the adjacent pair conv4 and conv5, the electronic device may perform feature fusion processing on the initial image features corresponding to conv4 and the initial image features corresponding to conv5, and take the resulting image features as the hierarchical image features corresponding to the lower network layer conv4. Since feature fusion processing has been performed only once for conv4 and conv5, the fusion level number of the hierarchical image features corresponding to conv4 obtained at this point is 1.
Step 403, judging whether the fusion level number of the hierarchical image features reaches the network level difference.
In implementation, the electronic device may judge whether the fusion level number of the hierarchical image features reaches the network level difference. If it does, the electronic device may perform step 404; if it does not, the electronic device may perform step 405.
Step 404, taking the hierarchical image features as the fused image features of the first network layer.
Step 405, performing feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain the hierarchical image features, with an updated fusion level number, corresponding to the lower network layer.
In an implementation, the electronic device may determine the pairs of adjacent network layers among the network layers corresponding to the hierarchical image features. Then, for each pair of adjacent network layers, the electronic device may perform feature fusion processing on the hierarchical image features corresponding to the two network layers, and take the resulting image features as the hierarchical image features, with an updated fusion level number, corresponding to the lower network layer. Thereafter, the electronic device may perform step 403.
For example, for the first network layer conv3, the fusion level number of the hierarchical image features corresponding to conv3 determined so far is 1, and the electronic device may determine that this fusion level number 1 does not reach the network level difference of 2. The electronic device may then determine the pair of adjacent network layers among the network layers corresponding to the hierarchical image features, i.e. among the first network layer conv3 and the second network layer conv4, obtaining conv3 and conv4. Then, the electronic device may perform feature fusion processing on the first-level image features corresponding to conv3 and the first-level image features corresponding to conv4, and take the resulting image features as the second-level image features, with an updated fusion level number, corresponding to the lower network layer conv3. Since feature fusion processing has now been performed twice for conv3, the fusion level number of the hierarchical image features corresponding to conv3 obtained at this point is 2.
The electronic device may determine that the fusion level number 2 of the hierarchical image features reaches the network level difference 2, and may then take the second-level image features as the fused image features of the first network layer conv3.
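Steps 401-405 amount to repeatedly fusing each layer's features with its higher neighbour's until the fusion level number equals the network level difference. A sketch with plain addition standing in for the learned fusion operation (an assumption; the patent's fusion involves convolution and upsampling):

```python
def fuse_to_first_layer(initial, first_index):
    """initial: per-layer features (here plain numbers for clarity), ordered
    from the lowest to the highest network level; first_index: index of the
    first network layer.  The second network layers are all layers above it."""
    diff = len(initial) - 1 - first_index   # network level difference (step 401)
    feats = list(initial[first_index:])     # first layer plus all second layers
    for _ in range(diff):                   # one pass per fusion level (steps 402/405)
        # fuse every adjacent pair; the result belongs to the lower layer
        feats = [feats[i] + feats[i + 1] for i in range(len(feats) - 1)]
    return feats[0]                         # fused image features of the first layer

# conv1..conv5 initial features as the numbers 1..5; first layer conv3 (index 2):
# pass 1: [3+4, 4+5] = [7, 9]; pass 2: [7+9] = [16]
result = fuse_to_first_layer([1, 2, 3, 4, 5], 2)
```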
In this embodiment of the application, the electronic device may determine the network level difference between the first network layer and the highest network layer. Then, for the first network layer and the second network layers, the electronic device may perform feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image features corresponding to the lower network layer. If the fusion level number of the hierarchical image features reaches the network level difference, the electronic device may take the hierarchical image features as the fused image features of the first network layer; if it does not, the electronic device performs feature fusion processing on the hierarchical image features corresponding to adjacent network layers to obtain the hierarchical image features, with an updated fusion level number, corresponding to the lower network layer, until the fusion level number reaches the network level difference.
Therefore, the feature fusion processing can be performed on the initial image features of the first network layer and the initial image features of each second network layer, so that the feature expression of the fusion image features can be further enhanced, and the prediction accuracy of disease classification type prediction based on the fusion image features can be improved.
Optionally, in order to improve the feature processing effect on the image features extracted from the image to be recognized, the disease identification model further includes an extras layer (feature processing layer). The input of the feature processing layer is the initial image features of the highest network layer; the electronic device may perform further feature extraction through the feature processing layer on the initial image features of the highest network layer and take the extracted image features as fused image features as well.
Because the feature processing layer performs further feature extraction on the initial image features of the highest network layer, it can integrate the image information of the whole image to be recognized and obtain extra features with stronger feature expression. Taking the extra features extracted by the feature processing layer as fused image features can therefore further strengthen the feature expression of the fused image features and improve the prediction accuracy of disease classification prediction based on them.
Optionally, in addition to detecting the classification category of the disease, the electronic device may determine the position of the diseased part of the aquatic organism in the image to be recognized based on the disease identification model. The specific processing procedure includes: performing regression prediction processing on the fused image features through the disease identification model to obtain the position information of the diseased part in the image to be recognized.
In implementation, the electronic device performs regression prediction processing on the fusion image features through the disease recognition model, and a specific processing procedure of obtaining the position information of the disease part in the image to be recognized may refer to a processing procedure of performing regression prediction processing on the image features by the electronic device based on a neural network to obtain the position information of the target object in the image in the related art, which is not described herein again.
In the embodiment of the application, the position information may be coordinates of the diseased part in the image to be recognized, and the position information may also be an identification frame for identifying the position of the diseased part in the image to be recognized.
In the embodiment of the application, the electronic device can perform regression prediction processing on the fusion image features through the disease identification model to obtain the position information of the disease part in the image to be identified. Therefore, the electronic equipment can output specific position information of the disease part besides outputting classification types of the diseases, so that the disease detection result can be enriched, and the breeding personnel can conveniently know the state of illness of the aquatic organisms.
In the embodiment of the application, based on the image to be recognized, the electronic device can recognize through the disease identification model whether the aquatic organisms in the image to be recognized are diseased. When the aquatic organisms are not diseased, the electronic device may output preset health information to indicate that the aquatic organisms in the image to be recognized are all in a healthy state. When the aquatic organisms are diseased, the electronic device may output the classification category of the disease and an identification frame marking the position of the diseased part in the image to be recognized, so that the breeding personnel can further confirm the condition.
Optionally, an embodiment of the present application further provides a training process of a disease recognition model, including the following steps:
step 1, obtaining a training image set.
The training image set comprises a plurality of sample images, classification types of diseases corresponding to each sample image and position information of disease parts in each sample image.
In implementations, the electronic device may have a training image set stored locally, and the electronic device may acquire the training image set locally. The electronic device may also obtain the training image set from the network address indicated by the training instruction according to the received training instruction.
The embodiment of the present application provides an implementation manner for generating a training image set, including: the electronic device may collect images of a plurality of aquatic organisms through the underwater camera, and, for each image containing a diseased aquatic organism, box the position of the diseased part in the image through an application program with an annotation function and manually mark the classification category of the disease. The application program with the annotation function may be LabelImg or Labelme. The classification categories of the diseases may include cloudy-eye disease, gill rot disease, saprolegniasis and the like.
Then, the electronic device may use the image marked with the position and the classification category as a sample image to obtain a training image set.
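A sample in the training image set therefore pairs an image with its disease classification category and the position of the diseased part. A minimal representation might look like the following (all field names and the bounding-box convention are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrainingSample:
    image_path: str                     # path to the captured image
    category: str                       # manually marked classification category
    bbox: Tuple[int, int, int, int]     # diseased-part position: (x_min, y_min, x_max, y_max)

sample = TrainingSample("pond_0001.jpg", "gill rot disease", (120, 80, 260, 190))
training_set: List[TrainingSample] = [sample]
```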
Step 2, performing feature extraction and feature fusion processing on each sample image through the initial model to obtain the initial fused image features of the sample image.
In the implementation, the specific processing procedure of this step may refer to the processing procedure of step 102, and is not described herein again.
Step 3, performing prediction based on the initial fused image features to obtain the initial classification category and initial position information of the sample image.
In implementation, the specific processing procedure in this step may refer to step 103 and perform regression prediction processing on the fusion image features through the disease identification model to obtain the processing procedure of the position information, which is not described herein again.
Step 4, calculating the function value of the loss function based on the initial classification categories and initial position information of the multiple sample images.
In an implementation, the electronic device may be preset with a loss function, and the electronic device may calculate a function value of the loss function based on the initial classification categories, the initial position information, and the loss function of the plurality of sample images.
Step 5, changing the parameters of the initial model according to the function value until the function value is smaller than a preset threshold, to obtain the disease identification model.
In an implementation, the electronic device may determine whether the function value is less than a preset threshold. If the function value is not smaller than the preset threshold value, the electronic device can determine the parameter and the parameter value to be adjusted according to the function value and the preset parameter change mode, and then set the parameter of the initial model according to the determined parameter and the parameter value. And if the function value is smaller than the preset threshold value, the electronic equipment can use the current initial model as a disease identification model.
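The stop condition of steps 4-5 — adjust parameters until the loss function value falls below the preset threshold — can be sketched with a toy one-parameter model and squared-error loss (purely illustrative; the real model is trained with backpropagation on the classification and position losses):

```python
def train_until_threshold(lr=0.1, threshold=1e-4, max_steps=1000):
    """Toy stand-in: fit parameter w so that the 'loss' (w - 3)^2 drops
    below the preset threshold, mimicking the stop condition of step 5."""
    w = 0.0
    loss = (w - 3.0) ** 2
    for _ in range(max_steps):
        loss = (w - 3.0) ** 2
        if loss < threshold:        # function value below preset threshold: stop
            return w, loss
        grad = 2.0 * (w - 3.0)      # change the parameter according to the function value
        w -= lr * grad
    return w, loss

w, loss = train_until_threshold()
# converges geometrically toward w = 3, so loss ends up below 1e-4
```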
In the embodiment of the application, the electronic device can train the initial model based on the training sample set to obtain the disease identification model. Therefore, the subsequent electronic equipment can conveniently detect the aquatic organism diseases based on the disease identification model.
The embodiment of the present application provides a specific structure of the initial model. As shown in table 1, the initial model includes 5 network layers: conv1, conv2, conv3, conv4 and conv5. In order of network level from low to high, each network layer downsamples its input, so the resolution of the output image features is successively reduced by a factor of 2. As the network level rises, the features extracted by the network layer become more high-level and comprehensive.
TABLE 1
[Table 1, giving the layer-by-layer structure of the initial model, is rendered as an image in the original publication.]
Wherein, the conv module is a 3x3 convolutional layer. The network structure of the conv_BN module is shown in (a) of fig. 3b: it consists of a 3x3 convolutional layer and a BN layer. The network structure of the conv_dw module is shown in (b) of fig. 3b: it consists of two convolutional layers and two BN layers, and placing a 1x1 convolutional layer after the 3x3 convolutional layer increases the nonlinearity of feature extraction and improves its feature expression capability. To fully extract image features, multiple conv_dw modules can be used in each network layer. To improve image recognition speed, the number of output feature channels of a network layer can be at most 256.
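The conv_dw pattern — a 3x3 per-channel (depthwise) convolution followed by a 1x1 pointwise convolution mixing the channels — can be sketched in NumPy as below. The BN layers and nonlinearities of the actual module are omitted for brevity, so this shows only the two convolutions:

```python
import numpy as np

def conv_dw(x, dw_kernel, pw_kernel):
    """x: (C, H, W) feature map; dw_kernel: (C, 3, 3), one 3x3 filter per
    input channel; pw_kernel: (C_out, C), the 1x1 channel-mixing weights."""
    C, H, W = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))   # same-padding for 3x3
    depthwise = np.zeros_like(x)
    for c in range(C):                             # 3x3 filter applied per channel
        for i in range(H):
            for j in range(W):
                depthwise[c, i, j] = np.sum(padded[c, i:i+3, j:j+3] * dw_kernel[c])
    # 1x1 pointwise convolution: a linear mix across channels at each pixel
    return np.einsum('oc,chw->ohw', pw_kernel, depthwise)

x = np.ones((8, 16, 16))
out = conv_dw(x, np.ones((8, 3, 3)), np.ones((4, 8)) / 8.0)
# out has shape (4, 16, 16); interior pixels sum a full 3x3 window of ones
```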
Based on the same technical concept, the embodiment of the application also provides aquatic organism disease detection equipment, which is characterized by comprising an image pickup component and a processing component, wherein,
the image pickup component is arranged in the culture pond and is used for shooting aquatic organisms in the culture pond to obtain an image to be identified, wherein the image to be identified comprises the aquatic organisms;
the processing component is used for receiving the image to be recognized shot by the image pickup component, performing feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized, and predicting based on the fusion image features through the disease recognition model to obtain the classification category of the aquatic organism diseases in the image to be recognized.
Optionally, when the disease identification model is a neural network including multiple network layers, the processing component is configured to input the image to be identified into the lowest network layer (the layer with the lowest network level), and, in order of network level from low to high, perform feature extraction through each network layer respectively to obtain the initial image feature corresponding to that network layer;
for a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
if the first network layer is not the highest network layer, determining the fusion image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
Optionally, the processing component is configured to determine a network level difference between the first network layer and the highest network layer when the number of second network layers is at least two;
for the first network layer and the second network layers, perform feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image feature corresponding to the lower network layer, where the lower network layer is the layer with the lower network level of the two adjacent network layers, and the fusion level number of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
determine whether the fusion level number of the hierarchical image features reaches the network level difference;
if the fusion level number of the hierarchical image features reaches the network level difference, take the hierarchical image features as the fusion image features of the first network layer;
and if the fusion level number of the hierarchical image features does not reach the network level difference, perform feature fusion processing on the hierarchical image features corresponding to adjacent network layers to obtain the hierarchical image features, with updated fusion level number, corresponding to the lower network layer, and return to the step of determining whether the fusion level number of the hierarchical image features reaches the network level difference.
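A minimal numpy sketch of the cascaded fusion described above, under two assumptions the text leaves open: nearest-neighbour upsampling is used to match the resolutions of adjacent layers, and element-wise addition is the fusion operation. Starting from the highest layer (whose fused feature is its initial feature), each lower layer fuses its initial feature with the already-fused feature of the adjacent higher layer, so the lowest layer accumulates a fusion level number equal to its network level difference from the highest layer:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling so adjacent levels share a resolution
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def fuse_features(initial):
    """initial: per-layer feature maps ordered low -> high network level,
    each level at half the spatial resolution of the one below it."""
    fused = [initial[-1]]              # highest layer: fused feature == initial feature
    for feat in reversed(initial[:-1]):
        # fuse this layer's initial feature with the fused feature of the
        # adjacent higher layer (one extra fusion level per step down)
        fused.insert(0, feat + upsample2x(fused[0]))
    return fused

feats = [np.ones((1, 8, 8)), np.ones((1, 4, 4)), np.ones((1, 2, 2))]
fused = fuse_features(feats)
# fused[0] keeps the lowest layer's 8x8 resolution but has absorbed both
# higher levels, so every element equals 3.0
```

The design choice here mirrors the claim that the second network layer is adjacent to the first: each fusion step only ever combines neighbouring levels, and repetition propagates high-level semantics down to the high-resolution maps.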
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by performing feature extraction on the initial image feature of the highest network layer by the feature processing layer.
Optionally, the processing component is configured to perform regression prediction processing on the fusion image feature through the disease identification model to obtain position information of a disease part in the image to be identified.
Optionally, the training process of the disease recognition model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image, and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image;
calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images;
and adjusting the parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, to obtain the disease identification model.
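The loop described above (predict, compute the loss function value, update parameters, stop once the value is below the preset threshold) can be sketched with a toy one-parameter model standing in for the initial network. The squared-error loss, learning rate, and threshold are illustrative assumptions, not values from the application:

```python
def train(samples, targets, lr=0.1, threshold=1e-4, max_steps=1000):
    w = 0.0                                   # sole parameter of the toy "model"
    loss = float("inf")
    for _ in range(max_steps):
        preds = [w * x for x in samples]      # predict from each sample
        # loss function value over predictions vs. labelled targets
        loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(samples)
        if loss < threshold:                  # stop once below the preset threshold
            break
        grad = sum(2 * (p - t) * x for p, x, t in zip(preds, samples, targets)) / len(samples)
        w -= lr * grad                        # change model parameters using the loss
    return w, loss

# Targets follow t = 2 * x, so w converges to about 2.0
w, loss = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

In the real model the parameter update would be backpropagation over all network layers, and the loss would combine the classification-category and position-information terms listed above; the stopping rule is the same.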
The embodiment of the application provides aquatic organism disease detection equipment, which can shoot aquatic organisms in a culture pond to obtain an image to be identified comprising the aquatic organisms; perform feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized; and predict based on the fusion image features through the disease recognition model to obtain the classification category of the aquatic organism diseases. Since the image to be identified is predicted through the disease identification model to determine the classification category of the aquatic organism diseases, the efficiency of disease detection for aquatic organisms can be improved.
Based on the same technical concept, the embodiment of the present application further provides an aquatic organism disease detection device, as shown in fig. 5, the device includes:
the shooting module 510 is configured to shoot aquatic organisms in the culture pond to obtain an image to be identified, which includes the aquatic organisms;
the feature fusion processing module 520 is configured to perform feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain a fusion image feature of the image to be recognized;
and the predicting module 530 is used for predicting based on the fusion image features through the disease identification model to obtain the classification category of the aquatic organism diseases in the image to be identified.
The embodiment of the application provides an aquatic organism disease detection device, which can shoot aquatic organisms in a culture pond to obtain an image to be identified comprising the aquatic organisms; perform feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized; and predict based on the fusion image features through the disease recognition model to obtain the classification category of the aquatic organism diseases. Since the image to be identified is predicted through the disease identification model to determine the classification category of the aquatic organism diseases, the efficiency of disease detection for aquatic organisms can be improved.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603 and a communication bus 604, where the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604;
a memory 603 for storing a computer program;
the processor 601 is used for implementing the steps of the aquatic organism disease detection method when executing the program stored in the memory 603.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program when executed by a processor implements any of the above-mentioned steps of the aquatic organism disease detection method.
In a further embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the aquatic organism disease detection methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

1. A method for detecting disease in aquatic organisms, the method comprising:
shooting aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms;
performing feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized;
and predicting based on the fusion image features through the disease identification model to obtain the classification category of the aquatic organism diseases in the image to be identified.
2. The method according to claim 1, wherein the disease identification model is a neural network including a plurality of network layers, and the obtaining of the fusion image feature of the image to be identified by performing feature extraction and feature fusion processing on the image to be identified through the disease identification model comprises:
inputting the image to be identified into the lowest network layer (the layer with the lowest network level), and, in order of network level from low to high, performing feature extraction through each network layer respectively to obtain the initial image feature corresponding to that network layer;
for a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
if the first network layer is not the highest network layer, determining the fusion image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
3. The method according to claim 2, wherein when the number of the second network layers is at least two, the determining the fused image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to the second network layer comprises:
determining a network level difference between the first network layer and the highest network layer;
for the first network layer and the second network layers, performing feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image feature corresponding to the lower network layer, wherein the lower network layer is the layer with the lower network level of the two adjacent network layers, and the fusion level number of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
judging whether the fusion level number of the hierarchical image features reaches the network level difference;
if the fusion level number of the hierarchical image features reaches the network level difference, taking the hierarchical image features as the fusion image features of the first network layer;
and if the fusion level number of the hierarchical image features does not reach the network level difference, performing feature fusion processing on the hierarchical image features corresponding to adjacent network layers to obtain the hierarchical image features, with updated fusion level number, corresponding to the lower network layer, and executing the step of judging whether the fusion level number of the hierarchical image features reaches the network level difference.
4. The method of claim 2, wherein the second network layer is a network layer adjacent to the first network layer.
5. The method according to claim 2, wherein the disease identification model further comprises a feature processing layer, and the fused image features further comprise image features obtained by feature extraction of initial image features of the highest network layer by the feature processing layer.
6. The method of claim 1, further comprising:
and performing regression prediction processing on the fusion image characteristics through the disease identification model to obtain the position information of the disease part in the image to be identified.
7. The method of claim 1, wherein the training process of the disease recognition model comprises:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image, and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image;
calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images;
and changing the parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
8. An aquatic organism disease detection apparatus, characterized by comprising an image pickup component and a processing component, wherein,
the image pickup component is arranged in the culture pond and is used for shooting aquatic organisms in the culture pond to obtain an image to be identified, wherein the image to be identified comprises the aquatic organisms;
the processing component is used for receiving the image to be recognized shot by the image pickup component, performing feature extraction and feature fusion processing on the image to be recognized through a disease recognition model to obtain fusion image features of the image to be recognized, and predicting based on the fusion image features through the disease recognition model to obtain the classification category of the aquatic organism diseases in the image to be recognized.
9. An aquatic organism disease detection apparatus, comprising:
the shooting module is used for shooting aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the characteristic fusion processing module is used for carrying out characteristic extraction and characteristic fusion processing on the image to be recognized through a disease recognition model to obtain fusion image characteristics of the image to be recognized;
and the prediction module is used for predicting based on the fusion image features through the disease identification model to obtain the classification category of the aquatic organism diseases in the image to be identified.
10. The apparatus of claim 9, wherein the disease identification model is a neural network comprising a plurality of network layers, and the feature fusion processing module comprises:
the feature extraction submodule is used for inputting the image to be identified into the lowest network layer (the layer with the lowest network level), and, in order of network level from low to high, performing feature extraction through each network layer respectively to obtain the initial image feature corresponding to that network layer;
the determining submodule is used for regarding a first network layer in the plurality of network layers, and when the first network layer is the highest network layer with the highest network level in the plurality of network layers, taking the initial image feature of the first network layer as the fusion image feature of the first network layer;
the determining sub-module is further configured to determine, when the first network layer is not the highest network layer, a fused image feature of the first network layer according to an initial image feature corresponding to the first network layer and an initial image feature corresponding to a second network layer, where the network level of the second network layer is higher than the network level of the first network layer.
11. The apparatus of claim 10, wherein when the number of second network layers is at least two, the determining sub-module comprises:
a first determining unit for determining a network level difference between the first network layer and the highest network layer;
the feature fusion processing unit is configured to, for the first network layer and the second network layers, perform feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image feature corresponding to the lower network layer, where the lower network layer is the layer with the lower network level of the two adjacent network layers, and the fusion level number of a hierarchical image feature is equal to the number of times feature fusion processing has been performed;
the judging unit is used for judging whether the fusion level number of the hierarchical image features reaches the network level difference;
a second determining unit configured to take the hierarchical image features as the fusion image features of the first network layer when the fusion level number of the hierarchical image features reaches the network level difference;
and the second determining unit is further configured to, when the fusion level number of the hierarchical image features does not reach the network level difference, perform feature fusion processing on the hierarchical image features corresponding to adjacent network layers to obtain the hierarchical image features, with updated fusion level number, corresponding to the lower network layer, and execute the step of judging whether the fusion level number of the hierarchical image features reaches the network level difference.
12. The apparatus of claim 10, wherein the second network layer is a network layer adjacent to the first network layer.
13. The apparatus according to claim 10, wherein the disease identification model further includes a feature processing layer, and the fused image features further include image features obtained by feature extraction of the initial image features of the highest network layer by the feature processing layer.
14. The apparatus of claim 9, further comprising:
the prediction module is further configured to perform regression prediction processing on the fusion image features through the disease identification model to obtain position information of a disease part in the image to be identified.
15. The apparatus according to claim 9, further comprising a training module, wherein the training module is configured to obtain a training image set, the training image set includes a plurality of sample images, each sample image corresponds to a classification category of a disease, and position information of a disease portion in each sample image; performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image; predicting based on the initial fusion image characteristics to obtain the initial classification category and the initial position information of the sample image; calculating a function value of a loss function based on the initial classification categories and the initial position information of the plurality of sample images; and changing the parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
16. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
17. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202010352073.2A 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment Active CN111563439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010352073.2A CN111563439B (en) 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment


Publications (2)

Publication Number Publication Date
CN111563439A true CN111563439A (en) 2020-08-21
CN111563439B CN111563439B (en) 2023-08-08

Family

ID=72073283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010352073.2A Active CN111563439B (en) 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN111563439B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052114A (en) * 2021-04-02 2021-06-29 东营市阔海水产科技有限公司 Dead shrimp identification method, terminal device and readable storage medium
CN113254458A (en) * 2021-07-07 2021-08-13 赛汇检测(广州)有限公司 Intelligent diagnosis method for aquatic disease
CN113780073A (en) * 2021-08-03 2021-12-10 华南农业大学 Device and method for assisting in estimating chicken flock uniformity

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609598A (en) * 2017-09-27 2018-01-19 武汉斗鱼网络科技有限公司 Image authentication model training method, device and readable storage medium storing program for executing
CN108573277A (en) * 2018-03-12 2018-09-25 北京交通大学 A kind of pantograph carbon slide surface disease automatic recognition system and method
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure
CN109801293A (en) * 2019-01-08 2019-05-24 平安科技(深圳)有限公司 Remote Sensing Image Segmentation, device and storage medium, server
CN109919013A (en) * 2019-01-28 2019-06-21 浙江英索人工智能科技有限公司 Method for detecting human face and device in video image based on deep learning
WO2019127451A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Image recognition method and cloud system
CN109977942A (en) * 2019-02-02 2019-07-05 浙江工业大学 A kind of scene character recognition method based on scene classification and super-resolution
CN110210434A (en) * 2019-06-10 2019-09-06 四川大学 Pest and disease damage recognition methods and device
CN110348387A (en) * 2019-07-12 2019-10-18 腾讯科技(深圳)有限公司 A kind of image processing method, device and computer readable storage medium
CN110378305A (en) * 2019-07-24 2019-10-25 中南民族大学 Tealeaves disease recognition method, equipment, storage medium and device
CN110443254A (en) * 2019-08-02 2019-11-12 上海联影医疗科技有限公司 The detection method of metallic region, device, equipment and storage medium in image
WO2019233341A1 (en) * 2018-06-08 2019-12-12 Oppo广东移动通信有限公司 Image processing method and apparatus, computer readable storage medium, and computer device
CN110728680A (en) * 2019-10-25 2020-01-24 上海眼控科技股份有限公司 Automobile data recorder detection method and device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
梁蒙蒙; 周涛; 夏勇; 张飞飞; 杨健: "Multimodal lung tumor image recognition based on randomized fusion and CNN", Journal of Nanjing University (Natural Science), no. 04, pages 117-127 *
陈天娇; 曾娟; 谢成军; 王儒敬; 刘万才; 张洁; 李瑞; 陈红波; 胡海瀛; 董伟: "Intelligent pest and disease recognition system based on deep learning", China Plant Protection, no. 04, pages 28-36 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052114A (en) * 2021-04-02 2021-06-29 东营市阔海水产科技有限公司 Dead shrimp identification method, terminal device and readable storage medium
CN113254458A (en) * 2021-07-07 2021-08-13 赛汇检测(广州)有限公司 Intelligent diagnosis method for aquatic disease
CN113254458B (en) * 2021-07-07 2022-04-08 赛汇检测(广州)有限公司 Intelligent diagnosis method for aquatic disease
CN113780073A (en) * 2021-08-03 2021-12-10 华南农业大学 Device and method for assisting in estimating chicken flock uniformity
CN113780073B (en) * 2021-08-03 2023-12-05 华南农业大学 Device and method for auxiliary estimation of chicken flock uniformity

Also Published As

Publication number Publication date
CN111563439B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN111563439A (en) Aquatic organism disease detection method, device and equipment
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN112819821B (en) Cell nucleus image detection method
CN110232721B (en) Training method and device for automatic sketching model of organs at risk
CN109242826B (en) Mobile equipment end stick-shaped object root counting method and system based on target detection
EP3564857A1 (en) Pattern recognition method of autoantibody immunofluorescence image
CN114140651A (en) Stomach focus recognition model training method and stomach focus recognition method
CN111325181B (en) State monitoring method and device, electronic equipment and storage medium
CN115147862A (en) Benthonic animal automatic identification method, system, electronic device and readable storage medium
CN111178364A (en) Image identification method and device
CN115797844A (en) Fish disease detection method and system based on neural network
CN115393351A (en) Method and device for judging cornea immune state based on Langerhans cells
CN114841974A (en) Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium
CN111046944A (en) Method and device for determining object class, electronic equipment and storage medium
CN111597937B (en) Fish gesture recognition method, device, equipment and storage medium
Muñoz-Benavent et al. Impact evaluation of deep learning on image segmentation for automatic bluefin tuna sizing
CN112559785A (en) Bird image recognition system and method based on big data training
CN117253192A (en) Intelligent system and method for silkworm breeding
CN116758539A (en) Embryo image blastomere identification method based on data enhancement
CN110110749A (en) Image processing method and device in a kind of training set
CN110110750A (en) A kind of classification method and device of original image
CN114037868B (en) Image recognition model generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant