CN111563439B - Aquatic organism disease detection method, device and equipment - Google Patents

Aquatic organism disease detection method, device and equipment

Info

Publication number
CN111563439B
Authority
CN
China
Prior art keywords
network layer
image
network
fusion
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010352073.2A
Other languages
Chinese (zh)
Other versions
CN111563439A (en)
Inventor
张为明 (Zhang Weiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202010352073.2A priority Critical patent/CN111563439B/en
Publication of CN111563439A publication Critical patent/CN111563439A/en
Application granted granted Critical
Publication of CN111563439B publication Critical patent/CN111563439B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02 Agriculture; Fishing; Forestry; Mining
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81 Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Business, Economics & Management (AREA)
  • Agronomy & Crop Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Animal Husbandry (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an aquatic organism disease detection method, apparatus, and device, belonging to the technical field of data processing. The method comprises the following steps: photographing aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms; performing feature extraction and feature fusion processing on the image to be identified through a disease recognition model to obtain fused image features of the image to be identified; and performing prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image to be identified. The technical solution provided by the application can improve the efficiency of aquatic organism disease detection.

Description

Aquatic organism disease detection method, device and equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an aquatic organism disease detection method, apparatus, and device.
Background
In aquaculture, it is necessary to detect whether aquatic organisms have diseases, so that diseased aquatic organisms can be treated in time and economic losses avoided.
In general, aquatic organisms develop skin abnormalities when they are ill; for example, a fish's eyes turn white when it has white-eye disease, and the gill area ulcerates when it has gill rot disease. In view of this, disease detection is usually performed by sampling some aquatic organisms from the culture pond and having a professional visually inspect them to judge whether they are diseased.
However, the number of aquatic organisms in a culture pond is very large; judging them one by one with the naked eye makes disease detection inefficient.
Disclosure of Invention
An aim of the embodiment of the application is to provide a method, a device and equipment for detecting aquatic organism diseases, so as to solve the problem of low detection efficiency of aquatic organism diseases. The specific technical scheme is as follows:
in a first aspect, the present application provides a method of detecting an aquatic organism disease, the method comprising:
shooting aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms;
performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fusion image features of the image to be identified;
and performing prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, the disease recognition model is a neural network including a plurality of network layers, and performing the feature extraction and feature fusion processing on the image to be identified through the disease recognition model to obtain the fused image features of the image to be identified includes:
inputting the image to be identified into the lowest network layer (the network layer with the lowest network level) and, in order from low network level to high, performing feature extraction through each network layer to obtain the initial image features corresponding to that network layer;
for a first network layer of the plurality of network layers, if the first network layer is the highest network layer (the network layer with the highest network level) among the plurality of network layers, taking the initial image features of the first network layer as the fused image features of the first network layer;
and if the first network layer is not the highest network layer, determining the fused image features of the first network layer according to the initial image features corresponding to the first network layer and those corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
Optionally, when the number of the second network layers is at least two, determining the fused image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to the second network layer includes:
determining a network level difference between the first network layer and the highest network layer;
performing feature fusion processing on the initial image features corresponding to each pair of adjacent network layers among the first network layer and the second network layers to obtain the hierarchical image features corresponding to the lower network layer, wherein the low network layer is the layer with the lower network level of the two adjacent layers, and the fusion level of the hierarchical image features equals the number of feature fusion operations performed;
judging whether the fusion level number of the hierarchical image features reaches the network level difference value or not;
if the fusion level of the hierarchical image features reaches the network level difference value, taking the hierarchical image features as the fusion image features of the first network layer;
and if the fusion level number of the hierarchical image features does not reach the network level difference value, carrying out feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain the hierarchical image features which correspond to the lower network layer and are updated in the fusion level number, and executing the step of judging whether the fusion level number of the hierarchical image features reaches the network level difference value.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by extracting the feature of the initial image feature of the highest network layer by the feature processing layer.
Optionally, the method further comprises:
and carrying out regression prediction processing on the fused image features through the disease identification model to obtain the position information of the disease part in the image to be identified.
Optionally, the training process of the disease identification model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain initial classification category and initial position information of the sample image;
calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images;
and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
In a second aspect, the present application also provides an aquatic organism disease detection apparatus comprising an imaging component and a processing component, wherein:
the imaging component is arranged in the culture pond and is used for photographing the aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the processing component is used for receiving the image to be identified captured by the imaging component, performing feature extraction and feature fusion processing on the image through a disease recognition model to obtain the fused image features of the image, and performing prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, when the disease recognition model is a neural network including a plurality of network layers, the processing component is configured to input the image to be identified into the lowest network layer (the layer with the lowest network level) and perform feature extraction through each network layer in order of network level from low to high, obtaining the initial image features corresponding to each network layer;
for a first network layer of the plurality of network layers, if the first network layer is the highest network layer (the layer with the highest network level) among the plurality of network layers, take the initial image features of the first network layer as the fused image features of the first network layer;
and if the first network layer is not the highest network layer, determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and those corresponding to a second network layer, wherein the network level of the second network layer is higher than that of the first network layer.
Optionally, the processing component is configured to determine a network level difference between the first network layer and the highest network layer when the number of second network layers is at least two;
perform feature fusion processing on the initial image features corresponding to each pair of adjacent network layers among the first network layer and the second network layers to obtain the hierarchical image features corresponding to the lower network layer, wherein the low network layer is the layer with the lower network level of the two adjacent layers, and the fusion level of the hierarchical image features equals the number of feature fusion operations performed;
judging whether the fusion level number of the hierarchical image features reaches the network level difference value or not;
if the fusion level of the hierarchical image features reaches the network level difference value, taking the hierarchical image features as the fusion image features of the first network layer;
And if the fusion level number of the hierarchical image features does not reach the network level difference value, carrying out feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain the hierarchical image features which correspond to the lower network layer and are updated in the fusion level number, and executing the step of judging whether the fusion level number of the hierarchical image features reaches the network level difference value.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by extracting the feature of the initial image feature of the highest network layer by the feature processing layer.
Optionally, the processing component is configured to perform regression prediction processing on the fused image features through the disease recognition model to obtain the position information of the diseased part in the image to be identified.
Optionally, the training process of the disease identification model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image;
Performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain initial classification category and initial position information of the sample image;
calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images;
and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
In a third aspect, the present application also provides an aquatic organism disease detection device, the device comprising:
the shooting module is used for shooting aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the feature fusion processing module is used for carrying out feature extraction and feature fusion processing on the image to be identified through the disease identification model to obtain fusion image features of the image to be identified;
and the prediction module is used for performing prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, the disease identification model is a neural network including a plurality of network layers, and the feature fusion processing module includes:
the feature extraction sub-module is used for inputting the image to be identified into the lowest network layer (the layer with the lowest network level) and performing feature extraction through each network layer in order of network level from low to high, to obtain the initial image features corresponding to each network layer;
a determining submodule, configured to, for a first network layer of the plurality of network layers, use an initial image feature of the first network layer as a fused image feature of the first network layer when the first network layer is a highest network layer of the plurality of network layers with a highest network level;
and the determination submodule is further used for determining the fusion image characteristics of the first network layer according to the initial image characteristics corresponding to the first network layer and the initial image characteristics corresponding to the second network layer when the first network layer is not the highest network layer, and the network layer level of the second network layer is higher than the network layer level of the first network layer.
Optionally, when the number of the second network layers is at least two, the determining submodule includes:
A first determining unit configured to determine a network level difference between the first network layer and the highest network layer;
the feature fusion processing unit is used for performing feature fusion processing on the initial image features corresponding to each pair of adjacent network layers among the first network layer and the second network layers, to obtain the hierarchical image features corresponding to the lower network layer, wherein the low network layer is the layer with the lower network level of the two adjacent layers, and the fusion level of the hierarchical image features equals the number of feature fusion operations performed;
the judging unit is used for judging whether the fusion level number of the hierarchical image features reaches the network level difference value;
the second determining unit is used for taking the hierarchical image characteristics as the fusion image characteristics of the first network layer when the fusion level number of the hierarchical image characteristics reaches the network level difference value;
and the second determining unit is further configured to perform feature fusion processing on the hierarchical image features corresponding to two adjacent network layers when the fusion level number of the hierarchical image features does not reach the network level difference value, obtain the hierarchical image features corresponding to the low network layer and updated with the fusion level number, and perform the step of determining whether the fusion level number of the hierarchical image features reaches the network level difference value.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image feature further includes an image feature obtained by extracting the feature of the initial image feature of the highest network layer by the feature processing layer.
Optionally, the apparatus further includes:
and the prediction module is also used for carrying out regression prediction processing on the fused image features through the disease identification model to obtain the position information of the disease part in the image to be identified.
Optionally, the device further comprises a training module, wherein the training module is used for acquiring a training image set, and the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image; performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image; predicting based on the initial fusion image characteristics to obtain initial classification category and initial position information of the sample image; calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images; and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
In a fourth aspect, the present application further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspects when executing a program stored on a memory.
In a fifth aspect, the present application also provides a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the method steps of any of the first aspects.
In a sixth aspect, the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of any of the first aspects described above.
The beneficial effects of the embodiments of the present application are as follows:
The embodiments of the present application provide an aquatic organism disease detection method, apparatus, and device: aquatic organisms in a culture pond are photographed to obtain an image to be identified containing the aquatic organisms; feature extraction and feature fusion processing are performed on the image through the disease recognition model to obtain fused image features; and prediction is performed on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease. Because the classification category of the aquatic organism disease is determined by the disease recognition model predicting on the image to be identified, the efficiency of aquatic organism disease detection can be improved.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Other drawings can be obtained from these drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a method for detecting aquatic organism diseases according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another method for detecting aquatic organism disease according to an embodiment of the present disclosure;
fig. 3a is a schematic structural diagram of a disease recognition model according to an embodiment of the present application;
fig. 3b is a schematic structural diagram of a network layer according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for detecting aquatic organism disease according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an aquatic organism disease detection device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The embodiment of the application provides a method for detecting aquatic organism diseases, which can be applied to electronic equipment, wherein the electronic equipment can have an image processing function, and for example, the electronic equipment can be a mobile phone, a computer and the like.
The electronic device may be preset with a disease recognition model, which may be a modified convolutional neural network. The electronic device can then perform disease detection on aquatic organisms through the disease recognition model.
The following will describe a detailed description of a method for detecting aquatic organism diseases provided in the embodiments of the present application with reference to specific embodiments, as shown in fig. 1, and the specific steps are as follows:
and 101, shooting aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms.
In an implementation, an underwater camera may be arranged in the culture pond; the underwater camera can photograph the culture pond and send the captured image to the electronic device. The electronic device may perform a preliminary identification of the received image: if the image contains aquatic organisms, the electronic device may treat it as the image to be identified; if it does not, the electronic device may skip further processing.
When the cultivation density of the aquatic organisms is high, the electronic equipment can directly take the received image as the image to be identified.
In this embodiment of the present application, the underwater camera may photograph the aquatic organisms in the culture pond at a preset detection period, or it may photograph them after receiving a shooting instruction from the electronic device.
Alternatively, the electronic device may include a camera component and be disposed in the culture pond. The electronic device can then photograph the aquatic organisms in the culture pond through the camera component, obtain an image to be identified containing the aquatic organisms, and perform disease detection based on that image.
Step 102: perform feature extraction and feature fusion processing on the image to be identified through the disease recognition model to obtain the fused image features of the image to be identified.
In implementation, the electronic device may perform feature extraction on the image to be identified through the disease identification model, so as to obtain a plurality of image features. Then, the electronic device can perform feature fusion processing on the extracted plurality of image features to obtain fusion image features of the image to be identified.
In this embodiment of the present application, the disease recognition model may include a plurality of network layers, each with a corresponding network level, and the plurality of network layers perform feature extraction on the image to be identified in order from low network level to high. As the network level increases, the image features extracted by a network layer become more abstract and comprehensive. Feature fusion processing refers to fusing the high-level image features extracted by a network layer at a high network level with the low-level image features extracted by a network layer at a low network level. Low-level image features include, for example, edge features, color features, surface features, and texture features; high-level image features include, for example, shape features, semantic features, and object features.
Performing feature fusion processing thus combines high-level and low-level image features. Compared with the high-level image features alone, the fused image features obtained by feature fusion have stronger feature expression, so predicting the disease classification category from the fused image features improves prediction accuracy.
Step 103: perform prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image to be identified.
In an implementation, the electronic device can perform classification prediction on the fused image features based on the disease recognition model. Where the aquatic organism suffers from a disease, the electronic device can determine the classification category corresponding to the fused image features based on the disease recognition model, obtaining the classification category of the aquatic organism disease in the image to be identified.
There are various specific processes by which the electronic device can perform classification prediction on the fused image features based on the disease recognition model and determine the corresponding classification category.
In one possible implementation, for each disease classification category, the electronic device may calculate the probability of that category from the fused image features based on the disease recognition model, obtaining multiple probabilities. The electronic device can then take the classification category with the highest probability as the classification category of the aquatic organism disease in the image to be identified.
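As an illustrative sketch only (the patent does not specify the classifier), the highest-probability selection can be expressed roughly as follows; the linear classifier, the pooled 512-dimensional feature, and the four categories are assumptions:

```python
# Hedged sketch: pick the classification category with the highest
# probability from a (globally pooled) fused image feature.
import torch
import torch.nn.functional as F

NUM_CLASSES = 4  # assumed: e.g. three disease categories plus healthy
classifier = torch.nn.Linear(512, NUM_CLASSES)  # assumed classifier head

pooled_feature = torch.randn(1, 512)   # pooled fused image feature (assumed)
logits = classifier(pooled_feature)    # one score per classification category
probs = F.softmax(logits, dim=-1)      # probability of each category
predicted = probs.argmax(dim=-1)       # category with the highest probability
```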
Optionally, in the case that the aquatic organism has a disease, the electronic device may further determine, based on the disease recognition model, a position of a disease portion of the aquatic organism in the image to be recognized, and a detailed description will be given later on in the specific processing procedure.
In the embodiment of the present application, the electronic device can photograph the aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms; perform feature extraction and feature fusion processing on the image through the disease recognition model to obtain fused image features; and perform prediction on the fused image features based on the disease recognition model to obtain the classification category of the aquatic organism disease in the image. Because the classification category is determined by having the disease recognition model predict on the image to be identified, disease detection is more efficient than manual inspection.
With the aquatic organism disease detection method provided by the embodiments of the present application, diseases that are obvious to the naked eye can be identified in real time based on computer vision, which improves detection efficiency. Because no manual participation in detection is needed, the labor cost of disease detection is also reduced. Furthermore, the detection does not harm the aquatic organisms, and breeders can discover diseases in time and take corresponding treatment measures, reducing breeding losses.
Optionally, the disease recognition model is a neural network including a plurality of network layers, and the embodiment of the application provides a specific processing process for performing feature extraction and feature fusion processing on an image to be recognized through the disease recognition model to obtain a fused image feature of the image to be recognized, as shown in fig. 2, including the following steps:
step 201, inputting the image to be identified into the lowest network layer with the lowest network layer according to the sequence from low network layer to high network layer, and extracting the features of each network layer to obtain the initial image features corresponding to the network layer.
In an implementation, the electronic device may input the image to be identified into the lowest network layer (the layer with the lowest network level), perform feature extraction through that layer, and take the computation result it outputs as the initial image features corresponding to the lowest network layer.
Treating the lowest network layer as the previous network layer, the electronic device may determine the next network layer in order from low network level to high, and use the computation result output by the previous layer as the input of the next layer. The next layer then performs feature extraction on the previous layer's initial image features to obtain its own initial image features. The electronic device then treats that layer as the previous layer and repeats the process until all network layers in the disease recognition model have been traversed.
For example, fig. 3a shows a disease recognition model provided in an embodiment of the present application. The model includes 5 network layers which, in order of network level from low to high, are conv1, conv2, conv3, conv4, and conv5.
The electronic device may input the image to be identified into the lowest network layer conv1 (the layer with the lowest network level), perform feature extraction through conv1, and take the computation result output by conv1 as the initial image features corresponding to conv1.
Treating conv1 as the previous network layer, the electronic device may determine the next network layer conv2 in order of network level from low to high, and use the computation result output by conv1 as the input of conv2. The electronic device may then perform feature extraction through conv2 on conv1's initial image features, obtaining the initial image features corresponding to conv2. Conv2 then becomes the previous network layer, and the process is repeated until all 5 network layers in the disease recognition model have been traversed.
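For illustration only, the layer-by-layer extraction described above can be sketched in PyTorch as follows; the five layers, channel widths, strides, and input size are assumptions standing in for conv1 through conv5, not the patent's actual network:

```python
# Minimal sketch (assumed architecture): run the image to be identified
# through network layers conv1..conv5 in order of network level, from
# lowest to highest, collecting each layer's initial image features.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        channels = [3, 64, 128, 256, 512, 512]  # assumed channel widths
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels[i], channels[i + 1], 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(5)  # conv1 .. conv5
        ])

    def forward(self, image):
        # Each layer's output is both that layer's initial image features
        # and the input of the next (higher-level) layer.
        features = []
        x = image
        for layer in self.layers:
            x = layer(x)
            features.append(x)
        return features  # [conv1 features, ..., conv5 features]

backbone = Backbone()
feats = backbone(torch.randn(1, 3, 320, 320))  # image to be identified (assumed size)
```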
Step 202: for a first network layer of the plurality of network layers, determine whether the first network layer is the highest network layer, i.e., the layer with the highest network level among the plurality of network layers.
In implementations, an electronic device may determine a first network layer among a plurality of network layers. The electronic device may then determine whether the first network layer is a highest network layer of the plurality of network layers having a highest network level.
If the first network layer is the highest network layer with the highest network level in the plurality of network layers, the electronic device may execute step 203; if the first network layer is not the highest network layer, the electronic device may perform step 204.
In the embodiment of the present application, the electronic device may determine the first network layer among the plurality of network layers in multiple ways. In one feasible implementation, the electronic device may treat each of the plurality of network layers in turn as the first network layer.
For example, in the disease recognition model shown in fig. 3a, the electronic device may have conv1, conv2, conv3, conv4, and conv5 as the first network layers, respectively.
In another possible implementation, the electronic device may be pre-configured with the first network layer.
Still taking the disease recognition model shown in fig. 3a as an example, conv3, conv4, and conv5 may each be preset as first network layers.
Step 203: take the initial image features of the first network layer as the fused image features of the first network layer.
Step 204: determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and those corresponding to the second network layer.
Wherein the network layer level of the second network layer is higher than the network layer level of the first network layer.
In an implementation, if the first network layer is not the highest network layer, the electronic device may determine a second network layer whose network level is higher than that of the first network layer. The electronic device may then determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and those corresponding to the second network layer; the specific processing procedure is described in detail below.
In this embodiment of the present application, when exactly one network layer has a network level higher than that of the first network layer, that network layer is the second network layer.
Still taking the disease recognition model shown in fig. 3a as an example, the first network layer may be conv4, and the number of network layers higher than the first network layer is 1, that is, the network layer higher than the first network layer is conv5, so the second network layer may be conv5.
When more than one network layer has a network level higher than that of the first network layer, the number of second network layers may be one or more, and the specific process by which the electronic device determines the fused image features of the first network layer from the initial image features of the first and second network layers differs according to the number of second network layers.
In one possible implementation, the number of second network layers may be 1, and in this case, the second network layer may be any network layer higher than the first network layer. For example, the first network layer may be conv3 and the second network layer may be conv4 or conv5.
Alternatively, the second network layer may be the network layer that is adjacent to the first network layer and whose network level is higher. Because the initial image features of adjacent network layers are more strongly correlated, choosing the adjacent layer as the second network layer makes it easier for the electronic device to perform feature fusion based on the initial image features of the first and second network layers.
For example, the first network layer may be conv3, and the second network layer may be conv4, the network layer adjacent to conv3 with the higher network level.
In another possible implementation, the number of second network layers may be more than one; the second network layers may be a preset number of the network layers whose network level is higher than that of the first network layer.
For example, the first network layer may be conv1, the network layers of the network hierarchy higher than the first network layer are conv2, conv3, conv4, and conv5, the preset number is 2, and the second network layer may be conv2 and conv3.
Alternatively, the second network layers are all of the network layers whose network level is higher than that of the first network layer.
For example, the first network layer may be conv3, and the network layers of the network hierarchy higher than the first network layer are conv4 and conv5, i.e., the second network layer is conv4 and conv5.
In this embodiment of the present application, the electronic device may input the image to be identified into the lowest network layer (the layer with the lowest network level) and perform feature extraction through each network layer in order of network level from low to high, obtaining the initial image features corresponding to each network layer. Then, for a first network layer among the plurality of network layers: if the first network layer is the highest network layer (the layer with the highest network level), its initial image features are used as its fused image features; if it is not the highest network layer, its fused image features are determined from its own initial image features and those of the second network layer. In this way, feature extraction and feature fusion of the image to be identified are realized, and fused image features with stronger feature expression are obtained.
In this embodiment of the present application, the feature fusion processing may be implemented by convolution operations. For example, for network layer conv5, the initial image features may be extracted by deformable convolution, and those features may then be upsampled by transposed convolution so that the matrix dimensions of conv5's initial image features match those of the previous network layer conv4, improving the feature resolution of the conv5-layer features. The initial image features of conv5 and conv4 may then be added element-wise, achieving feature fusion.
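A minimal sketch of this fusion step follows, assuming conv5's feature map is half the spatial size of conv4's and the channel counts already match; a plain 3x3 convolution stands in for the deformable convolution mentioned above:

```python
# Hedged sketch: upsample conv5's initial features with a transposed
# convolution so their matrix dimensions match conv4's, then fuse the
# two feature maps by element-wise addition. Shapes are assumptions.
import torch
import torch.nn as nn

conv4_feat = torch.randn(1, 512, 20, 20)  # initial features of conv4 (assumed)
conv5_feat = torch.randn(1, 512, 10, 10)  # initial features of conv5 (assumed)

refine = nn.Conv2d(512, 512, 3, padding=1)            # stand-in for deformable conv
upsample = nn.ConvTranspose2d(512, 512, 2, stride=2)  # 10x10 -> 20x20

fused_conv4 = conv4_feat + upsample(refine(conv5_feat))  # feature fusion
```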
Optionally, when the second network layers are all of the network layers whose network level is higher than that of the first network layer and there are at least two of them, an implementation of determining the fused image features of the first network layer according to the initial image features corresponding to the first network layer and those corresponding to the second network layers, as shown in fig. 4, includes the following steps:
step 401, determining a network level difference between the first network layer and the highest network layer.
In implementations, the electronic device may calculate a difference between the network level of the first network layer and the network level of the highest network layer, resulting in a network level difference.
For example, still taking the disease recognition model shown in fig. 3a as an example, the first network layer may be conv3, the second network layers may be conv4 and conv5, and conv5 is the highest network layer. The electronic device may calculate the difference between network level 3 of the first network layer conv3 and network level 5 of the highest network layer conv5, obtaining a network level difference of 2.
Step 402: for the first network layer and the second network layers, perform feature fusion processing on the initial image features corresponding to each pair of adjacent network layers to obtain the hierarchical image features corresponding to the lower network layer of each pair.
Here the low network layer is the layer with the lower network level of the two adjacent network layers, and the fusion level of the hierarchical image features equals the number of feature fusion operations performed.
In implementations, the electronic device may determine each pair of adjacent network layers among the first network layer and the second network layers. The electronic device may then perform feature fusion processing on the initial image features corresponding to each pair of adjacent layers, and take the image features obtained by the fusion as the hierarchical image features corresponding to the lower layer of the pair.
For example, the electronic device may determine the adjacent pairs among the first network layer conv3 and the second network layers conv4 and conv5, obtaining the pairs (conv3, conv4) and (conv4, conv5). For the adjacent pair conv3 and conv4, the electronic device may perform feature fusion processing on the initial image features of conv3 and those of conv4, and take the result as the hierarchical image features corresponding to the lower layer conv3. Since only one feature fusion operation has been performed for conv3 and conv4, the fusion level of conv3's hierarchical image features at this point is 1. For ease of distinction, hierarchical image features with fusion level 1 are called primary image features, those with fusion level 2 are called secondary image features, and so on for the other fusion levels.
Similarly, for the adjacent pair conv4 and conv5, the electronic device may perform feature fusion processing on the initial image features of conv4 and those of conv5, and take the result as the hierarchical image features corresponding to the lower layer conv4. Since only one feature fusion operation has been performed for conv4 and conv5, the fusion level of conv4's hierarchical image features at this point is 1.
Step 403: judge whether the fusion level of the hierarchical image features reaches the network level difference.
In implementation, the electronic device may determine whether the fusion level of the hierarchical image features reaches the network level difference, if the fusion level of the hierarchical image features reaches the network level difference, the electronic device may perform step 404, and if the fusion level of the hierarchical image features does not reach the network level difference, the electronic device may perform step 405.
Step 404: take the hierarchical image features as the fused image features of the first network layer.
Step 405: perform feature fusion processing on the hierarchical image features corresponding to the two adjacent network layers to obtain updated hierarchical image features corresponding to the lower network layer.
In implementations, the electronic device may determine two adjacent network layers among the network layers corresponding to the hierarchical image features. Then, the electronic device can perform feature fusion processing on the hierarchical image features corresponding to each two adjacent network layers, and then uses the image features obtained by the feature fusion processing as the hierarchical image features corresponding to the low network layer after the fusion level is updated. Thereafter, the electronic device may perform step 403.
For example, for the first network layer conv3, the fusion level of conv3's hierarchical image features is currently 1, and the electronic device may determine that fusion level 1 has not reached the network level difference of 2. The electronic device may then determine the adjacent pair among the network layers corresponding to the hierarchical image features, namely conv3 and conv4. It may then perform feature fusion processing on the primary image features of conv3 and those of conv4, and take the result as the secondary image features corresponding to the lower layer conv3 after the fusion level is updated. Since two feature fusion operations have now been performed for conv3 and conv4, the fusion level of conv3's hierarchical image features is 2.
The electronic device may determine that fusion level 2 of the hierarchical image features reaches the network level difference of 2, and may then take the secondary image features as the fused image features of the first network layer conv3.
In the embodiment of the present application, the electronic device may determine the network level difference between the first network layer and the highest network layer, and then perform feature fusion processing on the initial image features of each pair of adjacent layers among the first and second network layers, obtaining the hierarchical image features corresponding to the lower layer of each pair. When the fusion level of the hierarchical image features reaches the network level difference, the electronic device can take those hierarchical image features as the fused image features of the first network layer; when it does not, the electronic device performs feature fusion processing on the hierarchical image features of adjacent layers to obtain updated hierarchical image features for the lower layer, repeating until the fusion level reaches the network level difference.
In this way, the initial image features of the first network layer are fused with those of every second network layer, further strengthening the feature expression of the fused image features and improving the accuracy of disease classification prediction based on them.
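The iterative procedure of steps 401 to 405 can be sketched as below, under the simplifying assumption that all initial features have already been brought to a common shape, so that one fusion operation reduces to an element-wise addition:

```python
# Hedged sketch of iterative pairwise fusion: fuse adjacent layers until
# the fusion level equals the network level difference to the highest layer.
import torch

def fuse_to_first_layer(feats: list[torch.Tensor]) -> torch.Tensor:
    # feats: initial features of the first network layer followed by all
    # second network layers, ordered from the lowest network level up.
    level_diff = len(feats) - 1   # network level difference to the highest layer
    fusion_level = 0
    while fusion_level < level_diff:
        # Fuse every adjacent pair; each result belongs to the lower layer
        # of the pair, so the list shrinks by one per pass.
        feats = [feats[i] + feats[i + 1] for i in range(len(feats) - 1)]
        fusion_level += 1         # fusion level equals the number of fusion passes
    return feats[0]               # fused image features of the first network layer

# Example mirroring the text: first layer conv3, second layers conv4 and conv5.
conv3, conv4, conv5 = (torch.randn(1, 256, 40, 40) for _ in range(3))
fused_conv3 = fuse_to_first_layer([conv3, conv4, conv5])  # two fusion passes
```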
Optionally, to improve the processing of the extracted image features, the disease recognition model further includes an extras layer (feature processing layer) whose input is the initial image features of the highest network layer. The electronic device can perform further feature extraction through the feature processing layer on the highest layer's initial image features, and the extracted image features are also used as fused image features.
Further feature extraction by the feature processing layer on the highest network layer's initial image features integrates image information across the whole image to be identified, yielding extras features with stronger feature expression. Using the extras features extracted by the feature processing layer as fused image features therefore further strengthens the feature expression of the fused image features and improves the accuracy of disease classification prediction based on them.
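A hedged sketch of such an extras (feature processing) layer, with assumed layer sizes:

```python
# Hedged sketch: further convolution over the highest network layer's
# initial features; the output is used as an additional fused image feature.
import torch
import torch.nn as nn

extras = nn.Sequential(
    nn.Conv2d(512, 256, 1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 512, 3, stride=2, padding=1),  # integrates wider image context
    nn.ReLU(inplace=True),
)

conv5_feat = torch.randn(1, 512, 10, 10)  # initial features of the highest layer (assumed)
extras_feat = extras(conv5_feat)          # extras features, also used for prediction
```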
Optionally, in addition to detecting the classification category of the disease, the electronic device may determine the position of the diseased part of the aquatic organism in the image to be identified based on the disease recognition model. The specific process includes: performing regression prediction processing on the fused image features through the disease recognition model to obtain the position information of the diseased part in the image to be identified.
In implementation, the specific processing procedure of performing regression prediction processing on the fused image features by the electronic device through the disease recognition model to obtain the position information of the disease part in the image to be recognized can refer to the processing procedure of performing regression prediction processing on the image features by the electronic device based on the neural network to obtain the position information of the target object in the image in the related art, which is not described herein.
In this embodiment of the present application, the position information may be the coordinates of the diseased part in the image to be identified, or a marking box that marks the position of the diseased part in the image.
In the embodiment of the present application, the electronic device can perform regression prediction processing on the fused image features through the disease recognition model to obtain the position information of the diseased part in the image to be identified. In addition to outputting the disease classification category, the electronic device can thus also output the specific position of the diseased part, which both enriches the detection results and helps aquaculture personnel further understand the disease condition of the aquatic organisms.
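For illustration only (the patent defers to standard regression prediction in the related art), a per-position classification head and box-regression head over a fused feature map might look like the following; the channel counts and the four-value box encoding are assumptions:

```python
# Hedged sketch: small convolutional heads that predict, at each spatial
# position of a fused feature map, class scores and a marking box for the
# diseased part (assumed encoding: x, y, width, height).
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed number of disease categories
cls_head = nn.Conv2d(512, NUM_CLASSES, 3, padding=1)  # classification prediction
loc_head = nn.Conv2d(512, 4, 3, padding=1)            # regression prediction (box)

fused = torch.randn(1, 512, 20, 20)  # fused image features (assumed shape)
class_scores = cls_head(fused)       # per-position category scores
boxes = loc_head(fused)              # per-position position information
```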
In the embodiment of the application, based on the image to be identified, the electronic device can identify through the disease identification model whether the aquatic organisms in the image have a disease. When the aquatic organisms have no disease, the electronic device can output preset health information to indicate that the aquatic organisms in the image to be identified are in a healthy state. When the aquatic organisms have a disease, the electronic device can output the classification category of the disease and a marking frame marking the position of the disease part in the image to be identified, so that aquaculture personnel can further confirm the disease.
Optionally, the embodiment of the application further provides a training process of the disease identification model, which comprises the following steps:
Step 1: acquiring a training image set.
The training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image.
In implementation, the electronic device may store the training image set locally and obtain it from local storage; alternatively, the electronic device may obtain the training image set from a network address indicated by a received training instruction.
The embodiment of the application provides an implementation for generating the training image set: the electronic device may capture images of a plurality of aquatic organisms through an underwater camera; for each image containing a diseased aquatic organism, the position of the disease part can be box-selected through an application program with a marking function, and the classification category of the disease can be marked manually. The application program with the marking function may be LabelImg or Labelme. The classification categories of diseases may include exophthalmia (pop-eye), gill rot disease, saprolegniasis, and the like.
The electronic device may then take the image marked with the location and classification category as a sample image, resulting in a training image set.
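For instance, LabelImg saves each marked image as a Pascal VOC style XML file; the following sketch reads one such file back into (classification category, marking frame) pairs. The tag names follow LabelImg's default VOC output, while the file path and category strings shown are hypothetical.

```python
import xml.etree.ElementTree as ET

def parse_labelimg_annotation(xml_path):
    """Read one LabelImg (Pascal VOC) annotation into (category, box) pairs."""
    root = ET.parse(xml_path).getroot()
    labels = []
    for obj in root.iter("object"):
        category = obj.find("name").text  # e.g. "gill_rot" (hypothetical)
        bndbox = obj.find("bndbox")
        box = tuple(int(float(bndbox.find(tag).text))
                    for tag in ("xmin", "ymin", "xmax", "ymax"))
        labels.append((category, box))
    return labels

# Usage: labels = parse_labelimg_annotation("fish_0001.xml")
```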
Step 2: performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image.
In implementation, the specific processing procedure of this step may refer to the processing procedure of step 102, which is not described herein.
Step 3: predicting based on the initial fusion image features to obtain the initial classification category and initial position information of the sample image.
In implementation, the specific processing procedure in this step may refer to step 103 and the processing procedure of performing regression prediction processing on the fused image features through the disease identification model to obtain the position information, which is not described herein again.
Step 4: calculating a function value of the loss function based on the initial classification categories and initial position information of the plurality of sample images.
In implementation, a loss function may be preset on the electronic device, and the electronic device may calculate the function value of the loss function based on the initial classification categories and initial position information predicted for the plurality of sample images, together with the marked classification categories and position information of the sample images.
Step 5: adjusting parameters of the initial model according to the function value until the function value is smaller than a preset threshold, to obtain the disease identification model.
In implementation, the electronic device may judge whether the function value is smaller than the preset threshold. If the function value is not smaller than the preset threshold, the electronic device may determine the parameters to be adjusted and their values according to the function value and a preset parameter updating scheme, and then update the parameters of the initial model accordingly. If the function value is smaller than the preset threshold, the electronic device may take the current initial model as the disease identification model.
In the embodiment of the application, the electronic device can train the initial model based on the training image set to obtain the disease identification model, which facilitates subsequent aquatic organism disease detection by the electronic device based on the disease identification model.
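Steps 1 to 5 amount to a standard supervised training loop. The sketch below assumes a multi-task loss (cross-entropy for the initial classification category plus smooth L1 for the initial position information), a simplified one-disease-part-per-image setting, and a hypothetical `model` that returns class logits and box predictions per image; the embodiment itself does not fix the form of the loss function.

```python
import torch
import torch.nn.functional as F

def train_disease_model(model, loader, lr=1e-3, threshold=0.05, max_epochs=100):
    """Sketch of training steps 2-5 over a training image set (step 1).

    loader yields (images, categories, boxes) batches; categories are the
    marked classification categories, boxes the marked position information.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(max_epochs):
        for images, categories, boxes in loader:
            cls_logits, box_preds = model(images)          # steps 2-3: predict
            loss = (F.cross_entropy(cls_logits, categories)
                    + F.smooth_l1_loss(box_preds, boxes))  # step 4: loss value
            if loss.item() < threshold:                    # step 5: stop check
                return model  # current initial model becomes the disease model
            optimizer.zero_grad()
            loss.backward()   # change parameters according to the function value
            optimizer.step()
    return model
```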
The embodiment of the application provides a specific structure of the initial model, as shown in table 1. The initial model includes 5 network layers: conv1, conv2, conv3, conv4 and conv5. In order from low to high network layers, each network layer downsamples the image features output by the previous network layer, so the resolution is successively halved. As the network level increases, the features extracted by the network layer become more high-level and comprehensive.
TABLE 1
The network structure of the conv_bn module is shown in (a) of fig. 3b: the conv_bn module consists of one 3x3 convolution layer and one BN layer. The network structure of the conv_dw module is shown in (b) of fig. 3b: the conv_dw module consists of two convolution layers and two BN layers; placing a 1x1 convolution layer after the 3x3 convolution layer increases the nonlinearity of the feature extraction and improves the expression capability of the extracted features. To fully extract image features, multiple conv_dw modules may be used in each network layer. To increase the image recognition speed, the number of output feature channels of a network layer may be capped at 256.
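The two modules can be sketched as follows. Only the convolution and BN layers are specified above, so the ReLU activations and the depthwise grouping of the 3x3 convolution (the usual reading of "dw") are assumptions.

```python
import torch.nn as nn

def conv_bn(in_ch, out_ch, stride=1):
    """conv_bn module: one 3x3 convolution layer + one BN layer."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),  # activation assumed
    )

def conv_dw(in_ch, out_ch, stride=1):
    """conv_dw module: 3x3 convolution + BN, then 1x1 convolution + BN.

    The trailing 1x1 layer adds nonlinearity to the feature extraction;
    groups=in_ch makes the 3x3 convolution depthwise (assumed).
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Example layer: downsample by 2 and keep output channels capped at 256.
conv2 = nn.Sequential(conv_bn(64, 128, stride=2),
                      conv_dw(128, 256), conv_dw(256, 256))
```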
Based on the same technical concept, the embodiment of the application further provides aquatic organism disease detection equipment, which includes an imaging component and a processing component, wherein:
the imaging component is arranged in the culture pond and is used for shooting the aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the processing component is used for receiving the image to be identified shot by the imaging component, performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fused image features of the image to be identified, and predicting based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
Optionally, the processing component is configured to, when the disease identification model is a neural network including a plurality of network layers, input the image to be identified into the lowest network layer, which has the lowest network level among the plurality of network layers, and perform feature extraction through each network layer in order from low to high network levels to obtain the initial image features corresponding to each network layer;
for a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network level in the plurality of network layers, take the initial image features of the first network layer as the fused image features of the first network layer;
and if the first network layer is not the highest network layer, determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to a second network layer, wherein the network level of the second network layer is higher than the network level of the first network layer.
Optionally, the processing component is configured to determine the network level difference between the first network layer and the highest network layer when the number of second network layers is at least two;
perform feature fusion processing on the initial image features corresponding to two adjacent network layers among the first network layer and the second network layers to obtain the hierarchical image features corresponding to the lower network layer, wherein the lower network layer is the network layer with the lower network level of the two adjacent network layers, and the fusion level of the hierarchical image features is equal to the number of times feature fusion processing has been performed;
judge whether the fusion level of the hierarchical image features reaches the network level difference;
if the fusion level of the hierarchical image features reaches the network level difference, take the hierarchical image features as the fused image features of the first network layer;
and if the fusion level of the hierarchical image features does not reach the network level difference, perform feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain updated hierarchical image features corresponding to the lower network layer, and perform the step of judging whether the fusion level of the hierarchical image features reaches the network level difference.
Optionally, the second network layer is a network layer adjacent to the first network layer.
Optionally, the disease identification model further includes a feature processing layer, and the fused image features further include image features obtained by the feature processing layer performing feature extraction on the initial image features of the highest network layer.
Optionally, the processing component is configured to perform regression prediction processing on the fused image features through the disease identification model to obtain the position information of the disease part in the image to be identified.
Optionally, the training process of the disease identification model includes:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image characteristics to obtain initial classification category and initial position information of the sample image;
calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images;
and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
The embodiment of the application provides aquatic organism disease detection equipment, which can shoot the aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms; perform feature extraction and feature fusion processing on the image to be identified through the disease identification model to obtain fused image features of the image to be identified; and predict based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease. Since the classification category of the disease is determined by predicting on the image to be identified through the disease identification model, the efficiency of aquatic organism disease detection can be improved.
Based on the same technical concept, the embodiment of the application further provides an aquatic organism disease detection apparatus. As shown in fig. 5, the apparatus includes:
a shooting module 510, configured to shoot the aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
a feature fusion processing module 520, configured to perform feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fused image features of the image to be identified;
and a prediction module 530, configured to predict based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified.
The embodiment of the application provides an aquatic organism disease detection apparatus, which can shoot the aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms; perform feature extraction and feature fusion processing on the image to be identified through the disease identification model to obtain fused image features of the image to be identified; and predict based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease. Since the classification category of the disease is determined by predicting on the image to be identified through the disease identification model, the efficiency of aquatic organism disease detection can be improved.
Based on the same technical concept, the embodiment of the present application further provides an electronic device. As shown in fig. 6, it includes a processor 601, a communication interface 602, a memory 603 and a communication bus 604, where the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604;
a memory 603 for storing a computer program;
the processor 601 is configured to execute the program stored in the memory 603 to implement the above-mentioned method for detecting aquatic organism disease.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment provided herein, there is also provided a computer readable storage medium having stored therein a computer program which when executed by a processor implements the steps of any of the above-described aquatic organism disease detection methods.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the aquatic organism disease detection methods of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

1. A method for detecting a disease of an aquatic organism, the method comprising:
shooting aquatic organisms in a culture pond to obtain an image to be identified containing the aquatic organisms;
performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fused image features of the image to be identified, wherein the disease identification model comprises a plurality of network layers, a next network layer is determined in order of the network levels of the plurality of network layers from low to high, the calculation result output by the previous network layer is used as the input of the next network layer, and the next network layer performs feature extraction on the initial image features of the previous network layer to obtain the initial image features corresponding to the next network layer;
predicting based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified;
wherein the disease identification model is a neural network comprising a plurality of network layers, and the performing feature extraction and feature fusion processing on the image to be identified through the disease identification model to obtain the fused image features of the image to be identified comprises:
inputting the image to be identified into the lowest network layer, which has the lowest network level among the plurality of network layers, and performing feature extraction through each network layer in order from low to high network levels to obtain the initial image features corresponding to the network layer;
for a first network layer of the plurality of network layers, if the first network layer is not the highest network layer, determining the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to a second network layer, wherein the network level of the second network layer is higher than the network level of the first network layer.
2. The method according to claim 1, wherein the disease recognition model is a neural network including a plurality of network layers, the feature extraction and feature fusion processing are performed on the image to be recognized by the disease recognition model to obtain a fused image feature of the image to be recognized, and the method includes:
and aiming at a first network layer in the plurality of network layers, if the first network layer is the highest network layer with the highest network layer in the plurality of network layers, taking the initial image characteristics of the first network layer as the fusion image characteristics of the first network layer.
3. The method according to claim 2, wherein when the number of second network layers is at least two, the determining the fused image feature of the first network layer according to the initial image feature corresponding to the first network layer and the initial image feature corresponding to the second network layer includes:
determining a network level difference between the first network layer and the highest network layer;
performing feature fusion processing on initial image features corresponding to two adjacent network layers aiming at the first network layer and the second network layer to obtain hierarchical image features corresponding to a low network layer, wherein the low network layer is a network layer with the lowest network level in the two adjacent network layers, and the fusion level of the hierarchical image features is equal to the number of times of feature fusion processing;
judging whether the fusion level of the hierarchical image features reaches the network level difference;
if the fusion level of the hierarchical image features reaches the network level difference, taking the hierarchical image features as the fused image features of the first network layer;
and if the fusion level of the hierarchical image features does not reach the network level difference, performing feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain updated hierarchical image features corresponding to the lower network layer, and performing the step of judging whether the fusion level of the hierarchical image features reaches the network level difference.
4. The method of claim 2, wherein the second network layer is a network layer adjacent to the first network layer.
5. The method of claim 2, wherein the disease recognition model further comprises a feature processing layer, and the fused image features further comprise image features obtained by feature extraction of initial image features of the highest network layer by the feature processing layer.
6. The method according to claim 1, wherein the method further comprises:
and carrying out regression prediction processing on the fused image features through the disease identification model to obtain the position information of the disease part in the image to be identified.
7. The method of claim 1, wherein the training process of the disease recognition model comprises:
acquiring a training image set, wherein the training image set comprises a plurality of sample images, classification categories of diseases corresponding to each sample image and position information of disease parts in each sample image;
performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image;
predicting based on the initial fusion image features to obtain the initial classification category and initial position information of the sample image;
calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images;
and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
8. An aquatic organism disease detection apparatus characterized by comprising an imaging part and a processing part, wherein,
the imaging part is arranged in the culture pond and is used for shooting the aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the processing part is used for receiving the image to be identified shot by the imaging part, performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fused image features of the image to be identified, and predicting based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified, wherein the disease identification model comprises a plurality of network layers, a next network layer is determined in order of the network levels of the plurality of network layers from low to high, the calculation result output by the previous network layer is used as the input of the next network layer, and the next network layer performs feature extraction on the initial image features of the previous network layer to obtain the initial image features corresponding to the next network layer;
wherein the processing part is further configured to:
input the image to be identified into the lowest network layer, which has the lowest network level among the plurality of network layers, and perform feature extraction through each network layer to obtain the initial image features corresponding to the network layer;
for a first network layer of the plurality of network layers, if the first network layer is not the highest network layer, determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to a second network layer, wherein the network level of the second network layer is higher than the network level of the first network layer.
9. An aquatic organism disease detection device, the device comprising:
the shooting module is used for shooting aquatic organisms in the culture pond to obtain an image to be identified containing the aquatic organisms;
the feature fusion processing module is used for performing feature extraction and feature fusion processing on the image to be identified through a disease identification model to obtain fused image features of the image to be identified, wherein the disease identification model comprises a plurality of network layers, a next network layer is determined in order of the network levels of the plurality of network layers from low to high, the calculation result output by the previous network layer is used as the input of the next network layer, and the next network layer performs feature extraction on the initial image features of the previous network layer to obtain the initial image features corresponding to the next network layer;
the prediction module is used for predicting based on the fused image features through the disease identification model to obtain the classification category of the aquatic organism disease in the image to be identified;
wherein the feature fusion processing module is used for:
inputting the image to be identified into the lowest network layer, which has the lowest network level among the plurality of network layers, and performing feature extraction through each network layer to obtain the initial image features corresponding to the network layer;
for a first network layer of the plurality of network layers, if the first network layer is not the highest network layer, determining the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to a second network layer, wherein the network level of the second network layer is higher than the network level of the first network layer.
10. The apparatus of claim 9, wherein the disease recognition model is a neural network comprising a plurality of network layers, and the feature fusion processing module comprises:
the feature extraction sub-module is used for inputting the image to be identified into the lowest network layer, which has the lowest network level among the plurality of network layers, and performing feature extraction through each network layer in order from low to high network levels to obtain the initial image features corresponding to the network layer;
a determining sub-module, configured to, for a first network layer of the plurality of network layers, take the initial image features of the first network layer as the fused image features of the first network layer when the first network layer is the highest network layer with the highest network level among the plurality of network layers;
and the determining sub-module is further configured to determine the fused image features of the first network layer according to the initial image features corresponding to the first network layer and the initial image features corresponding to a second network layer when the first network layer is not the highest network layer, wherein the network level of the second network layer is higher than the network level of the first network layer.
11. The apparatus of claim 10, wherein when the number of second network layers is at least two, the determining submodule comprises:
a first determining unit configured to determine a network level difference between the first network layer and the highest network layer;
the feature fusion processing unit is used for carrying out feature fusion processing on initial image features corresponding to two adjacent network layers aiming at the first network layer and the second network layer to obtain hierarchical image features corresponding to a low network layer, wherein the low network layer is a network layer with the lowest network level in the two adjacent network layers, and the fusion level number of the hierarchical image features is equal to the number of times of feature fusion processing;
The judging unit is used for judging whether the fusion level number of the hierarchical image features reaches the network level difference value;
the second determining unit is used for taking the hierarchical image characteristics as the fusion image characteristics of the first network layer when the fusion level number of the hierarchical image characteristics reaches the network level difference value;
and the second determining unit is further configured to, when the fusion level of the hierarchical image features does not reach the network level difference, perform feature fusion processing on the hierarchical image features corresponding to two adjacent network layers to obtain updated hierarchical image features corresponding to the lower network layer, and perform the step of determining whether the fusion level of the hierarchical image features reaches the network level difference.
12. The apparatus of claim 10, wherein the second network layer is a network layer adjacent to the first network layer.
13. The apparatus of claim 10, wherein the disease recognition model further comprises a feature processing layer, and wherein the fused image features further comprise image features obtained by feature extraction of initial image features of the highest network layer by the feature processing layer.
14. The apparatus of claim 9, wherein:
the prediction module is further configured to perform regression prediction processing on the fused image features through the disease identification model to obtain the position information of the disease part in the image to be identified.
15. The device according to claim 9, further comprising a training module, wherein the training module is configured to obtain a training image set, the training image set including a plurality of sample images, classification categories of diseases corresponding to each sample image, and location information of disease sites in each sample image; performing feature extraction and feature fusion processing on each sample image through an initial model to obtain initial fusion image features of the sample image; predicting based on the initial fusion image characteristics to obtain initial classification category and initial position information of the sample image; calculating a function value of a loss function based on the initial classification category and the initial position information of the plurality of sample images; and changing parameters of the initial model according to the function value until the function value is smaller than a preset threshold value, so as to obtain the disease identification model.
16. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the method steps according to any of claims 1-7.
CN202010352073.2A 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment Active CN111563439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010352073.2A CN111563439B (en) 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment

Publications (2)

Publication Number Publication Date
CN111563439A CN111563439A (en) 2020-08-21
CN111563439B true CN111563439B (en) 2023-08-08

Family

ID=72073283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010352073.2A Active CN111563439B (en) 2020-04-28 2020-04-28 Aquatic organism disease detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN111563439B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052114A (en) * 2021-04-02 2021-06-29 东营市阔海水产科技有限公司 Dead shrimp identification method, terminal device and readable storage medium
CN113254458B (en) * 2021-07-07 2022-04-08 赛汇检测(广州)有限公司 Intelligent diagnosis method for aquatic disease
CN113780073B (en) * 2021-08-03 2023-12-05 华南农业大学 Device and method for auxiliary estimation of chicken flock uniformity

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609598A (en) * 2017-09-27 2018-01-19 武汉斗鱼网络科技有限公司 Image authentication model training method, device and readable storage medium storing program for executing
CN108573277A (en) * 2018-03-12 2018-09-25 北京交通大学 A kind of pantograph carbon slide surface disease automatic recognition system and method
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure
CN109801293A (en) * 2019-01-08 2019-05-24 平安科技(深圳)有限公司 Remote Sensing Image Segmentation, device and storage medium, server
CN109919013A (en) * 2019-01-28 2019-06-21 浙江英索人工智能科技有限公司 Method for detecting human face and device in video image based on deep learning
WO2019127451A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Image recognition method and cloud system
CN109977942A (en) * 2019-02-02 2019-07-05 浙江工业大学 A kind of scene character recognition method based on scene classification and super-resolution
CN110210434A (en) * 2019-06-10 2019-09-06 四川大学 Pest and disease damage recognition methods and device
CN110348387A (en) * 2019-07-12 2019-10-18 腾讯科技(深圳)有限公司 A kind of image processing method, device and computer readable storage medium
CN110378305A (en) * 2019-07-24 2019-10-25 中南民族大学 Tealeaves disease recognition method, equipment, storage medium and device
CN110443254A (en) * 2019-08-02 2019-11-12 上海联影医疗科技有限公司 The detection method of metallic region, device, equipment and storage medium in image
WO2019233341A1 (en) * 2018-06-08 2019-12-12 Oppo广东移动通信有限公司 Image processing method and apparatus, computer readable storage medium, and computer device
CN110728680A (en) * 2019-10-25 2020-01-24 上海眼控科技股份有限公司 Automobile data recorder detection method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intelligent recognition system for crop diseases and insect pests based on deep learning; Chen Tianjiao; Zeng Juan; Xie Chengjun; Wang Rujing; Liu Wancai; Zhang Jie; Li Rui; Chen Hongbo; Hu Haiying; Dong Wei; China Plant Protection (No. 04); 28-36 *

Also Published As

Publication number Publication date
CN111563439A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN111563439B (en) Aquatic organism disease detection method, device and equipment
CN114424253B (en) Model training method and device, storage medium and electronic equipment
CN109740668B (en) Deep model training method and device, electronic equipment and storage medium
CN108366203B (en) Composition method, composition device, electronic equipment and storage medium
WO2019200735A1 (en) Livestock feature vector acquisition method, apparatus, computer device and storage medium
CN111597937B (en) Fish gesture recognition method, device, equipment and storage medium
CN109903282B (en) Cell counting method, system, device and storage medium
CN112712518B (en) Fish counting method and device, electronic equipment and storage medium
CN110232721B (en) Training method and device for automatic sketching model of organs at risk
CN110163798B (en) Method and system for detecting damage of purse net in fishing ground
CN111325181B (en) State monitoring method and device, electronic equipment and storage medium
CN110991443A (en) Key point detection method, image processing method, key point detection device, image processing device, electronic equipment and storage medium
CN112634202A (en) Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
CN113793301B (en) Training method of fundus image analysis model based on dense convolution network model
CN111862189B (en) Body size information determining method, body size information determining device, electronic equipment and computer readable medium
CN112836625A (en) Face living body detection method and device and electronic equipment
CN113221718B (en) Formula identification method, device, storage medium and electronic equipment
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
CN116778309A (en) Residual bait monitoring method, device, system and storage medium
CN112348808A (en) Screen perspective detection method and device
CN111753775A (en) Fish growth assessment method, device, equipment and storage medium
Muñoz-Benavent et al. Impact evaluation of deep learning on image segmentation for automatic bluefin tuna sizing
CN113744280B (en) Image processing method, device, equipment and medium
CN118334336A (en) Colposcope image segmentation model construction method, image classification method and device
CN112559785A (en) Bird image recognition system and method based on big data training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant