CN112766040B - Method, device, apparatus and readable storage medium for detecting residual bait - Google Patents


Info

Publication number
CN112766040B
CN112766040B (application CN202011545764.0A)
Authority
CN
China
Prior art keywords
neural network
network model
residual bait
residual
data set
Prior art date
Legal status
Active
Application number
CN202011545764.0A
Other languages
Chinese (zh)
Other versions
CN112766040A (en)
Inventor
周超
刘杨
杨信廷
孙传恒
赵振锡
徐大明
Current Assignee
Beijing Research Center for Information Technology in Agriculture
Original Assignee
Beijing Research Center for Information Technology in Agriculture
Priority date
Filing date
Publication date
Application filed by Beijing Research Center for Information Technology in Agriculture
Priority: CN202011545764.0A
Publication of CN112766040A
Application granted
Publication of CN112766040B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Abstract

The invention provides a method, device, apparatus and readable storage medium for detecting residual bait. A frame-extraction operation is performed on received video to generate residual bait images, which are preprocessed to produce a training data set and a validation data set. The algorithm network structure of an initial neural network model is modified according to the characteristics of residual bait to generate an intermediate neural network model; initial parameters of the intermediate neural network model are set, the training data set and validation data set are input into it, and it is trained to generate a target neural network model. A test picture is then input into the target neural network model, which outputs a residual bait recognition result. Because the algorithm network structure of the initial neural network model is modified according to the characteristics of residual bait, the model matches those characteristics, interference from the external environment is better suppressed, and the recognition accuracy of residual bait is improved.

Description

Method, device, apparatus and readable storage medium for detecting residual bait
Technical Field
The present invention relates to the field of machine learning technology, and in particular to a method, an apparatus, a device and a readable storage medium for detecting residual bait.
Background
In aquaculture, real-time detection and monitoring of changes in the residual bait in the culture water is an important basis for formulating a scientific feeding strategy; it can effectively reduce bait waste and thereby achieve a win-win of economic and ecological benefit.
Underwater residual bait detection faces a number of challenges: interference from in-water impurities, floating matter and fish; reduced contrast and blurred images caused by lighting conditions; the small-target problem caused by the small size of bait particles; the large number and high density of bait caused by dense feeding; and motion blur caused by the residual bait sinking through the water. Machine vision methods based on traditional machine learning identify underwater residual bait mainly in idealized, interference-free environments; in complex scenes with interference from underwater floating matter, fish excreta and the like, their recognition accuracy is low.
Disclosure of Invention
The invention provides a method, device, apparatus and readable storage medium for detecting residual bait, which overcome the low residual bait recognition accuracy of the prior art and realize accurate identification of residual bait.
The invention provides a method for detecting residual bait, which comprises the following steps:
performing a frame-extraction operation on the received video to generate residual bait images;
preprocessing the residual bait image to generate a training data set and a verification data set;
acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model;
setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model;
and inputting the test picture into a target neural network model, and outputting a residual bait identification result.
According to the method for detecting residual bait provided by the invention, obtaining an initial neural network model and modifying its algorithm network structure according to the residual bait characteristics to generate an intermediate neural network model comprises the following steps:
obtaining an initial neural network model, modifying an output feature layer of the basic network architecture according to the residual bait characteristics, applying a dense connection mode to the residual modules in the first backbone network, and performing a redundancy-elimination operation on the basic network architecture to generate an intermediate neural network model.
According to the method for detecting residual bait provided by the invention, modifying the output feature layer of the basic network architecture comprises the following steps:
adding a first preset number of up-sampling operations to the convolution layers of the basic network architecture, removing a second preset number of down-sampling operations, deleting output layers below a preset size threshold among the convolution layers, and fusing the output of the convolution layers with the corresponding layers in the first backbone network.
According to the method for detecting residual bait provided by the invention, applying the dense connection mode to the residual modules in the first backbone network comprises the following steps:
modifying the residual modules in the first backbone network into a dense connection mechanism and adding a third preset number of shortcut connections.
According to the method for detecting residual bait provided by the invention, the convolution modules in the first backbone network comprise a first, second, third, fourth and fifth convolution module, and performing the redundancy-elimination operation on the basic network architecture comprises the following steps:
setting the number of layers of the first, second, third, fourth and fifth convolution modules to 1, 2, 4, 4 and 2 in sequence.
According to the method for detecting the residual bait provided by the invention, the loss function of the target neural network model adopts the CIOU loss function.
According to the method for detecting residual bait provided by the invention, preprocessing the residual bait image to generate a training data set and a validation data set comprises the following steps:
performing image enhancement on the residual bait images using contrast-limited adaptive histogram equalization, and dividing the enhanced residual bait images according to a preset ratio to generate a training data set and a validation data set.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method for detecting residual bait as described above when executing the program.
The invention also provides a detection apparatus comprising an underwater camera, a light source, an illuminance transmitter and the electronic device, the electronic device being connected to the underwater camera, the light source and the illuminance transmitter respectively.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the method for detecting residual bait as described above.
According to the residual bait detection method provided by the invention, residual bait images are generated by a frame-extraction operation on the received video and are preprocessed to generate a training data set and a validation data set. The algorithm network structure of the initial neural network model is modified according to the residual bait characteristics to generate an intermediate neural network model; initial parameters of the intermediate neural network model are set, the training data set and validation data set are input into it, and it is trained to generate the target neural network model. A test picture is then input into the target neural network model, which outputs a residual bait recognition result. Because the algorithm network structure of the initial neural network model is modified according to the characteristics of the residual bait, the model matches those characteristics, interference from the external environment is better suppressed, and the recognition accuracy of residual bait is improved.
Drawings
To more clearly illustrate the technical solutions of the invention or of the prior art, the drawings used in the embodiments are briefly described below. The drawings described below are obviously only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for detecting residual bait provided by the invention;
FIG. 2 is an algorithm schematic diagram of the method for detecting residual bait provided by the invention;
FIG. 3 is a schematic structural diagram of an illuminance transmitter;
FIG. 4 is a schematic structural view of the detecting device of the present invention;
FIG. 5a is an original test picture;
FIG. 5b is the residual bait detection result for the test picture;
fig. 6 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals:
1: underwater camera; 2: light source; 3: illuminance transmitter;
4: arithmetic processor; 31: illuminance sensor; 32: microcontroller;
33: communication interface; 100: Dense unit; 200: D-CSP X;
300: PANet; 400: Head; 500: Loss;
810: processor; 820: communication interface; 830: memory;
840: communication bus.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The following describes embodiments of the present invention with reference to fig. 1-6.
Referring to fig. 1, the present invention provides a method of detecting residual bait, comprising:
step S100: and carrying out frame taking operation on the received video to generate a residual bait image. In this example, the detection of residual bait for aquaculture is described in the context of this example. In order to detect the density of the residual bait under water, the underwater picture needs to be sampled in a video monitoring mode. Because the direct object of the subsequent processing is a picture, a frame taking operation is required to be performed on the sampled video, and a certain number of residual bait images related to the underwater bait situation are generated. The format of the residual bait picture is again not limited, and may be bmp, jpg, png, tif, gif or the like, for example.
Step S200: preprocess the residual bait images to generate a training data set and a validation data set. Image preprocessing is the processing performed on an input image before feature extraction, segmentation and matching in image analysis. Its main purpose is to eliminate irrelevant information, recover useful real information, enhance the detectability of relevant information and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching and recognition.
In this embodiment, the training data set and the validation data set are both used for training the neural network model.
Step S300: and acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model.
It is worth noting that the initial neural network model uses the YOLO-v4 algorithm, a common algorithm in the field of object detection. Residual bait is characterized by small target size, a requirement for real-time detection feedback, and repetitive features. To increase recognition accuracy, the YOLO-v4 algorithm is improved based on these characteristics. In this embodiment, the output feature layer of FPN+PANet in the YOLO-v4 algorithm is modified, a dense connection mode is applied to the residual modules of CSPDarknet53 in the Backbone, a redundancy-elimination operation is performed on CSPDarknet53 in the Backbone, and the number of network layers of CSPDarknet53 is reduced. Here FPN is the Feature Pyramid Network and PANet is the Path Aggregation Network.
YOLO (You Only Look Once: Unified, Real-Time Object Detection) is an object detection system based on a single neural network; YOLO-v4 is an algorithm structure developed from YOLO.
Step S400: setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model.
It is easy to understand that when training the intermediate neural network model, initial values including weights and bias terms must be given. The training data set is then input into the intermediate neural network model for training; after training, the validation data set is input to verify how well the trained model recognizes residual bait, and training ends when the recognition performance meets the design requirement.
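The validation-gated stopping described above can be sketched as follows; the per-epoch validation score, target value and patience parameter are illustrative assumptions, not details from the source:

```python
def stop_epoch(val_scores, target, patience=3):
    """Index of the epoch at which training ends: either the validation
    score first meets the design requirement, or it has failed to improve
    for `patience` consecutive epochs."""
    best, stalled = float("-inf"), 0
    for epoch, score in enumerate(val_scores):
        if score >= target:
            return epoch
        if score > best:
            best, stalled = score, 0
        else:
            stalled += 1
            if stalled >= patience:
                return epoch
    return len(val_scores) - 1
```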
Step S500: and inputting the test picture into a target neural network model, and outputting a residual bait identification result.
Referring to fig. 5a and 5b, it should be noted that, to make the residual bait density intuitive to the human eye, the recognition result may be further annotated with computer assistance. After computer-aided annotation, the residual bait density can be seen visually in fig. 5b.
According to the residual bait detection method provided by the invention, residual bait images are generated by a frame-extraction operation on the received video and are preprocessed to generate a training data set and a validation data set. The algorithm network structure of the initial neural network model is modified according to the residual bait characteristics to generate an intermediate neural network model; initial parameters of the intermediate neural network model are set, the training data set and validation data set are input into it, and it is trained to generate the target neural network model. A test picture is then input into the target neural network model, which outputs a residual bait recognition result. Because the algorithm network structure of the initial neural network model is modified according to the characteristics of the residual bait, the model matches those characteristics, interference from the external environment is better suppressed, and the recognition accuracy of residual bait is improved.
In an embodiment, obtaining the initial neural network model and modifying its algorithm network structure according to the residual bait characteristics to generate an intermediate neural network model comprises:
obtaining an initial neural network model, modifying an output feature layer of the basic network architecture according to the residual bait characteristics, applying a dense connection mode to the residual modules in the first backbone network, and performing a redundancy-elimination operation on the basic network architecture to generate an intermediate neural network model.
Referring to fig. 2, it should be noted that in the YOLO-v4 algorithm adopted by the initial neural network model, the basic network architecture is the Backbone and the first backbone network is CSPDarknet53; in this embodiment, CSPDarknet53 serves as the Backbone. The residual module, Res unit, is one of the modules making up CSPDarknet53.
In this embodiment, the output feature layer of the Backbone is modified according to the residual bait features, the residual modules in CSPDarknet53 are connected in a dense manner, and a redundancy-elimination operation is performed on the Backbone to generate the intermediate neural network model.
According to the method for detecting residual bait provided by the invention, modifying the output feature layer of the basic network architecture (Backbone) comprises the following steps:
adding a first preset number of up-sampling operations to the convolution layers of the basic network architecture, removing a second preset number of down-sampling operations, deleting output layers below a preset size threshold among the convolution layers, and fusing the output of the convolution layers with the corresponding layers in the first backbone network (CSPDarknet53) to generate a first feature output layer.
In this embodiment, the output feature layers of FPN and PANet in YOLO-v4 are operated on. The convolution output of the Backbone is up-sampled once more, and the output is feature-fused with the corresponding convolution layers in CSPDarknet53 to generate a larger feature output layer; two feature output layers with smaller output image sizes are deleted by removing two down-sampling operations at the end of the Backbone's convolution output, generating the first feature output layer. In this way the Backbone network retains more shallow features, produces feature output layers more favorable to small-target detection, acquires richer fine-grained information, and removes the feature output layer responsible for large-target detection, which is useless for this data set.
In this embodiment, when the input picture size of the network containing FPN and PANet is 416×416×3, the output feature layer sizes are 52×52×18 and 104×104×18, respectively.
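The stated output sizes follow from the input resolution and the sampling strides; a minimal check (the strides 8 and 4 are inferred from 416/52 and 416/104, not stated explicitly in the source):

```python
def feature_map_size(input_size, stride):
    """Spatial size of a detection head operating at the given stride."""
    assert input_size % stride == 0
    return input_size // stride

# 416x416 input with detection heads at stride 8 and stride 4
sizes = [feature_map_size(416, s) for s in (8, 4)]
```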
In an embodiment, applying the dense connection mode to the residual modules (Res units) in the first backbone network comprises:
modifying the residual modules in the first backbone network into a dense connection mechanism and adding a third preset number of shortcut connections.
The residual module (Res unit) of CSPDarknet53 is modified into a dense connection mode; the Res unit is the smallest unit module constituting CSPDarknet53. Within the Res unit, the dense connection mechanism of DenseNet is introduced and two shortcut connections are added; the resulting module is named the Dense unit. The modified dense connection mechanism alleviates the vanishing-gradient problem and strengthens feature propagation and reuse for small-target detection.
The convolution modules in the first backbone network comprise a first, second, third, fourth and fifth convolution module, and performing the redundancy-elimination operation on the basic network architecture comprises:
setting the number of layers of the first, second, third, fourth and fifth convolution modules to 1, 2, 4, 4 and 2 in sequence, which comprises:
modifying the convolution modules in the first backbone network into sequentially adjacent dense connection blocks. The dense connection blocks comprise the first to fifth convolution modules. The first convolution module includes 1 CSP (Cross Stage Partial) component containing the Dense unit described above, namely a D-CSP; the second convolution module comprises 2 D-CSPs, the third 4 D-CSPs, the fourth 4 D-CSPs, and the fifth 2 D-CSPs.
With continued reference to fig. 2, Conv in fig. 2 represents one convolution layer, Conv×5 represents 5 convolution layers, SPP represents the spatial pyramid pooling network, and C1-C5 are the 5 dense connection blocks. F2 and F3 represent feature output layers. Conv Filters is the smallest constituent unit of the convolution layer. In this embodiment, Conv Filters = 3×(K+5), where K is the number of channels corresponding to the classification loss (one per class) and 5 comprises the 4 channels for the CIOU box regression plus 1 channel for the confidence. The node '+' represents an add operation and 'c' represents a concatenate operation.
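The Conv Filters formula can be checked directly: with a single residual-bait class (K = 1, which is consistent with the 18-channel depth of the 52×52×18 and 104×104×18 output layers), it reproduces the stated channel count:

```python
def conv_filters(num_classes):
    """Channels of the YOLO head: 3 anchors x (4 CIOU box values
    + 1 confidence value + num_classes class scores)."""
    return 3 * (num_classes + 5)

filters = conv_filters(1)  # single residual-bait class
```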
In one embodiment, the loss function of the target neural network model adopts the CIOU (Complete-IoU) loss function.
The CIOU loss is calculated as:

$$L_{CIOU} = 1 - IoU + \frac{\rho^2(A_{ctr}, B_{ctr})}{c^2} + \alpha v$$

where $\rho(A_{ctr}, B_{ctr})$ is the Euclidean distance between the center points of the predicted and ground-truth boxes, and $c$ is the diagonal length of the smallest region enclosing both boxes. $\alpha$ and $v$ are the penalty terms for aspect ratio, where $\alpha$ is a positive weight and $v$ measures the consistency of the aspect ratios, specifically defined as:

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$

where $w^{gt}$ and $h^{gt}$ are the width and height of the ground-truth box, and $w$ and $h$ are the width and height of the predicted box.

The loss function consists of three parts, namely regression-box loss, confidence loss and classification loss:

$$L = \sum_{i=0}^{k \times k} \sum_{j} \mathbb{1}_{ij}^{obj} \, L_{CIOU} + L_{conf} + L_{cls}$$

where $i$ denotes the $i$-th cell of the feature output layer, $k$ is the grid size, $j$ indexes the $j$-th prediction box responsible for the cell, $C_i$ is the confidence of the grid cell entering the confidence term, $\mathbb{1}_{ij}^{obj}$ indicates whether an object falls in the $i$-th cell, and $L_{CIOU}$ is the CIOU loss defined above; the confidence loss $L_{conf}$ and classification loss $L_{cls}$ are, as in YOLO-v4, cross-entropy losses over $C_i$ and the class probabilities.
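A plain-Python sketch of the CIOU loss for axis-aligned boxes given as (x1, y1, x2, y2); this follows the standard Complete-IoU definition and is illustrative, not the patent's own implementation:

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ciou_loss(pred, gt):
    """CIOU loss = 1 - IoU + center-distance penalty + aspect-ratio penalty."""
    i = iou(pred, gt)
    # squared Euclidean distance between box centers
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    # squared diagonal of the smallest box enclosing both
    cx1, cy1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    cx2, cy2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    w, h = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(w / h)) ** 2
    alpha = v / ((1 - i) + v) if v > 0 else 0.0
    return 1 - i + rho2 / c2 + alpha * v
```

For identical boxes every term vanishes and the loss is 0; for disjoint boxes the center-distance penalty keeps the gradient informative even though IoU is 0, which is the motivation for CIOU over plain IoU loss.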
In this embodiment, on a 64-bit Windows 10 operating system platform, the underwater residual bait recognition model was built in C on the Darknet deep learning framework, and training was completed on an NVIDIA GTX 2080ti GPU. The batch size on the single GPU is 16 pictures of 416×416 pixels, the minimum batch (subdivision) is 1, the initial learning rate is set to 0.001, and training runs for 12000 batches. The acceleration environment is CUDA 10.2 and CUDNN 7.6.5, the development environment is Visual Studio 2019, and the OpenCV 3.4.0 library is used.
In one embodiment, the preprocessing of the residual bait image to generate a training data set and a verification data set includes:
and carrying out picture enhancement on the residual bait image by using a self-adaptive histogram equalization method with limited contrast, and distributing the residual bait image after picture enhancement according to a preset proportion to generate a training data set and a verification data set.
In this embodiment, the training data set and validation data set are divided in a ratio of 0.8 to 0.2, and the data sets are labeled using the open-source tool LabelImg on GitHub. Tables 1 and 2 show the results obtained with the residual bait detection method. In Table 1, AP refers to average precision, the mean of the precision of each class in multi-class prediction. AP50, AP75 and AP50:95 denote the AP at a detector IoU threshold of 0.5, at 0.75, and averaged over thresholds from 0.5 to 0.95.
Table 1 algorithm results table
In Table 2, TP represents True Positive, FP represents False Positive, and TN represents True Negative.
Precision represents precision, Recall represents recall, and F1-score represents the harmonic mean of precision and recall.
Table 2 shows results for a sample size of 1193, with conf-thresh = 0.25 and IOU = 0.5.
Table 2 validation set evaluation results table
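The metrics reported in Table 2 belong to the standard precision/recall family; a sketch of their computation (the counts below are illustrative, not the table's actual values):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1-score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```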
The electronic device provided by the invention is described below, and the electronic device described below and the method for detecting residual bait described above can be referred to correspondingly.
In summary, the method for detecting underwater residual bait in aquaculture has the following advantages:
1. Compared with traditional machine-learning approaches to underwater residual bait recognition, the invention applies a deep-learning object detection method to underwater residual bait recognition for the first time and obtains higher precision.
2. In a further technical scheme, to handle the large number of small targets in the data set, the invention modifies the PANet connection mode to obtain feature output layers with richer fine-grained information, and prunes the feature output layer responsible for large-target detection. This addresses the high false-detection and missed-detection rates of residual bait caused by high density, overlap, very small targets and the like.
3. In a further technical scheme, to accelerate model training, dense connections are applied to the model's residual network, strengthening feature propagation and reuse and mitigating the vanishing-gradient problem during training. Model performance and residual bait recognition precision are further improved.
4. In a further technical scheme, a redundancy-elimination operation is performed on the residual blocks in the model's CSPDarknet53, reducing the amount of computation by about one third and increasing recognition speed.
Fig. 6 illustrates a physical schematic diagram of an electronic device. As shown in fig. 6, the electronic device may include: processor 810, communication interface 820, memory 830 and communication bus 840, wherein processor 810, communication interface 820 and memory 830 communicate with one another through communication bus 840. Processor 810 may invoke logic instructions in memory 830 to perform a method of detecting residual bait, the method comprising:
performing a frame-extraction operation on the received video to generate residual bait images;
preprocessing the residual bait image to generate a training data set and a verification data set;
acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model;
setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model;
and inputting a test picture into the target neural network model, and outputting a residual bait identification result.
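A minimal sketch of the data-preparation steps above — extracting frames from the received video and distributing the preprocessed images into training and verification sets — under assumed values (25 fps video, one sampled frame per second, an 80/20 split; none of these values are specified by the patent):

```python
import random

def extract_frames(num_frames, fps, seconds_per_sample=1.0):
    # Sample one frame index per `seconds_per_sample` seconds of video;
    # in practice each index would be decoded into an image, e.g. with OpenCV.
    step = max(1, int(fps * seconds_per_sample))
    return list(range(0, num_frames, step))

def split_dataset(items, train_ratio=0.8, seed=42):
    # Shuffle and split the image list into training / verification sets
    # according to a preset proportion (0.8 is an assumption).
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

frames = extract_frames(num_frames=750, fps=25)   # 30 s of 25 fps video
train, val = split_dataset([f"img_{i}.jpg" for i in frames])
print(len(frames), len(train), len(val))          # 30 24 6
```

The fixed seed keeps the split reproducible between training runs, so the verification set never leaks into training.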
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Referring to fig. 4, the invention further provides a detection device comprising an underwater camera 1, a light source 2, an illuminance transmitter 3 and the electronic device described above, wherein the electronic device is connected to the underwater camera 1, the light source 2 and the illuminance transmitter 3, respectively.
The underwater camera 1 collects underwater images of residual bait under the control of the electronic device. The light source 2 supplements light for the underwater camera 1. The illuminance transmitter 3 senses the ambient light intensity and transmits the light intensity information to the operation processor 4; the electronic device controls the on/off state and illumination intensity of the light source according to this information. The electronic device receives the images collected by the underwater camera and performs real-time residual bait identification on them.
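The light-control behavior described here could, for example, be implemented as a simple hysteresis rule on the illuminance reading, so the light does not flicker when the measurement hovers near a single threshold. The thresholds and readings below are illustrative assumptions, not values from the patent:

```python
def light_command(lux, on_threshold=50.0, off_threshold=80.0, light_on=False):
    # Hysteresis control: switch the supplementary light on below
    # `on_threshold` lux, off above `off_threshold`, and keep the
    # previous state inside the dead band in between.
    if lux < on_threshold:
        return True
    if lux > off_threshold:
        return False
    return light_on  # inside the dead band: hold the previous state

state = False
readings = [120.0, 60.0, 40.0, 70.0, 95.0]  # simulated illuminance samples
history = []
for lux in readings:
    state = light_command(lux, light_on=state)
    history.append(state)
print(history)  # [False, False, True, True, False]
```

The dead band between the two thresholds is what prevents rapid on/off cycling at dusk or in turbid water, when readings drift slowly across a boundary.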
Referring to fig. 3, further, the illuminance transmitter 3 includes an illuminance sensor 31, a microprocessor 32, and a communication interface 33. The microprocessor 32 is connected to the illuminance sensor 31 and the communication interface 33, respectively; it controls the illuminance sensor 31 to collect data and transmits the collected data to the electronic device through the communication interface 33.
In summary, in the method for detecting residual bait of this embodiment, the underwater camera acquires underwater images during feeding, the light source and the illuminance transmitter supplement light when underwater illumination is insufficient, and the operation processor then uses the trained model to identify residual bait in the complex underwater environment. This addresses the low identification accuracy caused by the small size, high density and blurriness of underwater bait targets, so that residual bait can be effectively detected in real aquaculture environments.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of detecting residual bait provided above, the method comprising:
performing a frame-extraction operation on the received video to generate a residual bait image;
preprocessing the residual bait image to generate a training data set and a verification data set;
acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model;
setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model;
and inputting a test picture into the target neural network model, and outputting a residual bait identification result.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of detecting residual bait provided above, the method comprising:
performing a frame-extraction operation on the received video to generate a residual bait image;
preprocessing the residual bait image to generate a training data set and a verification data set;
acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model;
setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model;
and inputting a test picture into the target neural network model, and outputting a residual bait identification result.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence, or the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in each embodiment or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A method of detecting residual bait, comprising:
performing a frame-extraction operation on the received video to generate a residual bait image;
preprocessing the residual bait image to generate a training data set and a verification data set;
acquiring an initial neural network model, and modifying an algorithm network structure of the initial neural network model according to the residual bait characteristics to generate an intermediate neural network model;
setting initial parameters of the intermediate neural network model, inputting the training data set and the verification data set into the intermediate neural network model, training the intermediate neural network model, and generating a target neural network model;
inputting a test picture into the target neural network model, and outputting a residual bait identification result;
the obtaining the initial neural network model, modifying the algorithm network structure of the initial neural network model according to the residual bait characteristics, and generating an intermediate neural network model comprises the following steps:
acquiring an initial neural network model, modifying an output feature layer of the basic network architecture according to the residual bait characteristics, applying a dense connection mode to the residual modules in the first backbone network, and performing a redundancy-removal operation on the basic network architecture to generate an intermediate neural network model;
the method uses YOLO-v4 as the initial neural network, modifies the output feature layer of FPN+PANet in YOLO-v4, applies the dense connection mode to the residual modules of CSPDarknet53 in the backbone, performs the redundancy-removal operation on CSPDarknet53 in the backbone, and reduces the number of network layers of CSPDarknet53 in the backbone to obtain the target detection algorithm.
2. The method of claim 1, wherein modifying the output feature layer of the infrastructure comprises:
and adding a first preset number of up-sampling operations to the convolution layers of the basic network architecture, reducing a second preset number of down-sampling operations, deleting output layers below a preset size threshold from the convolution layers, and fusing the outputs of the convolution layers with the corresponding layers in the first backbone network.
3. The method for detecting residual bait according to claim 1, wherein using the dense connection mode for the residual modules in the first backbone network comprises:
and modifying the residual modules in the first backbone network into a dense connection mechanism, and adding a third preset number of direct connections.
4. The method of claim 1, wherein the convolution modules in the first backbone network include a first convolution module, a second convolution module, a third convolution module, a fourth convolution module, and a fifth convolution module, and performing the redundancy-removal operation on the basic network architecture comprises:
and sequentially modifying the numbers of layers of the first convolution module, the second convolution module, the third convolution module, the fourth convolution module and the fifth convolution module to 1, 2, 4 and 2.
5. A method of detecting residual bait according to any one of claims 1-4, wherein the loss function of the target neural network model employs a CIoU loss function.
6. A method of detecting residual bait according to any one of claims 1-4, wherein preprocessing the residual bait image to generate a training data set and a verification data set comprises:
and performing picture enhancement on the residual bait image using contrast-limited adaptive histogram equalization (CLAHE), and distributing the enhanced residual bait images according to a preset proportion to generate a training data set and a verification data set.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method for detecting residual bait according to any one of claims 1-6 when the program is executed.
8. A detection device comprising an underwater camera, a light source, an illuminance transmitter, and the electronic device of claim 7, wherein the electronic device is connected to the underwater camera, the light source, and the illuminance transmitter, respectively.
9. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the steps of the method for detecting residual bait according to any one of claims 1-6.
CN202011545764.0A 2020-12-23 2020-12-23 Method, device, apparatus and readable storage medium for detecting residual bait Active CN112766040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011545764.0A CN112766040B (en) 2020-12-23 2020-12-23 Method, device, apparatus and readable storage medium for detecting residual bait

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011545764.0A CN112766040B (en) 2020-12-23 2020-12-23 Method, device, apparatus and readable storage medium for detecting residual bait

Publications (2)

Publication Number Publication Date
CN112766040A CN112766040A (en) 2021-05-07
CN112766040B true CN112766040B (en) 2024-02-06

Family

ID=75695467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011545764.0A Active CN112766040B (en) 2020-12-23 2020-12-23 Method, device, apparatus and readable storage medium for detecting residual bait

Country Status (1)

Country Link
CN (1) CN112766040B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113615629A (en) * 2021-03-19 2021-11-09 东营市阔海水产科技有限公司 Aquaculture water quality monitoring method, terminal equipment and readable storage medium
CN113192040B (en) * 2021-05-10 2023-09-22 浙江理工大学 Fabric flaw detection method based on YOLO v4 improved algorithm
CN113822844A (en) * 2021-05-21 2021-12-21 国电电力宁夏新能源开发有限公司 Unmanned aerial vehicle inspection defect detection method and device for blades of wind turbine generator system and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110826592A (en) * 2019-09-25 2020-02-21 浙江大学宁波理工学院 Prawn culture residual bait counting method based on full convolution neural network
CN111240200A (en) * 2020-01-16 2020-06-05 北京农业信息技术研究中心 Fish swarm feeding control method, fish swarm feeding control device and feeding boat

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9137990B2 (en) * 2011-07-15 2015-09-22 The United States Of America As Represented By The Secretary Of Agriculture Methods of monitoring and controlling the walnut twig beetle, Pityophthorus juglandis

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110826592A (en) * 2019-09-25 2020-02-21 浙江大学宁波理工学院 Prawn culture residual bait counting method based on full convolution neural network
CN111240200A (en) * 2020-01-16 2020-06-05 北京农业信息技术研究中心 Fish swarm feeding control method, fish swarm feeding control device and feeding boat

Non-Patent Citations (5)

Title
Detection of uneaten fish food pellets in underwater images for aquaculture; Dawei Li et al.; Aquacultural Engineering; 78:85-94 *
Real-time detection of uneaten feed pellets in underwater images for aquaculture using an improved YOLO-V4 network; Xuelong Hu et al.; Computers and Electronics in Agriculture; 1-11 *
Research on underwater river crab recognition based on dark channel prior and YOLO; He Fan, Zhao De'an; Software Guide (No. 5); 35-38 *
Underwater river crab recognition method based on machine vision; Zhao De'an et al.; Transactions of the Chinese Society for Agricultural Machinery; 151-158 *
Research on a fish feeding prediction method based on an adaptive fuzzy neural network; Chen Lan et al.; Journal of Agricultural Science and Technology of China; 91-100 *

Also Published As

Publication number Publication date
CN112766040A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112766040B (en) Method, device, apparatus and readable storage medium for detecting residual bait
US11798132B2 (en) Image inpainting method and apparatus, computer device, and storage medium
KR102574141B1 (en) Image display method and device
JP2018055259A (en) Information processing apparatus, information processing method and program
US20230080693A1 (en) Image processing method, electronic device and readable storage medium
CN111444744A (en) Living body detection method, living body detection device, and storage medium
CN112528782B (en) Underwater fish target detection method and device
CN112200057A (en) Face living body detection method and device, electronic equipment and storage medium
CN111080531A (en) Super-resolution reconstruction method, system and device for underwater fish image
CN112561879B (en) Ambiguity evaluation model training method, image ambiguity evaluation method and image ambiguity evaluation device
CN112733929A (en) Improved method for detecting small target and shielded target of Yolo underwater image
KR102262671B1 (en) Method and storage medium for applying bokeh effect to video images
CN111652231B (en) Casting defect semantic segmentation method based on feature self-adaptive selection
CN115578615A (en) Night traffic sign image detection model establishing method based on deep learning
CN115239581A (en) Image processing method and related device
US11783454B2 (en) Saliency map generation method and image processing system using the same
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN112926667B (en) Method and device for detecting saliency target of depth fusion edge and high-level feature
CN116703925B (en) Bearing defect detection method and device, electronic equipment and storage medium
CN115358952B (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN114119428B (en) Image deblurring method and device
CN112861687B (en) Mask wearing detection method, device, equipment and medium for access control system
CN114897728A (en) Image enhancement method and device, terminal equipment and storage medium
CN111914766B (en) Method for detecting business trip behavior of city management service
CN113256556A (en) Image selection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant