CN116152718A - Intelligent observation device and method for prawn culture - Google Patents

Intelligent observation device and method for prawn culture

Info

Publication number
CN116152718A
CN116152718A (application CN202310247718.XA)
Authority
CN
China
Prior art keywords
prawn
prawns
feed tray
frame
observation device
Prior art date
Legal status
Pending
Application number
CN202310247718.XA
Other languages
Chinese (zh)
Inventor
张铮
朱豪男
姜龙
童琳涵
Current Assignee
Shanghai Ocean University
Original Assignee
Shanghai Ocean University
Priority date
Filing date
Publication date
Application filed by Shanghai Ocean University filed Critical Shanghai Ocean University
Priority to CN202310247718.XA priority Critical patent/CN116152718A/en
Publication of CN116152718A publication Critical patent/CN116152718A/en
Pending legal-status Critical Current

Classifications

    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • A01K61/59 Culture of aquatic animals of shellfish of crustaceans, e.g. lobsters or shrimps
    • A01K61/80 Feeding devices for the culture of aquatic animals
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/764 Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/766 Image or video recognition using pattern recognition or machine learning, using regression, e.g. by projecting features on hyperplanes
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • Y02A40/81 Aquaculture, e.g. of fish

Abstract

The invention discloses an intelligent observation device and an intelligent observation method for prawn culture, wherein the intelligent observation device comprises a bracket; a transparent feed tray, which can be used for placing feed for feeding prawns; a driving unit fixed on the bracket and used for driving the transparent feed tray to ascend or descend; a prawn image acquisition module for acquiring videos of the prawn feeding process; a prawn image processing module connected with the prawn image acquisition module, which can identify and analyze the ingestion intensity, the activity state and the hepatopancreas of the prawns according to the video of the ingestion process; a hydrophone module arranged right below the transparent feed tray; and a stress sensor module arranged at the middle position inside the transparent feed tray. The invention realizes remote, intelligent observation of the growth condition of the prawns, compares the collected data with sample data of normal prawn growth, makes corresponding adjustment measures, and reduces the consumption of manpower, material resources and financial resources.

Description

Intelligent observation device and method for prawn culture
Technical Field
The invention relates to the technical field of aquaculture, in particular to an intelligent observation device and an intelligent observation method for prawn culture.
Background
With the rapid development of the global economy and the improvement of people's living standards, the consumption of aquaculture products has increased rapidly over the past two decades; to meet this demand, close attention must be paid to the growth condition of cultured aquatic organisms. In addition, the global aquaculture industry suffers significant losses due to changing climate.
In order to ensure healthy growth of prawns and harvest of fishermen, an intelligent observation method and device for prawn culture are urgently needed.
The traditional method usually relies on manpower and the experience of culture personnel: the shrimps are fished out of the culture pond or net cage and their body length is observed by the naked eye, a time-consuming and labor-intensive process, and whether the body color is normal and whether the hepatopancreas is diseased is likewise judged by eye on the basis of the shrimp farmer's experience. In this process, a feed tray with a transparent edge is submerged near the water surface, bait is placed on the tray for a period of time, and the shrimps on the tray are then recorded and measured. In addition to body length, the number of shrimps and the amount of residual bait are recorded to evaluate growth status and appetite. While this approach may be reliable and is often used, it is time-consuming and subjective. Most importantly, shrimps are benthic animals, and the process disrupts feeding and may not reflect their actual behavior. An appropriate, automatic method for estimating shrimp body length is therefore needed. In addition, manual fishing affects the growth of the prawns: frequent fishing subjects them to excessive stress and can even cause death, which brings losses to the breeders.
Now, with the development of artificial intelligence, machine vision image processing technology can be adopted to process, frame by frame, the feeding video of the prawns shot by a camera, so as to obtain data such as the body length, hepatopancreas condition and residual bait amount of the prawns, observe the growth condition of the prawns in real time, and make corresponding adjustment measures by comparing the collected data with sample data of normal prawn growth, thereby reducing losses. The present invention therefore provides an intelligent observation device and observation method for prawn culture.
Disclosure of Invention
Based on the above, it is necessary to provide an intelligent observation device and an observation method for prawn culture which can realize remote, intelligent observation of the growth condition of prawns, obtain the body length, quantity and residual bait amount of the prawns, improve the utilization rate of the bait, bring great convenience to culture personnel and reduce costs.
An intelligent observation device for prawn culture, comprising:
a bracket;
the transparent feed tray can be used for placing feed for feeding prawns;
the driving unit is fixed on the bracket and is used for driving the transparent feed tray to ascend or descend;
the shrimp image acquisition module is positioned above the transparent feed tray and is used for acquiring videos in the shrimp feeding process;
the prawn image processing module is connected with the prawn image acquisition module and can identify and analyze the ingestion intensity, the activity state and the hepatopancreas of the prawn according to the video in the ingestion process of the prawn;
the hydrophone module is arranged right below the transparent feed tray and is used for judging the feeding strength of the prawns;
the stress sensor module is arranged at the middle position inside the transparent feed tray and can sense the active state of the prawns.
In one embodiment, the feed pan comprises a feed pan upper wall and a feed pan lower plate, both of which are made of a PMMA (polymethyl methacrylate) material.
In one embodiment, the bracket comprises:
a vertical rod, wherein a cross rod is arranged at one side of the vertical rod;
a triangular support connected between the vertical rod and the cross rod.
In one embodiment, the driving unit includes:
the motor is connected with the solar cell panel;
the first U-shaped fixing frame and the second U-shaped fixing frame are fixed on the upper surface of the cross rod, and a first pulley and a second pulley are respectively arranged on the first U-shaped fixing frame and the second U-shaped fixing frame;
and one end of the rope is wound on the motor, and the other end of the rope sequentially passes through the first pulley and the second pulley and then is connected with the transparent feed tray.
An observation method of an intelligent observation device for prawn culture comprises the following steps:
s1, acquiring videos of prawns in the feeding process;
s2, capturing the acquired video frame by frame to obtain images, and obtaining an original data set by using a modified version of the Mosaic data enhancement algorithm;
s3, dividing the original data set into a training set, a test set and a verification set by using Labelme;
s4, in a target recognition stage, inputting the training set into a feature extraction network to obtain a feature map;
s5, inputting the feature map into a regional suggestion network and into region-of-interest pooling respectively, and generating candidate detection frames and a suggested feature map respectively;
s6, performing identification classification and regression on the candidate detection frames and the suggested feature images by using a Faster R-CNN;
s7, obtaining the body length, the quantity and the residual bait quantity of the prawns.
In one embodiment, the step S2 includes:
s21, selecting 5 original pictures intercepted frame by frame, randomly picking out 4 of the 5 pictures, and flipping, scaling and changing the color gamut of them;
s22, arranging the four pictures so that the first picture is placed at the upper left, the second at the lower left, the third at the lower right and the fourth at the upper right;
and S23, finally fusing the combined picture with the remaining picture to obtain a rich data set serving as the original data set.
In one embodiment, the step S3 includes: the original dataset was labeled using Labelme and divided into training, test and validation sets at 8:1:1.
In one embodiment, in the step S5, inputting the feature map into the regional suggestion network includes:
inputting the feature map into the area suggestion network, which then outputs all candidate frames that may contain targets;
extracting candidate frames by adopting a sliding window method;
judging whether each candidate frame contains a target or only background by using a classifier, further performing positioning correction on the candidate frames by using a regressor to complete target positioning, finally obtaining positioning-adjusted candidate frames that contain targets, and removing candidate frames that are too small or exceed the image boundary;
finally, non-maximum suppression is used to determine candidate detection frames containing detection targets.
In one embodiment, in the step S5, pooling the feature map into the region of interest includes: the coordinates of each candidate box generated by the regional suggestion network are collected and marked on the original feature map to generate a suggested feature map.
In one embodiment, in the step S7, the method for obtaining the body length data of the prawns includes:
inputting the video shot during the feeding process of the prawns into the Faster R-CNN frame by frame, whereupon the Faster R-CNN detects the body parts of the moving prawns;
when a certain number of body parts of the prawns are detected, a boundary frame is drawn to cover the whole prawn body, and a single node corresponding to the center of the drawn boundary frame is also allocated to each correctly detected body part;
when the detection is sufficient, namely when two adjacent parts are detected, an effective directed cyclic graph (DCG) can be obtained by using the physical characteristics of the prawns;
defining the posture of the prawn from the key body parts to obtain a posture matrix containing a series of postures, and then encoding the obtained posture matrix of the prawn;
and comparing and determining the posture of the prawns, and estimating the body length of the prawns according to the size of the boundary box.
According to the intelligent observation device and the observation method for the prawn culture, the traditional device module is combined with the machine vision image processing technology, the remote and intelligent observation of the growth condition of the prawn is realized, the collected data are compared with sample data of the normal growth of the prawn, corresponding adjustment measures are made, and the consumption of manpower, material resources and financial resources is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic structural diagram of an intelligent observation device for prawn culture;
FIG. 2 is a schematic diagram of the intelligent observation method for prawn culture.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Referring to fig. 1, an embodiment of the present invention provides an intelligent observation device for prawn culture, which includes:
a bracket;
a transparent feed tray 16, wherein the transparent feed tray 16 can be used for placing feed for feeding prawns;
a driving unit fixed on the bracket for driving the transparent feed tray 16 to ascend or descend;
the shrimp image acquisition module 10 is positioned above the transparent feed tray 16, and the shrimp image acquisition module 10 is used for acquiring videos in the shrimp feeding process; in this embodiment, the shrimp image acquisition module 10 may include a camera, etc., which is suspended vertically above the transparent feed tray 16 to capture the amount of shrimp and residual bait during ingestion. Optionally, to obtain a higher definition image, a light source 14 or the like may be added to the transparent feed tray 16.
The prawn image processing module 17 is connected with the prawn image acquisition module 10, and the prawn image processing module 17 can identify and analyze the feeding strength, the activity state and the hepatopancreas of the prawn according to the video in the feeding process of the prawn;
the hydrophone module 15 is arranged right below the transparent feed tray 16, and the hydrophone module 15 is used for judging the feeding strength of the prawns;
and the stress sensor module 13 is arranged at the middle position inside the transparent feed tray 16, and the stress sensor module 13 can sense the active state of the prawns.
In the invention, the prawn image processing module 17 comprises a prawn active state module, a prawn ingestion intensity module and a prawn hepatopancreas module. Wherein, the prawn activity state module fuses the characteristics obtained by the stress sensor module 13 and the bracket to further judge the activity state of the prawn. The prawn feeding intensity module fuses the characteristics obtained by the prawn image acquisition module 10 and the hydrophone module 15 to further judge the feeding intensity of the prawn. In addition, under normal conditions, the hepatopancreas of the prawn is tan; when the bodies of the prawns are abnormal, the hepatopancreas of the prawns can be red, yellow, black and the like, the collected prawn images are processed by the hepatopancreas module of the prawns to identify the color of the hepatopancreas, and whether the growth of the prawns is abnormal is judged according to the color change of the hepatopancreas.
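A minimal sketch of such a hepatopancreas color check is given below, assuming an OpenCV/NumPy environment and a pre-cropped hepatopancreas region; the hue ranges and thresholds are illustrative assumptions of this sketch, not values specified by the invention.

```python
import cv2
import numpy as np

# Illustrative HSV hue ranges (assumed, not taken from the patent):
# a tan/brown hepatopancreas is treated as healthy; red, yellow or black as abnormal.
HUE_RANGES = {"red": (0, 9), "tan": (10, 25), "yellow": (26, 40)}

def hepatopancreas_status(roi_bgr: np.ndarray) -> str:
    """Classify the dominant color of a cropped hepatopancreas region."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    if float(np.mean(v)) < 50:              # very dark region is treated as "black"
        return "abnormal (black)"
    saturated = h[s > 40]                   # ignore washed-out pixels
    if saturated.size == 0:
        return "abnormal (unknown hue)"
    mean_hue = float(np.mean(saturated))
    for name, (lo, hi) in HUE_RANGES.items():
        if lo <= mean_hue <= hi:
            return "normal (tan)" if name == "tan" else f"abnormal ({name})"
    return "abnormal (unknown hue)"
```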
The stress sensor module 13 is used for detecting data of internal stress variation of the transparent feed tray 16, and judges the active state of the prawn in the feeding process by monitoring distortion and crowding in the feeding process of the prawn to the deformation of the bottom of the transparent feed tray 16. The hydrophone module 15 collects sound frequency and intensity data sent by a large number of prawns in the state of just beginning to ingest and approaching to satiety, analyzes the change rule of the sound frequency and intensity in the process of ingesting the prawns, and compares the sound frequency and intensity data collected in real time with the analyzed rule, so that the ingestion intensity of the prawns is judged.
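The comparison of real-time hydrophone data against the analyzed start-of-feeding and near-satiety patterns could, for example, be sketched as follows; the two reference spectra (`start_profile`, `sated_profile`) are assumed to be pre-computed, normalized spectra of the same length as the spectrum of the current window, and the scoring rule is an assumption of this sketch.

```python
import numpy as np

def feeding_intensity(window: np.ndarray,
                      start_profile: np.ndarray,
                      sated_profile: np.ndarray) -> float:
    """Score between 1.0 (feeding just started, intense clicking) and 0.0 (near
    satiety), obtained by comparing the spectrum of the current hydrophone window
    with reference spectra recorded at the two extremes of the feeding process."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    spectrum /= np.linalg.norm(spectrum) + 1e-9
    d_start = np.linalg.norm(spectrum - start_profile)   # distance to "just started"
    d_sated = np.linalg.norm(spectrum - sated_profile)   # distance to "nearly sated"
    return d_sated / (d_start + d_sated + 1e-9)
```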
In one embodiment of the invention, the feed pan comprises a feed pan upper wall 11 and a feed pan lower plate 12, wherein the feed pan upper wall 11 and the feed pan lower plate 12 are made of PMMA materials. The PMMA material is transparent and has good light transmittance, so that the prawn and residual bait quantity images can be acquired more accurately and effectively.
In one embodiment of the present invention, the bracket includes:
the vertical rod 3, wherein a cross rod 9 is arranged at one side of the vertical rod 3; the vertical rod 3 and the cross rod 9 form an inverted-L ("7"-shaped) structure.
And the triangular support 4 is connected between the vertical rod 3 and the cross rod 9. The tripod 4 can play a role in reinforcing and supporting the bracket.
In an embodiment of the invention, the driving unit includes:
the motor 2 is connected with the solar panel 1; specifically, the motor 2 is mounted on the vertical rod 3, is about 0.5m from the ground, and is driven by power generation of the solar panel 1.
The first U-shaped fixing frame 6 and the second U-shaped fixing frame 8 are fixed on the upper surface of the cross rod 9, and the first pulley 5 and the second pulley 7 are respectively arranged on the first U-shaped fixing frame 6 and the second U-shaped fixing frame 8; the first U-shaped fixing frame 6 and the second U-shaped fixing frame 8 are used for installing the first pulley 5 and the second pulley 7.
One end of the rope is wound on the motor 2, and the other end of the rope sequentially passes through the first pulley 5 and the second pulley 7 and then is connected with the transparent feed tray 16.
In this embodiment, the solar panel 1 can drive the motor 2 to act under the action of sunlight, and the rope moves according to the lifting track of the transparent feed tray 16 under the action of the first pulley 5 and the second pulley 7, so as to realize the lifting of the transparent feed tray 16. During feeding, the feed is placed on the lower bottom plate 12 of the transparent feed tray 16, then the transparent feed tray 16 is lowered to the water bottom, the transparent feed tray 16 is lifted after about 20-60 minutes, and the shrimp image acquisition module 10 continuously shoots during the whole feeding period to acquire experimental materials.
Referring to fig. 2, an embodiment of the invention provides an observation method of an intelligent observation device for prawn culture, which comprises the following steps:
s1, acquiring videos of prawns in the feeding process; the prawn image acquisition module 10 (camera) can be adopted for video acquisition;
s2, capturing the acquired video frame by frame to obtain images, and obtaining an original data set by using a modified version of the Mosaic data enhancement algorithm;
s3, dividing the original data set into a training set, a test set and a verification set by using Labelme;
s4, in a target recognition stage, inputting the training set into a feature extraction network (DRN) to obtain a feature map;
s5, inputting the feature map into a regional suggestion network (RPN) and into region-of-interest pooling (ROI pooling) respectively, to generate candidate detection frames and a suggested feature map respectively;
s6, performing identification classification and regression on the candidate detection frames and the suggested feature images by using a Faster R-CNN;
s7, obtaining the body length, the quantity and the residual bait quantity of the prawns.
In the invention, for the videos acquired by the prawn image acquisition module 10 during the feeding process of the prawns, the prawn image processing module 17 adopts DRN and Faster R-CNN Penaeus vannamei detection models for identification and analysis. Specifically, the DRN is mainly used for feature extraction, and the Faster R-CNN is mainly used for identification, classification and bounding-box regression of the candidate detection frames. The advantage of selecting the DRN is that using multi-layer convolution to learn the residual between the input and the output makes the network easier to train: the output of each residual module is the sum of its input and the learned residual, and because a direct channel appears between the input and the output of each module, gradient propagation is easier.
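The residual-module behavior described above (each module's output is its input plus a learned residual, giving a direct channel for gradient propagation) can be illustrated with the following PyTorch sketch; the channel count and layer arrangement are assumptions for illustration, not the exact DRN used by the invention.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Output = identity shortcut (input) + learned residual, which is what
    keeps gradients flowing easily through deep stacks of such blocks."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # direct channel between input and output of the module
        return self.relu(x + self.body(x))
```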
In an embodiment of the present invention, the step S2 includes:
s21, selecting 5 original pictures intercepted frame by frame, randomly picking out 4 of the 5 pictures, and flipping, scaling and changing the color gamut of them;
s22, arranging the four pictures so that the first picture is placed at the upper left, the second at the lower left, the third at the lower right and the fourth at the upper right;
and S23, finally fusing the combined picture with the remaining picture to obtain a rich data set serving as the original data set.
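A sketch of this modified Mosaic step, under the assumption of five BGR frames of equal size and with bounding-box remapping omitted for brevity, might look as follows:

```python
import random
import numpy as np
import cv2

def modified_mosaic(pictures, out_size=640):
    """Pick 4 of the 5 frames, flip/scale/color-jitter them, tile them in the
    order top-left, bottom-left, bottom-right, top-right, then fuse the tiled
    canvas with the remaining fifth frame (sketch only)."""
    assert len(pictures) == 5
    chosen = random.sample(range(5), 4)
    remaining = pictures[[i for i in range(5) if i not in chosen][0]]
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    slots = [(0, 0), (half, 0), (half, half), (0, half)]   # (y, x) of each quadrant
    for (y, x), i in zip(slots, chosen):
        img = pictures[i]
        if random.random() < 0.5:
            img = cv2.flip(img, 1)                          # random horizontal flip
        img = cv2.resize(img, (half, half))                 # scale into the quadrant
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)
        hsv[..., 0] = (hsv[..., 0] + random.randint(-10, 10)) % 180   # color-gamut jitter
        img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
        canvas[y:y + half, x:x + half] = img
    remaining = cv2.resize(remaining, (out_size, out_size))
    return cv2.addWeighted(canvas, 0.5, remaining, 0.5, 0)  # fuse with the fifth frame
```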
In an embodiment of the present invention, the step S3 includes: the original dataset was labeled using Labelme and divided into training, test and validation sets at 8:1:1.
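The 8:1:1 split could, for instance, be performed with a small script such as the one below; the assumption that each image has one Labelme `*.json` annotation file in a single directory is ours, not the patent's.

```python
import random
from pathlib import Path

def split_dataset(label_dir: str, seed: int = 0):
    """Split Labelme JSON annotations into training/test/validation sets at 8:1:1,
    mirroring the ratio given above."""
    files = sorted(Path(label_dir).glob("*.json"))
    random.Random(seed).shuffle(files)
    n_train = int(0.8 * len(files))
    n_test = int(0.1 * len(files))
    splits = {"train": files[:n_train],
              "test": files[n_train:n_train + n_test],
              "val": files[n_train + n_test:]}
    for name, items in splits.items():          # write one list file per split
        Path(f"{name}.txt").write_text("\n".join(str(p) for p in items))
    return splits
```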
In an embodiment of the present invention, in the step S5, inputting the feature map into the regional suggestion network (RPN) includes:
inputting the feature map into the regional suggestion network (RPN), which then outputs all candidate boxes that may contain targets;
extracting candidate frames by adopting a sliding window method;
judging whether each candidate frame contains a target or only background by using a classifier, further performing positioning correction on the candidate frames by using a regressor to complete target positioning, finally obtaining positioning-adjusted candidate frames that contain targets, and removing candidate frames that are too small or exceed the image boundary;
finally, non-maximal suppression (NMS) is used to determine candidate detection frames containing detection targets.
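For reference, the non-maximum suppression step can be sketched as below; the IoU threshold of 0.7 is an assumed value, not one given by the invention.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.7):
    """Plain NMS: boxes is (N, 4) as [x1, y1, x2, y2]; returns kept indices."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]   # drop boxes overlapping the kept one
    return keep
```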
In an embodiment of the present invention, in the step S5, pooling the feature map in the region of interest includes: the coordinates of each candidate box generated by the regional suggestion network (RPN) are collected and marked on the original feature map to generate a suggested feature map. Finally, the suggested feature map is sent to the subsequent Faster R-CNN head (fully connected layers) for further classification and regression, finally detecting the prawns and the residual bait.
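The step of collecting candidate-box coordinates and cutting fixed-size proposal features from the original feature map is available, for example, in torchvision; the tensor shapes and the 800-pixel input size in this sketch are assumptions for illustration.

```python
import torch
from torchvision.ops import roi_pool

feature_map = torch.randn(1, 256, 50, 50)        # assumed backbone output for one frame
# Candidate boxes from the RPN, in original-image coordinates [x1, y1, x2, y2].
candidate_boxes = [torch.tensor([[120., 80., 360., 240.],
                                 [400., 300., 560., 420.]])]
# spatial_scale maps image coordinates onto the 50x50 feature map (800-px image assumed).
proposal_features = roi_pool(feature_map, candidate_boxes,
                             output_size=(7, 7), spatial_scale=50 / 800)
print(proposal_features.shape)                   # torch.Size([2, 256, 7, 7])
```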
In the invention, the swimming characteristics of the prawns are as follows: when a prawn swims, although its swimming posture changes continuously, the relative positions of the parts of its body remain unchanged. Among the characteristic parts of the prawn body, the head and the tail are opposite parts, and the abdomen and the gastropod are opposite.
In an embodiment of the present invention, in step S7, the method for obtaining the body length data of the prawns includes:
inputting the video shot during the feeding process of the prawns into the Faster R-CNN frame by frame, whereupon the Faster R-CNN detects the body parts of the moving prawns;
when a certain number of body parts of the prawns are detected, a boundary frame is drawn to cover the whole prawn body, and a single node corresponding to the center of the drawn boundary frame is also allocated to each correctly detected body part;
when the detection is sufficient, namely when two adjacent parts are detected, such as the head and the abdomen, or the head and the gastropod, an effective directed cyclic graph (DCG) can be obtained by using the physical characteristics of the prawns; the dual invariance can then be used to recover detections missed because of detector defects, and only then can an effective DCG be obtained.
Defining the posture of the prawn from the key body parts to obtain a posture matrix containing a series of postures, and then encoding the obtained posture matrix of the prawn: when the right side of the prawn faces the reader it is assigned the binary code "1" and follows the DCG order (head, gastropod, tail, abdomen); when the left side faces the reader it is assigned the binary code "0" and follows the DCG order (head, abdomen, tail, gastropod). Sixteen template postures are then obtained clockwise and counterclockwise at 45-degree intervals. The specific swimming-direction codes range from 1 to 8, and the code of a detected prawn posture is formed from the left/right code of the side facing the reader together with the swimming-direction code.
And comparing to determine the posture of the prawns, and estimating the body length of the prawns according to the size of the bounding box. Specifically, the posture of the prawn is determined by comparison with the existing 16 templates, and the body length is then calculated from the size of the bounding box by comparison with original prawn sample data.
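A simplified body-length estimate from the whole-body bounding box might be written as below; the calibration quantities (`template_scale`, `px_per_cm`) stand in for the comparison against original prawn sample data and are assumptions of this sketch, not parameters given by the invention.

```python
import math

def estimate_body_length(box, pose_code, template_scale, px_per_cm):
    """Rough body-length estimate from a whole-body bounding box.
    `template_scale` maps the box diagonal of the matched template posture to a
    true body length; `px_per_cm` is the camera calibration (both assumed)."""
    x1, y1, x2, y2 = box
    diagonal_px = math.hypot(x2 - x1, y2 - y1)
    scale = template_scale.get(pose_code, 1.0)   # fall back to 1.0 if the posture is unmatched
    return diagonal_px * scale / px_per_cm       # body length in centimetres
```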
In the invention, the advantage of the DCG graph is that when the detector fails to detect required parts because of overlapping prawns, occlusion by feed and excrement, camera exposure and the like, the falsely detected or undetected parts can be filled in by relying on the dual invariance of the prawn body parts, finally forming an effective DCG graph. Secondly, the effective DCG graph makes it possible to judge rapidly which side of the prawn faces the reader; this characteristic can halve the computational cost of the body-length calculation and also increases the speed of detecting the prawn body length.
In the present invention, the residual amount of feed in the transparent feed tray 16 is identified by inputting the feeding video of the prawns frame by frame; when residual bait is detected, it is covered with a bounding box, and the amount of residual bait is estimated by calculating the area of the bounding box. In particular, image enhancement techniques such as histogram equalization may also be employed to maximize image contrast, gray-level transformation makes the image clear and its features obvious, and image smoothing mainly uses Gaussian filtering, median filtering and the like to eliminate noise in the images.
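The preprocessing and residual-bait area estimate described above could be sketched as follows; the detector output format (a list of `[x1, y1, x2, y2]` boxes) and the median-filter kernel size are assumptions of this sketch.

```python
import cv2
import numpy as np

def residual_bait_area(frame_bgr: np.ndarray, bait_boxes) -> float:
    """Return the total bounding-box area of detected residual bait, after the
    preprocessing steps mentioned above (histogram equalization, gray-level
    transform, median filtering)."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grey = cv2.equalizeHist(grey)          # maximize contrast
    grey = cv2.medianBlur(grey, 5)         # suppress noise
    # `grey` is what would be fed to the bait detector; here we only total the boxes.
    return float(sum((x2 - x1) * (y2 - y1) for (x1, y1, x2, y2) in bait_boxes))
```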
In summary, the invention combines the traditional device module with the machine vision image processing technology to obtain the body length, the quantity, the hepatopancreas and the residual bait quantity of the prawns from the prawn ingestion video. The intelligent and remote observation of the growth condition of the prawns is realized, and great convenience is brought to the breeding personnel. Judging the feeding strength of the prawns according to the residual bait quantity, and deciding the next feeding quantity; the bait utilization rate can be effectively improved, and the cultivation cost is reduced.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The examples described above represent only a few embodiments of the present invention and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. An intelligent observation device for prawn culture, characterized by comprising:
a bracket;
a transparent feed tray (16), wherein the transparent feed tray (16) can be used for placing feed for feeding prawns;
the driving unit is fixed on the bracket and is used for driving the transparent feed tray (16) to ascend or descend;
the shrimp image acquisition module (10) is positioned above the transparent feed tray (16), and the shrimp image acquisition module (10) is used for acquiring videos in the shrimp feeding process;
the prawn image processing module (17) is connected with the prawn image acquisition module (10), and the prawn image processing module (17) can identify and analyze the ingestion intensity, the activity state and the hepatopancreas of the prawn according to the video in the ingestion process of the prawn;
the hydrophone module (15) is arranged right below the transparent feed tray (16), and the hydrophone module (15) is used for judging the feeding intensity of the prawns;
the stress sensor module (13) is arranged at the middle position inside the transparent feed tray (16), and the stress sensor module (13) can sense the activity state of the prawns.
2. The intelligent observation device for prawn culture according to claim 1, wherein the feed tray comprises a feed tray upper wall (11) and a feed tray lower bottom plate (12), and the feed tray upper wall (11) and the feed tray lower bottom plate (12) are both made of PMMA (polymethyl methacrylate) materials.
3. The intelligent observation device for prawn culture according to claim 2, wherein the bracket comprises:
a vertical rod (3), wherein a cross rod (9) is arranged at one side of the vertical rod (3);
the triangular support (4) is connected between the vertical rod (3) and the cross rod (9).
4. A shrimp farming intelligent observation device according to claim 3, wherein said drive unit comprises:
the motor (2) is connected with the solar panel (1);
the first U-shaped fixing frame (6) and the second U-shaped fixing frame (8) are fixed on the upper surface of the cross rod (9), and the first pulley (5) and the second pulley (7) are respectively arranged on the first U-shaped fixing frame (6) and the second U-shaped fixing frame (8);
one end of the rope is wound on the motor (2), and the other end of the rope sequentially passes through the first pulley (5) and the second pulley (7) and then is connected with the transparent feed tray (16).
5. An observation method of the intelligent observation device for prawn culture according to any one of claims 1 to 4, comprising the steps of:
s1, acquiring videos of prawns in the feeding process;
s2, capturing the acquired video frame by frame to obtain images, and obtaining an original data set by using a modified version of the Mosaic data enhancement algorithm;
s3, dividing the original data set into a training set, a test set and a verification set by using Labelme;
s4, in a target recognition stage, inputting the training set into a feature extraction network to obtain a feature map;
s5, inputting the feature map into a regional suggestion network and into region-of-interest pooling respectively, and generating candidate detection frames and a suggested feature map respectively;
s6, performing identification classification and regression on the candidate detection frames and the suggested feature images by using a Faster R-CNN;
s7, obtaining the body length, the quantity and the residual bait quantity of the prawns.
6. The method for observing an intelligent observation device for prawn culture according to claim 5, wherein the step S2 comprises:
s21, selecting 5 original pictures intercepted frame by frame, randomly picking out 4 of the 5 pictures, and flipping, scaling and changing the color gamut of them;
s22, arranging the four pictures so that the first picture is placed at the upper left, the second at the lower left, the third at the lower right and the fourth at the upper right;
and S23, finally fusing the combined picture with the remaining picture to obtain a rich data set serving as the original data set.
7. The method for observing an intelligent observation device for prawn culture according to claim 6, wherein the step S3 comprises: the original dataset was labeled using Labelme and divided into training, test and validation sets at 8:1:1.
8. The method according to claim 7, wherein the step S5 of inputting the feature map into the regional suggestion network comprises:
inputting the feature map into the area suggestion network, which then outputs all candidate frames that may contain targets;
extracting candidate frames by adopting a sliding window method;
judging whether each candidate frame contains a target or only background by using a classifier, further performing positioning correction on the candidate frames by using a regressor to complete target positioning, finally obtaining positioning-adjusted candidate frames that contain targets, and removing candidate frames that are too small or exceed the image boundary;
finally, non-maximum suppression is used to determine candidate detection frames containing detection targets.
9. The method according to claim 8, wherein the step S5 of pooling the feature map into the region of interest comprises: the coordinates of each candidate box generated by the regional suggestion network are collected and marked on the original feature map to generate a suggested feature map.
10. The method for observing an intelligent observation device for prawn culture according to claim 9, wherein in the step S7, the method for acquiring the body length data of the prawn comprises:
inputting the video shot during the feeding process of the prawns into the Faster R-CNN frame by frame, whereupon the Faster R-CNN detects the body parts of the moving prawns;
when a certain number of body parts of the prawns are detected, a boundary frame is drawn to cover the whole prawn body, and a single node corresponding to the center of the drawn boundary frame is also allocated to each correctly detected body part;
when the detection is sufficient, namely when two adjacent parts are detected, an effective directed cyclic graph (DCG) can be obtained by using the physical characteristics of the prawns;
defining the posture of the prawn from the key body parts to obtain a posture matrix containing a series of postures, and then encoding the obtained posture matrix of the prawn;
and comparing and determining the posture of the prawns, and estimating the body length of the prawns according to the size of the boundary box.
CN202310247718.XA 2023-03-15 2023-03-15 Intelligent observation device and method for prawn culture Pending CN116152718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310247718.XA CN116152718A (en) 2023-03-15 2023-03-15 Intelligent observation device and method for prawn culture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310247718.XA CN116152718A (en) 2023-03-15 2023-03-15 Intelligent observation device and method for prawn culture

Publications (1)

Publication Number Publication Date
CN116152718A true CN116152718A (en) 2023-05-23

Family

ID=86361897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310247718.XA Pending CN116152718A (en) 2023-03-15 2023-03-15 Intelligent observation device and method for prawn culture

Country Status (1)

Country Link
CN (1) CN116152718A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117256545A (en) * 2023-11-21 2023-12-22 安徽农业大学 Intelligent feeding monitoring device and monitoring system thereof
CN117256545B (en) * 2023-11-21 2024-02-02 安徽农业大学 Intelligent feeding monitoring device and monitoring system thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination