CN115512215A - Underwater biological monitoring method and device and storage medium - Google Patents

Underwater biological monitoring method and device and storage medium

Info

Publication number
CN115512215A
Authority
CN
China
Prior art keywords
image
lens
fish body
fish
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211193531.8A
Other languages
Chinese (zh)
Inventor
谢荀
赵思恒
姜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
711th Research Institute of CSIC
Original Assignee
711th Research Institute of CSIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 711th Research Institute of CSIC filed Critical 711th Research Institute of CSIC
Priority to CN202211193531.8A priority Critical patent/CN115512215A/en
Publication of CN115512215A publication Critical patent/CN115512215A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/05 Underwater scenes
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/593 Depth or shape recovery from multiple images, from stereo images
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/26 Segmentation of patterns in the image field; detection of occlusion
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30232 Surveillance
    • Y02A 40/81 Aquaculture, e.g. of fish

Abstract

The invention discloses an underwater biological monitoring method, an underwater biological monitoring device and a storage medium, wherein the method comprises the following steps: acquiring image information of fish through a double-lens camera arranged at a preset depth underwater; obtaining fish body contours based on a deep neural network, and correcting and screening the fish body contours to obtain real fish body contours; determining pose information of each real fish body contour so as to obtain its position information in an image coordinate system; obtaining three-dimensional point cloud data of the real fish body contours in a world coordinate system based on the lens target images; and obtaining fish body characteristic parameters based on the three-dimensional point cloud data and the position information. The technical scheme provided by the invention can solve the technical problem in the prior art that, when sonar detection is used to detect aquatic products in a free state, the signals are easily interfered with, resulting in low measurement accuracy. The invention can identify various underwater organisms from the images acquired by the double-lens camera, and can monitor the growth state of aquatic products more accurately.

Description

Underwater biological monitoring method and device and storage medium
Technical Field
The invention relates to the technical field of biological monitoring, in particular to an underwater biological monitoring method, an underwater biological monitoring device and a storage medium.
Background
Against the background of the big-data era, intelligent aquaculture has become an inevitable trend in the development of deep-sea aquaculture. At present, the aquaculture modes in most areas are extensive, culture management mainly depends on manual work, and the degree of intelligence is low; traditional contact-type monitoring methods require fishing the aquatic products out and measuring them manually, or standing them in a container to be photographed and measured. Such measurement means are not only time-consuming and labour-intensive, but may also adversely affect the growth of the aquatic products, and cannot sense their growth state in time.
In the prior art, sonar detection is generally adopted in industrial aquaculture biomass monitoring systems for deep-sea environments to detect aquatic products in a free state, but this method has low measurement precision for individual aquatic products; meanwhile, the control method for detection by multiple sonar devices is complex and can cause mutual echo interference. For example, patent CN104285849B discloses a system and a method for monitoring biomass in net cage culture, which realises detection of aquatic-product biomass by means of sonar; but in practical application the acoustic detection resolution is limited, the accuracy is not high, and the system cannot be applied to a variety of aquatic-product monitoring environments.
In summary, it is desirable to provide an underwater biological monitoring method that can monitor the state of aquatic products in an underwater free state, so as to measure individual biomass indicators such as length, weight and health status with high precision, and to evaluate group behaviour.
Disclosure of Invention
The invention provides an underwater biological monitoring method, an underwater biological monitoring device and a storage medium, and aims to effectively solve the technical problem in the prior art that, when sonar detection is used to detect aquatic products in a free state, the signals are easily interfered with and the measurement accuracy is low. The invention can identify various underwater organisms from the images acquired by the double-lens camera, and can monitor the growth state of aquatic products more accurately.
According to one aspect of the invention, there is provided a method of underwater biological monitoring, the method comprising:
acquiring image information of various fishes through a double-lens camera arranged at a preset depth underwater, wherein the image information comprises a first lens image acquired by a first lens and a second lens image acquired by a second lens of the double-lens camera;
performing image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the various fishes, screening out abnormal fish body contours from the plurality of fish body contours to obtain a plurality of target fish body contours, and correcting the plurality of target fish body contours based on distance correction parameters to obtain a plurality of real fish body contours;
determining pose information of each real fish body contour of the plurality of real fish body contours based on a minimum circumscribed rectangle method, and determining position information of each real fish body contour in an image coordinate system based on the pose information;
determining a first lens target image corresponding to the real fish body outlines in the first lens image, determining a second lens target image corresponding to the real fish body outlines in the second lens image, and obtaining three-dimensional point cloud data of the real fish body outlines in a world coordinate system based on the first lens target image and the second lens target image;
and aiming at each real fish body contour, obtaining fish body characteristic parameters corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information.
Further, the method further comprises:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring a first lens training image and a second lens training image of the various fishes through the double-lens camera;
obtaining a fish instance segmentation dataset based on the first lens training image and the second lens training image;
and performing migration training on the deep neural network based on the fish example segmentation data set to obtain a trained deep neural network model.
Further, the method further comprises:
before the image instance segmentation is carried out on the first lens image by using the deep neural network so as to obtain a plurality of fish body contours corresponding to the various fishes, an image preprocessing operation is carried out on the first lens image so as to obtain a data-enhanced first lens image, wherein the image preprocessing operation comprises an image graying operation and an image denoising operation.
Further, the image instance segmentation of the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes comprises:
carrying out image instance segmentation on the first lens image through the deep neural network model to obtain an instance segmentation result;
obtaining the original size of the fish body corresponding to the first lens image based on the example segmentation result;
and acquiring a camera internal reference matrix of the first lens, and obtaining the plurality of fish body outlines based on the camera internal reference matrix and the original size of the fish body.
Further, the image instance segmentation of the first lens image by the deep neural network model to obtain an instance segmentation result comprises:
inputting the first lens image into the deep neural network model, encoding the first lens image through a convolutional neural network corresponding to the deep neural network model to obtain a high-dimensional feature map, up-sampling the high-dimensional feature map to obtain an original-resolution mask map, and performing pixel-by-pixel segmentation on the original-resolution mask map to obtain the instance segmentation result.
Further, the method further comprises:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring first lens calibration images of a plurality of calibration targets arranged at a plurality of preset positions through the double-lens camera;
obtaining distance information between each of the plurality of calibration targets and the first lens;
and correcting the first lens calibration image based on a distance correction algorithm and the distance information to obtain the distance correction parameter corresponding to the first lens.
Further, the obtaining three-dimensional point cloud data of the plurality of real fish body outlines in a world coordinate system based on the first lens target image and the second lens target image comprises:
and calculating an image parallax parameter between the first lens target image and the second lens target image based on a binocular stereo matching algorithm, and obtaining the three-dimensional point cloud data based on the image parallax parameter and the real fish body outlines.
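Under the standard rectified-stereo model, the disparity produced by the binocular matching step maps to depth as Z = f * B / d (focal length in pixels, baseline in meters, disparity in pixels). The following numpy sketch is our illustration of that relation, not the patent's implementation; the focal length and baseline values are made up:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters)
    using the rectified-stereo relation Z = f * B / d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0            # zero disparity means a point at infinity
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: 1400 px focal length, 12 cm baseline, 84 px disparity -> 2.0 m
depth = disparity_to_depth(np.array([[84.0]]), focal_px=1400.0, baseline_m=0.12)
```

In practice the disparity map itself would come from a stereo matcher such as OpenCV's semi-global block matching on the rectified first- and second-lens target images.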
Further, the method further comprises:
before the image information of multiple fishes is obtained through a double-lens camera arranged at a preset depth underwater, obtaining diseased fish image information corresponding to the multiple fishes through the double-lens camera, and establishing a diseased fish data set based on the diseased fish image information;
and performing migration training on the deep neural network based on the diseased fish data set to obtain a trained deep neural network model.
Further, the obtaining of the fish body characteristic parameter corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information includes:
determining a plurality of anatomical coordinate points of each real fish body contour based on the three-dimensional point cloud data, and determining a shape parameter and a size parameter of the fish body based on the plurality of anatomical coordinate points.
Further, the method further comprises:
after determining the shape parameter and the size parameter of the fish body based on the plurality of anatomical coordinate points, determining fish type information based on the shape parameter of each real fish body contour, determining fish age information based on the fish type information, and determining fish weight information based on the fish type information and the size parameter.
According to another aspect of the present invention, there is also provided an underwater biometric monitoring apparatus, the apparatus comprising:
the system comprises an image information acquisition module, a first image acquisition module and a second image acquisition module, wherein the image information acquisition module is used for acquiring image information of multiple fishes through a double-lens camera arranged at a preset depth underwater, and the image information comprises a first lens image acquired by a first lens and a second lens image acquired by a second lens of the double-lens camera;
a real fish body contour obtaining module, configured to perform image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes, screen out abnormal fish body contours from the plurality of fish body contours to obtain a plurality of target fish body contours, and correct the plurality of target fish body contours based on distance correction parameters to obtain a plurality of real fish body contours;
the position information determining module is used for determining the pose information of each real fish body contour of the plurality of real fish body contours based on a minimum circumscribed rectangle method and determining the position information of each real fish body contour in an image coordinate system based on the pose information;
a three-dimensional point cloud data obtaining module, configured to determine a first lens target image corresponding to the multiple real fish body contours in the first lens image, determine a second lens target image corresponding to the multiple real fish body contours in the second lens image, and obtain three-dimensional point cloud data of the multiple real fish body contours in a world coordinate system based on the first lens target image and the second lens target image;
and the fish body characteristic parameter determining module is used for obtaining fish body characteristic parameters corresponding to each real fish body contour based on the three-dimensional point cloud data and the position information.
According to another aspect of the present invention, there is also provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the underwater bio-monitoring methods described above.
Through one or more of the above embodiments of the present invention, at least the following technical effects can be achieved:
in the technical scheme disclosed by the invention, aiming at the defects of existing biomass monitoring means for the underwater free state, underwater double-lens stereoscopic vision is applied on the basis of machine vision technology: the underwater double-lens camera is used for three-dimensional reconstruction, individual biomass detection and high-precision monitoring of aquatic products in the underwater free state, so that the size and quality of the cultured organisms are detected, the growth and health condition of individuals are identified and analysed, group behaviour is evaluated, and growth-process information of the underwater cultured organisms can be obtained accurately. The scheme improves the accuracy of weight assessment, avoids the adverse effects of traditional contact measurement on aquatic products, and improves breeding efficiency. In the aspect of fish disease prevention, the scheme detects fish diseases with obvious characteristics and can give an effective early warning, preventing fish diseases from spreading and reducing losses. In addition, the device of this scheme has a simple structure and low cost, and is widely applicable and easy to popularize.
Drawings
The technical scheme and other beneficial effects of the invention are obvious from the detailed description of the specific embodiments of the invention in combination with the attached drawings.
FIG. 1 is a flow chart illustrating steps of a method for monitoring underwater organisms according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an anatomical coordinate point provided by an embodiment of the invention;
fig. 3 is a schematic structural diagram of an underwater biological monitoring device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that, unless explicitly stated or limited otherwise, the term "and/or" herein is only one kind of association relationship describing the associated object, and means that there may be three kinds of relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document generally indicates that the preceding and following related objects are in an "or" relationship, unless otherwise specified.
Fig. 1 is a flow chart illustrating steps of an underwater biological monitoring method according to an embodiment of the present invention, the method including:
step 101: acquiring image information of various fishes through a double-lens camera arranged at a preset depth underwater, wherein the image information comprises a first lens image acquired by a first lens of the double-lens camera and a second lens image acquired by a second lens;
step 102: performing image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the various fishes, screening out abnormal fish body contours from the plurality of fish body contours to obtain a plurality of target fish body contours, and correcting the plurality of target fish body contours based on distance correction parameters to obtain a plurality of real fish body contours;
step 103: determining the pose information of each real fish body contour of the plurality of real fish body contours based on a minimum bounding rectangle method, and determining the position information of each real fish body contour in an image coordinate system based on the pose information;
step 104: determining a first lens target image corresponding to the real fish body contours in the first lens image, determining a second lens target image corresponding to the real fish body contours in the second lens image, and obtaining three-dimensional point cloud data of the real fish body contours in a world coordinate system based on the first lens target image and the second lens target image;
step 105: and obtaining fish body characteristic parameters corresponding to each real fish body contour based on the three-dimensional point cloud data and the position information.
Based on the technical scheme of the invention, a fish growth characteristic database can be established, supporting two or more fish species. For example, two species, the large yellow croaker and the sea perch (Lateolabrax japonicus), are selected for research, and the specific research conditions are as follows:
A. age was selected as follows:
large yellow croaker: collecting growth indexes of 4-6 months (150 g) in the current year, 4-6 months (400-500 g) in the previous year and 9-12 months (750 g) in the previous year, stocking 150g of fish seeds in the middle ten-5 ten days of the 4 month, and continuously breeding for 5-7 months from 500g to 750g when the fish seeds are bred to 400g at the end of the year and the weight of the fish seeds in the 4-5 months in the second year after winter.
Sea perch: growth indexes are collected at 4-6 months of age, at 10-12 months (about 250 g), and at 8-10 months of the following year (about 500 g). Fingerlings of about 10 cm stocked in seawater net cages in summer reach an average weight of about 250 g when cultured to the end of the year. The 1-year-old sea perches, after overwintering, are cultured for 8-10 months and can on average reach the 500 g commercial-fish specification.
B. The quantities were chosen as follows:
30-50 fish were measured at each growth stage.
C. The characteristic indexes are as follows:
traditional morphological character measurement comprises 12 morphological variables, namely full length, body height, head length, eye back head length, trunk length, tail handle length, kiss length, eye diameter, eye distance and tail handle height; the records were weighed and body mass Specific Growth Rate (SGR), daily gain (DWG), body length Specific Growth Rate (SGRL), body mass Relative Growth Rate (RGR), coefficient of Variation (CV), body length daily gain (DLG) and fullness (CF) were calculated.
The above steps 101 to 105 are specifically described below.
In step 101, image information of a plurality of fishes is acquired by a double-lens camera arranged at a preset depth underwater, wherein the image information comprises a first lens image acquired by the first lens of the double-lens camera and a second lens image acquired by the second lens.
Illustratively, according to the scheme, images of underwater free-state fishes are acquired through the double-lens camera. At depths of tens of meters in a deep-sea environment the light is weak, so a supplementary light is required; clear left-eye and right-eye colour images of the fish can then be obtained through the underwater double-lens camera, namely a first lens image obtained by the first lens of the double-lens camera and a second lens image obtained by the second lens.
In step 102, image instance segmentation is performed on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes, abnormal fish body contours are screened out from the plurality of fish body contours to obtain a plurality of target fish body contours, and the plurality of target fish body contours are corrected based on distance correction parameters to obtain a plurality of real fish body contours.
For example, in order to improve the measurement accuracy, the image acquired by one lens of the double-lens camera is subjected to data processing in this step, and image instance segmentation is performed through a pre-trained deep neural network.
Accurate values can only be obtained from a complete lateral profile of the fish, so the collected fish contours are classified and screened by the deep neural network: images with unsatisfactory shooting angles, such as occluded fish bodies or incomplete lateral views, are screened out, and only images showing the complete side of the fish body are kept for the next processing step.
Since the directly obtained image is an image of a fish in refracted water, a certain error exists between the image and a real image, and in order to improve the detection accuracy, the image is subjected to distance correction based on a distance correction parameter which is determined in advance.
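As a rough illustration of why such a correction is needed: under a flat-port, paraxial simplification (our assumption, not the patent's calibrated correction algorithm), refraction makes underwater objects appear closer by roughly the refractive index of water, so uncorrected lengths come out too large:

```python
# Paraxial flat-port model: an object at true distance Z in water appears
# at about Z / n, so sizes inferred from the image are overestimated by
# roughly the refractive index n of water unless corrected.
N_WATER = 1.33

def apparent_to_true_length(apparent_len, n=N_WATER):
    """Scale a length measured from the refracted image back to an
    approximate true in-water length under the paraxial assumption."""
    return apparent_len / n

true_len = apparent_to_true_length(26.6)   # 26.6 cm apparent -> 20.0 cm true
```

The patent's distance correction parameter, calibrated against targets at known distances, plays the role of this scale factor but varies with distance rather than being a single constant.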
In step 103, the pose information of each real fish body contour of the plurality of real fish body contours is determined based on the minimum bounding rectangle method, and the position information of each real fish body contour in the image coordinate system is determined based on the pose information.
Illustratively, because the pose of a fish appearing in the image has a certain randomness, the pose of the fish body can be determined by finding the circumscribed rectangle of the fish body contour with the minimum circumscribed rectangle method (minAreaRect), and the fish head orientation can also be identified by applying a template matching algorithm (matchTemplate) to the fish body contour within the minimum circumscribed rectangle. Then, after an image coordinate system is determined, the coordinates of the minimum circumscribed rectangle are obtained, and the position information of each real fish body contour is thereby determined.
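The idea of this step can be sketched with a PCA-aligned bounding box: the numpy version below is a stand-in for OpenCV's cv2.minAreaRect (the two coincide for symmetric contours such as this test rectangle, though the PCA box is not guaranteed minimal in general):

```python
import numpy as np

def oriented_box(contour_xy):
    """Approximate the fish-body pose with a PCA-aligned bounding box:
    principal axis = body orientation, box extents = body size."""
    pts = np.asarray(contour_xy, dtype=float)
    center = pts.mean(axis=0)
    # principal axes of the centred contour points
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    local = (pts - center) @ vt.T          # rotate points into the body frame
    size = local.max(axis=0) - local.min(axis=0)
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
    return center, size, angle

# A horizontal 40 x 10 "fish" outline:
c, s, a = oriented_box([(0, 0), (40, 0), (40, 10), (0, 10)])
```

The box centre and corner coordinates give the contour's position in the image coordinate system; head orientation along the principal axis would still need the template-matching step described above.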
In step 104, a first lens target image corresponding to the real fish body contours is determined in the first lens image, a second lens target image corresponding to the real fish body contours is determined in the second lens image, and three-dimensional point cloud data of the real fish body contours in a world coordinate system is obtained based on the first lens target image and the second lens target image.
Illustratively, the position information representing the size of the fish body extracted from the image consists of coordinates in the image coordinate system, and must be converted into three-dimensional point cloud data in the real-world coordinate system. The image coordinates are converted to real-world coordinates using the depth image provided by the camera. The resolution of the depth image is consistent with that of the colour image; the difference is that each pixel of the depth image stores not a colour value but the distance from the camera to the plane of the real object corresponding to that pixel. The conversion from image coordinates to the real-world coordinate system can then be completed according to the size of the camera's light-sensing element and the characteristic pixel size.
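This conversion is the standard pinhole back-projection; a minimal sketch with assumed (made-up) camera intrinsics:

```python
import numpy as np

def pixel_to_world(u, v, depth_m, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) with known depth into camera
    coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    z = float(depth_m)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A pixel 100 px right of the principal point, 2 m away, fx = 1000 px -> X = 0.2 m
p = pixel_to_world(740, 360, 2.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

Applying this to every contour pixel and its depth value yields the three-dimensional point cloud of the fish body in world coordinates.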
In step 105, for each real fish body contour, fish body feature parameters corresponding to the real fish body contour are obtained based on the three-dimensional point cloud data and the position information.
Illustratively, through the corresponding relation between the image coordinates and the three-dimensional point cloud coordinates, fish body characteristic parameters, particularly fish body length and fish body width, can be calculated, and information such as fish body weight and the like can be obtained based on the fish body length and the fish body width.
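A hedged sketch of this last step: body length as the snout-to-tail distance in the point cloud, and weight from the length-weight relation W = a * L^b that is common in fisheries work (the coefficients below are placeholders, not values from the patent; real values would be fitted per species from the growth database):

```python
import numpy as np

def body_length(snout_xyz, tail_xyz):
    """Fish body length as the Euclidean distance between the snout and
    tail anatomical points of the 3-D point cloud (meters)."""
    return float(np.linalg.norm(np.asarray(snout_xyz) - np.asarray(tail_xyz)))

def estimated_weight(length_cm, a=0.015, b=3.0):
    """Weight (g) from the length-weight relation W = a * L^b.
    a and b here are illustrative placeholders only."""
    return a * length_cm ** b

L = body_length((0.00, 0.0, 2.0), (0.30, 0.0, 2.0))   # 0.30 m
w = estimated_weight(L * 100)                          # 0.015 * 30^3 = 405 g
```

Body height and width follow the same distance computation between the corresponding anatomical coordinate points.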
Further, the method further comprises:
before the image information of various fishes is obtained through a double-lens camera arranged at a preset depth underwater, a first lens training image and a second lens training image of the various fishes are obtained through the double-lens camera;
obtaining a fish instance segmentation dataset based on the first lens training image and the second lens training image;
and performing migration training on the deep neural network based on the fish example segmentation data set to obtain a trained deep neural network model.
Illustratively, to obtain an accurate deep neural network model, the model may be trained in advance. For example, after 1800 underwater photos of fishes at different growth stages are collected, the 1800 images are manually annotated to form an underwater fish instance segmentation dataset. This dataset is then used to perform transfer training on the pre-trained deep neural network, improving model accuracy.
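Illustratively, one common way to store such manual annotations is a COCO-style instance segmentation record; the patent does not specify the annotation format, so the sketch below is an assumption, and all file names, ids and coordinates are invented.

```python
import json

# A minimal COCO-style annotation structure for one labelled image.
# Every concrete value here (file name, polygon, bbox) is illustrative.
dataset = {
    "images": [
        {"id": 1, "file_name": "fish_0001.jpg", "width": 1920, "height": 1080},
    ],
    "categories": [
        {"id": 1, "name": "large_yellow_croaker"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon outlining the fish body, as [x1, y1, x2, y2, ...]
            "segmentation": [[400, 500, 900, 480, 950, 560, 420, 580]],
            "bbox": [400, 480, 550, 100],  # [x, y, width, height]
            "iscrowd": 0,
        }
    ],
}

serialised = json.dumps(dataset)  # ready to save for standard training tools
```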
Further, the method further comprises:
before the image instance segmentation is carried out on the first lens image by using the deep neural network to obtain the plurality of fish body contours corresponding to the various fishes, an image preprocessing operation is performed on the first lens image to obtain a data-enhanced first lens image, wherein the image preprocessing operation comprises an image graying operation and an image denoising operation.
Illustratively, after training of the deep neural network model is completed, instance segmentation is performed on the color image of the first lens using the deep neural network to obtain the underwater fish contours. Fish shape feature extraction is mainly contour extraction, so the image preprocessing mainly consists of image graying and denoising. Graying reduces the useless information carried by the image. Noise is a major cause of image interference; although image noise has relatively complex sources, certain noise obeys known statistical laws, and removing it improves the accuracy of subsequent image processing.
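Illustratively, the two preprocessing operations can be sketched with plain NumPy. The BT.601 luminance weights and a 3x3 median filter are one common choice, not necessarily the exact operations used by the scheme.

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def median_denoise(gray):
    """3x3 median filter; effective against salt-and-pepper noise."""
    padded = np.pad(gray, 1, mode="edge")
    # Collect the nine shifted views of the image and take the
    # per-pixel median across them.
    stack = [padded[i:i + gray.shape[0], j:j + gray.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

img = np.zeros((5, 5, 3))
img[2, 2] = [255.0, 255.0, 255.0]      # one bright noise pixel
clean = median_denoise(to_gray(img))   # the isolated spike is removed
```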
Further, in step 102, the image instance segmentation of the first lens image using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes includes:
carrying out image instance segmentation on the first lens image through the deep neural network model to obtain an instance segmentation result;
obtaining the original size of the fish body corresponding to the first lens image based on the example segmentation result;
and acquiring a camera internal reference matrix of the first lens, and obtaining the plurality of fish body outlines based on the camera internal reference matrix and the original size of the fish body.
Further, in step 102, the image instance segmentation of the first lens image by the deep neural network model to obtain an instance segmentation result includes:
inputting the first lens image into the deep neural network model, encoding the first lens image through the convolutional neural network corresponding to the deep neural network model to obtain a high-dimensional feature map, up-sampling the high-dimensional feature map to obtain a mask map at the original resolution, and performing pixel-by-pixel segmentation on the original-resolution mask map to obtain the instance segmentation result.
Illustratively, instance segmentation tasks generally employ a convolutional neural network composed of a feature encoder and a decoder, both of which may be implemented with convolutional layers. The weights of a convolutional neural network are called convolution kernels; each network layer contains several kernels, which extract different types of features from the image. Training the convolutional neural network lets the kernels learn which features to extract and stores those features in the kernels. This can be realized with the typical residual structure of two weight layers, a weight layer being a hidden layer with weights. The shortcut connection in the residual structure maps the input of the two weight layers identically to the output end so that input and output can be added; this gives the network layer an identity-mapping capability, i.e. the identity mapping becomes a direct part of the network.
In addition, batch normalization is generally used in the neural network to prevent vanishing gradients, and the network is trained with the back-propagation algorithm. As the network grows deeper, normalization alone lets the correlation of the back-propagated gradients decay until it approaches white noise. Images have local correlation, so their gradients should share similar properties; once the gradient is close to white noise, updating the network weights along it becomes meaningless random perturbation. The residual structure greatly reduces this decay of gradient correlation, and thus further prevents degradation of deeper neural networks.
After the encoder encodes the input image, a series of high-dimensional feature maps is obtained, and a decoder is then needed to decode and classify these high-dimensional features. The decoder is also composed of convolutional layers; it generally decodes with multiple layers of transposed convolution, up-sampling the feature maps during decoding and gradually restoring the high-dimensional feature maps to a mask map at the original resolution. The overall size of each fish can be calculated from the mask map, and this size is converted into actual three-dimensional lengths based on the camera intrinsic matrix.
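Illustratively, the identity-shortcut residual structure of two weight layers described above can be sketched numerically as follows; the dimensions and weights are arbitrary, and a real network would use learned convolution kernels rather than dense matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two weight layers plus an identity shortcut:
    out = ReLU(W2 @ ReLU(W1 @ x) + x).
    The shortcut adds the input unchanged to the output, so the
    block only has to learn a residual correction to the identity."""
    return relu(w2 @ relu(w1 @ x) + x)

d = 8
x = rng.normal(size=d)

# With zero weights the block collapses to the identity mapping
# (up to the final ReLU), showing the identity is trivially available.
out = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
```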
Further, the method further comprises:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring first lens calibration images of a plurality of calibration targets arranged at a plurality of preset positions through the double-lens camera;
obtaining distance information between each of the plurality of calibration targets and the first lens;
and correcting the first lens calibration image based on a distance correction algorithm and the distance information to obtain the distance correction parameter corresponding to the first lens.
Illustratively, an underwater dual-lens stereo camera differs from one on land in that the light propagation media on the two sides of its housing differ: water outside and air inside. Light is therefore refracted when entering the lens from the water, which greatly affects ranging accuracy. To obtain high underwater ranging accuracy, the dual-lens camera needs to be calibrated underwater to correct the ranging algorithm error.
Specifically, underwater calibration images are shot at different distances and the actual distance between each target and the camera is recorded; data regression then yields the correction, from which the distance correction parameters corresponding to the first lens can be determined.
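Illustratively, such a regression-based distance correction might look like the following sketch. The calibration distances are invented, and a first-order polynomial is assumed purely for illustration; the patent does not specify the regression model.

```python
import numpy as np

# Hypothetical calibration data: distances reported by the stereo
# algorithm underwater vs. tape-measured ground-truth distances (metres).
# Refraction at the water-air interface tends to make raw estimates short.
measured = np.array([0.8, 1.2, 1.6, 2.1, 2.6, 3.2])
true     = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])

# Fit a low-order polynomial correction by least squares.
coeffs = np.polyfit(measured, true, deg=1)
correct = np.poly1d(coeffs)

# The correction is then applied to every distance the camera reports.
corrected = correct(measured)
```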
Further, in step 104, the obtaining three-dimensional point cloud data of the plurality of real fish body outlines in a world coordinate system based on the first lens target image and the second lens target image comprises:
and calculating an image parallax parameter between the first lens target image and the second lens target image based on a binocular stereo matching algorithm, and obtaining the three-dimensional point cloud data based on the image parallax parameter and the real fish body outlines.
Illustratively, after camera calibration is completed, the disparity between the two plane images of different viewing angles is calculated from the left-eye and right-eye color images using a semi-global block matching (SGBM) binocular stereo matching algorithm; a depth map centered on the first lens is obtained, and finally the corresponding three-dimensional point cloud data is calculated, completing the three-dimensional reconstruction of the underwater fish.
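Illustratively, once a disparity map is available, depth follows from the standard rectified-stereo relation z = f * B / d. The focal length and baseline below are hypothetical values chosen only to show the arithmetic.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Per-pixel depth from a rectified stereo disparity map:
    z = f * B / d, with f in pixels and baseline B in metres.
    Zero (invalid) disparities are mapped to infinite depth."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Hypothetical camera: 800 px focal length, 60 mm baseline.
d = np.array([[40.0, 80.0],
              [0.0, 16.0]])
z = disparity_to_depth(d, focal_px=800.0, baseline_m=0.06)
# 40 px disparity -> 800 * 0.06 / 40 = 1.2 m
```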
Further, the method further comprises:
before the image information of multiple fishes is obtained through a double-lens camera arranged at a preset depth underwater, obtaining diseased fish image information corresponding to the multiple fishes through the double-lens camera, and establishing a diseased fish data set based on the diseased fish image information;
and performing migration training on the deep neural network based on the diseased fish data set to obtain a trained deep neural network model.
Illustratively, a database of individual fish disease phenotypes can be established with this scheme. For example, 30-50 infected large yellow croakers (Cryptocaryon irritans infection) and Lateolabrax japonicus (red sea bream iridovirus disease) are collected and temporarily held in independent net cages, and an underwater camera collects individual photos of each fish, no fewer than 10 photos per fish taken from multiple viewing angles. Fish diseases are then detected: a deep neural network is trained with the collected fish disease database, the fishes captured by the underwater camera are examined with the trained model, and whether a fish is infected with a disease is predicted.
Further, in step 105, the obtaining of the fish body feature parameter corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information includes:
determining a plurality of anatomical coordinate points of each real fish body contour based on the three-dimensional point cloud data, and determining a shape parameter and a size parameter of the fish body based on the plurality of anatomical coordinate points.
Exemplarily, fig. 2 is a schematic diagram of anatomical coordinate points according to an embodiment of the present invention. As shown in the figure, the body framework is measured using 11 anatomical coordinate points (truss measurement). Referring to fig. 2, the anatomical coordinate points are as follows: 1, pectoral fin origin; 2, mouth; 3, pelvic fin origin; 4, foremost point of the scaled region; 5, anal fin origin; 6, dorsal fin origin; 7, anal fin base end; 8, dorsal fin base end; 9, caudal fin ventral origin; 10, caudal fin dorsal origin; 11, gill cover base.
The shape parameter and the size parameter of the fish body may be determined based on the coordinate values of the plurality of anatomical coordinate points and the distances between them.
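Illustratively, a size parameter such as a framework length reduces to the Euclidean distance between two of the anatomical coordinate points in the point cloud; the landmark names and coordinates below are invented for illustration.

```python
import numpy as np

# Hypothetical 3D coordinates (metres) of two of the 11 anatomical
# points on one fish contour.
points = {
    "snout": np.array([0.10, 0.02, 1.50]),
    "caudal_fin_origin": np.array([0.42, 0.05, 1.55]),
}

def landmark_distance(a, b):
    """Euclidean distance between two anatomical coordinate points."""
    return float(np.linalg.norm(a - b))

body_length = landmark_distance(points["snout"],
                                points["caudal_fin_origin"])
```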
Further, the method further comprises:
after determining the shape parameter and the size parameter of the fish body based on the plurality of anatomical coordinate points, determining fish type information based on the shape parameter of each real fish body contour, determining fish age information based on the fish type information, and determining fish weight information based on the fish type information and the size parameter.
Illustratively, after the shape and size of the fish are determined, correlation analysis and path analysis can be performed on the biological indexes with statistical methods, traits with high correlation are selected, and nonlinear models such as the Logistic, Gompertz and von Bertalanffy models are used to obtain the best-fit model of the weight. Finally, trait indexes of the fish such as body length, body height, post-orbital head length, caudal peduncle length, snout length, eye diameter, interorbital width and caudal peduncle height are calculated, from which a more accurate fish body weight can be obtained.
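Illustratively, the three named growth models and the classical length-weight relation W = a * L^b can be sketched as follows. All parameter values are hypothetical; the actual best-fit model and coefficients would come from the statistical analysis described above.

```python
import numpy as np

# Three standard growth curves: length (or size) as a function of age t.
def logistic(t, L_inf, k, t0):
    return L_inf / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, L_inf, k, t0):
    return L_inf * np.exp(-np.exp(-k * (t - t0)))

def von_bertalanffy(t, L_inf, k, t0):
    return L_inf * (1.0 - np.exp(-k * (t - t0)))

# Classical length-weight relation W = a * L^b; a and b are
# species-specific, and the values here are purely illustrative.
def length_to_weight(length_cm, a=0.012, b=3.0):
    return a * length_cm ** b

# All three growth curves rise monotonically toward the same
# asymptotic length L_inf (parameters hypothetical).
t = np.linspace(0.0, 20.0, 50)
curves = [f(t, L_inf=40.0, k=0.4, t0=2.0)
          for f in (logistic, gompertz, von_bertalanffy)]
```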
Based on the technical scheme of the invention, a fish growth characteristic database can be established that supports more than 2 fish species, with a single-fish weight model prediction error of less than 10% and a fish health state estimation accuracy higher than 80%.
Through one or more of the above embodiments in the present invention, at least the following technical effects can be achieved:
in the technical scheme disclosed by the invention, aiming at the shortcomings of existing means for monitoring biomass in the underwater free state, underwater dual-lens stereoscopic vision is applied on the basis of machine vision: an underwater dual-lens camera performs three-dimensional reconstruction, individual biomass detection and high-precision monitoring of aquatic products in the underwater free state, so that the size and mass of cultured organisms are measured, individual growth and health conditions are identified and analyzed, group behavior is evaluated, and accurate information on the growth process of underwater cultured organisms is obtained. The scheme improves the accuracy of weight assessment, avoids the adverse effects of traditional contact measurement on aquatic products, and improves breeding efficiency. In fish disease prevention, the scheme detects fish diseases with obvious characteristics, gives effective early warning, prevents the diseases from spreading and reduces losses. In addition, the device of this scheme has a simple structure and low cost, and is easy to use widely and popularize.
Based on the same inventive concept as the underwater biological monitoring method according to the embodiment of the present invention, an embodiment of the present invention provides an underwater biological monitoring apparatus, please refer to fig. 3, the apparatus includes:
the image information acquiring module 201 is configured to acquire image information of multiple fishes through a dual-lens camera arranged at a preset depth underwater, where the image information includes a first lens image acquired by a first lens of the dual-lens camera and a second lens image acquired by a second lens;
a real fish body contour obtaining module 202, configured to perform image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes, screen abnormal fish body contours from the plurality of fish body contours to obtain a plurality of target fish body contours, and correct the plurality of target fish body contours based on the distance correction parameters to obtain a plurality of real fish body contours;
a position information determining module 203, configured to determine pose information of each of the plurality of real fish body contours based on a minimum bounding rectangle method, and determine position information of each of the real fish body contours in an image coordinate system based on the pose information;
a three-dimensional point cloud data obtaining module 204, configured to determine a first lens target image corresponding to the plurality of real fish body contours in the first lens image, determine a second lens target image corresponding to the plurality of real fish body contours in the second lens image, and obtain three-dimensional point cloud data of the plurality of real fish body contours in a world coordinate system based on the first lens target image and the second lens target image;
a fish body characteristic parameter determining module 205, configured to obtain, for each real fish body contour, a fish body characteristic parameter corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information.
Further, the apparatus is further configured to:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring a first lens training image and a second lens training image of the various fishes through the double-lens camera;
obtaining a fish instance segmentation dataset based on the first lens training image and the second lens training image;
and performing migration training on the deep neural network based on the fish example segmentation data set to obtain a trained deep neural network model.
Further, the apparatus is further configured to:
before the image instance segmentation is carried out on the first lens image by using the deep neural network to obtain the plurality of fish body contours corresponding to the plurality of fishes, an image preprocessing operation is performed on the first lens image to obtain a data-enhanced first lens image, wherein the image preprocessing operation comprises an image graying operation and an image denoising operation.
Further, the real fish body contour obtaining module 202 is further configured to:
carrying out image instance segmentation on the first lens image through the deep neural network model to obtain an instance segmentation result;
obtaining the original size of the fish body corresponding to the first lens image based on the example segmentation result;
and acquiring a camera internal reference matrix of the first lens, and obtaining the plurality of fish body outlines based on the camera internal reference matrix and the original size of the fish body.
Further, the real fish body contour obtaining module 202 is further configured to:
inputting the first lens image into the deep neural network model, encoding the first lens image through the convolutional neural network corresponding to the deep neural network model to obtain a high-dimensional feature map, up-sampling the high-dimensional feature map to obtain a mask map at the original resolution, and performing pixel-by-pixel segmentation on the original-resolution mask map to obtain the instance segmentation result.
Further, the apparatus is further configured to:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring first lens calibration images of a plurality of calibration targets arranged at a plurality of preset positions through the double-lens camera;
obtaining distance information between each of the plurality of calibration targets and the first lens;
and correcting the first lens calibration image based on a distance correction algorithm and the distance information to obtain the distance correction parameter corresponding to the first lens.
Further, the three-dimensional point cloud data obtaining module 204 is further configured to:
and calculating an image parallax parameter between the first lens target image and the second lens target image based on a binocular stereo matching algorithm, and obtaining the three-dimensional point cloud data based on the image parallax parameter and the real fish body outlines.
Further, the apparatus is further configured to:
before the image information of multiple fishes is obtained through a double-lens camera arranged at a preset depth underwater, obtaining diseased fish image information corresponding to the multiple fishes through the double-lens camera, and establishing a diseased fish data set based on the diseased fish image information;
and performing migration training on the deep neural network based on the diseased fish data set to obtain a trained deep neural network model.
Further, the fish body characteristic parameter determination module 205 is further configured to:
determining a plurality of anatomical coordinate points of each real fish body contour based on the three-dimensional point cloud data, and determining a shape parameter and a size parameter of the fish body based on the plurality of anatomical coordinate points.
Further, the apparatus is further configured to:
after determining the shape parameter and the size parameter of the fish body based on the plurality of anatomical coordinate points, determining fish type information based on the shape parameter of each real fish body contour, determining fish age information based on the fish type information, and determining fish weight information based on the fish type information and the size parameter.
In addition, the underwater biological monitoring device comprises a biomass monitoring workstation (control terminal), an underwater optical data acquisition device (underwater dual-lens camera), a server unit and a switch. The underwater camera collects image data and transmits them through the switch to the video analysis server, which processes the video information and performs three-dimensional reconstruction of the aquatic products to obtain individual characteristic quantities and health assessments.
The software system of the biomass monitoring system is built on Linux and handles the storage and processing of color-image and depth-image data. The main software modules include a dual-lens vision recognition module, a monocular vision recognition module and a fish health state evaluation module; the module code is written in Python, and CUDA is used to accelerate software processing.
The optical data acquisition and storage device mainly comprises underwater data acquisition cameras, an underwater fishway device and a network video recorder (NVR). Video images are acquired in two modes, an underwater dual-lens camera and an underwater monocular camera combined with the fishway, stored in real time by the NVR, and transmitted to the server device through the switch unit.
The device can be configured with a biomass monitoring workstation that runs the biomass monitoring system software and presents a visual human-computer interface: it displays fish school scale statistics, visualizes image segmentation and recognition, recognizes fish school behaviors, and records and queries related data.
Other aspects and implementation details of the underwater biological monitoring device are the same as or similar to those of the underwater biological monitoring method described above, and are not described herein again.
According to another aspect of the present invention, there is also provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the underwater bio-monitoring methods described above.
In summary, although the present invention has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, therefore, the scope of the present invention shall be determined by the appended claims.

Claims (12)

1. A method of underwater biological monitoring, the method comprising:
acquiring image information of various fishes through a double-lens camera arranged at a preset depth underwater, wherein the image information comprises a first lens image acquired by a first lens of the double-lens camera and a second lens image acquired by a second lens;
performing image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the various fishes, screening abnormal fish body contours in the plurality of fish body contours to obtain a plurality of target fish body contours, and correcting the plurality of target fish body contours based on distance correction parameters to obtain a plurality of real fish body contours;
determining pose information of each real fish body contour of the plurality of real fish body contours based on a minimum circumscribed rectangle method, and determining position information of each real fish body contour in an image coordinate system based on the pose information;
determining a first lens target image corresponding to the real fish body contours in the first lens image, determining a second lens target image corresponding to the real fish body contours in the second lens image, and obtaining three-dimensional point cloud data of the real fish body contours in a world coordinate system based on the first lens target image and the second lens target image;
and aiming at each real fish body contour, obtaining fish body characteristic parameters corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information.
2. The method of claim 1, wherein the method further comprises:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring a first lens training image and a second lens training image of the various fishes through the double-lens camera;
obtaining a fish instance segmentation dataset based on the first lens training image and the second lens training image;
and performing migration training on the deep neural network based on the fish example segmentation data set to obtain a trained deep neural network model.
3. The method of claim 2, wherein the method further comprises:
before the image instance segmentation is carried out on the first lens image by using the deep neural network to obtain the plurality of fish body contours corresponding to the plurality of fishes, an image preprocessing operation is performed on the first lens image to obtain a data-enhanced first lens image, wherein the image preprocessing operation comprises an image graying operation and an image denoising operation.
4. The method of claim 3, wherein the image instance segmentation of the first lens image using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fish comprises:
carrying out image instance segmentation on the first lens image through the deep neural network model to obtain an instance segmentation result;
obtaining the original size of the fish body corresponding to the first lens image based on the example segmentation result;
and acquiring a camera internal reference matrix of the first lens, and obtaining the plurality of fish body outlines based on the camera internal reference matrix and the original size of the fish body.
5. The method of claim 4, wherein the image instance segmentation of the first lens image by the deep neural network model to obtain an instance segmentation result comprises:
inputting the first lens image into the deep neural network model, encoding the first lens image through the convolutional neural network corresponding to the deep neural network model to obtain a high-dimensional feature map, up-sampling the high-dimensional feature map to obtain a mask map at the original resolution, and performing pixel-by-pixel segmentation on the original-resolution mask map to obtain the instance segmentation result.
6. The method of claim 1, wherein the method further comprises:
before the image information of various fishes is acquired through a double-lens camera arranged at a preset depth underwater, acquiring first lens calibration images of a plurality of calibration targets arranged at a plurality of preset positions through the double-lens camera;
obtaining distance information between each of the plurality of calibration targets and the first lens;
and correcting the first lens calibration image based on a distance correction algorithm and the distance information to obtain the distance correction parameter corresponding to the first lens.
7. The method of claim 6, wherein the deriving three-dimensional point cloud data of the plurality of real fish body contours in a world coordinate system based on the first lens target image and the second lens target image comprises:
and calculating an image parallax parameter between the first lens target image and the second lens target image based on a binocular stereo matching algorithm, and obtaining the three-dimensional point cloud data based on the image parallax parameter and the real fish body outlines.
8. The method of claim 1, wherein the method further comprises:
before the image information of a plurality of fishes is acquired through the double-lens camera arranged at the underwater preset depth, acquiring diseased fish image information corresponding to the plurality of fishes through the double-lens camera, and establishing a diseased fish data set based on the diseased fish image information;
and performing migration training on the deep neural network based on the diseased fish data set to obtain a trained deep neural network model.
9. The method of claim 1, wherein the obtaining of the fish body feature parameters corresponding to the real fish body contour based on the three-dimensional point cloud data and the position information comprises:
determining a plurality of anatomical coordinate points of each real fish body contour based on the three-dimensional point cloud data, and determining a shape parameter and a size parameter of the fish body based on the plurality of anatomical coordinate points.
10. The method of claim 9, wherein the method further comprises:
after determining the shape parameter and the size parameter of the fish body based on the plurality of anatomical coordinate points, determining fish type information based on the shape parameter of each real fish body contour, determining fish age information based on the fish type information, and determining fish weight information based on the fish type information and the size parameter.
11. An underwater biological monitoring device, the device comprising:
the system comprises an image information acquisition module, a processing module and a display module, wherein the image information acquisition module is used for acquiring image information of various fishes through a double-lens camera arranged at a preset depth underwater, and the image information comprises a first lens image acquired by a first lens of the double-lens camera and a second lens image acquired by a second lens;
a real fish body contour obtaining module, configured to perform image instance segmentation on the first lens image by using a deep neural network to obtain a plurality of fish body contours corresponding to the plurality of fishes, screen abnormal fish body contours from the plurality of fish body contours to obtain a plurality of target fish body contours, and correct the plurality of target fish body contours based on distance correction parameters to obtain a plurality of real fish body contours;
the position information determining module is used for determining the position and attitude information of each real fish body contour of the plurality of real fish body contours based on a minimum circumscribed rectangle method and determining the position information of each real fish body contour in an image coordinate system based on the position and attitude information;
a three-dimensional point cloud data obtaining module, configured to determine a first lens target image corresponding to the multiple real fish body contours in the first lens image, determine a second lens target image corresponding to the multiple real fish body contours in the second lens image, and obtain three-dimensional point cloud data of the multiple real fish body contours in a world coordinate system based on the first lens target image and the second lens target image;
and the fish body characteristic parameter determining module is used for obtaining fish body characteristic parameters corresponding to each real fish body contour based on the three-dimensional point cloud data and the position information.
12. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 10.
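As an illustrative sketch only (not the patented implementation), the pipeline claimed above — screening abnormal contours, applying a distance correction parameter, and triangulating points from the rectified double-lens pair into a world coordinate system — can be outlined as follows. All camera parameters, thresholds, and function names below are hypothetical assumptions for illustration; the patent does not specify them.

```python
# Hypothetical camera parameters (illustrative assumptions, not values
# from the patent): focal length in pixels, stereo baseline in metres,
# and principal point of the rectified first-lens image.
FOCAL_PX = 800.0
BASELINE_M = 0.12
CX, CY = 320.0, 240.0

def screen_contours(contours, min_area=50.0, max_area=5000.0):
    """Drop 'abnormal' contours whose enclosed area falls outside a
    plausible fish-size range (a simple stand-in for the screening step).
    Each contour is a list of (x, y) vertices; area via the shoelace formula."""
    def shoelace(pts):
        s = 0.0
        for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0
    return [c for c in contours if min_area <= shoelace(c) <= max_area]

def correct_contour(contour, scale):
    """Apply a distance correction parameter as a uniform scale about the
    contour centroid, compensating apparent size for camera distance."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in contour]

def triangulate(u, v, disparity):
    """Recover a world-frame point from a rectified stereo pair:
    depth Z = f * B / d, then back-project through the pinhole model."""
    z = FOCAL_PX * BASELINE_M / disparity
    return ((u - CX) * z / FOCAL_PX, (v - CY) * z / FOCAL_PX, z)
```

In practice the instance segmentation, minimum-circumscribed-rectangle pose estimation, and dense stereo matching steps would come from a vision library (e.g. a Mask R-CNN-style network and rotated-rectangle fitting); the fragment above only shows the geometric bookkeeping between those stages.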
CN202211193531.8A 2022-09-28 2022-09-28 Underwater biological monitoring method and device and storage medium Pending CN115512215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193531.8A CN115512215A (en) 2022-09-28 2022-09-28 Underwater biological monitoring method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211193531.8A CN115512215A (en) 2022-09-28 2022-09-28 Underwater biological monitoring method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115512215A true CN115512215A (en) 2022-12-23

Family

ID=84509085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211193531.8A Pending CN115512215A (en) 2022-09-28 2022-09-28 Underwater biological monitoring method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115512215A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522951A (en) * 2023-12-29 2024-02-06 深圳市朗诚科技股份有限公司 Fish monitoring method, device, equipment and storage medium
CN117522951B (en) * 2023-12-29 2024-04-09 深圳市朗诚科技股份有限公司 Fish monitoring method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111862048B (en) Automatic fish posture and length analysis method based on key point detection and deep convolution neural network
Costa et al. Extracting fish size using dual underwater cameras
Shi et al. An automatic method of fish length estimation using underwater stereo system based on LabVIEW
CN107667903B (en) Livestock breeding living body weight monitoring method based on Internet of things
CN111178197A (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN112232978B (en) Aquatic product length and weight detection method, terminal equipment and storage medium
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN112257564B (en) Aquatic product quantity statistical method, terminal equipment and storage medium
CN111339912A (en) Method and system for recognizing cattle and sheep based on remote sensing image
CN113592896B (en) Fish feeding method, system, equipment and storage medium based on image processing
CN112131921B (en) Biological automatic measurement system and measurement method based on stereoscopic vision
CN112232977A (en) Aquatic product cultivation evaluation method, terminal device and storage medium
CN112634202A (en) Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
CN115512215A (en) Underwater biological monitoring method and device and storage medium
Isa et al. CNN transfer learning of shrimp detection for underwater vision system
Tonachella et al. An affordable and easy-to-use tool for automatic fish length and weight estimation in mariculture
Shi et al. Underwater fish mass estimation using pattern matching based on binocular system
CN115601301B (en) Fish phenotype characteristic measurement method, system, electronic equipment and storage medium
CN108765448B (en) Shrimp larvae counting analysis method based on improved TV-L1 model
CN111369497A (en) Walking type tree fruit continuous counting method and device
CN110956198A (en) Visual weight measuring method for monocular camera
CN113484867B (en) Method for detecting density of fish shoal in closed space based on imaging sonar
CN116295022A (en) Pig body ruler measurement method based on deep learning multi-parameter fusion
CN114037737A (en) Neural network-based offshore submarine fish detection and tracking statistical method
CN113628182B (en) Automatic fish weight estimation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination