CN115690452A - High-throughput fish phenotype analysis method and device based on machine vision - Google Patents

High-throughput fish phenotype analysis method and device based on machine vision

Info

Publication number
CN115690452A
Authority
CN
China
Prior art keywords
fish
length
features
individual
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211125793.0A
Other languages
Chinese (zh)
Inventor
房开民
肖连智
韩振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong EHualu Information Technology Co ltd
Original Assignee
Shandong EHualu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong EHualu Information Technology Co ltd filed Critical Shandong EHualu Information Technology Co ltd
Priority to CN202211125793.0A priority Critical patent/CN115690452A/en
Publication of CN115690452A publication Critical patent/CN115690452A/en
Pending legal-status Critical Current

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81 - Aquaculture, e.g. of fish

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a high-throughput fish phenotype analysis method and device based on machine vision. The method segments the key parts of the fish body with an image segmentation technique and extracts features from the segmented images, including color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length, mouth width and the like; the combined features are then fed into a random forest classification model for individual identification and phenotype analysis. The device is an automated apparatus built on the fish phenotype analysis method and can, without human intervention, complete a series of tasks such as automatically obtaining a single fish, individual identification, weighing the fish body, photographing the body surface and sampling fin rays. Compared with the traditional approach of netting the individual fish to be measured and measuring it manually with professional equipment, the method offers high efficiency, high accuracy, low cost and the ability to perform individual fish identification and phenotype analysis quickly.

Description

High-throughput fish phenotype analysis method and device based on machine vision
Technical Field
The invention belongs to the technical field of machine learning and image recognition, and particularly relates to a method that segments the important parts of a fish with an image segmentation technique, identifies individual fish with a random forest and extracts phenotypic features.
Background
Phenotype data are very important to researchers: they not only reflect the growth condition of an individual fish but also play an important role in related research such as genetic breeding. In the conventional approach, however, a professional nets the individual fish to be measured and, once the fish is calm, measures it and collects a fin clip for genetic sampling with professional equipment. When the amount of data to be collected is large, this takes a long time; prolonged work or mis-operation can introduce measurement errors, and the results are easily affected by subjective factors such as experience and the external environment, so they are highly subjective, poorly consistent and error-prone, the measured data are inaccurate, and the final result is affected. After measurement the fish are returned to the culture pond; this manual procedure can injure the fish through the stress response that follows removal from the water, places high demands on professional skill and, as the number of measured indices grows, also tests the operator's stamina and technique.
Disclosure of Invention
To address these problems, the invention provides a machine-vision-based method and device for high-throughput individual fish identification and phenotype analysis, aiming to save labor cost and avoid keeping professionals in a state of long-term fatigue. The key parts of the fish body are segmented with an image segmentation technique, features are extracted from the segmented images, and the combined features are then fed into a random forest classification model for individual identification and phenotype analysis. Compared with the traditional manual procedure, the method offers high classification accuracy, low cost and rapid individual identification and phenotype analysis of fish.
To achieve this technical purpose, the individual fish identification and phenotype analysis method comprises the following steps:
(1) Obtaining image data of different individual fish to form a data sample library
First, pictures of the fish are collected with a smartphone. Each fish constitutes one category and about 30 pictures are taken of each fish. To simplify subsequent extraction of individual fish features and training, the images of each individual fish are placed in their own folder, thereby constructing the sample library.
(2) Extracting image characteristics of fish individuals and constructing characteristic vectors
For the individual fish data in the sample library, the color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features are extracted and combined into a feature vector, in preparation for the subsequent individual identification and phenotype analysis of the fish.
Color features: color features are among the most basic features of an image and describe the surface properties of the fish image. Because the surface color distribution of each fish differs and the proportions of the colors differ, and because the color feature describes the proportion of different pixel values in the whole image without regard to the spatial position of each pixel, color can be used as one of the image features for identifying individual fish. A minimal sketch of such a descriptor is given below.
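The sketch below is one way to compute such a position-independent color descriptor as normalized per-channel histograms; the use of OpenCV, a BGR input image and a 16-bin quantization are illustrative assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def color_histogram_feature(bgr_image, bins=16):
    """Proportion of pixels in each intensity bin, per channel,
    ignoring where in the image the pixels sit."""
    parts = []
    for channel in cv2.split(bgr_image):                      # B, G, R
        hist = cv2.calcHist([channel], [0], None, [bins], [0, 256])
        parts.append(hist.flatten() / hist.sum())             # proportions, not counts
    return np.concatenate(parts)                              # 3 * bins values

# feature = color_histogram_feature(cv2.imread("fish.jpg"))
```

The resulting vector can later be concatenated with the texture and morphometric features described below.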
Texture features: texture features are also global features and likewise describe the surface properties of the fish image. A texture feature is not computed statistically at a single pixel but over a region; texture features are rotation-invariant and fairly robust to noise. Here the texture of the fish image is described with the gray-level co-occurrence matrix (GLCM) method and the local binary pattern (LBP) method.
The gray-level co-occurrence matrix method: the GLCM reflects the correlation between neighboring pixels. Several statistics built on it are used as texture classification features: the angular second moment measures how uniform the gray-level distribution is, contrast measures the sharpness of edges, dissimilarity measures local contrast, and entropy measures the complexity of the image. A sketch of these statistics is given below.
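A minimal sketch of these GLCM statistics, assuming a uint8 grayscale input and scikit-image's graycomatrix/graycoprops (an implementation choice, not something specified in the patent); graycoprops has no entropy property, so entropy is computed directly from the normalized matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    """ASM, contrast, dissimilarity and entropy from a normalized GLCM,
    averaged over four directions at distance 1."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, 'ASM').mean()
    contrast = graycoprops(glcm, 'contrast').mean()
    dissimilarity = graycoprops(glcm, 'dissimilarity').mean()
    # Entropy per direction, then averaged.
    probs = glcm.reshape(256 * 256, -1)        # one column per (distance, angle)
    entropy = np.mean([-np.sum(p[p > 0] * np.log2(p[p > 0])) for p in probs.T])
    return np.array([asm, contrast, dissimilarity, entropy])
```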
The local binary pattern method: a 3x3 window of the image is taken as a block, the center pixel of the window is used as the threshold, and the values of the 8 neighboring pixels are compared with it; if a neighboring pixel value is smaller than the threshold, that position is marked 0, otherwise 1. The 8 points of the 3x3 neighborhood thus yield an 8-bit binary number through these comparisons, which gives the LBP value of the window's center pixel, and this value reflects the texture of the region. A sketch of this operator follows.
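A direct implementation of the 3x3 operator described above; the clockwise ordering of the neighbors and the use of a 256-bin code histogram as the final texture feature are illustrative choices.

```python
import numpy as np

def lbp_image(gray):
    """3x3 local binary pattern: threshold each pixel's 8 neighbors against
    the center value and read the results as an 8-bit code."""
    gray = gray.astype(np.int32)
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # 8 neighbors, walked clockwise starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes.astype(np.uint8)

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, used as the texture feature."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return hist / hist.sum()
```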
In individual identification and phenotype analysis of the fish, information such as the eyes, head, tail, body length, body width and total length reflects the characteristics of the fish. The fish images, together with the software-annotated coordinates of the eyes, head, whole body and tail, are fed into an image segmentation network to obtain a segmentation model dedicated to segmenting each part of the fish. With the trained segmentation model, images of key parts such as the eyes, head, whole body and tail of the fish can be obtained. From these images, features such as snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width are computed, and these features together with the ratios between them are combined as features for individual identification and phenotype analysis; a cropping sketch is given below.
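The patent does not name a particular segmentation network, so the sketch below only shows how part crops might be obtained once such a model exists; `segment_parts` is a hypothetical stand-in for the trained model, assumed to return one binary mask per labeled part.

```python
import numpy as np

def crop_parts(image, segment_parts):
    """Crop each segmented fish part to its tight bounding box.

    `segment_parts` is a placeholder for the trained segmentation model;
    it is assumed to map an image to a dict such as
    {'eye': mask, 'head': mask, 'body': mask, 'tail': mask},
    where each mask has the same height and width as the image.
    """
    crops = {}
    for part, mask in segment_parts(image).items():
        ys, xs = np.nonzero(mask)
        if ys.size == 0:                      # part not found in this image
            continue
        crops[part] = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return crops
```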
(3) Training individual classifiers of fish using random forest
The color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features from step (2) are fused and concatenated into a one-dimensional feature vector, and the feature vectors of all fish images in the training sample are used to train a random forest. A random forest is a supervised machine learning algorithm built from decision trees; it is often used for classification and regression problems and, by using ensemble learning to combine many classifiers, provides a reliable solution to complex problems. A training and prediction sketch is given below.
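A minimal training-and-prediction sketch with scikit-learn's RandomForestClassifier; the stand-in data (10 individuals, 30 images each, 64 fused features), the number of trees and the train/test split are illustrative assumptions, not values from the patent.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: in practice X holds the fused color, texture and
# morphometric vectors of step (2) and y the individual identity labels.
rng = np.random.default_rng(0)
X = rng.random((300, 64))
y = np.repeat(np.arange(10), 30)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)                        # train the individual classifier
print("hold-out accuracy:", clf.score(X_test, y_test))

# Identify an unlabeled fish from its fused feature vector.
unknown = rng.random((1, 64))
print("predicted individual:", clf.predict(unknown)[0])
```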
After the random forest training is finished, the corresponding random forest classifier model is obtained; the features of a fish can then be passed into the model for individual identification and phenotype analysis.
(4) The fish to be detected is subjected to individual identification and phenotype analysis through a fish phenotype analysis device
The fish phenotype analysis device comprises an anesthesia pool 1, a fish suction pump 2, a fish suction pipeline 3, a fish phenotype extractor 4, a lower slideway 5 and a recovery pool 6. The fish suction pump 2 automatically draws a fish from the anesthesia pool 1 through the fish suction pipeline 3 into the fish phenotype extractor 4. For an unidentified fish to be measured, image features are extracted by the method of step (2) to form a feature vector, which is fed into the model trained in step (3) to identify the individual fish and obtain its information and phenotypic features. In particular, the fish phenotype extractor 4 can also weigh the fish body and sample fin rays; the sampled fish then passes through the lower slideway 5 into the recovery pool 6 to recover from anesthesia.
The method segments the key parts of the fish body with an image segmentation technique, extracts features from the segmented images and feeds the combined features into a random forest classification model for individual identification and phenotype analysis. Compared with the traditional manual procedure, it offers high classification accuracy, low cost and rapid individual identification and phenotype analysis of fish.
Drawings
FIG. 1 is a flow chart of a fish individual identification and fish phenotype data analysis method provided by an embodiment of the invention;
FIG. 2 is an example of phenotype data of fish provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of a fish phenotype analysis apparatus according to an embodiment of the present invention.
The reference signs are: 1 anesthesia pool, 2 fish suction pump, 3 fish suction pipeline, 4 fish phenotype extractor, 5 lower slideway, 6 recovery pool
Detailed Description
Specific embodiments of the machine-vision-based high-throughput fish phenotype analysis method and device are explained below with reference to the drawings, but it should be noted that the practice of the present invention is not limited to the following embodiments.
Example one
The specific steps of the method of the invention are shown in FIG. 1; the main flow comprises the following steps:
(1) Obtaining image data of different individual fish to form a data sample library
First, pictures of the fish are collected with a smartphone. Each fish constitutes one category and about 30 pictures are taken of each fish. To simplify subsequent extraction and training of individual fish features, the images of each individual fish are placed in their own folder, thereby constructing the sample library.
(2) Extracting image characteristics of fish individuals and constructing characteristic vectors
For the individual fish data in the sample library, the color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features are extracted and combined into a feature vector.
Color features: color features are among the most basic features of an image and describe the surface properties of the fish image. The surface color distribution differs from fish to fish: some fish are darker, others lighter. The color distribution describes the proportion of different pixel values in the whole image without regard to the spatial position of each pixel, so color can be used as one of the image features for identifying individual fish.
Texture features: texture features are also global features and likewise describe the surface properties of the fish image. A texture feature is not computed statistically at a single pixel but over a region; texture features are rotation-invariant and fairly robust to noise. Here the texture of the fish image is described with the gray-level co-occurrence matrix (GLCM) method and the local binary pattern (LBP) method.
The gray-level co-occurrence matrix method: the GLCM reflects the correlation between neighboring pixels. Several statistics built on it are used as texture classification features: the angular second moment measures how uniform the gray-level distribution is, contrast measures the sharpness of edges, dissimilarity measures local contrast, and entropy measures the complexity of the image.
The local binary pattern method: a 3x3 window of the image is taken as a block, the center pixel of the window is used as the threshold, and the values of the 8 neighboring pixels are compared with it; if a neighboring pixel value is smaller than the threshold, that position is marked 0, otherwise 1. The 8 points of the 3x3 neighborhood thus yield an 8-bit binary number through these comparisons, which gives the LBP value of the window's center pixel, and this value reflects the texture of the region.
In individual identification and phenotype analysis of fish, the eyes, head, tail, body length, body width and total length reflect the characteristics of the fish. The phenotypic measurements are marked in FIG. 2: AB is the snout length, EF is the caudal peduncle length (snout to the base of the caudal fin), JK is the caudal peduncle height (JK spans the narrowed part of the caudal peduncle, its shortest height), BC is the eye diameter, AF is the body length, LG is the tail length, AD is the head length, AG is the total length, CD is the postorbital head length, and HI is the body height. The fish images, together with the software-annotated coordinates of the eyes, head, whole body and tail, are fed into an image segmentation network to obtain a segmentation model dedicated to segmenting each part of the fish. With the trained segmentation model, images of key parts such as the eyes, head, whole body and tail can be obtained. From these images, the phenotypic measurements of the fish, such as snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width, are computed, and these measurements together with the ratios between them are used as features for individual identification and phenotype analysis; a measurement sketch from the labeled landmarks is given below.
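Given the landmark coordinates of FIG. 2 (which would in practice come from the segmented part images), the measurements reduce to point-to-point distances; the letter-to-coordinate mapping and the `mm_per_pixel` calibration factor are illustrative assumptions, not details given in the patent.

```python
import math

def dist(p, q):
    """Euclidean distance in pixels between two landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def morphometrics(pts, mm_per_pixel=1.0):
    """Measurements from the landmark letters of FIG. 2.

    `pts` maps the letters 'A'..'L' to (x, y) image coordinates;
    `mm_per_pixel` is an assumed calibration factor (e.g. from a
    reference object in the scene).
    """
    m = {
        'snout_length':            dist(pts['A'], pts['B']),  # AB
        'eye_diameter':            dist(pts['B'], pts['C']),  # BC
        'postorbital_head_length': dist(pts['C'], pts['D']),  # CD
        'head_length':             dist(pts['A'], pts['D']),  # AD
        'body_length':             dist(pts['A'], pts['F']),  # AF
        'total_length':            dist(pts['A'], pts['G']),  # AG
        'caudal_peduncle_length':  dist(pts['E'], pts['F']),  # EF
        'caudal_peduncle_height':  dist(pts['J'], pts['K']),  # JK
        'tail_length':             dist(pts['L'], pts['G']),  # LG
        'body_height':             dist(pts['H'], pts['I']),  # HI
    }
    m = {name: value * mm_per_pixel for name, value in m.items()}
    # Scale-free ratios are combined with the raw measurements as features.
    m['head_to_body_ratio'] = m['head_length'] / m['body_length']
    m['eye_to_head_ratio'] = m['eye_diameter'] / m['head_length']
    return m
```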
(3) Training individual classifiers of fish using random forest
The color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features from step (2) are fused and concatenated into a one-dimensional feature vector, and the feature vectors of all fish images in the training sample are used to train a random forest. A random forest is a supervised machine learning algorithm built from decision trees; it is often used for classification and regression problems and, by using ensemble learning to combine many classifiers, provides a reliable solution to complex problems.
After the random forest training is finished, the corresponding random forest classifier model is obtained; the features of a fish can then be passed into the model for individual identification and phenotype analysis.
(4) The fish can then be identified and its phenotype analyzed. The whole device, from fish intake through identification and phenotype analysis to fish outlet, is shown schematically in FIG. 3. The fish suction pump 2 automatically draws the fish from the anesthesia pool 1 through the fish suction pipeline 3 into the fish phenotype extractor 4, where a picture is taken; image features are then extracted by the method of step (2) to form a feature vector. The extracted feature vector is fed into the model trained in step (3) to identify the individual fish and obtain its information and phenotypic features. In particular, the fish phenotype extractor 4 can also weigh the fish body and sample fin rays; the sampled fish then passes through the lower slideway 5 into the recovery pool 6 to recover from anesthesia. A control-loop sketch of this workflow is given below.
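The following sketch outlines the device workflow as a control loop; every hardware interface (`pump`, `camera`, `scale`, `sampler`, `gate`) is a hypothetical placeholder for the components numbered 1-6 in FIG. 3, not an API disclosed by the patent.

```python
def process_one_fish(device, extract_features, classifier):
    """One pass of the device workflow for a single fish."""
    device.pump.transfer()                            # 2-3: anesthesia pool -> extractor
    image = device.camera.capture()                   # photograph inside extractor 4
    weight = device.scale.read()                      # weigh the fish body
    features = extract_features(image)                # step (2): fused feature vector
    individual = classifier.predict([features])[0]    # step (3): identify the individual
    device.sampler.take_fin_ray()                     # fin-ray sampling
    device.gate.release()                             # 5-6: slideway to recovery pool
    return individual, weight, features
```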
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (1)

1. A high-throughput fish phenotype analysis method and device based on machine vision, characterized by comprising the following steps:
(1) Obtaining image data of different individual fish to form a data sample library
First, pictures of the fish are collected with a smartphone. Each fish constitutes one category and about 30 pictures are taken of each fish. To simplify subsequent extraction and training of individual fish features, the images of each individual fish are placed in their own folder, thereby constructing the sample library.
(2) Extracting image characteristics of fish individuals and constructing characteristic vectors
For the individual fish data in the sample library, the color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features are extracted and combined into a feature vector.
Color features: color features are among the most basic features of an image and describe the surface properties of the fish image. The surface color distribution differs from fish to fish: some fish are darker, others lighter. The color distribution describes the proportion of different pixel values in the whole image without regard to the spatial position of each pixel, so color can be used as one of the image features for identifying individual fish.
Texture features: texture features are global features that likewise describe the surface properties of the fish image. A texture feature is not computed statistically at a single pixel but over a region; texture features are rotation-invariant and fairly robust to noise. Here the texture of the fish image is described with the gray-level co-occurrence matrix (GLCM) method and the local binary pattern (LBP) method.
The gray-level co-occurrence matrix method: the GLCM reflects the correlation between neighboring pixels. Several statistics built on it are used as texture classification features: the angular second moment measures how uniform the gray-level distribution is, contrast measures the sharpness of edges, dissimilarity measures local contrast, and entropy measures the complexity of the image.
The local binary pattern method: a 3x3 window of the image is taken as a block, the center pixel of the window is used as the threshold, and the values of the 8 neighboring pixels are compared with it; if a neighboring pixel value is smaller than the threshold, that position is marked 0, otherwise 1. The 8 points of the 3x3 neighborhood thus yield an 8-bit binary number through these comparisons, which gives the LBP value of the window's center pixel, and this value reflects the texture of the region.
Phenotypic features such as the eyes, head, tail, body length, body width and total length reflect the characteristics of the fish. The fish images, together with the software-annotated coordinates of the eyes, head, whole body and tail, are fed into an image segmentation network to obtain a segmentation model dedicated to segmenting each part of the fish. With the trained segmentation model, images of key parts such as the eyes, head, whole body and tail can be obtained. From these images, phenotypic features such as snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width are computed, and these features together with the ratios between them are used for individual identification and phenotype analysis.
(3) Training individual classifiers of fish using random forest
The color, texture, snout length, caudal peduncle height, head length, body height, postorbital head length, total length, interorbital distance, body width, body length, eye diameter, tail length and mouth width features from step (2) are fused and concatenated into a one-dimensional feature vector, and the feature vectors of all fish images in the training sample are used to train a random forest. A random forest is a supervised machine learning algorithm built from decision trees; it is often used for classification and regression problems and, by using ensemble learning to combine many classifiers, provides a reliable solution to complex problems.
After the random forest training is finished, the corresponding random forest classifier model is obtained; the features of a fish can then be passed into the model for individual identification and phenotype analysis.
(4) The fish to be detected is subjected to individual identification and phenotype analysis by a fish phenotype analysis device
The fish phenotype analysis device comprises an anesthesia pool, a fish suction pump, a fish suction pipeline, a fish phenotype extractor, a lower slideway and a recovery pool. The fish suction pump automatically draws a fish from the anesthesia pool through the fish suction pipeline into the fish phenotype extractor. For an unidentified fish to be measured, image features are extracted by the method of step (2) to form a feature vector. The extracted feature vector is fed into the model trained in step (3) to identify the individual fish and obtain its information and phenotypic features. In particular, the fish phenotype extractor can also weigh the fish body and sample fin rays, and the sampled fish enters the recovery pool to recover from anesthesia.
CN202211125793.0A 2022-09-20 2022-09-20 High-throughput fish phenotype analysis method and device based on machine vision Pending CN115690452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211125793.0A CN115690452A (en) 2022-09-20 2022-09-20 High-throughput fish phenotype analysis method and device based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211125793.0A CN115690452A (en) 2022-09-20 2022-09-20 High-throughput fish phenotype analysis method and device based on machine vision

Publications (1)

Publication Number Publication Date
CN115690452A true CN115690452A (en) 2023-02-03

Family

ID=85062352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211125793.0A Pending CN115690452A (en) 2022-09-20 2022-09-20 High-throughput fish phenotype analysis method and device based on machine vision

Country Status (1)

Country Link
CN (1) CN115690452A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778474A (en) * 2023-06-02 2023-09-19 河南农业大学 Intelligent phenotype analyzer for tomato fruits


Similar Documents

Publication Publication Date Title
Vink et al. Efficient nucleus detector in histopathology images
JP2022180419A (en) Image analysis method, device, program, and method for manufacturing trained deep learning algorithm
Zion et al. Real-time underwater sorting of edible fish species
CN108288506A (en) A kind of cancer pathology aided diagnosis method based on artificial intelligence technology
CN109145808B (en) Tuna identification method based on self-adaptive fish body bending model
CN109785310B (en) Automatic staging system based on breast lymph node panoramic image calculation
CN109711389B (en) Lactating sow posture conversion recognition method based on Faster R-CNN and HMM
CN110189383B (en) Traditional Chinese medicine tongue color and fur color quantitative analysis method based on machine learning
CN111462058B (en) Method for rapidly detecting effective rice ears
CN112464843A (en) Accurate passenger flow statistical system, method and device based on human face human shape
CN111696150A (en) Method for measuring phenotypic data of channel catfish
CN112257702A (en) Crop disease identification method based on incremental learning
CN113269191A (en) Crop leaf disease identification method and device and storage medium
CN111666897A (en) Oplegnathus punctatus individual identification method based on convolutional neural network
CN108596176B (en) Method and device for identifying diatom types of extracted diatom areas
CN115690452A (en) High-throughput fish phenotype analysis method and device based on machine vision
CN111161295A (en) Background stripping method for dish image
CN107481243B (en) Sheep body size detection method based on sheep top view
Sako et al. Computer image analysis and classification of giant ragweed seeds
CN114581709A (en) Model training, method, apparatus, and medium for recognizing target in medical image
CN111612749B (en) Focus detection method and device based on lung image
CN112966698A (en) Freshwater fish image real-time identification method based on lightweight convolutional network
CN111753903A (en) Soybean variety identification method based on vein topological characteristics
CN109118540B (en) Sturgeon rapid statistical method based on ridge line extraction
CN116416523A (en) Machine learning-based rice growth stage identification system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination