CN111428785B - Puffer fish individual identification method based on deep learning
- Publication number: CN111428785B (application CN202010207890.9A)
- Authority
- CN
- China
- Prior art keywords
- puffer fish
- segmentation
- training
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/757—Matching configurations of points or features
- G06N3/08—Learning methods
- G06T3/40—Scaling the whole image or part thereof
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/90—
- G06T7/12—Edge-based segmentation
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30204—Marker
- Y02A40/81—Aquaculture, e.g. of fish
Abstract
A puffer fish individual identification method based on deep learning relates to computer vision and individual identification. A puffer fish image is collected and preprocessed, including puffer fish image mask labeling and data enhancement. An image segmentation model is trained on the preprocessed puffer fish data set to obtain a trained segmentation model. The puffer fish images intended for feature extraction are sent into the trained segmentation model, and the puffer fish images obtained by segmentation are aligned. A feature extraction network is then trained: the aligned puffer fish images are divided into a training set and a verification set, and the feature extraction network is trained on the training set to obtain a feature extraction model for extracting puffer fish features. Finally, a test picture is put into the trained segmentation model, the aligned segmentation result is sent into the trained feature extraction model to obtain a feature vector of the input picture, and the distance between this feature vector and the puffer fish feature vectors in the data set is calculated to obtain the individual information of the puffer fish in the input picture.
Description
Technical Field
The invention relates to the technical field of computer vision and individual identification, in particular to a puffer fish individual identification method based on deep learning.
Background
In the field of computer vision, individual identification mainly refers to judging whether the individuals in two pictures are the same individual (Trigueros D S, Meng L, Hartnett M. Face Recognition: From Traditional to Deep Learning Methods [J]. 2018). More specifically, in face recognition the face in a picture is first detected and its features are extracted; the features are then compared, and whether two pictures show the same person is judged from the comparison result. Face recognition technology can rapidly determine a person's identity and improves the efficiency of person search and tracking.
Puffer fish refers to a group of fishes whose viscera, and even muscle, can accumulate tetrodotoxin (Research progress of tetrodotoxin [J]. Fujian Animal Husbandry and Veterinary Medicine, 2019, 41(04): 23-24+29). In recent years the scale of the puffer fish breeding industry in China has gradually expanded, yet poisoning caused by eating puffer fish still occurs every year, so a technology capable of tracking and identifying puffer fish is urgently needed. Takifugu bimaculatus is a species unique to China and the main breeding species in Fujian province (Zheng Meifen, Zhou Jiamin. A summary of Takifugu bimaculatus biology and artificial breeding [J]. Hebei Fishery, 2003 (04): 19-44). The back of Takifugu bimaculatus bears unique texture characteristics, which provides favorable conditions for extracting individual Takifugu bimaculatus features by deep learning.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a puffer fish individual identification method based on deep learning, which uses an image segmentation technique from computer vision to obtain individual puffer fish images and then uses a convolutional neural network to extract puffer fish features for individual identification.
The invention comprises the following steps:
1) Acquiring a puffer fish picture with a smartphone, and preprocessing the acquired picture, including puffer fish image mask labeling and data enhancement;
2) Training an image segmentation model on the preprocessed puffer fish data set to obtain a trained segmentation model;
3) Sending the puffer fish images intended for feature extraction into the trained segmentation model, and aligning the puffer fish images obtained by segmentation;
4) Training a feature extraction network: dividing the aligned puffer fish images into a training set and a verification set, and training the feature extraction network on the training set to obtain a feature extraction model for extracting puffer fish features;
5) Putting the test picture into the trained segmentation model, sending the aligned segmentation result into the trained feature extraction model to obtain the feature vector of the input picture, and calculating the distance between this feature vector and the puffer fish feature vectors in the data set to obtain the individual information of the puffer fish in the input picture.
In step 1), the specific method for preprocessing the collected puffer fish picture may be:
step 1.1: labeling the edges of the puffer fish in the picture with a labeling tool, and exporting the labeled content to a json file;
step 1.2: performing image enhancement on the labeled image, using random sharpness enhancement, random color enhancement, random contrast enhancement, random brightness enhancement and image fusion, so that each labeled image is expanded into five labeled images.
In step 2), the specific method for training the image segmentation model on the preprocessed puffer fish data set may be:
step 2.1: modifying the batch normalization layers in the model into group normalization layers, adjusting the initialization accordingly, and skipping the batch normalization initialization;
step 2.2: initializing the network with pre-trained weights, i.e., weights trained on the COCO or PASCAL VOC data set, and randomly initializing the final output layer;
step 2.3: sending the preprocessed training samples and the corresponding labels to the network model, and training repeatedly through forward propagation and backward propagation until the maximum number of iterations is reached, so as to minimize the loss function value.
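Step 2.1's swap of batch normalization for group normalization can be sketched in PyTorch (an assumption: the patent does not name a framework). `bn_to_gn` is a hypothetical helper that recursively replaces every `BatchNorm2d` with a `GroupNorm` over the same number of channels:

```python
import torch
import torch.nn as nn

def bn_to_gn(module: nn.Module, num_groups: int = 32) -> nn.Module:
    """Recursively replace BatchNorm2d layers with GroupNorm layers.

    num_groups=32 is an illustrative choice; channel counts must be
    divisible by it.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)  # descend into nested modules
    return module
```

After the swap, pre-trained backbone weights can still be loaded for the remaining layers, which matches step 2.2's instruction to skip batch normalization during initialization.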
In step 3), the specific method for aligning the segmented puffer fish image may be:
step 3.1: sending the preprocessed image into the trained segmentation model to obtain its segmentation result, and setting the area outside the segmentation mask to 0, i.e., black;
step 3.2: computing the segmentation edge with OpenCV, and then computing the minimum circumscribed rectangle of that edge, again with OpenCV;
step 3.3: finding the short sides, and the straight lines they lie on, from the four corner points of the minimum circumscribed rectangle;
step 3.4: moving inward from each short side toward the rectangle's center by 1/4 of the long side's length, the moving direction being parallel to the long sides, and computing the mask area within each of the two resulting end strips; the strip with the smaller area is the tail and the strip with the larger area is the head;
step 3.5: rearranging the four corner points of the rectangle according to the head and tail positions, so that after affine transformation the head of the puffer fish points horizontally to the right and the tail horizontally to the left;
step 3.6: cropping the aligned original puffer fish image according to the size of the rectangle obtained by OpenCV.
In step 4), the specific method for training the feature extraction network may be:
step 4.1: putting all pictures of each puffer fish into the trained segmentation model to obtain segmentation results;
step 4.2: aligning all segmentation results, computing the average length and width of the puffer fish, and scaling all aligned pictures to that average size;
step 4.3: sending the aligned pictures into a convolutional neural network, letting the network learn to extract puffer fish features, and training iteratively to obtain a model for extracting puffer fish features.
In step 5), the specific method for obtaining the individual puffer fish information in the input picture may be:
step 5.1: putting the test picture into the trained segmentation model to obtain a segmentation result, aligning and scaling it, and sending the result into the trained feature extraction model to obtain the feature vector of the puffer fish in the test picture;
step 5.2: comparing the obtained feature vector with the stored feature vectors, using either cosine distance or Euclidean distance, and returning the individual information of the puffer fish in the test picture according to the comparison result.
The invention provides a method that obtains individual puffer fish images with an image segmentation model from deep learning, extracts puffer fish features with a convolutional neural network, and then compares the extracted features with stored features for individual identification. A collected picture is preprocessed and sent into a segmentation model trained on the puffer fish data set to segment the individual puffer fish. The minimum circumscribed rectangle of the segmentation mask is computed, the head and tail sides are distinguished via the long and short sides of this rectangle, and the exact head and tail positions are judged by comparing the mask areas in the two end strips obtained by moving 1/4 of the long side's length inward from each short side. According to the head and tail positions, an affine transformation is applied to the rectangle so that its long side is horizontal and its short side vertical, and the non-puffer-fish area is rendered black in the output segmented picture. The segmented picture is then put into the trained feature extraction network for feature extraction, and finally the extracted features are compared with the stored features to obtain the individual puffer fish information in the input picture. Through these steps, individual puffer fish can be identified. It should be noted that in the present invention the models are trained separately, but the testing process is end-to-end.
The beneficial effects of the invention are as follows: a model for individual identification of Takifugu bimaculatus is provided. The image segmentation model extracts the pattern on the back of the puffer fish, and the convolutional neural network extracts a discriminative feature vector from which individual puffer fish can be identified. This demonstrates the feasibility of individual puffer fish identification by deep learning, provides technical support for tracing puffer fish with computer vision, and is conducive both to promoting the puffer fish market and to improving the safety of eating puffer fish.
Drawings
Fig. 1 is an example annotation from the segmentation data set.
Fig. 2 is an example of a puffer fish image after data enhancement.
Fig. 3 is the flow chart of the puffer fish individual identification method based on deep learning.
Fig. 4 is an example segmentation result of a puffer fish after image segmentation.
Fig. 5 is an example of a puffer fish segmentation image after alignment.
FIG. 6 shows the structure of the Mask R-CNN model used in the embodiment.
Detailed Description
The following embodiments will further describe the technical solutions of the present invention with reference to the accompanying drawings.
The embodiment of the invention comprises the following steps:
1) Acquiring a puffer fish picture by using a smart phone, and preprocessing the acquired puffer fish picture, including puffer fish image mask labeling and data enhancement;
step 1.1: marking the edge of the puffer fish in the picture by using a marking tool, and then exporting the marked content into a json file;
step 1.2: and carrying out image enhancement on the marked image, wherein the using modes comprise random sharpness enhancement, random color enhancement, random contrast enhancement, random brightness enhancement and picture fusion, and the enhanced marked picture can be changed into five pictures.
2) Training an image segmentation model on the preprocessed puffer fish data set to obtain a trained segmentation model;
step 2.1: modifying a batch normalization layer in the model into a group normalization layer, adjusting initialization content, and skipping the initialization of batch normalization;
step 2.2: initializing the network with pre-trained weights, i.e., weights trained on the COCO or PASCAL VOC data set, and randomly initializing the final output layer;
step 2.3: and (3) sending the preprocessed training sample and the corresponding label to the network model every time, and repeatedly training through two steps of forward propagation and backward propagation until the maximum iteration number is reached so as to minimize the loss function value.
3) Sending the puffer fish images intended for feature extraction into the trained segmentation model, and aligning the puffer fish images obtained by segmentation;
step 3.1: sending the preprocessed image into a trained segmentation model to obtain a segmentation result of the image, and setting an area except for a segmentation mask result in the image to be 0, namely black;
step 3.2: calculating a segmentation edge by using OpenCV for the segmentation result, and then calculating the minimum circumscribed rectangle of the segmentation edge by using OpenCV again;
step 3.3: finding out the short sides and the straight lines corresponding to the short sides according to the four points of the minimum circumscribed rectangle;
step 3.4: moving inward from each short side toward the rectangle's center by 1/4 of the long side's length, the moving direction being parallel to the long sides, and computing the mask area within each of the two resulting end strips; the strip with the smaller area is the tail and the strip with the larger area is the head;
step 3.5: rearranging four coordinate points of the rectangle according to the positions of the head and the tail, so that the head of the puffer fish obtained by affine transformation is horizontal to the right, and the tail is horizontal to the left;
step 3.6: and cutting the aligned original puffer fish image according to the size of the rectangle obtained by the OpenCV.
4) Training a feature extraction network, namely dividing the aligned puffer fish images into a training set and a verification set, and training the feature extraction network on the training set to obtain a feature extraction model for extracting the puffer fish features;
step 4.1: putting all the pictures of each globefish into a trained segmentation model to obtain a segmentation result;
step 4.2: aligning all the segmentation results, counting the average length and width of the puffer fish, and zooming all the aligned pictures according to the average length and width;
step 4.3: and sending the aligned pictures into a convolutional neural network, learning how to extract the characteristics of the puffer fish by the network, and performing iterative training to obtain a model for extracting the characteristics of the puffer fish.
5) Putting the test picture into the trained segmentation model, sending the aligned segmentation result into the trained feature extraction model to obtain the feature vector of the input picture, and calculating the distance between this feature vector and the puffer fish feature vectors in the data set to obtain the individual information of the puffer fish in the input picture.
Step 5.1: putting the picture for testing into a trained segmentation model to obtain a segmentation result, aligning and scaling the segmentation result, and sending the alignment result into the trained feature extraction model to obtain the feature vector of the puffer fish in the test picture;
step 5.2: and comparing the obtained characteristic vector with the existing characteristic vector, wherein the cosine distance or the Euclidean distance can be used for comparison, and the individual information of the puffer fish in the test picture is returned according to the comparison result.
Fig. 3 is the flow chart of puffer fish individual identification based on deep learning; the specific implementation of each part is further illustrated below with reference to the drawings.
Step one, labeling an image segmentation data set and enhancing data.
The specific method for labeling the data is as follows: first, the data to be labeled is imported into a labeling tool such as labelme or VIA; a polygon label is then created and the edge of the puffer fish is traced with the polygon, as finely as possible. A labeling example is shown in fig. 1.
The specific method for data enhancement is as follows: the four single-image enhancement modes are implemented with the ImageEnhance module of PIL (Python Imaging Library). The picture fusion mode requires random background pictures, which can be downloaded from the internet; the background picture is scaled to the size of the puffer fish picture to be fused, both pictures are opened in RGBA (red, green, blue, alpha) mode, and the two are superimposed with a randomly chosen background transparency. An example after contrast enhancement is shown in fig. 2.
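A minimal sketch of this enhancement step, assuming Pillow (PIL) as named above; the enhancement-factor and transparency ranges are illustrative assumptions, and `enhance_variants` and `fuse` are hypothetical helper names:

```python
import random
from PIL import Image, ImageEnhance

def enhance_variants(img: Image.Image) -> list:
    """Apply the four random single-image enhancements: sharpness,
    color, contrast, brightness (factor range is an assumption)."""
    variants = []
    for enhancer_cls in (ImageEnhance.Sharpness, ImageEnhance.Color,
                         ImageEnhance.Contrast, ImageEnhance.Brightness):
        factor = random.uniform(0.5, 1.5)
        variants.append(enhancer_cls(img).enhance(factor))
    return variants

def fuse(fish_img: Image.Image, background_img: Image.Image) -> Image.Image:
    """Blend a random background into the fish picture: scale the
    background to the fish picture's size, open both as RGBA, and
    superimpose with a random background transparency."""
    bg = background_img.resize(fish_img.size).convert("RGBA")
    fg = fish_img.convert("RGBA")
    alpha = random.uniform(0.1, 0.4)  # assumed transparency range
    return Image.blend(fg, bg, alpha)
```

Together with the original, these five outputs per labeled picture reproduce the one-to-five expansion described in step 1.2.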
And step two, training a segmentation model.
The specific process of model training is as follows: first, a deep convolutional neural network capable of image segmentation is selected; in this embodiment Mask R-CNN is chosen, whose model structure is shown in FIG. 6. The enhanced data set is divided into a training set and a test set at a ratio of 8:2, and the pictures and labels are fed into the segmentation model. Each picture sent into the model is preprocessed by scaling and normalization, and the segmentation model is obtained after iterative training. With the learned parameters, the puffer fish in a picture can be segmented; an example segmentation result is shown in fig. 4.
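The 8:2 split described above can be sketched with the standard library; `split_dataset` is a hypothetical helper, and the fixed seed is an assumption made for reproducibility:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle the sample list and split it into training and test
    sets at the given ratio (8:2 by default, as in the embodiment)."""
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]
```

The same helper would serve for the later feature-network split, where a further portion is held out for registration verification.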
Step three, aligning the puffer fish images.
The specific alignment method is as follows: a picture containing a puffer fish is sent into the trained segmentation model to obtain a segmentation result like that in fig. 4. Because the labeled tail area of the puffer fish is small and the head area is large, the head and tail positions can be judged from the segmentation result, and the puffer fish can be aligned accordingly. First, the edge information in the segmentation result is found with findContours in OpenCV and the largest contour is kept; the minimum circumscribed rectangle of that contour is then computed with minAreaRect. From the four corner points of the rectangle, the long and short sides are identified: the long sides necessarily correspond to the flanks of the puffer fish, and the short sides to its head or tail. Moving inward along the long sides from each short side by 1/4 of the long side's length yields two small rectangles; the contour areas inside them are computed with contourArea in OpenCV, where the smaller area indicates the tail and the larger the head. The puffer fish part corresponding to each corner point of the rectangle is thus known, the four corner points are reordered into the desired post-alignment order, the alignment is completed with an affine transformation, and finally the puffer fish is cropped according to the size of the rectangle. The alignment result is shown in fig. 5.
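The head/tail decision at the heart of this alignment can be sketched in pure NumPy for the simplified axis-aligned case (the embodiment handles rotated rectangles via OpenCV's minAreaRect; `head_tail_from_mask` is a hypothetical helper written under that simplifying assumption):

```python
import numpy as np

def head_tail_from_mask(mask: np.ndarray):
    """Decide which end of an axis-aligned binary fish mask is the head.

    As in step three, a strip 1/4 of the fish's length is taken at each
    end; the end strip with the larger foreground area is the head and
    the one with the smaller area is the tail. Assumes the fish already
    lies along the horizontal axis. Returns (head_side, tail_side).
    """
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max()
    quarter = (x1 - x0 + 1) // 4
    left_area = int(mask[:, x0:x0 + quarter].sum())
    right_area = int(mask[:, x1 - quarter + 1:x1 + 1].sum())
    return ("left", "right") if left_area > right_area else ("right", "left")
```

In the full pipeline, this decision determines how the rectangle's four corner points are reordered before the affine transformation that turns the head to the right.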
And step four, training a feature extraction network.
The specific process of training the feature extraction network is as follows: the aligned and scaled puffer fish pictures are divided into a training set and a test set, with some puffer fish reserved for registration verification. The training set is fed into a convolutional neural network for training; during training every picture sent into the network is normalized, and the feature extraction model is obtained after repeated iterative training.
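The per-picture normalization mentioned above might look like the following NumPy sketch; the fallback to per-image statistics is an assumption, since the embodiment does not specify the dataset mean and standard deviation:

```python
import numpy as np

def normalize_image(img: np.ndarray, mean=None, std=None) -> np.ndarray:
    """Scale a uint8 image to [0, 1] and standardize per channel.

    When dataset-wide mean/std are not supplied, the image's own
    per-channel statistics are used (an assumption for illustration).
    """
    img = img.astype(np.float32) / 255.0
    if mean is None:
        mean = img.mean(axis=(0, 1))
    if std is None:
        std = img.std(axis=(0, 1)) + 1e-8  # avoid division by zero
    return (img - mean) / std
```

Each aligned, scaled puffer fish picture would pass through this step before entering the convolutional network.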
And step five, comparing characteristics.
The specific feature comparison method is as follows: after the segmented and aligned puffer fish picture is sent into the feature extraction network, a feature vector is obtained, of 128 or 256 dimensions. The obtained feature vector is compared with the puffer fish feature vectors already stored in a file by computing the Euclidean distance or cosine distance between the two vectors. Taking the Euclidean distance as an example, the smaller the result, the closer the two vectors and the more likely they belong to the same puffer fish; conversely, a larger distance makes this less likely. Finally, the information of the puffer fish in the input picture is returned according to the comparison result.
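The comparison step can be sketched in NumPy; `identify` is a hypothetical lookup over a gallery of registered feature vectors, using the Euclidean distance discussed above (cosine distance is included as the stated alternative):

```python
import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, gallery: dict):
    """Return the ID of the registered puffer fish whose stored feature
    vector is closest to the query vector, plus that distance."""
    best_id, best_d = None, float("inf")
    for fish_id, vec in gallery.items():
        d = euclidean(query, vec)
        if d < best_d:
            best_id, best_d = fish_id, d
    return best_id, best_d
```

A threshold on the returned distance (not specified in the patent) would decide whether the query fish is considered registered at all.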
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: numerous changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (2)
1. A puffer fish individual identification method based on deep learning, characterized by comprising the following steps:
1) Acquiring a puffer fish picture by using a smart phone, and preprocessing the acquired puffer fish picture, including puffer fish image mask labeling and data enhancement;
2) Training an image segmentation model on the preprocessed puffer fish data set to obtain a trained segmentation model;
the specific method for training the image segmentation model on the preprocessed puffer fish data set comprises the following steps:
step 2.1: modifying a batch normalization layer in the model into a group normalization layer, adjusting initialization content, and skipping the initialization of batch normalization;
step 2.2: initializing the network with pre-trained weights, i.e., weights trained on the COCO or PASCAL VOC data set, and randomly initializing the final output layer;
step 2.3: sending the preprocessed training sample and the corresponding label to the network model each time, and repeatedly training through a forward propagation step and a backward propagation step until the maximum iteration times is reached so as to minimize the loss function value;
3) Feeding the puffer fish images used for feature extraction into the trained segmentation model, and aligning the segmented puffer fish images;
the specific method for aligning the segmented puffer fish images comprises the following steps:
step 3.1: feeding the preprocessed image into the trained segmentation model to obtain the segmentation result, and setting the region of the image outside the segmentation mask to 0, i.e., black;
step 3.2: computing the segmentation edge from the segmentation result with OpenCV, and then computing the minimum circumscribed rectangle of that edge, again with OpenCV;
step 3.3: identifying the two short sides, and the straight lines on which they lie, from the four corner points of the minimum circumscribed rectangle;
step 3.4: moving inward from each of the two short sides toward the center of the rectangle by 1/4 of the long-side length, in a direction parallel to the long sides, and computing the mask area within each resulting end region; the region with the smaller area is the position of the tail, and the region with the larger area is the position of the head;
step 3.5: reordering the four corner points of the rectangle according to the positions of the head and the tail, so that after affine transformation the head of the puffer fish points horizontally to the right and the tail horizontally to the left;
step 3.6: cropping the aligned original puffer fish image according to the size of the rectangle obtained by OpenCV;
4) Training a feature extraction network, namely dividing the aligned puffer fish images into a training set and a verification set, and training the feature extraction network on the training set to obtain a feature extraction model for extracting the puffer fish features;
the specific method for training the feature extraction network comprises the following steps:
step 4.1: feeding all the pictures of each puffer fish into the trained segmentation model to obtain segmentation results;
step 4.2: aligning all the segmentation results, computing the average length and width of the puffer fish, and scaling all the aligned pictures to that average length and width;
step 4.3: feeding the aligned pictures into a convolutional neural network, which learns to extract puffer fish features, and performing iterative training to obtain a model for extracting the features of the puffer fish;
5) Putting a picture for testing into a trained segmentation model, sending the aligned segmentation result into the trained feature extraction model to obtain a feature vector of an input picture, and calculating the distance between the feature vector and a puffer fish feature vector in a data set to obtain individual information of the puffer fish in the input picture;
the specific method for obtaining the individual information of the puffer fish in the input picture comprises the following steps:
step 5.1: putting the picture for testing into a trained segmentation model to obtain a segmentation result, aligning and scaling the segmentation result, and sending the alignment result into the trained feature extraction model to obtain the feature vector of the puffer fish in the test picture;
step 5.2: and comparing the obtained characteristic vector with the existing characteristic vector, comparing by using a cosine distance or an Euclidean distance, and returning the individual information of the puffer fish in the test picture according to the comparison result.
2. The puffer fish individual identification method based on deep learning according to claim 1, wherein in the step 1), the specific method for preprocessing the collected puffer fish pictures comprises:
step 1.1: marking the edge of the puffer fish in the picture by using a marking tool, and then exporting the marked content into a json file;
step 1.2: performing image enhancement on the labeled images, the enhancement modes comprising random sharpness enhancement, random color enhancement, random contrast enhancement, random brightness enhancement and image fusion, so that each labeled image is expanded into five labeled images.
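The head/tail disambiguation of steps 3.3 and 3.4 of claim 1 compares the mask area in a band at each end of the minimum circumscribed rectangle: the tapering tail covers less area than the head. Assuming the mask has already been rotated upright so that its long axis is horizontal (the rectangle and the rotation themselves would come from OpenCV, which this sketch omits), the band comparison reduces to:

```python
import numpy as np

def head_tail_sides(mask, band_frac=0.25):
    """Given an upright binary fish mask (long axis horizontal), compare
    the foreground area within a band at each end of the bounding box:
    the end with less mask area is the tail, the end with more is the
    head, because the fish body tapers toward the tail.

    Returns (head_side, tail_side), each 'left' or 'right'."""
    h, w = mask.shape
    band = max(1, int(w * band_frac))        # 1/4 of the long side
    left_area = int(mask[:, :band].sum())    # area near the left short side
    right_area = int(mask[:, w - band:].sum())  # area near the right short side
    if left_area >= right_area:
        return "left", "right"   # head on the left, tail on the right
    return "right", "left"
```

With the sides known, the four rectangle corners can be reordered (step 3.5) so the affine transform always puts the head on the right.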
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010207890.9A CN111428785B (en) | 2020-03-23 | 2020-03-23 | Puffer individual identification method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111428785A CN111428785A (en) | 2020-07-17 |
CN111428785B true CN111428785B (en) | 2023-04-07 |
Family
ID=71549053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010207890.9A Active CN111428785B (en) | 2020-03-23 | 2020-03-23 | Puffer individual identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111428785B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112767382A (en) * | 2021-01-29 | 2021-05-07 | 安徽工大信息技术有限公司 | Fry counting method based on deep learning |
CN113096080B (en) * | 2021-03-30 | 2024-01-16 | 四川大学华西第二医院 | Image analysis method and system |
CN113326850B (en) * | 2021-08-03 | 2021-10-26 | 中国科学院烟台海岸带研究所 | Example segmentation-based video analysis method for group behavior of Charybdis japonica |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830862A (en) * | 2018-06-08 | 2018-11-16 | 江南大学 | Based on the crab of image segmentation towards recognition methods |
CN108921058A (en) * | 2018-06-19 | 2018-11-30 | 厦门大学 | Fish identification method, medium, terminal device and device based on deep learning |
CN109086752A (en) * | 2018-09-30 | 2018-12-25 | 北京达佳互联信息技术有限公司 | Face identification method, device, electronic equipment and storage medium |
CN109543663A (en) * | 2018-12-28 | 2019-03-29 | 北京旷视科技有限公司 | A kind of dog personal identification method, device, system and storage medium |
CN109949276A (en) * | 2019-02-28 | 2019-06-28 | 华中科技大学 | A kind of lymph node detection method in improvement SegNet segmentation network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10095950B2 (en) * | 2015-06-03 | 2018-10-09 | Hyperverge Inc. | Systems and methods for image processing |
- 2020-03-23: CN application CN202010207890.9A filed; granted as patent CN111428785B (status: Active)
Non-Patent Citations (2)
Title |
---|
Object Detection and Instance Segmentation Based on Improved Group Normalization; Wang Xu et al.; Journal of Qingdao University of Science and Technology (Natural Science Edition); 2019-12-10; vol. 40, no. 6; pp. 99-105 * |
Jiao Licheng et al. Image Segmentation. In: Concise Artificial Intelligence (Frontier Artificial Intelligence Technology Series). Xidian University Press, 2019, pp. 414-416. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111428785B (en) | Puffer individual identification method based on deep learning | |
Dvornik et al. | Modeling visual context is key to augmenting object detection datasets | |
CN108376244B (en) | Method for identifying text font in natural scene picture | |
CN111582294B (en) | Method for constructing convolutional neural network model for surface defect detection and application thereof | |
CN111860348A (en) | Deep learning-based weak supervision power drawing OCR recognition method | |
CN108985170A (en) | Transmission line of electricity hanger recognition methods based on Three image difference and deep learning | |
CN110163798B (en) | Method and system for detecting damage of purse net in fishing ground | |
CN109359576B (en) | Animal quantity estimation method based on image local feature recognition | |
CN110267101B (en) | Unmanned aerial vehicle aerial video automatic frame extraction method based on rapid three-dimensional jigsaw | |
CN110807775A (en) | Traditional Chinese medicine tongue image segmentation device and method based on artificial intelligence and storage medium | |
CN115272204A (en) | Bearing surface scratch detection method based on machine vision | |
Schnitman et al. | Inducing semantic segmentation from an example | |
CN112085017A (en) | Tea tender shoot image segmentation method based on significance detection and Grabcut algorithm | |
CN110969101A (en) | Face detection and tracking method based on HOG and feature descriptor | |
CN110674823A (en) | Sample library construction method based on automatic identification of deep sea large benthonic animals | |
Xiao et al. | Group-housed pigs and their body parts detection with Cascade Faster R-CNN | |
CN112966698A (en) | Freshwater fish image real-time identification method based on lightweight convolutional network | |
CN112464744A (en) | Fish posture identification method | |
CN112381830A (en) | Method and device for extracting bird key parts based on YCbCr superpixels and graph cut | |
CN116521917A (en) | Picture screening method and device | |
CN113269136B (en) | Off-line signature verification method based on triplet loss | |
CN113936147A (en) | Method and system for extracting salient region of community image | |
CN112926694A (en) | Method for automatically identifying pigs in image based on improved neural network | |
CN109086774B (en) | Color image binarization method and system based on naive Bayes | |
CN113177552A (en) | License plate recognition method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||