CN115359324A - Method for identifying cephalothorax feature points of Eriocheir sinensis - Google Patents

Method for identifying cephalothorax feature points of Eriocheir sinensis

Info

Publication number
CN115359324A
CN115359324A (application CN202211153659.1A)
Authority
CN
China
Prior art keywords
eriocheir sinensis
point
head
characteristic points
chest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211153659.1A
Other languages
Chinese (zh)
Inventor
王书献
张胜茂
郑汉丰
王伟
郭全友
杨矫捷
樊伟
戴阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jiean Information Technology Co ltd
East China Sea Fishery Research Institute Chinese Academy of Fishery Sciences
Original Assignee
Suzhou Jiean Information Technology Co ltd
East China Sea Fishery Research Institute Chinese Academy of Fishery Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jiean Information Technology Co ltd, East China Sea Fishery Research Institute Chinese Academy of Fishery Sciences filed Critical Suzhou Jiean Information Technology Co ltd
Priority to CN202211153659.1A priority Critical patent/CN115359324A/en
Publication of CN115359324A publication Critical patent/CN115359324A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 - Target detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/80 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A 40/81 - Aquaculture, e.g. of fish

Abstract

The invention discloses a method for identifying cephalothorax feature points of Eriocheir sinensis, comprising the following steps: photographing the Eriocheir sinensis cephalothorax with a mobile phone; detecting the cephalothorax in the picture with an object recognition model and cropping it; enhancing the image with methods such as random scaling, random rotation, random occlusion and random brightness/contrast; designing a 37-point Eriocheir sinensis cephalothorax positioning method and manually marking the feature points of the initial pictures with it; designing an end-to-end differentiable neural network model and training it with the marked pictures; and using the trained model to add feature heatmap annotations and feature point annotations to newly taken pictures. The invention markedly reduces the manual workload in Eriocheir sinensis culture observation, quality inspection and similar stages, and greatly improves working efficiency.

Description

Method for identifying cephalothorax feature points of Eriocheir sinensis
Technical Field
The invention relates to the application of deep learning technology in fishery, and in particular to a method for identifying cephalothorax feature points of Eriocheir sinensis.
Background
Eriocheir sinensis, an organism of the phylum Arthropoda, class Crustacea, order Decapoda, genus Eriocheir, remains popular on the market for its distinctive, delicate flavor. As early as the beginning of the 20th century, Eriocheir sinensis began to be introduced from China into Germany. Subsequently, during the 1920s-1930s, its numbers proliferated and its distribution range expanded rapidly to many northern European rivers and estuaries. In 1992, commercial shrimp trawlers south of San Francisco collected the first batch of Eriocheir sinensis on the American west coast. Eriocheir sinensis has become a dish in many parts of the world and has great economic value and potential. In China, Chinese mitten crabs are divided according to growing environment into several populations, such as the Yangtze River, Liaohe River, Oujiang River and Yellow River populations. To study the morphological expression of the crab's various attributes, scholars in many fields have researched its morphology thoroughly, but a classification method that is convenient, rapid and suitable for batch operation has not yet been established. Computer image recognition based on feature point detection can provide a new technical path for solving this problem.
Feature point detection is an important research branch of computer vision, widely applied in target matching, target tracking, three-dimensional reconstruction and other fields. Traditional feature point detection commonly uses corners and similar structures as salient point features; widely applied algorithms include Harris corner detection and SIFT feature detection. With the development of deep learning, neural-network-based feature point detection has gradually become one of the mainstream approaches. At present, neural-network-based methods mainly comprise fully-connected regression and Gaussian heatmap regression. Fully-connected regression methods typically attach a fully connected layer at the end of a convolutional neural network to map the feature map to coordinate points. Intuitiveness is a significant advantage of this approach: the fully connected layer directly connects global features to the coordinates of the feature points. However, fully-connected regression greatly impairs the spatial generalization ability of the network. As an extreme example, when all crabs in the training data lie in the top-left corner of the image, the feature map is reshaped into a one-dimensional feature vector and the activation weights of the fully connected layer concentrate in the first half of that vector; when such a network is tested on a picture with the crab in the bottom-right corner, it is difficult to obtain a good result. The fully-connected regression method therefore depends too strongly on the distribution of the training data and is more prone to overfitting. Gaussian heatmap regression is currently the main method in human pose estimation: its feature maps are larger and its spatial generalization ability stronger. At the same time, Gaussian heatmaps suffer from high memory occupation and slow training and inference, and their gradient flow is not end-to-end.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for identifying cephalothorax feature points of Eriocheir sinensis that can quickly compute and mark the coordinates of the 37 cephalothorax feature points from photos of the Eriocheir sinensis cephalothorax taken with a mobile phone.
The invention is realized by the following technical scheme:
a method for identifying the feature points of the head and the chest of the Eriocheir sinensis specifically comprises the following steps:
(1) The collected eriocheir sinensis sample is photographed by using a mobile phone, the head and the chest of the eriocheir sinensis are cut by using a universal target detection model through transfer learning, and the cut data are enhanced;
(2) Designing a 37-point Eriocheir sinensis, and positioning the head and chest nail contour of the Eriocheir sinensis by using 37 characteristic points;
(3) Labeling the picture based on the 37-point feature point positioning method, wherein a labeling file is stored in an xml format;
(4) Designing an end-to-end differentiable convolutional neural network, and inputting the enhanced image into the network for training until the training parameters are converged;
(5) And storing the trained model, inputting a new cephalothorax nail picture, and automatically generating a characteristic point thermodynamic diagram and marking characteristic points by a program.
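By way of illustration only, the following is a minimal sketch of the cropping stage of step (1), assuming a generic torchvision Faster R-CNN fine-tuned via transfer learning to detect the cephalothorax; the detector architecture and checkpoint path are assumptions, as the patent does not name a specific model:

```python
import torch
import torchvision
from PIL import Image

# Assumption: a generic detector fine-tuned to one foreground class
# (cephalothorax) plus background; the checkpoint path is hypothetical.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("crab_detector.pth"))
model.eval()

def crop_cephalothorax(path: str) -> Image.Image:
    """Detect the cephalothorax and return the cropped region.
    Assumes at least one detection is returned."""
    img = Image.open(path).convert("RGB")
    tensor = torchvision.transforms.functional.to_tensor(img)
    with torch.no_grad():
        pred = model([tensor])[0]
    # Keep the highest-scoring box and crop the original image to it.
    x0, y0, x1, y1 = pred["boxes"][pred["scores"].argmax()].tolist()
    return img.crop((x0, y0, x1, y1))
```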
In a preferred embodiment, in step (1) the mobile phone is positioned directly above the crab, with the left-right and front-back tilt kept within 10 degrees as far as possible so that all feature points are captured, and shooting is performed under natural light between 9 a.m. and 6 p.m.
In a preferred embodiment, the data enhancement in step (1) adopts random combinations of multiple enhancement schemes such as random occlusion, random rotation and random brightness/contrast.
In a preferred embodiment, the correspondingly transformed annotation file is computed at the same time as the data enhancement in step (1), so that no manual re-labeling is needed.
In a preferred embodiment, the 37-point positioning method in step (2) follows these rules: it locates the 12 teeth of the cephalothorax (4 frontal teeth, 4 left lateral teeth and 4 right lateral teeth), each tooth being located by three feature points (start point, apex and end point) with two adjacent teeth sharing one feature point; the posterior edge is located by 3 feature points, and the M-shaped cervical groove by 7 feature points.
In a preferred embodiment, the file annotated in step (3) is stored as an XML file with a tree structure: each picture is identified by an <image> tag whose file attribute is the file name, the <label> tag under the <image> tag stores the Eriocheir sinensis number, and each <part> tag stores one feature point, whose name, x and y attributes give the feature point's number, horizontal offset and vertical offset respectively.
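By way of illustration, a minimal sketch of reading an annotation file with the described schema; the sample XML content and the enclosing root tag below are hypothetical, following only the tags and attributes named above:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample following the described schema; real files are
# produced by the manual annotation step.
SAMPLE = """
<dataset>
  <image file="crab_001.jpg">
    <label>1</label>
    <part name="0" x="412" y="135"/>
    <part name="1" x="438" y="120"/>
  </image>
</dataset>
"""

root = ET.fromstring(SAMPLE)
for image in root.iter("image"):
    # Each <part> carries the feature point number and its x/y offsets.
    points = [(int(p.get("x")), int(p.get("y"))) for p in image.findall(".//part")]
    print(image.get("file"), image.findtext("label"), points)
```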
In a preferred embodiment, designing the end-to-end differentiable convolutional neural network in step (4) comprises the following sub-steps:
(41) Design a fully convolutional neural network comprising 7 convolution modules, in which global depthwise convolution (GDConv) replaces the traditional global average pooling; GDConv is computed as $G_m = \sum_{i,j} K_{i,j,m} \cdot F_{i,j,m}$, where F is the input of the GDConv layer, K the convolution kernel and G the output; if F has size w × h × m, then K also has size w × h × m and G has size 1 × 1 × m;
(42) After the fully convolutional network, apply a Softmax normalization to each channel:

$$Z'_{i,j} = \frac{\exp(Z_{i,j})}{\sum_{u=1}^{m}\sum_{v=1}^{n}\exp(Z_{u,v})}$$

Define matrices X and Y, where

$$X_{i,j} = \frac{2j - (n+1)}{n}, \qquad Y_{i,j} = \frac{2i - (m+1)}{m}$$

and calculate the inferred coordinates (x, y) as $x = \langle Z', X\rangle_F$, $y = \langle Z', Y\rangle_F$. In the above, m and n respectively denote the width and height of the feature matrix, and $\langle A, B\rangle_F$ multiplies corresponding elements of matrices A and B and sums the results. Finally, the loss between the predicted feature point positions (x, y) and the coordinates in the annotation file is calculated, and the parameters are updated.
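A minimal PyTorch sketch of sub-step (42), under the assumption that the network emits one raw heatmap per feature point; the coordinate matrices follow the definitions above:

```python
import torch
import torch.nn.functional as F

def dsnt(z: torch.Tensor) -> torch.Tensor:
    """z: raw heatmaps of shape (batch, channels, h, w).
    Returns coordinates of shape (batch, channels, 2) in (-1, 1)."""
    b, c, h, w = z.shape
    # Per-channel Softmax over all spatial positions.
    z_prime = F.softmax(z.reshape(b, c, -1), dim=-1).reshape(b, c, h, w)
    # Coordinate matrices normalized to (-1, 1), as defined above.
    xs = (2 * torch.arange(1, w + 1, dtype=z.dtype) - (w + 1)) / w
    ys = (2 * torch.arange(1, h + 1, dtype=z.dtype) - (h + 1)) / h
    X = xs.expand(h, w)               # varies along columns
    Y = ys.unsqueeze(1).expand(h, w)  # varies along rows
    # Frobenius inner product: elementwise multiply and sum.
    x = (z_prime * X).sum(dim=(-2, -1))
    y = (z_prime * Y).sum(dim=(-2, -1))
    return torch.stack([x, y], dim=-1)

coords = dsnt(torch.randn(2, 39, 32, 32))  # e.g. 39 feature points on 32x32 maps
```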
The design principle of the invention is as follows. First, a 37-point Eriocheir sinensis cephalothorax positioning method is proposed, i.e., 37 feature points are used to express the cephalothorax. Second, the captured data are randomly enhanced by distortion, rotation, occlusion and the like, which enlarges the data set while improving the generalization ability of the model from the data side. The data are divided into a training set, a validation set and a test set, and all training-set data are input into the network for training. The model is saved once the training parameters converge. Finally, the saved model is evaluated on the test set, generating a feature-point heatmap and a feature-point label map.
Beneficial effects: compared with the prior art, the method places lower requirements on the input data, and pictures of the Eriocheir sinensis cephalothorax taken at any stage of the production line can be input directly into the trained model. Compared with existing feature point positioning methods, it greatly improves positioning accuracy and speed, can save substantial labor costs for the Eriocheir sinensis industry, markedly reduces the manual workload in culture observation, quality inspection and similar stages, and greatly improves working efficiency.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of data enhancement of the present invention;
FIG. 3 is a schematic diagram of the 37-point positioning method for the Eriocheir sinensis cephalothorax according to the present invention;
FIG. 4 is a diagram of the data enhancement effect of the present invention;
FIG. 5 is a diagram of a full convolution neural network of the present invention;
FIG. 6 is a graph of the test results of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention, and a detailed implementation and specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.
As shown in fig. 1 and 2, the method for identifying cephalothorax feature points of Eriocheir sinensis specifically comprises the following steps. The collected Eriocheir sinensis samples are photographed with a mobile phone, the cephalothorax is cropped using a general-purpose object detection model adapted by transfer learning, and the cropped data are enhanced. A 37-point Eriocheir sinensis cephalothorax positioning method is designed, and the 37 feature points are used to locate the cephalothorax contour. The pictures are labeled with the 37-point feature point positioning method, and the annotation files are stored in XML format. An end-to-end differentiable convolutional neural network is designed, and the enhanced images are input into the network for training until the training parameters converge. The trained model is saved; when a new cephalothorax picture is input, the program automatically generates a feature-point heatmap and marks the feature points.
The invention is further illustrated by the following specific example:
1. data acquisition:
Hongze Lake, the fourth largest freshwater lake in China, lies downstream on the Huaihe River (33°06′-33°40′ N, 118°10′-118°52′ E) and is one of the main producing areas of Eriocheir sinensis. In this embodiment, 50 crabs were collected from Hongze Lake, and 40-50 images were taken of each crab. The shooting device was a 6T smartphone with a main camera and a secondary camera. The main camera has 16 megapixels, a Sony IMX519 sensor, a 1.22 μm pixel size and an f/1.7 large aperture; the secondary camera has 20 megapixels, a Sony IMX376K sensor, a 1.0 μm pixel size and an f/1.7 large aperture. During shooting, the phone was positioned directly above the crab, with the left-right and front-back tilt kept within 10 degrees as far as possible so that all feature points were captured. 2300 images of the 50 crabs were finally used. All images were taken on the same day without additional fill lighting, under natural light from 9 a.m. to 6 p.m.
2. The method for determining the feature point comprises the following steps:
the front side of the eriocheir sinensis head and breastplate is provided with 4 frontal teeth, the left side and the right side are respectively provided with 4 lateral teeth, and the rear edge is relatively flat. A relatively obvious M-shaped neck groove is arranged in the middle. Based on the characteristics of the eriocheir sinensis head and chest beetle, a 37-point characteristic point positioning method is designed as shown in figure 3.
3. Data enhancement:
the image shooting angles are relatively uniform, and the problem of shielding of most obstacles is also avoided deliberately during shooting. However, in practical applications, there may be various factors affecting the shooting device, shooting environment, shooting angle, obstacles, and the like. Therefore, data enhancement is necessary for this study. As shown in fig. 4, the three sets of diagrams a, B, and C respectively show a normal image, a blurry image, and an image with a large angular deviation. Column 1 represents an original image, column 2 represents an image subjected to random gaussian blur processing, column 3 represents an image subjected to random brightness and contrast processing, column 4 represents an image subjected to random rotation, column 5 represents an image subjected to random occlusion, and Column 6 represents an image subjected to random combination of various data enhancement methods. After data enhancement, the image data volume is 4600 sheets.
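A minimal sketch of a keypoint-preserving enhancement pipeline in this spirit, assuming the Albumentations library (the library choice and all parameter values are illustrative, not prescribed by the patent; argument names vary slightly across library versions):

```python
import albumentations as A

# Keypoint-aware pipeline: the transformed annotations are computed together
# with the image, so no manual re-labeling is needed.
transform = A.Compose(
    [
        A.GaussianBlur(p=0.3),
        A.RandomBrightnessContrast(p=0.5),
        A.Rotate(limit=30, p=0.5),
        A.CoarseDropout(max_holes=4, p=0.3),  # random occlusion
    ],
    keypoint_params=A.KeypointParams(format="xy", remove_invisible=False),
)

# image: HxWx3 numpy array; keypoints: list of (x, y) tuples from the XML labels.
# out = transform(image=image, keypoints=keypoints)
# out["image"] and out["keypoints"] form the enhanced image/label pair.
```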
4. Designing a neural network model:
firstly, a full convolution neural network is designed. Structurally, the main difference between the full convolutional neural network and MobileNetV2 is the optimization of global average pooling to global depthwise volume (GDConv). A weight that can be learned is added to each location. The GDConv is calculated as shown in equation (1). The convolution kernel size of the GDConv layer is the same as the size of the input dimension.
$$G_m = \sum_{i,j} K_{i,j,m} \cdot F_{i,j,m} \qquad (1)$$
In equation (1), F is the input of the GDConv layer, K the convolution kernel and G the output. When F has size w × h × m, K also has size w × h × m and G has size 1 × 1 × m. The network also uses grouped convolution and inverted residual modules to reduce the amount of computation and speed up sampling.
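As an illustration, GDConv can be sketched in PyTorch as a depthwise convolution whose kernel covers the entire input feature map; the channel count and spatial size below follow the example dimensions used later in the text:

```python
import torch
import torch.nn as nn

class GDConv(nn.Module):
    """Global depthwise convolution: one w x h kernel per channel,
    collapsing a w x h x m input to 1 x 1 x m, as in equation (1)."""
    def __init__(self, channels: int, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              groups=channels, bias=False)

    def forward(self, x):
        return self.conv(x)  # (b, m, h, w) -> (b, m, 1, 1)

gdconv = GDConv(128, kernel_size=(32, 32))
out = gdconv(torch.randn(1, 128, 32, 32))  # -> torch.Size([1, 128, 1, 1])
```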
The overall network design is shown in fig. 5; the size of the feature vector after each module is marked after the corresponding arrow, and each Block is annotated with its important parameters and the sub-blocks it contains. DW Conv in the figure denotes a depthwise convolution operation. The Inverted Res module is an inverted residual module, whose dw_num parameter gives the number of depthwise separable convolution operations it contains. Groups in the Conv module denotes the number of groups in the grouped convolution. After the fully connected layer, the feature vector is 1 × 78. This is because, in addition to the 37 feature points, the upper-left and lower-right corners of the rectangular box containing the crab are also treated as feature points, and the two-dimensional coordinates of these 39 points comprise 78 numbers in total.
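A minimal sketch of the inverted residual module referenced above, following the familiar MobileNetV2 pattern (the expansion factor and default dimensions are illustrative assumptions):

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual: expand -> depthwise -> project,
    with a skip connection when input and output shapes match."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 6, stride: int = 1):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),   # pointwise expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),       # depthwise (DW Conv)
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),   # pointwise projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return x + self.block(x) if self.use_skip else self.block(x)
```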
After the design of the fully convolutional network is completed, a depthwise separable convolution module is appended behind it so that the pipeline remains differentiable end to end. A picture sample is converted into a matrix variable of size 3 × 512 × 512 and passed through the fully convolutional network of fig. 5 (but not through the fully connected layer), yielding a feature matrix of size 128 × 32 × 32. On this feature matrix a further convolution operation is performed, changing the size to 39 × 32 × 32, where 39 is the number of feature points to be regressed. The feature matrix (39 × 32 × 32) is input to the DSNT module to obtain the output Z matrix. The Z matrix is globally normalized with Softmax to obtain Z'. Two matrices X and Y with the same dimensions as Z are defined, with their values normalized to between -1 and 1; that is, the coordinate points in the X and Y dimensions are mapped to between -1 and 1. Z' is combined with the X and Y matrices by the Frobenius inner product (point-wise multiplication followed by summation) to obtain the predicted x and y coordinates for each channel. Finally, the loss between the predicted values and the true values is calculated and the network parameters are updated. The inference process is as follows:
$$Z' = \operatorname{Softmax}(Z), \qquad x = \langle Z', X\rangle_F, \qquad y = \langle Z', Y\rangle_F$$
5. training a model:
and inputting the enhanced data into a neural network model for training. The training environment is a supercomputer device equipped with NVIDIA Tesla V100 GB high performance GPU, ubuntu 18.04. After 8.33 hours and 300 rounds of training, the parameters are converged, and the trained model is stored. The trained model parameter is 0.84M, the calculated quantity is 8.34G, and the model size is 3.67MB.
6. And (4) visualizing the result:
and inputting the test data into the stored model to obtain a thermal distribution graph and a characteristic point label graph of the eriocheir sinensis head and breast nail characteristic points. The thermodynamic diagram reflects the thermodynamic distribution of the characteristic points of the cephalothorax, and the characteristic point label diagram calculates accurate coordinate values and is superposed and labeled in the original image. The visualization of the partial test set is shown in fig. 6.
The foregoing shows and describes the general principles and broad features of the present invention and advantages thereof. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. A method for identifying cephalothorax feature points of Eriocheir sinensis, characterized by comprising the following steps:
(1) Photograph the collected Eriocheir sinensis samples with a mobile phone, crop the cephalothorax using a general-purpose object detection model adapted by transfer learning, and enhance the cropped data;
(2) Design a 37-point Eriocheir sinensis cephalothorax positioning method, using 37 feature points to locate the cephalothorax contour;
(3) Label the pictures with the 37-point feature point positioning method, storing the annotation files in XML format;
(4) Design an end-to-end differentiable convolutional neural network and input the enhanced images into the network for training until the training parameters converge;
(5) Save the trained model; when a new cephalothorax picture is input, the program automatically generates a feature-point heatmap and marks the feature points.
2. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein in step (1) the mobile phone is positioned directly above the crab, with the left-right and front-back tilt kept within 10 degrees as far as possible so that all feature points are captured, and shooting is performed under natural light between 9 a.m. and 6 p.m.
3. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein the data enhancement in step (1) adopts random combinations of multiple enhancement schemes such as random occlusion, random rotation and random brightness/contrast.
4. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein the correspondingly transformed annotation file is computed at the same time as the data enhancement in step (1), so that no manual re-labeling is needed.
5. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein the 37-point positioning method in step (2) follows these rules: it locates the 12 teeth of the cephalothorax (4 frontal teeth, 4 left lateral teeth and 4 right lateral teeth), each tooth being located by three feature points (start point, apex and end point) with two adjacent teeth sharing one feature point; the posterior edge is located by 3 feature points, and the M-shaped cervical groove by 7 feature points.
6. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein the file annotated in step (3) is stored as an XML file with a tree structure: each picture is identified by an <image> tag whose file attribute is the file name, the <label> tag under the <image> tag stores the Eriocheir sinensis number, and each <part> tag stores one feature point, whose name, x and y attributes give the feature point's number, horizontal offset and vertical offset respectively.
7. The method for identifying cephalothorax feature points of Eriocheir sinensis according to claim 1, wherein designing the end-to-end differentiable convolutional neural network in step (4) comprises the following sub-steps:
(41) Design a fully convolutional neural network comprising 7 convolution modules, in which global depthwise convolution (GDConv) replaces the traditional global average pooling; GDConv is computed as

$$G_m = \sum_{i,j} K_{i,j,m} \cdot F_{i,j,m}$$

where F is the input of the GDConv layer, K the convolution kernel and G the output; if F has size w × h × m, then K also has size w × h × m and G has size 1 × 1 × m;
(42) After the fully convolutional network, apply a Softmax normalization to each channel:

$$Z'_{i,j} = \frac{\exp(Z_{i,j})}{\sum_{u=1}^{m}\sum_{v=1}^{n}\exp(Z_{u,v})}$$

Define matrices X and Y, where

$$X_{i,j} = \frac{2j - (n+1)}{n}, \qquad Y_{i,j} = \frac{2i - (m+1)}{m}, \qquad i = 1, 2, \ldots, m;\; j = 1, 2, \ldots, n$$

and calculate the inferred coordinates (x, y) as $x = \langle Z', X\rangle_F$, $y = \langle Z', Y\rangle_F$. In the above, m and n respectively denote the width and height of the feature matrix, and $\langle A, B\rangle_F$ multiplies corresponding elements of matrices A and B and sums the results. Finally, calculate the loss between the predicted feature point positions (x, y) and the coordinates in the annotation file, and update the parameters.
CN202211153659.1A 2022-09-21 2022-09-21 Method for identifying cephalothorax feature points of Eriocheir sinensis Pending CN115359324A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211153659.1A CN115359324A (en) 2022-09-21 2022-09-21 Method for identifying cephalothorax feature points of Eriocheir sinensis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211153659.1A CN115359324A (en) 2022-09-21 2022-09-21 Method for identifying cephalothorax feature points of Eriocheir sinensis

Publications (1)

Publication Number Publication Date
CN115359324A true CN115359324A (en) 2022-11-18

Family

ID=84007273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211153659.1A Pending CN115359324A (en) 2022-09-21 2022-09-21 Method for identifying cephalothorax feature points of Eriocheir sinensis

Country Status (1)

Country Link
CN (1) CN115359324A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011795A (en) * 2023-08-08 2023-11-07 南京农业大学 River crab growth state nondestructive monitoring and evaluating platform and method based on Gaussian-like fuzzy support degree
CN117011795B (en) * 2023-08-08 2024-02-13 南京农业大学 River crab growth state nondestructive monitoring and evaluating platform and method based on Gaussian-like fuzzy support degree


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination