CN111724355B - Image measuring method for abalone body type parameters - Google Patents


Info

Publication number
CN111724355B
CN111724355B (application CN202010493461.2A)
Authority
CN
China
Prior art keywords
abalone
ruler
width
target
length
Prior art date
Legal status
Active
Application number
CN202010493461.2A
Other languages
Chinese (zh)
Other versions
CN111724355A (en)
Inventor
刘向荣
彭惠民
柳娟
张悦
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202010493461.2A
Publication of CN111724355A
Application granted
Publication of CN111724355B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                        • G06T7/0004 Industrial image inspection
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/13 Edge detection
                        • G06T7/136 Segmentation; Edge detection involving thresholding
                    • G06T7/60 Analysis of geometric attributes
                        • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30108 Industrial image inspection
                        • G06T2207/30128 Food products
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/08 Learning methods

Abstract

An image measuring method for abalone body type parameters belongs to the technical field of computer vision. An abalone data set is collected and used to train the YOLOv3 target detection algorithm; after the target abalone and a reference ruler are detected, the target foreground is cropped out. The abalone edge is extracted with the Canny operator, and the minimum bounding rectangle of the edge and the area covered by the edge are calculated. The scale of the ruler is measured to obtain the ratio between pixel values and scale units, and the length and width of the minimum rectangle are converted into actual length and width, giving the length and width of the abalone. The length, the width and the area occupied by the abalone are combined as features to train a GBDT model, yielding an abalone weight prediction model: the detected length, width and area features are input, and the abalone weight is output. After YOLOv3 detects the target, prediction can be made with the GBDT model. The invention realizes automated detection of the length, width and weight of abalone, greatly reducing labor and time costs.

Description

Image measuring method for abalone body type parameters
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image measuring method for abalone body type parameters.
Background Art
Abalone is a very common marine organism: it is delicious, has extremely high nutritional value and is popular with consumers. Many factories process abalone as food and must distinguish abalone products by size; in many cases sorting is done manually by workers' experience, and workers with insufficient experience find it difficult to sort correctly and quickly, leading to problems such as low labor efficiency and inaccurate sorting. In the field of scientific research, scientists need large data sets when studying abalone, and these abalone measurements are basically all obtained by manual measurement, which occupies a great deal of labor and time; since measurement itself is not the focus of the research, its cost-effectiveness is low.
In recent years, computer vision technology has developed rapidly, and technologies such as target detection and visual positioning have matured, giving rise to many applications of target identification and target positioning, such as license plate recognition and pedestrian detection. Automatic image measurement of abalone is an important technique whose task is to automatically detect the length, width and weight of abalone. It can save unnecessary labor expenditure, save a large amount of time, and accomplish fast and accurate measurement. At present, applications of image measurement technology are relatively rare, and there is as yet no related patent document on image measurement of abalone.
Disclosure of Invention
The invention aims to provide an image measuring method for abalone body type parameters that solves the problem of automatic image measurement of abalone.
The invention comprises the following steps:
1) collecting and labeling an abalone data set, wherein each sample comprises an abalone and a reference ruler; manually measuring and recording the actual length, width and weight of the abalone; and dividing the data into a training set and a test set at a ratio of 1:1;
2) learning the training data set by using a Yolov3 target detection network, identifying abalone and a ruler, and obtaining a detection model based on a Yolov3 neural network;
3) detecting the abalone and the ruler in the sample picture by using the detection model based on the Yolov3 neural network obtained in the step 2), and cutting an abalone target image;
4) detecting the abalone edge contour in the abalone target image obtained in the step 3) by using an edge detection algorithm Canny operator, calculating the length and the width of a minimum external rectangular frame of the edge contour, and then calculating the area of the edge contour;
5) combining the length, width, area and actual weight of the abalone obtained in the step 4) with features to be used as a training sample of a GBDT algorithm, outputting the weight prediction of the abalone, and training an abalone weight prediction model;
6) in the testing stage, a detection model based on a Yolov3 neural network obtained in the step 2) is used for reasoning a test data set to obtain detection results of the abalone and the ruler, and further cutting out an abalone target image and a ruler target image;
7) determining the positions of two ends of the ruler in the ruler target image obtained in the step 6) by using a projection positioning method, and obtaining a conversion ratio according to the scale and the pixel ratio;
8) converting and mapping the length and the width of the abalone into actual sizes according to the conversion ratio, and predicting the weight of the abalone by using the trained GBDT model;
9) outputting the length and width of the abalone and the weight predicted by the GBDT algorithm.
In step 2), the backbone of the YOLOv3 target detection network is a darknet53 network with a total of 53 convolutional layers; after the image sample undergoes feature extraction through the darknet53 network, 3 feature maps of different sizes are obtained by upsampling for target detection, namely a 13×13 feature map, a 26×26 feature map and a 52×52 feature map; wherein the training objective function of YOLOv3 includes the following components:

$$\begin{aligned}
L = {} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
& + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
  + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\sum_{c\in classes}\left(P_i^j(c)-\hat{P}_i^j(c)\right)^2
\end{aligned}$$

where $I_{ij}^{obj}$ indicates whether the j-th anchor box of the i-th grid is responsible for the object; if it is responsible, $I_{ij}^{obj}=1$; $x_i, y_i$ are the predicted center coordinates of the target and $\hat{x}_i, \hat{y}_i$ are its true center coordinates; $w_i$ is the predicted width of the target and $h_i$ its predicted height; $\hat{w}_i$ is the true width of the target and $\hat{h}_i$ its true height; $\hat{C}_i$ represents the confidence, whose value is determined by the bounding box of the grid: if the bounding box is responsible for the target then $\hat{C}_i=1$, otherwise 0; $P_i^j$ is the predicted class probability and $\hat{P}_i^j$ is the true class probability.
In step 5), the GBDT algorithm is a gradient boosting regression model, and specifically includes the following steps:
(1) assume the input training sample set is $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, the maximum number of iterations is $T$, and the loss function is $L$; the output is a strong learner $f(x)$;
(2) initialize the weak learner:
$$f_0(x) = \arg\min_{c} \sum_{i=1}^{n} L(y_i, c)$$
where $c$ is the average of all sample labels;
(3) for iteration rounds $t = 1, 2, \ldots, T$:
(3.1) for samples $i = 1, 2, \ldots, n$, compute the negative gradient:
$$r_{ti} = -\left[\frac{\partial L\big(y_i, f(x_i)\big)}{\partial f(x_i)}\right]_{f(x) = f_{t-1}(x)}$$
(3.2) using $(x_i, r_{ti})$, $i = 1, 2, \ldots, n$, fit a CART regression tree to obtain the $t$-th regression tree, whose corresponding leaf node regions are $R_{tj}$, $j = 1, 2, \ldots, J$, where $J$ is the number of leaf nodes of regression tree $t$;
(3.3) for each leaf node region $j = 1, 2, \ldots, J$, compute the best fit value:
$$c_{tj} = \arg\min_{c} \sum_{x_i \in R_{tj}} L\big(y_i, f_{t-1}(x_i) + c\big)$$
(3.4) update the strong learner:
$$f_t(x) = f_{t-1}(x) + \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
(4) obtain the expression of the strong learner $f(x)$:
$$f(x) = f_T(x) = f_0(x) + \sum_{t=1}^{T} \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
in step 5), the manner of combining the features may be a cartesian product combination.
In step 7), the specific method for determining the positions of the two ends of the ruler in the ruler target image obtained in step 6) by the projection positioning method, and obtaining the conversion ratio from the scale and the pixel ratio, is as follows: first, Hough transform line detection is performed on the cropped ruler, and the ruler is corrected to horizontal according to the lines on it; then the two ends of the ruler are located by the projection positioning method, and the conversion ratio is calculated. The specific steps comprise:
(1) graying the cropped ruler picture, then performing threshold-segmentation binarization, where a gray value of 0 indicates a scale mark;
(2) projecting the scale marks in the vertical direction, and computing the pixel length of the scale marks from the two ends of the projection;
(3) computing the conversion ratio from the scale value and the pixel length between the two ends.
In the invention, an abalone data set is collected and used to train the YOLOv3 target detection algorithm; after the target abalone and the reference ruler are detected, the target foreground is cropped out. The abalone is processed with the Canny operator to obtain its edge, and the minimum circumscribed rectangle of the edge and the area covered by the edge are calculated. The scale of the ruler is computed by the projection positioning method to obtain the ratio between pixel values and scale units. The length and width of the minimum rectangle are then converted into actual length and width, giving the length and width of the abalone. The length, the width and the area occupied by the abalone are combined as features to train a GBDT model, yielding an abalone weight prediction model: the detected length, width and area features are input, and the abalone weight is output. After YOLOv3 detects the target, prediction can be made with the GBDT model. The YOLOv3 model and the GBDT weight prediction model are trained in advance; at inference time the system is end-to-end and can directly output the length, width and weight of the abalone. Compared with sorting by manual experience and manual measurement, the invention automatically detects the length, width and weight of abalone, greatly reducing labor and time costs.
Drawings
Fig. 1 is an overall flow chart of the image measuring method of abalone body type parameters of the invention;
FIG. 2 is a flow chart of the reasoning process of the image measuring method of abalone body type parameters;
fig. 3 is a diagram of a YOLOV3 network structure used in the image measurement method of abalone body type parameters.
Detailed Description
To further illustrate the technical features and advantages of the present invention in detail, the following embodiments are further described with reference to the accompanying drawings.
As shown in fig. 1, the embodiment provides an image measuring method for abalone body type parameters, which includes the following specific steps:
step 1, firstly, an abalone data set is collected and marked. Each abalone data sample contained abalone and a reference ruler. When the abalone marks, the abalone and the ruler are marked by using lableme marking software. Length, width and weight of artifical measurement abalone simultaneously to drawing the data set according to 1: the scale of 1 is divided into a training set and a data set.
Step 2, the training data set is trained with the YOLOv3 target detection algorithm. The model is initialized with an ImageNet pre-trained model; input pictures are uniformly resized to 416×416; training runs for 50000 iterations with a learning rate of 0.001, decayed to 0.1 and 0.01 of the initial value at iterations 42000 and 48000; and the batch size is set to 32. After training, the YOLOv3-based abalone target neural network detection model is obtained.
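The hyperparameters of step 2 map directly onto the [net] section of a darknet-style configuration file. The fragment below is a sketch of that mapping, under the assumption that a standard darknet trainer is used (the patent does not specify the training framework):

```ini
[net]
batch=32              ; batch size from step 2
width=416             ; input pictures uniformly resized to 416x416
height=416
learning_rate=0.001   ; initial learning rate
max_batches=50000     ; total training iterations
policy=steps
steps=42000,48000     ; decay points
scales=.1,.1          ; multiply lr by 0.1 at each step: 0.001 -> 1e-4 -> 1e-5
```

With `policy=steps`, the learning rate is multiplied by the corresponding scale at each listed iteration, reproducing the 0.1x and 0.01x schedule described above.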
Step 3, the trained YOLOv3 model is used to detect the abalone and the ruler in the picture, and the abalone target image is cropped out.
Step 4, the edges of the cropped abalone image are detected with the Canny edge detection operator, the length and width of the minimum circumscribed rectangular frame are calculated, and the edge contour area is then calculated.
Step 5, the length, width, area and actual weight of the abalone are combined by features into training samples of the GBDT algorithm, whose output is the predicted weight value of the abalone. The feature combination mode is Cartesian product combination.
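The patent states only that the features are combined as a Cartesian product; one plausible reading, sketched here with illustrative names of my own, is to augment the base features (length, width, area) with all pairwise products:

```python
from itertools import combinations_with_replacement

def combine_features(length, width, area):
    """Cartesian-product style feature combination (an assumed
    interpretation): the base features plus all pairwise products,
    e.g. length*width approximates the abalone's footprint."""
    base = [length, width, area]
    crossed = [a * b for a, b in combinations_with_replacement(base, 2)]
    return base + crossed

# 3 base features + 6 pairwise products = 9 features per sample
features = combine_features(10.0, 6.0, 45.0)
```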
Step 6, in the testing stage, the YOLOv3-based abalone target neural network detection model obtained in step 2 is used to infer on the test data set, obtaining the detection results of the abalone and the ruler, from which the abalone target image and the ruler target image are cropped out.
Step 7, determining the positions of two ends of the ruler in the ruler target image obtained in the step 6 by using a projection positioning method, and obtaining a conversion ratio according to the scales and the pixel ratio; the method comprises the following specific steps:
(1) The cropped ruler picture is grayed and then binarized by threshold segmentation, where a gray value of 0 indicates a scale mark and 255 the background.
(2) The scale marks are projected in the vertical direction, and the pixel length between the two ends is calculated from the two ends of the projection.
(3) The conversion ratio is calculated from the scale value and the pixel length between the two ends.
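Steps (1)-(3) above can be sketched as a vertical projection over the binarized ruler image. The function and parameter names (`ruler_scale_ratio`, `known_length_cm`) are illustrative assumptions, not from the patent:

```python
import numpy as np

def ruler_scale_ratio(binary, known_length_cm):
    """Locate the two ends of the ruler's tick marks by vertical
    projection and return a cm-per-pixel conversion ratio.

    `binary` is a 2-D array where 0 marks tick-mark pixels and 255
    background, as produced by the thresholding of step 7(1);
    `known_length_cm` is the scale value read between the two ends.
    """
    ticks = (binary == 0)
    column_has_tick = ticks.any(axis=0)   # vertical projection onto columns
    cols = np.flatnonzero(column_has_tick)
    left, right = cols[0], cols[-1]       # the two ends of the projection
    pixel_length = right - left
    return known_length_cm / pixel_length
```

Multiplying a pixel measurement (e.g. the minimum-rectangle length from step 4) by this ratio yields the actual size used in step 8.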
Step 8, the length and the width of the abalone are mapped to actual sizes through the conversion ratio, and the weight of the abalone is predicted with the trained GBDT model.
Step 9, the length and width of the abalone and the weight predicted by the GBDT algorithm are output.
Further, in the inference stage, after YOLOv3 detects a target, prediction can be made in combination with the GBDT model; the method is an end-to-end system that can directly output the length, width and weight of the abalone. The reasoning process is as shown in fig. 2: after a test sample is input into the YOLOv3 target detection model, the abalone target image and the ruler target image are obtained; edge detection is then performed on the abalone target image with the Canny operator; the ratio between pixel values and scale values is calculated by the projection positioning method; the length and width of the abalone are mapped to actual values; and finally the weight of the abalone is predicted with the GBDT weight prediction model, and the length, width and weight of the abalone are output.
Further, the backbone of YOLOv3 in step 2 is a darknet53 network, as shown in fig. 3; the darknet53 host network has 53 convolutional layers in total. After the image sample undergoes feature extraction through the darknet53 network, 3 feature maps of different sizes are obtained by upsampling for target detection, namely a 13×13 feature map, a 26×26 feature map and a 52×52 feature map, corresponding to the first, second and third prediction modules in fig. 3. The training objective function of YOLOv3 includes the following components:

$$\begin{aligned}
L = {} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
& + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
  + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\sum_{c\in classes}\left(P_i^j(c)-\hat{P}_i^j(c)\right)^2
\end{aligned}$$

where $I_{ij}^{obj}$ indicates whether the j-th anchor box of the i-th grid is responsible for the object; if it is responsible, $I_{ij}^{obj}=1$; $x_i, y_i$ are the predicted center coordinates of the target and $\hat{x}_i, \hat{y}_i$ are its true center coordinates; $w_i$ is the predicted width of the target and $h_i$ its predicted height; $\hat{w}_i$ is the true width of the target and $\hat{h}_i$ its true height; $\hat{C}_i$ represents the confidence, whose value is determined by the bounding box of the grid: if the bounding box is responsible for the target then $\hat{C}_i=1$, otherwise 0; $P_i^j$ is the predicted class probability and $\hat{P}_i^j$ is the true class probability.
In step 5, the GBDT algorithm is a gradient boosting regression model, and specifically includes the following steps:
(1) assume the input training sample set is $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, the maximum number of iterations is $T$, and the loss function is $L$; the output is a strong learner $f(x)$;
(2) initialize the weak learner:
$$f_0(x) = \arg\min_{c} \sum_{i=1}^{n} L(y_i, c)$$
where $c$ is the average of all sample labels;
(3) for iteration rounds $t = 1, 2, \ldots, T$:
(3.1) for samples $i = 1, 2, \ldots, n$, compute the negative gradient:
$$r_{ti} = -\left[\frac{\partial L\big(y_i, f(x_i)\big)}{\partial f(x_i)}\right]_{f(x) = f_{t-1}(x)}$$
(3.2) using $(x_i, r_{ti})$, $i = 1, 2, \ldots, n$, fit a CART regression tree to obtain the $t$-th regression tree, whose corresponding leaf node regions are $R_{tj}$, $j = 1, 2, \ldots, J$, where $J$ is the number of leaf nodes of regression tree $t$;
(3.3) for each leaf node region $j = 1, 2, \ldots, J$, compute the best fit value:
$$c_{tj} = \arg\min_{c} \sum_{x_i \in R_{tj}} L\big(y_i, f_{t-1}(x_i) + c\big)$$
(3.4) update the strong learner:
$$f_t(x) = f_{t-1}(x) + \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
(4) obtain the expression of the strong learner $f(x)$:
$$f(x) = f_T(x) = f_0(x) + \sum_{t=1}^{T} \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
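Under squared-error loss, the negative gradient $r_{ti}$ of step (3.1) is just the residual $y_i - f_{t-1}(x_i)$, and the best-fit leaf value of step (3.3) is the mean residual in the leaf, which is exactly what a regression tree fitted to the residuals stores. The following is a minimal sketch of steps (1)-(4), using scikit-learn's DecisionTreeRegressor as the CART fitter (an assumed substitution); the shrinkage factor `lr` is a common practical addition not present in the formulas above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, T=50, max_leaf_nodes=4, lr=0.1):
    """Gradient boosting regression, steps (1)-(4) of the text,
    specialized to squared-error loss."""
    f0 = y.mean()                     # (2) initial learner: mean of all samples
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(T):                # (3) iteration rounds t = 1..T
        r = y - pred                  # (3.1) negative gradient = residual
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        tree.fit(X, r)                # (3.2) fit a CART tree to the residuals
        pred += lr * tree.predict(X)  # (3.3)+(3.4) leaf means update the learner
        trees.append(tree)

    def f(Xq):                        # (4) the strong learner f(x)
        return f0 + lr * sum(t.predict(Xq) for t in trees)
    return f
```

In the patent's setting, X would hold the combined length/width/area features and y the manually measured weights.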
in step 7, firstly carrying out Hough transform straight line detection on the cut ruler, judging whether the ruler is placed obliquely or not according to straight lines on the ruler, correcting the ruler to be horizontal if the ruler is placed obliquely, then positioning two ends of the ruler by using a projection positioning method, and calculating the length of a pixel value and an actual scale value.
In the invention, the abalone data set is first labeled, each picture containing an abalone and a reference ruler, and is divided into training and test data sets at a ratio of 1:1. The training data set is then used to train the YOLOv3 network, yielding a YOLOv3 target detection model able to recognize the abalone and the reference ruler. The abalone target is detected with YOLOv3, and edge detection is performed on the abalone target image with the Canny operator to obtain the abalone edge. The minimum circumscribed rectangle of the abalone edge and the size of the edge area are calculated. A GBDT weight prediction model is trained through feature combination, using the length and width of the abalone and its area on the image as features, and outputs the weight. In the testing stage, the trained YOLOv3 target detection model infers on the test data set to obtain the abalone target image and the ruler target image; the ratio of the reference ruler to the actual length is calculated by the projection positioning method; the length and width of the rectangle are then converted into actual length and width; the weight is predicted by the trained GBDT weight prediction model; and finally the length, width and weight of the abalone are output.
The specific embodiments of the present invention are described for illustrative purposes only and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (5)

1. An image measuring method for abalone body type parameters is characterized by comprising the following steps:
1) collecting and labeling an abalone data set, wherein each sample comprises abalones and a reference object ruler, manually measuring and recording the actual length, width and weight of the abalones, and dividing a training set and a test set according to the ratio of 1:1;
2) learning the training data set by using a Yolov3 target detection network, identifying abalone and a ruler, and obtaining a detection model based on a Yolov3 neural network;
3) detecting the abalone and the ruler in the sample picture by using the detection model based on the Yolov3 neural network obtained in the step 2), and cutting an abalone target image;
4) detecting the abalone edge contour in the abalone target image obtained in the step 3) by using an edge detection algorithm Canny operator, calculating the length and the width of a minimum external rectangular frame of the edge contour, and then calculating the area of the edge contour;
5) combining the length, width, area and actual weight of the abalone obtained in the step 4) with features to be used as a training sample of a GBDT algorithm, outputting the weight prediction of the abalone, and training an abalone weight prediction model;
6) in the testing stage, a detection model based on a Yolov3 neural network obtained in the step 2) is used for reasoning a testing data set to obtain detection results of the abalone and the ruler, and abalone target images and ruler target images are cut out;
7) determining the positions of two ends of the ruler in the ruler target image obtained in the step 6) by using a projection positioning method, and obtaining a conversion ratio according to the scale and the pixel ratio;
8) converting and mapping the length and the width of the abalone into actual sizes according to the conversion proportion, and predicting the weight of the abalone by using a trained GBDT model;
9) outputting the length and width of the abalone and the weight predicted by the GBDT algorithm.
2. The method for image measurement of body type parameters of abalone according to claim 1, wherein in step 2), the backbone of the YOLOv3 target detection network is a darknet53 network with a total of 53 convolutional layers; after the image sample undergoes feature extraction through the darknet53 network, 3 feature maps of different sizes are obtained by upsampling for target detection, namely a 13×13 feature map, a 26×26 feature map and a 52×52 feature map; wherein the training objective function of YOLOv3 includes the following components:

$$\begin{aligned}
L = {} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
& + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
  + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
& + \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\sum_{c\in classes}\left(P_i^j(c)-\hat{P}_i^j(c)\right)^2
\end{aligned}$$

where $I_{ij}^{obj}$ indicates whether the j-th anchor box of the i-th grid is responsible for the object; if it is responsible, $I_{ij}^{obj}=1$; $x_i, y_i$ are the predicted center coordinates of the target and $\hat{x}_i, \hat{y}_i$ are its true center coordinates; $w_i$ and $h_i$ are the predicted width and height of the target; $\hat{w}_i$ and $\hat{h}_i$ are its true width and height; $\hat{C}_i$ represents the confidence, whose value is determined by the bounding box of the grid: if the bounding box is responsible for the target then $\hat{C}_i=1$, otherwise 0; $P_i^j$ is the predicted class probability and $\hat{P}_i^j$ is the true class probability.
3. The method according to claim 1, wherein in step 5), the GBDT algorithm is a gradient boosting regression model, and comprises the following steps:
(1) assume the input training sample set is $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, the maximum number of iterations is $T$, and the loss function is $L$; the output is a strong learner $f(x)$;
(2) initialize the weak learner:
$$f_0(x) = \arg\min_{c} \sum_{i=1}^{n} L(y_i, c)$$
where $c$ is the average of all sample labels;
(3) for iteration rounds $t = 1, 2, \ldots, T$:
(3.1) for samples $i = 1, 2, \ldots, n$, compute the negative gradient:
$$r_{ti} = -\left[\frac{\partial L\big(y_i, f(x_i)\big)}{\partial f(x_i)}\right]_{f(x) = f_{t-1}(x)}$$
(3.2) using $(x_i, r_{ti})$, $i = 1, 2, \ldots, n$, fit a CART regression tree to obtain the $t$-th regression tree, whose corresponding leaf node regions are $R_{tj}$, $j = 1, 2, \ldots, J$, where $J$ is the number of leaf node regions of regression tree $t$;
(3.3) for each leaf node region $R_{tj}$, compute the best fit value:
$$c_{tj} = \arg\min_{c} \sum_{x_i \in R_{tj}} L\big(y_i, f_{t-1}(x_i) + c\big)$$
(3.4) update the strong learner:
$$f_t(x) = f_{t-1}(x) + \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
(4) obtain the expression of the strong learner $f(x)$:
$$f(x) = f_T(x) = f_0(x) + \sum_{t=1}^{T} \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj})$$
4. A method as claimed in claim 1, wherein in step 5), the features are combined in the form of a Cartesian product.
5. The method for image measurement of body type parameters of abalone according to claim 1, wherein in step 7), the positions of the two ends of the ruler in the ruler target image obtained in step 6) are determined by the projection positioning method, and the conversion ratio is obtained from the scale and the pixel ratio, specifically as follows: first, Hough transform line detection is performed on the cropped ruler, and the ruler is corrected to horizontal according to the lines on it; then the two ends of the ruler are located by the projection positioning method, and the conversion ratio is calculated; the specific steps comprise:
(1) graying the cropped ruler picture, then performing threshold-segmentation binarization, where a gray value of 0 indicates a scale mark;
(2) projecting the scale marks in the vertical direction, and computing the pixel length of the scale marks from the two ends of the projection;
(3) computing the conversion ratio from the scale value and the pixel length between the two ends.
CN202010493461.2A 2020-06-01 2020-06-01 Image measuring method for abalone body type parameters Active CN111724355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010493461.2A CN111724355B (en) 2020-06-01 2020-06-01 Image measuring method for abalone body type parameters


Publications (2)

Publication Number Publication Date
CN111724355A (en) 2020-09-29
CN111724355B (en) 2022-06-14

Family

ID=72565625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010493461.2A Active CN111724355B (en) 2020-06-01 2020-06-01 Image measuring method for abalone body type parameters

Country Status (1)

Country Link
CN (1) CN111724355B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239324B (en) * 2021-04-13 2023-11-10 江苏农林职业技术学院 Snakehead sexual maturity judging method and system
CN113591671B (en) * 2021-07-28 2023-10-24 常州大学 Fish growth identification detection method based on Mask-Rcnn
CN114001810A (en) * 2021-11-08 2022-02-01 厦门熵基科技有限公司 Weight calculation method and device
CN114882059A (en) * 2022-07-01 2022-08-09 深圳市远湖科技有限公司 Dimension measuring method, device and equipment based on image analysis and storage medium
CN115797432B (en) * 2023-01-05 2023-07-14 荣耀终端有限公司 Method and device for estimating absolute depth of image
CN116735463A (en) * 2023-06-01 2023-09-12 中山大学 Directed target detection-based diatom size automatic measurement method
CN117029673A (en) * 2023-07-12 2023-11-10 中国科学院水生生物研究所 Fish body surface multi-size measurement method based on artificial intelligence

Citations (7)

Publication number Priority date Publication date Assignee Title
CN107180438A (en) * 2017-04-26 2017-09-19 Tsinghua University Method for estimating yak body dimensions and body weight, and corresponding portable computer device
CN109636826A (en) * 2018-11-13 2019-04-16 Ping An Technology (Shenzhen) Co., Ltd. Live pig weight measurement method, server and computer-readable storage medium
CN109740662A (en) * 2018-12-28 2019-05-10 Chengdu Sihan Technology Co., Ltd. Image object detection method based on the YOLO framework
CN109977817A (en) * 2019-03-14 2019-07-05 Nanjing University of Posts and Telecommunications Deep-learning-based fault detection method for EMU underbody bolts
CN110309771A (en) * 2019-06-28 2019-10-08 Nanjing Fenghou Electronics Co., Ltd. EAS acousto-magnetic system tag recognition algorithm based on GBDT-INSGAII
CN110321646A (en) * 2019-07-10 2019-10-11 Haimo Pandora Data Technology (Shenzhen) Co., Ltd. Virtual multiphase flow metering method based on gradient boosted regression trees
CN110728259A (en) * 2019-10-23 2020-01-24 Nanjing Agricultural University Chicken flock weight monitoring system based on depth images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Comparative analysis of image measurement techniques and manual measurement methods for three prawn species; Jin Yenan et al.; Journal of Fisheries of China; 2018-06-13; Vol. 42, No. 11; full text *
An accurate eye pupil localization approach based on adaptive gradient boosting decision tree; Dong Tian et al.; 2016 Visual Communications and Image Processing (VCIP); 2017-01-05; full text *
Body weight prediction of swimming crab based on computer vision and GA-SVM; Tang Yangjie et al.; Journal of Ningbo University (Natural Science & Engineering Edition); 2019-01-10; Vol. 32, No. 1; pp. 32-37 *

Also Published As

Publication number Publication date
CN111724355A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN111724355B (en) Image measuring method for abalone body type parameters
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
CN111027547B (en) Automatic detection method for multi-scale polymorphic target in two-dimensional image
CN108830188B (en) Vehicle detection method based on deep learning
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN109165623B (en) Rice disease spot detection method and system based on deep learning
CN111598098B (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN110929756B (en) Steel size and quantity identification method based on deep learning, intelligent equipment and storage medium
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN108564085B (en) Method for automatically reading of pointer type instrument
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN109871829B (en) Detection model training method and device based on deep learning
CN114998852A (en) Intelligent detection method for road pavement diseases based on deep learning
CN109886928A (en) A kind of target cell labeling method, device, storage medium and terminal device
CN114973002A (en) Improved YOLOv 5-based ear detection method
CN113177456B (en) Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
CN114821102A (en) Intensive citrus quantity detection method, equipment, storage medium and device
CN111612747A (en) Method and system for rapidly detecting surface cracks of product
CN114049325A (en) Construction method and application of lightweight face mask wearing detection model
CN110659637A (en) Electric energy meter number and label automatic identification method combining deep neural network and SIFT features
CN116863274A (en) Semi-supervised learning-based steel plate surface defect detection method and system
CN112801227A (en) Typhoon identification model generation method, device, equipment and storage medium
CN111178405A (en) Similar object identification method fusing multiple neural networks
CN114387261A (en) Automatic detection method suitable for railway steel bridge bolt diseases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant