CN113674226A - Tea leaf picking machine tea leaf bud tip detection method based on deep learning - Google Patents
- Publication number
- CN113674226A (application CN202110876674.8A)
- Authority
- CN
- China
- Prior art keywords
- tea
- tea leaf
- images
- deep learning
- detection method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/0012 — Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
- G06F 18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N 3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06N 3/048 — Neural networks; activation functions
- G06N 3/08 — Neural networks; learning methods
- G06T 7/11 — Image analysis; segmentation; region-based segmentation
- G06T 2207/10024 — Image acquisition modality; color image
- G06T 2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T 2207/20076 — Probabilistic image processing
- G06T 2207/20081 — Training; learning
- G06T 2207/30004 — Subject of image; biomedical image processing
Abstract
The invention discloses a deep-learning-based tea bud tip detection method for a tea picking machine. Tea bud tip images are collected with a segmented tea-picking mechanism; the collected images undergo data set augmentation, labeling, and splitting; the images are fed into a YOLOv4 model improved with depthwise separable convolution and adaptive contrast enhancement; the model is evaluated with a CIOU loss function; and picking point coordinates are finally obtained through HSV image segmentation and convex hull detection. The method can efficiently and accurately identify the positions of tea bud tips and enables high-quality picking of famous tea.
Description
Technical Field
The invention relates to the technical field of tea-picking robots, and in particular to a deep-learning-based tea bud tip detection method for a tea picking machine.
Background
The main tea plucker types at present are the manual ridge-straddling tea plucker and the handheld tea plucker. Neither can distinguish new buds from old leaves, so the bud-and-leaf breakage rate is high and the plucking standard for famous tea cannot be met. Tea gardens therefore still rely on manual labor to pick famous tea. However, as tea-picking labor becomes increasingly scarce, the shortage worsens and restricts the development of the tea industry. Developing efficient, high-quality tea pluckers is therefore both necessary and meaningful.
In recent years, as artificial intelligence systems have matured, applications of deep learning have grown steadily, and intelligent picking has been applied in fields such as agricultural fruit harvesting.
Disclosure of Invention
To solve the technical problem noted in the background art, namely that tender shoots are easily missed during intelligent picking, the invention provides a deep-learning-based tea bud tip detection method for a tea picking machine.
The invention adopts the following technical scheme:
the tea leaf picking machine tea bud tip detection method based on deep learning comprises the following steps:
step 1: collecting tea images by using a segmented tea-picking mechanism;
step 2: performing data set augmentation on the images acquired in step 1 to expand the data set, manually labeling the bud tip position in each expanded image, and dividing the labeled data set into a training set and a test set;
step 3: putting the training set from step 2 into an improved YOLOv4 network for training;
step 4: predicting on the test set from step 2 with the model trained in step 3 to obtain prediction boxes of the bud tips;
step 5: performing image segmentation and convex hull detection on the tender shoots in the prediction boxes from step 4 to obtain picking point coordinates.
further, step 1 specifically includes: handling the tea leaves in sections with the segmented tea-picking mechanism and capturing images with an industrial camera at a resolution of 1920 × 1080.
Further, step 2 specifically includes: applying random rotations between -20° and 20° to the acquired images to expand the data set, and splitting the data set 9:1 into a training set and a test set.
Further, step 3 specifically includes: resizing the training images to 416 × 416 pixels, feeding them into the improved YOLOv4 network for model training to obtain a trained model, and evaluating the model's effect with the CIOU loss function.
Further, step 4 specifically includes: feeding the test set images into the trained model, which applies adaptive contrast enhancement to each image after reading it and then detects the tea bud tips, producing a prediction box for each bud tip.
Further, step 5 specifically includes: cropping each prediction box to obtain a target region image containing only the tender shoot, applying HSV image segmentation to the target region image to obtain a binary image, performing convex hull detection on the binary image to find the intersection lines between the contour and the image border, and computing the picking point coordinates from those lines.
The invention has the beneficial effects that:
the detection method can efficiently and accurately identify the positions of the tea leaf bud tips and realize high-quality picking of famous tea.
Drawings
FIG. 1 is a flow chart of the invention;
FIG. 2 is a schematic view of the segmented tea-picking mechanism;
FIG. 3 is a schematic diagram of the improved YOLOv4 network structure;
FIG. 4 is the training loss curve of the model;
FIG. 5 shows tea bud tip detection results on test set images, where 5a-5d are four examples;
FIG. 6 is a flow chart of picking point calculation for tea bud tips, where 6a is an input test image, 6b is the image cropped by the prediction box, 6c is the binarized image, and 6d is a schematic diagram of the intersection lines between the contour and the image border obtained by convex hull detection.
Detailed Description
To make the objects, technical solutions, and technical effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in fig. 1, a method for detecting tea leaf bud tips of a tea plucker based on deep learning comprises the following steps:
step 1: the tea leaf image acquisition is carried out by using a segmented tea picking mechanism, and the method specifically comprises the following steps:
the sectional tea picking mechanism is applied for patent, application number is 2021106842465, sectional type image acquisition of tea is carried out by the sectional type tea picking mechanism, as shown in fig. 2, ridge type pressing plates and rear shielding plates of the sectional type tea picking mechanism are utilized to limit the upper ends of the tea at a gap, a dark box is arranged around by utilizing light barriers to remove interference of natural light, strip-shaped LED lamps are arranged in the dark box for illumination, an industrial camera with resolution of 1920 multiplied by 1080 is adopted to carry out image shooting and acquisition, and the camera is arranged at a position 15 cm away from the gap.
Step 2: perform data set augmentation on the images from step 1 to expand the data set, manually label the bud tip position in each expanded image, and split the labeled data set into a training set and a test set, specifically:
Each original RGB tea image in the data set is rotated about its center by a random angle between -20° and 20° or flipped horizontally, expanding the data set to four times its original size. The augmented images are then screened, and any image in which the bud tips were lost during processing is removed, yielding a final data set of 1430 images. This augmentation improves the generalization of the model and helps prevent overfitting of the deep learning model. The data set is split 9:1 into a training set and a test set; this ratio retains enough held-out data to verify the model's performance on similar images.
Step 3: train the improved YOLOv4 network on the training set, specifically:
Fig. 3 is a schematic diagram of the improved YOLOv4 network structure. The training images are resized to 416 × 416 pixels and fed into the network for model training. The improved YOLOv4 feature-extraction network replaces the standard convolution blocks of the original YOLOv4 with depthwise separable convolutions and uses the ReLU6 activation function, sampling five feature layers of different sizes: 208 × 208, 104 × 104, 52 × 52, 26 × 26, and 13 × 13. The lowest feature layer is convolved three times and max-pooled at four scales (13 × 13, 9 × 9, 5 × 5, and 1 × 1) to obtain pooled feature maps; these, together with the third- and fourth-layer feature maps, serve as inputs to a bidirectional feature pyramid, which extracts features bottom-up and top-down to produce feature information at three scales. Recognition and prior-box adjustment are then performed with this three-scale feature information to obtain the trained model.
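The parameter savings from swapping standard convolutions for depthwise separable ones can be illustrated with a small counting sketch (the channel sizes below are illustrative assumptions; the patent does not list per-layer channel counts):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k depthwise filter per
    input channel, then a 1 x 1 pointwise convolution mixing channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

std = conv_params(256, 512, 3)          # 1,179,648 parameters
sep = dw_separable_params(256, 512, 3)  # 133,376 parameters
print(std, sep, round(std / sep, 1))    # roughly 8.8x fewer
```

For a 3 × 3 kernel this substitution cuts parameters (and multiply-accumulates) by nearly an order of magnitude, which is the usual motivation for using it in a lightweight detector.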
The method evaluates the model with the CIOU loss function; CIOU accounts for the center distance between the target and the prediction box, their overlap ratio, their scale, and a penalty term, which makes target-box regression more stable.
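A minimal pure-Python sketch of the CIOU loss on axis-aligned boxes given as (x1, y1, x2, y2), following the published CIoU formulation that the patent invokes but does not spell out:

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss = 1 - (IoU - d^2/c^2 - alpha*v): d is the distance
    between box centers, c the diagonal of the smallest enclosing box,
    and v the aspect-ratio mismatch penalty."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap ratio (IoU)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # squared center distance over squared enclosing-box diagonal
    d2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    # aspect-ratio consistency penalty
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0
    return 1 - (iou - d2 / c2 - alpha * v)

print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 0.0
```

Unlike plain IoU, this loss still yields a useful gradient when the boxes do not overlap, since the center-distance term keeps growing with separation.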
The loss curve of the trained model is shown in fig. 4; the loss decreases steadily as training proceeds. It drops rapidly over the first 1600 iterations (about 10 epochs), declines gradually between 1600 and 3200 iterations (about 20 epochs), and levels off after 6400 iterations (about 40 epochs), indicating that training converges and that the trained model's design is sound.
Step 4: predict on the test images with the trained model to obtain bud tip prediction boxes, specifically:
The test set images are fed into the model trained in step 3. After reading an input image, the model applies ACE (adaptive contrast enhancement) so that the features of the tea bud tips become clearer, then detects the bud tips; the resulting prediction boxes are shown in fig. 5.
Step 5: perform image segmentation and convex hull detection on the tender shoots in the prediction boxes to obtain picking point coordinates, specifically:
The test image after ACE adaptive contrast enhancement is shown in fig. 6(a). The prediction box is cropped from the test image to obtain a target region image containing only the tender shoot, as shown in fig. 6(b). The target region image is an RGB image; it is converted to HSV, and, exploiting the fact that the tea leaves differ in hue from the rear shielding plate, HSV segmentation separates the leaves from the background, producing the binary image shown in fig. 6(c). Convex hull detection is then performed on the binary image: starting from a point on the contour, it finds the smallest convex polygon containing every point of the target point set, so the target bud tip is completely enclosed. The intersection lines between the contour and the image border are obtained, as shown in fig. 6(d); the midpoint of each intersection line gives the picking point coordinates.
It should be understood that parts of the invention not described in detail belong to the prior art.
It should be understood by those skilled in the art that the above-mentioned embodiments are only specific embodiments and procedures of the present invention, and the scope of the present invention is not limited thereto. The scope of the invention is limited only by the appended claims.
Claims (8)
1. A tea leaf bud tip detection method of a tea plucking machine based on deep learning is characterized by comprising the following steps:
step 1: collecting tea images by using a segmented tea-picking mechanism;
step 2: performing data set enhancement operation on the images acquired in the step 1, expanding the number of the data sets, manually labeling the bud tip position of each image in the expanded data sets, and dividing the labeled data sets into a training set and a test set;
step 3: putting the training set in the step 2 into an improved YOLOv4 network for training;
step 4: predicting on the test set in the step 2 with the model trained in the step 3 to obtain prediction boxes of the bud tips;
step 5: performing image segmentation and convex hull detection on the tender shoots in the prediction boxes obtained in the step 4 to obtain picking point coordinates.
2. The deep learning-based tea plucker tea bud tip detection method according to claim 1, wherein the step 1 specifically comprises: handling the tea leaves in sections with the segmented tea-picking mechanism and capturing images with an industrial camera.
3. The deep learning-based tea plucker tea bud tip detection method according to claim 2, wherein the step 2 specifically comprises: applying random rotations between -20° and 20° to the acquired images to expand the data set, and splitting the data set 9:1 into a training set and a test set.
4. The deep learning-based tea plucker tea bud tip detection method according to claim 3, wherein the step 3 specifically comprises: resizing the training images to 416 × 416 pixels, inputting them into the improved YOLOv4 network for model training to obtain a trained model, and evaluating the model effect.
5. The deep learning based tea plucker tea bud tip detection method as claimed in claim 4, wherein the model effect is evaluated using CIOU loss function.
6. The deep learning-based tea plucker tea bud tip detection method according to claim 4, wherein the step 4 specifically comprises: inputting the test set images into the trained model, which applies adaptive contrast enhancement to each image and then performs tea bud tip detection to obtain a prediction box for each tea bud tip.
7. The deep learning-based tea plucker tea bud tip detection method according to claim 6, wherein the step 5 specifically comprises: cropping the prediction box to obtain a target region image containing only the tender shoot, performing HSV image segmentation on the target region image to obtain a binary image, performing convex hull detection on the binary image to obtain the intersection lines between the contour and the image border, and calculating the picking point coordinates.
8. The method for detecting tea bud tips of a tea plucking machine based on deep learning as claimed in claim 7, wherein the midpoint of each intersection line is the picking point coordinate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110876674.8A CN113674226A (en) | 2021-07-31 | 2021-07-31 | Tea leaf picking machine tea leaf bud tip detection method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110876674.8A CN113674226A (en) | 2021-07-31 | 2021-07-31 | Tea leaf picking machine tea leaf bud tip detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113674226A true CN113674226A (en) | 2021-11-19 |
Family
ID=78540949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110876674.8A Pending CN113674226A (en) | 2021-07-31 | 2021-07-31 | Tea leaf picking machine tea leaf bud tip detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113674226A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114190166A (en) * | 2021-12-15 | 2022-03-18 | 中国农业科学院茶叶研究所 | Tea picking method based on image and point cloud data processing |
CN114283303A (en) * | 2021-12-14 | 2022-04-05 | 贵州大学 | Tea leaf classification method |
CN114494441A (en) * | 2022-04-01 | 2022-05-13 | 广东机电职业技术学院 | Grape and picking point synchronous identification and positioning method and device based on deep learning |
CN114708208A (en) * | 2022-03-16 | 2022-07-05 | 杭州电子科技大学 | Famous tea tender shoot identification and picking point positioning method based on machine vision |
CN115019226A (en) * | 2022-05-13 | 2022-09-06 | 云南农业大学 | Tea leaf picking and identifying method based on improved YoloV4 model |
CN115271200A (en) * | 2022-07-25 | 2022-11-01 | 仲恺农业工程学院 | Intelligent continuous picking system for famous and high-quality tea |
CN115965872A (en) * | 2022-07-22 | 2023-04-14 | 中科三清科技有限公司 | Tea leaf picking method and device, electronic equipment and storage medium |
- 2021-07-31: application CN202110876674.8A filed in China; published as CN113674226A (status: pending)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114283303A (en) * | 2021-12-14 | 2022-04-05 | 贵州大学 | Tea leaf classification method |
CN114283303B (en) * | 2021-12-14 | 2022-07-12 | 贵州大学 | Tea leaf classification method |
CN114190166A (en) * | 2021-12-15 | 2022-03-18 | 中国农业科学院茶叶研究所 | Tea picking method based on image and point cloud data processing |
CN114708208A (en) * | 2022-03-16 | 2022-07-05 | 杭州电子科技大学 | Famous tea tender shoot identification and picking point positioning method based on machine vision |
CN114708208B (en) * | 2022-03-16 | 2023-06-16 | 杭州电子科技大学 | Machine vision-based famous tea tender bud identification and picking point positioning method |
CN114494441A (en) * | 2022-04-01 | 2022-05-13 | 广东机电职业技术学院 | Grape and picking point synchronous identification and positioning method and device based on deep learning |
CN114494441B (en) * | 2022-04-01 | 2022-06-17 | 广东机电职业技术学院 | Grape and picking point synchronous identification and positioning method and device based on deep learning |
CN115019226A (en) * | 2022-05-13 | 2022-09-06 | 云南农业大学 | Tea leaf picking and identifying method based on improved YoloV4 model |
CN115965872A (en) * | 2022-07-22 | 2023-04-14 | 中科三清科技有限公司 | Tea leaf picking method and device, electronic equipment and storage medium |
CN115965872B (en) * | 2022-07-22 | 2023-08-15 | 中科三清科技有限公司 | Tea picking method and device, electronic equipment and storage medium |
CN115271200A (en) * | 2022-07-25 | 2022-11-01 | 仲恺农业工程学院 | Intelligent continuous picking system for famous and high-quality tea |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113674226A (en) | Tea leaf picking machine tea leaf bud tip detection method based on deep learning | |
CN108562589B (en) | Method for detecting surface defects of magnetic circuit material | |
CN105718945B (en) | Apple picking robot night image recognition method based on watershed and neural network | |
CN111179225B (en) | Test paper surface texture defect detection method based on gray gradient clustering | |
CN109255757B (en) | Method for segmenting fruit stem region of grape bunch naturally placed by machine vision | |
WO2022236876A1 (en) | Cellophane defect recognition method, system and apparatus, and storage medium | |
CN112136505A (en) | Fruit picking sequence planning method based on visual attention selection mechanism | |
CN105389581B (en) | A kind of rice germ plumule integrity degree intelligent identifying system and its recognition methods | |
CN112614062A (en) | Bacterial colony counting method and device and computer storage medium | |
CN109886277B (en) | Contour analysis-based fresh tea leaf identification method | |
CN112990103B (en) | String mining secondary positioning method based on machine vision | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
CN113255434B (en) | Apple identification method integrating fruit characteristics and deep convolutional neural network | |
CN111784764A (en) | Tea tender shoot identification and positioning algorithm | |
CN114067206B (en) | Spherical fruit identification positioning method based on depth image | |
Feng et al. | A separating method of adjacent apples based on machine vision and chain code information | |
CN110852186A (en) | Visual identification and picking sequence planning method for citrus on tree and simulation system thereof | |
CN101770645A (en) | Method and system for quickly segmenting high-resolution color image of cotton foreign fibers | |
CN110276759A (en) | A kind of bad line defect diagnostic method of Mobile phone screen based on machine vision | |
CN105631451A (en) | Plant leave identification method based on android system | |
CN113449622A (en) | Image classification, identification and detection method for cotton plants and weeds | |
CN115601690B (en) | Edible fungus environment detection method based on intelligent agriculture | |
Huang et al. | Mango surface defect detection based on HALCON | |
CN115187878A (en) | Unmanned aerial vehicle image analysis-based blade defect detection method for wind power generation device | |
Li et al. | A novel denoising autoencoder assisted segmentation algorithm for cotton field |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination