CN114842188A - Tea tender shoot picking point positioning method based on deep learning algorithm - Google Patents

Tea tender shoot picking point positioning method based on deep learning algorithm

Info

Publication number
CN114842188A
Authority
CN
China
Prior art keywords
key point
tea
shoot
tender
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210228847.XA
Other languages
Chinese (zh)
Inventor
李杨
董春旺
马蓉
张人天
程亦帆
姜嘉胤
王慕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tea Research Institute Chinese Academy of Agricultural Sciences
Original Assignee
Tea Research Institute Chinese Academy of Agricultural Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tea Research Institute Chinese Academy of Agricultural Sciences filed Critical Tea Research Institute Chinese Academy of Agricultural Sciences
Priority to CN202210228847.XA
Publication of CN114842188A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tea tender shoot picking point positioning method based on a deep learning algorithm, which comprises the following steps: first, image acquisition equipment is used to acquire image pairs of a number of tea tender shoots, and each image pair is labeled according to the input-data formats of the two tasks, semantic segmentation and key point detection, to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database; next, the semantic segmentation database and the key point detection database are respectively input into a deep-learning-based semantic segmentation model and a key point detection model for training, yielding a trained semantic segmentation model and a trained key point detection model; finally, the trained models process each tea tree tender shoot image pair in sequence to obtain the tender shoot key point positions, and the picking points are located by combining these positions with the growth characteristics of the tender shoots to obtain the tender shoot picking point positions. The invention improves the precision and efficiency of tea tender shoot picking point positioning.

Description

Tea tender shoot picking point positioning method based on deep learning algorithm
Technical Field
The invention relates to the technical field of artificial intelligence recognition, machine vision and image processing, in particular to a tea tender shoot picking point positioning method based on a deep learning algorithm.
Background
Tea leaves are picked either manually or mechanically. Manual picking is selective and yields high-quality tea: by observing features such as the shape and color of the buds, a person can judge whether a bud is suitable for picking and what grade it belongs to, and then pick the tender bud at the designated position by snapping it off, which keeps the bud intact and of high quality. However, manual picking is expensive, and with the optimization of the industrial structure and the transfer of labor, a "labor shortage" can occur in the tea picking season.
In recent years, vision-based automatic picking robots have been applied to famous-tea picking, and automatic identification and positioning of picking points has become the key difficulty limiting their development. Famous-tea shoots are light, so wind or the motion of the picking machine makes the leaves swing; the tea garden environment is complex and the leaves occlude one another; and under light that is too strong or too dark, tender shoots are hard to distinguish from old leaves. These factors make the identification and positioning of tender shoot picking points very difficult and seriously limit the automatic picking of famous tea. The currently adopted positioning methods for tea tender shoot picking points are low in both accuracy and efficiency; to achieve fast identification and positioning of picking points and to meet the efficiency and quality requirements of famous-tea picking machines, a method for acquiring famous-tea picking point position information needs to be developed.
Disclosure of Invention
The main object of the invention is to provide a tea tender shoot picking point positioning method based on a deep learning algorithm, so as to improve the precision and efficiency of tea tender shoot picking point positioning.
In order to achieve the above object, the present invention provides a tea tender shoot picking point positioning method based on a deep learning algorithm, which comprises the following steps:
acquiring tea tender shoot image pairs of a plurality of tea tender shoots, wherein the tea tender shoot image pairs comprise tea tender shoot thermal images and tea tender shoot RGB images;
labeling each pair of tea tender shoot image pairs to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database;
inputting the tender shoot semantic segmentation database into a semantic segmentation model based on deep learning for training to obtain a trained semantic segmentation model;
inputting the tender shoot key point detection database into a key point detection model for training to obtain a trained key point detection model;
shooting an image pair of tender shoots of tea leaves to be picked by using image acquisition equipment;
obtaining the tender shoot key point positions according to the trained semantic segmentation model, the trained key point detection model and the image pair;
and carrying out picking point positioning on tea tree tender shoots according to tender shoot growth characteristics and tender shoot key point positions to obtain tender shoot picking point positions.
Optionally, the step of labeling each pair of the pairs of the tea shoot images to obtain a tea shoot semantic segmentation database and a tea shoot key point detection database includes:
labeling each pair of tea tender shoot image pairs to obtain semantic segmentation labeling data and key point labeling data corresponding to the tea tender shoot image pairs;
and respectively carrying out augmentation operation on the semantic segmentation annotation data and the key point annotation data to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database.
Optionally, the step of inputting the shoot semantic segmentation database into a deep learning-based semantic segmentation model for training to obtain a semantic segmentation model includes:
dividing the tender shoot semantic segmentation database into a semantic segmentation training set and a semantic segmentation verification set according to a first preset proportion;
inputting the semantic segmentation training set into a semantic segmentation model based on deep learning for training to obtain a semantic segmentation model weight file;
and loading the trained semantic segmentation model weight file into the semantic segmentation model to obtain the trained semantic segmentation model.
Optionally, the step of inputting the tender shoot key point detection database into the key point detection model for training to obtain the trained key point detection model includes:
dividing the tender shoot key point detection database into a key point training set and a key point verification set according to a second preset proportion;
inputting the key point training set into a key point detection model for training to obtain a key point detection model weight file;
and loading the trained weight file of the key point detection model into the key point detection model to obtain the trained key point detection model.
Optionally, the step of inputting the key point training set into a key point detection model for training to obtain the key point detection model weight file includes:
establishing a key point detection model based on the HRNet network using a PyTorch program, wherein in the improved key point detection model the input layer is modified to 4 channels so that an image formed by fusing the thermal image and the RGB image can be input;
and inputting the key point training set into the key point detection model in batches and continuously iterating, and finishing the training of the key point detection model when the model is converged to obtain a key point detection model weight file.
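As an illustration of this input-layer modification, a minimal PyTorch sketch follows. It assumes an HRNet-style implementation whose stem convolution is exposed as `conv1` (attribute names differ between code bases), and the initialization of the added thermal channel is an assumption, not something the patent specifies:

```python
import torch
import torch.nn as nn

def expand_input_to_4_channels(model: nn.Module) -> nn.Module:
    """Replace the 3-channel stem convolution with a 4-channel one so a
    fused RGB + thermal tensor can be fed to the network."""
    old = model.conv1                      # assumed attribute name of the stem conv
    new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding,
                    bias=old.bias is not None)
    with torch.no_grad():
        new.weight[:, :3] = old.weight                             # keep RGB filters
        new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)   # init thermal channel
        if old.bias is not None:
            new.bias.copy_(old.bias)
    model.conv1 = new
    return model

# Fusing one image pair into a single 4-channel input:
# rgb: (B, 3, H, W), thermal: (B, 1, H, W)
# x = torch.cat([rgb, thermal], dim=1)    # (B, 4, H, W)
```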
Optionally, the loss function of the keypoint detection model is:
min L = λ1·L1 + λ2·L2 + λ3·L3;
L1 = Σ_k ‖ŷ(P_k) − y(P_k)‖²;
L2 = d(P1, C);
L3 = d(P2, C);
where λ1, λ2, λ3 are the weight coefficients of L1, L2 and L3; P_k denotes the k-th key point of a sample, ŷ(P_k) is the key point heatmap predicted by the network and y(P_k) is the heatmap built from the ground-truth values, so L1 is the heatmap regression error (written here in the usual squared-error form, as the original formula is preserved only as an image); d(P_i, C) denotes the Euclidean distance from the tea tender shoot key point predicted by the network to the center C of the rectangular region in which the key point lies.
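A minimal PyTorch sketch of this composite loss is given below. The heatmap term L1 is written as a mean-squared error, the usual choice for HRNet-style heatmap regression (an assumption, since the patent preserves its exact formula only as an image); tensor shapes and argument names are illustrative:

```python
import torch
import torch.nn.functional as F

def keypoint_loss(pred_heatmaps, gt_heatmaps, pred_pts, center,
                  lambdas=(1.0, 1.0, 1.0)):
    """Composite loss min L = λ1·L1 + λ2·L2 + λ3·L3.
    pred_heatmaps, gt_heatmaps: (B, 2, H, W) predicted / ground-truth heatmaps;
    pred_pts: (B, 2, 2) predicted (x, y) of P1 and P2;
    center: (B, 2) center C of the rectangular region holding the key points."""
    l1 = F.mse_loss(pred_heatmaps, gt_heatmaps)              # heatmap term (assumed MSE)
    l2 = torch.norm(pred_pts[:, 0] - center, dim=1).mean()   # L2 = d(P1, C)
    l3 = torch.norm(pred_pts[:, 1] - center, dim=1).mean()   # L3 = d(P2, C)
    return lambdas[0] * l1 + lambdas[1] * l2 + lambdas[2] * l3
```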
Optionally, the step of obtaining the tender shoot key point positions according to the trained semantic segmentation model, the trained key point detection model and the image pair includes:
performing semantic segmentation on the image pair through the trained semantic segmentation model to obtain a tea tender shoot segmentation result;
and carrying out key point detection on the tea tender shoot segmentation result through the trained key point detection model to obtain the key point positions.
Optionally, the step of positioning picking points of tea plant tender shoots by combining tender shoot growth characteristics and tender shoot key point positions to obtain tender shoot picking point positions includes:
when there are two tender shoot key points, with d the Euclidean distance between the two key points, a straight-line equation is established from P1 and P2, and the point on segment P1P2 at a distance of 0.4d from P1 is taken as the tea tender shoot picking point position;
when there is one tender shoot key point, image segmentation is performed with a region growing algorithm seeded at the position of P1 or P2 to obtain a binary image of the tea tender shoot branch and stem, noise in the binary image is filtered out by dilation and erosion operations, a straight line is fitted to the binary image by a line fitting method, and the key point is shifted downward along the line by a preset distance to determine the tea tender shoot picking point position.
The invention provides a tea tender shoot picking point positioning method based on a deep learning algorithm. In this way, the picking point can be determined according to the growth posture of the tea tender shoot, the located picking point coordinates are guaranteed to fall on the leaf stalk of the tender shoot, the integrity of the picked tender shoots is improved, the influence of the surrounding environment on positioning is reduced, and the accuracy and efficiency of tea tender shoot picking point positioning are improved.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of a tea tender shoot picking point positioning method based on a deep learning algorithm according to the present invention;
FIG. 2 is a schematic flow chart of the tea tender shoot picking point positioning method based on the deep learning algorithm of the present invention;
FIG. 3 is a schematic diagram of the position of a key point of the tea tender shoot picking point positioning method based on the deep learning algorithm;
FIG. 4 is a thermal image of tea shoots taken by the tea shoot picking point positioning method based on the deep learning algorithm of the present invention;
FIG. 5 is a tea tender shoot RGB image shot by the tea tender shoot picking point positioning method based on the deep learning algorithm;
FIG. 6 is a labeling diagram of an image in a tea tender shoot semantic segmentation database of the tea tender shoot picking point positioning method based on a deep learning algorithm of the present invention;
FIG. 7 is an effect diagram of the present invention after performing a semantic segmentation model on a tea shoot thermal image and a tea shoot RGB image;
FIG. 8 is a graph of the minimum circumscribed rectangle effect after performing a semantic segmentation model on a tea shoot thermal image and a tea shoot RGB image;
FIG. 9 is a graph of the minimum circumscribed rectangle effect displayed on a tea shoot RGB image after performing a semantic segmentation model on the tea shoot thermal image and the tea shoot RGB image;
FIG. 10 is a flowchart of the tea tender shoot picking point positioning method step S71 based on the deep learning algorithm of the present invention;
fig. 11 is a flowchart of step S72 of the tea tender shoot picking point positioning method based on the deep learning algorithm of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
Referring to fig. 1, fig. 1 is a schematic flow chart of a tea tender shoot picking point positioning method based on a deep learning algorithm according to a first embodiment of the present invention.
In the embodiment of the invention, the tea tender shoot picking point positioning method based on the deep learning algorithm is applied to a tender shoot picking point positioning device, and the method comprises the following steps:
step S10, acquiring tea tender shoot image pairs of a plurality of tea tender shoots, wherein the tea tender shoot image pairs comprise tea tender shoot thermal images and tea tender shoot RGB images;
in this embodiment, in order to improve the precision and efficiency of tea tender shoot picking point positioning, the tender shoot picking point positioning device uses image acquisition equipment, held at a height close to the tea tender shoots, to photograph image pairs of a number of tea tender shoots from the side, thereby collecting data both for tea tender shoot semantic segmentation and for tea tender shoot key point detection. Each tea tender shoot image pair comprises a tea tender shoot thermal image and a tea tender shoot RGB image, both side views of the tea tree tender shoots; the RGB image is a color image of the tender shoots composed of the three primary colors red, green and blue.
Step S20, labeling each pair of tea tender shoot images according to the formats of the two types of task input data, semantic segmentation and key point detection, to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database;
in this embodiment, after acquiring a plurality of tea shoot image pairs of tea shoots, the shoot picking point positioning device labels each pair of the tea shoot image pairs to obtain a tea shoot semantic segmentation database and a tea shoot key point detection database.
Step S20, labeling each pair of the tea shoot image pairs according to the formats of two types of task input data, namely semantic segmentation and key point detection, to obtain a tea shoot semantic segmentation database and a tea shoot key point detection database, which may include:
step S21, labeling each pair of the tea tender shoot images to obtain semantic segmentation labeling data and key point labeling data corresponding to the tea tender shoot images;
in this embodiment, after acquiring the image pairs of a number of tea tender shoots, the tender shoot picking point positioning device labels them according to the formats of the two types of task input data, semantic segmentation and key point detection, to obtain the semantic segmentation labeling data and the key point labeling data corresponding to the tea tender shoot image pairs. For semantic segmentation labeling, the region of the image occupied by each tea tender shoot is labeled, with the label of each tea tender shoot set to 1 and the labels of all remaining regions set to 0; fig. 6 shows the labeling of an image in the tea tender shoot semantic segmentation database, displayed on the tea tender shoot RGB image for ease of visualization. For key point labeling, the acquired data are labeled according to the key point detection task: two key points are labeled on each tea shoot, P1, the connection point between the bud and the leaf of a "one bud, one leaf" shoot, and P2, the connection point between the leaf and the branch of a "one bud, two leaves" shoot. The labeling result for each tender shoot is the information of the two key points, {(Px1, Py1, V1), (Px2, Py2, V2)}, where the first two numbers are the position coordinates and the third is a visibility flag: 0 means not labeled, 1 labeled but not visible, and 2 labeled and visible.
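For illustration, one key point annotation record could look like the following. The patent fixes only the content {(Px1, Py1, V1), (Px2, Py2, V2)}; the file layout, field names and coordinate values here are hypothetical:

```python
# Hypothetical key point annotation record for one tea tender shoot image pair.
annotation = {
    "image_rgb": "shoot_0001_rgb.png",          # assumed file naming
    "image_thermal": "shoot_0001_thermal.png",
    "keypoints": [
        # (x, y, visibility): 0 = not labeled, 1 = labeled but not visible,
        # 2 = labeled and visible
        (312, 208, 2),   # P1: bud/leaf connection point ("one bud, one leaf")
        (305, 341, 2),   # P2: leaf/branch connection point ("one bud, two leaves")
    ],
}
```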
And step S22, respectively carrying out augmentation operation on the semantic segmentation annotation data and the key point annotation data to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database.
In this embodiment, after obtaining the semantic segmentation labeling data and the key point labeling data, the tender shoot picking point positioning device performs augmentation operations on both, expanding the data set samples and constructing the tea tender shoot semantic segmentation database Ds and the tea tender shoot key point detection database Dk. The augmentation operations include image translation, rotation, sharpening, flipping, zooming and the like.
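The patent names the augmentation operations but gives no implementation; a sketch of the rotation and flip cases in OpenCV/NumPy might look like the following, with illustrative function and argument names. The key points are transformed with the same affine matrix as the images so the two databases stay consistent:

```python
import cv2
import numpy as np

def augment_pair(rgb, thermal, mask, keypoints, angle=10.0, flip=True):
    """Apply one geometric augmentation identically to the RGB image, the
    thermal image, the segmentation mask and the key point coordinates."""
    h, w = rgb.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rgb = cv2.warpAffine(rgb, M, (w, h))
    thermal = cv2.warpAffine(thermal, M, (w, h))
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)  # keep labels crisp
    kps = np.asarray(keypoints, dtype=float).copy()
    ones = np.ones((len(kps), 1))
    kps[:, :2] = np.hstack([kps[:, :2], ones]) @ M.T      # rotate the (x, y) pairs
    if flip:                                              # horizontal flip
        rgb, thermal, mask = rgb[:, ::-1].copy(), thermal[:, ::-1].copy(), mask[:, ::-1].copy()
        kps[:, 0] = w - 1 - kps[:, 0]
    return rgb, thermal, mask, kps
```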
Step S30, inputting the tender shoot semantic segmentation database into a semantic segmentation model based on deep learning for training to obtain a semantic segmentation model;
in this embodiment, after obtaining the tea shoot semantic segmentation database and the tea shoot key point detection database, the shoot picking point positioning device inputs the shoot semantic segmentation database into a deep learning-based semantic segmentation model for training, so as to obtain a trained semantic segmentation model.
Step S30, inputting the tender shoot semantic segmentation database into the deep-learning-based semantic segmentation model for training to obtain the semantic segmentation model weight file, may include:
step S31, dividing the tender shoot semantic segmentation database into a semantic segmentation training set and a semantic segmentation verification set according to a first preset proportion;
in this embodiment, after the tea tender shoot semantic segmentation database and the tea tender shoot key point detection database are obtained, the tender shoot picking point positioning device divides the semantic segmentation database into a semantic segmentation training set and a semantic segmentation verification set according to a first preset proportion, where the first preset proportion is 8:2.
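A generic sketch of the 8:2 split (the patent does not prescribe a particular splitting routine; the seeded shuffle is an implementation choice):

```python
import random

def split_database(samples, train_ratio=0.8, seed=0):
    """Shuffle a labeled database and split it into training and
    validation subsets at the preset 8:2 proportion."""
    samples = samples[:]                  # avoid mutating the caller's list
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]   # training set, validation set
```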
Step S32, training the semantic segmentation training set input deep learning semantic segmentation model to obtain a semantic segmentation model weight file;
in this embodiment, after obtaining a semantic segmentation training set, the tender shoot picking point positioning device inputs the semantic segmentation training set into a deep learning semantic segmentation model for training to obtain a semantic segmentation model weight file; and verifying the trained deep convolutional neural network model through a semantic segmentation verification set. The deep learning semantic segmentation model is an FCN2s model.
And step S33, loading the trained semantic segmentation model weight file into the semantic segmentation model to obtain the trained semantic segmentation model.
In this embodiment, after obtaining the semantic segmentation model weight file, the tender shoot picking point positioning device loads the trained weight file into the semantic segmentation model to obtain the trained semantic segmentation model. The semantic segmentation model FCN2s and the corresponding weight file jointly form the trained semantic segmentation model Fs, from which the coordinates S of the n tea tender shoot regions on the image can be obtained: S = Fs(I) = {s1, s2, …, sn} (this notation reconstructs the original formula, which is preserved only as an image).
Step S40, inputting the tender shoot key point detection database into the key point detection model for training to obtain a trained key point detection model;
in this embodiment, after the tender shoot picking point positioning device obtains the tea tender shoot semantic segmentation database and the tea tender shoot key point detection database, the tender shoot key point detection database is input into the key point detection model for training, so as to obtain a trained key point detection model.
Step S40, inputting the tender shoot key point detection database into the key point detection model for training to obtain the trained key point detection model, may include:
step S41, dividing the tender shoot key point detection database into a key point training set and a key point verification set according to a second preset proportion;
in this embodiment, after the tender shoot picking point positioning device obtains the tea tender shoot semantic segmentation database and the tea tender shoot key point detection database, the tender shoot key point detection database is divided into a key point training set and a key point verification set according to a second preset proportion, where the second preset proportion is 8:2.
Step S42, inputting the key point training set into a key point detection model for training to obtain a key point detection model weight file;
in this embodiment, after obtaining the key point training set, the tender shoot picking point positioning device inputs the key point training set into a key point detection model for training to obtain a key point detection model weight file; and verifying the key point detection model through a key point verification set.
Step S42, inputting the key point training set into the key point detection model for training to obtain the key point detection model weight file, may include:
step S421, establishing a key point detection model based on the HRNet network using a PyTorch program, wherein in the improved key point detection model the input layer is modified to 4 channels so that an image formed by fusing the thermal image and the RGB image can be input;
in this embodiment, after the tender shoot picking point positioning device obtains the key point training set, a key point detection model is established based on the HRNet network using a PyTorch program. In the improved key point detection model, the input layer is modified to 4 channels so that an image formed by fusing the thermal image and the RGB image can be input. The model outputs 2 feature maps, the prediction maps of the key points P1 and P2 respectively, and the loss function of the key point detection model, redesigned around the characteristics of tea tender shoot key points, is:
min L = λ1·L1 + λ2·L2 + λ3·L3;
L1 = Σ_k ‖ŷ(P_k) − y(P_k)‖²;
L2 = d(P1, C);
L3 = d(P2, C);
where λ1, λ2, λ3 are the weight coefficients of L1, L2 and L3; P_k denotes the k-th key point of a sample, ŷ(P_k) is the key point heatmap predicted by the network and y(P_k) is the heatmap built from the ground-truth values; d(P_i, C) denotes the Euclidean distance from the tea tender shoot key point predicted by the network to the center C of the rectangular region in which the key point lies. This loss function is used when training the key point detection model.
Step S422, inputting the key point training set into a key point detection model in batches and continuously iterating, finishing the training of the key point detection model when the model is converged, and obtaining a weight file of the key point detection model;
in this embodiment, after the tender shoot picking point positioning device establishes the key point detection model, the key point training set is input into the key point detection model in batches and iterated continuously; as iteration progresses, the model gradually converges and training of the key point detection model is completed, yielding the key point detection model weight file. In the training process, when the loss function on the training set levels off and its decrease becomes small, the model is considered converged and training is complete. In this model, training is stopped when the difference between the loss values of 3 successive epochs is less than 0.1.
One iterative process in model training includes forward propagation and backward propagation. When the image data is transmitted in the forward direction, the image data samples of one batch in the training set are input into the key point detection model, the image data samples are continuously transmitted backwards through the convolution layer in the model, the output result of the model is finally obtained, and the loss function of the model is calculated according to the output result of the model and the label file; during reverse propagation, the partial derivatives of the parameters in the convolutional layers are calculated by using the loss function, gradient back propagation is performed, and the weight parameters of each convolutional layer in the model are updated, so that the training of the model is realized.
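Putting the batched iteration and the convergence rule together, a training loop consistent with this description might be sketched as follows; the data loader, model, optimizer and loss function are assumed to come from the earlier steps, and all names are illustrative:

```python
def train_until_converged(model, loader, optimizer, loss_fn,
                          tol=0.1, patience=3, max_epochs=200):
    """One iteration = forward pass, loss, backward pass, weight update.
    Training stops when the epoch loss changes by less than `tol` for
    `patience` consecutive epochs (the 0.1 / 3-epoch rule in the text)."""
    history = []
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for inputs, targets in loader:     # one batch of image data samples
            optimizer.zero_grad()
            outputs = model(inputs)        # forward propagation through the conv layers
            loss = loss_fn(outputs, targets)
            loss.backward()                # backward propagation of partial derivatives
            optimizer.step()               # update the weights of each conv layer
            epoch_loss += loss.item()
        history.append(epoch_loss / max(len(loader), 1))
        if len(history) > patience and all(
            abs(history[-i] - history[-i - 1]) < tol for i in range(1, patience + 1)
        ):
            break                          # converged: loss curve has leveled off
    return model
```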
The key point detection model and the corresponding weight file jointly form a function Fk, from which the positions of the key points P1 and P2 can be obtained: (P1, P2) = Fk(I), where I is the input image.
and step S43, loading the trained key point detection model weight file into the key point detection model to obtain the trained key point detection model.
In this embodiment, after obtaining the weight file of the key point detection model, the tender shoot picking point positioning device loads the trained weight file of the key point detection model into the key point detection model to obtain the trained key point detection model; and verifying the key point detection model through a key point verification set.
Step S50, shooting an image pair of tender shoots of tea leaves to be picked by using image acquisition equipment;
in this embodiment, after the trained semantic segmentation model and the trained key point detection model are obtained, the tender shoot picking point positioning device uses image acquisition equipment to capture an image pair I of the tea tender shoots to be picked, where the image pair consists of a thermal image and a color RGB image of the tender shoots to be picked.
Step S60, obtaining the tender shoot key point position according to the trained semantic segmentation model, the key point detection model and the image pair;
in this embodiment, after the image pair I is obtained, the tender shoot picking point positioning device obtains the tender shoot key point position according to the semantic segmentation model, the key point detection model and the image pair.
Step S60 is to obtain the tender shoot key point position according to the trained semantic segmentation model, key point detection model and image pair, which may include:
step S61, performing semantic segmentation on the image pair through the trained semantic segmentation model to obtain a tea tender shoot segmentation result;
in this embodiment, after the image pair I is obtained, the tender shoot picking point positioning device performs semantic segmentation on the image pair through the trained semantic segmentation model to obtain the tea tender shoot segmentation result. As shown in fig. 7, the semantic segmentation model and the corresponding weight file (together the trained model Fs) segment the image pair to obtain the corresponding tea tender shoot regions; the minimum circumscribed rectangle of each region containing a shoot is calculated, the rectangular region is cut out of the original image, and the cutout is adjusted to a fixed resolution (128 × 224) to obtain the tea tender shoot segmentation result. Fig. 7 shows the effect after applying the semantic segmentation model to the tea shoot thermal image and the tea shoot RGB image, with white representing shoot regions and black the background.
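A sketch of this post-processing with OpenCV follows. For simplicity it uses the axis-aligned bounding rectangle from cv2.boundingRect (the "minimum circumscribed rectangle" could equally be the rotated cv2.minAreaRect), and it applies the same crop to both images of the pair:

```python
import cv2
import numpy as np

def crop_shoot_regions(mask, rgb, thermal, size=(128, 224)):
    """Find each tea-shoot region in the binary segmentation mask, cut its
    bounding rectangle out of the original image pair and resize the cutout
    to the fixed resolution (width 128, height 224)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)             # circumscribed rectangle
        rgb_crop = cv2.resize(rgb[y:y + h, x:x + w], size)
        thermal_crop = cv2.resize(thermal[y:y + h, x:x + w], size)
        crops.append((rgb_crop, thermal_crop, (x, y, w, h)))
    return crops
```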
And step S62, performing key point detection on the tea tender shoot segmentation result through the trained key point detection model to obtain the position of a key point.
In this embodiment, after the tea tender shoot segmentation result is obtained, the tender shoot picking point positioning device performs key point detection on it through the trained key point detection model to obtain the key point positions. As shown in fig. 8 and 9, the key point detection model and the corresponding key point detection model weight file jointly form the function Fk, which performs key point detection on the tea tender shoot results inside each detection frame; fig. 9 shows the minimum circumscribed rectangles displayed on the tea shoot RGB image after the semantic segmentation model is applied (shown only on the RGB image for ease of visualization). The positions of the key points P1 and P2 are obtained as (P1, P2) = Fk(I).
and step S70, carrying out picking point positioning on the tea tree tender shoots according to the tender shoot growth characteristics and the tender shoot key point positions to obtain tender shoot picking point positions.
In this embodiment, after the tender shoot key point positions are obtained, the tender shoot picking point positioning device locates the picking points of the tea tree tender shoots by combining the tender shoot growth characteristics with the key point positions, obtaining the tender shoot picking point positions.
Step S70, locating the picking points of the tea tree tender shoots according to the tender shoot growth characteristics and the tender shoot key point positions to obtain the tender shoot picking point positions, may include:
step S71, when the tender shoot key points are two points, as shown in fig. 10, that is, P1 and P2 exist at the same time, and d is the distance between the two key points, a linear equation is established by using P1 and P2, and according to the characteristic that the tender shoot picking point is always below the key point and is closer to the key point, the position of the key point and the fitted linear equation are combined, and a point with the distance of 0.4d from the P1 point is taken as the position of the tender shoot picking point of the tea leaf on the line segment P1P 2. Fig. 10 is a flowchart of step S71 of the tea tender shoot picking point positioning method based on the deep learning algorithm, and for convenience of visualization, the diagram is only shown on an RGB image of tea tender shoots, square points in the diagram represent key points of tea tender shoots detected by a model, and triangular points represent tea tender shoot picking points positioned by the method of the present invention.
Step S72, when there is one tender shoot key point, as shown in fig. 11, that is, only one of P1 and P2 exists: image segmentation is performed with a region growing algorithm seeded at the position of P1 or P2 to obtain a binary image of the tea tender shoot branch and stem; noise in the binary image is filtered out by dilation and erosion operations; a straight line is fitted to the binary image by a line fitting method; and, based on the characteristic that the tender shoot picking point always lies below the key point and close to it, the key point position is combined with the fitted straight-line equation and the key point is shifted downward along the line by a set distance, here 5 pixels, to determine the tea tender shoot picking point position. Fig. 11 is the flowchart of step S72 of the tea tender shoot picking point positioning method based on the deep learning algorithm; for ease of visualization it is shown only on the tea tender shoot RGB image, where the square points represent the detected tea tender shoot key points and the triangular points the tea tender shoot picking points located by the invention.
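A sketch of the one-key-point case using OpenCV morphology and cv2.fitLine follows. The region-growing step that produces the binary stem mask is assumed to have been performed already, and treating "downward" as the +y image direction is an implementation assumption:

```python
import cv2
import numpy as np

def picking_point_one_keypoint(stem_mask, keypoint, offset=5):
    """Denoise the binary branch/stem mask with dilation followed by erosion
    (as the text describes), fit a straight line to the remaining stem pixels,
    then shift the key point `offset` pixels downward along that line."""
    kernel = np.ones((3, 3), np.uint8)
    m = stem_mask.astype(np.uint8)
    clean = cv2.erode(cv2.dilate(m, kernel), kernel)
    ys, xs = np.nonzero(clean)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    direction = np.array([vx, vy], dtype=float)
    if direction[1] < 0:                  # make the unit vector point down (+y)
        direction = -direction
    return np.asarray(keypoint, dtype=float) + offset * direction
```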
And step S73, when the number of tender shoot key points obtained is zero, that is, no key point is detected, no further operation is performed and the next group of image pairs is processed.
Through the above scheme, the picking points are determined according to the growth postures of the tea tender shoots, the located picking point coordinates are guaranteed to fall on the leaf stalks of the tea tender shoots, the integrity of the picked tender shoots is improved, the influence of the surrounding environment on positioning is reduced, and the accuracy and efficiency of tea tender shoot picking point positioning are improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are only for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A tea tender shoot picking point positioning method based on a deep learning algorithm, characterized by comprising the following steps:
acquiring tea tender shoot image pairs of a plurality of tea tender shoots, wherein the tea tender shoot image pairs comprise tea tender shoot thermal images and tea tender shoot RGB images;
labeling each pair of tea tender shoot image pairs to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database;
inputting the tender shoot semantic segmentation database into a semantic segmentation model based on deep learning for training to obtain a trained semantic segmentation model;
inputting the tender shoot key point detection database into a key point detection model for training to obtain a trained key point detection model;
shooting an image pair of tender shoots of tea leaves to be picked by using image acquisition equipment;
obtaining the tender shoot key point positions according to the trained semantic segmentation model, the trained key point detection model and the image pair;
and carrying out picking point positioning on tea tree tender shoots according to tender shoot growth characteristics and tender shoot key point positions to obtain tender shoot picking point positions.
2. The tea shoot picking point positioning method based on the deep learning algorithm as claimed in claim 1, wherein the step of labeling each pair of the tea shoot image pairs to obtain a tea shoot semantic segmentation database and a tea shoot key point detection database comprises:
labeling each pair of tea shoot image pairs according to the formats of two types of task input data of semantic segmentation and key point detection respectively to obtain semantic segmentation labeling data and key point labeling data corresponding to the tea shoot image pairs;
and respectively carrying out augmentation operation on the semantic segmentation annotation data and the key point annotation data to obtain a tea tender shoot semantic segmentation database and a tea tender shoot key point detection database.
3. The tea shoot picking point positioning method based on the deep learning algorithm as claimed in claim 1, wherein the step of inputting the shoot semantic segmentation database into a semantic segmentation model based on deep learning for training to obtain the semantic segmentation model comprises:
dividing the tender shoot semantic segmentation database into a semantic segmentation training set and a semantic segmentation verification set according to a first preset proportion;
inputting the semantic segmentation training set into a deep learning semantic segmentation model for training to obtain a semantic segmentation model weight file;
and loading the trained semantic segmentation model weight file into the semantic segmentation model to obtain the trained semantic segmentation model.
4. The tea tender shoot picking point positioning method based on the deep learning algorithm as claimed in claim 1, wherein the step of inputting a tender shoot key point detection database into a key point detection model for training to obtain the key point detection model comprises:
dividing the tender shoot key point detection database into a key point training set and a key point verification set according to a second preset proportion;
inputting the key point training set into a key point detection model for training to obtain a key point detection model weight file;
and loading the trained key point detection model weight file into the key point detection model to obtain the trained key point detection model.
5. The tea tender shoot picking point positioning method based on the deep learning algorithm as claimed in claim 4, wherein the step of inputting the key point training set into the key point detection model for training to obtain the key point detection model weight file comprises:
establishing an improved key point detection model based on the HRNet network using a PyTorch program, wherein in the improved key point detection model the input layer is modified to 4 channels so that an image formed by fusing the thermal image and the RGB image can be input;
and inputting the key point training set into the key point detection model in batches and continuously iterating, and finishing the training of the key point detection model when the model is converged to obtain a key point detection model weight file.
6. The tea tender shoot picking point positioning method based on the deep learning algorithm as claimed in claim 5, wherein the loss function of the key point detection model is as follows:
min L = λ1·L1 + λ2·L2 + λ3·L3;
L1 = Σ_k ‖ŷ(P_k) − y(P_k)‖²;
L2 = d(P1, C);
L3 = d(P2, C);
where λ1, λ2, λ3 are the weight coefficients of L1, L2 and L3; P_k denotes the k-th key point of a sample, ŷ(P_k) is the key point heatmap predicted by the network and y(P_k) is the heatmap built from the ground-truth values; d(P_i, C) denotes the Euclidean distance from the tea tender shoot key point predicted by the network to the center C of the rectangular region in which the key point lies.
7. The tea tender shoot picking point positioning method based on the deep learning algorithm as claimed in claim 1, wherein the step of obtaining the positions of the tender shoot key points according to the trained semantic segmentation model, the key point detection model and the image pair comprises:
performing semantic segmentation on the image pair through the trained semantic segmentation model to obtain a tea tender shoot segmentation result;
and carrying out key point detection on the tea tender shoot segmentation result through the trained key point detection model to obtain the position of a key point.
8. The tea tender shoot picking point positioning method based on the deep learning algorithm as claimed in claim 1, wherein the step of combining the tender shoot growth characteristics and tender shoot key point positions to carry out picking point positioning on tea tender shoots to obtain tender shoot picking point positions comprises the following steps:
when there are two tender shoot key points, with d the Euclidean distance between the two key points, a straight-line equation is established from P1 and P2, and the point on segment P1P2 at a distance of 0.4d from P1 is taken as the tea tender shoot picking point position;
when there is one tender shoot key point, image segmentation is performed with a region growing algorithm seeded at the position of P1 or P2 to obtain a binary image of the tea tender shoot branch and stem, noise in the binary image is filtered out by dilation and erosion operations, a straight line is fitted to the binary image by a line fitting method, and the key point is shifted downward along the line by a preset distance to determine the tea tender shoot picking point position.
CN202210228847.XA 2022-03-08 2022-03-08 Tea tender shoot picking point positioning method based on deep learning algorithm Pending CN114842188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210228847.XA CN114842188A (en) 2022-03-08 2022-03-08 Tea tender shoot picking point positioning method based on deep learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210228847.XA CN114842188A (en) 2022-03-08 2022-03-08 Tea tender shoot picking point positioning method based on deep learning algorithm

Publications (1)

Publication Number Publication Date
CN114842188A true CN114842188A (en) 2022-08-02

Family

ID=82562834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210228847.XA Pending CN114842188A (en) 2022-03-08 2022-03-08 Tea tender shoot picking point positioning method based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN114842188A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187803A (en) * 2022-08-12 2022-10-14 仲恺农业工程学院 Positioning method for picking process of tender shoots of famous tea
CN115187803B (en) * 2022-08-12 2023-04-21 仲恺农业工程学院 Positioning method for picking process of famous tea tender shoots
CN117616999A (en) * 2024-01-08 2024-03-01 华南农业大学 Intelligent tea picking actuator, device and method

Similar Documents

Publication Publication Date Title
US11144787B2 (en) Object location method, device and storage medium based on image segmentation
EP3770810A1 (en) Method and apparatus for acquiring boundary of area to be operated, and operation route planning method
CN114842188A (en) Tea tender shoot picking point positioning method based on deep learning algorithm
Liu et al. A method of segmenting apples at night based on color and position information
CN111213155A (en) Image processing method, device, movable platform, unmanned aerial vehicle and storage medium
CN110070571B (en) Phyllostachys pubescens morphological parameter detection method based on depth camera
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN114067206B (en) Spherical fruit identification positioning method based on depth image
Grondin et al. Tree detection and diameter estimation based on deep learning
CN116091951A (en) Method and system for extracting boundary line between farmland and tractor-ploughing path
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN114842187A (en) Tea tender shoot picking point positioning method based on fusion of thermal image and RGB image
CN117152544B (en) Tea-leaf picking method, equipment, storage medium and device
CN112926648B (en) Method and device for detecting abnormality of tobacco leaf tip in tobacco leaf baking process
Afonso et al. Detection of tomato flowers from greenhouse images using colorspace transformations
CN114022777A (en) Sample manufacturing method and device for ground feature elements of remote sensing images
Wang et al. A transformer-based mask R-CNN for tomato detection and segmentation
CN112052811A (en) Pasture grassland desertification detection method based on artificial intelligence and aerial image
CN113192100B (en) Time-sharing overlapped plant image key feature area edge path acquisition method
CN116030324A (en) Target detection method based on fusion of spectral features and spatial features
CN116311218A (en) Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion
CN116385477A (en) Tower image registration method based on image segmentation
CN115731257A (en) Leaf form information extraction method based on image
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination