CN114359546A - Day lily maturity identification method based on convolutional neural network - Google Patents

Day lily maturity identification method based on convolutional neural network

Info

Publication number
CN114359546A
Authority
CN
China
Prior art keywords
daylily
neural network
image
point
maturity
Legal status
Granted
Application number
CN202111650998.6A
Other languages
Chinese (zh)
Other versions
CN114359546B
Inventor
张延军
张朋琳
赵建鑫
夏黎明
刘敏强
杨博
Current Assignee
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology
Priority to CN202111650998.6A
Publication of CN114359546A
Application granted
Publication of CN114359546B
Active legal status (current)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a day lily maturity identification method based on a convolutional neural network, which comprises the following steps: collecting image data of the daylily and labeling its feature points; modifying the output layer, activation function and loss function of the YOLOv3 neural network; training on the acquired image dataset starting from neural network parameters pre-trained on the COCO dataset; predicting the feature points of the daylily with the trained neural network to obtain their coordinates in the image coordinate system; obtaining spatial coordinates from the image coordinates of the feature points and the corresponding depth image; calculating the length feature of the daylily from the spatial coordinates; and obtaining the maturity of the daylily through the mapping relation between maturity and length feature. The method obtains the position and length information of the daylily by combining neural-network feature-point recognition with three-dimensional information extracted by a depth camera, judges maturity from length, and offers a high recognition rate at low computational cost.

Description

Day lily maturity identification method based on convolutional neural network
Technical Field
The invention relates to the technical field of intelligent day lily maturity identification, in particular to a day lily maturity identification method based on a convolutional neural network.
Background
Daylily, also known as golden needle vegetable and "forget-sorrow herb", is a plant of the genus Hemerocallis. It is planted in large quantities in Hunan, Shanxi, Jiangsu, Zhejiang and other Chinese provinces south of the Qinling Mountains. It is edible, being one of the traditional delicacies of China, and has various medicinal values such as stopping bleeding, reducing inflammation, clearing heat and promoting diuresis.
At present, most agricultural harvesting machines in China are still large, manually operated machines, such as those used for harvesting wheat and corn. Such machines are only suitable for crops that are robust and replanted every year, where there is no concern about the machine damaging the crop during harvesting. For crops such as tomatoes and peppers, large harvesting machines are unsuitable: tomatoes are soft and very easily damaged by the machine during picking, and peppers grow on trees, which large harvesters cannot handle. In view of this situation, many researchers have developed intelligent picking robots, and in their development the most critical technologies are visual recognition and intelligent obstacle-avoidance and path-planning algorithms. For visual picking algorithms, researchers usually adopt methods such as clustering and Hough detection to detect the target, and denoise with image-processing operations such as dilation and erosion. Although these visual processing algorithms have gradually matured and can achieve good recognition results, they can only detect targets with obvious color and gradient features; when such features are weak, accuracy is often low. The color features of the fruits and stems of the daylily are not distinctive, and daylily is frequently picked at night, which increases the difficulty of visual recognition for a picking robot.
Although traditional visual recognition algorithms have been developed for many years, their accuracy remains low under the complex conditions of many agricultural picking tasks, so many researchers have applied neural network algorithms to agricultural picking. Convolutional neural networks are widely used in target detection: the target is predicted and located by the network with high accuracy and robustness, which is of practical significance for identifying round crops such as kiwi fruits and apples. However, such algorithms select targets with anchor boxes, and their recognition of strip-shaped fruits such as daylily is poor, because the daylily pixels occupy only a small part of the whole anchor box. Therefore, an image recognition method aimed at the feature points of the daylily is needed.
Disclosure of Invention
The invention aims to provide a daylily maturity identification method based on a convolutional neural network: the convolutional neural network predicts the feature points of the daylily, the length feature is extracted from these points using a 3D camera, and the maturity and position of the daylily are finally obtained. This solves the problems that traditional visual recognition methods cannot identify targets whose gradient and color features are not obvious, and that the shape of the daylily is not suited to general anchor-box target detection methods.
In order to achieve the purpose, the invention provides the following scheme:
a day lily maturity identification method based on a convolutional neural network comprises the following steps:
s1, collecting images of the daylily at night and in the day by adopting a 3D camera based on TOF, labeling feature points of the daylily in the collected images by adopting a labeling program, and storing an image data set and labels thereof;
S2, constructing a convolutional neural network by modifying the output layer, activation function and loss function of the YOLOv3 neural network, wherein the output layer is modified into S × S × 24 vectors; the coordinates of feature points which are not in the image are set to (-0.1, -0.1), and the activation function is the Leaky-ReLU function:
$$f(x)=\begin{cases}x, & x\ge 0\\ \alpha x, & x<0\end{cases}$$

where $\alpha$ is a small positive slope;
the loss function is defined as:
$$
\begin{aligned}
L ={}& \lambda_{cooin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,in}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coonoin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,noin}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,obj}\big(C_i-\hat{C}_i\big)^2 \\
&+\lambda_{mask}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,mask}\big(M_i-\hat{M}_i\big)^2
\end{aligned}
$$

where $x_i$ and $y_i$ are the coordinate predictions of the neural network, $\hat{x}_i$ and $\hat{y}_i$ the corresponding true coordinates in the dataset, $C_i$ and $\hat{C}_i$ the predicted confidence and its true value, and $M_i$ and $\hat{M}_i$ the predicted and true indicators of whether the target is occluded; $S$ is the side length of the tensor finally predicted by the model, and $B$ indexes the 3 size categories responsible for predicting the target. The first row of summed terms is the position loss for feature points in the image and the second row the position loss for feature points not in the image, with respective weights $\lambda_{cooin}$ and $\lambda_{coonoin}$; $\mathbb{1}_{ij}^{\,in}$ indicates that the predicted point is in the image and $\mathbb{1}_{ij}^{\,noin}$ that it is not. The third row is the loss on the probability that a target is present within the anchor frame, where $\mathbb{1}_{ij}^{\,obj}$ indicates that a target exists in the grid cell responsible for the prediction. The final row is the loss on the probability that the target is occluded, where $\mathbb{1}_{ij}^{\,mask}$ indicates an occluded target, with loss weight $\lambda_{mask}$.
S3, training the neural network on the image dataset acquired in step S1, starting from parameters pre-trained on the COCO dataset;
s4, capturing image data and a depth map of the day lily by using a TOF-based 3D camera in the picking process, predicting feature points of the day lily by using the neural network trained in the step S3, and obtaining coordinates of the feature points in an image coordinate system;
s5, obtaining the space coordinate of the feature point by using the coordinate of the feature point in the image coordinate system and the depth image thereof;
s6, calculating the distance between the characteristic points through the obtained space coordinates of the characteristic points, namely the length characteristics of the daylily;
and S7, obtaining the maturity of the daylily through the mapping relation between the maturity of the daylily and the length characteristics of the daylily, and representing the position of the daylily by using the space coordinates of the characteristic points to finish the identification and positioning of the daylily.
Further, in step S1, a 3D camera based on TOF is used to collect images of daylily at night and day, a labeling program is used to label feature points of the daylily in the collected images, and an image data set and labels thereof are stored, which specifically includes:
1) when the daylily dataset is collected, the collection times are distributed uniformly, i.e. the number of pictures taken in each time period of a 24-hour day is similar;
2) when an image is collected, the height of the TOF-based 3D camera is kept between 500 mm and 1300 mm; this range covers the growth height of the daylily and is the camera mounting height used during picking;
3) randomly dividing the collected images into 3 parts: a training set, a validation set and a test set.
Further, in the step S2, constructing a convolutional neural network, and modifying an output layer, an activation function, and a loss function of the YOLOv3 neural network specifically include:
1) adjusting the neural network output layer:
the neural network has two outputs for predicting daylily targets of different sizes in the image, with output layer dimensions of 13 × 13 × 12 × 2 and 26 × 26 × 12 × 2 respectively; in order to initialize with the training parameters of YOLOv3, only the last 1 × 1 convolutional layer of the network is changed;
13 × 13 and 26 × 26 in the output layer dimensions are the numbers of grid cells in the two outputs. Each grid cell is responsible for predicting two targets, each represented by a ten-dimensional vector: the probability that a target exists in the cell, the probability that the target is occluded, and the coordinates of the target's four feature points. If the center point of a labeled ground-truth target falls in a cell, that cell is responsible for predicting the target, and the center point of the target is calculated as follows:
$$x_i=\frac{1}{4}\sum_{k=1}^{4}x_k,\qquad y_i=\frac{1}{4}\sum_{k=1}^{4}y_k$$

where $x_i$ and $y_i$ are the image coordinates of the center point of the target assigned to the $i$-th grid cell, obtained by averaging the coordinates $(x_k, y_k)$ of the target's four feature points, and $n$ is the side length of a grid cell, so the cell in which $(x_i, y_i)$ falls is the one responsible for the prediction;
2) expression mode of output characteristic point coordinate
The coordinates are expressed in polar form: the corner of the grid cell responsible for predicting the point is taken as the coordinate origin, the polar position of a feature point relative to this origin is (θ, r), and the coordinates are normalized to obtain the prediction label of the final output layer. The normalization is:
$$\theta'=\frac{\theta}{2\pi},\qquad r'=\frac{r}{w}$$

where $w$ is the size of the input image; the prediction label of the output layer for a feature point is $(\theta', r')$;
3) activation function of neural network
For a daylily target with a feature point not in the image, the ground-truth label marks the coordinate of that point as the polar pair (0, -0.1), so negative values must be output to predict such targets. To allow negative outputs, the activation function of the neural network is Leaky-ReLU. When the center point of a daylily target with feature points outside the image is calculated, the points not in the image are omitted and the coordinates of the remaining feature points are averaged;
4) loss function of neural network
The loss of the feature-point recognition network for the daylily comprises the target-presence probability loss, the occlusion probability loss and the feature-point position loss. The position loss is further divided into the coordinate loss of feature points in the image and the position loss of feature points not in the image; a larger weight is assigned to the in-image position loss and a smaller weight to the out-of-image position loss, finally yielding the neural network loss function;
5) network architecture
The network structure corresponds to the first two output branches of the YOLOv3 network, i.e. the outputs obtained by 32× and 16× downsampling. The output dimension of the network's last 1 × 1 convolutional layer is modified to 20, and the branch corresponding to 8× downsampling is deleted, giving the final neural network structure.
Further, in the step S3, the training of the neural network parameters pre-trained by using the COCO data set on the image data set acquired in the step S1 specifically includes:
1) acquiring the network weights of YOLOv3 as initialization data for the neural network, which includes downloading YOLOv3 weights from an open-source website; the weights must have been trained and saved with PyTorch for target detection on the COCO dataset;
2) extracting, with PyTorch, the weights corresponding to the first two outputs among the network parameters, i.e. the outputs obtained by 32× and 16× downsampling, deleting the weights of the last 1 × 1 convolutional layer, matching the remaining weights with the established neural network model, and randomly initializing the parameters of the last convolutional layer to obtain the initialization weights of all layers of the neural network;
3) completing the training of the neural network on the training set, validating on the validation set after each full pass over the training set to prevent overfitting; training uses the Adam optimizer in PyTorch, and the hyper-parameters are set with reference to YOLOv3.
Further, in step S4, capturing image data and a depth map of the daylily by using a TOF-based 3D camera during picking, specifically including:
the method comprises the steps that a 3D camera based on TOF is installed on a wrist of a picking manipulator, three times of depth images and pictures need to be collected for recognition in the picking process, day lily to be picked and the position of the day lily are determined by taking a picture for the first time, the day lily is positioned again at the moment after a mechanical arm moves to the position close to the day lily, the day lily is taken for the third time before picking, remeasurement precision is carried out, if the position of the mechanical arm is not at a preset position, the day lily is repeatedly taken and calibrated, and if the position of the mechanical arm is right, a target day lily is picked.
Further, in step S5, the spatial coordinates of the feature points are obtained by using the coordinates of the feature points in the image coordinate system and the depth image thereof, specifically:
and calculating the three-dimensional coordinates of the feature points by using the identified feature points and the depth images, wherein the calculation formula takes the point A as an example and is as follows:
$$x_A=\frac{x_{A'}\,d_A}{f},\qquad y_A=\frac{y_{A'}\,d_A}{f},\qquad z_A=d_A$$

where $x_{A'}$ and $y_{A'}$ are the horizontal and vertical coordinates of the feature point in the image coordinate system (the coordinates identified by the neural network), $f$ is the camera focal length obtained by calibration, and $d_A$ is the distance measured by the TOF-based 3D camera; the formula gives the three-dimensional coordinate $(x_A, y_A, z_A)$ of point A in the camera coordinate system.
Further, in step S6, calculating a distance between the feature points, that is, a length feature of the daylily, by using the obtained spatial coordinates of the feature points, specifically including:
1) the calculation formula for calculating the distance between the two feature points, i.e., the distance between the point a and the point B, from the three-dimensional coordinates obtained in step S5 is as follows:
$$\mathrm{length}(A,B)=\sqrt{(x_A-x_B)^2+(y_A-y_B)^2+(z_A-z_B)^2}$$

where length(A, B) is the distance between point A and point B, i.e. the length feature of the daylily;
2) 4 feature points are set, dividing the daylily into 3 segments, and the length feature of the daylily is the sum of the 3 segment distances. The first feature point is at the bottom of the daylily, the position to be snapped off during picking; the second is at the junction of the flower stalk and the bud; the third lies between the second and the fourth; the fourth is at the top of the daylily.
Further, in step S7, obtaining the maturity of the daylily through the mapping relationship between the maturity of the daylily and the length characteristics of the daylily, and simultaneously representing the position of the daylily by using the spatial coordinates of the characteristic points to complete the identification and positioning of the daylily, specifically including:
the method for researching the mapping relation between the maturity and the length characteristics of the daylily comprises the following steps of:
100 daylily plants of the same variety that have just started to grow are selected, and their lengths are measured and recorded regularly with a sampling period of 0.5 days, continuing until the daylily is mature. This finally yields 100 vectors of different dimensions. The vectors are aligned from back to front by sampling time, and the earliest entries of vectors with too many samples are deleted; the 100 values at each sampling instant are then averaged to obtain a final vector, each entry of which corresponds to one maturity level. Recording the length corresponding to each maturity, the mapping relation between maturity and length is obtained by linear interpolation.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects: the daylily maturity identification method based on the convolutional neural network provided by the invention is a method for predicting characteristic points of daylily by using the convolutional neural network, extracting length characteristics by using the obtained characteristic points through a 3D camera and finally obtaining the maturity and the position of the daylily.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow chart of a day lily maturity identification method based on a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network architecture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an output layer structure according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of computing three-dimensional coordinates of feature points according to an embodiment of the present invention;
fig. 5 is a schematic diagram of the positions of the feature points of the daylily according to the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a daylily maturity identification method based on a convolutional neural network, which is a method for predicting characteristic points of daylily by using the convolutional neural network, extracting length characteristics by using the obtained characteristic points through a 3D camera and finally obtaining the maturity and the position of the daylily.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The application scene of the embodiment of the invention is a daylily planting base, where the daylily is a single variety with relatively stable length and feature shape. First, image information of the daylily is collected; this process is not limited to collection at the base, and image data can also be obtained by network download, video extraction and the like. The collected data are then labeled and the labeling results saved. Meanwhile, an experiment on the mapping relation between the length feature and maturity is carried out at the base, and the relevant data are recorded, processed and stored. The neural network shown in fig. 2 is trained using the saved data and labels. The trained network predicts the feature points of the daylily, a depth image is obtained with the TOF-based 3D camera, the depth at the feature points is extracted from the depth image, and the three-dimensional coordinates of the feature points are calculated.
As shown in fig. 1, the method for identifying the maturity of daylily based on the convolutional neural network provided by the invention comprises the following steps:
s1, collecting images of the daylily at night and in the day by adopting a 3D camera based on TOF, labeling feature points of the daylily in the collected images by adopting a labeling program, and storing an image data set and labels thereof;
the method specifically comprises the following steps:
1) when the daylily dataset is collected, the collection times are distributed uniformly, i.e. the number of pictures taken in each time period of a 24-hour day is similar;
2) when an image is collected, the height of the TOF-based 3D camera is kept between 500 mm and 1300 mm; this range covers the growth height of the daylily and is the camera mounting height used during picking;
3) randomly dividing the collected images into 3 parts: a training set, a validation set and a test set.
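A minimal sketch of such a random split (the 7:2:1 ratio and the file-list interface are assumptions; the patent does not fix the proportions):

```python
import random

def split_dataset(image_paths, train_frac=0.7, val_frac=0.2, seed=42):
    """Randomly split the collected images into training, validation
    and test sets; the 7:2:1 ratio is illustrative only."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    return (paths[:n_train],                  # training set
            paths[n_train:n_train + n_val],   # validation set
            paths[n_train + n_val:])          # test set
```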
S2, constructing a convolutional neural network by modifying the output layer, activation function and loss function of the YOLOv3 neural network, wherein the output layer is modified into S × S × 24 vectors; the coordinates of feature points which are not in the image are set to (-0.1, -0.1), and the activation function is the Leaky-ReLU function:
$$f(x)=\begin{cases}x, & x\ge 0\\ \alpha x, & x<0\end{cases}$$

where $\alpha$ is a small positive slope;
the loss function is defined as:
$$
\begin{aligned}
L ={}& \lambda_{cooin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,in}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coonoin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,noin}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,obj}\big(C_i-\hat{C}_i\big)^2 \\
&+\lambda_{mask}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,mask}\big(M_i-\hat{M}_i\big)^2
\end{aligned}
$$

where $x_i$ and $y_i$ are the coordinate predictions of the neural network, $\hat{x}_i$ and $\hat{y}_i$ the corresponding true coordinates in the dataset, $C_i$ and $\hat{C}_i$ the predicted confidence and its true value, and $M_i$ and $\hat{M}_i$ the predicted and true indicators of whether the target is occluded; $S$ is the side length of the tensor finally predicted by the model, and $B$ indexes the 3 size categories responsible for predicting the target. The first row of summed terms is the position loss for feature points in the image and the second row the position loss for feature points not in the image, with respective weights $\lambda_{cooin}$ and $\lambda_{coonoin}$; $\mathbb{1}_{ij}^{\,in}$ indicates that the predicted point is in the image and $\mathbb{1}_{ij}^{\,noin}$ that it is not. The third row is the loss on the probability that a target is present within the anchor frame, where $\mathbb{1}_{ij}^{\,obj}$ indicates that a target exists in the grid cell responsible for the prediction. The final row is the loss on the probability that the target is occluded, where $\mathbb{1}_{ij}^{\,mask}$ indicates an occluded target, with loss weight $\lambda_{mask}$.
S3, training the neural network on the image dataset acquired in step S1, starting from parameters pre-trained on the COCO dataset;
s4, capturing image data and a depth map of the day lily by using a TOF-based 3D camera in the picking process, predicting feature points of the day lily by using the neural network trained in the step S3, and obtaining coordinates of the feature points in an image coordinate system;
s5, obtaining the space coordinate of the feature point by using the coordinate of the feature point in the image coordinate system and the depth image thereof;
s6, calculating the distance between the characteristic points through the obtained space coordinates of the characteristic points, namely the length characteristics of the daylily;
and S7, obtaining the maturity of the daylily through the mapping relation between the maturity of the daylily and the length characteristics of the daylily, and representing the position of the daylily by using the space coordinates of the characteristic points to finish the identification and positioning of the daylily.
The network structure finally obtained in step S2 is shown in fig. 2, and the specific details of the network structure are as follows:
1) adjusting the neural network output layer:
the neural network has two outputs, used respectively for predicting daylily targets of different sizes in the image; the network follows the YOLOv3 structure and can directly adopt the first two outputs of YOLOv3, i.e. the outputs obtained by 32× and 16× downsampling.
Since what the neural network ultimately identifies is not a prediction box but the feature points of the daylily, the dimension and meaning of the output layer change accordingly. The output layer dimensions of the neural network are 13 × 13 × 12 × 2 and 26 × 26 × 12 × 2, and only the last 1 × 1 convolutional layer is changed so that the network can be initialized with the training parameters of YOLOv3.
13 × 13 and 26 × 26 in the output layer dimensions are the numbers of grid cells in the two outputs. Each grid cell is responsible for predicting two targets, each represented by a ten-dimensional vector: the probability that a target exists in the cell, the probability that the target is occluded, and the coordinates of the target's four feature points. If the center point of a labeled ground-truth target is in a cell, that cell is responsible for predicting the target, and the center point of the target is calculated as follows:
$$x_i=\frac{1}{4}\sum_{k=1}^{4}x_k,\qquad y_i=\frac{1}{4}\sum_{k=1}^{4}y_k$$

where $x_i$ and $y_i$ are the image coordinates of the center point of the target assigned to the $i$-th grid cell, obtained by averaging the coordinates $(x_k, y_k)$ of the target's four feature points, and $n$ is the side length of a grid cell, so the cell in which $(x_i, y_i)$ falls is the one responsible for the prediction;
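A minimal sketch of this assignment rule (the negative-coordinate marker for out-of-image points follows the labeling convention described in item 3) below):

```python
def target_center(feature_points):
    """Center of a labeled target: the mean of its feature-point
    coordinates, skipping points marked as outside the image
    (labeled with negative coordinates)."""
    visible = [(x, y) for x, y in feature_points if x >= 0 and y >= 0]
    cx = sum(x for x, _ in visible) / len(visible)
    cy = sum(y for _, y in visible) / len(visible)
    return cx, cy

def responsible_cell(cx, cy, n):
    """Grid-cell indices containing the center point; this cell is
    responsible for the prediction (n = cell side length in pixels)."""
    return int(cx // n), int(cy // n)
```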
2) expression mode of output characteristic point coordinate
To improve the final prediction accuracy, the relationship between the expression of the output feature-point coordinates and the grid cell responsible for predicting the point must be designed. The invention expresses coordinates in polar form: the corner of the grid cell responsible for predicting the point is taken as the coordinate origin, the polar position of a feature point relative to this origin is (θ, r), and the coordinates are normalized to obtain the prediction label of the final output layer. The normalization is:
$$\theta'=\frac{\theta}{2\pi},\qquad r'=\frac{r}{w}$$

where $w$ is the size of the input image; the final prediction label of the output layer for a feature point is $(\theta', r')$, as shown in fig. 3;
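A minimal sketch of this label encoding under the normalization above (the exact normalization constants are an assumed reading of the patent's formula):

```python
import math

def polar_label(px, py, ox, oy, w):
    """Encode feature point (px, py) as a normalized polar label
    (theta', r') relative to the corner (ox, oy) of the grid cell
    responsible for predicting it; w is the input image size."""
    dx, dy = px - ox, py - oy
    theta = math.atan2(dy, dx) % (2 * math.pi)  # angle in [0, 2*pi)
    r = math.hypot(dx, dy)                      # radial distance in pixels
    return theta / (2 * math.pi), r / w
```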
3) activation function of neural network
During prediction, feature points often lie outside the image. For such daylily targets, the ground-truth label marks the out-of-image coordinates as the polar pair (0, -0.1), so the neural network must output negative values to predict targets containing points not in the image. To allow negative outputs, the activation function of the network is Leaky-ReLU; when calculating the center points of such targets, points not in the image are ignored and the coordinates of the remaining points are averaged;
4) loss function of neural network
The training process of a neural network drives its loss function toward 0, so a reasonable loss function affects both the accuracy of the final recognition and the training speed. The feature-point recognition network for the daylily includes the target-presence probability loss, the occlusion probability loss and the feature-point position loss. The position loss is further divided into the coordinate loss of feature points in the image and the position loss of feature points not in the image. A larger weight is assigned to the in-image position loss and a smaller weight to the out-of-image position loss, finally yielding the neural network loss function given in claim 1.
5) Network architecture
As shown in fig. 2, the network structure corresponds to the first two output layers of the YOLOv3 network, i.e. the 32× and 16× downsampling outputs, corresponding to y1 and y2 in fig. 2. The output dimension of the network's last 1 × 1 convolutional layer is modified to 20, forming the output layer of the neural network, and the branch corresponding to 8× downsampling is deleted, giving the final neural network structure.
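A PyTorch sketch of this modification; the attribute names head32, head16 and head8 are placeholders for whatever a given YOLOv3 implementation calls its three output branches:

```python
import torch.nn as nn

def adapt_yolov3_head(model):
    """Replace the final 1 x 1 convolutions of the 32x- and
    16x-downsampling branches with 20-channel outputs (2 targets x
    10 values per grid cell) and drop the 8x branch."""
    for name in ("head32", "head16"):
        old = getattr(model, name)
        setattr(model, name, nn.Conv2d(old.in_channels, 20, kernel_size=1))
    model.head8 = None  # delete the 8x-downsampling output branch
    return model
```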
In step S3, the training of the neural network parameters pre-trained by using the COCO data set on the image data set acquired in step S1 specifically includes:
1) acquiring the network weights of YOLOv3 as initialization data for the neural network, which includes downloading YOLOv3 weights from an open-source website; the weights must have been trained and saved with PyTorch for target detection on the COCO dataset;
2) extracting, with PyTorch, the weights corresponding to the first two outputs among the network parameters, i.e. the outputs obtained by 32× and 16× downsampling, deleting the weights of the last 1 × 1 convolutional layer, matching the remaining weights with the established neural network model, and randomly initializing the parameters of the last convolutional layer to obtain the initialization weights of all layers of the neural network;
3) completing the training of the neural network on the training set, validating on the validation set after each full pass over the training set to prevent overfitting; training uses the Adam optimizer in PyTorch, and the hyper-parameters are set with reference to YOLOv3. Specifically, the hyper-parameter settings are shown in Table 1.
TABLE 1 hyper-parameter settings
Learning rate                      0.00258
Learning-rate momentum             0.779
Cosine annealing hyper-parameter   0.17
Weight decay coefficient           0.00058
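A PyTorch sketch of this training setup using the Table 1 values; the model, data loaders and loss function are assumed to be defined elsewhere, the epoch count is illustrative, and reading the learning-rate momentum as Adam's first beta coefficient is an interpretation:

```python
import torch

def train(model, train_loader, val_loader, daylily_loss, epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.00258,
                                 betas=(0.779, 0.999),  # learning-rate momentum
                                 weight_decay=0.00058)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                           T_max=epochs)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = daylily_loss(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
        model.eval()  # validate after each full pass over the training set
        with torch.no_grad():
            val_loss = sum(daylily_loss(model(x), y).item()
                           for x, y in val_loader)
    return model
```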
In step S4, capturing image data and a depth map of daylily by using a TOF-based 3D camera in the picking process, specifically including:
the TOF camera is installed on the wrist of the picking manipulator, and three depth images and three photos need to be acquired for identification in the picking process. The daylily to be picked and the position of the daylily are determined by first photographing, and after the mechanical arm moves to the position near the daylily, the daylily is positioned again at the moment, so that the precision is higher than that of the first photographing. And the third photographing is photographing before picking, the photographing purpose is retest precision, photographing and calibration are repeated if the position of the manipulator is not at the preset position, and the target day lily is picked if the position of the manipulator is right. And acquiring images by the same method as the step S1 in the day lily picking process, namely acquiring the images by using a TOF 3D camera to identify and position the characteristic points of the day lily.
The working principle of the TOF camera is as follows: an infrared transmitter mounted on the camera emits infrared light forward, and a receiver on the camera captures the light reflected back when it hits an object; the distance between the camera and the target point is determined from the flight time of the infrared light. By emitting infrared light at different angles and computing the flight times, the camera obtains its depth image.
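In symbols, with $c$ the speed of light and $t$ the measured round-trip flight time of the infrared pulse, the camera computes the distance to the target point as

$$d=\frac{c\,t}{2}$$

and evaluating this over all emission angles gives the depth image (this is the standard TOF relation; the patent states the principle only in words).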
As shown in fig. 4, in step S5, the spatial coordinates of the feature points are obtained by using the coordinates of the feature points in the image coordinate system and the depth image thereof, and the specific steps are as follows:
and calculating the three-dimensional coordinates of the feature points by using the identified feature points and the depth images, as shown in fig. 4, taking the point a as an example, the calculation formula is as follows:
$$x_A=\frac{x_{A'}\,d_A}{f},\qquad y_A=\frac{y_{A'}\,d_A}{f},\qquad z_A=d_A$$

where $x_{A'}$ and $y_{A'}$ are the horizontal and vertical coordinates of the feature point in the image coordinate system (the coordinates identified by the neural network), $f$ is the camera focal length obtained by calibration, and $d_A$ is the distance measured by the TOF-based 3D camera; the formula gives the three-dimensional coordinate $(x_A, y_A, z_A)$ of point A in the camera coordinate system.
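A minimal sketch of this back-projection (treating the TOF distance reading as the depth along the optical axis, per the formula above; the function name is illustrative):

```python
def pixel_to_camera(xp, yp, d, f):
    """Back-project an image point (xp, yp) with TOF-measured
    distance d into camera coordinates under the pinhole model
    (f = focal length from camera calibration)."""
    z = d
    x = xp * z / f
    y = yp * z / f
    return x, y, z
```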
In step S6, calculating a distance between the feature points, that is, a length feature of the daylily, according to the obtained spatial coordinates of the feature points, specifically includes:
1) the calculation formula for calculating the distance between the two feature points, i.e., the distance between the point a and the point B, from the three-dimensional coordinates obtained in step S5 is as follows:
$$\mathrm{length}(A,B)=\sqrt{(x_A-x_B)^2+(y_A-y_B)^2+(z_A-z_B)^2}$$

where length(A, B) is the distance between point A and point B, i.e. the length feature of the daylily;
2) as shown in fig. 5, since the fruit of the daylily is usually curved rather than straight, 4 feature points are set, dividing the daylily into 3 segments, and the length feature of the daylily is the sum of the 3 segment distances, as sketched below. The first point is at the bottom of the daylily, the position to be snapped off during picking; the second is at the junction of the flower stalk and the bud; the third lies between the second and the fourth; the fourth is at the top of the daylily.
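A minimal sketch of this segment-sum calculation (function name and point format are illustrative):

```python
import math

def daylily_length(p1, p2, p3, p4):
    """Length feature of the daylily: the sum of the three straight
    segments between the four 3-D feature points (x, y, z) in mm."""
    return math.dist(p1, p2) + math.dist(p2, p3) + math.dist(p3, p4)
```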
In step S7, obtaining the maturity of daylily through the mapping relationship between the maturity of daylily and the length characteristics thereof, and simultaneously representing the position of daylily by using the spatial coordinates of the characteristic points to complete the identification and positioning of daylily, specifically including:
the method for researching the mapping relation between the maturity and the length characteristics of the daylily comprises the following steps of:
100 daylily plants of the same variety that have just started to grow are selected, and their lengths are measured and recorded regularly with a sampling period of 0.5 days, continuing until the daylily is mature. This finally yields 100 vectors of different dimensions. The vectors are aligned from back to front by sampling time, and the earliest entries of vectors with too many samples are deleted; the 100 values at each sampling instant are then averaged to obtain a final vector, each entry of which corresponds to one maturity level. Recording the length corresponding to each maturity, the mapping relation between maturity and length is obtained by linear interpolation.
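A minimal sketch of the resulting lookup by linear interpolation; the length and maturity values below are hypothetical placeholders, not data from the experiment:

```python
import numpy as np

lengths_mm = np.array([40.0, 60.0, 80.0, 100.0, 120.0])  # hypothetical samples
maturities = np.array([0.2, 0.4, 0.6, 0.8, 1.0])         # hypothetical levels

def estimate_maturity(length_mm):
    """Map a measured length feature to a maturity value by linear
    interpolation between the recorded (length, maturity) pairs."""
    return float(np.interp(length_mm, lengths_mm, maturities))
```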
The error produced by this embodiment consists mainly of two parts: the error from the camera resolution and the error from the depth measurement. The maximum error can be obtained from:

$$e_{\max}=\sqrt{e_x^{2}+e_y^{2}+e_z^{2}}$$

where the values of $e_x$, $e_y$ and $e_z$ are:

$$e_x=\frac{e_{x'}\,d}{f},\qquad e_y=\frac{e_{y'}\,d}{f},\qquad e_z=e_d$$

where $e_{x'}$ and $e_{y'}$ are the pixel-level errors determined by the camera resolution, $e_d$ is the error of the depth measurement, $d$ is the distance between the TOF camera and the target point, and $f$ is the focal length obtained by camera calibration.
By calculation, the maximum error of the method at different distances is shown in Table 2:
TABLE 2
Distance (mm) Maximum error (mm)
150 2.1
250 2.8
350 3.7
450 4.6
550 5.5
650 6.5
750 7.4
850 8.4
950 9.3
1050 10.3
As can be seen from Table 2, the maximum error of the length feature recognized by the invention is 10.3 mm, at a camera-to-feature-point distance of about 1 m; for the first recognition this error is within the allowable range. As the mechanical arm approaches the daylily to be picked, the recognition error gradually decreases, so the invention can guarantee the accuracy of both the recognized length feature and the positioning.
In conclusion, the daylily maturity identification method based on the convolutional neural network provided by the invention has the following advantages:
1) the improved neural network has very strong anti-noise capability; identifying the feature points of the daylily with a convolutional neural network inherits the network's robustness and high precision;
2) compared with a semantic segmentation algorithm applied directly to the whole image, predicting only the feature points of the daylily gives fast labeling and a low computing-power requirement;
3) only the feature points of the daylily are predicted and no point cloud data is generated, which reduces the demands on computer performance and can lower the production cost of the picking robot;
4) maturity is identified from the length feature of the daylily, so when used on different varieties the recognition can be switched between them simply by changing the length setting, giving the method practical significance for picking recognition and for popularization;
5) the neural network is trained on image features of the daylily over all 24 hours of the day, so recognition works well both day and night; positioning is completed during recognition, and the growth direction of the daylily can be obtained, which directly guides the picking robot in planning the picking path.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A day lily maturity identification method based on a convolutional neural network is characterized by comprising the following steps:
s1, collecting images of the daylily at night and in the day by adopting a 3D camera based on TOF, labeling feature points of the daylily in the collected images by adopting a labeling program, and storing an image data set and labels thereof;
S2, improving the convolutional neural network by modifying the output layer, activation function and loss function of the YOLOv3 neural network, wherein the output layer is modified into S × S × 24 vectors; the coordinates of feature points which are not in the image are set to (-0.1, -0.1), and the activation function is the Leaky-ReLU function:
$$f(x)=\begin{cases}x, & x\ge 0\\ \alpha x, & x<0\end{cases}$$

where $\alpha$ is a small positive slope;
the loss function is defined as:
$$
\begin{aligned}
L ={}& \lambda_{cooin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,in}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\lambda_{coonoin}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,noin}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
&+\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,obj}\big(C_i-\hat{C}_i\big)^2 \\
&+\lambda_{mask}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\,mask}\big(M_i-\hat{M}_i\big)^2
\end{aligned}
$$

where $x_i$ and $y_i$ are the coordinate predictions of the neural network, $\hat{x}_i$ and $\hat{y}_i$ the corresponding true coordinates in the dataset, $C_i$ and $\hat{C}_i$ the predicted confidence and its true value, and $M_i$ and $\hat{M}_i$ the predicted and true indicators of whether the target is occluded; $S$ is the side length of the tensor finally predicted by the model, and $B$ indexes the 3 size categories responsible for predicting the target. The first row of summed terms is the position loss for feature points in the image and the second row the position loss for feature points not in the image, with respective weights $\lambda_{cooin}$ and $\lambda_{coonoin}$; $\mathbb{1}_{ij}^{\,in}$ indicates that the predicted point is in the image and $\mathbb{1}_{ij}^{\,noin}$ that it is not. The third row is the loss on the probability that a target is present within the anchor frame, where $\mathbb{1}_{ij}^{\,obj}$ indicates that a target exists in the grid cell responsible for the prediction. The final row is the loss on the probability that the target is occluded, where $\mathbb{1}_{ij}^{\,mask}$ indicates an occluded target, with loss weight $\lambda_{mask}$.
S3, training the neural network, pre-trained on the COCO dataset, on the image dataset acquired in step S1;
s4, capturing image data and a depth map of the day lily by using a TOF-based 3D camera in the picking process, predicting feature points of the day lily by using the neural network trained in the step S3, and obtaining coordinates of the feature points in an image coordinate system;
s5, obtaining the space coordinate of the feature point by using the coordinate of the feature point in the image coordinate system and the depth image thereof;
s6, calculating the distance between the characteristic points through the obtained space coordinates of the characteristic points, namely the length characteristics of the daylily;
and S7, obtaining the maturity of the daylily through the mapping relation between the maturity of the daylily and the length characteristics of the daylily, and representing the position of the daylily by using the space coordinates of the characteristic points to finish the identification and positioning of the daylily.
2. The method for identifying the maturity of daylily based on the convolutional neural network as claimed in claim 1, wherein in step S1, the TOF-based 3D camera is used to collect images of daylily at night and in the day, the annotation program is used to label the feature points of the daylily in the collected images, and the image dataset and the labels thereof are stored, which specifically comprises:
1) when the daylily dataset is collected, the collection times are distributed uniformly, i.e. the number of pictures taken in each time period of a 24-hour day is similar;
2) when an image is collected, the height of the TOF-based 3D camera is kept between 500 mm and 1300 mm; this range covers the growth height of the daylily and is the camera mounting height used during picking;
3) randomly dividing the collected images into 3 parts: a training set, a validation set and a test set.
3. The daylily maturity identification method based on convolutional neural network of claim 2, wherein in the step S2, the convolutional neural network is constructed, and the modification of the output layer, the activation function and the loss function of the YOLOv3 neural network specifically comprises:
1) adjusting the neural network output layer:
the neural network has two outputs for predicting daylily targets of different sizes in the image, with output layer dimensions of 13 × 13 × 12 × 2 and 26 × 26 × 12 × 2 respectively; in order to initialize with the training parameters of YOLOv3, only the last 1 × 1 convolutional layer of the network is changed;
13 × 13 and 26 × 26 in the output layer dimensions are the numbers of grid cells in the two outputs. Each grid cell is responsible for predicting two targets, each represented by a ten-dimensional vector: the probability that a target exists in the cell, the probability that the target is occluded, and the coordinates of the target's four feature points. If the center point of a labeled ground-truth target falls in a cell, that cell is responsible for predicting the target, and the center point of the target is calculated as follows:
$$x_i=\frac{1}{4}\sum_{k=1}^{4}x_k,\qquad y_i=\frac{1}{4}\sum_{k=1}^{4}y_k$$

where $x_i$ and $y_i$ are the image coordinates of the center point of the target assigned to the $i$-th grid cell, obtained by averaging the coordinates $(x_k, y_k)$ of the target's four feature points, and $n$ is the side length of a grid cell, so the cell in which $(x_i, y_i)$ falls is the one responsible for the prediction;
2) expression mode of output characteristic point coordinate
The coordinates are expressed in polar form: the corner of the grid cell responsible for predicting the point is taken as the coordinate origin, the polar position of a feature point relative to this origin is (θ, r), and the coordinates are normalized to obtain the prediction label of the final output layer. The normalization is:
$$\theta'=\frac{\theta}{2\pi},\qquad r'=\frac{r}{w}$$

where $w$ is the size of the input image; the prediction label of the output layer for a feature point is $(\theta', r')$;
3) activation function of neural network
For a daylily target with a feature point not in the image, the ground-truth label marks the coordinate of that point as the polar pair (0, -0.1), so negative values must be output to predict such targets. To allow negative outputs, the activation function of the neural network is Leaky-ReLU. When the center point of a daylily target with feature points outside the image is calculated, the points not in the image are omitted and the coordinates of the remaining feature points are averaged;
4) loss function of neural network
The loss of the feature-point recognition network for the daylily comprises the target-presence probability loss, the occlusion probability loss and the feature-point position loss. The position loss is further divided into the coordinate loss of feature points in the image and the position loss of feature points not in the image; a larger weight is assigned to the in-image position loss and a smaller weight to the out-of-image position loss, finally yielding the neural network loss function;
5) network architecture
The network structure corresponds to the first two output branches of the YOLOv3 network, i.e. the outputs obtained by 32× and 16× downsampling. The output dimension of the network's last 1 × 1 convolutional layer is modified to 20, and the branch corresponding to 8× downsampling is deleted, giving the final neural network structure.
4. The method for identifying daylily maturity based on convolutional neural network as claimed in claim 3, wherein in step S3, the neural network pre-trained by using the COCO data set is trained on the image data set acquired in step S1, specifically comprising:
1) acquiring the network weights of YOLOv3 as initialization data for the neural network, which includes downloading YOLOv3 weights from an open-source website; the weights must have been trained and saved with PyTorch for target detection on the COCO dataset;
2) extracting, with PyTorch, the weights corresponding to the first two outputs among the network parameters, i.e. the outputs obtained by 32× and 16× downsampling, deleting the weights of the last 1 × 1 convolutional layer, matching the remaining weights with the established neural network model, and randomly initializing the parameters of the last convolutional layer to obtain the initialization weights of all layers of the neural network;
3) completing the training of the neural network on the training set, validating on the validation set after each full pass over the training set to prevent overfitting; training uses the Adam optimizer in PyTorch, and the hyper-parameters are set with reference to YOLOv3.
5. The method for identifying daylily maturity based on convolutional neural network as claimed in claim 4, wherein in step S4, capturing image data and depth map of daylily by using TOF-based 3D camera during picking process specifically comprises:
the method comprises the steps that a 3D camera based on TOF is installed on a wrist of a picking manipulator, three times of depth images and pictures need to be collected for recognition in the picking process, day lily to be picked and the position of the day lily are determined by taking a picture for the first time, the day lily is positioned again at the moment after a mechanical arm moves to the position close to the day lily, the day lily is taken for the third time before picking, remeasurement precision is carried out, if the position of the mechanical arm is not at a preset position, the day lily is repeatedly taken and calibrated, and if the position of the mechanical arm is right, a target day lily is picked.
6. The method for identifying the maturity of daylily based on the convolutional neural network as claimed in claim 1, wherein in step S5, the spatial coordinates of the feature points are obtained by using the coordinates of the feature points in the image coordinate system and the depth image thereof, and the method specifically comprises:
and calculating the three-dimensional coordinates of the feature points by using the identified feature points and the depth images, wherein the calculation formula takes the point A as an example and is as follows:
$$x_A=\frac{x_{A'}\,d_A}{f},\qquad y_A=\frac{y_{A'}\,d_A}{f},\qquad z_A=d_A$$

where $x_{A'}$ and $y_{A'}$ are the horizontal and vertical coordinates of the feature point in the image coordinate system (the coordinates identified by the neural network), $f$ is the camera focal length obtained by calibration, and $d_A$ is the distance measured by the TOF-based 3D camera; the formula gives the three-dimensional coordinate $(x_A, y_A, z_A)$ of point A in the camera coordinate system.
7. The method for identifying daylily maturity based on convolutional neural network as claimed in claim 6, wherein in step S6, calculating the distance between feature points, that is, the length feature of daylily, by using the obtained spatial coordinates of the feature points, specifically comprises:
1) the calculation formula for calculating the distance between the two feature points, i.e., the distance between the point a and the point B, from the three-dimensional coordinates obtained in step S5 is as follows:
$$\mathrm{length}(A,B)=\sqrt{(x_A-x_B)^2+(y_A-y_B)^2+(z_A-z_B)^2}$$

where length(A, B) is the distance between point A and point B, i.e. the length feature of the daylily;
2) 4 feature points are set, dividing the daylily into 3 segments, and the length feature of the daylily is the sum of the 3 segment distances. The first feature point is at the bottom of the daylily, the position to be snapped off during picking; the second is at the junction of the flower stalk and the bud; the third lies between the second and the fourth; the fourth is at the top of the daylily.
8. The method for identifying daylily maturity based on a convolutional neural network as claimed in claim 7, wherein in step S7, the maturity of the daylily is obtained through the mapping relation between maturity and length feature, and the spatial coordinates of the feature points are used to represent the position of the daylily, thereby completing identification and positioning, specifically comprising:
the mapping relation between the maturity of the daylily and its length feature is established as follows:
selecting 100 daylilies of the same variety that have just started to grow, and measuring and recording their lengths at regular intervals with a sampling period of 0.5 days; sampling continues until the daylilies are mature, finally yielding 100 vectors of different dimensions; the vectors are aligned from back to front according to sampling time, and the earliest length records of vectors whose sampling spans are too long are deleted; the 100 measurements at each sampling instant are averaged to obtain a final vector, each entry of which corresponds to one maturity level; the length corresponding to each maturity is recorded, and the mapping relation between maturity and length is obtained by linear interpolation.
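Once the averaged calibration vector is available, the lookup itself can be a one-line linear interpolation; a sketch with invented sample values (the real table comes from the 100-plant measurement procedure above):

```python
import numpy as np

# Illustrative calibration table only -- real values come from averaging the
# 100 sampled growth curves; maturity is normalized here to [0, 1].
lengths_mm = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
maturities = np.array([0.00, 0.25, 0.50, 0.75, 1.00])

def maturity_from_length(length_mm):
    """Map a measured daylily length to maturity by linear interpolation."""
    return float(np.interp(length_mm, lengths_mm, maturities))

print(maturity_from_length(70.0))  # -> 0.625
```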
CN202111650998.6A 2021-12-30 2021-12-30 Day lily maturity identification method based on convolutional neural network Active CN114359546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111650998.6A CN114359546B (en) 2021-12-30 2021-12-30 Day lily maturity identification method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN114359546A (en) 2022-04-15
CN114359546B (en) 2024-03-26

Family

ID=81104152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111650998.6A Active CN114359546B (en) 2021-12-30 2021-12-30 Day lily maturity identification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN114359546B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN110930387A (en) * 2019-11-21 2020-03-27 中原工学院 Fabric defect detection method based on depth separable convolutional neural network
AU2020100705A4 (en) * 2020-05-05 2020-06-18 Chang, Jiaying Miss A helmet detection method with lightweight backbone based on yolov3 network
US20210390723A1 (en) * 2020-06-15 2021-12-16 Dalian University Of Technology Monocular unsupervised depth estimation method based on contextual attention mechanism
CN112446388A (en) * 2020-12-05 2021-03-05 天津职业技术师范大学(中国职业培训指导教师进修中心) Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YANG JINGYA; LI JINGXIA; WANG ZHENYU; CHENG HAI: "Flower species recognition based on convolutional neural networks", Journal of Engineering of Heilongjiang University, no. 04, 25 December 2019 (2019-12-25) *
GUO ZIYAN; SHU XIN; LIU CHANGYAN; LI LEI: "A flower recognition algorithm for convolutional neural networks based on the ReLU function", Computer Technology and Development, no. 05, 8 February 2018 (2018-02-08) *
MA SHICHAO; SUN LEI; HE HONG; GUO YANHUA: "A robot grasping system based on regions of interest", Science Technology and Engineering, no. 11, 18 April 2020 (2020-04-18) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690521A (en) * 2022-11-30 2023-02-03 仲恺农业工程学院 Cabbage mustard maturity identification method
CN116267226A (en) * 2023-05-16 2023-06-23 四川省农业机械研究设计院 Mulberry picking method and device based on intelligent machine vision recognition of maturity
CN117094997A (en) * 2023-10-18 2023-11-21 深圳市睿阳精视科技有限公司 Flower opening degree detection and evaluation method
CN117094997B (en) * 2023-10-18 2024-02-02 深圳市睿阳精视科技有限公司 Flower opening degree detection and evaluation method
CN117273869A (en) * 2023-11-21 2023-12-22 安徽农业大学 Intelligent agricultural product pushing method, system, device and medium based on user data
CN117273869B (en) * 2023-11-21 2024-02-13 安徽农业大学 Intelligent agricultural product pushing method, system, device and medium based on user data

Also Published As

Publication number Publication date
CN114359546B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN114359546B (en) Day lily maturity identification method based on convolutional neural network
CN111795704B (en) Method and device for constructing visual point cloud map
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
CN109934121B (en) Orchard pedestrian detection method based on YOLOv3 algorithm
Sun et al. Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
Gongal et al. Sensors and systems for fruit detection and localization: A review
CN113112504B (en) Plant point cloud data segmentation method and system
Wang et al. YOLOv3‐Litchi Detection Method of Densely Distributed Litchi in Large Vision Scenes
CN109255302A (en) Object recognition methods and terminal, mobile device control method and terminal
Rong et al. A peduncle detection method of tomato for autonomous harvesting
Zhang et al. 3D monitoring for plant growth parameters in field with a single camera by multi-view approach
Xiang et al. Field‐based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks
CN118096891B (en) Tea bud and leaf pose estimation method and system based on picking robot
US20240054776A1 (en) Tracking objects with changing appearances
Magistri et al. Towards in-field phenotyping exploiting differentiable rendering with self-consistency loss
CN113932712B (en) Melon and fruit vegetable size measurement method based on depth camera and key points
Rao Design of automatic cotton picking robot with Machine vision using Image Processing algorithms
Sapkota et al. Immature green apple detection and sizing in commercial orchards using YOLOv8 and shape fitting techniques
US20230133026A1 (en) Sparse and/or dense depth estimation from stereoscopic imaging
Patel et al. Deep Learning-Based Plant Organ Segmentation and Phenotyping of Sorghum Plants Using LiDAR Point Cloud
Badeka et al. Harvest crate detection for grapes harvesting robot based on YOLOv3 model
He et al. Visual recognition and location algorithm based on optimized YOLOv3 detector and RGB depth camera
Li et al. Nondestructive Detection of Key Phenotypes for the Canopy of the Watermelon Plug Seedlings Based on Deep Learning
Chu et al. High-precision fruit localization using active laser-camera scanning: Robust laser line extraction for 2D-3D transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant