CN111462044B - Greenhouse strawberry detection and maturity evaluation method based on deep learning model - Google Patents

Greenhouse strawberry detection and maturity evaluation method based on deep learning model

Info

Publication number
CN111462044B
CN111462044B (application CN202010147075.8A)
Authority
CN
China
Prior art keywords
strawberry
image
segmentation
maturity
carrying
Prior art date
Legal status
Active
Application number
CN202010147075.8A
Other languages
Chinese (zh)
Other versions
CN111462044A (en)
Inventor
周成全
叶宏宝
徐志福
华珊
许敏界
韩恺源
Current Assignee
Zhejiang Academy of Agricultural Sciences
Original Assignee
Zhejiang Academy of Agricultural Sciences
Priority date
Filing date
Publication date
Application filed by Zhejiang Academy of Agricultural Sciences
Priority to CN202010147075.8A
Publication of CN111462044A
Application granted
Publication of CN111462044B
Legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/006: Artificial life, i.e. computing arrangements simulating life, based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06Q 50/02: Agriculture; Fishing; Mining
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation involving thresholding
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30188: Vegetation; Agriculture
    • Y02A 40/25: Greenhouse technology, e.g. cooling systems therefor

Abstract

The invention discloses a greenhouse strawberry detection and maturity evaluation method based on a deep learning model, which comprises the following steps: photographing a strawberry field plot from the left and right directions with an image acquisition system mounted on a field mobile platform to obtain wide-angle field photographs, which form an original image set; preprocessing the images in the original image set to obtain a training data set; inputting the training data set into an improved ResNet network for training to obtain a strawberry segmentation model; inputting test data into the trained strawberry segmentation model to obtain segmentation result maps; and searching for the optimal threshold with a particle swarm optimization algorithm (PSOA) combined with the Otsu algorithm, performing secondary segmentation of the strawberry fruit regions, and thereby realizing maturity analysis. The method combines machine vision and deep learning, analyzes ortho-images acquired by a self-designed image acquisition platform, accurately counts the strawberries and judges their maturity, and shows good accuracy and strong robustness.

Description

Greenhouse strawberry detection and maturity evaluation method based on deep learning model
Technical Field
The invention relates to the technical field of intelligent agriculture, in particular to a greenhouse strawberry detection and maturity evaluation method based on a deep learning model.
Background
Strawberry is a perennial evergreen herbaceous plant of the genus Fragaria in the family Rosaceae, and is known as the "queen of fruits" for its delicious taste and high nutritional value. China is a major strawberry producer and consumer, and its planting area and yield currently rank first in the world. In the past, strawberry yield and maturity estimation relied mainly on manual labor, i.e., agricultural workers periodically counted fruits and measured related parameters in the field. However, traditional manual field surveys are inefficient, highly subjective, and cannot provide real-time data. At the same time, urbanization has sharply reduced the agricultural population, greatly increasing the cost of manual surveys. At present, computer vision technology is widely applied to fruit extraction and analysis in complex field environments to realize dynamic monitoring of crop growth; image segmentation is the prerequisite and key step of such computer vision pipelines, and its accuracy directly affects the accuracy and efficiency of subsequent work. Methods for accurately segmenting target fruits in natural environments mainly include the following:
(1) Color space transformation and threshold segmentation based method
Because fruits, leaves and the soil background differ considerably in color, segmenting and extracting plant organs from color features is feasible to a certain extent. The core idea of this method is to find the channels with the largest differences in color spaces such as RGB, Lab and HSV, and to perform threshold segmentation on those channels. However, such algorithms lack robustness under dynamic illumination, varying soil reflectance and similar conditions, and cannot reliably separate the target in scenes captured in natural environments.
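As an illustration of this class of methods, the following is a minimal sketch of HSV-based threshold segmentation using OpenCV. The hue, saturation and value ranges are illustrative assumptions and are not taken from the patent.

```python
# A minimal sketch of the colour-space + threshold approach described above.
# The HSV ranges below are illustrative assumptions, not values from the patent.
import cv2
import numpy as np

def threshold_red_fruit(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of red-ish pixels via HSV thresholding."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in OpenCV's 0-179 hue range, so two ranges are combined.
    mask_low = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    mask_high = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    mask = cv2.bitwise_or(mask_low, mask_high)
    # Simple morphological opening to suppress speckle from soil/leaf reflections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```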
(2) Machine learning method based on shallow model
A shallow model relies on manually screened and extracted features of the fruit and background regions, such as color, texture and shape, to generate a high-order feature matrix that can distinguish the two, and a classifier is trained on these features to obtain the segmentation model. In practical applications, shallow models often suffer from incomplete expression of the target features and from the separation of feature extraction and detection, which degrades performance.
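For comparison, the following is a sketch of such a shallow-model pipeline, in which hand-crafted color and texture statistics are fed to a support vector machine. The specific feature choices and the availability of labelled fruit/background patches are assumptions made here for illustration only.

```python
# Sketch of a shallow-model pipeline: hand-crafted colour/texture features + SVM.
# Feature choices and the labelled patches are assumptions for illustration.
import cv2
import numpy as np
from sklearn.svm import SVC

def patch_features(bgr_patch: np.ndarray) -> np.ndarray:
    """Concatenate simple colour statistics and a gradient-based texture cue."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2LAB)
    gray = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2GRAY)
    grad = cv2.Laplacian(gray, cv2.CV_64F)
    feats = []
    for img in (hsv, lab):
        feats += [img.reshape(-1, 3).mean(axis=0), img.reshape(-1, 3).std(axis=0)]
    feats.append(np.array([grad.mean(), grad.std()]))
    return np.concatenate(feats)

def train_shallow_classifier(patches, labels):
    """patches: list of BGR arrays; labels: 1 = fruit, 0 = background (assumed)."""
    X = np.stack([patch_features(p) for p in patches])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, labels)
    return clf
```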
In recent years, the rise of deep learning has provided better technical means for fruit segmentation against complex backgrounds. Driven by large data sets, deep models can discriminate high-dimensional data without hand-specified target features and offer strong information-processing capabilities.
In addition to accurately locating and segmenting the fruit regions, monitoring strawberry growth also requires estimating maturity from the ratio of yellow to red color on the fruit surface. Because strawberry is a crop that must be harvested selectively, the maturity of individual fruits on the same plot can vary greatly. Accurately evaluating the maturity of each strawberry is therefore the key to improving harvesting efficiency and ensuring harvest quality.
Therefore, those skilled in the art are dedicated to developing a greenhouse strawberry detection and maturity evaluation method based on a deep learning model. The method combines machine vision and deep learning: an independently developed ground platform acquires the images, an improved ResNet network is trained on labeled data, and maturity analysis is realized with a particle swarm optimization algorithm (PSOA) and the Otsu algorithm. The method achieves good accuracy and strong robustness, and provides a useful reference for future phenotypic analysis of field-grown strawberries.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is how to provide a greenhouse strawberry detection and maturity evaluation method based on a deep learning model that analyzes ortho-images obtained by a self-designed image acquisition platform, accurately counts the strawberries and judges their maturity, with good accuracy and strong robustness.
In order to achieve the purpose, the invention provides a greenhouse strawberry detection and maturity evaluation method based on a deep learning model, which is characterized by comprising the following steps of:
step 1, photographing a strawberry field plot from the left and right directions with an image acquisition system mounted on a field mobile platform to obtain wide-angle field photographs, which form an original image set;
step 2, carrying out image preprocessing on the images in the original image set to obtain a training data set;
step 3, inputting the training data set into an improved ResNet network for training and learning to obtain a strawberry segmentation model;
step 4, inputting test data into the trained strawberry segmentation model to obtain a segmentation result graph;
step 5, searching for the optimal threshold with a particle swarm optimization algorithm (PSOA) and the Otsu algorithm, performing secondary segmentation of the strawberry fruit regions, and realizing maturity analysis.
Further, in the step 1, the field mobile platform includes a wheel-type base, a three-degree-of-freedom support and an automatic control device, the image acquisition system includes two industrial cameras and a workstation, the three-degree-of-freedom support is disposed on the wheel-type base, the industrial cameras are fixedly mounted on the three-degree-of-freedom support, the industrial cameras are in communication connection with the workstation, and the automatic control device is configured to implement automatic and synchronous shooting of the industrial cameras and control the three-degree-of-freedom support to perform lifting operation.
Further, the industrial camera in step 1 is of model MV-SUF1200M-T, which uses a 1-inch CMOS sensor with a resolution of 12 megapixels and a frame rate of 30.5 FPS, and all captured images are stored in JPEG format.
Further, the image preprocessing in the step 2 specifically includes the following steps:
step 2.1, image cutting is carried out on the image in the original image set to obtain a sub-image sample;
step 2.2, carrying out image augmentation operation on the sub-image sample to obtain a sample to be marked;
and 2.3, carrying out sample marking on the sample to be marked to obtain a training data set.
Further, the sub-image samples in step 2.1 have a size of 500 × 500 pixels.
Further, the image augmentation operations in step 2.2 include vertical flipping, rotation, scaling, and the addition of salt-and-pepper noise.
Further, the sample labeling in step 2.3 uses the LabelImg tool.
Further, the training data set in step 2.3 is in PASCAL VOC 2007 format.
Further, the specific implementation manner of the improved ResNet network in step 3 is as follows:
step 3.1, embedding the SE module into a ResNet network;
step 3.2, performing the squeeze (traversal) operation with global pooling, then forming a bottleneck structure with two fully connected layers to model the correlation among channels and output as many weights as there are input feature channels;
step 3.3, reducing the feature dimension to 1/16 of its original size, applying a ReLU activation, and feeding the result into a fully connected layer that restores the original dimension;
step 3.4, carrying out normalization operation through a Sigmoid function to obtain a weight value between 0 and 1;
step 3.5, applying the weights to the corresponding feature maps through a Scale operation.
Further, the step 5 of performing the optimal threshold search by using the PSOA technique and the Otsu algorithm specifically includes the following steps:
step 5.1, uniformly placing N particles along the one-dimensional gray-level axis;
step 5.2, calculating the between-class variance corresponding to each particle's gray value to obtain the maximum variance value σ²_max;
Step 5.3, updating the speed and the position of the particles according to a PSOA algorithm, and repeatedly iterating to realize threshold value optimization;
step 5.4, after the maximum number of iterations is reached, taking the gray value corresponding to the maximum variance σ²_max found during the iterations as the optimal segmentation threshold of the image.
The beneficial effects of the invention are:
1. Based on the traditional ResNet architecture, an improved ResNet network suitable for real-time field analysis of strawberries is designed, and the number of network parameters is reduced by the added SE module, so that the weights among the feature maps are balanced and the training speed is accelerated;
2. On the basis of existing strawberry maturity evaluation standards, a new strawberry maturity grading standard is proposed in combination with fruit trade practice. Based on this standard, an improved Otsu algorithm driven by a particle swarm model estimates the area ratios of the differently colored regions of each strawberry and maps these ratios onto the new grading standard, so that fruit quality can be judged before harvest and technical support provided for automated harvesting.
3. Compared with traditional ResNet, GoogLeNet and color-space transformation methods, the evaluation indexes of the method are greatly improved, and the problems of low training efficiency and poor robustness of traditional methods are effectively alleviated.
Drawings
FIG. 1 is a system diagram of a preferred embodiment of the present invention;
FIG. 2 is a technical roadmap of a preferred embodiment of the invention;
FIG. 3 is a ResNet base module and a SE-ResNet base module in accordance with a preferred embodiment of the present invention;
FIG. 4 is a PSO-Otsu algorithm flow of a preferred embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings for clarity and understanding of technical contents. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
The invention provides a high-throughput acquisition method for strawberry growth information that combines machine vision and deep learning. First, canopy ortho-images covering several growth periods are acquired with an independently developed ground platform; the labeled data are then used to train an improved ResNet network, establishing a fast, high-throughput strawberry segmentation model; finally, on the basis of the model segmentation, the fruit regions are segmented a second time with a particle swarm optimization algorithm (PSOA) combined with the Otsu algorithm to realize maturity analysis. Experimental results show that the method achieves good accuracy and strong robustness, and provides a useful reference for future phenotypic analysis of field strawberries.
The specific implementation method comprises the following steps:
1. research and development of experimental platform
1.1 field moving platform
As shown in fig. 1, the field mobile platform FieldScan Pro comprises a wheeled base 1, a three-degree-of-freedom support 2 and an automatic control device. The wheeled base 1 adopts a centralized control design: a lithium battery mounted at the rear supplies power, and four 24 V DC servo motors are installed at the front and rear wheels, whose driving and steering are controlled by changing the signal voltage; an eddy-current retarder at the rear wheels provides auxiliary braking while driving. The three-degree-of-freedom support 2 is a rigid aluminum-alloy device that can be adjusted and fixed at any position in the X, Y and Z directions; together with a camera mount with a damping function, it enables continuous, stable shooting of field strawberry pictures. The automatic control device is built around a programmable logic controller (PLC) with five signal outputs: the Y4 port, together with a relay, triggers the shutters of the industrial cameras for automatic, synchronized shooting, and the Y0-Y3 ports output two pulse-signal channels to control the lifting of the shooting support.
1.2 image acquisition System
The image acquisition system consists of two high-speed industrial cameras 3 and a high-performance mobile workstation 4 (fig. 1). The industrial camera 3 is a model MV-SUF1200M-T developed by BASLER of Germany, which uses a 1-inch CMOS sensor with a resolution of 12 megapixels and a frame rate of 30.5 FPS; all captured images are stored in JPEG format. The acquired image signals are transmitted to the workstation 4 through a USB 3.0 Micro-B interface. The workstation 4 is pre-loaded with the pylon Camera Software Suite (provided by BASLER) and allows real-time viewing of the captured images for quality assurance.
The canopy ortho-images covering several growth periods are obtained with this independently developed ground platform: the two MV-SUF1200M-T high-speed industrial cameras 3 mounted on the FieldScan Pro field mobile platform photograph the strawberry 5 plot from the left and right directions respectively, which yields wide-angle field photographs and increases the amount of image information. The platform has a certain degree of vibration resistance and walking capability and supports continuous field operation.
2. Segmentation model
2.1 image preprocessing
As shown in fig. 2, the performance of an image segmentation algorithm based on a convolutional neural network 5 depends mainly on the size, quality and diversity of the training data set, i.e., on whether the training data cover a sufficient number of real scenes or similar environments. However, obtaining strawberry pictures under sufficiently varied conditions consumes a great deal of labor and time. To mitigate the overfitting caused by sample scarcity and improve the robustness of the trained network to object variation, the data set is further expanded with data augmentation techniques such as translation, scaling, rotation and noise addition. To improve training efficiency and reduce GPU memory usage, the 240 original images 1 are cut into 500 × 500 pixel sub-image samples 2 with Photoshop CS6 software, and these samples are augmented by vertical flipping, rotation, scaling, salt-and-pepper noise addition and the like, finally yielding 2131 samples to be labeled 3.
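The following sketch illustrates the tiling and augmentation steps described above (500 × 500 crops, vertical flipping, rotation, scaling and salt-and-pepper noise) with NumPy and OpenCV. Parameter values such as the rotation angle, scaling factor and noise ratio are illustrative assumptions rather than values specified in the patent.

```python
# Sketch of the tiling and augmentation steps described above. The rotation
# angle, scaling factor and noise ratio are illustrative assumptions.
import cv2
import numpy as np

def tile_image(image: np.ndarray, size: int = 500):
    """Cut an image into non-overlapping size x size sub-image samples."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def add_salt_pepper(image: np.ndarray, ratio: float = 0.01) -> np.ndarray:
    """Set a random subset of pixels to black (pepper) or white (salt)."""
    noisy = image.copy()
    n = int(ratio * image.shape[0] * image.shape[1])
    ys = np.random.randint(0, image.shape[0], n)
    xs = np.random.randint(0, image.shape[1], n)
    noisy[ys[: n // 2], xs[: n // 2]] = 0
    noisy[ys[n // 2:], xs[n // 2:]] = 255
    return noisy

def augment(sample: np.ndarray):
    """Yield augmented variants of one sub-image sample."""
    h, w = sample.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)   # 15-degree rotation
    yield cv2.flip(sample, 0)                                # vertical flip
    yield cv2.warpAffine(sample, rot, (w, h))                # rotation
    yield cv2.resize(sample, None, fx=1.2, fy=1.2)           # scaling
    yield add_salt_pepper(sample)                            # salt-and-pepper noise
```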
Of the augmented samples, 2000 samples to be labeled 3 are annotated with the LabelImg tool, finally yielding the training data set 4; in this embodiment the data set is in PASCAL VOC 2007 format.
2.2 strawberry segmentation model based on improved ResNet
In the process of information transmission, traditional convolutional networks suffer from information loss, and gradient vanishing or gradient explosion also occurs, which limits the depth of the trainable network. To solve this problem, Dr. Kaiming He et al. of Microsoft Research proposed the deep residual network (ResNet) in 2015. The model accelerates network training while improving accuracy and generalizing well. However, the filters of the convolutional layers of the classical ResNet act locally, and the resulting feature maps are treated as independent and equally weighted. In fact, the importance of these feature maps differs, and treating them equally affects the accuracy of the training results. By adding a Squeeze-and-Excitation (SE) module, the features can be re-weighted according to their importance, with global information used as the criterion for measuring the importance of individual features. As shown in fig. 3, the SE module is embedded in the ResNet network. Traversal (Squeeze) is performed using global average pooling, and then a bottleneck structure composed of two fully connected (FC) layers models the correlation between channels while outputting as many weights as there are input feature channels. The feature dimension is reduced to 1/16 of its original size, activated by a ReLU function, and fed into a fully connected layer that restores the original dimension. Finally, a Sigmoid function normalizes the result to weight values between 0 and 1, and a Scale operation applies these weights to the corresponding feature maps.
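A minimal PyTorch sketch of the SE block described above is given below: global average pooling (Squeeze), a two-layer fully connected bottleneck that reduces the channel dimension to 1/16, ReLU, restoration of the original dimension, Sigmoid normalization and channel-wise re-weighting (Scale). How the block is wired into a particular ResNet stage, and all training details, are omitted; this is a sketch under those assumptions, not the patented implementation itself.

```python
# Minimal sketch of an SE block: Squeeze (global average pooling), a two-layer
# FC bottleneck with reduction 16, ReLU, Sigmoid, and channel-wise Scale.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # Squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),   # reduce to 1/16
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),   # restore original dimension
            nn.Sigmoid(),                                 # weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # Scale: re-weight each feature map
```

In a typical SE-ResNet basic block, such a module is applied to the output of the residual branch before the skip-connection addition, which is consistent with the SE-ResNet basic module referred to in FIG. 3.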
3. Strawberry maturity analysis based on PSOA and Otsu algorithms
Distinguishing and grading strawberry maturity is of great significance for evaluating harvest quality and improving trade value. Through machine vision, the yellow-red color area ratio of a strawberry can be obtained from the image, and the maturity of the fruit can be evaluated and graded accordingly. The classical Otsu algorithm for binarizing pixels computes the between-class variance σ² of the background and target regions and takes the gray value that maximizes σ² as the optimal segmentation threshold. However, the Otsu algorithm must traverse all gray levels to determine the threshold; for complex images or large numbers of images this is extremely computationally expensive and can hardly meet real-time processing requirements. To overcome the drawbacks of the Otsu algorithm, such as its heavy computation and high time complexity, the optimal threshold search is performed on the basis of the traditional Otsu algorithm with a Particle Swarm Optimization Algorithm (PSOA):
(1) Uniformly placing N particles along the one-dimensional gray-level axis;
(2) Calculating the between-class variance corresponding to each particle's gray value to obtain the maximum variance value σ²_max;
(3) Updating the speed and the position of the particles according to a PSOA algorithm, and repeatedly iterating to realize threshold value optimization;
(4) After the maximum number of iterations is reached, the gray value corresponding to the maximum variance σ²_max found during the iterations is taken as the optimal segmentation threshold of the image.
The PSO-Otsu algorithm flow is shown in FIG. 4.
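In standard notation (not reproduced from the patent's formula images), the quantity that both the classical Otsu search and the PSO-accelerated variant maximize is the between-class variance:

```latex
% Between-class variance maximised by Otsu's method, in standard notation.
% For a candidate threshold t with class probabilities \omega_0(t), \omega_1(t)
% and class mean gray levels \mu_0(t), \mu_1(t):
\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl[\mu_0(t)-\mu_1(t)\bigr]^2,
\qquad t^{\ast} = \arg\max_{0 \le t < L} \sigma_B^2(t)
```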
As shown in fig. 2, a segmentation result 7 is obtained on the basis of the segmentation model 6, and the fruit part is secondarily segmented by using the PSOA technique and Otsu algorithm, thereby implementing maturity analysis 8.
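The following is a compact, runnable sketch of the PSO-accelerated Otsu search outlined in steps (1)-(4) above. The swarm parameters (particle count, inertia weight, acceleration coefficients and iteration budget) are illustrative assumptions, not values taken from the patent.

```python
# Compact sketch of a PSO-accelerated Otsu threshold search. Swarm parameters
# are illustrative assumptions, not values specified in the patent.
import numpy as np

def between_class_variance(hist: np.ndarray, t: int) -> float:
    """Otsu's between-class variance for threshold t over a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
    mu1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_otsu(gray: np.ndarray, n_particles: int = 20, n_iter: int = 30) -> int:
    """Search the gray-level axis for the threshold maximizing the variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    pos = np.random.uniform(0, 255, n_particles)      # particles spread over gray levels
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(hist, int(t)) for t in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(n_iter):
        r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        vals = np.array([between_class_variance(hist, int(t)) for t in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]
    return int(gbest)                                  # optimal segmentation threshold
```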
The strawberry maturity is divided into 5 grades according to the yellow-red ratio of the surface, and the specific division standard is as shown in table 1:
TABLE 1 evaluation index of strawberry maturity
(Table 1 is reproduced only as an image in the original publication; it maps the five maturity grades to ranges of the surface yellow-red area ratio.)
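As an illustration of the grading step, the following sketch estimates the red (ripe) fraction of a segmented fruit's surface in HSV space and maps it to one of five grades. Because Table 1 appears only as an image in the original publication, the HSV ranges and grade boundaries used here are illustrative assumptions rather than the patent's actual thresholds.

```python
# Sketch of maturity grading from the red vs. non-red surface area ratio of a
# segmented fruit. HSV ranges and grade boundaries are illustrative assumptions.
import cv2
import numpy as np

def ripeness_ratio(bgr_image: np.ndarray, fruit_mask: np.ndarray) -> float:
    """Fraction of fruit pixels that are red, within the given binary fruit mask."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    red = cv2.bitwise_or(cv2.inRange(hsv, (0, 70, 50), (10, 255, 255)),
                         cv2.inRange(hsv, (170, 70, 50), (179, 255, 255)))
    fruit_pixels = cv2.countNonZero(fruit_mask)
    if fruit_pixels == 0:
        return 0.0
    red_pixels = cv2.countNonZero(cv2.bitwise_and(red, fruit_mask))
    return red_pixels / fruit_pixels

def maturity_grade(ratio: float) -> int:
    """Map the red-area ratio to one of five grades (boundaries are placeholders)."""
    bounds = [0.2, 0.4, 0.6, 0.8]                 # hypothetical grade boundaries
    return 1 + sum(ratio >= b for b in bounds)    # grade 1 (unripe) ... 5 (fully ripe)
```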
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (6)

1. A greenhouse strawberry detection and maturity evaluation method based on a deep learning model is characterized by comprising the following steps:
step 1, photographing a strawberry field plot from the left and right directions with an image acquisition system mounted on a field mobile platform to obtain wide-angle field photographs, which form an original image set;
step 2, carrying out image preprocessing on the images in the original image set to obtain a training data set;
step 3, inputting the training data set into an improved ResNet network for training and learning to obtain a strawberry segmentation model;
step 4, inputting test data into the trained strawberry segmentation model to obtain a segmentation result graph;
step 5, searching for the optimal threshold with a PSOA technique and an Otsu algorithm, and performing secondary segmentation of the strawberry fruit regions to realize maturity analysis;
the image preprocessing in the step 2 specifically comprises the following steps:
step 2.1, image cutting is carried out on the image in the original image set to obtain a sub-image sample;
step 2.2, carrying out image augmentation operation on the sub-image sample to obtain a sample to be marked;
step 2.3, carrying out sample marking on the sample to be marked to obtain a training data set;
the image augmentation operations in step 2.2 comprise vertical flipping, rotation, scaling and salt-and-pepper noise addition;
the specific implementation manner of the improved ResNet network in the step 3 is as follows:
step 3.1, embedding the SE module into a ResNet network;
step 3.2, performing the squeeze (traversal) operation with global pooling, then forming a bottleneck structure with two fully connected layers to model the correlation among channels and output as many weights as there are input feature channels;
step 3.3, reducing the feature dimension to 1/16 of its original size, applying a ReLU activation, and feeding the result into a fully connected layer that restores the original dimension;
step 3.4, carrying out normalization operation through a Sigmoid function to obtain a weight value between 0 and 1;
step 3.5, applying the weights to the corresponding feature maps through a Scale operation;
the step 5 of performing the optimal threshold search by using the PSOA technique and the Otsu algorithm specifically includes the following steps:
step 5.1, uniformly placing N particles along the one-dimensional gray-level axis;
step 5.2, calculating the between-class variance corresponding to each particle's gray value to obtain the maximum variance value σ²_max;
Step 5.3, updating the speed and the position of the particles according to a PSOA algorithm, and repeatedly iterating to realize threshold value optimization;
step 5.4, after the maximum number of iterations is reached, taking the gray value corresponding to the maximum variance σ²_max found during the iterations as the optimal segmentation threshold of the image.
2. The method as claimed in claim 1, wherein in step 1, the field mobile platform includes a wheel-type base, a three-degree-of-freedom bracket and an automatic control device, the image acquisition system includes two industrial cameras and a workstation, the three-degree-of-freedom bracket is disposed on the wheel-type base, the industrial cameras are fixedly mounted on the three-degree-of-freedom bracket and are in communication connection with the workstation, and the automatic control device is configured to trigger automatic, synchronized shooting by the industrial cameras and to control the lifting operation of the three-degree-of-freedom bracket.
3. The greenhouse strawberry detection and maturity evaluation method based on a deep learning model as claimed in claim 2, wherein the industrial camera in step 1 is of model MV-SUF1200M-T, which uses a 1-inch CMOS sensor with a resolution of 12 megapixels and a frame rate of 30.5 FPS, and all captured images are stored in JPEG format.
4. The deep learning model-based greenhouse strawberry detection and maturity evaluation method of claim 1, wherein the sub-image samples in step 2.1 have a size of 500 × 500 pixels.
5. The deep learning model-based greenhouse strawberry detection and maturity evaluation method of claim 1, wherein the sample labeling in step 2.3 uses the LabelImg tool.
6. The deep learning model-based greenhouse strawberry detection and maturity evaluation method of claim 1, wherein the training data set in step 2.3 is in PASCAL VOC 2007 format.
CN202010147075.8A 2020-03-05 2020-03-05 Greenhouse strawberry detection and maturity evaluation method based on deep learning model Active CN111462044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010147075.8A CN111462044B (en) 2020-03-05 2020-03-05 Greenhouse strawberry detection and maturity evaluation method based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010147075.8A CN111462044B (en) 2020-03-05 2020-03-05 Greenhouse strawberry detection and maturity evaluation method based on deep learning model

Publications (2)

Publication Number Publication Date
CN111462044A CN111462044A (en) 2020-07-28
CN111462044B (en) 2022-11-22

Family

ID=71685561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010147075.8A Active CN111462044B (en) 2020-03-05 2020-03-05 Greenhouse strawberry detection and maturity evaluation method based on deep learning model

Country Status (1)

Country Link
CN (1) CN111462044B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232442B (en) * 2020-11-19 2023-06-20 青岛海尔智能技术研发有限公司 Real-time pizza maturity estimation algorithm
CN112734005B (en) * 2020-12-31 2022-04-01 北京达佳互联信息技术有限公司 Method and device for determining prediction model, electronic equipment and storage medium
CN113295690A (en) * 2021-05-17 2021-08-24 福州大学 Strawberry maturity rapid discrimination method based on machine learning
CN113569470B (en) * 2021-07-16 2024-04-05 西安工业大学 Fruit and vegetable respiration rate model parameter estimation method based on improved particle swarm optimization
CN114112932A (en) * 2021-11-08 2022-03-01 南京林业大学 Hyperspectral detection method and sorting equipment for maturity of oil-tea camellia fruits based on deep learning
CN113963239B (en) * 2021-12-23 2022-03-29 北京林业大学 Method for intelligently detecting maturity of camellia oleifera fruits

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573303A (en) * 2018-04-25 2018-09-25 北京航空航天大学 It is a kind of that recovery policy is improved based on the complex network local failure for improving intensified learning certainly
CN108986106A (en) * 2017-12-15 2018-12-11 浙江中医药大学 Retinal vessel automatic division method towards glaucoma clinical diagnosis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106239531A (en) * 2016-09-20 2016-12-21 华南理工大学 A kind of telepresence mutual robot of movable type
CN110529186B (en) * 2019-09-11 2021-03-30 上海同岩土木工程科技股份有限公司 Tunnel structure water leakage accurate identification device and method based on infrared thermal imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986106A (en) * 2017-12-15 2018-12-11 浙江中医药大学 Retinal vessel automatic division method towards glaucoma clinical diagnosis
CN108573303A (en) * 2018-04-25 2018-09-25 北京航空航天大学 It is a kind of that recovery policy is improved based on the complex network local failure for improving intensified learning certainly

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fast crop image segmentation based on an improved OTSU algorithm; Bai Yuanming et al.; Jiangsu Agricultural Sciences, No. 24, pp. 231-236 *

Also Published As

Publication number Publication date
CN111462044A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462044B (en) Greenhouse strawberry detection and maturity evaluation method based on deep learning model
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
Liu et al. Improved kiwifruit detection using pre-trained VGG16 with RGB and NIR information fusion
CN105718945B (en) Apple picking robot night image recognition method based on watershed and neural network
CN111460903B (en) System and method for monitoring growth of field broccoli based on deep learning
CN108596102B (en) RGB-D-based indoor scene object segmentation classifier construction method
CN110765916B (en) Farmland seedling ridge identification method and system based on semantics and example segmentation
Wang et al. Deep learning approach for apple edge detection to remotely monitor apple growth in orchards
CN111507275B (en) Video data time sequence information extraction method and device based on deep learning
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN110399840A (en) A kind of quick lawn semantic segmentation and boundary detection method
CN114359727A (en) Tea disease identification method and system based on lightweight optimization Yolo v4
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN113139489A (en) Crowd counting method and system based on background extraction and multi-scale fusion network
CN115019302A (en) Improved YOLOX target detection model construction method and application thereof
CN114067207A (en) Vegetable seedling field weed detection method based on deep learning and image processing
CN115984698A (en) Litchi fruit growing period identification method based on improved YOLOv5
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN113191334B (en) Plant canopy dense leaf counting method based on improved CenterNet
Fernando et al. Ai based greenhouse farming support system with robotic monitoring
Zhong et al. Identification and depth localization of clustered pod pepper based on improved Faster R-CNN
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
CN117456358A (en) Method for detecting plant diseases and insect pests based on YOLOv5 neural network
CN110853058B (en) High-resolution remote sensing image road extraction method based on visual saliency detection
CN115861768A (en) Honeysuckle target detection and picking point positioning method based on improved YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant