CN111709427B - Fruit segmentation method based on sparse convolution kernel - Google Patents


Info

Publication number
CN111709427B
Authority
CN
China
Prior art keywords
convolution kernel
fruit
sparse convolution
sparse
sample
Prior art date
Legal status
Active
Application number
CN202010458491.XA
Other languages
Chinese (zh)
Other versions
CN111709427A (en)
Inventor
刘晓洋 (Liu Xiaoyang)
张青春 (Zhang Qingchun)
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202010458491.XA priority Critical patent/CN111709427B/en
Publication of CN111709427A publication Critical patent/CN111709427A/en
Application granted granted Critical
Publication of CN111709427B publication Critical patent/CN111709427B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention discloses a fruit segmentation method based on sparse convolution kernels. The method analyzes the discrimination of the main objects in fruit images under different color factors and selects suitable color channels to reconstruct the image; it then proposes a sparse convolution kernel construction in which the elements are spaced apart and non-adjacent, determines the kernel elements with a linear classifier, and finally performs the convolution operation on the reconstructed image with the sparse kernels, thereby realizing fruit segmentation.

Description

Fruit segmentation method based on sparse convolution kernel
Technical Field
The invention relates to the field of computer vision and agricultural engineering, in particular to an image segmentation method for field fruit identification, and specifically relates to a fruit segmentation method based on sparse convolution kernels.
Background
Fruit identification is an important step toward automated fruit picking and intelligent yield estimation. Color is the most direct visual attribute of fruit and one of the most easily extracted image features, so it is widely used in fruit identification, especially for fruits whose color differs strongly from the background, such as apples, tomatoes, and oranges.
Researchers have proposed using the a component of the Lab color space to identify mature oranges, and the R-G color operator to identify mature apples or tomatoes. To further improve segmentation, combinations of several color factors have been proposed, such as segmenting ripe apples with the H and S components of the HSI color space, and segmenting tomatoes of different ripeness with the a component of the Lab color space and the I component of the YIQ color space. However, these methods have two shortcomings: first, the selection of color factors depends on experience and visual judgment and lacks a quantitative index; second, only the color information of a single pixel is considered during segmentation, while the information of neighboring pixels is ignored.
Disclosure of Invention
To address these technical problems, the present scheme provides a fruit segmentation method based on sparse convolution kernels that effectively solves them.
The invention is realized by the following technical scheme:
a fruit segmentation method based on sparse convolution kernel comprises the following steps:
Step 1: extracting sample pixels of the main objects in the image, and analyzing the discrimination between fruit pixels and other object pixels under different color factors;
Step 2: selecting suitable color channels to reconstruct the color channels of the original image;
Step 3: constructing a sparse convolution kernel whose elements are spaced apart and not adjacent to each other;
Step 4: extracting main-object sample pixels from the reconstructed image according to the sparse convolution kernel structure, dividing them into fruit samples and non-fruit samples, training a linear classifier on the two classes of samples, and outputting the corresponding classification model;
Step 5: converting the coefficients of the trained classifier into elements of the sparse convolution kernel;
Step 6: performing the convolution operation on the reconstructed image with the sparse convolution kernel to realize fruit segmentation.
Further, in step 1, the discrimination between sample data x and y is calculated by formula (1):

$J = \sigma_B^2 / \sigma_W^2$ (1)

where J is the discrimination, $\sigma_B^2$ is the inter-class variance, and $\sigma_W^2$ is the intra-class variance.
Further, the inter-class variance is calculated by formula (2):

$\sigma_B^2 = P_x P_y (m_x - m_y)^2$ (2)

where $m_x$ and $m_y$ are the means of sample data x and y, and $P_x$ and $P_y$ are the proportions occupied by sample data x and y, respectively.
Further, the intra-class variance is calculated by formula (3):

$\sigma_W^2 = P_x \sigma_x^2 + P_y \sigma_y^2$ (3)

where $\sigma_x^2$ and $\sigma_y^2$ are the variances of sample data x and y, respectively.
Further, in step 2, image reconstruction consists of selecting the color factors that best reflect the difference between fruit and background: the discrimination of the fruit under different color factors is calculated, and color factors with higher discrimination are selected to reconstruct the original image according to the requirements of the image segmentation method.
Further, in step 3, the sparse convolution kernel is a convolution kernel whose elements are spaced apart and not adjacent to each other.
Further, in step 5, the sparse convolution kernel is determined by converting the coefficients of the trained linear classifier into its elements.
Further, in step 6, the sparse convolution kernel is convolved with the reconstructed image to realize fruit segmentation.
(III) Advantageous effects
Compared with the prior art, the fruit segmentation method based on the sparse convolution kernel has the following beneficial effects:
(1) By calculating the discrimination between fruit pixels and other object pixels under different color factors, the technical scheme quantifies the influence of each color factor on fruit segmentation, which facilitates selecting suitable color factors. Meanwhile, the convolution operation brings each pixel and its neighborhood pixels into consideration, improving segmentation accuracy. A construction method for sparse convolution kernels is further proposed to improve the efficiency of the convolution operation; the sparse convolution kernel markedly reduces the amount of computation while preserving the segmentation effect.
Drawings
FIG. 1 is a schematic block diagram of the overall process of the present invention.
FIG. 2a is a schematic representation of an apple sample containing red and red-green mixed parts, as well as front-lit and back-lit parts.
FIG. 2b is a schematic representation of a leaf sample containing the front and back sides of leaves, as well as old and young leaves.
FIG. 2c is a schematic representation of a branch sample containing old, young, dead, and broken branches.
FIG. 2d is a schematic diagram of a sky sample, overexposed and mainly grayish white, shown with a border.
FIG. 2e is a schematic representation of a soil sample containing weeds, hay, and dead leaves.
FIG. 3 is a schematic diagram of the color factor images with higher discrimination in the present invention.
Fig. 4 is a schematic diagram of the structure of a 5 × 5 sparse convolution kernel in the present invention.
Fig. 5 is a schematic diagram of the determination process of 5 × 5 × 3 convolution kernel elements in the present invention.
FIG. 6 shows the sparse convolution kernel elements determined by training in the present invention.
Fig. 7 is a diagram of the image segmentation effect in the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. The described embodiments are only some embodiments of the invention, and not all embodiments. Various modifications and improvements of the technical solutions of the present invention may be made by those skilled in the art without departing from the design concept of the present invention, and all of them should fall into the protection scope of the present invention.
Example 1:
a fruit segmentation method based on sparse convolution kernel, which is implemented in this embodiment by using a sparse convolution kernel of 5 × 5 size to segment an apple image, and the specific flow is shown in fig. 1, and includes the following steps:
Step 1: Extracting main-object samples from apple images
Although the light environment in apple images is complex and the fruit appears in various states, the composition of the main objects in the images is relatively fixed. Analysis shows that the objects constituting an image fall mainly into 5 classes: fruit, leaves, branches, sky, and soil. To analyze the color characteristics of these 5 classes, 60 images were selected as sample images and pixel samples of the 5 classes were extracted from them; part of the sample areas are shown in fig. 2. Diversity and representativeness of the sample pixels were fully considered during selection: diversity covers different parts of the same object and the same object under different illumination, while representativeness means the selected pixel samples should occur widely across different sample images. The apple sample in fig. 2a contains red and red-green mixed parts, as well as front-lit and back-lit parts. The leaf sample in fig. 2b contains the front and back sides of leaves, as well as old and young leaves. The branch sample in fig. 2c contains old, young, dead, and broken branches. The sky sample in fig. 2d is overexposed and mainly grayish white; a border is added to make it visible. The soil sample in fig. 2e contains weeds, hay, dead leaves, and the like.
Step 2: calculating the discrimination of the fruit pixels and other object pixels under different color factors, and selecting a proper color factor to reconstruct the image
The discrimination is calculated by formula (1), where J is the discrimination between sample data x and y, $\sigma_B^2$ is the inter-class variance, and $\sigma_W^2$ is the intra-class variance. The inter-class variance is calculated by formula (2), where $m_x$ and $m_y$ are the means of sample data x and y, and $P_x$ and $P_y$ are the proportions occupied by sample data x and y, respectively. The intra-class variance is calculated by formula (3), where $\sigma_x^2$ and $\sigma_y^2$ are the variances of sample data x and y, respectively.

$J = \sigma_B^2 / \sigma_W^2$ (1)

$\sigma_B^2 = P_x P_y (m_x - m_y)^2$ (2)

$\sigma_W^2 = P_x \sigma_x^2 + P_y \sigma_y^2$ (3)
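As a minimal illustration, the discrimination of formulas (1)-(3) can be computed with numpy as follows (a sketch; the function name is illustrative, and x and y are assumed to be 1-D arrays of one color factor's normalized values for two pixel classes):

```python
import numpy as np

def discrimination(x: np.ndarray, y: np.ndarray) -> float:
    """Discrimination J between two pixel-sample sets under one color
    factor, per formulas (1)-(3)."""
    n = x.size + y.size
    p_x, p_y = x.size / n, y.size / n          # proportions P_x, P_y
    m_x, m_y = x.mean(), y.mean()              # class means m_x, m_y
    sigma_b2 = p_x * p_y * (m_x - m_y) ** 2    # inter-class variance (2)
    sigma_w2 = p_x * x.var() + p_y * y.var()   # intra-class variance (3)
    return sigma_b2 / sigma_w2                 # J, formula (1)

# e.g. apple vs. leaf pixels under the EXR operator:
# J = discrimination(exr_apple.ravel(), exr_leaf.ravel())
```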
Color spaces commonly used for fruit segmentation include RGB, OHTA, HSI, HSV, YIQ, YCbCr, Lab, and the like. To avoid confusion between color components represented by the same letter in different color spaces, subscripts are added to distinguish them below. When calculating the discrimination between apple and the other sample pixels under the color components of these color spaces, the luminance components I1, I4, V, Y1, Y2, and L are excluded, because they contain no color information and are susceptible to illumination.
Common color operators include R-G, R-B, 2R-G, EXR (EXR = 2R-G-B), EXG, (R-G)/(R+G), (R-G)/(G-B), R/B, R/G, R-I4, H1-I4, and the like. The R-B and EXG operators differ from the OHTA color components I2 and I3 only in their conversion coefficients and are identical after normalization, so both are removed. EXR and R-I4 likewise differ only in coefficients, so R-I4 is deleted and only EXR is retained. The four operators involving division are uniformly set to 0 when the denominator is 0. In addition, (R-G)/(G-B), R/B, and R/G produce extremely large values when the denominator is small, so their ranges are limited to facilitate normalization: (R-G)/(G-B) is limited to [-3, 3], and R/B and R/G are limited to [0, 3]. These ranges cover more than 85% of the observed values; values outside a range are set to the corresponding boundary value. Finally, each color operator is normalized according to its value range before the discrimination is calculated.
Tables 1 and 2 list the discrimination between apple and the other sample pixels under the color components of the RGB, OHTA, HSI, HSV, YIQ, YCbCr, and Lab color spaces and under the color operators R-G, 2R-G, EXR, (R-G)/(R+G), (R-G)/(G-B), R/B, R/G, and H1-I4, i.e., the discrimination between apple and the 4 other objects under 24 color factors in total. A comprehensive analysis of the values shows that the sky has discrimination values greater than 2 under many color components or operators and is the most easily distinguished object; leaves differ strongly in color from apples, with discrimination values greater than 2 under several color factors, and are relatively easy to distinguish; branches and soil have small discrimination values and are difficult to distinguish.
Comparing the discrimination values in Tables 1 and 2 across factors shows that I2 distinguishes branches and sky best, EXR distinguishes leaves best, and the nonlinear component S2 distinguishes soil best.
TABLE 1 Discrimination of apple from other objects under different color components (table image not reproduced in this extraction)

TABLE 2 Discrimination of apple from other objects under different color operators (table image not reproduced in this extraction)
The color channels I2, EXR, and S2 are therefore selected to reconstruct the original image into a new 3-channel image. Fig. 3 shows 4 images containing the main objects (apples, branches, leaves, and soil) together with their gray-scale images under these 3 most discriminative color factors. As can be seen, the discrimination between apple and the different objects varies across the 3 color factors: for example, apples are clearly separated from leaves in the EXR operator image but not in the S2 component image.
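A sketch of this channel reconstruction follows. EXR = 2R-G-B is given in the text; the patent distinguishes same-letter components only by subscript order, so taking I2 as the OHTA component (R - B)/2 and S2 as the HSV saturation is our assumption:

```python
import numpy as np

def reconstruct_channels(rgb: np.ndarray) -> np.ndarray:
    """Rebuild a float 3-channel image [EXR, I2, S2] from RGB (H x W x 3)."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    exr = 2.0 * r - g - b            # EXR = 2R - G - B (given in the text)
    i2 = (r - b) / 2.0               # OHTA I2 (assumed definition)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    # HSV saturation (assumed); 0 where the pixel is black
    s2 = np.divide(mx - mn, mx, out=np.zeros_like(mx), where=mx > 0)
    return np.stack([exr, i2, s2], axis=-1)
```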
Step 3: Proposing a sparse convolution kernel structure and a method for determining the elements in the kernel
Convolution is simple and efficient, and is an effective way to bring neighborhood pixels into the computation. Commonly used kernel sizes are 1 × 1, 3 × 3, 5 × 5, and so on. As the kernel size grows, the amount of data involved in the calculation increases sharply, slowing the algorithm, while pixels farther from the central pixel are less correlated with it; kernels of size 7 × 7 or larger are therefore not considered. To balance the amount of data against operation speed, a 5 × 5 kernel is selected for image segmentation.
An image convolution requires the kernel to traverse all image pixels, and during the traversal each pixel participates in the convolution many times, i.e., the same image information is reused repeatedly. To reduce the amount of computation, sparse convolution kernels are proposed in place of conventional kernels. A sparse convolution kernel is a kernel whose elements are spaced apart and not adjacent to each other; the structure of a 5 × 5 sparse kernel is shown in fig. 4, where the white cells contain no elements.
A normal convolution kernel samples the pixels in the neighborhood one by one, whereas a sparse convolution kernel samples them at intervals. Because adjacent pixels carry highly similar color information, the sparse kernel reduces information redundancy as well as computation compared with a normal kernel. When a normal 5 × 5 kernel is convolved over the whole image, each non-edge pixel participates in 25 convolution operations; with a 5 × 5 sparse kernel, each non-edge pixel participates in only 13.
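One plausible realization of the fig. 4 pattern is a 5 × 5 checkerboard, which keeps exactly 13 mutually non-adjacent positions, matching the 13 operations above (the exact layout is an assumption read off the figure description):

```python
import numpy as np

# 5x5 checkerboard: True marks the 13 element positions, False the empty
# (white) cells; no two active cells are horizontally/vertically adjacent.
ii, jj = np.indices((5, 5))
sparse_mask = (ii + jj) % 2 == 0
assert sparse_mask.sum() == 13
```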
The essence of the convolution calculation is to multiply each element of the kernel with the pixel value of the corresponding image region and sum the products, i.e., a linear operation on the values of all neighborhood pixels across channels. A linear classifier can therefore be trained on suitable samples to determine reasonable kernel elements. The element-determination process for a 5 × 5 × 3 kernel is shown in fig. 5. First, the collected sample image blocks of the different objects are converted into 3-channel images composed of the color channels EXR, I2, and S2; neighborhood sample data of size 5 × 5 × 3 are extracted from the sample blocks and expanded into 39 × 1 sample column vectors (13 sparse positions × 3 channels); these are input to a linear classifier, which is trained to judge the class of the central pixel of the neighborhood, i.e., fruit pixel or non-fruit pixel; finally, the classifier's coefficients and the corresponding offset are extracted, and the 39 × 1 weight coefficients are converted into a 5 × 5 × 3 convolution kernel.
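A sketch of this sampling step (assuming the 39 values are the 13 sparse positions taken in each of the 3 channels; `sparse_mask` is the mask from the previous sketch, and the function name is illustrative):

```python
import numpy as np

def extract_sample(img3: np.ndarray, r: int, c: int, mask: np.ndarray) -> np.ndarray:
    """One 39x1 training vector: the 13 masked pixels of the 5x5 window
    around (r, c), in each of the 3 channels (13 x 3 = 39 values)."""
    patch = img3[r - 2:r + 3, c - 2:c + 3, :]  # 5x5x3 neighborhood
    return patch[mask].reshape(-1)             # (13, 3) -> (39,)
```

The same ordering must later be used when the trained coefficients are folded back into kernels.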
The linear classifier is the key to determining the kernel elements. Linear discriminant analysis (LDA) is a widely used linear classifier for both classification and dimensionality reduction, applied in fields such as disease classification, product management, face recognition, and machine learning. In this embodiment, the LDA algorithm is used to learn the neighborhood sample data. The basic principle of LDA is to project high-dimensional sample data onto a vector in some direction, reducing the data to one dimension, while maximizing the ratio of the inter-class dispersion to the intra-class dispersion of the projected data, i.e., making the sample data optimally separable in that dimension. This separability criterion is similar in principle to the color discrimination above, both taking the ratio of inter-class to intra-class dispersion as the measure, but the two differ in how the dispersions are expressed.
Let the two classes of data sets be X1 and X2, with projected data sets Z1 and Z2; the projection transformation is given by formula (4). The inter-class dispersion JB of the projected data, formula (5), is the squared distance between the means of Z1 and Z2; substituting formula (4) expresses it in terms of the pre-projection sets X1 and X2. The intra-class dispersion JW, formula (6), is expressed analogously to a variance and can likewise be rewritten via formula (4) in terms of X1 and X2. Here SB is the inter-class dispersion matrix, formula (7), and SW is the intra-class dispersion matrix, formula (8). The objective function J(w), formula (9), is the ratio of the inter-class to the intra-class dispersion, and substituting formulas (7) and (8) gives its concise form. The projection vector w is computed by formula (10); for SW to be invertible, the sample set must be sufficiently large, i.e., the number of samples must be much greater than the sample dimension. Finally, the midpoint between the means of Z1 and Z2 on the projection vector is taken as the offset for classifying the data.
$z = w^T x$ (4)

$J_B = (\tilde{m}_1 - \tilde{m}_2)^2 = (w^T m_1 - w^T m_2)^2$ (5)

$J_W = \sum_{i=1}^{2} \sum_{z \in Z_i} (z - \tilde{m}_i)^2$ (6)

$S_B = (m_1 - m_2)(m_1 - m_2)^T$ (7)

$S_W = \sum_{i=1}^{2} \sum_{x \in X_i} (x - m_i)(x - m_i)^T$ (8)

$J(w) = J_B / J_W = w^T S_B w / (w^T S_W w)$ (9)

$w = S_W^{-1} (m_1 - m_2)$ (10)
where x is a high-dimensional vector in data set X1 or X2, $m_1$ and $m_2$ are the mean vectors of X1 and X2, z is the corresponding one-dimensional projection in Z1 or Z2, $\tilde{m}_1$ and $\tilde{m}_2$ are the means of Z1 and Z2, and the index i = 1, 2 denotes the two classes.
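A compact numpy sketch of this training step under formulas (4)-(10) (names are illustrative; X1 and X2 hold the fruit and non-fruit sample vectors as rows):

```python
import numpy as np

def fit_lda(X1: np.ndarray, X2: np.ndarray):
    """Two-class Fisher LDA per formulas (4)-(10).
    X1, X2: (n_samples, 39) matrices of fruit / non-fruit sample vectors."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # intra-class dispersion matrix S_W, formula (8)
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # projection vector w = S_W^{-1} (m1 - m2), formula (10);
    # solvable only if the samples greatly outnumber the 39 dimensions
    w = np.linalg.solve(S_W, m1 - m2)
    # offset: midpoint of the two projected class means
    bias = -0.5 * (w @ m1 + w @ m2)
    return w, bias   # sign(w @ x + bias) gives the predicted class
```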
Step 4: Training the linear model and testing the segmentation effect
The sample images of the different objects are converted into 3-channel images composed of EXR, I2, and S2, and 26777 neighborhood samples are then extracted from them with the 5 × 5 sparse convolution kernel as a template, comprising 13020 fruit samples and 13757 non-fruit samples; each neighborhood sample is a 39 × 1 column vector. The two classes of samples are divided into a training set and a test set at a ratio of 8:2, giving 2604 fruit and 2752 non-fruit samples in the test set. The training set is input to the LDA model for training, the test set is input to the trained model for testing, and the model's performance is evaluated from the test results.
Through multiple training runs, the model with the minimum test error is selected for classifying fruit and non-fruit samples. The model's offset is -34.62, and its coefficients are converted into 3 sparse convolution kernels of 5 × 5 size, as shown in fig. 6. From left to right, the 3 kernels correspond to the color channels EXR, I2, and S2.
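Folding the trained coefficients back into kernels can be sketched as follows (continuing the earlier sketches; `w` and `sparse_mask` come from them, and the ordering must match the one used during sample extraction):

```python
import numpy as np

# scatter the 39 trained coefficients into a 5x5x3 kernel stack;
# the empty (white) cells stay zero
kernels = np.zeros((5, 5, 3))
kernels[sparse_mask] = w.reshape(13, 3)
# bias would correspond to the reported offset of -34.62 for this model
```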
To verify the segmentation effect of the sparse convolution kernels, a set of apple images is selected for testing. Each test image is first converted into a 3-channel image composed of EXR, I2, and S2; the 3 sparse kernels are then convolved with their corresponding color channels, the convolution results are summed, and finally the offset is added for the logical decision: pixels with a response value greater than 0 are judged as fruit pixels, and pixels less than or equal to 0 as non-fruit pixels. The segmentation effect is shown in fig. 7b; the method achieves effective segmentation of apple fruits.
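The test-time procedure just described can be sketched with scipy's 2-D convolution (variable names are illustrative; each kernel is flipped so that the convolution reproduces the classifier's correlation with the coefficient layout):

```python
import numpy as np
from scipy.signal import convolve2d

def segment(image3: np.ndarray, kernels: np.ndarray, bias: float) -> np.ndarray:
    """image3: (H, W, 3) reconstructed [EXR, I2, S2] image;
    kernels: (5, 5, 3) sparse kernels; returns a boolean fruit mask."""
    response = np.full(image3.shape[:2], bias, dtype=np.float64)
    for c in range(3):
        # flip so the kernel weights line up with the classifier coefficients
        response += convolve2d(image3[..., c], kernels[::-1, ::-1, c], mode="same")
    return response > 0   # True = fruit pixel, False = non-fruit
```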

Claims (7)

1. A fruit segmentation method based on a sparse convolution kernel, characterized by comprising the following steps:
Step 1: extracting sample pixels of the main objects in the image, and analyzing the discrimination between fruit pixels and other object pixels under different color factors, wherein the discrimination between sample data x and y is calculated by formula (1):

$J = \sigma_B^2 / \sigma_W^2$ (1)

where J is the discrimination, $\sigma_B^2$ is the inter-class variance, and $\sigma_W^2$ is the intra-class variance;
Step 2: selecting suitable color channels to reconstruct the color channels of the original image;
Step 3: constructing a sparse convolution kernel whose elements are spaced apart and not adjacent to each other;
Step 4: extracting main-object sample pixels from the reconstructed image according to the sparse convolution kernel structure, dividing them into fruit samples and non-fruit samples, training a linear classifier on the two classes of samples, and outputting the corresponding classification model;
Step 5: converting the coefficients of the trained classifier into elements of the sparse convolution kernel;
Step 6: performing the convolution operation on the reconstructed image with the sparse convolution kernel to realize fruit segmentation.
2. The sparse convolution kernel based fruit segmentation method of claim 1, wherein the inter-class variance is calculated by formula (2):

$\sigma_B^2 = P_x P_y (m_x - m_y)^2$ (2)

where $m_x$ and $m_y$ are the means of sample data x and y, and $P_x$ and $P_y$ are the proportions occupied by sample data x and y, respectively.
3. The sparse convolution kernel based fruit segmentation method of claim 2, wherein the intra-class variance is calculated by formula (3):

$\sigma_W^2 = P_x \sigma_x^2 + P_y \sigma_y^2$ (3)

where $\sigma_x^2$ and $\sigma_y^2$ are the variances of sample data x and y, respectively.
4. The sparse convolution kernel based fruit segmentation method of claim 1, wherein in step 2, image reconstruction consists of selecting the color factors that best reflect the difference between fruit and background: the discrimination of the fruit under different color factors is calculated, and color factors with higher discrimination are selected to reconstruct the original image according to the requirements of the image segmentation method.
5. The sparse convolution kernel based fruit segmentation method of claim 1, wherein in step 3, the sparse convolution kernel is a convolution kernel whose elements are spaced apart and not adjacent to each other.
6. The sparse convolution kernel based fruit segmentation method of claim 1, wherein in step 5, the sparse convolution kernel is determined by converting the coefficients of the trained linear classifier into its elements.
7. The sparse convolution kernel based fruit segmentation method of claim 1, wherein in step 6, the sparse convolution kernel is convolved with the reconstructed image to realize fruit segmentation.
CN202010458491.XA 2020-05-26 2020-05-26 Fruit segmentation method based on sparse convolution kernel Active CN111709427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010458491.XA CN111709427B (en) 2020-05-26 2020-05-26 Fruit segmentation method based on sparse convolution kernel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010458491.XA CN111709427B (en) 2020-05-26 2020-05-26 Fruit segmentation method based on sparse convolution kernel

Publications (2)

Publication Number Publication Date
CN111709427A CN111709427A (en) 2020-09-25
CN111709427B (en) 2023-04-07

Family

ID=72538322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010458491.XA Active CN111709427B (en) 2020-05-26 2020-05-26 Fruit segmentation method based on sparse convolution kernel

Country Status (1)

Country Link
CN (1) CN111709427B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269690A (en) * 2021-05-27 2021-08-17 山东大学 Method and system for detecting diseased region of blade

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2015261891A1 (en) * 2014-05-23 2016-10-13 Ventana Medical Systems, Inc. Systems and methods for detection of biological structures and/or patterns in images
US11645835B2 (en) * 2017-08-30 2023-05-09 Board Of Regents, The University Of Texas System Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications
CN109255757B (en) * 2018-04-25 2022-01-11 江苏大学 Method for segmenting fruit stem region of grape bunch naturally placed by machine vision
CN109344699A (en) * 2018-08-22 2019-02-15 天津科技大学 Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN111191583B (en) * 2019-12-30 2023-08-25 郑州科技学院 Space target recognition system and method based on convolutional neural network

Also Published As

Publication number Publication date
CN111709427A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
Bhargava et al. Fruits and vegetables quality evaluation using computer vision: A review
Mim et al. Automatic detection of mango ripening stages–An application of information technology to botany
CN106845497B (en) Corn early-stage image drought identification method based on multi-feature fusion
Kumari et al. Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant analyzer
CN108564085B (en) Method for automatically reading of pointer type instrument
Sabilla et al. Determining banana types and ripeness from image using machine learning methods
CN104680524A (en) Disease diagnosis method for leaf vegetables
US8983183B2 (en) Spatially varying log-chromaticity normals for use in an image process
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN111259925A (en) Method for counting field wheat ears based on K-means clustering and width mutation algorithm
Trivedi et al. Automatic segmentation of plant leaves disease using min-max hue histogram and k-mean clustering
CN103528967A (en) Hyperspectral image based overripe Lonicera edulis fruit identification method
CN116559111A (en) Sorghum variety identification method based on hyperspectral imaging technology
CN111709427B (en) Fruit segmentation method based on sparse convolution kernel
Petrellis Plant disease diagnosis with color normalization
CN105869161B (en) Hyperspectral image band selection method based on image quality evaluation
Janardhana et al. Computer aided inspection system for food products using machine vision—a review
Ji et al. Apple color automatic grading method based on machine vision
Narendra et al. An intelligent system for identification of Indian Lentil types using Artificial Neural Network (BPNN)
CN111832569B (en) Wall painting pigment layer falling disease labeling method based on hyperspectral classification and segmentation
Zakiyyah et al. Characterization and Classification of Citrus reticulata var. Keprok Batu 55 Using Image Processing and Artificial Intelligence
Kangune et al. Automated estimation of grape ripeness
Rony et al. BottleNet18: Deep Learning-Based Bottle Gourd Leaf Disease Classification
Namias et al. Automatic grading of green intensity in soybean seeds
CN106709505A (en) Dictionary learning-based grain type classification method and system for corn grains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant