CN109377487B - Fruit surface defect detection method based on deep learning segmentation - Google Patents

Fruit surface defect detection method based on deep learning segmentation

Info

Publication number
CN109377487B
Authority
CN
China
Prior art keywords
image
layer
fruit
network unit
convolutional neural
Prior art date
Legal status
Active
Application number
CN201811203154.5A
Other languages
Chinese (zh)
Other versions
CN109377487A (en)
Inventor
容典
应义斌
饶秀勤
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201811203154.5A
Publication of CN109377487A
Application granted
Publication of CN109377487B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N2021/8466Investigation of vegetal material, e.g. leaves, plants, fruits
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention discloses a fruit surface defect detection method based on deep learning segmentation. The detection method comprises the following steps: obtaining an RGB color image of the fruit, removing the background, converting the image into a gray image and resizing it to a uniform size of 512 × 512; preparing positive sample image data and negative sample image data for training a convolutional neural segmentation network; designing and training the convolutional neural segmentation network, and storing the network connection weight matrix T after training for the subsequent detection step; and feeding the fruit image to be detected into the trained convolutional neural segmentation network to complete image segmentation and obtain the fruit surface defect image. The detection is accurate and rapid, effectively avoids dependence on the shape and size of fruits and agricultural products as well as the complex influence caused by brightness correction, is applicable to a wide range of objects, and therefore has great application value.

Description

Fruit surface defect detection method based on deep learning segmentation
Technical Field
The invention relates to a computer vision image processing method, in particular to a fruit surface defect detection method based on deep learning segmentation.
Background
Surface defect detection is one of the important bases for fruit grading, and is strictly regulated in the fruit rating standards of countries throughout the world. Many researchers at home and abroad have studied computer vision methods for detecting the surface defects of fruits and agricultural products, but many agricultural products are spheroid, so the gray value in the middle of the two-dimensional image is far larger than that at the edge, which makes image-based detection of surface defects difficult.
A search of the prior art shows that existing methods fall mainly into three categories:
1) Processing methods based on a sphere gray-scale model. For example, Chinese patent CN101984346A discloses a fruit surface defect detection method based on low-pass filtering: an R-component image without background is obtained, the fruit color image is low-pass filtered by the discrete Fourier transform and then inverse-transformed to obtain a surface luminance image, the former image is divided by the latter to obtain an image of uniform luminance, and a single threshold is then applied to segment the fruit surface defects.
2) Processing methods based on surface texture features. López-García F et al. (2010) used multivariate image analysis and a surface-texture-feature training algorithm to detect navel orange surface defects; the algorithm is complex, difficult to use on line, and limited in the types of navel orange surface defects it detects. (López-García F, Andreu-García G, Blasco J, et al. Automatic detection of skin defects in citrus fruits using a multivariate image analysis approach [J]. Computers and Electronics in Agriculture, 2010, 71(2): 189-197.)
3) Processing methods based on multispectral imaging technology. Blasco et al. (2007) used a multispectral imaging device for navel orange surface defect analysis, which is costly and requires complex hardware. (Blasco J, Aleixos N, et al. Citrus sorting by identification of the most common defects using multispectral computer vision [J]. Journal of Food Engineering, 2007, 83: 384-393.)
The existing methods thus suffer from a limited range of detectable surface defect types, from complex calculations that are difficult to use for on-line detection, or from dependence on costly and complex imaging hardware, so a new fruit surface defect detection method is needed.
Disclosure of Invention
In order to solve the problems identified in the background art, namely the limited types of detectable surface defects, the complex calculation methods that are difficult to use for on-line detection, and the dependence on costly and complex imaging hardware, the invention aims to provide a fruit surface defect detection method based on deep learning segmentation.
The technical scheme adopted by the invention comprises the following steps:
1) obtaining a fruit RGB color image, removing the background of the fruit RGB color image, and converting the fruit RGB color image into a gray image;
2) dividing the gray level image in the step 1) into a positive sample image and a negative sample image, and constructing training data for training a convolutional neural segmentation network, wherein the training data are the positive sample image and a binarization image sample corresponding to the positive sample image, and the negative sample image and a binarization image sample corresponding to the negative sample image;
3) designing a convolutional neural segmentation network structure for fruit surface defect detection, training by using the training data in the step 2), and acquiring a network connection weight matrix T after training is completed;
4) converting an RGB color image of the fruit to be detected into a gray image, inputting the gray image into the convolutional neural segmentation network obtained in step 3), and obtaining the fruit surface defect image after image segmentation is completed.
The size of the gray image in step 1) needs to be uniformly adjusted to 512 × 512.
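For illustration, the following Python sketch shows one way step 1) could be implemented with OpenCV. The background-removal rule (a simple global gray-level threshold, assuming a dark background), the threshold value and the function names are assumptions for illustration and are not specified by the invention.

```python
import cv2

def preprocess(rgb_path, size=512, bg_thresh=30):
    """Step 1): load the RGB fruit image, remove the background,
    convert to a gray image and resize to size x size."""
    bgr = cv2.imread(rgb_path)                       # OpenCV loads images as BGR
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)     # color image -> gray image
    # Assumed background removal: the background is darker than the fruit,
    # so a global threshold gives a fruit mask and background pixels are zeroed.
    _, mask = cv2.threshold(gray, bg_thresh, 255, cv2.THRESH_BINARY)
    gray = cv2.bitwise_and(gray, gray, mask=mask)
    return cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
```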
The specific steps of the step 2) are as follows:
2.1) dividing the gray level image in the step 1) into a positive sample image and a negative sample image according to whether the fruit is defective or not, wherein the positive sample image is the gray level image of the defective fruit, and the negative sample image is the gray level image of the non-defective fruit;
2.2) for each positive sample image, constructing a corresponding binarized image sample in which the pixels of defective areas are marked 255 and the pixels of non-defective areas are marked 0, and for each negative sample image, constructing a corresponding binarized image sample in which all pixels are marked 0, thereby forming training data containing both defective and non-defective fruits.
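A minimal sketch of how the training pairs of step 2.2) could be assembled, assuming that each positive gray image has a hand-labelled binary mask stored under the same file name and that each negative image is paired with an all-zero mask; the directory layout and names are assumptions.

```python
import os
import cv2
import numpy as np

def build_training_pairs(pos_dir, pos_mask_dir, neg_dir, size=512):
    """Return (gray image, binarized mask) pairs: defective fruits with their
    0/255 defect masks, defect-free fruits with all-zero masks (step 2.2)."""
    pairs = []
    for name in os.listdir(pos_dir):                  # positive samples: defective fruits
        img = cv2.imread(os.path.join(pos_dir, name), cv2.IMREAD_GRAYSCALE)
        mask = cv2.imread(os.path.join(pos_mask_dir, name), cv2.IMREAD_GRAYSCALE)
        pairs.append((img, mask))                     # defect pixels = 255, others = 0
    for name in os.listdir(neg_dir):                  # negative samples: defect-free fruits
        img = cv2.imread(os.path.join(neg_dir, name), cv2.IMREAD_GRAYSCALE)
        pairs.append((img, np.zeros((size, size), np.uint8)))  # all pixels = 0
    return pairs
```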
The specific steps of designing the convolutional neural segmentation network structure for fruit surface defect detection in the step 3) are as follows:
3.1) designing a convolutional neural segmentation network with thirteen convolutional layers, twelve batch normalization layers, six maximum pooling layers, six upsampling layers, twelve ReLU function activation layers and one Sigmoid function classification layer;
3.2) the convolutional neural segmentation network is mainly composed of an input layer, six subunits, an output convolutional layer, a Sigmoid function classification layer and an output layer, wherein each subunit comprises a front feature network unit, a rear feature network unit and an upsampling layer; the input layer is connected to the front feature network unit of the first subunit, the front feature network units of the six subunits are connected and transfer data in sequence, the rear feature network units of the six subunits are connected and transfer data in sequence, the output of the rear feature network unit of the first subunit is connected to the output convolutional layer, and the output convolutional layer is connected to the output layer through the Sigmoid function classification layer;
inside each subunit, the front feature network unit is connected to the rear feature network unit through the upsampling layer, and the front feature network unit is also directly connected to the rear feature network unit; each front feature network unit internally comprises a convolutional layer, a batch normalization layer, a ReLU function activation layer and a maximum pooling layer connected in sequence in the order of data transfer; each rear feature network unit internally comprises a convolutional layer, a batch normalization layer and a ReLU function activation layer connected in sequence in the order of data transfer;
the output of the maximum pooling layer serves as the output of the front feature network unit and is transferred through the upsampling layer of the current subunit to the convolutional layer of the rear feature network unit, and is simultaneously transferred to the convolutional layer of the front feature network unit of the next subunit; the output of the ReLU function activation layer of the rear feature network unit serves as the output of the rear feature network unit and is transferred to the upsampling layer of the preceding subunit.
3.3) inputting the training data of step 2) into the input layer of the convolutional neural segmentation network, and training the network with the RMSprop optimization algorithm rather than the commonly used SGD algorithm until the network error reaches its minimum and convergence is complete, the loss being calculated as two-class cross entropy; the network connection weight matrix T constructed from the parameters of the trained convolutional neural segmentation network is then obtained.
The convolutional neural segmentation network in the step 4) is a convolutional neural segmentation network loaded with a connection weight matrix T.
The invention has the beneficial effects that:
1) The specially constructed network structure, in which multiple layers pass information to one another across the network, makes image detail processing richer, reduces the loss of image detail information, improves the segmentation effect, enhances image analysis capability and preserves more multi-scale image detail.
2) The method detects fruit surface defects with good accuracy and in real time: the detection time for a 512 × 512 fruit image is only 15 ms. Deep learning provides a more abstract feature extraction capability, which improves fruit defect detection; at the same time, the complex calculation caused by brightness correction of images of spheroid fruits is effectively avoided, as is the high cost of relying on hyperspectral and multispectral imaging hardware.
3) The invention can effectively detect surface defects with different brightness characteristics, such as the 9 surface defects of navel oranges (insect-damaged fruits, wind-damaged fruits, thrips fruits, scale insect fruits, canker fruits, cracked fruits, anthracnose fruits, phytotoxicity fruits and heterochromous fruits). The method has a wide application range, is simple and easy to implement, and has great application potential in computer vision on-line detection of fruit and agricultural product quality.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of a deep learning segmentation network structure of the method of the present invention.
Fig. 3 is an original image in embodiment 1 of the present invention.
Fig. 4 is a surface defect image in example 1 of the present invention.
FIG. 5 is the original picture of the navel orange insect-damaged fruit.
FIG. 6 is a diagram of the surface defect detection result of the navel orange insect-damaged fruit.
FIG. 7 is the original picture of the navel orange wind-damaged fruit.
FIG. 8 is a diagram of the surface defect detection result of the navel orange wind-damaged fruit.
FIG. 9 is the original picture of the navel orange thrips fruit.
FIG. 10 is a diagram of the surface defect detection result of the navel orange thrips fruit.
FIG. 11 is the original picture of the navel orange scale insect fruit.
FIG. 12 is a diagram of the surface defect detection result of the navel orange scale insect fruit.
FIG. 13 is the original picture of the navel orange canker fruit.
FIG. 14 is a diagram of the surface defect detection result of the navel orange canker fruit.
FIG. 15 is the original picture of the navel orange cracked fruit.
FIG. 16 is a diagram of the surface defect detection result of the navel orange cracked fruit.
FIG. 17 is the original picture of the navel orange anthracnose fruit.
FIG. 18 is a diagram of the surface defect detection result of the navel orange anthracnose fruit.
FIG. 19 is the original picture of the navel orange phytotoxicity fruit.
FIG. 20 is a diagram of the surface defect detection result of the navel orange phytotoxicity fruit.
FIG. 21 is the original picture of the navel orange heterochromous fruit.
FIG. 22 is a diagram of the surface defect detection result of the navel orange heterochromous fruit.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
As shown in fig. 1, the implementation of the present invention is as follows:
The method comprises the following steps:
Step one: an RGB color image of the fruit is obtained; after background removal, it is converted into a gray image of uniform size 512 × 512, as shown in Fig. 3.
Step two: preparing a positive sample image data set and a negative sample image data set for training a convolutional neural segmentation network; the specific process is as follows:
2.1) dividing the gray level image in the step one into a positive sample image and a negative sample image according to whether the fruit is defective or not, wherein the positive sample image is the gray level image of the defective fruit, and the negative sample image is the gray level image of the non-defective fruit;
2.2) for each positive sample image, constructing a corresponding binarized image sample in which the pixels of defective areas are marked 255 and the pixels of non-defective areas are marked 0, and for each negative sample image, constructing a corresponding binarized image sample in which all pixels are marked 0, thereby forming training data containing both defective and non-defective fruits.
2.3) randomly dividing the training data of step 2.2) into batches, with 50% of the data used as the training set, 20% as the validation set and 30% as the test set.
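A minimal sketch of the random 50% / 20% / 30% split of step 2.3), operating on the (image, mask) pairs from the earlier sketch; the shuffling seed is arbitrary.

```python
import random

def split_dataset(pairs, seed=0):
    """Randomly split (image, mask) pairs into 50% training,
    20% validation and 30% test sets (step 2.3)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train, n_val = int(0.5 * n), int(0.2 * n)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])
```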
Step three: designing a convolutional neural segmentation network for fruit surface defect detection as shown in FIG. 2, training, and obtaining a segmentation network connection weight matrix T after training is finished for subsequent detection steps; the specific process is as follows:
3.1) designing a convolutional neural segmentation network with thirteen convolutional layers, twelve batch normalization layers, six maximum pooling layers, six upsampling layers, twelve ReLU function activation layers and one Sigmoid function classification layer.
3.2) the convolutional neural segmentation network is mainly composed of an input layer, six subunits, an output convolutional layer, a Sigmoid function classification layer and an output layer, wherein each subunit comprises a front feature network unit, a rear feature network unit and an upsampling layer; the input layer is connected to the front feature network unit of the first subunit, the front feature network units of the six subunits are connected and transfer data in sequence, the rear feature network units of the six subunits are connected and transfer data in sequence, the output of the rear feature network unit of the first subunit is connected to the output convolutional layer, and the output convolutional layer is connected to the output layer through the Sigmoid function classification layer;
inside each subunit, the front feature network unit is connected to the rear feature network unit through the upsampling layer, and the front feature network unit is also directly connected to the rear feature network unit; each front feature network unit internally comprises a convolutional layer, a batch normalization layer, a ReLU function activation layer and a maximum pooling layer connected in sequence in the order of data transfer; each rear feature network unit internally comprises a convolutional layer, a batch normalization layer and a ReLU function activation layer connected in sequence in the order of data transfer;
the output of the maximum pooling layer serves as the output of the front feature network unit and is transferred through the upsampling layer of the current subunit to the convolutional layer of the rear feature network unit, and is simultaneously transferred to the convolutional layer of the front feature network unit of the next subunit; the output of the ReLU function activation layer of the rear feature network unit serves as the output of the rear feature network unit and is transferred to the upsampling layer of the preceding subunit.
In the order of data transfer, the front feature network unit of the first subunit comprises convolutional layer I with 16 convolution kernels of size 3 × 3, batch normalization layer I, ReLU function activation layer I and maximum pooling layer I; the rear feature network unit of the first subunit comprises convolutional layer XII with 16 convolution kernels of size 3 × 3, batch normalization layer XII and ReLU function activation layer XII; and the upsampling layer of the first subunit is upsampling layer VI. The front feature network unit of the second subunit comprises convolutional layer II with 32 convolution kernels of size 3 × 3, batch normalization layer II, ReLU function activation layer II and maximum pooling layer II; the rear feature network unit of the second subunit comprises convolutional layer XI with 32 convolution kernels of size 3 × 3, batch normalization layer XI and ReLU function activation layer XI; and the upsampling layer of the second subunit is upsampling layer V. The output convolutional layer XIII is a convolutional layer with one convolution kernel of size 1 × 1.
The output of maximum pooling layer I serves as the output of the front feature network unit of the first subunit and is transferred through upsampling layer VI to convolutional layer XII, and is simultaneously transferred to convolutional layer II of the second subunit; the output of ReLU function activation layer XI serves as the output of the rear feature network unit of the second subunit and is transferred to upsampling layer VI of the first subunit; ReLU function activation layer XII is connected to the Sigmoid function classification layer via convolutional layer XIII.
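The following Keras sketch is one possible reading of this structure: the six front units form the encoder, the six rear units form the decoder, the output of each front unit is merged with the output of the next subunit's rear unit and upsampled before entering the rear unit, and a 1 × 1 output convolution with a Sigmoid layer produces the segmentation map. Only the 16 and 32 filters of the first two subunits and the kernel sizes are given in the text; the remaining filter counts (64–512) and the use of concatenation to merge the two inputs of each upsampling layer are assumptions.

```python
from tensorflow.keras import layers, Model

def front_unit(x, filters):
    """Front feature network unit: conv (3x3) -> batch norm -> ReLU -> max pool."""
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return layers.MaxPooling2D(2)(x)

def rear_unit(x, filters):
    """Rear feature network unit: conv (3x3) -> batch norm -> ReLU."""
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation('relu')(x)

def build_segmentation_net(filters=(16, 32, 64, 128, 256, 512)):
    """Convolutional neural segmentation network with six subunits.
    Only 16 and 32 are stated in the text; the remaining counts are assumed."""
    inp = layers.Input((512, 512, 1))
    # Encoder: the six front units are chained, each halving the resolution.
    fronts, x = [], inp
    for f in filters:
        x = front_unit(x, f)
        fronts.append(x)
    # Decoder: from the deepest subunit back to the first; each rear unit receives
    # the upsampled merge of its own front-unit output and the next subunit's rear output.
    r = rear_unit(layers.UpSampling2D(2)(fronts[-1]), filters[-1])
    for f_out, nf in zip(reversed(fronts[:-1]), reversed(filters[:-1])):
        merged = layers.Concatenate()([f_out, r])   # skip connection (merge rule assumed)
        r = rear_unit(layers.UpSampling2D(2)(merged), nf)
    out = layers.Conv2D(1, 1)(r)                    # output convolutional layer, one 1x1 kernel
    out = layers.Activation('sigmoid')(out)         # Sigmoid function classification layer
    return Model(inp, out)
```

As built, this sketch contains thirteen convolutional layers, twelve batch normalization layers, twelve ReLU function activation layers, six maximum pooling layers, six upsampling layers and one Sigmoid function classification layer, matching the counts of step 3.1).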
3.3) inputting the training data obtained in step 2) into the input layer of the convolutional neural segmentation network, and training the network with the RMSprop optimization algorithm rather than the commonly used SGD algorithm until the network error reaches its minimum and convergence is complete, the loss being calculated as two-class cross entropy; the network connection weight matrix T constructed from the parameters of the trained convolutional neural segmentation network is then obtained. RMSprop yields a better, globally stable learning rate and improves the training efficiency of the convolutional neural segmentation network.
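A training sketch under the same assumptions, reusing build_segmentation_net, build_training_pairs and split_dataset from the earlier sketches; the directory names, batch size, epoch count, early-stopping rule and weight file name are illustrative assumptions, not values given by the invention.

```python
import numpy as np
import tensorflow as tf

def to_arrays(pairs):
    """Stack (image, mask) pairs into float arrays scaled to [0, 1] with a channel axis."""
    xs = np.stack([img for img, _ in pairs]).astype('float32') / 255.0
    ys = np.stack([msk for _, msk in pairs]).astype('float32') / 255.0
    return xs[..., None], ys[..., None]

model = build_segmentation_net()
model.compile(optimizer=tf.keras.optimizers.RMSprop(),   # RMSprop rather than plain SGD
              loss='binary_crossentropy',                # two-class cross entropy
              metrics=['accuracy'])

train_pairs, val_pairs, test_pairs = split_dataset(build_training_pairs(
    'positives/', 'positive_masks/', 'negatives/'))      # assumed directory names
x_train, y_train = to_arrays(train_pairs)
x_val, y_val = to_arrays(val_pairs)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=4, epochs=100,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=10,
                                                      restore_best_weights=True)])
model.save_weights('weights_T.h5')                       # the network connection weight matrix T
```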
Step four: collecting an RGB color image of the fruit to be detected, converting it into a gray image, and completing image segmentation with the convolutional neural segmentation network loaded with the connection weight matrix T from step three, thereby obtaining a surface defect image as shown in Fig. 4, in which the white area is the defect area.
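A detection sketch for step four under the same assumptions, reusing preprocess and build_segmentation_net from the earlier sketches; the 0.5 probability threshold and the file names are assumptions.

```python
import cv2

model = build_segmentation_net()
model.load_weights('weights_T.h5')                 # load the trained connection weight matrix T

gray = preprocess('fruit_to_inspect.png')          # step 1) applied to the fruit to be detected
prob = model.predict(gray[None, ..., None] / 255.0)[0, ..., 0]
defect_mask = (prob > 0.5).astype('uint8') * 255   # white pixels mark the defect areas
cv2.imwrite('defect_mask.png', defect_mask)
```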
In this embodiment, verification on the validation set gives an accuracy of 96%, and testing on the test set gives an accuracy of 95%.
The specific embodiment is as follows:
the invention respectively carries out the experiment on the navel orange insect-damaged fruit, the navel orange wind-damaged fruit, the navel orange thrips fruit, the navel orange scale insect fruit, the navel orange ulcer fruit, the navel orange cracked fruit, the navel orange anthracnose-damaged fruit, the navel orange drug-damaged fruit and the navel orange heterochromous fruit, the relevant original images and the detection result images are respectively shown in the attached figures 5 to 22,
Fig. 5 is an original image of a navel orange insect-damaged fruit, and Fig. 6 is the detection result of Fig. 5, in which the white area is the defect area.
Fig. 7 is an original image of a navel orange wind-damaged fruit, and Fig. 8 is the detection result of Fig. 7, in which the white area is the defect area.
Fig. 9 is an original image of a navel orange thrips fruit, and Fig. 10 is the detection result of Fig. 9, in which the white area is the defect area.
Fig. 11 is an original image of a navel orange scale insect fruit, and Fig. 12 is the detection result of Fig. 11, in which the white area is the defect area.
Fig. 13 is an original image of a navel orange canker fruit, and Fig. 14 is the detection result of Fig. 13, in which the white area is the defect area.
Fig. 15 is an original image of a navel orange cracked fruit, and Fig. 16 is the detection result of Fig. 15, in which the white area is the defect area.
Fig. 17 is an original image of a navel orange anthracnose fruit, and Fig. 18 is the detection result of Fig. 17, in which the white area is the defect area.
Fig. 19 is an original image of a navel orange phytotoxicity fruit, and Fig. 20 is the detection result of Fig. 19, in which the white area is the defect area.
Fig. 21 is an original image of a navel orange heterochromous fruit, and Fig. 22 is the detection result of Fig. 21, in which the white area is the defect area.
Comparison of the original images and detection result images in the embodiments shows that the method detects defects accurately and practically, effectively avoids dependence on the shape and size of fruits and agricultural products as well as the complex influence of brightness correction, has a wide application range, and has great application value in computer vision on-line detection of fruit and agricultural product quality.
The foregoing detailed description is intended to illustrate rather than limit the invention; any changes and modifications that fall within the true spirit and scope of the invention are intended to be covered by the appended claims.

Claims (4)

1. A fruit surface defect detection method based on deep learning segmentation is characterized by comprising the following steps:
1) obtaining a fruit RGB color image, removing the background of the fruit RGB color image, and converting the fruit RGB color image into a gray image;
2) dividing the gray level image in the step 1) into a positive sample image and a negative sample image, and constructing training data for training a convolutional neural segmentation network, wherein the training data are the positive sample image and a corresponding binary image sample thereof, and the negative sample image and a corresponding binary image sample thereof, the positive sample image is a gray level image of a defective fruit, and the negative sample image is a gray level image of a non-defective fruit;
3) designing a convolutional neural segmentation network structure for fruit surface defect detection, training by using the training data in the step 2), and acquiring a network connection weight matrix T after training is completed;
the specific steps of designing the convolutional neural segmentation network structure for fruit surface defect detection in the step 3) are as follows:
3.1) designing a convolutional neural segmentation network with thirteen convolutional layers, twelve batch normalization layers, six maximum pooling layers, six upsampling layers, twelve ReLU function activation layers and one Sigmoid function classification layer;
3.2) the convolutional neural segmentation network is mainly composed of an input layer, six subunits, an output convolutional layer, a Sigmoid function classification layer and an output layer, wherein each subunit comprises a front feature network unit, a rear feature network unit and an upsampling layer; the input layer is connected to the front feature network unit of the first subunit, the front feature network units of the six subunits are connected and transfer data in sequence, the rear feature network units of the six subunits are connected and transfer data in sequence, the output of the rear feature network unit of the first subunit is connected to the output convolutional layer, and the output convolutional layer is connected to the output layer through the Sigmoid function classification layer;
inside each subunit, the front feature network unit is connected to the rear feature network unit through the upsampling layer, and the front feature network unit is also directly connected to the rear feature network unit; each front feature network unit internally comprises a convolutional layer, a batch normalization layer, a ReLU function activation layer and a maximum pooling layer connected in sequence in the order of data transfer; each rear feature network unit internally comprises a convolutional layer, a batch normalization layer and a ReLU function activation layer connected in sequence in the order of data transfer;
the output of the maximum pooling layer serves as the output of the front feature network unit and is transferred through the upsampling layer of the current subunit to the convolutional layer of the rear feature network unit, and is simultaneously transferred to the convolutional layer of the front feature network unit of the next subunit; the output of the ReLU function activation layer of the rear feature network unit serves as the output of the rear feature network unit and is transferred to the upsampling layer of the preceding subunit;
3.3) inputting the training data of step 2) into the input layer of the convolutional neural segmentation network, and training the network with the RMSprop optimization algorithm rather than the commonly used SGD algorithm until the network error reaches its minimum and convergence is complete, the loss being calculated as two-class cross entropy; the network connection weight matrix T constructed from the parameters of the trained convolutional neural segmentation network is then obtained;
4) converting an RGB color image of the fruit to be detected into a gray image, inputting the gray image into the convolutional neural segmentation network obtained in step 3), and obtaining the fruit surface defect image after image segmentation is completed.
2. The fruit surface defect detection method based on deep learning segmentation according to claim 1, characterized in that: the size of the gray image in step 1) needs to be uniformly adjusted to 512 × 512.
3. The fruit surface defect detection method based on deep learning segmentation according to claim 1, characterized in that: the specific steps of step 2) are as follows:
2.1) dividing the gray level image of the step 1) into a positive sample image and a negative sample image according to whether the fruit is defective or not;
2.2) constructing a positive sample image and a binarization image sample with a corresponding defective area pixel value marked as 255 and a non-defective area pixel value marked as 0, and constructing a negative sample image and a binarization image sample with a corresponding pixel value marked as 0, thereby forming training data containing defective fruits and non-defective fruits.
4. The fruit surface defect detection method based on deep learning segmentation according to claim 1, characterized in that: the convolutional neural segmentation network in the step 4) is a convolutional neural segmentation network loaded with a network connection weight matrix T.
CN201811203154.5A 2018-10-16 2018-10-16 Fruit surface defect detection method based on deep learning segmentation Active CN109377487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811203154.5A CN109377487B (en) 2018-10-16 2018-10-16 Fruit surface defect detection method based on deep learning segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811203154.5A CN109377487B (en) 2018-10-16 2018-10-16 Fruit surface defect detection method based on deep learning segmentation

Publications (2)

Publication Number Publication Date
CN109377487A CN109377487A (en) 2019-02-22
CN109377487B true CN109377487B (en) 2022-04-12

Family

ID=65399976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811203154.5A Active CN109377487B (en) 2018-10-16 2018-10-16 Fruit surface defect detection method based on deep learning segmentation

Country Status (1)

Country Link
CN (1) CN109377487B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458818A (en) * 2019-08-06 2019-11-15 苏州感知线智能科技有限公司 A kind of betel nut detection method based on neural network algorithm
CN111223083B (en) * 2020-01-06 2023-11-21 广东宜通联云智能信息有限公司 Construction method, system, device and medium of surface scratch detection neural network
CN113066079A (en) * 2021-04-19 2021-07-02 北京滴普科技有限公司 Method, system and storage medium for automatically detecting wood defects
CN113269251A (en) * 2021-05-26 2021-08-17 安徽唯嵩光电科技有限公司 Fruit flaw classification method and device based on machine vision and deep learning fusion, storage medium and computer equipment
CN114565607A (en) * 2022-04-01 2022-05-31 南通沐沐兴晨纺织品有限公司 Fabric defect image segmentation method based on neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101008621B (en) * 2007-01-12 2010-05-26 浙江大学 Method and device for detecting fruit defects based on multi-sensor information fusion
GB201504360D0 (en) * 2015-03-16 2015-04-29 Univ Leuven Kath Automated quality control and selection system
CN106339591B (en) * 2016-08-25 2019-04-02 汤一平 A kind of self-service healthy cloud service system of prevention breast cancer based on depth convolutional neural networks
WO2018136262A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
CN107633199A (en) * 2017-08-07 2018-01-26 浙江工业大学 A kind of apple picking robot fruit object detection method based on deep learning
CN108335300A (en) * 2018-06-22 2018-07-27 北京工商大学 A kind of food hyperspectral information analysis system and method based on CNN

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning

Also Published As

Publication number Publication date
CN109377487A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109377487B (en) Fruit surface defect detection method based on deep learning segmentation
CN111739075A (en) Deep network lung texture recognition method combining multi-scale attention
CN109359681B (en) Field crop pest and disease identification method based on improved full convolution neural network
CN107871316B (en) Automatic X-ray film hand bone interest area extraction method based on deep neural network
CN107966454A (en) A kind of end plug defect detecting device and detection method based on FPGA
CN103593670A (en) Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine
CN110991511A (en) Sunflower crop seed sorting method based on deep convolutional neural network
CN110414538A (en) Defect classification method, defect classification based training method and device thereof
Alipasandi et al. Classification of three varieties of peach fruit using artificial neural network assisted with image processing techniques.
CN113222959B (en) Fresh jujube wormhole detection method based on hyperspectral image convolutional neural network
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN106645180A (en) Method for checking defects of substrate glass, field terminal and server
Ünal et al. Classification of hazelnut kernels with deep learning
CN115035381A (en) Lightweight target detection network of SN-YOLOv5 and crop picking detection method
CN113421223B (en) Industrial product surface defect detection method based on deep learning and Gaussian mixture
Ji et al. Apple color automatic grading method based on machine vision
CN110929787B (en) Apple objective grading system based on image
Liang et al. Automated Detection of Coffee Bean Defects using Multi-Deep Learning Models
CN113269251A (en) Fruit flaw classification method and device based on machine vision and deep learning fusion, storage medium and computer equipment
CN112966781A (en) Hyperspectral image classification method based on triple loss and convolutional neural network
Herdajanti et al. Evaluation of histogram of oriented gradient (hog) and learning vector algorithm quantization (lvq) in classification carica vasconcellea cundinamarcencis
Sahitya et al. Quality Analysis on Agricultural Produce Using CNN
Chaugule et al. Seed technological development—A survey
CN117250322B (en) Red date food safety intelligent monitoring method and system based on big data
CN116485802B (en) Insulator flashover defect detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant