CN105718945B - Apple picking robot night image recognition method based on watershed and neural network - Google Patents

Apple picking robot night image recognition method based on watershed and neural network

Info

Publication number
CN105718945B
CN105718945B (application CN201610035900.9A)
Authority
CN
China
Prior art keywords
image
color
fragments
neural network
apple
Prior art date
Legal status
Active
Application number
CN201610035900.9A
Other languages
Chinese (zh)
Other versions
CN105718945A (en)
Inventor
赵德安
刘晓洋
贾伟宽
陈玉
姬伟
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201610035900.9A priority Critical patent/CN105718945B/en
Publication of CN105718945A publication Critical patent/CN105718945A/en
Application granted granted Critical
Publication of CN105718945B publication Critical patent/CN105718945B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Abstract

The invention discloses a watershed- and neural-network-based night image recognition method for an apple picking robot. Color images are acquired in an apple orchard at night under auxiliary illumination from an artificial light source. An improved watershed algorithm fragments each image along its edges, and the color and texture features of each fragment are extracted. A back-propagation artificial neural network is established and trained with the feature quantities of fragments of known class; the trained network then classifies each fragment by its feature quantities. Finally, misclassifications are filtered out according to the positional relationships among the fragments to correct the classification result and determine the position of each apple. By fragmenting the image and classifying the fragments, the method identifies apples at night while effectively suppressing the influence of uneven illumination, shadows and reflections caused by the artificial light source, improving the completeness and positioning accuracy of apple recognition.

Description

Apple picking robot night image recognition method based on watershed and neural network
Technical Field
The invention relates to fruit-tree picking robots, in particular to the technical field of night image recognition for an apple picking robot, and aims to recognize apple fruits at night based on a watershed algorithm and a neural network.
Background
China is the world's largest apple producer, and picking is one of the most time- and labor-consuming links in apple production, so the demand for mechanized picking is increasingly urgent. At present, however, the picking efficiency of apple picking robots is limited by image recognition speed and manipulator picking speed, which are difficult to match with manual picking. In 2008, Belgian researchers developed the apple picking robot AFPM: its picking rate for apples 6 cm to 11 cm in diameter was about 80%, with an average picking time of 9 seconds. The apple picking robot developed by the Chinese Academy of Agricultural Mechanization Sciences and Jiangsu University in 2011 achieved a picking success rate of 80% at 15 seconds per apple. A robot, however, does not tire and can work continuously for long periods, so overall working efficiency can instead be raised by operating around the clock. Day-and-night operation requires the picking robot's vision system to adapt to a variety of lighting conditions, of which picking under night-time artificial illumination is an important component.
In 2014, a domestic researcher used a support vector machine classifier with the normalized G component and the H and S components of the HSV color space as feature parameters, together with a threshold classifier using the excess-green operator (2G−R−B), to identify green apples under night illumination and estimate yield. Also in 2014, the Australian researcher A. Payne et al. collected mango images under night LED illumination and combined color features of the YCbCr color space with shape and texture features to identify mangoes and estimate their yield. In 2013, the American researcher D. Font et al. collected images of red ripe grapes under artificial illumination at night and counted the grapes by detecting specular-reflection peaks on the grape surface. In 2014, Stephen Nuske et al. combined color, shape and texture features to identify green grapes under a night artificial light source for yield estimation. Unlike yield estimation, however, a picking robot must locate each fruit accurately and suppress the interference introduced by the artificial light source.
The basic idea of the watershed algorithm is to regard the image as a topographic surface in which each pixel value represents the altitude of that point; each local minimum and its zone of influence is called a catchment basin. The formation of watersheds can be illustrated by simulating an immersion process: a small hole is pierced at each local minimum, and the whole model is slowly immersed in water. As the immersion deepens, the zone of influence of each local minimum gradually expands outward, and a dam built where two catchment basins meet forms a watershed.
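The immersion process described above can be sketched with SciPy's IFT-based watershed. The toy terrain, the marker positions and the use of `scipy.ndimage.watershed_ift` are illustrative assumptions for exposition only; the patent's improved algorithm instead operates on a filtered color-gradient image.

```python
import numpy as np
from scipy import ndimage

# Toy one-row "terrain": two basins (value 1) separated by a ridge (value 9).
terrain = np.array([[3, 1, 2, 9, 2, 1, 3]], dtype=np.uint8)

# Pierce a "hole" at each local minimum: markers 1 and 2 seed the flooding.
markers = np.zeros_like(terrain, dtype=np.int16)
markers[0, 1] = 1
markers[0, 5] = 2

# Flood from the markers; every pixel joins the basin it drains into.
labels = ndimage.watershed_ift(terrain, markers)
```

Pixels left of the ridge flood from marker 1 and pixels right of it from marker 2, with the ridge itself acting as the watershed between the two basins.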
Morphological open-close filtering performs an opening operation and then a closing operation on a binary image. Opening removes burrs, smooths edges and filters out small isolated points; closing bridges break points and fills small holes. Geometrically, opening rolls the structuring element along the inside of the original boundary while keeping the element entirely within the image; the positions reachable by points of the element closest to the inner boundary form the boundary of the opening result. Closing is the dual of opening: the structuring element rolls along the outside of the original boundary without entering the image, and the positions reachable by points of the element closest to the outer boundary form the boundary of the closing result.
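A minimal sketch of these two dual effects, assuming `scipy.ndimage` and a 3 × 3 square structuring element as used later in the patent; the toy images are hypothetical:

```python
import numpy as np
from scipy import ndimage

se = np.ones((3, 3), dtype=bool)   # 3x3 square structuring element

# Opening removes small isolated points while preserving the main object.
noisy = np.zeros((9, 9), dtype=bool)
noisy[2:7, 2:7] = True    # 5x5 object
noisy[0, 8] = True        # isolated noise pixel
opened = ndimage.binary_opening(noisy, structure=se)

# Closing fills small holes inside an object.
holed = np.zeros((9, 9), dtype=bool)
holed[2:7, 2:7] = True
holed[4, 4] = False       # one-pixel hole
closed = ndimage.binary_closing(holed, structure=se)
```

After opening, the isolated pixel is gone and the 5 × 5 object survives intact; after closing, the one-pixel hole is filled.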
Neural networks, also known as artificial neural networks (ANNs), abstract and simulate several basic characteristics of the human brain or of biological neural networks. The strong learning ability of a neural network greatly reduces the workload of analyzing and modeling the data and improves classification efficiency. Among them, the back-propagation artificial neural network (BP neural network) is one of the most widely used.
Disclosure of Invention
The invention aims to provide an image recognition method for an apple picking robot under the condition of an artificial light source at night based on a watershed and a neural network, which can effectively inhibit the influence of uneven illumination, shadow and reflection phenomena caused by the artificial light source on apple recognition and improve the completeness and positioning accuracy of apple recognition.
The technical scheme is as follows: a color image is collected in an apple orchard at night under auxiliary illumination from an artificial light source; the image is fragmented along its edges with an improved watershed algorithm; the color features and texture features of each fragment are extracted; a back-propagation artificial neural network is established and trained with the feature quantities of fragments of known class; the trained network then classifies each fragment by its feature quantities; finally, misclassifications are filtered out according to the positional relationships between fragments to correct the classification result and determine the position of the apples. The improved watershed algorithm changes the input of the watershed algorithm to the gradient image of the collected color image and applies median filtering and open-close filtering with a 3 × 3 template to the gradient image to smooth noise. Image fragmentation divides the image into fragments of different sizes along the edges detected by the improved watershed algorithm. The color features are the average value and variance, in the RGB color space, of the pixel points within an image fragment; the texture features are statistical features of the gray-level histogram of all pixels within the fragment, including the gray mean, standard deviation, smoothness and entropy.
Further, the specific process of acquiring the color image in the apple orchard under night-time auxiliary artificial illumination is as follows: first, a white LED lamp is used as the artificial auxiliary lighting source at night, and a CMOS color camera is selected to shoot the target fruit and complete image acquisition.
Further, the improved watershed algorithm comprises the following improved parts:
a) color image gradient calculation: compared with the gradient calculation of a gray image, the gradient calculation of a color image converts the calculation on a single gray into a three-dimensional vector calculation, and the gradient of the color image at the point (x, y) defines the formula as follows:
∇f(x, y) = (‖u‖² + ‖v‖²)^{1/2} = [(∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)² + (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²]^{1/2} (1)
wherein R, G, B are the color components at point (x, y), respectively;
r, g, b are
unit vectors along the R, G, B axes of the RGB color space, respectively;
u = (∂R/∂x) r + (∂G/∂x) g + (∂B/∂x) b and v = (∂R/∂y) r + (∂G/∂y) g + (∂B/∂y) b are
the gradient vectors of the color image in the x and y directions at point (x, y), respectively.
b) The gradient image is filtered by combining median filtering with morphological open-close filtering: median filtering is applied to the gradient image of the color image with a 3 × 3 square template, and open-close filtering is applied with a 3 × 3 square structuring element.
Further, the image fragmenting is to divide the image into fragments of different sizes based on edges detected by the improved watershed algorithm.
Further, the color features are the color average value and variance of the pixel points of the image fragment in the RGB color space; the texture features are statistical features of the gray-level histogram of all pixels in the image fragment, including the gray mean, standard deviation, smoothness and entropy.
Further, the back propagation artificial neural network training is to take a plurality of image fragments of apples, leaves, branches and backgrounds respectively, extract color features and texture features of the image fragments as neural network input, take category numbers corresponding to the fragments as neural network output, then train for multiple times and select a network with the minimum training error as a network finally used for classification; the back propagation artificial neural network classification is to extract the color features and texture features of each fragment in the image and divide the fragments into 4 classes of apples, leaves, branches and backgrounds by using a trained neural network.
Further, the positional relationship between the patches is an adjacency relationship between image patches described using a region adjacency graph.
Further, filtering out the misclassification means treating apple fragments that are isolated, or adjacent to only one other apple fragment, as misclassified fragments and filtering them out.
The invention has the beneficial effects that:
1) the invention adopts a watershed algorithm to carry out edge detection on an original image and indirectly applies edge information to fragment the image so as to realize regional description of the image.
2) The conventional watershed algorithm is particularly sensitive to noise, which easily degrades the image gradient and shifts the segmentation contour, and it readily produces over-segmentation. The invention applies the classical optimization of segmenting a filtered gradient image, selecting and improving the gradient computation method and the filter template size for the characteristics of night images. The gradient computation in the RGB color vector space makes the detail expression of the gradient image more complete and lays a foundation for the subsequent watershed segmentation; to suppress the noise-induced over-segmentation of the watershed algorithm, median filtering is combined with morphological open-close filtering of the gradient image.
3) The watershed algorithm divides the image into regional fragments with moderate sizes, so that the target apple can be classified from a complex background only by classifying the image fragments and screening out the image fragments belonging to the target fruit. The method extracts the color features and the texture features of the fragments as the basis for fragment classification. By the method for fragmenting the image and classifying the image fragments according to the color and texture characteristics of the fragments, the influence of uneven illumination, shadow and reflection phenomena caused by an artificial light source on apple identification can be effectively inhibited, and the completeness and the positioning accuracy of the apple identification are improved.
4) The method classifies the fragments by using a back propagation Artificial Neural Network (BP Neural Network) according to the color characteristics and the texture characteristics of the image fragments, and further corrects the classified fragments according to the spatial position relationship among the fragments. The learning capability of the BP neural network can save the workload of artificial modeling in the early stage and achieve satisfactory effect on the training precision.
Drawings
FIG. 1 is an image under a night LED light source;
FIG. 2 is a diagram of an effect of an original watershed segmentation;
FIG. 3 is a flow diagram of an improved watershed algorithm;
FIG. 4 is a diagram of an improved watershed segmentation effect;
FIG. 5 shows partially enlarged sample views of the different object classes;
FIG. 6 is a topology diagram of a BP neural network;
FIG. 7 is a graph of image fragment classification effect;
FIG. 8 is a schematic diagram of region adjacency generation.
Detailed Description
The following further describes embodiments of the present invention with reference to the accompanying drawings.
The method mainly comprises three parts: image fragmentation, fragment feature extraction and fragment classification. An improved watershed algorithm fragments the acquired image along its edges; the color features and texture features of each fragment are extracted; a neural network is established and trained with the feature quantities of fragments of known class; the trained network classifies each fragment by its feature quantities; apple fragments that were wrongly classified are filtered out according to the positional relationships among the fragments, and the position of the apple is determined. The specific steps are as follows:
1. image fragmentation
First, a white LED lamp is used as the artificial auxiliary lighting source at night, and a CMOS color camera is selected to shoot the target fruit and complete image acquisition. The collected night apple images are affected by the artificial light source: uneven illumination arises easily, local shadows and reflections form, the brightness of local areas becomes too high or too low, and detail and color information are damaged.
When the image is fragmented along its edges, the pixels within one fragment remain relatively consistent in color or gray level, varying only within a limited range. This relative consistency means that describing the image in terms of fragment regions is less sensitive to illumination than describing it pixel by pixel. Edges are one of the basic features of an image, and detecting and applying edge information helps improve recognition accuracy. In images with complex backgrounds the application of edge information is limited, and it is used mostly for image recognition against simple backgrounds. The method therefore uses a watershed algorithm for edge detection of the original image and applies the edge information indirectly to fragment the image, realizing a regionalized description of the image.
Compared with edge detection operators such as the Roberts, Sobel and Prewitt operators, the watershed algorithm produces a more stable segmentation result and forms complete boundaries that fragment the image into regions of relatively consistent internal color or gray level. The watershed algorithm also has considerable disadvantages: first, it is particularly sensitive to noise, which easily degrades the image gradient and shifts the segmentation contour; second, it readily over-segments. The effect of segmenting the image with the original, unimproved watershed algorithm is shown in FIG. 2: the over-segmentation is severe, and the segmentation result is essentially unusable.
Therefore, the watershed algorithm needs to be improved necessarily to inhibit noise and over-segmentation phenomena, and the watershed segmentation after the image is transformed into a gradient image and filtered and smoothed is a classic watershed optimization algorithm. The invention optimizes the watershed segmentation effect by applying the classical algorithm and properly selects and improves the gradient image calculation method and the size of the filtering template aiming at the characteristics of the night image, and the specific flow is shown in figure 3.
The conventional color image gradient calculation is to calculate the gradient after graying the color image or calculate the gradient after splitting the color image into a plurality of images with single color channels, and then superpose and synthesize the images. However, the gradient images generated by the two methods are not accurate enough, and the effect of watershed segmentation is affected. In order to accurately express the gradient of the color image, the calculation about single gray in the gradient calculation formula needs to be converted into three-dimensional vector calculation. Therefore, the gradient calculation method of the RGB color vector space can enable the detail expression of the gradient image to be more perfect, and a foundation is laid for the following watershed segmentation. The gradient of the color image at point (x, y) defines the formula as follows:
∇f(x, y) = (‖u‖² + ‖v‖²)^{1/2} = [(∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)² + (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²]^{1/2} (1)
wherein R, G, B are the color components at point (x, y), respectively;
r, g, b are
unit vectors along the R, G, B axes of the RGB color space, respectively;
u = (∂R/∂x) r + (∂G/∂x) g + (∂B/∂x) b and v = (∂R/∂y) r + (∂G/∂y) g + (∂B/∂y) b are
the gradient vectors of the color image in the x and y directions at point (x, y), respectively. The main differences between the definition of color image gradients and the definition of grayscale image gradients are: the gradient of the grayscale image along the x or y axis is a scalar; the gradient of a color image along the x or y axis is a vector, and the composition of the gradients for the individual color components is calculated as a vector rather than simply superimposed.
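A minimal numpy sketch of this per-channel vector gradient. Using `np.gradient` for the partial derivatives and combining the squared channel derivatives along x and y into a single magnitude is one simple reading of the definition above, assumed here for illustration:

```python
import numpy as np

def color_gradient(img):
    """Gradient magnitude of an RGB image, treating the per-channel
    derivatives along x and y as the components of the vectors u and v."""
    # np.gradient returns the derivative along axis 0 (y) first, then axis 1 (x)
    dy, dx = np.gradient(img.astype(np.float64), axis=(0, 1))
    # |u|^2 = sum of squared channel derivatives along x, likewise |v|^2 along y
    return np.sqrt((dx ** 2).sum(axis=-1) + (dy ** 2).sum(axis=-1))

# Vertical step edge: the gradient is zero in flat areas, positive near the edge
img = np.zeros((4, 4, 3))
img[:, 2:, :] = 1.0
mag = color_gradient(img)
```

On the step image the magnitude vanishes in the two flat regions and peaks on the columns adjacent to the color transition.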
The invention filters the gradient image by combining median filtering with morphological open-close filtering: the median filter uses a 3 × 3 square template, and the structuring element of the open-close filtering is likewise a 3 × 3 square. The smoothed color gradient image is then segmented by the watershed algorithm, dividing the image into fragments of different sizes; the segmentation effect is shown in FIG. 4.
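The smoothing step can be sketched with `scipy.ndimage` as follows; the random array stands in for a real gradient image and the grey-level morphology calls are an assumption about how the open-close filtering is realized:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
grad = rng.random((32, 32))            # stand-in for a noisy gradient image

med = ndimage.median_filter(grad, size=3)   # 3x3 median filter

se = np.ones((3, 3), dtype=bool)            # 3x3 square structuring element
opened = ndimage.grey_opening(med, footprint=se)     # grey-level opening
smoothed = ndimage.grey_closing(opened, footprint=se)  # then closing
```

Opening is anti-extensive and closing extensive, so the result stays bounded by the median-filtered image from below and above, which is what tames the spurious minima that cause over-segmentation.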
2. Fragment feature extraction
Watershed algorithms have segmented images into moderately sized regionalized fragments, so that target apples can be classified from complex backgrounds by only classifying the image fragments and screening out image fragments belonging to target fruits. The method extracts the color features and the texture features of the fragments as the basis for fragment classification.
2.1 color feature extraction
Color features are the most widely used visual features in image retrieval, mainly because color tends to correlate strongly with the objects or scenes in an image, and color features depend less on the size, orientation and viewing angle of the image than other visual features do. Describing color features first requires a suitable color space; the RGB color space is the most commonly used, and the original image data are themselves expressed in RGB, so the RGB color space is used directly. To reflect the consistency and difference of the color attributes of the pixels in each fragment, the color average value of the pixels in the fragment
c̄ = (1/n) Σ_{i=1}^{n} c_i
and the variance Var are taken as the color feature quantities of the fragment, wherein the variance Var is calculated as follows:
Var = (1/n) Σ_{i=1}^{n} (c_i − c̄)² (2)
wherein n is the total number of pixel points in the fragment and c_i is the color value of the ith pixel point in the fragment.
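A small sketch of the per-fragment color statistics; pooling the three per-channel variances into one Var value is an assumption about the patent's single variance feature:

```python
import numpy as np

def color_features(img, mask):
    """Mean R, G, B and a single pooled variance over one fragment's pixels."""
    pixels = img[mask].astype(np.float64)   # (n, 3) pixels of the fragment
    mean = pixels.mean(axis=0)              # per-channel color average
    var = ((pixels - mean) ** 2).mean()     # pooled variance over channels
    return mean, var

# A perfectly uniform fragment has zero variance
img = np.zeros((2, 2, 3))
img[...] = (10.0, 20.0, 30.0)
mask = np.ones((2, 2), dtype=bool)
mean, var = color_features(img, mask)
```

The boolean mask selects the fragment's pixels from the full image, so the same function works for fragments of any shape produced by the watershed segmentation.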
2.2 textural feature extraction
The texture feature is one of the inherent features of an image: a pattern produced by the spatial variation of gray level or color in some form, sometimes with a certain periodicity. FIG. 5 shows partially enlarged images of a cut apple, leaf, branch and background. The texture features of an image fragment are the statistical features of the gray-level histogram of all pixels in the fragment: gray mean, standard deviation, smoothness and entropy. The color image fragments must first be converted to gray scale with the following formula:
Gray = 0.299 R + 0.587 G + 0.114 B (3), where Gray is the gray value and R, G and B are the three color components of the pixel point in the RGB color space.
(1) Mean gray level: an average measure of texture brightness.
m = Σ_{i=0}^{L−1} z_i p(z_i) (4)
where m is the gray mean, L is the total number of gray levels, z_i represents the ith gray level, and p(z_i) is the probability of gray level z_i.
(2) Standard deviation: a measure of texture average contrast.
σ = [Σ_{i=0}^{L−1} (z_i − m)² p(z_i)]^{1/2} (5)
In the formula: σ is the standard deviation; the other parameters are as defined in formula (4).
(3) Smoothness: the relative smoothness measure of texture brightness.
R = 1 − 1/(1 + σ²) (6)
In the formula: r is smoothness; σ is the standard deviation.
(4) Entropy: a measure of randomness.
e = −Σ_{i=0}^{L−1} p(z_i) log₂ p(z_i) (7)
In the formula: e is entropy; the other parameters are as defined in formula (4).
3. Debris classification
The method classifies the fragments by using a back propagation Artificial Neural Network (BP Neural Network) according to the color characteristics and the texture characteristics of the image fragments, and further corrects the classified fragments according to the spatial position relationship among the fragments.
3.1BP neural network classification
The back propagation artificial neural network classification is to extract the color features and texture features of each fragment in the image and divide the fragments into 4 classes of apples, leaves, branches and backgrounds by using a trained neural network.
The specific process is as follows: the input data of the BP neural network are the color averages, variance, gray mean, standard deviation, smoothness and entropy of each image fragment (R̄, Ḡ, B̄, Var, m, σ, R, e), giving an input dimensionality of 8. The output of the BP neural network falls into 4 classes: apples, leaves, branches and background.
In order to make the established BP neural network exert the necessary classification effect, the necessary training is required. The back propagation artificial neural network training is to take a plurality of image fragments of apples, leaves, branches and backgrounds respectively and extract color features and texture features of the image fragments as neural network input, the class numbers corresponding to the fragments are used as neural network output, and then multiple times of training are carried out and a network with the minimum training error is selected as a network finally used for classification.
The specific process is as follows: collecting 60 image fragments of apples, leaves, branches and backgrounds respectively and calculating color average value, variance, gray level average value, standard deviation, smoothness and entropy of the image fragments to serve as input data of a BP neural network; meanwhile, a corresponding class label is established as output data of training, the network is trained for 20 times, the network with the minimum training error is selected as a final classification network to classify the images, and the classification effect is shown in fig. 7. As can be seen from fig. 7, the network classifies the image patches into four categories, where red patches represent apples, green patches represent leaves, gray patches represent branches, and black patches represent the background.
3.2 debris position relationship correction
Misclassifications are filtered out according to the positional relationships among the fragments to correct the classification result and determine the apple positions: apple fragments that are isolated, or adjacent to only one other apple fragment, are treated as misclassified and filtered out.
A small deviation in identifying branch and leaf areas has little effect on apple picking, but misjudging the target apple greatly interferes with target positioning and recognition, so interference from non-target regions must be filtered out. Misjudged regions, such as the smaller red regions in FIG. 7, are characterized by small area and isolated position. Since the fragments adjacent to a misjudged region usually contain no, or few, fragments of the same class, the spatial isolation of a misjudged region is an important feature distinguishing it from other regions.
To describe the relationship between different fragments in two-dimensional space, the positional relationship between fragments is the adjacency relationship between image fragments described by a region adjacency graph (RAG). First, a region adjacency graph is established to express the adjacency between different fragment regions. The region adjacency graph is an N × N binary matrix P, where N is the number of image fragments; P(i, j) = 1 (i, j = 1, 2, 3 … N) indicates that the ith image fragment is adjacent to the jth, and P(i, j) = 0 that they are not adjacent. The specific generation of the adjacency graph is shown in FIG. 8.
The number of adjacent same-class fragments around each apple fragment is counted according to the region adjacency graph. Statistics over many images show that most misjudged apple fragments have 0 or 1 adjacent same-class fragments; therefore, in the experiment, the number of adjacent same-class fragments of the apple fragments in each image is counted, and fragments with a count of 0 or 1 are treated as misjudged and filtered out to reduce interference.
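The adjacency matrix and the neighbour-count filter can be sketched as follows; the 4-fragment label image and its apple/non-apple class assignment are hypothetical:

```python
import numpy as np

def adjacency_matrix(labels):
    """N x N region adjacency graph P from a label image whose fragments
    are numbered 1..N; P[i, j] = 1 means fragments i+1 and j+1 touch
    (4-connectivity)."""
    n = int(labels.max())
    P = np.zeros((n + 1, n + 1), dtype=np.uint8)
    a, b = labels[:, :-1], labels[:, 1:]   # horizontal neighbours
    P[a, b] = 1; P[b, a] = 1
    a, b = labels[:-1, :], labels[1:, :]   # vertical neighbours
    P[a, b] = 1; P[b, a] = 1
    np.fill_diagonal(P, 0)                 # a fragment is not its own neighbour
    return P[1:, 1:]

# Hypothetical 4-fragment label image and class assignment
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 4, 4]])
is_apple = np.array([True, False, True, True])   # fragments 1, 3, 4

P = adjacency_matrix(labels)
apple_neighbours = P[:, is_apple].sum(axis=1)[is_apple]
keep = apple_neighbours >= 2   # 0 or 1 apple neighbours -> misjudged, filtered
```

Here fragments 1 and 4 each touch only one other apple fragment and would be filtered, while fragment 3, with two apple neighbours, is retained.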
Finally, a closing operation is performed on the apple fragments so that adjacent apple fragments merge into a whole, followed by hole filling; the structuring element of the closing operation is a 3 × 3 square. The center of each connected apple region is then computed to determine the center position of the apple.
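This final merge-and-locate step can be sketched with `scipy.ndimage`; the toy mask with two fragments separated by a one-pixel gap is hypothetical:

```python
import numpy as np
from scipy import ndimage

# Two retained apple fragments separated by a one-pixel gap
apple_mask = np.zeros((10, 10), dtype=bool)
apple_mask[2:5, 2:5] = True
apple_mask[2:5, 6:8] = True

se = np.ones((3, 3), dtype=bool)
closed = ndimage.binary_closing(apple_mask, structure=se)  # merge neighbours
filled = ndimage.binary_fill_holes(closed)                 # fill holes
regions, n = ndimage.label(filled)                         # connected apples
centers = ndimage.center_of_mass(filled, regions, range(1, n + 1))
```

The 3 × 3 closing bridges the gap, so the two fragments label as a single connected apple region whose centroid gives the apple's center position.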
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (2)

1. A night image recognition method of an apple picking robot based on watershed and neural network is characterized by comprising the following steps:
acquiring a color image in an apple orchard at night under auxiliary illumination from an artificial light source, and segmenting the image into fragments along image edges by adopting an improved watershed algorithm, wherein the improvement is that the input of the watershed algorithm is changed to the gradient image of the acquired color image, and the gradient image is subjected to median filtering and open-close filtering with a 3 × 3 template to smooth noise; then extracting the color feature and the texture feature of each fragment, establishing a back-propagation artificial neural network, and training it with the image fragment feature quantities of apples, leaves, branches and backgrounds; then classifying each fragment according to its feature quantity by using the trained neural network; and finally filtering out misclassifications according to the positional relationship among the fragments to correct the classification result and determine the position of the apples;
the improved watershed algorithm comprises the following improved parts:
a) color image gradient calculation: compared with the gradient calculation of a gray image, the gradient calculation of a color image turns the calculation on a single gray value into a three-dimensional vector calculation, and the gradient of the color image at the point (x, y) is defined as:

∇f(x, y) = (u·u + v·v)^(1/2)

wherein R, G, B are the color components at point (x, y);

r, g, b are the unit vectors along the R, G, B axes of the RGB color space;

u = (∂R/∂x)r + (∂G/∂x)g + (∂B/∂x)b and v = (∂R/∂y)r + (∂G/∂y)g + (∂B/∂y)b

are the gradient vectors of the color image in the x and y directions at point (x, y);
b) filtering the gradient image by a method combining median filtering with morphological open-close filtering, namely performing median filtering on the gradient image of the color image with a 3 × 3 square template, and performing open-close filtering on the gradient image with a 3 × 3 square structuring element;
the back propagation artificial neural network training is to take a plurality of image fragments of apples, leaves, branches and backgrounds respectively, extract color features and texture features of the image fragments as neural network input, take category numbers corresponding to the fragments as neural network output, then carry out multiple times of training and select a network with the minimum training error as a network finally used for classification; the back propagation artificial neural network classification is to extract the color characteristic and the texture characteristic of each fragment in the image and divide the fragments into 4 classes of apples, leaves, branches and backgrounds by using the trained neural network;
the specific process of collecting color image in apple orchard by artificial light source auxiliary lighting at night includes the steps of taking white L ED lamp as artificial auxiliary lighting source at night, and taking CMOS color camera to shoot target fruit to complete image collection;
the image fragmenting is to divide the image into fragments with different sizes on the basis of edges detected by an improved watershed algorithm;
the color features are the color average value and the variance of pixel points in the image fragment in the RGB color space; the texture features are statistical features of all pixel gray level histograms in the image fragments, and include: mean gray level, standard deviation, smoothness, entropy;
the positional relationship between the patches is an adjacency relationship between image patches described using a region adjacency graph.
2. The method for identifying nighttime images of an apple picking robot based on watershed and neural network as claimed in claim 1, wherein the filtering out of misclassification is to filter out apple fragments which are isolated or adjacent to only one apple fragment as misclassified fragments.
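As a non-limiting illustration of the improved watershed input of claim 1, the gradient image of the colour image can be computed from the per-channel gradient vectors u and v in the x and y directions. The sketch below assumes central differences and the magnitude (u·u + v·v)^(1/2); it is not the patent's exact implementation:

```python
import numpy as np

def color_gradient_magnitude(img):
    """Per-pixel gradient magnitude of an RGB image.

    img: H x W x 3 float array.  u and v hold one gradient component per
    colour channel (the vector sum over the R, G, B unit directions);
    the magnitude is sqrt(u.u + v.v).
    """
    u = np.zeros_like(img)                       # d/dx (along columns)
    v = np.zeros_like(img)                       # d/dy (along rows)
    u[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    v[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return np.sqrt((u ** 2 + v ** 2).sum(axis=2))
```

This gradient image is what would then be median-filtered and open-close filtered with 3 × 3 templates before being handed to the watershed transform.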
CN201610035900.9A 2016-01-20 2016-01-20 Apple picking robot night image recognition method based on watershed and neural network Active CN105718945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610035900.9A CN105718945B (en) 2016-01-20 2016-01-20 Apple picking robot night image recognition method based on watershed and neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610035900.9A CN105718945B (en) 2016-01-20 2016-01-20 Apple picking robot night image recognition method based on watershed and neural network

Publications (2)

Publication Number Publication Date
CN105718945A CN105718945A (en) 2016-06-29
CN105718945B true CN105718945B (en) 2020-07-31

Family

ID=56147397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610035900.9A Active CN105718945B (en) 2016-01-20 2016-01-20 Apple picking robot night image recognition method based on watershed and neural network

Country Status (1)

Country Link
CN (1) CN105718945B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910197B (en) * 2017-01-13 2019-05-28 广州中医药大学 A kind of dividing method of the complex background leaf image in single goal region
CN107292353A (en) * 2017-08-09 2017-10-24 广东工业大学 A kind of fruit tree classification method and system
US11023593B2 (en) 2017-09-25 2021-06-01 International Business Machines Corporation Protecting cognitive systems from model stealing attacks
US10657259B2 (en) * 2017-11-01 2020-05-19 International Business Machines Corporation Protecting cognitive systems from gradient based attacks through the use of deceiving gradients
CN107638289B (en) * 2017-11-06 2018-06-01 王红霞 Nasal cavity dirt real-time detector
CN108029340A (en) * 2017-12-05 2018-05-15 江苏科技大学 A kind of picking robot arm and its control method based on adaptive neural network
US10790432B2 (en) 2018-07-27 2020-09-29 International Business Machines Corporation Cryogenic device with multiple transmission lines and microwave attenuators
CN109583333B (en) * 2018-11-16 2020-12-11 中证信用增进股份有限公司 Image identification method based on flooding method and convolutional neural network
CN109858482B (en) * 2019-01-16 2020-04-14 创新奇智(重庆)科技有限公司 Image key area detection method and system and terminal equipment
CN110321817A (en) * 2019-06-20 2019-10-11 苏州经贸职业技术学院 A kind of loquat recognition methods
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system
CN111160180A (en) * 2019-12-16 2020-05-15 浙江工业大学 Night green apple identification method of apple picking robot
CN111401121A (en) * 2019-12-18 2020-07-10 浙江工业大学 Method for realizing citrus segmentation based on super-pixel feature extraction
CN111915704A (en) * 2020-06-13 2020-11-10 东北林业大学 Apple hierarchical identification method based on deep learning
CN111783693A (en) * 2020-07-06 2020-10-16 深圳市多彩汇通实业有限公司 Intelligent identification method of fruit and vegetable picking robot
CN113076819A (en) * 2021-03-17 2021-07-06 山东师范大学 Fruit identification method and device under homochromatic background and fruit picking robot
CN113255434B (en) * 2021-04-08 2023-12-19 淮阴工学院 Apple identification method integrating fruit characteristics and deep convolutional neural network
CN113570001B (en) * 2021-09-22 2022-02-15 深圳市信润富联数字科技有限公司 Classification identification positioning method, device, equipment and computer readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500439A (en) * 2013-09-03 2014-01-08 西安理工大学 Image printing method based on image processing technology

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE602007012132D1 (en) * 2006-09-28 2011-03-03 Acro Khlimburg AUTONOMOUS FRUIT PICKING MACHINE
CN101726251A (en) * 2009-11-13 2010-06-09 江苏大学 Automatic fruit identification method of apple picking robot on basis of support vector machine
CN102113434B (en) * 2011-01-14 2012-08-15 江苏大学 Picking method of picking robot under fruit oscillation condition
CN102165880A (en) * 2011-01-19 2011-08-31 南京农业大学 Automatic-navigation crawler-type mobile fruit picking robot and fruit picking method
CN104646305A (en) * 2013-11-25 2015-05-27 王健 Machine vision-based apple online automatic classification separation system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500439A (en) * 2013-09-03 2014-01-08 西安理工大学 Image printing method based on image processing technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"机器人采摘苹果果实的K-means和GA-RBF-LMS神经网络识别" (K-means and GA-RBF-LMS neural network recognition of apple fruits for picking robots); Jia Weikuan et al.; Transactions of the Chinese Society of Agricultural Engineering; 2015-09-30; pp. 175-183 *

Also Published As

Publication number Publication date
CN105718945A (en) 2016-06-29

Similar Documents

Publication Publication Date Title
CN105718945B (en) Apple picking robot night image recognition method based on watershed and neural network
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
Liu et al. A detection method for apple fruits based on color and shape features
Dias et al. Multispecies fruit flower detection using a refined semantic segmentation network
Aquino et al. Automated early yield prediction in vineyards from on-the-go image acquisition
CN106778902B (en) Dairy cow individual identification method based on deep convolutional neural network
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
Lu et al. Immature citrus fruit detection based on local binary pattern feature and hierarchical contour analysis
Majeed et al. Apple tree trunk and branch segmentation for automatic trellis training using convolutional neural network based semantic segmentation
CN106951836B (en) crop coverage extraction method based on prior threshold optimization convolutional neural network
CN108537239B (en) Method for detecting image saliency target
CN108319973A (en) Citrusfruit detection method on a kind of tree
CN102214306B (en) Leaf disease spot identification method and device
CN110610506B (en) Image processing technology-based agaricus blazei murill fruiting body growth parameter detection method
Hernández-Rabadán et al. Integrating SOMs and a Bayesian classifier for segmenting diseased plants in uncontrolled environments
CN113255434B (en) Apple identification method integrating fruit characteristics and deep convolutional neural network
Ouyang et al. The research of the strawberry disease identification based on image processing and pattern recognition
Wang et al. Combining SUN-based visual attention model and saliency contour detection algorithm for apple image segmentation
CN111798470A (en) Crop image entity segmentation method and system applied to intelligent agriculture
CN111784764A (en) Tea tender shoot identification and positioning algorithm
Liu et al. Development of a machine vision algorithm for recognition of peach fruit in a natural scene
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN115862004A (en) Corn ear surface defect detection method and device
Rahman et al. Identification of mature grape bunches using image processing and computational intelligence methods
Tran et al. Automatic dragon fruit counting using adaptive thresholds for image segmentation and shape analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant