CN111401121A - Method for realizing citrus segmentation based on super-pixel feature extraction - Google Patents

Method for realizing citrus segmentation based on super-pixel feature extraction

Info

Publication number
CN111401121A
Authority
CN
China
Prior art keywords
citrus
segmentation
superpixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911310690.XA
Other languages
Chinese (zh)
Inventor
杨庆华
陈一钦
鲍官军
荀一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201911310690.XA priority Critical patent/CN111401121A/en
Publication of CN111401121A publication Critical patent/CN111401121A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/10 Terrestrial scenes
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/11 Region-based segmentation
    • G06T 7/90 Determination of colour characteristics
    • G06V 10/40 Extraction of image or video features
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30188 Vegetation; Agriculture
    • G06V 20/68 Food, e.g. fruit or vegetables

Abstract

The invention discloses a method for realizing citrus segmentation based on super-pixel feature extraction, which comprises the following steps: S1: acquiring a citrus image; S2: performing super-pixel segmentation on the citrus image with a segmentation algorithm; S3: extracting the super-pixel color features and super-pixel texture features of the citrus image and then fusing them; S4: training a super-pixel prediction model; S5: after classifying the super-pixels, extracting the super-pixel mask predicted as citrus and post-processing the obtained mask. The method can still identify fruit accurately under different illumination conditions, mutual occlusion of citrus fruits, color differences between citrus at different ripening stages, and changes in the relative position of the light source caused by the unavoidable displacement of the robot during operation.

Description

Method for realizing citrus segmentation based on super-pixel feature extraction
Technical Field
The invention relates to the field of image processing, in particular to a method for realizing citrus segmentation based on super-pixel feature extraction.
Background
At present, citrus is one of the most widely planted fruits in China and one of the world's major traded agricultural products, second only to wheat and corn. As an important trade-oriented agricultural industry, citrus is planted widely in China: the planting area ranks first in the world and the total output ranks third. According to statistics, by 2016 the effective cultivation area of citrus in China had reached 38.3505 million mu and the total yield had reached 36.188 million tons.
In citrus production, fruit picking is a relatively complex operation and accounts for a large share of the total work. Because of this complexity the degree of automation is low, and citrus is usually picked by hand, which is labor-intensive and generates a large labor cost. According to statistics, picking accounts for 33%-50% of the total production cost of citrus fruit. Using a picking robot instead of manual labor can reduce labor intensity, save limited labor resources, lower labor costs, and improve labor productivity.
Before the robot picks ripe citrus, it must identify the fruit, extract the relevant features, and locate it. In an unstructured natural scene, however, it is difficult to segment citrus from the complex background: the variation in illumination can be far greater than the color difference between fruit and leaves, so segmentation based on color difference alone rarely achieves an ideal result. In addition, under strong illumination part of the fruit region may be saturated, leaving large holes in the segmented image.
Different illumination conditions, mutual occlusion of citrus fruits, color differences between citrus at different ripening stages in the natural environment, and changes in the relative position of the fruit and the light source caused by the unavoidable displacement of the robot during operation all seriously affect accurate fruit identification, accurate positioning of picking points, and execution of the picking action. This reduces picking efficiency and can even damage the machine through collision with hard obstacles. Image recognition by machine vision is therefore the key to accurate and orderly operation in mechanized, integrated agricultural management and the basis of subsequent work.
Chinese patent publication No. CN109711317A, published on 2019-05-03, discloses a segmentation and identification method for mature citrus fruits, branches, and leaves based on regional characteristics. That application generates feature vectors from the color features of a color image and reduces their dimensionality with a feature mapping table; it then determines the size of the region of interest (ROI) of the target from the working space of the picking robot, the field of view of the binocular camera, and the size of citrus fruits, using the ratio of pixels within the target range in the R and B channels as the basis for selecting the ROI; finally, among the initially selected ROIs with high overlap, the ROI with the highest score is chosen as the optimal segmentation and identification area. That application does not screen superpixels, so training time and difficulty cannot be reduced; it also does not disclose an optimal range for the segmentation number, so the segmentation number cannot be determined from an optimal super-pixel segmentation range.
Disclosure of Invention
To overcome the problem in the prior art that changes in the relative position of the light source caused by the unavoidable displacement of the robot during operation seriously affect accurate fruit identification, the invention provides a method for realizing citrus segmentation based on super-pixel feature extraction that can still identify fruit accurately under different illumination conditions, mutual occlusion of citrus fruits, color differences between citrus at different ripening stages, and such changes in the relative position of the light source.
In order to achieve this purpose, the invention adopts the following technical scheme: a method for realizing citrus segmentation based on super-pixel feature extraction, comprising the following steps:
S1: acquiring a citrus image;
S2: performing super-pixel segmentation on the citrus image by using a segmentation algorithm;
S3: extracting the super pixel color feature and the super pixel texture feature of the citrus image, and then performing feature fusion on the super pixel color feature and the super pixel texture feature of the citrus image;
S4: training a superpixel prediction model;
S5: after classifying the super-pixels, extracting the super-pixel mask predicted as citrus and post-processing the obtained mask. The invention uses super-pixel segmentation and feature extraction to train a super-pixel feature model, performs super-pixel pre-segmentation and classification, generates a binary mask and post-processes it, and finally extracts the segmented citrus image through the mask. The citrus regions can thus be identified in the picture and, combined with the shooting position and angle, the position of the citrus relative to the camera can be calculated, laying a foundation for mechanical harvesting of citrus.
Preferably, in step S1 the citrus image is acquired by shooting a citrus tree with a camera.
Preferably, the step S2 includes the steps of:
S21: selecting the segmentation number K of the super-pixels according to the under-segmentation rate UE;
S22: segmenting the citrus image into K super-pixel units with a segmentation algorithm. Because the final purpose of the invention is to obtain a complete segmented image of the citrus under natural illumination, the criterion is whether the super-pixel segmentation algorithm segments out every citrus region independently; the invention therefore uses the under-segmentation rate UE to select the segmentation number K of the super-pixels.
Preferably, the specific process of S21 is as follows: the under-segmentation rate UE is calculated as
UE = ( Σ_{i=1}^{n} Σ_{S_k ∩ G_i ≠ ∅} |S_k| − Σ_{i=1}^{n} |G_i| ) / Σ_{i=1}^{n} |G_i|
where G_i denotes one of the marked citrus regions, i = 1, 2, 3, …, n, n is the number of citrus regions marked in a given image, and S_k denotes a super-pixel obtained by segmentation, k = 1, 2, 3, …, K;
A plurality of citrus images are selected and the citrus parts in them are marked; using the mask images generated from the marked citrus images, each citrus image is segmented into K super-pixel units with different segmentation numbers K, and the Mean and Standard deviation of the under-segmentation rate are calculated after each citrus image is segmented:
Mean = (1/n) Σ_{i=1}^{n} P_i
Standard = √( (1/n) Σ_{i=1}^{n} (P_i − Mean)² )
where P_i is the under-segmentation rate of the super-pixels of each citrus picture and n is the number of super-pixels into which each citrus picture is segmented;
According to how the Mean and Standard deviation of the under-segmentation rate vary with the segmentation number K, the segmentation number K is selected as 350-450. In this range the mean UE is at a local minimum and no longer decreases significantly as K increases further, and the standard deviation of UE likewise shows no significant further reduction, which indicates that the super-pixel segmentation already performs well. Increasing K to 800 or more would only consume more super-pixel segmentation time and hinder real-time segmentation, so the segmentation number K is taken as 350-450.
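To make the selection criterion concrete, the sketch below computes the under-segmentation rate UE for one labelled citrus image, assuming the form of UE reconstructed above (super-pixel area leaking outside the marked citrus regions, normalised by the total citrus area). The function name and the convention that label 0 marks the background are illustrative assumptions, not part of the patent.

```python
import numpy as np

def under_segmentation_rate(gt_labels, sp_labels):
    """Under-segmentation rate UE of one image: for every marked citrus region G_i,
    sum the areas of all superpixels S_k that overlap it, then compare the total
    overlapped superpixel area with the total marked citrus area."""
    leaked_area = 0   # total area of superpixels overlapping any citrus region
    citrus_area = 0   # total area of the marked citrus regions G_i
    for gi in np.unique(gt_labels):
        if gi == 0:                              # assume 0 labels the background
            continue
        region = gt_labels == gi
        citrus_area += int(region.sum())
        for sk in np.unique(sp_labels[region]):  # superpixels S_k with S_k ∩ G_i ≠ ∅
            leaked_area += int((sp_labels == sk).sum())
    return (leaked_area - citrus_area) / citrus_area
```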
Preferably, the step S3 includes the following steps: extracting the color features of the super pixels by using a plurality of color spaces, and calculating the mean value and the standard deviation of each channel of each color space to form a V-dimensional color space feature vector;
Extracting the corresponding LBP (local binary pattern) super-pixels using the super-pixel masks of the original image, performing histogram statistics on the extracted LBP super-pixels, and quantizing the histogram to M levels, finally obtaining an M-dimensional texture feature vector;
The color features and the texture features are fused into a (V+M)-dimensional feature vector, which completes the feature extraction of the super-pixel. For natural-environment images with uncontrollable illumination and complex backgrounds, a single color space is not enough to achieve ideal segmentation, so several color spaces are used together, which gives a good segmentation effect.
Preferably, the step S4 is as follows: the hidden-layer dimension s is obtained from an empirical formula:
[empirical formula for the hidden-layer dimension s in terms of the input-layer dimension m and the output-layer dimension n]
In the formula: m is the dimension of the input layer and n is the dimension of the output layer;
The citrus parts in a number of citrus pictures taken under natural illumination conditions are marked, features are extracted according to the process in step S3, a BPNN (back-propagation neural network) is trained, and the model is stored.
Preferably, the segmented image super-pixels are screened first and step S4 is then performed, as follows: a super-pixel threshold T is set; super-pixels falling within the threshold T are treated as background super-pixels and those outside it as candidate citrus super-pixels. After the image super-pixels are screened, the trained BPNN model predicts each screened super-pixel of interest (SOI) and outputs its confidence; if the confidence of the SOI is greater than 0.5 the SOI is classified as a citrus super-pixel, and if it is less than 0.5 the SOI is classified as a background super-pixel. When the BPNN is used to predict citrus, features must be extracted from the segmented super-pixels, and feature extraction plus segmentation is difficult to run in real time with limited computing power; to get as close to real-time detection as possible, the segmented image super-pixels are therefore screened before prediction with the threshold T, which relaxes the segmentation requirement, i.e., some background super-pixels are retained so that super-pixels in citrus regions are not removed.
Preferably, the step S5 includes: after classifying the super-pixels, extracting the super-pixel mask predicted as citrus, performing a morphological opening operation on the obtained mask to remove edge impurities, and segmenting the citrus from the original image with the finally obtained mask.
Therefore, the invention has the following beneficial effects: (1) the method uses super-pixel segmentation and feature extraction to train a super-pixel feature model, performs super-pixel pre-segmentation and classification, generates a binary mask and post-processes it, and finally extracts the segmented citrus image through the mask; the citrus regions can thus be identified in the picture and, combined with the shooting position and angle, the position of the citrus relative to the camera can be calculated, laying a foundation for mechanical harvesting of citrus;
(2) When the BPNN is used to predict citrus, features must be extracted from the segmented super-pixels, and feature extraction plus segmentation is difficult to run in real time with limited computing power; to get as close to real-time detection as possible, the segmented image super-pixels are screened before prediction with a threshold T, which relaxes the segmentation requirement, i.e., some background super-pixels are retained so that super-pixels in citrus regions are not removed;
(3) For natural-environment images with uncontrollable illumination and complex backgrounds, a single color space is not enough to achieve ideal segmentation, so several color spaces are used together, which gives a good segmentation effect.
Drawings
FIG. 1 is a general flow chart of a segmentation method of the present invention
FIG. 2 is a flow chart of super-pixel feature extraction according to the present invention
FIG. 3 is a K-UE line chart of the present invention
Detailed Description
The invention is further described with reference to the following detailed description and accompanying drawings.
Embodiment: a method for implementing citrus segmentation based on superpixel feature extraction, as shown in fig. 1 and 2, comprises the following steps:
S1: acquiring a citrus image by shooting a citrus tree with a camera;
S2: performing super-pixel segmentation on the citrus image by using a segmentation algorithm;
S21: selecting the segmentation number K of the super-pixels according to the under-segmentation rate UE, which is calculated as follows:
UE = ( Σ_{i=1}^{n} Σ_{S_k ∩ G_i ≠ ∅} |S_k| − Σ_{i=1}^{n} |G_i| ) / Σ_{i=1}^{n} |G_i|
where G_i denotes one of the marked citrus regions, i = 1, 2, 3, …, n, n is the number of citrus regions marked in a given image, and S_k denotes a super-pixel obtained by segmentation, k = 1, 2, 3, …, K;
A plurality of citrus images are selected and the citrus parts in them are marked; using the mask images generated from the marked citrus images, each citrus image is segmented into K super-pixel units with different segmentation numbers K, and the Mean and Standard deviation of the under-segmentation rate are calculated after each citrus image is segmented:
Mean = (1/n) Σ_{i=1}^{n} P_i
Standard = √( (1/n) Σ_{i=1}^{n} (P_i − Mean)² )
where P_i is the under-segmentation rate of the super-pixels of each citrus picture and n is the number of super-pixels into which each citrus picture is segmented;
As shown in fig. 3, according to how the Mean and Standard deviation of the under-segmentation rate vary with the segmentation number K, the segmentation number K is chosen in the range 350-450, and more specifically K = 400;
S22: segmenting the citrus image into K superpixel units by using a segmentation algorithm;
Because the final purpose of the invention is to obtain a complete segmented image of the citrus under natural illumination, the criterion is whether the super-pixel segmentation algorithm segments out every citrus region independently; the under-segmentation rate UE is therefore used to select the segmentation number K of the super-pixels.
When the segmentation number K is 350-450, the mean UE is at a local minimum and no longer decreases significantly as K increases further, and the standard deviation of UE likewise shows no significant further reduction, which indicates that the super-pixel segmentation already performs well in this range. Increasing K to 800 or more would only consume more super-pixel segmentation time and hinder real-time segmentation, so the segmentation number K is taken as 350-450.
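The following sketch reproduces the K-selection experiment behind fig. 3: it segments a set of labelled citrus images with a range of candidate segmentation numbers K and records the mean and standard deviation of UE for each K. SLIC from scikit-image stands in for the unspecified segmentation algorithm, the candidate K range is an illustrative assumption, and under_segmentation_rate is the sketch given earlier.

```python
import numpy as np
from skimage.segmentation import slic

def sweep_segmentation_number(images, gt_masks, k_values=range(100, 850, 50)):
    """For each candidate K, superpixel-segment every labelled citrus image
    (SLIC as the stand-in algorithm) and collect the Mean and Standard
    deviation of the under-segmentation rate UE over the image set."""
    stats = {}
    for k in k_values:
        ue = [under_segmentation_rate(gt, slic(img, n_segments=k, start_label=1))
              for img, gt in zip(images, gt_masks)]
        stats[k] = (float(np.mean(ue)), float(np.std(ue)))
    return stats

# The working K is then read off where the UE mean/std curves flatten
# (roughly 350-450 here, K = 400 in the embodiment).
```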
S3: extracting the super-pixel color features and super-pixel texture features of the citrus image and then fusing them. Specifically, several color spaces are used to extract the super-pixel color features: the RGB, YCrCb, and Lab color spaces each give a considerable segmentation effect on citrus under particular illumination conditions, so all three are used. Each color space has three channels, and the mean and standard deviation of every channel are calculated, giving 3 color spaces × 3 channels × 2 statistics (the mean and the standard deviation), i.e., an 18-dimensional color feature vector;
Extracting the corresponding LBP (local binary pattern) super-pixels using the super-pixel masks of the original image, performing histogram statistics on the extracted LBP super-pixels, and quantizing the histogram to 16 levels, finally obtaining a 16-dimensional texture feature vector;
The color features and the texture features are fused into a 34-dimensional feature vector, which completes the feature extraction of the super-pixel. For natural-environment images with uncontrollable illumination and complex backgrounds, a single color space is not enough to achieve ideal segmentation, so several color spaces are used together, which gives a good segmentation effect.
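A minimal sketch of the 34-dimensional feature vector described above, for a single super-pixel: mean and standard deviation of every channel in the RGB, YCrCb, and Lab colour spaces (18 values) plus a 16-bin histogram of LBP codes restricted to the super-pixel mask (16 values). The LBP neighbourhood (8 points, radius 1) and the use of OpenCV and scikit-image are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def superpixel_features(bgr_image, sp_labels, sp_id):
    """34-D feature vector of one superpixel: 18 colour statistics + 16-bin LBP histogram."""
    mask = sp_labels == sp_id
    color_stats = []
    for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2YCrCb, cv2.COLOR_BGR2Lab):
        pixels = cv2.cvtColor(bgr_image, code)[mask].astype(np.float64)  # N x 3 pixels
        color_stats.extend(pixels.mean(axis=0))   # per-channel mean
        color_stats.extend(pixels.std(axis=0))    # per-channel standard deviation
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1)    # 8-bit LBP codes in 0-255
    hist, _ = np.histogram(lbp[mask], bins=16, range=(0, 256), density=True)
    return np.concatenate([color_stats, hist])    # 18 + 16 = 34 dimensions
```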
The segmented image super-pixels are first screened, and step S4 is then performed. The screening proceeds as follows: a super-pixel threshold T is set, with T in the range 0-50 or 180-255; super-pixels falling within the threshold T are treated as background super-pixels and those outside it as candidate citrus super-pixels. After the image super-pixels are screened, the trained BPNN model predicts each screened super-pixel of interest (SOI) and outputs its confidence; if the confidence of the SOI is greater than 0.5 the SOI is classified as a citrus super-pixel, and if it is less than 0.5 the SOI is classified as a background super-pixel;
When the BPNN is used to predict citrus, features must be extracted from the segmented super-pixels, and feature extraction plus segmentation is difficult to run in real time with limited computing power; to get as close to real-time detection as possible, the segmented image super-pixels are screened before prediction with the threshold T, which relaxes the segmentation requirement, i.e., some background super-pixels are retained so that super-pixels in citrus regions are not removed.
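A sketch of the screening and classification step described above. The quantity compared against the threshold T is taken here to be the mean grey level of the super-pixel, which is an assumption (the patent only states that values inside T, roughly 0-50 or 180-255, are treated as background); superpixel_features is the sketch given earlier, and model is any trained classifier exposing predict_proba, such as the BPNN stand-in trained below.

```python
import cv2
import numpy as np

def classify_superpixels(bgr_image, sp_labels, model):
    """Screen superpixels with the threshold T, then classify the remaining
    superpixels of interest (SOI) with the trained model; returns a binary
    mask of the superpixels predicted as citrus."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    citrus_mask = np.zeros(sp_labels.shape, dtype=np.uint8)
    for sp_id in np.unique(sp_labels):
        region = sp_labels == sp_id
        mean_gray = gray[region].mean()
        if mean_gray <= 50 or mean_gray >= 180:        # inside T: background, skip
            continue
        feat = superpixel_features(bgr_image, sp_labels, sp_id).reshape(1, -1)
        confidence = model.predict_proba(feat)[0, 1]   # probability of the citrus class
        if confidence > 0.5:                           # SOI classified as citrus
            citrus_mask[region] = 255
    return citrus_mask
```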
S4: training a super-pixel prediction model. The hidden-layer dimension s is obtained from an empirical formula:
[empirical formula for the hidden-layer dimension s in terms of the input-layer dimension m and the output-layer dimension n]
In the formula: m is the dimension of the input layer and n is the dimension of the output layer;
The citrus parts in a number of citrus pictures taken under natural illumination conditions are marked, features are extracted according to the process in step S3, a BPNN is trained, and the model is stored;
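A sketch of the training step. scikit-learn's MLPClassifier stands in for the BPNN, and because the patent's empirical formula for the hidden-layer dimension is not reproduced in this text record, the common rule of thumb s ≈ sqrt(m + n) plus a small constant is assumed here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_superpixel_model(features, labels):
    """Train the superpixel prediction model on labelled superpixel features.
    features: N x 34 array from superpixel_features; labels: 1 = citrus, 0 = background."""
    m = features.shape[1]                    # input-layer dimension (34 in the embodiment)
    n = 1                                    # output-layer dimension (citrus / background)
    s = int(round(np.sqrt(m + n))) + 5       # assumed empirical hidden-layer size
    model = MLPClassifier(hidden_layer_sizes=(s,), max_iter=2000)
    model.fit(features, labels)
    return model
```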
S5: after classifying the super-pixels, extracting the super-pixel mask predicted as citrus and post-processing the obtained mask. Specifically, a morphological opening operation is performed on the mask to remove edge impurities, and the finally obtained mask is used to segment the citrus from the original image.
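A sketch of the post-processing in step S5: a morphological opening removes edge impurities from the predicted mask, and the cleaned mask then cuts the citrus out of the original image. The 5 x 5 elliptical structuring element is an illustrative assumption.

```python
import cv2

def postprocess_and_segment(bgr_image, citrus_mask, kernel_size=5):
    """Open the predicted citrus mask to remove edge impurities, then use the
    final mask to segment the citrus from the original image."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(citrus_mask, cv2.MORPH_OPEN, kernel)
    segmented = cv2.bitwise_and(bgr_image, bgr_image, mask=opened)
    return segmented, opened
```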
The invention uses super-pixel segmentation and feature extraction to train a super-pixel feature model, performs super-pixel pre-segmentation and classification, generates a binary mask and post-processes it, and finally extracts the segmented citrus image through the mask. The citrus regions can thus be identified in the picture and, combined with the shooting position and angle, the position of the citrus relative to the camera can be calculated, laying a foundation for mechanical harvesting of citrus.

Claims (8)

1. A method for realizing citrus segmentation based on super-pixel feature extraction is characterized by comprising the following steps:
S1: acquiring a citrus image;
S2: performing super-pixel segmentation on the citrus image by using a segmentation algorithm;
S3: extracting the super pixel color feature and the super pixel texture feature of the citrus image, and then performing feature fusion on the super pixel color feature and the super pixel texture feature of the citrus image;
S4: training a superpixel prediction model;
S5: after classifying the superpixels, extracting the superpixel mask predicted to be the citrus, and performing post-processing on the obtained mask.
2. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 1, wherein the citrus image in step S1 is obtained by shooting a citrus tree with a camera.
3. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 1, wherein step S2 comprises the following steps:
S21: selecting the segmentation number K of the super-pixels according to the under-segmentation rate UE;
S22: a segmentation algorithm is used to segment the citrus image into K superpixel units.
4. The method for realizing citrus segmentation based on superpixel feature extraction according to claim 3, wherein the S21 comprises the following specific steps: the calculation formula of the under-segmentation rate UE is as follows:
UE = ( Σ_{i=1}^{n} Σ_{S_k ∩ G_i ≠ ∅} |S_k| − Σ_{i=1}^{n} |G_i| ) / Σ_{i=1}^{n} |G_i|
where G_i denotes one of the marked citrus regions, i = 1, 2, 3, …, n, n is the number of citrus regions marked in a given image, and S_k denotes a super-pixel obtained by segmentation, k = 1, 2, 3, …, K;
A plurality of citrus images are selected and the citrus parts in them are marked; using the mask images generated from the marked citrus images, each citrus image is segmented into K super-pixel units with different segmentation numbers K, and the Mean and Standard deviation of the under-segmentation rate are calculated after each citrus image is segmented:
Mean = (1/n) Σ_{i=1}^{n} P_i
Standard = √( (1/n) Σ_{i=1}^{n} (P_i − Mean)² )
where P_i is the under-segmentation rate of the super-pixels of each citrus picture and n is the number of super-pixels into which each citrus picture is segmented;
According to how the Mean and Standard deviation of the under-segmentation rate vary with the segmentation number K, the segmentation number K is selected as 350-450.
5. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 1, wherein the step S3 comprises the following steps: extracting the color features of the super pixels by using a plurality of color spaces, and calculating the mean value and the standard deviation of each channel of each color space to form a V-dimensional color space feature vector;
Extracting the corresponding LBP (local binary pattern) super-pixels using the super-pixel masks of the original image, performing histogram statistics on the extracted LBP super-pixels, and quantizing the histogram to M levels, finally obtaining an M-dimensional texture feature vector;
Fusing the color features and the texture features into a (V+M)-dimensional feature vector, thereby completing the feature extraction of the super-pixel.
6. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 1, wherein the step S4 comprises the following steps: obtaining the hidden-layer dimension s from an empirical formula:
[empirical formula for the hidden-layer dimension s in terms of the input-layer dimension m and the output-layer dimension n]
In the formula: m is the dimension of the input layer and n is the dimension of the output layer;
Marking the citrus parts in a number of citrus pictures taken under natural illumination conditions, extracting features according to the process in step S3, training with a BPNN, and storing the model.
7. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 6, wherein the segmented image super-pixels are screened first and step S4 is then performed, the screening of the segmented image super-pixels being as follows: setting a super-pixel threshold T; super-pixels falling within the threshold T are treated as background super-pixels and those outside it as candidate citrus super-pixels; after the image super-pixels are screened, predicting each screened super-pixel of interest (SOI) with the trained BPNN model to obtain its confidence; if the confidence of the SOI is greater than 0.5, classifying the SOI as a citrus super-pixel, and if the confidence of the SOI is less than 0.5, classifying the SOI as a background super-pixel.
8. The method for realizing citrus segmentation based on superpixel feature extraction as claimed in claim 1, wherein the step S5 comprises the following steps: after classifying the super-pixels, extracting the super-pixel mask predicted as citrus, performing a morphological opening operation on the obtained mask to remove edge impurities, and segmenting the citrus from the original image with the finally obtained mask.
CN201911310690.XA 2019-12-18 2019-12-18 Method for realizing citrus segmentation based on super-pixel feature extraction Pending CN111401121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911310690.XA CN111401121A (en) 2019-12-18 2019-12-18 Method for realizing citrus segmentation based on super-pixel feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911310690.XA CN111401121A (en) 2019-12-18 2019-12-18 Method for realizing citrus segmentation based on super-pixel feature extraction

Publications (1)

Publication Number Publication Date
CN111401121A true CN111401121A (en) 2020-07-10

Family

ID=71432505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911310690.XA Pending CN111401121A (en) 2019-12-18 2019-12-18 Method for realizing citrus segmentation based on super-pixel feature extraction

Country Status (1)

Country Link
CN (1) CN111401121A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489049A (en) * 2020-12-04 2021-03-12 山东大学 Mature tomato fruit segmentation method and system based on superpixels and SVM
CN114902872A (en) * 2022-04-26 2022-08-16 华南理工大学 Visual guidance method for picking fruits by robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120275703A1 (en) * 2011-04-27 2012-11-01 Xutao Lv Superpixel segmentation methods and systems
CN104636716A (en) * 2014-12-08 2015-05-20 宁波工程学院 Method for identifying green fruits
CN104700404A (en) * 2015-03-02 2015-06-10 中国农业大学 Fruit location identification method
CN105718945A (en) * 2016-01-20 2016-06-29 江苏大学 Apple picking robot night image identification method based on watershed and nerve network
US20180012365A1 (en) * 2015-03-20 2018-01-11 Ventana Medical Systems, Inc. System and method for image segmentation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120275703A1 (en) * 2011-04-27 2012-11-01 Xutao Lv Superpixel segmentation methods and systems
CN104636716A (en) * 2014-12-08 2015-05-20 宁波工程学院 Method for identifying green fruits
CN104700404A (en) * 2015-03-02 2015-06-10 中国农业大学 Fruit location identification method
US20180012365A1 (en) * 2015-03-20 2018-01-11 Ventana Medical Systems, Inc. System and method for image segmentation
CN105718945A (en) * 2016-01-20 2016-06-29 江苏大学 Apple picking robot night image identification method based on watershed and nerve network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘晓洋: "Fruit segmentation method for an apple-picking robot based on super-pixel features" (基于超像素特征的苹果采摘机器人果实分割方法) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489049A (en) * 2020-12-04 2021-03-12 山东大学 Mature tomato fruit segmentation method and system based on superpixels and SVM
CN114902872A (en) * 2022-04-26 2022-08-16 华南理工大学 Visual guidance method for picking fruits by robot

Similar Documents

Publication Publication Date Title
CN105718945B (en) Apple picking robot night image recognition method based on watershed and neural network
Dorj et al. An yield estimation in citrus orchards via fruit detection and counting using image processing
Dias et al. Multispecies fruit flower detection using a refined semantic segmentation network
Liu et al. A detection method for apple fruits based on color and shape features
Malik et al. Mature tomato fruit detection algorithm based on improved HSV and watershed algorithm
Liu et al. A computer vision system for early stage grape yield estimation based on shoot detection
Sannakki et al. Diagnosis and classification of grape leaf diseases using neural networks
Bulanon et al. Development of a real-time machine vision system for the apple harvesting robot
Li et al. Green apple recognition method based on the combination of texture and shape features
Rodríguez et al. A computer vision system for automatic cherry beans detection on coffee trees
CN107527343B (en) A kind of agaricus bisporus stage division based on image procossing
CN102663757A (en) Semi-automatic image cutting method based on nuclear transfer
CN102208099A (en) Illumination-change-resistant crop color image segmentation method
CN111798470A (en) Crop image entity segmentation method and system applied to intelligent agriculture
Wang et al. Combining SUN-based visual attention model and saliency contour detection algorithm for apple image segmentation
CN106682639A (en) Crop leaf abnormal image extraction method based on video monitoring
CN111401121A (en) Method for realizing citrus segmentation based on super-pixel feature extraction
CN111783693A (en) Intelligent identification method of fruit and vegetable picking robot
CN111046782B (en) Quick fruit identification method for apple picking robot
CN115984698A (en) Litchi fruit growing period identification method based on improved YOLOv5
US11880981B2 (en) Method and system for leaf age estimation based on morphological features extracted from segmented leaves
Nisar et al. Predicting yield of fruit and flowers using digital image analysis
Tran et al. Automatic dragon fruit counting using adaptive thresholds for image segmentation and shape analysis
Ekawaty et al. Automatic cacao pod detection under outdoor condition using computer vision
Nawawi et al. Comprehensive pineapple segmentation techniques with intelligent convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination