CN102938053A - Sugarcane characteristic extraction and recognition method based on computer vision
- Publication number
- CN102938053A, CN201110233945A
- Authority
- CN
- China
- Prior art keywords
- sugarcane
- image
- ratio
- column block
- thickness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention uses computer vision to recognize the shape and node features of sugarcane, for monitoring sugarcane growth or for intelligently cutting seed cane into segments containing buds. First, sugarcane images are acquired with a digital device. The S component of the image in the hue-saturation-value (HSV) color space is processed by threshold segmentation and mathematical morphology filtering to serve as a template; the inverted H-component image is threshold-segmented and combined with this template to obtain a composite image. The composite image is divided into 64 column regions, feature indices such as the centroid ratio, thickness ratio and white-point ratio are extracted, and node and internode columns are classified with a support vector machine to obtain the number and positions of the nodes. The average recognition rate is 93.71%.
Description
Technical field
The invention belongs to the technical field of image processing, and is a method that combines sugarcane images with a computational model to extract and recognize sugarcane features.
Background technology
During sugarcane growth and post-harvest processing, the monitoring of growth conditions and the cutting of seed cane at the buds have long been done by hand; the present method uses computer vision to recognize and process sugarcane images automatically. In seed-cane preparation, whole stalks must be cut into effective seed segments each containing 1~3 buds, and at present this is still mostly manual work. To improve efficiency, reduce labor intensity and achieve precision cane planting, an intelligent cutting device that can recognize nodes and internodes needs to be developed, and the most critical part is node recognition. Domestic research in this field is still a blank. As related work, Liu Qingting et al. used high-speed photography to analyze the destructive process of blades cutting cane stalks; abroad, only Moshashai K. of Iran has made a preliminary study of node recognition using gray-level threshold segmentation, and that work is still at an initial stage. The present invention adopts a machine-vision approach: a support vector machine (SVM) first classifies column blocks into nodes and internodes, and the identified node class is then clustered to obtain the number of nodes and their positions.
Summary of the invention
A sugarcane feature extraction and recognition method based on computer vision, characterized by comprising the following concrete steps:
(1) acquire an original sugarcane image;
(2) denoise the original image, convert it from the RGB color space to the HSV color space, and select the H and S components of the model as the image features of the sugarcane;
(3) extract features from the sugarcane image;
(4) perform model calculation and matching on the sugarcane image features;
(5) perform SVM recognition on the feature-extracted sugarcane image.
The detailed procedure is as follows:
1. Image acquisition
A CANON S80 digital camera was used to photograph single-stalk color images of sugarcane against a red background on a test bench; the image size is 1600*1200 pixels, in JPG format. The processing software is VC++ 6.0 and Matlab 7.0. Before shooting, the top of the sugarcane is stripped; the camera lens is perpendicular to the worktable at a distance of 30 cm.
2. Basic image processing
In the image, the sugarcane surface color spans the whole gray range from black to white, so the RGB color space makes it difficult to segment an ideal sugarcane contour, and the correlation between its components also makes it difficult to express the feature difference between nodes and internodes. After extensive experimental comparison, it was found that the HSV space together with the red background can effectively separate the background from the sugarcane.
(1) The HSV color space
The HSV space is intuitive and consistent with human visual perception; H, S and V denote hue, saturation and value (brightness). In the gray-level map of the H component of the HSV color space, the sugarcane and the background regions are spatially well separated, and the hue of nodes and internodes differs markedly, so H can serve as the basis for characterizing nodes and internodes. In the S component, the sugarcane outline is sharp and the background gray level is uniform; its gray-level histogram is bimodal, which favors extraction of the sugarcane contour.
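As a rough illustration of this step (a sketch, not code from the patent), the following Python/OpenCV snippet converts a captured image to HSV and rescales the H and S components to [0, 1] so that the 0~1 gray levels used below apply directly; the file name and the choice of a median filter for denoising are assumptions.

```python
import cv2
import numpy as np

# Load the captured sugarcane image (file name is illustrative).
bgr = cv2.imread("sugarcane.jpg")

# Light denoising before the color-space conversion (filter choice is an assumption).
bgr = cv2.medianBlur(bgr, 3)

# OpenCV stores 8-bit HSV with H in [0, 179] and S, V in [0, 255];
# rescale H and S to [0, 1] so thresholds such as 0.45~0.55 can be used as-is.
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h = hsv[:, :, 0].astype(np.float32) / 179.0
s = hsv[:, :, 1].astype(np.float32) / 255.0
```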
(2) Threshold segmentation
The H component reflects the fine detail of the sugarcane nodes, while the S component reflects the contour of the cane. With a suitable threshold, binary images of the ideal sugarcane contour and of the node/internode difference can be obtained. For selecting the S and H thresholds, Otsu's method and manually selected thresholds were compared; Otsu automatic segmentation could not effectively eliminate background noise and left large reflective areas. Under the controlled worktable conditions, the valley of the S-component histogram lies between 0.45 and 0.55, and setting the threshold in this range yields an ideal sugarcane boundary contour: within the range 0.4~0.6, pixel counts are accumulated per gray-level interval of 0.1, and the gray value of the least-populated interval is taken as the segmentation threshold of the S component. It was further found that the overwhelming majority of pixels in the H-component histogram lie between gray levels 0~0.05 and 0.95~1; setting the threshold over the interval 0.85~0.15 (wrapping around 1) effectively captures the node features.
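A minimal sketch of these threshold rules, assuming H and S are float arrays in [0, 1] as in the previous snippet; the helper names, the bin handling and the polarity of the comparisons are assumptions, since the patent does not state which side of the valley corresponds to the cane.

```python
import numpy as np

def s_threshold(s, lo=0.4, hi=0.6, step=0.1):
    """Place the S threshold in the least-populated gray-level interval of [lo, hi]."""
    starts = np.arange(lo, hi, step)
    counts = [np.count_nonzero((s >= a) & (s < a + step)) for a in starts]
    a = starts[int(np.argmin(counts))]
    return a + step / 2.0                      # centre of the emptiest interval

def binarize(h, s, t_s, h_low=0.15, h_high=0.85):
    """S binary image: cane contour; H binary image: node detail (threshold wraps around 1)."""
    s_bin = np.where(s > t_s, 255, 0).astype(np.uint8)   # flip the comparison if needed
    h_bin = np.where((h >= h_high) | (h <= h_low), 255, 0).astype(np.uint8)
    return s_bin, h_bin
```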
(3) Image composition
The S-component binary image obtained by threshold segmentation may still contain isolated noise in the background region, so a morphological closing with a 3*3 structuring element is applied to the S-component binary image to eliminate the noise. The denoised S-component binary image is then combined by an AND operation with the inverted H-component binary image to obtain the composite image.
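A sketch of the composition step under the same assumptions (0/255 binary images from the previous snippet); the direction of the inversion follows the reading above and may need adjusting.

```python
import cv2
import numpy as np

def composite(s_bin, h_bin):
    """Close the S binary image with a 3x3 structuring element to remove isolated
    noise, invert the H binary image, and AND the two to get the composite image."""
    kernel = np.ones((3, 3), np.uint8)
    s_clean = cv2.morphologyEx(s_bin, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(s_clean, cv2.bitwise_not(h_bin))
```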
3. Feature extraction
Because the white points of the internode part are relatively sparse and evenly distributed while the white points of the node region are dense, and the diameter at a node is larger, the composite image is divided by columns into 64 column-block regions. Let the image set be X(i, j), the upper edge of the cane in the image be $P_t = (x_t, y_t)$ and the lower edge be $P_b = (x_b, y_b)$; the thickness of the k-th column block is computed from the distance between the upper and lower edges within that block. The ratio of each column block's thickness (i.e., its diameter) to the maximum thickness is the thickness ratio. Each column block is divided by its upper and lower boundaries into 8 equal row blocks; the ratio of the number of white points in the central 4 row blocks to the total number of white points in the column is the middle ratio. The ratio of each column block's thickness to the average thickness of the two column blocks located 5 column blocks to its left and right is the position thickness ratio.
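The exact formulas appear only as figures in the original, so the following sketch is an interpretation of the definitions above: it splits a 0/255 composite image into 64 column blocks and computes the thickness ratio, middle ratio and position thickness ratio; the edge handling at the image borders is an assumption.

```python
import numpy as np

def column_block_features(comp, n_blocks=64, offset=5):
    """comp: composite binary image (0 = background, 255 = white points on the cane)."""
    height, width = comp.shape
    cols = np.array_split(np.arange(width), n_blocks)
    thickness = np.zeros(n_blocks)
    middle_ratio = np.zeros(n_blocks)

    for k, c in enumerate(cols):
        block = comp[:, c] > 0
        rows = np.where(block.any(axis=1))[0]
        if rows.size == 0:
            continue
        top, bottom = rows[0], rows[-1]                          # upper edge P_t, lower edge P_b
        thickness[k] = bottom - top + 1                          # column-block thickness (diameter)
        bands = np.array_split(np.arange(top, bottom + 1), 8)    # 8 equal row blocks
        centre = np.concatenate(bands[2:6])                      # the central 4 row blocks
        total = block[top:bottom + 1].sum()
        middle_ratio[k] = block[centre].sum() / total if total else 0.0

    max_t = thickness.max()
    thickness_ratio = thickness / max_t if max_t else thickness
    # Position thickness ratio: block thickness over the mean thickness of the
    # blocks located `offset` positions to its left and right (clipped at the borders).
    pos_ratio = np.zeros(n_blocks)
    for k in range(n_blocks):
        neigh = (thickness[max(k - offset, 0)] + thickness[min(k + offset, n_blocks - 1)]) / 2.0
        pos_ratio[k] = thickness[k] / neigh if neigh else 0.0
    return np.stack([thickness_ratio, middle_ratio, pos_ratio], axis=1)
```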
4. Recognizing node and internode column blocks with the SVM classifier
The SVM is a relatively new pattern recognition method that balances training error and generalization ability, and it shows distinctive advantages in small-sample, nonlinear, high-dimensional recognition problems with local minima. In the two-class pattern recognition problem, given training data $\{(x_i, y_i)\ ;\ x_i \in \mathbb{R}^N,\ y_i = \pm 1\}$, the support vectors determine the classification rule $f(x)$, and the optimal solution of the Wolfe dual optimization problem is obtained by applying the Lagrange multiplier method.
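The classification rule and the dual problem are likewise shown only as figures in the original; for reference, the standard soft-margin SVM decision rule and Wolfe dual have the following textbook form (not reproduced from the patent):

\[
f(x) = \operatorname{sgn}\!\Big(\sum_{i=1}^{n} \alpha_i\, y_i\, K(x_i, x) + b\Big),
\qquad
\max_{\alpha}\ \sum_{i=1}^{n} \alpha_i
 - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.}\ \ 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n} \alpha_i y_i = 0 .
\]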
5. Clustering to identify the number and positions of nodes
The distribution of column blocks obtained from the SVM classification shows that the identified node column blocks are not unique, which is consistent with a node having a certain width. The purpose of sugarcane node recognition is to find the cutting position for the cutter controlled by the controller, and performance is expressed by the node-count recognition rate and the node-position recognition rate on the image. The node-count recognition rate is defined as the percentage of nodes identified by the algorithm out of the actual number of nodes in the image. To characterize the node-position recognition rate, a correct-cut rate is introduced: it is 100% when the cutter position falls at the center of an internode and 0 when the cutter falls at the center of a node. Clustering is used to locate the node regions; finally, the minimum-distance method is applied to the cluster analysis of the node class.
6. Minimum-distance clustering
The minimum-distance rule is: as long as the minimum distance between two classes is less than a threshold, the two classes are merged into one. Define $D_{i,j}$ as the minimum distance between all samples of the two classes $\omega_i$ and $\omega_j$, namely
\[
D_{i,j} = \min_{U \in \omega_i,\ V \in \omega_j} D_{UV},
\]
where $D_{UV}$ is the distance between sample $U$ of class $\omega_i$ and sample $V$ of class $\omega_j$. If class $\omega_j$ is formed by merging the two classes $\omega_m$ and $\omega_n$, then
\[
D_{i,j} = \min\,(D_{i,m},\ D_{i,n}).
\]
Description of drawings
Fig. 1 is the system diagram of the sugarcane feature extraction and recognition method.
Fig. 2 is the original sugarcane image.
Fig. 3 is the H-component image among the HSV components.
Fig. 4 is the composite image of the sugarcane.
Embodiment
From the captured images, 50 sugarcane pictures are taken to build the training library and for testing. After basic image processing, the 50 images with 64 column blocks each give 3200 samples in total, and the feature indices of each sample are computed; the class attribute of the 3200 samples is assigned by manual recognition. Statistics show that the ratio of internode-class to node-class column blocks within an image reaches 10:1, so training samples with a suitable ratio between the classes must be extracted to train the model; therefore all node-class samples and part of the internode-class samples, 800 in total, are extracted from the sample set to build the classification model. In the SVM, C = 20 and G = 0.01 are set by cross validation.
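A minimal training sketch under the stated parameters, interpreting C = 20 as the SVM penalty and G = 0.01 as the RBF kernel gamma (an assumption); the scikit-learn API and the file names are illustrative and not part of the patent.

```python
import numpy as np
from sklearn.svm import SVC

# X: 800 x 3 feature matrix (thickness ratio, middle ratio, position thickness ratio),
# y: +1 for node column blocks, -1 for internode column blocks (file names illustrative).
X = np.load("train_features.npy")
y = np.load("train_labels.npy")

clf = SVC(kernel="rbf", C=20, gamma=0.01)
clf.fit(X, y)

# Classify the 64 column blocks of a new image; pred[k] == 1 marks a node column block.
blocks = np.load("test_features.npy")
pred = clf.predict(blocks)
```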
Implementation steps of the recognition:
(1) Obtain the node class identified by the SVM, compute the number Nm of node-class column blocks, and take the positional distance between node-class column blocks as the characteristic parameter.
(2) Set the minimum distance threshold T for clustering to 20~30 (in column-block distance).
(3) Assign every node column block to its own class, so the number of cluster centers is Nm.
(4) Loop over all node column blocks, find the two nearest column blocks pi and pj, and let their distance be D. If D is less than or equal to T, merge pi and pj, assigning the class with the larger label into the class with the smaller label; if D is greater than T, exit the loop. The final number of classes is the number of nodes.
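A sketch of steps (1)~(4), assuming the node column blocks are represented by their column indices and taking T within the stated 20~30 range; the merge rule (the larger class label is absorbed into the smaller) follows the description, while the function and variable names are illustrative.

```python
import numpy as np

def cluster_nodes(node_cols, T=25):
    """Single-linkage merging of node column blocks; returns one representative
    position per node (the mean column index of each remaining class)."""
    labels = list(range(len(node_cols)))          # each block starts as its own class
    while True:
        best = None                               # nearest pair belonging to different classes
        for i in range(len(node_cols)):
            for j in range(i + 1, len(node_cols)):
                if labels[i] != labels[j]:
                    d = abs(node_cols[i] - node_cols[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        if best is None or best[0] > T:
            break                                  # nearest pair too far apart: stop merging
        _, i, j = best
        keep, drop = sorted((labels[i], labels[j]))
        labels = [keep if lab == drop else lab for lab in labels]
    centers = [float(np.mean([c for c, lab in zip(node_cols, labels) if lab == k]))
               for k in sorted(set(labels))]
    return centers                                 # number of nodes = len(centers)
```

For example, with node_cols = [120, 123, 128, 410, 415] and T = 25, the first three blocks merge into one node and the last two into another, giving two nodes.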
Claims (3)
1. A sugarcane feature extraction and recognition method based on computer vision, characterized by comprising the following concrete steps:
(1) acquiring an original sugarcane image;
(2) denoising the original image, converting it from the RGB color space to the HSV color space, and selecting the H and S components of the model as the image features of the sugarcane;
(3) extracting features from the sugarcane image;
(4) performing model calculation and matching on the sugarcane image features;
(5) performing SVM recognition on the feature-extracted sugarcane image.
2. The sugarcane feature extraction and recognition method based on computer vision according to claim 1, characterized in that:
In the feature extraction, the composite image is divided by columns into 64 column-block regions; let the image set be X(i, j), the upper edge of the image be $P_t = (x_t, y_t)$ and the lower edge be $P_b = (x_b, y_b)$; the thickness of the k-th column block is computed from the distance between the upper and lower edges within that block; the ratio of each column block's thickness (i.e., its diameter) to the maximum thickness is the thickness ratio; each column block is divided by its upper and lower boundaries into 8 equal row blocks, and the ratio of the number of white points in the central 4 row blocks to the total number of white points in the column is the middle ratio; the ratio of each column block's thickness to the average thickness of the two column blocks located 5 column blocks to its left and right is the position thickness ratio.
3. The sugarcane feature extraction and recognition method based on computer vision according to claim 1, characterized in that:
In the SVM image recognition, for the two-class pattern recognition problem, given training data $\{(x_i, y_i)\ ;\ x_i \in \mathbb{R}^N,\ y_i = \pm 1\}$, the support vectors determine the classification rule $f(x)$, and the Lagrange multiplier method is applied to obtain the optimal solution of the Wolfe dual optimization problem.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110233945 CN102938053A (en) | 2011-08-16 | 2011-08-16 | Sugarcane characteristic extraction and recognition method based on computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102938053A (en) | 2013-02-20 |
Family
ID=47696948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110233945 (CN102938053A, pending) | Sugarcane characteristic extraction and recognition method based on computer vision | 2011-08-16 | 2011-08-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102938053A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105225228A (en) * | 2015-09-08 | 2016-01-06 | 广西大学 | Leifsonia image partition method under the natural background of field |
CN105225228B (en) * | 2015-09-08 | 2018-08-21 | 广西大学 | Leifsonia image partition method under the natural background of field |
CN106951895A (en) * | 2016-01-07 | 2017-07-14 | 富士通株式会社 | Determine the method and system of the profile of area-of-interest in image |
CN106845366A (en) * | 2016-12-29 | 2017-06-13 | 江苏省无线电科学研究所有限公司 | Sugarcane coverage automatic testing method based on image |
CN106845366B (en) * | 2016-12-29 | 2020-03-27 | 江苏省无线电科学研究所有限公司 | Sugarcane coverage automatic detection method based on image |
WO2019041147A1 (en) * | 2017-08-29 | 2019-03-07 | 广东虚拟现实科技有限公司 | Spot recognition method, device and system |
US10922846B2 (en) | 2017-08-29 | 2021-02-16 | Guangdong Virtual Reality Technology Co., Ltd. | Method, device and system for identifying light spot |
CN108876767A (en) * | 2018-05-23 | 2018-11-23 | 广西民族大学 | A kind of quick identification device of sugarcane sugarcane section feature |
CN108876767B (en) * | 2018-05-23 | 2021-04-27 | 广西民族大学 | Sugarcane festival characteristic quick identification device |
CN110624853A (en) * | 2019-09-25 | 2019-12-31 | 武汉易视维科技有限公司 | Online magic stick visual detection system |
CN113223097A (en) * | 2021-04-29 | 2021-08-06 | 武汉工程大学 | Image preprocessing method for improving density counting precision |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
Liu et al. | A detection method for apple fruits based on color and shape features | |
CN102938053A (en) | Sugarcane characteristic extraction and recognition method based on computer vision | |
CN105718945B (en) | Apple picking robot night image recognition method based on watershed and neural network | |
CN109636772A (en) | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning | |
CN109684906B (en) | Method for detecting red fat bark beetles based on deep learning | |
CN102194108B (en) | Smile face expression recognition method based on clustering linear discriminant analysis of feature selection | |
CN104408449B (en) | Intelligent mobile terminal scene literal processing method | |
CN106650806A (en) | Cooperative type deep network model method for pedestrian detection | |
CN105654099A (en) | Sugarcane segmentation and identification method based on improved vision | |
CN104778481A (en) | Method and device for creating sample library for large-scale face mode analysis | |
CN104392240A (en) | Parasite egg identification method based on multi-feature fusion | |
CN103984953A (en) | Cityscape image semantic segmentation method based on multi-feature fusion and Boosting decision forest | |
CN103020639A (en) | Method for automatically identifying and counting white blood cells | |
CN101216943B (en) | A method for video moving object subdivision | |
CN108319966A (en) | The method for identifying and classifying of equipment in a kind of substation's complex background infrared image | |
CN103226088A (en) | Particulate counting method and device thereof | |
CN109034269A (en) | A kind of bollworm female male imago method of discrimination based on computer vision technique | |
CN103034838A (en) | Special vehicle instrument type identification and calibration method based on image characteristics | |
CN103295013A (en) | Pared area based single-image shadow detection method | |
CN101551853A (en) | Human ear detection method under complex static color background | |
CN103440035A (en) | Gesture recognition system in three-dimensional space and recognition method thereof | |
CN106294705A (en) | A kind of batch remote sensing image preprocess method | |
CN103198479A (en) | SAR image segmentation method based on semantic information classification | |
CN103679184A (en) | Method for leukocyte automatic identification based on relevant vector machine | |
CN107730499A (en) | A kind of leucocyte classification method based on nu SVMs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130220 |