CN110287992A - Agricultural feature information extraction method based on big data - Google Patents
- Publication number: CN110287992A
- Application number: CN201910429947.7A
- Authority
- CN
- China
- Prior art keywords
- attribute
- classification
- data
- initial
- extracting method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/211—Pattern recognition; selection of the most significant subset of features
- G06F18/213—Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
- G06F18/23211—Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions, with adaptive number of clusters
- G06N5/022—Knowledge engineering; knowledge acquisition
Abstract
The present invention relates to the field of big-data feature extraction and, addressing the difficulty of processing agricultural data, proposes an agricultural feature information extraction method based on big data, comprising: an attribute-set establishment step, which acquires agricultural attribute data and builds a knowledge system; an attribute-category clustering step, which establishes initial attribute categories around initial attribute cluster centres and merges and splits all initial attribute categories to obtain the clustered attribute categories; a characteristic-attribute calculation step, which computes the dependency degree of each attribute datum in a category relative to the other attribute data and selects the attribute with the highest dependency degree as the characteristic attribute value; and a characteristic-attribute collection step, which gathers the characteristic attribute values of all categories into a characteristic attribute set. The invention addresses the poor data utilization caused by existing feature extraction methods that cannot reflect the nonlinear structural characteristics of the data.
Description
Technical field
The present invention relates to the field of big-data feature extraction, and in particular to an agricultural feature information extraction method based on big data.
Background technique
Data are the basis and foundation of decision-making. Agricultural data are the various material and energy data involved when people engage in agricultural production or agricultural economic activity. With the steady growth of computing power and the rapid development of environmental data acquisition and parsing technology, collecting large-scale agricultural data has become far more convenient than before.
Agricultural data are the basis of agricultural decision-making. However, agricultural big data comprise agricultural production data, soil quality data, meteorological and hydrological observation data, and other agricultural spatial data; the data volume is huge and complex, which makes agricultural data hard to manage and analyse. It is therefore important to extract the essential features contained in large volumes of agricultural data quickly and effectively. At present, the feature extraction methods in common use are principal component analysis (PCA) and linear discriminant analysis (LDA), but the features extracted by these two methods cannot reflect the complex nonlinear structure of agricultural data, which hampers later analysis and evaluation and leads to poor data utilization.
Summary of the invention
The purpose of the present invention is to provide an agricultural feature information extraction method based on big data that avoids the poor data utilization caused by existing feature extraction methods that cannot reflect the nonlinear structural characteristics of the data.
The base scheme provided by the invention is an agricultural feature information extraction method based on big data, comprising the following steps:
S1: attribute-set establishment step: acquire agricultural attribute data and build a knowledge system;
S2: attribute-category clustering step: select attribute data from the attribute set of the knowledge system as initial attribute cluster centres, establish initial attribute categories around the initial attribute cluster centres, and merge and split all initial attribute categories to obtain the clustered attribute categories;
S3: characteristic-attribute calculation step: compute, for each attribute datum in a clustered attribute category, its dependency degree relative to the other attribute data, and select the attribute datum with the highest dependency degree as the characteristic attribute value;
S4: characteristic-attribute collection step: gather the characteristic attribute values of all attribute categories into a characteristic attribute set.
Beneficial effects of the present invention: 1) the scheme merges and splits the initial attribute categories, adjusting the number of attribute categories, which reduces the dimensionality of the attribute data, damps data fluctuation, and better matches the true attribute distribution; 2) characteristic attribute values are selected by a maximum-dependency algorithm, which can uncover structural connections inside inexact data and noisy data, making feature extraction more accurate and aiding later analysis and evaluation; 3) redundant characteristic attributes are deleted and the characteristic attribute values of the essential features are extracted, compressing and refining the information so that the nonlinear structural characteristics of the data are reflected; this facilitates the mining and analysis of agricultural big data and provides strong data support for agricultural production decisions.
The scheme solves the problem of poor data utilization caused by existing feature extraction methods that cannot reflect the nonlinear structural characteristics of the data.
Further, the attribute-category clustering step also includes:
S201: parameter-preset sub-step: preset the expected number of cluster centres, the minimum number of samples per category, the expected maximum variance, the minimum distance allowed between two cluster centres, and the maximum number of iterations.
Beneficial effect: step S2 applies the ISODATA algorithm to cluster the attribute data and adjust the resulting attribute categories; by choosing the expected number of cluster centres, the minimum samples per category, the expected maximum variance, the minimum allowed inter-centre distance, and the maximum number of iterations, it splits and merges attribute categories, compressing and refining the information and improving data utilization.
Further, the attribute-category clustering step also includes:
S202: initial clustering sub-step: select several attribute data from the attribute set of the knowledge system as initial attribute cluster centres, and establish initial attribute categories around the initial attribute cluster centres;
S203: attribute-data classification sub-step: compute the distance of each attribute datum in the attribute set to the initial attribute cluster centres, and assign it to the initial attribute category of the nearest centre;
S204: cluster-centre correction sub-step: judge whether the number of attribute data in each initial attribute category exceeds the minimum number of samples per category; if so, correct the initial attribute cluster centre to obtain the corrected attribute cluster centre.
Beneficial effect: classifying the data and correcting the initial attribute cluster centres performs an initial adjustment of the attribute categories, which benefits the subsequent merging and splitting.
Further, the attribute-category clustering step also includes:
S205: attribute-category splitting sub-step: compute the variance of all attribute data in each initial attribute category about its cluster centre, and compare the largest such variance with the expected maximum variance; if it exceeds the expected maximum variance, split the initial attribute category into two attribute categories;
S206: attribute-category merging sub-step: compute the distance between the cluster centres of two initial attribute categories; if the distance is smaller than the minimum distance allowed between two cluster centres, merge the two initial attribute categories into one attribute category.
Beneficial effect: merging and splitting further adjusts the attribute categories and their cluster centres, deletes superfluous attribute categories, and aids the extraction of characteristic attribute values.
Further, the attribute-category clustering step also includes:
S207: attribute-category determination sub-step: repeat steps S203 to S206, up to the maximum number of iterations, to adjust the initial attribute categories and obtain the adjusted attribute categories.
Beneficial effect: the maximum number of iterations is set according to the number of attribute data in the data set; the repeated adjustment steps help delete superfluous characteristic attributes.
Further, in step S204, if the number of attribute data in an initial attribute category is less than the minimum number of samples per category, the initial attribute category is discarded and its attribute data are reassigned to the nearest of the other initial attribute categories.
Beneficial effect: an attribute category whose number of attribute data falls below the minimum number of samples is invalid data; deleting invalid data improves the processing of the data.
Further, in step S204, the attribute cluster centre correction formula is m_j = (1/N_j) Σ(x ∈ S_j) x, where S_j is the set of attribute data assigned to category j and N_j = |S_j|; that is, the corrected centre is the mean of the attribute data currently assigned to the category.
Further, the characteristic-attribute calculation step also includes:
S301: equivalence-class calculation step: compute the equivalence class of each attribute datum in the adjusted attribute categories according to the indiscernibility relation.
Further, the characteristic-attribute calculation step also includes:
S302: dependency-degree calculation step: compute the dependency degree of each attribute datum in the adjusted attribute categories relative to the other attribute data.
Beneficial effect: attribute dependency can be understood as the degree to which an attribute improves the ability to distinguish samples; the larger the dependency degree, the more important the attribute and the greater its influence on the partition decision.
Further, the characteristic-attribute calculation step also includes:
S303: characteristic-attribute selection step: select the attribute datum with the largest dependency degree in each adjusted attribute category as the characteristic attribute value.
Description of the drawings
Fig. 1 is a logic diagram of the agricultural feature information extraction method in this embodiment;
Fig. 2 is a logic diagram of the sub-steps of step S2 in this embodiment;
Fig. 3 is a logic diagram of the sub-steps of step S3 in this embodiment.
Specific embodiment
A more detailed explanation is given below through a specific embodiment:
Embodiment:
As shown in Fig. 1, the agricultural feature information extraction method based on big data comprises the following steps:
S1: attribute-set establishment step: acquire agricultural attribute data and build a knowledge system. The attribute data include agricultural production data, soil quality data, and meteorological and hydrological observation data; the attribute set is established after all the attribute data are collected. The knowledge system is S = (U, A, V, f), where U is the object set, A is the attribute set, V is the attribute value domain, and f is a mapping that reflects the values taken over the object set.
S2: attribute-category clustering step: randomly select attribute data from the attribute set of the knowledge system as initial attribute cluster centres, establish initial attribute categories around the initial attribute cluster centres, and merge and split all initial attribute categories to obtain the clustered attribute categories.
S3: characteristic-attribute calculation step: compute, for each attribute datum in a clustered attribute category, its dependency degree relative to the other attribute data, and select the attribute datum with the highest dependency degree as the characteristic attribute value.
S4: characteristic-attribute collection step: gather the characteristic attribute values of all attribute categories into a characteristic attribute set.
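The four steps above can be read as a simple pipeline. The sketch below is an illustration under stated assumptions, not the patent's own code: the `cluster` and `pick_feature` callables stand in for the ISODATA clustering (S2) and the maximum-dependency selection (S3) detailed later in this embodiment, and every name is hypothetical.

```python
from typing import Any, Callable, Dict, List

def extract_feature_set(raw_records: List[Dict[str, Any]],
                        cluster: Callable, pick_feature: Callable) -> list:
    """S1: build the attribute set; S2: cluster it into attribute categories;
    S3: pick one characteristic attribute per category; S4: collect them."""
    attribute_set = {key for record in raw_records for key in record}   # S1
    categories = cluster(raw_records, attribute_set)                    # S2
    features = [pick_feature(category) for category in categories]      # S3
    return features                                                     # S4
```

Keeping S2 and S3 behind callables mirrors the patent's structure, where each step is refined separately in the sub-steps that follow.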
As shown in Fig. 2, step S2 further includes:
S201: parameter-preset sub-step: preset the expected number of cluster centres, the minimum number of samples per category, the expected maximum variance, the minimum distance allowed between two cluster centres, and the maximum number of iterations.
S202: initial clustering sub-step: select several attribute data from the attribute set of the knowledge system as initial attribute cluster centres, and establish initial attribute categories around the initial attribute cluster centres.
S203: attribute-data classification sub-step: compute the distance of each attribute datum in the attribute set to the initial attribute cluster centres, and assign it to the initial attribute category of the nearest centre.
S204: cluster-centre correction sub-step: judge whether the number of attribute data in each initial attribute category exceeds the minimum number of samples per category; if so, correct the initial attribute cluster centre to obtain the corrected attribute cluster centre.
S205: attribute-category splitting sub-step: compute the variance of all attribute data in each initial attribute category about its cluster centre, and compare the largest such variance with the expected maximum variance; if it exceeds the expected maximum variance, split the initial attribute category into two attribute categories.
S206: attribute-category merging sub-step: compute the distance between the cluster centres of two initial attribute categories; if the distance is smaller than the minimum distance allowed between two cluster centres, merge the two initial attribute categories into one attribute category.
S207: iteration sub-step: repeat steps S203 to S206, up to the maximum number of iterations, to repeatedly adjust the initial attribute categories and obtain the adjusted attribute categories.
Specifically, the ISODATA algorithm is used:
1. Let the expected number of cluster centres be K0, the minimum number of samples required per category be Nmin, the expected maximum variance be Sigma, the minimum distance allowed between two category cluster centres be dmin, and the maximum number of iterations be m; U is the attribute set.
2. Randomly select K0 attribute data from the attribute set as the initial attribute cluster centres.
3. For every other attribute datum in the attribute set U (excluding the K0 data selected as initial attribute cluster centres), compute its distance to the K0 initial attribute cluster centres and assign it to the initial attribute category of the nearest centre.
4. After classification, judge for each initial attribute category whether the number of attribute data it contains exceeds the minimum number of samples Nmin: if it is less than Nmin, discard the initial attribute category and reassign its attribute data to the nearest of the remaining initial attribute categories; if it is greater than Nmin, correct the initial attribute cluster centre.
The correction formula for an initial attribute cluster centre is m_j = (1/N_j) Σ(x ∈ S_j) x, where S_j is the set of attribute data currently assigned to category j and N_j = |S_j|; that is, the corrected centre is the mean of the category's attribute data.
After the cluster centres are corrected, the initial attribute categories are split and merged:
5. Splitting operation: compute the variance of all attribute data in each initial attribute category about its cluster centre, and compare the largest variance of the category with the expected maximum variance; if the largest variance exceeds the expected maximum variance, split the category into two attribute categories. Specifically: 1) compute the variance of every variable over the samples in each category; 2) select the maximum variance value σmax in each category; 3) if σmax > Sigma (the expected maximum variance) and the category contains enough attribute data to form two valid categories (n_i ≥ 2Nmin), split the initial attribute category into two attribute categories and let K = K + 1.
6. Merging operation: compute the distance between the cluster centres of two initial attribute categories; if the distance is smaller than the minimum distance allowed between two cluster centres, merge the two initial attribute categories into one attribute category. Specifically: 1) compute the pairwise distances between all category cluster centres and record them in a matrix D, where D(i, i) = 0; 2) merge every pair of categories with D(i, j) < dmin (i ≠ j) into a new category.
The new category cluster centre is m_new = (n_i·m_i + n_j·m_j)/(n_i + n_j), where n_i and n_j are the numbers of samples of the two categories and m_i and m_j are their cluster centres.
7. Repeat the operations in steps 3 to 6 up to m times (m is the maximum number of iterations, set according to the data volume when used) to obtain the adjusted attribute categories.
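The ISODATA-style loop of steps 1 to 7 can be sketched in Python. This is an illustrative simplification, not the patent's implementation: all names, parameter defaults, and the split-offset heuristic are assumptions.

```python
import numpy as np

def _assign(X, centres):
    """Label each sample with the index of its nearest cluster centre."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return d.argmin(axis=1)

def isodata(X, K0=3, n_min=2, sigma_max=1.0, d_min=1.0, max_iter=10, seed=0):
    """Simplified ISODATA loop: assign (step 3), discard small categories and
    correct centres (step 4), split high-variance categories (step 5),
    merge close centres (step 6), repeated up to max_iter times (step 7)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=K0, replace=False)].astype(float)
    for _ in range(max_iter):
        labels = _assign(X, centres)
        # Step 4: drop categories smaller than n_min, move centres to class means.
        keep = [k for k in range(len(centres)) if (labels == k).sum() >= n_min]
        if not keep:
            break
        centres = np.array([X[labels == k].mean(axis=0) for k in keep])
        labels = _assign(X, centres)
        # Step 5: split a category whose largest per-dimension variance > sigma_max.
        new = []
        for k in range(len(centres)):
            pts = X[labels == k]
            if len(pts) >= 2 * n_min:
                v = pts.var(axis=0)
                if v.max() > sigma_max:
                    j = int(v.argmax())
                    off = np.zeros(X.shape[1])
                    off[j] = pts[:, j].std()
                    new += [centres[k] + off, centres[k] - off]
                    continue
            new.append(centres[k])
        centres = np.array(new)
        labels = _assign(X, centres)
        counts = np.bincount(labels, minlength=len(centres))
        # Step 6: merge pairs of centres closer than d_min (weighted mean).
        merged, used = [], set()
        for i in range(len(centres)):
            if i in used:
                continue
            for j in range(i + 1, len(centres)):
                if j not in used and np.linalg.norm(centres[i] - centres[j]) < d_min:
                    ni, nj = counts[i], counts[j]
                    merged.append((ni * centres[i] + nj * centres[j]) / max(ni + nj, 1))
                    used.add(j)
                    break
            else:
                merged.append(centres[i])
        centres = np.array(merged)
    return centres, _assign(X, centres)
```

The merged centre uses the weighted mean from step 6; the split offset along the highest-variance dimension is one common ISODATA convention among several.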
As shown in Fig. 3, step S3 further includes:
S301: equivalence-class calculation step: compute the equivalence class of each attribute datum in the adjusted attribute categories according to the indiscernibility relation.
S302: dependency-degree calculation step: compute the dependency degree of each attribute datum in the adjusted attribute categories relative to the other attribute data.
S303: characteristic-attribute selection step: select the attribute datum with the largest dependency degree in each adjusted attribute category as the characteristic attribute value.
Specifically, the MDA-RS (maximum dependency of attributes, rough set) algorithm is used:
For the knowledge system S = (U, A, V, f), let B be a subset of A, where U is the object set, A is the attribute set, V is the attribute value domain, and f is a mapping reflecting the values taken over the object set.
For x, y ∈ U, x and y are said to be indiscernible with respect to B if and only if f(x, a) = f(y, a) for every attribute a ∈ B; this relation is denoted IND(B) = {(x, y) ∈ U × U : f(x, a) = f(y, a) for every a ∈ B}. Clearly, every subset of A induces a unique indiscernibility relation. The indiscernibility relation is also called an equivalence relation, and an equivalence relation induces a unique partition of U; the partition of U derived from IND(B) is denoted U/B, and the equivalence class in U/B containing x ∈ U is denoted [x]_B.
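A minimal sketch of computing the partition U/B follows; the names are illustrative, not from the patent. Objects with identical attribute values over B fall into the same equivalence class.

```python
from collections import defaultdict

def partition(objects, B, f):
    """Partition U/B under IND(B): x and y share an equivalence class
    iff f(x, a) == f(y, a) for every attribute a in B."""
    classes = defaultdict(list)
    for x in objects:
        classes[tuple(f(x, a) for a in B)].append(x)  # the B-signature of x
    return [frozenset(c) for c in classes.values()]
```

For instance, two field plots with the same soil type are indiscernible with respect to B = {soil}, so they land in the same class.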
In the knowledge system S = (U, A, V, f), let D and C be subsets of the attribute set A. If every value in D can be exactly associated with a value of C, D is said to depend functionally on C, denoted C ⇒ D. Let k = Σ(X ∈ U/D) |C(X)| / |U|, where C(X) denotes the C-lower approximation of X; then k is called the dependency degree, and D depends on C to degree k, denoted C ⇒k D. If k = 1, D depends totally on C; if k < 1, D depends partially on C. The coefficient k describes the proportion of elements of the universe U that can be correctly classified by the attributes C into the equivalence classes of the partition U/D.
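A sketch of the dependency degree k = Σ(X ∈ U/D) |C(X)| / |U| under the definitions above; the helper regroups objects by their attribute signatures, and all names are illustrative assumptions.

```python
from collections import defaultdict

def classes(objects, attrs, f):
    """Equivalence classes of IND(attrs): group objects by attribute signature."""
    g = defaultdict(set)
    for x in objects:
        g[tuple(f(x, a) for a in attrs)].add(x)
    return list(g.values())

def dependency(objects, C, D, f):
    """k = |POS_C(D)| / |U|: the fraction of objects whose C-class lies
    wholly inside some D-class (the C-lower approximations of U/D)."""
    pos = 0
    for X in classes(objects, D, f):        # each equivalence class of U/D
        for Y in classes(objects, C, f):    # C-classes contained in X
            if Y <= X:
                pos += len(Y)
    return pos / len(objects)
```

Here k = 1 means C classifies every object into the correct U/D class (total dependency), while k < 1 means only part of the universe is classified correctly (partial dependency).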
If only one attribute attains the maximum of the maximum dependency degrees, that attribute is chosen as the characteristic attribute. If several attributes attain the maximum, a further round of selection is needed: among the attributes with the same maximum dependency degree, the attribute with the next-largest dependency degree is selected, and so on until a single attribute remains. For example, suppose an attribute set contains four attributes A, B, C and D whose pairwise dependency degrees are as shown in Table 1. Comparing all the dependency degrees in Table 1, the maximum k = 1 appears for both A and B; comparing the other dependency degrees of A and B, the next maximum k = 0.4 appears for B, so attribute B is selected as the characteristic attribute.
Table 1: maximum dependency degree selection rule
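The tie-breaking rule of this example can be sketched as follows. The dependency lists in the test are illustrative values consistent with the worked example; only the maxima 1 and 0.4 come from the text, and the function name is hypothetical.

```python
def select_attribute(dep):
    """dep[a]: attribute a's pairwise dependency degrees, sorted descending.
    Keep the attributes tied at the current maximum, then compare the
    next-largest dependency degree, until a single attribute remains."""
    cands = list(dep)
    level = 0
    while len(cands) > 1 and any(level < len(dep[a]) for a in cands):
        best = max(dep[a][level] for a in cands if level < len(dep[a]))
        cands = [a for a in cands if level < len(dep[a]) and dep[a][level] == best]
        level += 1
    return cands[0]
```

With A and B tied at k = 1, the second-largest degrees (0.3 vs 0.4 in the illustrative data) break the tie in favour of B, matching the example.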
Algorithmic complexity: assume the knowledge system contains n objects with m attributes. From the procedure above, the algorithm needs to compute n(n-1) attribute dependency degrees, so the complexity of MDA-RS is O(n(n-1) + nm), which is comparatively low.
What has been described above is only an embodiment of the present invention; well-known specific structures, characteristics, and similar common knowledge are not described at length here. A person skilled in the art knows all of the prior art in the field before the application date or priority date, possesses the ordinary technical knowledge of the field, and is able to apply routine experimental means; under the guidance provided by this application, such a person can combine their own abilities to improve and implement this scheme, and some well-known structures or methods should not become obstacles to implementing the application. It should be pointed out that, for those skilled in the art, several modifications and improvements can be made without departing from the structure of the invention; these should also be regarded as within the protection scope of the present invention and will not affect the effect of implementing the invention or the practicability of the patent. The scope of protection claimed by this application shall be based on the content of the claims, and records such as the specific embodiments in the specification may be used to interpret the content of the claims.
Claims (10)
1. An agricultural feature information extraction method based on big data, characterized by comprising the following steps:
S1: attribute-set establishment step: acquire agricultural attribute data and build a knowledge system;
S2: attribute-category clustering step: select attribute data from the attribute set of the knowledge system as initial attribute cluster centres, establish initial attribute categories around the initial attribute cluster centres, and merge and split all initial attribute categories to obtain the clustered attribute categories;
S3: characteristic-attribute calculation step: compute, for each attribute datum in a clustered attribute category, its dependency degree relative to the other attribute data, and select the attribute datum with the highest dependency degree as the characteristic attribute value;
S4: characteristic-attribute collection step: gather the characteristic attribute values of all attribute categories into a characteristic attribute set.
2. The agricultural feature information extraction method based on big data according to claim 1, characterized in that the attribute-category clustering step further includes:
S201: parameter-preset sub-step: preset the expected number of cluster centres, the minimum number of samples per category, the expected maximum variance, the minimum distance allowed between two cluster centres, and the maximum number of iterations.
3. The agricultural feature information extraction method based on big data according to claim 2, characterized in that the attribute-category clustering step further includes:
S202: initial clustering sub-step: select several attribute data from the attribute set of the knowledge system as initial attribute cluster centres, and establish initial attribute categories around the initial attribute cluster centres;
S203: attribute-data classification sub-step: compute the distance of each attribute datum in the attribute set to the initial attribute cluster centres, and assign it to the initial attribute category of the nearest centre;
S204: cluster-centre correction sub-step: judge whether the number of attribute data in each initial attribute category exceeds the minimum number of samples per category; if so, correct the initial attribute cluster centre to obtain the corrected attribute cluster centre.
4. The agricultural feature information extraction method based on big data according to claim 3, characterized in that the attribute-category clustering step further includes:
S205: attribute-category splitting sub-step: compute the variance of all attribute data in each initial attribute category about its cluster centre, and compare the largest such variance with the expected maximum variance; if it exceeds the expected maximum variance, split the initial attribute category into two attribute categories;
S206: attribute-category merging sub-step: compute the distance between the cluster centres of two initial attribute categories; if the distance is smaller than the minimum distance allowed between two cluster centres, merge the two initial attribute categories into one attribute category.
5. The agricultural feature information extraction method based on big data according to claim 4, characterized in that the attribute-category clustering step further includes:
S207: iteration sub-step: repeat steps S203 to S206, up to the maximum number of iterations, to repeatedly adjust the initial attribute categories and obtain the adjusted attribute categories.
6. The agricultural feature information extraction method based on big data according to claim 5, characterized in that:
in step S204, if the number of attribute data in an initial attribute category is less than the minimum number of samples per category, the initial attribute category is discarded and its attribute data are reassigned to the nearest of the other initial attribute categories.
7. The agricultural feature information extraction method based on big data according to claim 6, characterized in that:
in step S204, the initial attribute cluster centre correction formula is m_j = (1/N_j) Σ(x ∈ S_j) x, where S_j is the set of attribute data assigned to category j and N_j = |S_j|.
8. The agricultural feature information extraction method based on big data according to claim 7, characterized in that the characteristic-attribute calculation step further includes:
S301: equivalence-class calculation step: compute the equivalence class of each attribute datum in the adjusted attribute categories according to the indiscernibility relation.
9. The agricultural feature information extraction method based on big data according to claim 8, characterized in that the characteristic-attribute calculation step further includes:
S302: dependency-degree calculation step: compute the dependency degree of each attribute datum in the adjusted attribute categories relative to the other attribute data.
10. The agricultural feature information extraction method based on big data according to claim 9, characterized in that the characteristic-attribute calculation step further includes:
S303: characteristic-attribute selection step: select the attribute datum with the largest dependency degree in each adjusted attribute category as the characteristic attribute value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910429947.7A CN110287992A (en) | 2019-05-22 | 2019-05-22 | Agricultural features information extracting method based on big data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110287992A true CN110287992A (en) | 2019-09-27 |
Family
ID=68002759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910429947.7A Pending CN110287992A (en) | 2019-05-22 | 2019-05-22 | Agricultural features information extracting method based on big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287992A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111201990A (en) * | 2020-01-09 | 2020-05-29 | 兰州石化职业技术学院 | Agricultural planting irrigation system based on Internet of things and information processing method |
CN111488369A (en) * | 2020-04-01 | 2020-08-04 | 黑龙江省农业科学院农业遥感与信息研究所 | Agricultural science and technology consultation service platform |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745497A (en) * | 2013-12-11 | 2014-04-23 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Plant growth modeling method and system |
CN105930531A (en) * | 2016-06-08 | 2016-09-07 | Anhui Agricultural University | Method for optimizing cloud dimensions of agricultural domain ontological knowledge on basis of hybrid models |
CN106598492A (en) * | 2016-11-30 | 2017-04-26 | Liaoning University | Compression optimization method applied to mass incomplete data |
US20170223900A1 (en) * | 2016-02-09 | 2017-08-10 | Tata Consultancy Services Limited | Method and system for agriculture field clustering and ecological forecasting |
2019-05-22: Application filed as CN201910429947.7A; published as CN110287992A (en); status: Pending
Non-Patent Citations (3)
Title |
---|
TUTUT HERAWAN ET AL.: "A rough set approach for selecting clustering attribute", Knowledge-Based Systems * |
LIN Jia et al.: "Dynamic change characteristics and scale effects of anthropogenic disturbance patterns of cultivated-land vegetation", Transactions of the Chinese Society of Agricultural Engineering * |
SHU Ning et al.: "Theory and Methods of Pattern Recognition", Wuhan University Press, 31 December 2004 * |
Similar Documents
Publication | Title |
---|---|
CN108596362B (en) | Power load curve form clustering method based on adaptive piecewise aggregation approximation |
CN104462184B (en) | Large-scale data anomaly recognition method based on bidirectional sampling combination |
Fan et al. | Robust deep auto-encoding Gaussian process regression for unsupervised anomaly detection |
CN106971205A (en) | Embedded dynamic feature selection method based on k-nearest-neighbor mutual information estimation |
CN103699678B (en) | Hierarchical clustering method and system based on multistage stratified sampling |
CN108846527A (en) | Photovoltaic power generation power prediction method |
CN106203478A (en) | Load curve clustering method for smart-meter big data |
CN106991446A (en) | Embedded dynamic feature selection method based on a mutual-information group strategy |
CN102693299A (en) | System and method for parallel video copy detection |
CN108333468B (en) | Bad data recognition method and device in an active power distribution network |
Jiang et al. | Classification methods of remote sensing image based on decision tree technologies |
CN110287992A (en) | Agricultural features information extracting method based on big data |
CN110134719A (en) | Sensitive attribute identification and classification grading method for structured data |
CN111326236A (en) | Medical image automatic processing system |
CN102324031A (en) | Latent semantic feature extraction method in aged user multi-biometric identity authentication |
CN101216886B (en) | Shot clustering method based on spectral segmentation theory |
CN112434662B (en) | Tea leaf scab automatic identification algorithm based on multi-scale convolutional neural network |
CN106611016A (en) | Image retrieval method based on a decomposable bag-of-words model |
Yuan et al. | CSCIM_FS: Cosine similarity coefficient and information measurement criterion-based feature selection method for high-dimensional data |
CN111461324A (en) | Hierarchical pruning method based on layer recovery sensitivity |
CN110751201A (en) | SAR equipment task failure cause reasoning method based on textural feature transformation |
CN110071884A (en) | Communication signal modulation recognition method based on improved entropy cloud features |
CN104657473A (en) | Large-scale data mining method guaranteeing quality monotonicity |
CN109783586A (en) | Water army (paid-poster) review detection system and method based on cluster resampling |
CN108960657A (en) | Building load characteristic analysis method based on feature selection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190927 |