CN101697167B - Clustering-decision tree based selection method of fine corn seeds - Google Patents

Clustering-decision tree based selection method of fine corn seeds

Info

Publication number
CN101697167B
CN101697167B, CN2009102334472A, CN200910233447A
Authority
CN
China
Prior art keywords
attribute
corn
decision tree
clustering
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009102334472A
Other languages
Chinese (zh)
Other versions
CN101697167A (en)
Inventor
邱建林 (Qiu Jianlin)
季丹 (Ji Dan)
陈建平 (Chen Jianping)
顾翔 (Gu Xiang)
李芬 (Li Fen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN2009102334472A priority Critical patent/CN101697167B/en
Publication of CN101697167A publication Critical patent/CN101697167A/en
Application granted granted Critical
Publication of CN101697167B publication Critical patent/CN101697167B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a clustering-decision tree based selection method of fine corn seeds, comprising the following steps: preprocessing the data; establishing a decision tree; judging, according to the established decision tree, which class the input attribute values belong to, and obtaining the point with the minimum distance to the corresponding three-dimensional point by computing the distances between that point and the other points in the class, the attributes of this nearest point being the closest to the corn attributes of the input corn seed; and querying the male and female parents of that corn variety from the children table, thereby completing the corn seed selection function. By combining clustering and decision tree algorithms, the invention can select fine corn seeds as required from the growth-and-development information, growth information, harvest information, and the like of corn, reducing labor intensity and improving decision-making efficiency.

Description

Selection method of fine corn seeds based on clustering-decision tree
Technical field:
The present invention relates to a method for selecting fine corn seeds.
Background technology:
Clustering is a common data analysis tool that studies and processes a given set of objects with mathematical methods and is a branch of multivariate statistical analysis. Based on the idea that "birds of a feather flock together", it partitions a large set of data points into several classes, or clusters, so that the data within each class are as similar as possible while the data in different classes are as dissimilar as possible, thereby revealing the global distribution pattern of the data and the relationships among the data attributes. A notable feature of cluster analysis is its ability to handle huge, complex data sets, and it can also serve as a preprocessing step for other algorithms. Commonly used clustering algorithms include k-means, k-modes, k-medoids, DIANA, AGNES, STING, COBWEB, and others.
These classical clustering algorithms can only handle simple, regular data and perform poorly on large, complex data sets, so many improved and new algorithms have been proposed. For example, one study proposed an evolutionary multi-center dynamic clustering algorithm in which the number of clusters need not be fixed in advance and multiple center points can be chosen for different clusters, with the result evaluated by two objective functions, improving the applicability of the algorithm to data sets with special characteristics and distributions. The CUZ algorithm improves the computation of representative points, so that it can not only effectively distinguish data of non-circular and non-rectangular shape but also handle clusters with non-convex surfaces. The GriDBSCAN algorithm constructs a grid over the data space, first applies a density-based algorithm locally and then merges the local results into the real clusters, which greatly improves the efficiency of DBSCAN and reduces its complexity. There are also newer algorithms, such as the agglomerative hierarchical clustering algorithm proposed by Yu Wei et al., which represents objects with 84-dimensional spatial sequences and condenses a matrix of pairwise Pearson coefficients between objects to finally obtain an efficient dendrogram; and a novel ant-colony algorithm built on the particle swarm algorithm and intelligent agents, which does not require the number of clusters to be specified in advance, reduces the overall computational load by computing over local objects rather than all objects, and can find clusters of arbitrary shape.
The traditional decision tree is one of the more widely used logical methods among classification techniques. It infers classification rules, represented as a decision tree, from a set of unordered, random examples. It works in a top-down recursive fashion: using mutual information (information gain) from information theory, it finds the field carrying the largest amount of information in the database and builds a node of the decision tree, then creates branches of the tree according to the different values of that field, and repeats the process in each descendant subset to build lower-level nodes and branches. Each path from the root of the decision tree to a leaf node corresponds to a conjunctive rule, and the whole decision tree corresponds to a set of disjunctive expressions. Commonly used methods include C4.5, ID3, CLS, and others.
Many improved decision tree algorithms have been proposed. For example, Ding Rongtao et al. introduced a user interest parameter when computing the information entropy used to measure the relevance of attributes, which reduces the redundancy between attributes and speeds up the reduction of information entropy. Fadila Bentayeb et al. proposed a new decision-tree-based data mining algorithm that preprocesses the data before mining, builds a possibility table, and applies the ID3 algorithm with a new information gain formula to it, simplifying the complexity of the algorithm and achieving decision tree classification. The relational classification RDC algorithm uses ID arrays to propagate and update the relational decision tree; it avoids building excessively redundant relation tables, effectively prevents loss of information, optimizes the tree structure, and enhances the validity of the algorithm.
Information technology is widely used in China's agricultural production; it has greatly advanced the development of agricultural production and has become an important supporting force of modern agriculture. In the selection of fine corn seeds, a large amount of information is stored, such as plant height, ear height, growth period, and plot yield. If correct, reliable, and useful rules can be mined from it, they will guide agricultural development and reform.
Summary of the invention:
The object of the present invention is to provide a selection method of fine corn seeds based on a clustering-decision tree that can, as required, select fine corn seeds from the growth-and-development information, growth information, harvest information, and the like of corn, reduce labor intensity, and improve decision-making efficiency.
The technical solution of the present invention is as follows:
A selection method of fine corn seeds based on a clustering-decision tree, characterized by comprising the following steps:
(1) Data preprocessing: map the three selected attributes to a three-dimensional point in space; using the k-means algorithm among the clustering algorithms, compute the distances between all the measured points in the children table and the centers of gravity, compare them, and finally cluster all the records into two clusters, so that the records within each cluster have high similarity and different clusters have high dissimilarity;
(2) Establish the decision tree: first discretize the values of the three selected attributes and divide them into three classes, taking the 1000-kernel weight as the class attribute, with the classes labeled low yield, medium yield, and high yield; after the three attributes growth period, 1000-kernel weight, and plot yield are input, according to the Euclidean distance formula

d(x_i, x_j) = \left( \sum_{k=1}^{m} (x_{ik} - x_{jk})^2 \right)^{1/2}

the clustering cluster to which this three-dimensional point belongs can be determined; once the cluster is determined, ID3 data mining is performed on it: the information gain values of the growth period attribute and the plot yield attribute are computed, the attribute with the larger gain is taken as the test attribute, branches are drawn at the child nodes obtained by splitting on the test attribute, the whole record set is partitioned, and the process is repeated in turn to form a simplified decision tree, where the information gain formula of attribute S_k is

G(S_k) = H_S(S_j) - \sum_{k=1}^{K} \frac{n_k}{n_j} \left( -\sum_{i=1}^{c} \frac{n_{ik}}{n_k} \log_2 \frac{n_{ik}}{n_k} \right)

in which n_j is the number of occurrences at the total (parent) node, n_k is the number of occurrences of the child node whose predicted attribute value is V_k, and n_{ik} is the number of occurrences of the child node whose class attribute is C_i and whose predicted attribute value is V_k (a brief code sketch of these two formulas follows step (3));
(3) According to the decision tree thus formed, the system judges which class the input attribute values belong to, and by computing the distances between this three-dimensional point and the other points in that class it obtains the point with the minimum distance; the attributes of that point are the closest to the corn attributes of the input corn variety, and the male and female parents of that variety are then queried from the children table, which completes the corn seed selection function.
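For illustration only, the two formulas above can be written out in code. The following C++ sketch is not part of the patent text; the function names (euclideanDistance, entropy, informationGain) and the representation of records as vectors of doubles and per-class counts are assumptions made here.

```cpp
// Sketch of the two formulas of step (2): Euclidean distance between two
// attribute vectors, and the ID3 information gain of a candidate test attribute.
#include <cmath>
#include <vector>

// d(x_i, x_j) = ( sum_k (x_ik - x_jk)^2 )^(1/2)
double euclideanDistance(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (size_t k = 0; k < a.size(); ++k)
        sum += (a[k] - b[k]) * (a[k] - b[k]);
    return std::sqrt(sum);
}

// Entropy of a node given the number of records in each class
// (e.g. low / medium / high yield).
double entropy(const std::vector<int>& classCounts) {
    int n = 0;
    for (int c : classCounts) n += c;
    double h = 0.0;
    for (int c : classCounts) {
        if (c == 0) continue;
        double p = static_cast<double>(c) / n;
        h -= p * std::log2(p);
    }
    return h;
}

// G(S_k) = H(parent) - sum_k (n_k / n_j) * H(child_k):
// the gain of splitting the parent node into the given child nodes.
double informationGain(const std::vector<int>& parentCounts,
                       const std::vector<std::vector<int> >& childCounts) {
    int nParent = 0;
    for (int c : parentCounts) nParent += c;
    double g = entropy(parentCounts);
    for (const auto& child : childCounts) {
        int nChild = 0;
        for (int c : child) nChild += c;
        g -= (static_cast<double>(nChild) / nParent) * entropy(child);
    }
    return g;
}
```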
In combination with clustering and decision tree algorithms, the present invention can select fine corn seeds as required from the growth-and-development information, growth information, harvest information, and the like of corn, thereby reducing labor intensity and improving decision-making efficiency.
The selection method of fine corn seeds based on a clustering-decision tree takes the input corn attribute indices, uses the clustering and decision tree algorithms of data mining to find the corn variety most similar to those attributes, then finds the male and female parents of that variety in the children table and outputs their basic information, thus obtaining the male and female parents for breeding corn with the input attribute indices.
This selection method is implemented with the aid of computer software; it greatly reduces the labor intensity of manual fine-variety breeding and improves the decision-making efficiency and accuracy of fine corn seed selection.
The invention is further described below in conjunction with the drawings and an embodiment.
Fig. 1 is the final decision tree of the embodiment.
Fig. 2 is a description of the clustering algorithm.
Fig. 3 is a description of the decision tree algorithm.
Embodiment:
1. Select the sample set.
The original sample set comes from the data tables of a variety selection trial (Y group) gathered by an agricultural information group at the end of 2006. Because the original sample set contains a large amount of data and many corn varieties, for convenience of description only the eight records Y1-Y8 are selected for discussion; the selected sample set is shown in Table 1 below.
Table 1  Selected sample set
Corn variety code | Period (growth period) | Weight (1000-kernel weight) | Yield (plot yield)
Y1 | 100 | 200.8 | 6.73
Y2 | 101 | 269.8 | 7.83
Y3 | 99 | 287.3 | 6.70
Y4 | 101 | 303.5 | 7.54
Y5 | 100 | 255.5 | 7.56
Y6 | 100 | 210.8 | 7.13
Y7 | 100 | 227.0 | 6.38
Y8 | 101 | 324.5 | 9.28
Table 2  Overall clustering process
Iteration | Mean (cluster 1) | Mean (cluster 2) | New clusters produced | New mean (cluster 1) | New mean (cluster 2)
1 | (100, 200.8, 6.73) | (101, 269.8, 7.83) | {Y1,Y7}, {Y2,Y3,Y4,Y5,Y6,Y8} | (100, 213.9, 6.555) | (100.33, 275.23, 7.67)
2 | (100, 213.9, 6.555) | (100.33, 275.23, 7.67) | {Y1,Y6,Y7}, {Y2,Y3,Y4,Y5,Y8} | (100, 212.87, 6.75) | (100.4, 288.12, 7.782)
3 | (100, 212.87, 6.75) | (100.4, 288.12, 7.782) | {Y1,Y6,Y7}, {Y2,Y3,Y4,Y5,Y8} | (100, 212.87, 6.75) | (100.4, 288.12, 7.782)
2. Cluster the sample set as the preprocessing step of the decision tree algorithm. Table 2 shows the mean-value calculations, the clustering process, and the result.
First iteration: two objects chosen at random, say Y1 and Y2, are taken as the initial points; the objects nearest to each of these two points are found, producing the two clusters {Y1, Y7} and {Y2, Y3, Y4, Y5, Y6, Y8}. The mean of each resulting cluster is computed to obtain the mean points: for {Y1, Y7} the mean point is (100, 213.9, 6.555); for {Y2, Y3, Y4, Y5, Y6, Y8} it is (100.33, 275.23, 7.67).
Second iteration: the cluster membership of each object is adjusted according to the means, i.e. all objects are reassigned to whichever of the mean points (100, 213.9, 6.555) and (100.33, 275.23, 7.67) is nearer. Two new clusters are obtained: {Y1, Y6, Y7} and {Y2, Y3, Y4, Y5, Y8}. The cluster means are recomputed, giving the new mean points (100, 212.87, 6.75) and (100.4, 288.12, 7.782).
Third iteration: all objects are reassigned to whichever of the mean points (100, 212.87, 6.75) and (100.4, 288.12, 7.782) is nearer; the clusters are still {Y1, Y6, Y7} and {Y2, Y3, Y4, Y5, Y8}. No reassignment occurs, the criterion function has converged, and the procedure ends.
After three iterations, the two final clusters C1 and C2 are obtained, where C1 contains Y1, Y6, Y7 and C2 contains Y2, Y3, Y4, Y5, Y8.
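A minimal C++ sketch of this k-means step (k = 2) over the eight records of Table 1 is given below. It is an illustration, not the patent's own code; seeded with Y1 and Y2 as the initial centres, it converges to the same final clusters and centroids as Table 2, although the intermediate assignments can differ from the table depending on implementation details.

```cpp
// k-means with k = 2 on the eight Y-group records of Table 1.
#include <cmath>
#include <cstdio>
#include <vector>

struct Sample { const char* name; double p, w, y; };  // Period, Weight, Yield

static double dist(const Sample& s, const double c[3]) {
    return std::sqrt((s.p - c[0]) * (s.p - c[0]) +
                     (s.w - c[1]) * (s.w - c[1]) +
                     (s.y - c[2]) * (s.y - c[2]));
}

int main() {
    std::vector<Sample> data = {
        {"Y1", 100, 200.8, 6.73}, {"Y2", 101, 269.8, 7.83}, {"Y3", 99, 287.3, 6.70},
        {"Y4", 101, 303.5, 7.54}, {"Y5", 100, 255.5, 7.56}, {"Y6", 100, 210.8, 7.13},
        {"Y7", 100, 227.0, 6.38}, {"Y8", 101, 324.5, 9.28}};
    double centre[2][3] = {{100, 200.8, 6.73}, {101, 269.8, 7.83}};  // seeded with Y1 and Y2
    std::vector<int> label(data.size(), 0), prev;

    while (label != prev) {                        // stop when no record changes cluster
        prev = label;
        for (size_t i = 0; i < data.size(); ++i)   // assign each record to the nearer centre
            label[i] = dist(data[i], centre[0]) <= dist(data[i], centre[1]) ? 0 : 1;
        for (int c = 0; c < 2; ++c) {              // recompute each centre as the cluster mean
            double sum[3] = {0, 0, 0};
            int n = 0;
            for (size_t i = 0; i < data.size(); ++i)
                if (label[i] == c) { sum[0] += data[i].p; sum[1] += data[i].w; sum[2] += data[i].y; ++n; }
            for (int d = 0; d < 3; ++d) centre[c][d] = sum[d] / n;
        }
    }
    for (int c = 0; c < 2; ++c) {
        std::printf("C%d (centre %.2f, %.2f, %.2f):", c + 1, centre[c][0], centre[c][1], centre[c][2]);
        for (size_t i = 0; i < data.size(); ++i)
            if (label[i] == c) std::printf(" %s", data[i].name);
        std::printf("\n");
    }
    return 0;
}
```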
3. Input the selection conditions and determine the class they belong to.
A group of selection conditions is input at random, say Period = 101, Weight = 290, Yield = 7.2, which corresponds to the three-dimensional point P(101, 290, 7.2) in space. The distances from P to the two cluster centers are computed with the Euclidean distance formula, giving d1 = 77.14 and d2 = 2.06 (d1 is the distance from P to the centroid of class C1, d2 the distance from P to the centroid of class C2). Since d1 > d2, point P is assigned to class C2.
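The two distances quoted above can be checked in a few lines of C++, assuming the final cluster centroids from the preceding step are used as the cluster centers:

```cpp
// Distances from the input point P to the two cluster centroids.
#include <cmath>
#include <cstdio>

int main() {
    double p[3]  = {101, 290, 7.2};
    double c1[3] = {100, 212.87, 6.75};     // centroid of C1 = {Y1, Y6, Y7}
    double c2[3] = {100.4, 288.12, 7.782};  // centroid of C2 = {Y2, Y3, Y4, Y5, Y8}
    double d1 = 0, d2 = 0;
    for (int k = 0; k < 3; ++k) {
        d1 += (p[k] - c1[k]) * (p[k] - c1[k]);
        d2 += (p[k] - c2[k]) * (p[k] - c2[k]);
    }
    std::printf("d1 = %.2f, d2 = %.2f\n", std::sqrt(d1), std::sqrt(d2));  // ~77.14 and ~2.06
    return 0;
}
```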
4. Apply the ID3 algorithm to the selected cluster and select the breeding parents. First, the class attribute is determined to be Weight (samples with Weight <= 300 are low yield, samples with Weight > 300 are high yield); then the attributes Period and Yield are segmented (Period <= 100, Period > 100; Yield <= 7.6, Yield > 7.6), as shown in Table 3.
Table 3  Attribute values of the samples
Variety | Period | Yield | Class
Y2 | >100 | >7.6 | Low yield
Y3 | <=100 | <=7.6 | Low yield
Y4 | >100 | <=7.6 | High yield
Y5 | <=100 | <=7.6 | Low yield
Y8 | >100 | >7.6 | High yield
The expected information of the class attribute Weight is:
I(Weight) = -\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5} = 0.97;
The entropy of attribute Period is:
E(Period) = 0 - \frac{3}{5}\left(\frac{1}{3}\log_2\frac{1}{3} + \frac{2}{3}\log_2\frac{2}{3}\right) = 0.55;
The entropy of attribute Yield is:
E(Yield) = -\frac{3}{5}\left(\frac{1}{3}\log_2\frac{1}{3} + \frac{2}{3}\log_2\frac{2}{3}\right) - \frac{2}{5}\left(\frac{1}{2}\log_2\frac{1}{2} + \frac{1}{2}\log_2\frac{1}{2}\right) = 0.95;
The information gain of attribute Period is then: Gain(Period) = I(Weight) - E(Period) = 0.42;
The information gain of attribute Yield is: Gain(Yield) = I(Weight) - E(Yield) = 0.02.
Since attribute Period has the larger information gain, Period is taken as the test attribute and the decision tree is created, giving Fig. 1.
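The entropy and gain figures above can be reproduced with a short C++ sketch; the helper h and the hard-coded class counts, taken from Table 3, are assumptions of this illustration.

```cpp
// Reproduces the entropy and information-gain figures for the five records of
// cluster C2 (Table 3), under the discretisation described above.
#include <cmath>
#include <cstdio>

// Entropy of a two-class split with a records in one class and b in the other.
static double h(int a, int b) {
    double n = a + b, r = 0;
    if (a) r -= (a / n) * std::log2(a / n);
    if (b) r -= (b / n) * std::log2(b / n);
    return r;
}

int main() {
    // Class attribute Weight: 3 low-yield (Y2, Y3, Y5) vs 2 high-yield (Y4, Y8) records.
    double iWeight = h(3, 2);
    // Period split: >100 -> {Y2, Y4, Y8} = 1 low / 2 high; <=100 -> {Y3, Y5} = 2 low / 0 high.
    double ePeriod = (3.0 / 5) * h(1, 2) + (2.0 / 5) * h(2, 0);
    // Yield split: <=7.6 -> {Y3, Y4, Y5} = 2 low / 1 high; >7.6 -> {Y2, Y8} = 1 low / 1 high.
    double eYield = (3.0 / 5) * h(2, 1) + (2.0 / 5) * h(1, 1);
    std::printf("I(Weight) = %.2f\n", iWeight);                                   // 0.97
    std::printf("E(Period) = %.2f, Gain = %.2f\n", ePeriod, iWeight - ePeriod);   // 0.55, 0.42
    std::printf("E(Yield)  = %.2f, Gain = %.2f\n", eYield, iWeight - eYield);     // 0.95, 0.02
    return 0;
}
```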
According to the selection conditions input earlier, it can be determined that this variety falls in the low-yield class represented by Y2; only the male and female parents of Y2 need to be queried in the database to obtain the basic information of the parents expected for breeding.
2. Algorithm description
The selection method of fine corn seeds based on a clustering-decision tree can be summarized as follows: first, all samples are clustered to obtain the required clusters; the object to be judged is then assigned, according to its attributes, to the corresponding cluster; finally, the decision tree algorithm is applied within the small cluster that was found, the best decision tree is created, and the optimal solution is obtained. The clustering and decision tree algorithms are described in Fig. 2 and Fig. 3.
Following this selection method, a software system for fine corn seed selection based on the clustering-decision tree was designed and developed with VC++ programming and ACCESS database technology. The ACCESS database of the system contains a basic database, a knowledge base, and an inference base. The basic database comprises a user information table and a corn information table; the knowledge base comprises a parents table and a children table; and the inference base is built from rules summarized from the long-term breeding experience of corn breeding experts.
The corn information management module comprises two parts: information query operations, and the addition, deletion, and modification of basic corn information.
The function realized by the fine corn seed selection module is as follows: according to the input corn attributes, the clustering and decision tree algorithms of data mining are used to find the corn variety most similar to those attributes; the male and female parents of that variety are then found in the children table and their basic information is output to a list box.
The "corn information table (corn)" records the basic information of corn, including attributes such as the growth period, plant height, ear height, 1000-kernel weight, plot yield, and ratio to the control.
Table 4  Corn information table (corn)
Field name | Data type | Nullable | Constraint | Description
Corn_id | AutoNumber | No | Primary key | ID
Corn_name | Text | No | \ | Variety code
Corn_period | Text | No | \ | Growth period
Corn_high | Text | No | \ | Plant height
Corn_shigh | Text | No | \ | Ear height
Corn_length | Text | No | \ | Ear length
Corn_wide | Text | No | \ | Ear diameter
Corn_weight | Text | No | \ | 1000-kernel weight
Corn_yield | Text | No | \ | Plot yield
Corn_percent | Text | No | \ | Ratio to control
The "knowledge base - parents table (father)" records the parent flag, child information, and so on, as shown in Table 5.
Table 5  Parents table (father)
Field name | Data type | Nullable | Constraint | Description
Pid | AutoNumber | No | Primary key | ID
Pname | Text | No | \ | Variety code
Pyield | Text | No | \ | Plot yield
Pweight | Text | No | \ | 1000-kernel weight
Gperiod | Text | No | \ | Growth period
Plogo | Text | No | \ | Parent flag
Pnum | Text | No | \ | Number of children
Pfirst | Text | No | \ | Code of the first child
The "knowledge base - children table (son)" records the variety codes of the corresponding parents, as shown in Table 6.
Table 6  Children table (son)
Field name | Data type | Nullable | Constraint | Description
Sid | AutoNumber | No | Primary key | ID
Sname | Text | No | \ | Variety code
Syield | Text | No | \ | Plot yield
Sweight | Text | No | \ | 1000-kernel weight
Speriod | Text | No | \ | Growth period
Fathername | Text | No | \ | Male parent variety code
Mothername | Text | No | \ | Female parent variety code
The "inference base" is built on two rules summarized from the long-term breeding experience of corn breeding experts: ◇ the ratio of the offspring to the control is >= 0; ◇ the breeding period of the offspring is shorter than that of the parent. The parents for breeding corn are selected from the parents table accordingly.
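As an illustration only, the children-table lookup of step (3) and the two inference rules might be represented as follows in C++; the struct and function names are assumptions, and the in-memory search merely stands in for the actual ACCESS queries of the system.

```cpp
// Illustrative structs mirroring Tables 5 and 6, plus the two expert rules.
#include <string>
#include <vector>

struct ParentRow {                 // "father" table (Table 5)
    std::string Pname;             // variety code
    double Pyield, Pweight;        // stored as Text in ACCESS; treated as numbers here
    double Gperiod;                // growth period, as named in Table 5
};

struct ChildRow {                  // "son" table (Table 6)
    std::string Sname;             // variety code
    double Syield, Sweight, Speriod;
    std::string Fathername, Mothername;  // male and female parent variety codes
};

// The two rules of the inference base:
//   (1) the offspring's ratio to the control must be >= 0;
//   (2) the offspring's breeding period must be shorter than the parent's.
bool satisfiesBreedingRules(double childRatioToControl, double childPeriod, double parentPeriod) {
    return childRatioToControl >= 0 && childPeriod < parentPeriod;
}

// Look up the male and female parents of a variety code in the children table.
bool findParents(const std::vector<ChildRow>& sons, const std::string& code,
                 std::string& father, std::string& mother) {
    for (const ChildRow& s : sons)
        if (s.Sname == code) { father = s.Fathername; mother = s.Mothername; return true; }
    return false;
}
```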
The "fine corn seed selection module" takes the input corn attribute indices, uses the clustering and decision tree algorithms of data mining to find the corn variety most similar to those attributes, then finds the male and female parents of that variety in the children table and outputs their basic information, thereby obtaining the male and female parents for breeding corn with the input attribute indices. This selection method is implemented with the aid of computer software; it greatly reduces the labor intensity of manual fine-variety breeding and improves the decision-making efficiency and accuracy of fine corn seed selection.

Claims (1)

1. A selection method of fine corn seeds based on a clustering-decision tree, characterized by comprising the following steps:
(1) Data preprocessing: map the three selected attributes to a three-dimensional point in space; using the k-means algorithm among the clustering algorithms, compute the distances between all the measured points in the children table and the centers of gravity, compare them, and finally cluster all the records into two clusters, so that the records within each cluster have high similarity and different clusters have high dissimilarity;
(2) Establish the decision tree: first discretize the values of the three selected attributes and divide them into three classes, taking the 1000-kernel weight as the class attribute, with the classes labeled low yield, medium yield, and high yield; after the three attributes growth period, 1000-kernel weight, and plot yield are input, according to the Euclidean distance formula

d(x_i, x_j) = \left( \sum_{k=1}^{m} (x_{ik} - x_{jk})^2 \right)^{1/2}

the clustering cluster to which this three-dimensional point belongs can be determined; once the cluster is determined, ID3 data mining is performed on it: the information gain values of the growth period attribute and the plot yield attribute are computed, the attribute with the larger gain is taken as the test attribute, branches are drawn at the child nodes obtained by splitting on the test attribute, the whole record set is partitioned, and the process is repeated in turn to form a simplified decision tree, where the information gain formula of attribute S_k is

G(S_k) = H_S(S_j) - \sum_{k=1}^{K} \frac{n_k}{n_j} \left( -\sum_{i=1}^{c} \frac{n_{ik}}{n_k} \log_2 \frac{n_{ik}}{n_k} \right)

in which n_j is the number of occurrences at the total (parent) node, n_k is the number of occurrences of the child node whose predicted attribute value is V_k, and n_{ik} is the number of occurrences of the child node whose class attribute is C_i and whose predicted attribute value is V_k;
(3) According to the decision tree thus formed, the system judges which class the input attribute values belong to, and by computing the distances between this three-dimensional point and the other points in that class it obtains the point with the minimum distance; the attributes of that point are the closest to the corn attributes of the input corn variety, and the male and female parents of that variety are then queried from the children table, which completes the corn seed selection function.
CN2009102334472A 2009-10-30 2009-10-30 Clustering-decision tree based selection method of fine corn seeds Expired - Fee Related CN101697167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102334472A CN101697167B (en) 2009-10-30 2009-10-30 Clustering-decision tree based selection method of fine corn seeds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009102334472A CN101697167B (en) 2009-10-30 2009-10-30 Clustering-decision tree based selection method of fine corn seeds

Publications (2)

Publication Number Publication Date
CN101697167A CN101697167A (en) 2010-04-21
CN101697167B true CN101697167B (en) 2011-12-14

Family

ID=42142272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102334472A Expired - Fee Related CN101697167B (en) 2009-10-30 2009-10-30 Clustering-decision tree based selection method of fine corn seeds

Country Status (1)

Country Link
CN (1) CN101697167B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156710A (en) * 2011-03-02 2011-08-17 上海大学 Plant identification method based on cloud model and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method
CN103004578A (en) * 2012-11-16 2013-04-03 安阳工学院 Intelligent decision making system for similarity and difference culture of crops
CN103020864B (en) * 2012-12-07 2014-03-12 南通大学 Corn fine breed breeding method
KR102524360B1 (en) * 2016-07-18 2023-04-24 강원대학교산학협력단 Determination technique of corn grade using fuzzy clustering for the control of sterilization at producing corn sterilized at high temperature and pressure
CN108108759B (en) * 2017-12-19 2021-11-02 四川九洲电器集团有限责任公司 Dynamic grouping method for multiple intelligent agents
CN111105083A (en) * 2019-12-11 2020-05-05 成都信息工程大学 Crop growth early warning method and device based on data mining
CN111080524A (en) * 2019-12-19 2020-04-28 吉林农业大学 Plant disease and insect pest identification method based on deep learning
CN114951047B (en) * 2022-05-26 2023-08-22 河海大学 Universal intelligent sorting method in vibration feeding based on optical fiber sensor
CN117933580B (en) * 2024-03-25 2024-05-31 河北省农林科学院农业信息与经济研究所 Breeding material optimization evaluation method for wheat breeding management system

Also Published As

Publication number Publication date
CN101697167A (en) 2010-04-21

Similar Documents

Publication Publication Date Title
CN101697167B (en) Clustering-decision tree based selection method of fine corn seeds
Huang et al. Automated variable weighting in k-means type clustering
CN104199857B (en) A kind of tax document hierarchy classification method based on multi-tag classification
CN107193967A (en) A kind of multi-source heterogeneous industry field big data handles full link solution
CN103136355B (en) A kind of Text Clustering Method based on automatic threshold fish-swarm algorithm
CN103744928A (en) Network video classification method based on historical access records
CN115795131B (en) Electronic file classification method and device based on artificial intelligence and electronic equipment
CN109117992A (en) Ultra-short term wind power prediction method based on WD-LA-WRF model
CN104504018A (en) Top-down real-time big data query optimization method based on bushy tree
Ibrahim et al. Compact weighted class association rule mining using information gain
CN105930531A (en) Method for optimizing cloud dimensions of agricultural domain ontological knowledge on basis of hybrid models
Si et al. Establishment and improvement of financial decision support system using artificial intelligence and big data
CN110619364A (en) Wavelet neural network three-dimensional model classification method based on cloud model
Hsu et al. An integrated framework for visualized and exploratory pattern discovery in mixed data
CN101763476A (en) Multilevel security policy conversion method
Goyle et al. Dataassist: A machine learning approach to data cleaning and preparation
CN105787113A (en) Mining algorithm for DPIPP (distributed parameterized intelligent product platform) process information on basis of PLM (product lifecycle management) database
CN105335499A (en) Document clustering method based on distribution-convergence model
Sun et al. Dynamic Intelligent Supply-Demand Adaptation Model Towards Intelligent Cloud Manufacturing.
CN103020864B (en) Corn fine breed breeding method
CN101984431B (en) Automatic prediction method of network news expression distribution
Reddy A review of data warehouses multidimensional model and data mining
Ma et al. Pbar: Parallelized brain storm optimization for association rule mining
Ferranti et al. A multi-objective evolutionary fuzzy system for big data
Rajaboevich et al. Models and algorithms for solving problems associated with large amounts of data in the military sphere

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Free format text: FORMER OWNER: JI DAN CHEN JIANPING GU XIANG LI FEN

Owner name: NANTONG UNIVERSITY

Free format text: FORMER OWNER: QIU JIANLIN

Effective date: 20101227

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 226019 COLLEGE OF COMPUTER SCIENCE, NANTONG UNIVERSITY, NO.9, SEYUAN ROAD, NANTONG CITY, JIANGSU PROVINCE TO: 226019 NO.9, SEYUAN ROAD, NANTONG CITY, JIANGSU PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20101227

Address after: No. 9 Seyuan Road, Nantong City, Jiangsu Province, 226019

Applicant after: Nantong University

Address before: College of Computer Science, Nantong University, No. 9 Seyuan Road, Nantong City, Jiangsu Province, 226019

Applicant before: Qiu Jianlin

Co-applicant before: Ji Dan

Co-applicant before: Chen Jianping

Co-applicant before: Gu Xiang

Co-applicant before: Li Fen

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111214

Termination date: 20151030

EXPY Termination of patent right or utility model