CN115186776B - Method, device and storage medium for classifying ruby producing areas - Google Patents
- Publication number
- CN115186776B CN115186776B CN202211107096.2A CN202211107096A CN115186776B CN 115186776 B CN115186776 B CN 115186776B CN 202211107096 A CN202211107096 A CN 202211107096A CN 115186776 B CN115186776 B CN 115186776B
- Authority
- CN
- China
- Prior art keywords
- sample
- ruby
- origin
- training
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06N20/00 Machine learning; G06N20/20 Ensemble learning (G Physics; G06 Computing; G06N Computing arrangements based on specific computational models)
- G06N3/02 Neural networks; G06N3/08 Learning methods; G06N3/084 Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a method, a device and a storage medium for classifying the origin of ruby. The method comprises the following steps: acquiring training sample data to generate a training set; randomly selecting a plurality of sample features from the training set to generate a plurality of feature sets; generating a plurality of decision trees from the plurality of feature sets; acquiring feature data of a sample to be tested and inputting it into the decision trees so that each decision tree outputs an origin prediction, and selecting, from the predictions of all the decision trees, the N most frequently predicted origins as a coarse classification result; inputting the feature data of the sample to be tested into the N corresponding perceptrons according to the coarse classification result to obtain N origin judgment results; and calculating the loss function of each of the N perceptrons on the feature data of the sample to be tested, and selecting the origin judgment result output by the perceptron with the smallest loss value as the origin classification result, thereby improving the precision of ruby origin classification.
Description
Technical Field
The invention relates to the technical field of ruby origin identification, in particular to a ruby origin classification method, a ruby origin classification device and a storage medium.
Background
Ruby is corundum that is red in color. It belongs to the corundum group of minerals, whose main component is aluminum oxide, and it may contain trace impurity elements such as vanadium, chromium, iron and titanium. The chromogenic element of ruby is chromium, which produces colors from red to pink; the higher the chromium content, the more vivid the color. Ruby is mainly produced in Myanmar, Mozambique, Thailand, Sri Lanka, Madagascar, Vietnam, Tanzania and other countries. Because rubies from different origins command markedly different prices, market demand for methods of identifying the origin of a ruby is strong. The main means at present is to identify the origin from the macroscopic appearance of the ruby with the participation of a gem expert, which suffers from high cost, poor repeatability of the identification process, low accuracy and insufficient precision. Methods based on spectral characteristics suffer from overlapping and indistinct features. The application of artificial intelligence in the jewelry identification industry is still at an early stage with a very low degree of intelligence, amounting to simple linear discrimination, normalization processing and the like, so its depth remains to be improved. Non-destructive testing of ruby chemical composition is generally performed without ruby standard samples, so the data cannot be traced and suffer from matrix-effect instability, yielding data with a low degree of quantification, low accuracy and large errors. Moreover, existing studies cover only individual samples from individual origins and lack a complete origin database with autonomous analysis, so accuracy is low and the risk of origin misjudgment is large.
Disclosure of Invention
The invention provides a method, a device and a storage medium for classifying ruby producing areas, which improve the precision of ruby origin classification by combining quantitative, traceable detection against a historical identification database with the classification computation of artificial intelligence algorithms.
In order to improve the accuracy of ruby origin classification, an embodiment of the invention provides a method for classifying the origin of ruby, which comprises the following steps: obtaining training sample data to generate a training set, wherein the training sample data comprises ruby training samples, corresponding origin classification information and corresponding lossless quantitative curves, and each ruby training sample takes its characteristic chemical elements as sample features; randomly selecting a plurality of sample features from the training set to generate a feature set, and repeating the random selection several times to obtain a plurality of feature sets, each comprising sample features and origin classification information; and generating a plurality of decision trees from the plurality of feature sets;
acquiring feature data of a sample to be tested and inputting it into the plurality of decision trees so that each decision tree outputs an origin prediction, and selecting, from the origin predictions of all the decision trees, the N most frequently predicted origins (ranked by vote count from most to least) as a coarse classification result, where N is a positive integer;
calculating, from the data of the training set, the loss functions of a plurality of perceptrons that each judge a different origin; calculating the gradient of each hidden layer of the corresponding perceptron from its loss function and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of its hidden layers on the data of the different training sets until the loss function of each perceptron converges; thereby obtaining a plurality of perceptrons that judge different origins, each perceptron judging one origin for the input data;
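The training loop above (compute the loss, back-propagate per-layer gradients, descend until the loss converges) can be sketched as follows. This is a minimal illustrative NumPy implementation, not the patent's own code; the network size, learning rate and tolerance are assumed values:

```python
import numpy as np

def train_origin_perceptron(X, y, hidden=8, lr=0.1, tol=1e-9, max_epochs=5000, seed=0):
    """Train a tiny one-hidden-layer binary classifier by gradient descent.

    X: (n_samples, n_features) trace-element features; y: 1 if the sample
    comes from this perceptron's origin, else 0. All names and sizes here
    are illustrative assumptions, not taken from the patent text.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    prev_loss = np.inf
    loss = prev_loss
    for _ in range(max_epochs):
        h = sigmoid(X @ W1 + b1)              # hidden-layer activations
        z = sigmoid(h @ W2 + b2).ravel()      # predicted probability
        loss = 0.5 * np.mean((y - z) ** 2)    # squared-error objective
        if abs(prev_loss - loss) < tol:       # loss has converged
            break
        prev_loss = loss
        # backpropagation (chain rule) of the squared-error loss
        dz = ((z - y) * z * (1 - z))[:, None] / n
        dW2 = h.T @ dz; db2 = dz.sum(0)
        dh = dz @ W2.T * h * (1 - h)
        dW1 = X.T @ dh; db1 = dh.sum(0)
        # gradient descent on each layer's parameters
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return (W1, b1, W2, b2), loss

def predict_origin(params, X):
    """Probability that each row of X comes from this perceptron's origin."""
    W1, b1, W2, b2 = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
```

One such perceptron would be trained per candidate origin, each on its own binary (this origin versus the rest) labelling of the training set.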
inputting the feature data of the sample to be tested into the corresponding N perceptrons according to the coarse classification result to obtain N origin judgment results, specifically:
inputting the feature data of the sample to be tested into each of the N corresponding perceptrons, and judging whether the sample originates from each predicted origin using a multilayer fully connected neural network with a sigmoid activation function, trained by chain-rule differentiation in the neural network back propagation algorithm, to obtain N origin judgment results; and calculating the loss function of each of the N perceptrons on the feature data of the sample to be tested, and selecting the origin judgment result output by the perceptron with the smallest loss value as the origin classification result.
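Putting the claimed pipeline together: coarse classification counts the votes of the decision trees, and fine classification picks the candidate origin whose perceptron attains the smallest loss. A minimal sketch, with trees and per-origin perceptron losses represented as plain callables (all names and values here are illustrative, not from the patent):

```python
from collections import Counter

def coarse_classify(trees, x, n_top):
    """Each 'tree' maps a feature vector x to a predicted origin label.
    Count the votes over all trees and keep the n_top most-voted origins."""
    votes = Counter(tree(x) for tree in trees)
    return [origin for origin, _ in votes.most_common(n_top)]

def fine_classify(losses, x, candidates):
    """losses[origin](x) gives the loss of the origin's perceptron on x.
    The judgment of the perceptron with the smallest loss is selected."""
    return min(candidates, key=lambda origin: losses[origin](x))
```

In use, `coarse_classify` narrows the decision to N candidate origins and `fine_classify` resolves the final origin among them.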
As a preferred scheme, the lossless quantitative curves of the training samples are used as training data, so that the detection of ruby component elements is highly quantitative and its accuracy and repeatability are improved. Based on the detection of ruby component elements, coarse classification is performed with a random forest algorithm and fine classification with a deep neural network of multilayer perceptrons, achieving origin classification: the component element features of the ruby are screened and analyzed by artificial intelligence algorithms, origin judgment is automated, and the precision of ruby origin classification is improved.
As a preferred scheme, acquiring training sample data to generate a training set, specifically:
obtaining a large number of ruby training samples of known origin; classifying the ruby training samples by origin to obtain the corresponding origin classification information; and determining the characteristic chemical elements of the ruby training samples and establishing corresponding lossless quantitative curves in a lossless component analysis instrument, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the lossless quantitative curves cover the trace element contents of the ruby, their ratios, and linear combinations thereof;
and generating a training set by taking the ruby training sample, the corresponding origin classification information and the corresponding lossless quantitative curve as training sample data, wherein the ruby training sample takes corresponding characteristic chemical elements as the characteristics of the training sample.
As a preferred scheme, the method collects the component element features of rubies produced in different mining areas non-destructively, ensuring the integrity of the ruby samples. By collecting the component element features of rubies from different mining areas, including the trace element contents together with their ratios and linear combinations, as training data, an accurate historical identification database is established. Using lossless quantitative curves of the component element features as training data makes the detection of ruby component elements highly quantitative, improves its accuracy and repeatability, and improves the precision of ruby origin classification.
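As an illustration of how one training record might be assembled from such measurements, the sketch below builds a flat feature vector from the ten characteristic elements: raw contents, pairwise content ratios, and one linear combination. The element symbols follow the patent; the particular combination weights and units are invented for the example:

```python
import itertools

ELEMENTS = ["Si", "Mg", "K", "Ca", "Ti", "V", "Cr", "Fe", "Ga", "Zn"]

def build_features(contents, combo_weights=None):
    """contents: measured content per element (e.g. ppm) obtained from the
    non-destructive quantitative curve. Returns a flat feature dict of raw
    contents, pairwise ratios, and one linear combination. combo_weights
    is an illustrative default, not a value from the patent."""
    feats = {el: contents[el] for el in ELEMENTS}
    for a, b in itertools.permutations(ELEMENTS, 2):
        if contents[b] != 0:
            feats[f"{a}/{b}"] = contents[a] / contents[b]
    w = combo_weights or {"Cr": 1.0, "Fe": -0.5, "Ti": 0.25}
    feats["combo"] = sum(w[el] * contents[el] for el in w)
    return feats
```

Each training sample would then pair such a feature dict with its known origin label.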
As a preferred scheme, a plurality of decision trees are generated from the plurality of feature sets, specifically:
for each decision tree, the feature selected by the Gini index is taken as the root node, the non-leaf nodes of the decision tree are set as decision nodes, and the leaf nodes are set as output units, wherein each decision node holds a sample feature and a corresponding decision value, and each leaf node corresponds to an origin prediction result.
As a preferred scheme, several features are randomly selected from the full training set to form a feature set from which a decision tree is generated, yielding a plurality of decision trees with random features. The decision nodes of each tree test the sample features one by one, and each tree finally outputs an origin prediction; the ensemble of decision trees built from random feature combinations screens the component element features of the ruby under test and coarsely classifies its origin. Coarse classification by screening component element features with a random forest algorithm improves the precision of ruby origin classification.
As a preferred scheme, after generating a plurality of decision trees, the method further comprises:
in the training process of the plurality of decision trees, the splitting feature is selected by evaluating the impurity reduction with the information gain or the Gini index, specifically as follows:
when the splitting feature is selected by evaluating the impurity reduction with the information gain, the feature f is divided into m value intervals, and the feature is used to split the sample set X at the node into m branch nodes, where the j-th branch node contains the sample set $X^j$, the subset of samples of X whose value of the feature f falls in the j-th value interval;
the information gain obtained by splitting the sample set X with the feature f is:

$$\mathrm{Gain}(X, f) = H(X) - \sum_{j=1}^{m} \frac{|X^j|}{|X|}\, H(X^j)$$

where $|X^j|/|X|$ is the proportion of the samples of X falling in the subset $X^j$, and $H(X^j)$ is the information entropy of the sample set $X^j$;
given the candidate features f, the splitting feature is selected by the criterion of largest information gain:

$$f^{*} = \arg\max_{f} \mathrm{Gain}(X, f)$$
when the splitting feature is selected by evaluating the impurity reduction with the Gini index, the Gini value of the sample set X at the node is

$$\mathrm{Gini}(X) = 1 - \sum_{i} p_i^{2}$$

where $p_i$ is the proportion of the samples belonging to class i among the samples at the node; the Gini index of the feature f with respect to the sample set X is

$$\mathrm{Gini}(X, f) = \sum_{j=1}^{m} \frac{|X^j|}{|X|}\, \mathrm{Gini}(X^j)$$

and feature selection uses the Gini criterion:

$$f^{*} = \arg\min_{f} \mathrm{Gini}(X, f)$$
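The two impurity criteria above can be checked with a few lines of Python. This is a generic illustration of information gain and the Gini index on labelled samples, not code from the patent; samples are (feature dict, label) pairs, and for simplicity each distinct feature value is treated as its own interval:

```python
import math
from collections import Counter

def entropy(labels):
    """Information entropy H of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini value 1 - sum(p_i^2) of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split(samples, feature):
    """Group (features, label) pairs by their value of `feature`."""
    groups = {}
    for feats, label in samples:
        groups.setdefault(feats[feature], []).append(label)
    return list(groups.values())

def information_gain(samples, feature):
    """Gain(X, f) = H(X) - sum_j |X^j|/|X| * H(X^j)."""
    labels = [lab for _, lab in samples]
    n = len(samples)
    return entropy(labels) - sum(len(g) / n * entropy(g)
                                 for g in split(samples, feature))

def gini_index(samples, feature):
    """Gini(X, f) = sum_j |X^j|/|X| * Gini(X^j)."""
    n = len(samples)
    return sum(len(g) / n * gini(g) for g in split(samples, feature))
```

The splitting feature is then the one maximizing `information_gain` or minimizing `gini_index` at the node.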
preferably, in the training process of the decision trees, the splitting features are selected by evaluating the impurity reduction with the information gain or the Gini index. This feature selection drives automatic node splitting and adaptive feature weighting, singles out the key features as the focus of origin classification, increases their influence on the origin judgment, and improves the precision of ruby origin classification.
As a preferred scheme, from the origin predictions of all the decision trees, the N most frequently predicted origins are selected as the coarse classification result, specifically:
the origin predictions of all the decision trees are input into a combiner for decision voting. Let the feature data of the sample to be tested be x, and let the random forest model formed by all the decision trees be $H = \{h_1, h_2, \ldots, h_K\}$, where K is the number of decision trees in the random forest model and $h_k$ is the k-th decision tree. The output of the feature data x in the random forest tree $h_k$ is $h_k(x)$, and the total number of votes for origin prediction class i is:

$$V(i) = \sum_{j=1}^{K} w_{j}^{i} \cdot I\big(h_j(x) = i\big)$$

where $w_j^i$ is the weight with which decision tree j votes for origin prediction class i, K is the number of decision trees in the random forest model, and $I(\cdot)$ is the indicator function;
and the first N predicted origins with the highest vote counts are selected as the coarse classification result.
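The weighted vote above reduces to a short tally. A sketch, with made-up tree outputs and weights (an empty weight map falls back to plain majority voting):

```python
def weighted_votes(tree_outputs, weights):
    """tree_outputs[j] is the origin predicted by decision tree j;
    weights[j] maps an origin class to tree j's voting weight for it
    (default 1.0, i.e. unweighted majority voting)."""
    totals = {}
    for j, origin in enumerate(tree_outputs):
        totals[origin] = totals.get(origin, 0.0) + weights[j].get(origin, 1.0)
    return totals

def top_n(totals, n):
    """The N origins with the highest total votes, best first."""
    return sorted(totals, key=totals.get, reverse=True)[:n]
```

With per-tree weights all equal to 1, `top_n(weighted_votes(...), n)` is exactly the coarse classification step.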
As a preferable scheme, the method screens the component element features of the ruby under test with decision trees built from random feature combinations to obtain a plurality of origin predictions, keeps the N origins with the most votes, and thereby restricts the sample to one of these N origins out of all possible origins. Coarse classification by screening component element features with a random forest algorithm improves the precision of ruby origin classification.
As a preferred scheme, the feature data of the sample to be tested is input into the corresponding N perceptrons to obtain N origin judgment results, specifically:
a single fully connected layer in the perceptron is:

$$y = f(Wx) = f\Big(\sum_{i} w_i x_i\Big)$$

where f is the activation function, x is the sample to be tested, and the perceptron judges origin i; W is the single-layer fully connected transformation matrix applied to the sample x; $w_i$ is the i-th column of the transformation matrix; and $x_i$ is the i-th component of x;
the neural network back propagation objective function is:

$$E = \frac{1}{2}\,\lVert t - z \rVert^{2}$$

where x is the sample to be tested, t is the expected output, and z is the actual output for x.
The perceptron is built by stacking several such fully connected hidden layers, with the sizes of the different hidden layers set layer by layer.
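The single-layer map and the back-propagation objective written out above can be exercised directly. A NumPy sketch (matrix shapes chosen purely for illustration):

```python
import numpy as np

def sigmoid(a):
    """Sigmoid activation f used by the perceptron."""
    return 1.0 / (1.0 + np.exp(-a))

def single_layer(W, x):
    """y = f(Wx): one fully connected layer; column w_i of W multiplies
    component x_i of the input, matching the formula in the text."""
    return sigmoid(W @ x)

def objective(t, z):
    """Back-propagation objective E = (1/2) * ||t - z||^2."""
    return 0.5 * float(np.sum((t - z) ** 2))
```

Stacking calls to `single_layer` with matrices of the chosen hidden sizes gives the multilayer perceptron; `objective` is what gradient descent minimizes during training.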
as a preferred scheme, the method screens the component element features of the ruby under test with a random forest algorithm to coarsely classify its origin to one of N candidates, then finely classifies the sample with a deep neural network of multilayer perceptrons to determine the final origin. Analyzing the ruby component element features with artificial intelligence algorithms automates origin judgment and improves the precision of ruby origin classification.
Correspondingly, the invention also provides a device for classifying the origin of the ruby, which comprises: the device comprises a training module, a rough classification module and a classification judgment module;
the training module is used for acquiring training sample data to generate a training set, wherein the training sample data comprises ruby training samples, corresponding origin classification information and corresponding lossless quantitative curves, and each ruby training sample takes its characteristic chemical elements as sample features; randomly selecting a plurality of sample features from the training set to generate a feature set, and repeating the random selection several times to obtain a plurality of feature sets, each comprising sample features and origin classification information; and generating a plurality of decision trees from the plurality of feature sets;
the rough classification module is used for acquiring feature data of a sample to be tested and inputting it into the plurality of decision trees so that each decision tree outputs an origin prediction, and for selecting, from the origin predictions of all the decision trees, the N most frequently predicted origins as the coarse classification result, where N is a positive integer;
the classification judgment module is used for calculating, from the data of the training set, the loss functions of a plurality of perceptrons that each judge a different origin; calculating the gradient of each hidden layer of the corresponding perceptron from its loss function and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of its hidden layers on the data of the different training sets until the loss function of each perceptron converges; obtaining a plurality of perceptrons that judge different origins, each perceptron judging one origin for the input data; and inputting the feature data of the sample to be tested into the corresponding N perceptrons according to the coarse classification result to obtain N origin judgment results, specifically: inputting the feature data of the sample to be tested into each of the N corresponding perceptrons, judging whether the sample originates from each predicted origin using a multilayer fully connected neural network with a sigmoid activation function trained by chain-rule differentiation in the neural network back propagation algorithm, to obtain N origin judgment results; calculating the loss function of each of the N perceptrons on the feature data of the sample to be tested; and selecting the origin judgment result output by the perceptron with the smallest loss value as the origin classification result.
As a preferred scheme, the training module of the ruby origin classification device uses the lossless quantitative curves of the training samples as training data, so that the detection of ruby component elements is highly quantitative and its accuracy and repeatability are improved; the rough classification module performs coarse classification with a random forest algorithm based on the detected component elements, and the classification judgment module performs fine classification with a deep neural network of multilayer perceptrons, achieving ruby origin classification: the component element features are screened and analyzed by artificial intelligence algorithms, origin judgment is automated, and the precision of ruby origin classification is improved.
Preferably, the coarse classification module comprises: a decision voting unit and a classification result unit;
the decision voting unit is used for inputting the origin predictions of all the decision trees into a combiner for decision voting. Let the feature data of the sample to be tested be x, and let the random forest model formed by all the decision trees be $H = \{h_1, h_2, \ldots, h_K\}$, where K is the number of decision trees in the random forest model and $h_k$ is the k-th decision tree; the output of the feature data x in the random forest tree $h_k$ is $h_k(x)$, and the total number of votes for origin prediction class i is:

$$V(i) = \sum_{j=1}^{K} w_{j}^{i} \cdot I\big(h_j(x) = i\big)$$

where $w_j^i$ is the weight with which decision tree j votes for origin prediction class i, and K is the number of decision trees in the random forest model;

and the classification result unit is used for selecting the first N predicted origins with the highest vote counts as the coarse classification result.
As a preferred scheme, the rough classification module screens the component element features of the ruby under test with decision trees built from random feature combinations to obtain a plurality of origin predictions, keeps the N origins with the most votes, and thereby restricts the sample to one of these N origins out of all possible origins. Coarse classification by screening component element features with a random forest algorithm improves the precision of ruby origin classification.
Preferably, the classification judgment module comprises a judgment result unit, in which:

a single fully connected layer in the perceptron is:

$$y = f(Wx) = f\Big(\sum_{i} w_i x_i\Big)$$

where f is the activation function, x is the sample to be tested, and the perceptron judges origin i; W is the single-layer fully connected transformation matrix applied to the sample x; $w_i$ is the i-th column of the transformation matrix; and $x_i$ is the i-th component of x;

the neural network back propagation objective function is:

$$E = \frac{1}{2}\,\lVert t - z \rVert^{2}$$

where x is the sample to be tested, t is the expected output, and z is the actual output for x.

The perceptron is built by stacking several such fully connected hidden layers, with the sizes of the different hidden layers set layer by layer.
as a preferred scheme, the device screens the component element features of the ruby under test with a random forest algorithm to coarsely classify its origin to one of N candidates, then finely classifies the sample with a deep neural network of multilayer perceptrons to determine the final origin. Analyzing the ruby component element features with artificial intelligence algorithms automates origin judgment and improves the precision of ruby origin classification.
Accordingly, the present invention also provides a computer readable storage medium comprising a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer readable storage medium is located to perform a method of classifying a region of origin of a ruby according to the present disclosure.
Drawings
FIG. 1 is a schematic flow chart diagram of one embodiment of a method for classifying the origin of ruby provided by the present invention;
fig. 2 is a schematic structural diagram of an embodiment of the ruby origin sorting device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a method for classifying a producing area of a ruby according to an embodiment of the present invention includes steps S101 to S103:
step S101: obtaining training sample data to generate a training set, wherein the training sample data comprise ruby training samples, corresponding origin classification information and corresponding lossless quantitative curves, and the ruby training samples take corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature set comprises the sample features and the classification information of the origin; and generating a plurality of decision trees according to a plurality of feature sets.
In this embodiment, obtaining training sample data to generate a training set specifically includes:
obtaining a large number of ruby training samples of known production areas; classifying the ruby training samples according to the producing areas to obtain corresponding producing area classification information; determining characteristic chemical elements of the ruby training sample, and establishing a corresponding lossless quantitative curve in a lossless component analysis instrument, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the lossless quantitative curve comprises the trace element content, the trace element content ratio and the linear combination of the trace element content of the ruby;
and generating a training set by taking the ruby training sample, the corresponding origin classification information and the corresponding lossless quantitative curve as training sample data, wherein the ruby training sample takes corresponding characteristic chemical elements as the characteristics of the training sample.
In this example, 659 ruby training samples were obtained from a total of 9 producing areas, and the samples were classified by producing area as shown in the following table:
3 samples were selected from each of the 9 available producing areas, for a total of 27 samples, and ten characteristic element content values were extracted from each of the 27 samples. The characteristic chemical elements of the ruby training samples were determined to be silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and 10 corresponding lossless quantitative curves were established in the lossless component analysis instrument (EDXRF) to obtain the training sample data, wherein the curves comprise the trace element contents of the ruby training samples, ratios of the trace element contents, and linear combinations of the trace element contents. 100 feature combinations of the ruby training samples were then obtained by permuting and combining the 10 features.
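One way the 100 feature combinations might be derived from the 10 element contents is sketched below. This is an illustrative reconstruction only: the patent does not specify the decomposition, and the function name, element ordering, and the choice of "10 raw contents + 45 pairwise ratios + 45 pairwise sums" are assumptions that happen to total 100.

```python
from itertools import combinations

# Hypothetical sketch: derive candidate features from the 10 characteristic
# element contents (raw contents, pairwise content ratios, and simple
# linear combinations), mirroring the lossless quantitative curves above.
ELEMENTS = ["Si", "Mg", "K", "Ca", "Ti", "V", "Cr", "Fe", "Ga", "Zn"]

def build_feature_combinations(contents):
    """contents: dict mapping element symbol -> measured content value."""
    features = {}
    for el in ELEMENTS:                       # 10 raw contents
        features[el] = contents[el]
    for a, b in combinations(ELEMENTS, 2):    # 45 pairwise ratios
        if contents[b] != 0:
            features[f"{a}/{b}"] = contents[a] / contents[b]
    for a, b in combinations(ELEMENTS, 2):    # 45 pairwise sums
        features[f"{a}+{b}"] = contents[a] + contents[b]
    return features

sample = {el: float(i + 1) for i, el in enumerate(ELEMENTS)}
feats = build_feature_combinations(sample)
print(len(feats))  # 10 + 45 + 45 = 100 candidate features
```

Any other combination scheme with the same total would fit the embodiment equally well; only the count of 100 is stated in the source.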
In this embodiment, 10 sample features are randomly selected from 100 feature combinations in the training set to generate a feature set, the random selection operation is repeated K times to obtain K feature sets, K is a positive integer, corresponding K decision trees are constructed according to the K feature sets, and a random forest is constructed by the K decision trees.
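The random selection of feature subsets for the K decision trees can be sketched as follows; the function name and the concrete value K = 50 are illustrative assumptions (the embodiment only requires K to be a positive integer), while the sizes 100 and 10 are the embodiment's own values.

```python
import random

def draw_feature_sets(n_features=100, subset_size=10, k=50, seed=0):
    """Draw K random feature subsets, one per decision tree, by sampling
    subset_size distinct feature indices out of n_features candidates."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(k)]

feature_sets = draw_feature_sets()
print(len(feature_sets), len(feature_sets[0]))  # K sets of 10 features each
```

Each subset then drives the construction of one decision tree, and the K trees together form the random forest.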
In this embodiment, the content of the training sample data is read, including the number of data records, the number of trace elements, the trace element contents, and the mapping between producing area names and serial numbers; the corresponding training sample data is then packed into matrix form for the decision tree construction operation.
In this embodiment, a plurality of decision trees are generated according to a plurality of feature sets, specifically:
for each decision tree, the feature selected by the Gini index is computed to serve as the root node, the non-leaf nodes of each decision tree are set as decision nodes, and the leaf nodes of each decision tree are set as output units, wherein each decision node holds a sample feature and a corresponding judgment value, and each leaf node corresponds to a producing area prediction result.
In this embodiment, after generating the plurality of decision trees, the method further includes:
during the training of the plurality of decision trees, the dividing features are selected by evaluating the impurity reduction with the information gain ratio or the Gini index, specifically:
when the dividing features are selected by evaluating the impurity reduction with the information gain ratio, the feature f is divided into m value intervals, and the sample set X at the node is divided by the feature to generate m branch nodes, wherein the sample set X_j contained in the j-th branch node is the subset of samples in X whose value of the feature f falls in the j-th value interval;
the information gain resulting from dividing the sample set X using the feature f is:
Gain(X, f) = Ent(X) − Σ_{j=1..m} (|X_j| / |X|) · Ent(X_j)
given the feature f, the feature with the largest information gain is selected:
f* = argmax_f Gain(X, f)
when the dividing features are selected by evaluating the impurity reduction with the Gini index, the Gini index of the feature f with respect to the sample set X is:
Gini_index(X, f) = Σ_{j=1..m} (|X_j| / |X|) · Gini(X_j), with Gini(X_j) = 1 − Σ_i p_i^2
wherein p_i is the proportion of samples belonging to class i in the current sample set at the node; feature selection is then performed using the Gini criterion:
f* = argmin_f Gini_index(X, f)
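The two impurity criteria above can be illustrated with a minimal sketch; the helper names are hypothetical, and the per-feature partitions are assumed to be precomputed.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Ent(X) = -sum_i p_i * log2(p_i) over class proportions p_i."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini(X) = 1 - sum_i p_i^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Gain(X, f) = Ent(X) - sum_j (|X_j|/|X|) * Ent(X_j)."""
    n = len(labels)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in partitions)

def gini_index(labels, partitions):
    """Gini_index(X, f) = sum_j (|X_j|/|X|) * Gini(X_j)."""
    n = len(labels)
    return sum(len(p) / n * gini(p) for p in partitions)

X = ["A", "A", "B", "B"]
perfect = [["A", "A"], ["B", "B"]]      # a split that separates the classes
print(information_gain(X, perfect))     # full entropy removed
print(gini_index(X, perfect))           # pure branches
```

A feature whose partition yields the largest gain (or the smallest Gini index) is chosen as the dividing feature, exactly as in the criteria above.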
in this embodiment, when each decision tree is trained, if the amount of feature data corresponding to a leaf node of the decision tree is greater than a set minimum branch size, the leaf node is branched according to the training strategy, where the minimum branch size is 5. Depending on the training set, adaptive weights are adopted for different features, with key features given larger weights than the other features.
Step S102: acquiring feature data of a sample to be tested and inputting the feature data into the plurality of decision trees, so that each decision tree outputs a producing area prediction result; from the producing area prediction results of all decision trees, the top N predicted producing areas ranked by vote count in descending order are screened out as the rough classification result, where N is a positive integer.
In this embodiment, the number of sample data records to be tested, the number of trace elements and the trace element contents are read; for each feature of the sample to be tested, the feature data is distributed to each decision tree that uses it, and each decision tree is run to obtain its classification result.
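Running one trained decision tree on a feature vector amounts to descending from the root decision node (a feature and its judgment value) to a leaf (a producing area prediction). The node layout below is a hypothetical illustration, not the patent's actual data structure.

```python
# Hypothetical node layout: each decision node is a tuple
# (feature index, threshold, left subtree, right subtree);
# each leaf is a producing area label.
def predict(node, x):
    while isinstance(node, tuple):           # descend until a leaf is reached
        feat, thresh, left, right = node
        node = left if x[feat] <= thresh else right
    return node

# toy tree: split on Cr content (index 6), then on Fe content (index 7)
tree = (6, 0.5, "origin_A", (7, 0.2, "origin_B", "origin_C"))
x = [0.0] * 10
x[6], x[7] = 0.9, 0.1
print(predict(tree, x))  # prints "origin_B"
```

Each of the K trees produces one such label, and the labels are then tallied by the combiner described below.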
In this embodiment, from the producing area prediction results of all decision trees, the top N predicted producing areas ranked by vote count in descending order are screened out as the rough classification result, specifically:
the prediction results of all decision trees are input into a combiner for decision voting, wherein the feature data of the sample to be tested is x, and the random forest model formed by all decision trees is H = {h_1, …, h_K}; the output of decision tree h_j in the random forest H on the feature data x is h_j(x), and the total votes for category i of the producing area prediction result are:
T(i) = Σ_{j=1..K} w_{ji} · I(h_j(x) = i)
wherein w_{ji} is the weight with which the corresponding decision tree j votes for category i of the producing area prediction result, and I(·) is the indicator function;
and the top N predicted producing areas ranked by vote count are selected as the rough classification result.
In this embodiment, the top 5 predicted producing areas ranked by vote count among the producing area prediction results of the K decision trees are selected as the rough classification result.
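The weighted decision voting T(i) = Σ_j w_{ji} · I(h_j(x) = i) followed by a top-N cut can be sketched as below. The producing area names and the default per-tree weight of 1.0 are illustrative assumptions; the patent does not list the 9 areas or the weight values.

```python
from collections import defaultdict

def coarse_classify(tree_predictions, weights, n=5):
    """Tally weighted votes T(i) = sum_j w_ji * 1(h_j(x) = i) and return
    the top-N producing areas by vote count (hypothetical helper)."""
    votes = defaultdict(float)
    for j, origin in enumerate(tree_predictions):
        votes[origin] += weights[j].get(origin, 1.0)
    return [o for o, _ in sorted(votes.items(), key=lambda kv: -kv[1])[:n]]

# illustrative tree outputs for one sample; origin names are made up
preds = ["Myanmar", "Myanmar", "Mozambique", "Thailand", "Myanmar",
         "Mozambique", "Vietnam", "Thailand", "Madagascar", "Tanzania"]
weights = [{} for _ in preds]   # empty dict -> default weight 1.0 per tree
top5 = coarse_classify(preds, weights, n=5)
print(top5)
```

With uniform weights this reduces to majority counting; non-uniform w_{ji} would let more reliable trees dominate the vote.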
Step S103: inputting the feature data of the sample to be detected into the corresponding N perceptrons according to the rough classification result to obtain N origin judgment results; the loss functions of the N perceptrons are respectively calculated according to the feature data of the sample to be detected, and the origin judgment result output by the perceptron with the smallest loss function value is selected as the origin classification result.
In this embodiment, the feature data of the sample to be detected is input into the corresponding N perceptrons to obtain N origin judgment results, specifically:
the loss functions of a plurality of perceptrons judging different producing areas are calculated according to the training set data; the gradient of each hidden layer of the corresponding perceptron is calculated according to the loss function, and gradient descent is performed on the parameters of each hidden layer; the loss function of each perceptron and the gradients of its hidden layers are updated according to the data of the different training sets until the loss function of each perceptron converges, yielding a plurality of perceptrons judging different producing areas, where each perceptron judges one producing area for the input data;
the feature data of the sample to be detected is input into the N corresponding perceptrons, and whether the sample to be detected originates from each predicted producing area is judged using a multilayer fully-connected neural network with a sigmoid activation function, combined with chain-rule differentiation of the neural network back propagation algorithm, obtaining N producing area judgment results;
wherein the single-layer full connection in the perceptron is:
y_i = f(W_i^T · x)
wherein f is an activation function, the sample to be detected is x, and the place of origin is i;
the neural network back propagation objective function is:
E(x) = (1/2) · ||z − t||^2
wherein x is the sample to be measured, t is the expected output, and z is the actual output.
The architecture of the perceptron is as follows:
wherein, the sizes of different hidden layers are respectively:
in this embodiment, according to the 5 predicted producing areas of the rough classification result, the corresponding 5 producing area perceptrons are selected, the data to be measured is input into the 5 perceptrons, the loss functions of the 5 perceptrons are respectively calculated, and the producing area corresponding to the perceptron with the smallest loss function is taken as the producing area judgment result. The producing area obtained in the previous step is mapped to an actual producing area name according to the producing area number mapping, the name is written into a fixed text file as the producing area file, modification records of the producing area file are monitored, and if a modification record exists, the corresponding field is updated and displayed to complete the prediction.
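The per-origin perceptron training described above (sigmoid activation, squared-error objective E = ½·||z − t||², gradient descent via the chain rule) can be illustrated with a single-layer sketch on toy data. The data, learning rate and iteration count are assumptions, and the patent's multilayer hidden sizes are not reproduced here.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))                 # 10 element features per sample
t = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy "is from this origin" label
W = np.zeros(10)                              # single fully connected layer

for _ in range(500):
    z = sigmoid(X @ W)                        # actual output z = f(W^T x)
    # chain-rule gradient of E = 1/2 * ||z - t||^2 w.r.t. W
    grad = X.T @ ((z - t) * z * (1 - z)) / len(X)
    W -= 1.0 * grad                           # gradient descent step

loss = 0.5 * np.mean((sigmoid(X @ W) - t) ** 2)
print(round(float(loss), 3))
```

In the described scheme one such perceptron is trained per producing area; at prediction time the sample is fed to the 5 selected perceptrons and the one with the smallest loss determines the origin.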
The embodiment of the invention has the following effects:
the method utilizes the nondestructive quantitative curve of the training sample as training data, has high quantification degree on the detection of the ruby component elements, and improves the accuracy and repeatability of the ruby component element detection; based on ruby component element detection, rough classification is carried out by utilizing a random forest algorithm, then a deep neural network is used, and a multilayer perceptron carries out fine classification, so that the purpose of ruby origin classification is achieved, the ruby component element characteristics are screened and analyzed through an artificial intelligence algorithm, the automatic judgment of the ruby origin is realized, and the precision of ruby origin classification is improved.
Example two
Referring to fig. 2, a device for classifying a producing area of a ruby according to an embodiment of the present invention includes: a training module 201, a rough classification module 202 and a classification judgment module 203;
the training module 201 is configured to obtain training sample data to generate a training set, where the training sample data includes a ruby training sample, corresponding origin classification information, and a corresponding lossless quantitative curve, and the ruby training sample uses corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature sets comprise the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
the rough classification module 202 is configured to acquire feature data of a sample to be tested and input the feature data into the plurality of decision trees, so that each decision tree outputs a producing area prediction result, and to screen out, from the producing area prediction results of all decision trees, the top N predicted producing areas ranked by vote count in descending order as the rough classification result, where N is a positive integer;
the classification judgment module 203 is configured to input the feature data of the sample to be detected into the corresponding N perceptrons according to the rough classification result to obtain N origin judgment results, to respectively calculate the loss functions of the N perceptrons according to the feature data of the sample to be detected, and to select the origin judgment result output by the perceptron with the smallest loss function value as the origin classification result.
In this embodiment, the coarse classification module includes: a decision voting unit and a classification result unit;
the decision voting unit is used for inputting the producing area prediction results of all decision trees into a combiner for decision voting, wherein the feature data of the sample to be tested is x, and the random forest model formed by all decision trees is H = {h_1, …, h_K}; the output of decision tree h_j in the random forest H on the feature data x is h_j(x), and the total votes for category i of the producing area prediction result are:
T(i) = Σ_{j=1..K} w_{ji} · I(h_j(x) = i)
wherein w_{ji} is the weight with which the corresponding decision tree j votes for category i of the producing area prediction result, and I(·) is the indicator function;
and the classification result unit is used for selecting the top N predicted producing areas ranked by vote count as the rough classification result.
In this embodiment, the classification judgment module includes: a perceptron calculation unit and a judgment result unit;
the perceptron calculation unit is used for calculating the loss functions of a plurality of perceptrons judging different producing areas according to the training set data; calculating the gradient of each hidden layer of the corresponding perceptron according to the loss function and performing gradient descent on the parameters of each hidden layer; and updating the loss function of each perceptron and the gradients of its hidden layers according to the data of the different training sets until the loss function of each perceptron converges, yielding a plurality of perceptrons judging different producing areas, where each perceptron judges one producing area for the input data;
the judgment result unit is used for inputting the feature data of the sample to be detected into the N corresponding perceptrons, and judging whether the sample to be detected originates from each predicted producing area using a multilayer fully-connected neural network with a sigmoid activation function, combined with chain-rule differentiation of the neural network back propagation algorithm, obtaining N producing area judgment results;
wherein the single-layer full connection in the perceptron is:
y_i = f(W_i^T · x)
wherein f is an activation function, the sample to be detected is x, and the place of origin is i;
the neural network back propagation objective function is:
E(x) = (1/2) · ||z − t||^2
wherein x is the sample to be measured, t is the expected output, and z is the actual output.
The architecture of the perceptron is as follows:
wherein, the sizes of different hidden layers are respectively:
the device for classifying the origin of the ruby can implement the method for classifying the origin of the ruby of the embodiment of the method. The alternatives in the above-described method embodiments are also applicable to this embodiment and will not be described in detail here. The rest of the embodiments of the present application may refer to the contents of the above method embodiments, and in this embodiment, details are not described again.
The embodiment of the invention has the following effects:
the training module of the ruby producing area classifying device uses the lossless quantitative curves of the training samples as training data, achieving a high degree of quantification in detecting the ruby component elements and improving the accuracy and repeatability of ruby component element detection; based on the component element detection, the rough classification module performs rough classification with a random forest algorithm and the classification judgment module performs fine classification with a deep neural network and a multilayer perceptron, achieving ruby origin classification; screening and analyzing the ruby component element features with artificial intelligence algorithms realizes automatic judgment of the ruby origin and improves the precision of ruby origin classification.
EXAMPLE III
Accordingly, the present invention also provides a computer readable storage medium comprising a stored computer program, wherein the computer program when executed controls an apparatus in which the computer readable storage medium is located to perform a method for classification of ruby origin as described in any one of the above embodiments.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the terminal device and connects the various parts of the whole terminal device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Wherein, the terminal device integrated module/unit can be stored in a computer readable storage medium if it is implemented in the form of software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.
Claims (10)
1. A method of classifying the origin of a ruby, comprising:
obtaining training sample data to generate a training set, wherein the training sample data comprises a ruby training sample, corresponding origin classification information and a corresponding lossless quantitative curve, and the ruby training sample takes corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature set comprises the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
acquiring feature data of a sample to be tested and inputting the feature data into the plurality of decision trees, so that each decision tree outputs a producing area prediction result; screening out, from the producing area prediction results of all decision trees, the top N predicted producing areas ranked by vote count in descending order as the rough classification result, wherein N is a positive integer;
calculating the loss functions of a plurality of perceptrons judging different producing areas according to the training set data; calculating the gradient of each hidden layer of the corresponding perceptron according to the loss function, and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of its hidden layers according to the data of the different training sets until the loss function of each perceptron converges; obtaining a plurality of perceptrons judging different producing areas, wherein each perceptron judges one producing area for the input data;
inputting the characteristic data of the sample to be detected into corresponding N perceptrons according to the rough classification result to obtain N origin judgment results, which specifically comprise:
inputting the feature data of the sample to be detected into the N corresponding perceptrons, and judging whether the sample to be detected originates from each predicted producing area using a multilayer fully-connected neural network with a sigmoid activation function, combined with chain-rule differentiation of the neural network back propagation algorithm, to obtain N producing area judgment results;
and respectively calculating the loss functions of the N perceptrons according to the feature data of the sample to be detected, and selecting the origin judgment result output by the perceptron with the smallest loss function value as the origin classification result.
2. The method according to claim 1, wherein the obtaining training sample data generates a training set, specifically:
obtaining a large number of ruby training samples of known production places; classifying the ruby training samples according to the producing areas to obtain corresponding producing area classification information; determining characteristic chemical elements of the ruby training sample, and establishing a corresponding lossless quantitative curve in a lossless component analysis instrument, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the lossless quantitative curve comprises the linear combination of trace element content, trace element content ratio and trace element content of ruby;
and generating a training set by taking the ruby training sample, the corresponding origin classification information and the corresponding lossless quantitative curve as training sample data, wherein the ruby training sample takes corresponding characteristic chemical elements as the characteristics of the training sample.
3. A method for classifying a region of origin of a ruby according to claim 1, wherein said generating a plurality of decision trees from a plurality of said feature sets comprises:
according to the characteristics of each decision tree, calculating characteristics selected by the Kini index to serve as root nodes, setting nodes of non-leaf nodes of each decision tree to serve as decision nodes, and setting leaf nodes of each decision tree to serve as output units, wherein each decision node is a sample characteristic and a corresponding judgment value, and each leaf node corresponds to a place of origin prediction result.
4. A method for ruby origin classification according to claim 3, wherein after said generating a plurality of decision trees, further comprising:
in the training process of a plurality of decision trees, the divided features are selected in a mode of evaluating the reduction amount of impurity degree by using an information gain ratio or a kini index, and the specific steps are as follows:
when the dividing features are selected by evaluating the impurity reduction with the information gain ratio, the feature f is divided into m value intervals, and the sample set X at the node is divided by the feature to generate m branch nodes, wherein the sample set X_j contained in the j-th branch node is the subset of samples in X whose value of the feature f falls in the j-th value interval;
the information gain resulting from dividing the sample set X using the feature f is:
Gain(X, f) = Ent(X) − Σ_{j=1..m} (|X_j| / |X|) · Ent(X_j)
wherein |X_j| / |X| is the proportion of samples of X divided into the subset X_j, and Ent(X_j) is the information entropy of the sample set X_j;
given the feature f, the feature with the largest information gain is selected:
f* = argmax_f Gain(X, f)
when the dividing features are selected by evaluating the impurity reduction with the Gini index, the Gini index of the feature f with respect to the sample set X is:
Gini_index(X, f) = Σ_{j=1..m} (|X_j| / |X|) · Gini(X_j), with Gini(X_j) = 1 − Σ_i p_i^2
wherein p_i is the proportion of samples belonging to class i in the current sample set at the node; given the feature f, feature selection is performed using the Gini criterion:
f* = argmin_f Gini_index(X, f)
5. the method for classifying ruby origins according to claim 3, wherein the top N predicted producing areas ranked by vote count in descending order are screened out from the producing area prediction results of all decision trees as the rough classification result, specifically:
inputting the producing area prediction results of all decision trees into a combiner for decision voting, wherein the feature data of the sample to be tested is x, and the random forest model formed by all decision trees is H = {h_1, …, h_K}, where K is the number of decision trees in the random forest model and h_k is the k-th decision tree in the random forest model H; the output of decision tree h_j in the random forest H on the feature data x is h_j(x), and the total votes for category i of the producing area prediction result are:
T(i) = Σ_{j=1..K} w_{ji} · I(h_j(x) = i)
wherein w_{ji} is the weight with which the corresponding decision tree j votes for category i of the producing area prediction result, K is the number of decision trees in the random forest model, and I(·) is the indicator function;
and selecting the top N predicted producing areas ranked by vote count as the rough classification result.
6. The method for classifying a ruby origin according to claim 1, wherein said feature data of said sample to be measured is input to corresponding N perceptrons to obtain N origin determination results, specifically:
the single-layer full connection in the perceptron is:
y_i = f(W_i^T · x)
wherein f is an activation function, the sample to be detected is x, and the place of origin is i; W is the single-layer fully-connected transformation matrix corresponding to the sample x to be detected; W_i is the i-th column of the transformation matrix; x_i is the i-th component of x;
the neural network back propagation objective function is:
E(x) = (1/2) · ||z − t||^2
wherein x is the sample to be measured, t is the expected output, and z is the actual output;
the architecture of the perceptron is as follows:
wherein, the sizes of different hidden layers are respectively:
7. a device for classifying the origin of ruby, comprising: the device comprises a training module, a rough classification module and a classification judgment module;
the training module is used for acquiring training sample data to generate a training set, wherein the training sample data comprises a ruby training sample, corresponding origin classification information and a corresponding lossless quantitative curve, and the ruby training sample takes corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature sets comprise the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
the rough classification module is used for acquiring feature data of a sample to be detected and inputting the feature data into the plurality of decision trees, so that each decision tree outputs a producing-area prediction result, and screening out, from the producing-area prediction results of all the decision trees, the first N predicted producing areas sorted by vote count in descending order as the rough classification result, wherein N is a positive integer;
the classification judgment module is used for calculating loss functions of a plurality of perceptrons that judge different producing areas according to the data of the training set; calculating the gradient of each hidden layer of the corresponding perceptron according to the loss function, and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradient of each of its hidden layers according to data of different training sets until the loss function of each perceptron converges; thereby obtaining a plurality of perceptrons that judge different producing areas, wherein each perceptron judges one producing area of the input data;
inputting the feature data of the sample to be detected into the corresponding N perceptrons according to the rough classification result to obtain N origin judgment results, which specifically comprises: inputting the feature data of the sample to be detected into the N corresponding perceptrons, and judging whether the sample to be detected originates from each predicted producing area by using a multilayer fully-connected neural network with a sigmoid activation function, combined with chain-rule differentiation in the neural network back-propagation algorithm, to obtain N producing-area judgment results; and respectively calculating the loss functions of the N perceptrons according to the feature data of the sample to be detected, and selecting the origin judgment result output by the perceptron with the minimum loss function value as the origin classification result.
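The final selection step above — run the sample through the N candidate perceptrons and keep the judgment from the one with the smallest loss value — can be sketched as follows (illustrative only; the weight vectors, the squared-error loss against a target of 1, and the two origins are hypothetical stand-ins for the trained per-origin perceptrons):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def judge(W, x):
    """One perceptron judges whether sample x comes from its origin.
    Returns (decision, loss); loss is squared error against target 1
    (an assumed stand-in for the perceptron's loss function)."""
    p = sigmoid(W @ x)                    # probability-like score
    return p >= 0.5, 0.5 * (1.0 - p) ** 2

def classify(perceptrons, x):
    """Among the N perceptrons named by the coarse result, pick the
    origin whose perceptron yields the smallest loss value."""
    results = {origin: judge(W, x) for origin, W in perceptrons.items()}
    best = min(results, key=lambda o: results[o][1])
    return best, results[best][0]

x = np.array([0.2, 1.1, -0.3])            # hypothetical feature data
perceptrons = {                           # hypothetical trained weights per origin
    "Myanmar": np.array([2.0, 1.0, 0.5]),
    "Mozambique": np.array([-1.0, 0.2, 0.1]),
}
origin, accepted = classify(perceptrons, x)
print(origin, accepted)  # "Myanmar" has the smaller loss here
```

In the patented device each perceptron would be a multilayer fully-connected network rather than the single weight vector used in this sketch.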
8. The ruby origin classification device according to claim 7, wherein said rough classification module comprises: a decision voting unit and a classification result unit;
the decision voting unit is used for inputting the producing-area prediction results of all the decision trees into a combiner for decision voting; the feature data of the sample to be tested is x, and the random forest model formed by all the decision trees is $H = \{h_1, h_2, \dots, h_K\}$, wherein K is the number of decision trees in the random forest model and $h_k$ is the k-th decision tree in $H$; the output of the feature data x in decision tree $h_k$ is $h_k(x)$, and the total number of votes for origin-prediction category i is:

$$N_i(x) = \sum_{j=1}^{K} w_j^i \, I\big(h_j(x) = i\big)$$

wherein $w_j^i$ is the weight value with which the corresponding decision tree j votes for origin-prediction category i, $I(\cdot)$ is the indicator function, and K is the number of decision trees in the random forest model;
and the classification result unit is used for selecting, as the rough classification result, the first N predicted producing areas sorted by vote count in descending order.
9. The ruby origin classification device according to claim 7, wherein said classification judgment module comprises a judgment result unit, which specifically operates as follows:
the single fully-connected layer in the perceptron is:

$$y = f(Wx) = f\Big(\sum_i W_i x_i\Big)$$

wherein f is an activation function, x is the sample to be detected, and i indexes the origin; $W$ is the single-layer fully-connected transformation matrix corresponding to the sample x to be detected; $W_i$ is the i-th column of the transformation matrix; $x_i$ is the i-th component of x;
the neural network back-propagation objective function is:

$$E(x) = \tfrac{1}{2}\,\lVert t - z \rVert^2$$

wherein x is the sample to be measured, t is the expected output, and z is the actual output;
the architecture of the perceptron is as follows:
wherein, the sizes of different hidden layers are respectively:
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program, wherein the computer program, when executed, controls a device in which the computer-readable storage medium is located to perform the method for classifying a ruby origin according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211107096.2A CN115186776B (en) | 2022-09-13 | 2022-09-13 | Method, device and storage medium for classifying ruby producing areas |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115186776A CN115186776A (en) | 2022-10-14 |
CN115186776B true CN115186776B (en) | 2022-12-13 |
Family
ID=83524524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211107096.2A Active CN115186776B (en) | 2022-09-13 | 2022-09-13 | Method, device and storage medium for classifying ruby producing areas |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115186776B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115618282B (en) * | 2022-12-16 | 2023-06-06 | 国检中心深圳珠宝检验实验室有限公司 | Identification method, device and storage medium for synthetic precious stone |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103489088A (en) * | 2013-09-17 | 2014-01-01 | 北京农业信息技术研究中心 | Method and device for collecting and processing loading and unloading goods information |
CN110412115A (en) * | 2019-07-30 | 2019-11-05 | 浙江省农业科学院 | Unknown time green tea source area prediction technique based on stable isotope and multielement |
CN110569581A (en) * | 2019-08-27 | 2019-12-13 | 中国检验检疫科学研究院 | Method for distinguishing production places of Chinese wolfberry based on multi-element combination random forest algorithm |
CN112666119A (en) * | 2020-12-03 | 2021-04-16 | 山东省科学院自动化研究所 | Method and system for detecting ginseng tract geology based on terahertz time-domain spectroscopy |
WO2021211840A1 (en) * | 2020-04-15 | 2021-10-21 | Chan Zuckerberg Biohub, Inc. | Local-ancestry inference with machine learning model |
CN114112983A (en) * | 2021-10-18 | 2022-03-01 | 中国科学院西北高原生物研究所 | Python data fusion-based Tibetan medicine all-leaf artemisia rupestris L producing area distinguishing method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744869B (en) * | 2021-09-07 | 2024-03-26 | 中国医科大学附属盛京医院 | Method for establishing early screening light chain type amyloidosis based on machine learning and application thereof |
- 2022-09-13: CN application CN202211107096.2A granted as patent CN115186776B (status: Active)
Non-Patent Citations (2)
Title |
---|
A Review of Analytical Methods Used in Geographic Origin Determination of Gemstones; Lee A. Groat et al.; Gems & Gemology; 2019-12-31; Vol. 55, No. 4; 512-535 * |
Study on Fungal Community Diversity in Wuyi Rock Tea and Characteristic Information for Origin Identification; Lou Yunxiao; China Masters' Theses Full-text Database (Engineering Science & Technology, Series I); 2019-03-15 (No. 03); B024-161 * |
Also Published As
Publication number | Publication date |
---|---|
CN115186776A (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113792825B (en) | Fault classification model training method and device for electricity information acquisition equipment | |
CN112215696A (en) | Personal credit evaluation and interpretation method, device, equipment and storage medium based on time sequence attribution analysis | |
CN112559900B (en) | Product recommendation method and device, computer equipment and storage medium | |
CN112598294A (en) | Method, device, machine readable medium and equipment for establishing scoring card model on line | |
CN115186776B (en) | Method, device and storage medium for classifying ruby producing areas | |
CN109345050A (en) | A kind of quantization transaction prediction technique, device and equipment | |
CN110930038A (en) | Loan demand identification method, loan demand identification device, loan demand identification terminal and loan demand identification storage medium | |
CN115618282B (en) | Identification method, device and storage medium for synthetic precious stone | |
CN111860698A (en) | Method and device for determining stability of learning model | |
CN111105041B (en) | Machine learning method and device for intelligent data collision | |
CN111815209A (en) | Data dimension reduction method and device applied to wind control model | |
Al-Fraihat et al. | Hyperparameter Optimization for Software Bug Prediction Using Ensemble Learning | |
CN114418748A (en) | Vehicle credit evaluation method, device, equipment and storage medium | |
CN112434862B (en) | Method and device for predicting financial dilemma of marketing enterprises | |
CN112232724B (en) | Quantitative evaluation method, system, equipment and storage medium for personnel ability | |
CN117235633A (en) | Mechanism classification method, mechanism classification device, computer equipment and storage medium | |
CN111582647A (en) | User data processing method and device and electronic equipment | |
CN109858541A (en) | A kind of specific data self-adapting detecting method based on data integration | |
CN115423600A (en) | Data screening method, device, medium and electronic equipment | |
CN114021716A (en) | Model training method and system and electronic equipment | |
CN111612626A (en) | Method and device for preprocessing bond evaluation data | |
US20240160696A1 (en) | Method for Automatic Detection of Pair-Wise Interaction Effects Among Large Number of Variables | |
CN112184708B (en) | Sperm survival rate detection method and device | |
CN113240353B (en) | Cross-border e-commerce oriented export factory classification method and device | |
Chang et al. | Applying Decision Tree to Detect Credit Card Fraud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||