CN115186776B - Method, device and storage medium for classifying ruby producing areas - Google Patents

Method, device and storage medium for classifying ruby producing areas

Info

Publication number: CN115186776B
Application number: CN202211107096.2A
Authority: CN (China)
Other versions: CN115186776A (Chinese)
Legal status: Active (assumed by Google; not a legal conclusion)
Inventors: 宁珮莹, 张天阳, 唐娜, 丁汀, 黎辉煌, 蒙彩珍
Current and original assignee: Guo Jian Center Shenzhen Jewelry Inspection Laboratory Co ltd
Application filed by Guo Jian Center Shenzhen Jewelry Inspection Laboratory Co ltd
Priority to CN202211107096.2A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention discloses a method, a device and a storage medium for classifying the origin of rubies. The method comprises: acquiring training sample data to generate a training set; randomly selecting several sample features from the training set to generate a plurality of feature sets; generating a plurality of decision trees from the feature sets; acquiring feature data of a sample to be tested and inputting it into the decision trees so that each decision tree outputs an origin prediction, then screening out, from the predictions of all decision trees, the first N predicted origins ranked by vote count from most to least as a coarse classification result; inputting the feature data of the sample into the N corresponding perceptrons according to the coarse classification result to obtain N origin judgment results; and calculating the loss function of each of the N perceptrons on the feature data and selecting the judgment output by the perceptron with the smallest loss-function value as the origin classification result. This improves the precision of ruby origin classification.

Description

Method, device and storage medium for classifying ruby producing areas
Technical Field
The invention relates to the technical field of ruby origin identification, and in particular to a method, a device and a storage medium for classifying the origin of ruby.
Background
Ruby is corundum that is red in color. It belongs to the corundum group of minerals, whose main component is aluminum oxide and which may contain trace impurity elements such as vanadium, chromium, iron and titanium. Chromium gives ruby its red to pink color; the higher the chromium content, the more vivid the color. Rubies are mainly produced in Myanmar (Burma), Mozambique, Thailand, Sri Lanka, Madagascar, Vietnam, Tanzania and elsewhere. Because rubies from different origins command markedly different prices, there is strong market demand for methods of identifying a ruby's origin. The main current approach relies on a gemology expert examining the macroscopic appearance of the ruby; it suffers from high cost, poor repeatability of the identification process, low accuracy and insufficient precision. Methods based on spectral features suffer from overlapping and indistinct features. Existing applications of artificial intelligence in the jewelry identification industry remain rudimentary, typically limited to simple linear discrimination and normalization, and need further depth. Nondestructive testing of ruby chemical composition is generally performed without ruby standard samples, so the data are not traceable and suffer from matrix-effect instability, yielding poorly quantified, inaccurate data with large errors. Finally, prior studies cover only individual samples from individual origins and lack a complete origin database with autonomous analysis, so accuracy is low and the risk of misjudging the origin is high.
Disclosure of Invention
The invention provides a method, a device and a storage medium for classifying ruby origin, which use quantitative nondestructive detection against a historical identification database together with artificial-intelligence classification algorithms to improve the precision of ruby origin classification.
In order to improve the accuracy of ruby origin classification, an embodiment of the invention provides a method for classifying the origin of ruby, comprising the following steps: obtaining training sample data to generate a training set, wherein the training sample data comprises ruby training samples, corresponding origin classification information and corresponding nondestructive quantitative curves, and each ruby training sample takes its characteristic chemical elements as sample features; randomly selecting a plurality of sample features from the training set to generate a feature set, and repeating the random selection several times to obtain a plurality of feature sets, wherein each feature set comprises sample features and origin classification information; and generating a plurality of decision trees from the plurality of feature sets;
acquiring characteristic data of a sample to be tested and inputting the characteristic data into the plurality of decision trees so that each decision tree outputs a producing-area prediction result, and screening out, from the producing-area prediction results of all the decision trees, the first N predicted producing areas ranked by vote count from most to least as a coarse classification result, wherein N is a positive integer;
calculating, from the data of the training set, the loss functions of a plurality of perceptrons each judging a different producing area; calculating the gradient of each hidden layer of the corresponding perceptron from its loss function and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of its hidden layers with data from different training sets until the loss function of each perceptron converges; thereby obtaining a plurality of perceptrons for judging different producing areas, where each perceptron judges one producing area for the input data;
inputting the characteristic data of the sample to be tested into the corresponding N perceptrons according to the coarse classification result to obtain N origin judgment results, which specifically comprises:
inputting the characteristic data of the sample to be tested into the N corresponding perceptrons; judging whether the sample originates from each predicted producing area by using a multilayer fully connected neural network with a sigmoid activation function, together with the chain-rule derivation of the neural-network back-propagation algorithm, to obtain N producing-area judgment results; and calculating the loss function of each of the N perceptrons on the characteristic data of the sample to be tested, selecting the judgment output by the perceptron with the smallest loss-function value as the origin classification result.
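The two-stage decision flow above can be sketched as follows; the toy trees, the dictionary layout and all names are illustrative, not from the patent:

```python
from collections import Counter

def coarse_classify(x, trees, n):
    """Coarse stage: every decision tree votes for one origin;
    keep the N origins with the most votes."""
    votes = Counter(tree(x) for tree in trees)
    return [origin for origin, _ in votes.most_common(n)]

def fine_classify(x, candidates, perceptrons):
    """Fine stage: among the candidate origins, return the verdict of
    the perceptron whose loss on x is smallest."""
    best = min(candidates, key=lambda o: perceptrons[o]["loss"](x))
    return best, perceptrons[best]["judge"](x)

# Toy example: four trees voting, three per-origin perceptrons.
trees = [lambda x: "Myanmar", lambda x: "Myanmar",
         lambda x: "Thailand", lambda x: "Mozambique"]
perceptrons = {
    "Myanmar":    {"loss": lambda x: 0.1, "judge": lambda x: True},
    "Thailand":   {"loss": lambda x: 0.4, "judge": lambda x: False},
    "Mozambique": {"loss": lambda x: 0.7, "judge": lambda x: False},
}
top2 = coarse_classify(None, trees, 2)        # coarse result: top-2 origins
origin, verdict = fine_classify(None, top2, perceptrons)
```

The coarse stage discards most origins cheaply; the fine stage then only runs the N perceptrons that the forest considered plausible.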
As a preferred scheme, the nondestructive quantitative curves of the training samples are used as training data, so the detection of ruby constituent elements is highly quantitative and its accuracy and repeatability improve. Based on constituent-element detection, a random forest algorithm performs coarse classification and a deep neural network with multilayer perceptrons then performs fine classification, achieving ruby origin classification: the constituent-element features are screened and analyzed by artificial-intelligence algorithms, origin judgment is automated, and the precision of ruby origin classification is improved.
As a preferred scheme, acquiring training sample data to generate a training set specifically comprises:
obtaining a large number of ruby training samples of known producing areas; classifying the ruby training samples by producing area to obtain corresponding origin classification information; determining the characteristic chemical elements of the ruby training samples and establishing corresponding nondestructive quantitative curves in a nondestructive composition analyzer, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the nondestructive quantitative curves comprise the trace-element contents of the ruby, ratios of trace-element contents, and linear combinations of trace-element contents;
and generating a training set by taking the ruby training samples, the corresponding origin classification information and the corresponding nondestructive quantitative curves as training sample data, wherein each ruby training sample takes its characteristic chemical elements as training-sample features.
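The feature construction described above can be sketched as follows; the element symbols are the ten named in the text, while using pairwise sums as a stand-in for the unspecified linear combinations is our assumption:

```python
from itertools import combinations

ELEMENTS = ["Si", "Mg", "K", "Ca", "Ti", "V", "Cr", "Fe", "Ga", "Zn"]

def build_features(contents):
    """Expand raw element contents (dict element -> content) into the
    three feature families named in the text: raw contents, pairwise
    content ratios, and pairwise sums (a simple stand-in for the
    unspecified linear combinations)."""
    feats = dict(contents)  # raw trace-element contents
    for a, b in combinations(ELEMENTS, 2):
        if contents[b]:
            feats[f"{a}/{b}"] = contents[a] / contents[b]  # content ratio
        feats[f"{a}+{b}"] = contents[a] + contents[b]      # linear combination
    return feats

sample = {e: i + 1.0 for i, e in enumerate(ELEMENTS)}  # dummy contents
features = build_features(sample)
```

With ten elements this yields 10 + 45 + 45 = 100 features, which happens to be consistent with the 100 feature combinations used in the embodiment below.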
As a preferred scheme, the constituent-element features of rubies produced in different mining areas can be collected effectively and nondestructively, ensuring the integrity of the ruby samples. By collecting these features, including trace-element contents and their ratios and linear combinations, as training data, an accurate historical identification database is established. Using nondestructive quantitative curves of the constituent-element features as training data makes the detection of ruby constituent elements highly quantitative, improves its accuracy and repeatability, and improves the precision of ruby origin classification.
As a preferred scheme, a plurality of decision trees are generated from the plurality of feature sets, specifically:
for the features of each decision tree, calculating the feature selected by the Gini index to serve as the root node; setting the non-leaf nodes of each decision tree as decision nodes and the leaf nodes as output units, wherein each decision node tests one sample feature against a corresponding judgment value and each leaf node corresponds to a producing-area prediction result.
As a preferred scheme, several features are randomly selected from the whole training set to form a feature set from which a decision tree is generated, yielding a plurality of decision trees with random features. The decision nodes of each tree judge the sample features one by one, and each tree finally outputs a producing-area prediction result; the decision trees built from combinations of random features judge and screen the constituent-element features of the ruby to be predicted, coarsely classifying its producing area. Coarse classification by random-forest judging and screening of constituent-element features improves the precision of ruby origin classification.
As a preferred scheme, after generating a plurality of decision trees, the method further comprises:
during the training of the plurality of decision trees, the dividing features are selected by evaluating the impurity reduction with the information gain ratio or the Gini index, specifically:
When the dividing features are selected by evaluating the impurity reduction with the information gain, the feature f is divided into m value intervals, and the sample set X of the node is divided using this feature to generate m branch nodes, where the sample set contained in the j-th branch node is

X^j = { x ∈ X : the value of feature f on x lies in the j-th value interval },

i.e. the subset of samples of X taking the j-th value interval on feature f.

The information gain obtained by dividing the sample set X using the feature f is

Gain(X, f) = Ent(X) − Σ_{j=1}^{m} (|X^j| / |X|) · Ent(X^j),

where |X^j| / |X| is the proportion of samples of X falling into the subset X^j, and Ent(X^j) is the information entropy of the sample set X^j.

Given the candidate features f, the classification criterion with the largest information gain is used for feature selection:

f* = argmax_f Gain(X, f).
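The information-gain computation above can be sketched as follows; the function names are ours:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Ent(X): Shannon entropy of the class labels of a sample set."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Gain(X, f) = Ent(X) - sum_j |X^j|/|X| * Ent(X^j), where
    `partitions` holds the label lists of the m branch nodes obtained
    by splitting the node's samples on feature f."""
    n = len(labels)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in partitions)

# A perfect split of two origins removes all uncertainty: gain = 1 bit.
gain = information_gain(["A", "A", "B", "B"], [["A", "A"], ["B", "B"]])
```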
When the dividing features are selected by evaluating the impurity reduction with the Gini index, the Gini index of the sample set X at a node is

Gini(X) = 1 − Σ_i p_i²,

where p_i is the proportion of the number of samples belonging to class i in the current sample set at the node to the total number of samples. The Gini index of the feature f with respect to the sample set X is

Gini_index(X, f) = Σ_{j=1}^{m} (|X^j| / |X|) · Gini(X^j),

and feature selection is performed using the Gini criterion of the smallest index:

f* = argmin_f Gini_index(X, f).
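A corresponding sketch of the Gini computation; the function names are ours:

```python
from collections import Counter

def gini(labels):
    """Gini(X) = 1 - sum_i p_i^2, with p_i the class-i proportion."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_index(partitions):
    """Weighted Gini of a candidate split over the branch nodes; the
    dividing feature with the smallest index is preferred."""
    n = sum(len(p) for p in partitions)
    return sum(len(p) / n * gini(p) for p in partitions)

g_node = gini(["A", "A", "B", "B"])             # impurity before the split
g_split = gini_index([["A", "A"], ["B", "B"]])  # pure children: impurity 0
```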
Preferably, during decision-tree training, the dividing features are selected by evaluating the impurity reduction with the information gain ratio or the Gini index. The selected dividing features drive automatic node splitting and adaptive feature weighting, making key features the focus of origin classification, increasing their influence on the origin judgment, and improving the precision of ruby origin classification.
As a preferred scheme, among the producing-area prediction results of all the decision trees, the first N predicted producing areas ranked by vote count from most to least are screened out as the coarse classification result, specifically:
The producing-area prediction results of all the decision trees are input into a combiner for decision voting. Let x be the characteristic data of the sample to be tested, and let the random forest model formed by all the decision trees be

F = {h_1, h_2, …, h_K},

where K is the number of decision trees in the random forest model and h_k is the k-th decision tree in the random forest model F. The output of the k-th decision tree for the characteristic data x is h_k(x), and the total number of votes for category i of the producing-area prediction result is

S_i = Σ_{j=1}^{K} w_j · I(h_j(x) = i),

where w_j is the weight with which the corresponding decision tree j votes for category i of the producing-area prediction result, I(·) is the indicator function, and K is the number of decision trees in the random forest model.
and selecting the first N predicted producing areas ranked by vote count from most to least as the coarse classification result.
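The weighted vote total S_i can be sketched as follows; the uniform weights in the example are illustrative:

```python
def weighted_votes(x, trees, weights):
    """S_i = sum_j w_j * I(h_j(x) = i): tree j adds its weight w_j to
    the vote total of whichever origin it predicts for x."""
    totals = {}
    for h, w in zip(trees, weights):
        totals[h(x)] = totals.get(h(x), 0.0) + w
    return totals

trees = [lambda x: "Myanmar", lambda x: "Thailand", lambda x: "Myanmar"]
totals = weighted_votes(None, trees, [1.0, 1.0, 1.0])
top = sorted(totals, key=totals.get, reverse=True)  # origins ranked by votes
```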
As a preferable scheme, the method judges and screens the constituent-element features of the ruby to be predicted through decision trees built from combinations of random features, obtains a plurality of producing-area prediction results, screens out the N producing areas receiving the most votes, and judges that the sample belongs to one of these N producing areas among all producing areas. Coarse classification by judging and screening the constituent-element features of the ruby with a random forest algorithm improves the precision of ruby origin classification.
As a preferred scheme, inputting the characteristic data of the sample to be tested into the corresponding N perceptrons to obtain N origin judgment results, specifically:
The single-layer full connection in the perceptron is

z_i = f(w_i · x) = f(Σ_j W_{ji} · x_j),

where f is the activation function, x is the sample to be tested, and i is the producing-area index; W is the single-layer fully connected transformation matrix corresponding to the sample x to be tested, w_i is the i-th column of the transformation matrix, and x_j is the j-th component of x.

The objective function for neural-network back-propagation is

J = ½ · ‖t − z‖²,

where x is the sample to be tested, t is the expected output, and z is the actual output.

The perceptron is a multilayer fully connected network: the characteristic data enter an input layer, pass through several hidden layers, and a sigmoid output unit gives the producing-area judgment. (The exact architecture and the hidden-layer sizes appear only as images in the original and are not recoverable from the text.)
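A minimal sketch of these formulas; the matrix layout follows the column convention in the text, and all numbers are illustrative:

```python
import math

def sigmoid(v):
    """The activation function f used by the perceptron."""
    return 1.0 / (1.0 + math.exp(-v))

def fully_connected(x, W):
    """Single-layer full connection z_i = f(w_i . x), where w_i is the
    i-th column of the transformation matrix W (W[j][i] is row j,
    column i) and x_j is the j-th component of x."""
    n_out = len(W[0])
    return [sigmoid(sum(W[j][i] * x[j] for j in range(len(x))))
            for i in range(n_out)]

def objective(t, z):
    """Back-propagation objective J = 1/2 * ||t - z||^2."""
    return 0.5 * sum((ti - zi) ** 2 for ti, zi in zip(t, z))

# Both columns produce a pre-activation of 0, so z = [f(0), f(0)].
z = fully_connected([1.0, -1.0], [[0.0, 2.0], [0.0, 2.0]])
loss = objective([1.0, 0.5], z)
```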
as a preferred scheme, the method judges and screens the component element characteristics of the predicted ruby through a random forest algorithm, roughly classifies the producing area of the ruby, judges that the predicted sample belongs to one of N producing areas, finely classifies the sample by using a deep neural network and a multilayer perceptron, and determines the final producing area of the predicted sample. The characteristics of the ruby component elements are analyzed through an artificial intelligence algorithm, so that the automatic judgment of the ruby producing area is realized, and the precision of ruby producing area classification is improved.
Correspondingly, the invention also provides a device for classifying the origin of ruby, comprising a training module, a rough classification module and a classification judgment module;
the training module is used for acquiring training sample data to generate a training set, wherein the training sample data comprises ruby training samples, corresponding origin classification information and corresponding nondestructive quantitative curves, and each ruby training sample takes its characteristic chemical elements as sample features; randomly selecting a plurality of sample features from the training set to generate a feature set, and repeating the random selection several times to obtain a plurality of feature sets, wherein each feature set comprises sample features and origin classification information; and generating a plurality of decision trees from the plurality of feature sets;
the rough classification module is used for acquiring characteristic data of a sample to be tested and inputting the characteristic data into the plurality of decision trees so that each decision tree outputs a producing-area prediction result, and for screening out, from the producing-area prediction results of all the decision trees, the first N predicted producing areas ranked by vote count from most to least as a coarse classification result, wherein N is a positive integer;
the classification judgment module is used for calculating, from the training-set data, the loss functions of a plurality of perceptrons each judging a different producing area; calculating the gradient of each hidden layer of the corresponding perceptron from its loss function and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of its hidden layers with data from different training sets until each loss function converges; thereby obtaining a plurality of perceptrons for judging different producing areas, where each perceptron judges one producing area for the input data; and inputting the characteristic data of the sample to be tested into the corresponding N perceptrons according to the coarse classification result to obtain N origin judgment results, which specifically comprises: inputting the characteristic data of the sample to be tested into the N corresponding perceptrons; judging whether the sample originates from each predicted producing area by using a multilayer fully connected neural network with a sigmoid activation function, together with the chain-rule derivation of the neural-network back-propagation algorithm, to obtain N producing-area judgment results; and calculating the loss function of each of the N perceptrons on the characteristic data of the sample to be tested, selecting the judgment output by the perceptron with the smallest loss-function value as the origin classification result.
As a preferred scheme, the training module of the ruby origin classification device uses the nondestructive quantitative curves of the training samples as training data, so the detection of ruby constituent elements is highly quantitative and its accuracy and repeatability improve; the rough classification module performs coarse classification with a random forest algorithm based on constituent-element detection, and the classification judgment module performs fine classification with a deep neural network and multilayer perceptrons, achieving ruby origin classification; screening and analyzing ruby constituent-element features with artificial-intelligence algorithms automates origin judgment and improves the precision of ruby origin classification.
Preferably, the coarse classification module comprises a decision voting unit and a classification result unit;
the decision voting unit is used for inputting the producing-area prediction results of all the decision trees into a combiner for decision voting. Let x be the characteristic data of the sample to be tested, and let the random forest model formed by all the decision trees be

F = {h_1, h_2, …, h_K},

where K is the number of decision trees in the random forest model and h_k is the k-th decision tree in the random forest model F. The output of the k-th decision tree for the characteristic data x is h_k(x), and the total number of votes for category i of the producing-area prediction result is

S_i = Σ_{j=1}^{K} w_j · I(h_j(x) = i),

where w_j is the weight with which the corresponding decision tree j votes for category i of the producing-area prediction result, I(·) is the indicator function, and K is the number of decision trees in the random forest model.
and the classification result unit is used for selecting the first N predicted producing areas ranked by vote count from most to least as the coarse classification result.
As a preferred scheme, the rough classification module judges and screens the constituent-element features of the ruby to be predicted through decision trees built from combinations of random features, obtains a plurality of producing-area prediction results, screens out the N producing areas receiving the most votes, and judges that the sample belongs to one of these N producing areas among all producing areas. Coarse classification by judging and screening the constituent-element features of the ruby with a random forest algorithm improves the precision of ruby origin classification.
Preferably, the classification judgment module comprises a judgment result unit, specifically:
The single-layer full connection in the perceptron is

z_i = f(w_i · x) = f(Σ_j W_{ji} · x_j),

where f is the activation function, x is the sample to be tested, and i is the producing-area index; W is the single-layer fully connected transformation matrix corresponding to the sample x to be tested, w_i is the i-th column of the transformation matrix, and x_j is the j-th component of x.

The objective function for neural-network back-propagation is

J = ½ · ‖t − z‖²,

where x is the sample to be tested, t is the expected output, and z is the actual output.

The perceptron is a multilayer fully connected network: the characteristic data enter an input layer, pass through several hidden layers, and a sigmoid output unit gives the producing-area judgment. (The exact architecture and the hidden-layer sizes appear only as images in the original and are not recoverable from the text.)
as a preferred scheme, the method judges and screens the component element characteristics of the predicted ruby through a random forest algorithm, roughly classifies the producing area of the ruby, judges that the predicted sample belongs to one of N producing areas, and finely classifies the predicted sample by using a deep neural network and a multilayer perceptron to determine the final producing area of the predicted sample. The characteristics of the ruby component elements are analyzed through an artificial intelligence algorithm, so that the automatic judgment of the ruby producing area is realized, and the precision of ruby producing area classification is improved.
Accordingly, the present invention also provides a computer readable storage medium comprising a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer readable storage medium is located to perform a method of classifying a region of origin of a ruby according to the present disclosure.
Drawings
FIG. 1 is a schematic flow chart diagram of one embodiment of a method for classifying the origin of ruby provided by the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of the ruby origin classification device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a method for classifying a producing area of a ruby according to an embodiment of the present invention includes steps S101 to S103:
step S101: obtaining training sample data to generate a training set, wherein the training sample data comprise ruby training samples, corresponding origin classification information and corresponding lossless quantitative curves, and the ruby training samples take corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature set comprises the sample features and the classification information of the origin; and generating a plurality of decision trees according to a plurality of feature sets.
In this embodiment, obtaining training sample data to generate a training set specifically includes:
obtaining a large number of ruby training samples of known producing areas; classifying the ruby training samples by producing area to obtain corresponding origin classification information; determining the characteristic chemical elements of the ruby training samples and establishing corresponding nondestructive quantitative curves in a nondestructive composition analyzer, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the nondestructive quantitative curves comprise the trace-element contents of the ruby, ratios of trace-element contents, and linear combinations of trace-element contents;
and generating a training set by taking the ruby training samples, the corresponding origin classification information and the corresponding nondestructive quantitative curves as training sample data, wherein each ruby training sample takes its characteristic chemical elements as training-sample features.
In this example, 659 ruby training samples covering 9 producing areas in total were obtained, and the samples were classified by producing area as shown in the following table:

[Table: number of training samples per producing area; rendered only as an image in the original.]
Three samples were selected from each of the 9 available producing areas, 27 samples in total, and ten characteristic-element content values were extracted from each of the 27 samples. The characteristic chemical elements silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc of the ruby training samples were determined, and 10 corresponding nondestructive quantitative curves were established in the EDXRF nondestructive composition analyzer to obtain the training sample data; the curves comprise the trace-element contents of the training samples, ratios of trace-element contents, and linear combinations of trace-element contents. From permutations and combinations of the 10 features of the ruby training samples, 100 feature combinations were obtained.
In this embodiment, 10 sample features are randomly selected from 100 feature combinations in the training set to generate a feature set, the random selection operation is repeated K times to obtain K feature sets, K is a positive integer, corresponding K decision trees are constructed according to the K feature sets, and a random forest is constructed by the K decision trees.
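The random feature-subset step described above can be sketched as follows (feature names are placeholders, and K = 50 is an arbitrary illustrative choice):

```python
# Random-subspace step of the random forest: draw 10 features out of the
# 100 candidate feature combinations, repeated K times, one subset per tree.
import random

def draw_feature_sets(all_features, subset_size=10, k=50, seed=0):
    """Return k feature subsets, each of subset_size distinct features."""
    rng = random.Random(seed)
    return [rng.sample(all_features, subset_size) for _ in range(k)]

all_features = [f"f{i}" for i in range(100)]  # the 100 feature combinations
feature_sets = draw_feature_sets(all_features, k=50)
print(len(feature_sets), len(feature_sets[0]))  # → 50 10
```

Each of the K subsets would then be used to train one decision tree, and the K trees together form the random forest.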
In this embodiment, the content of the training sample data is read, comprising the number of data records, the number of trace elements, the trace element contents, and the mapping between producing-area names and serial numbers; the corresponding training sample data are then packed into matrix form for the decision-tree construction step.
In this embodiment, a plurality of decision trees are generated according to a plurality of feature sets, specifically:
for each decision tree, the feature selected by calculating the Gini index serves as the root node, the non-leaf nodes of each decision tree are set as decision nodes, and the leaf nodes of each decision tree are set as output units, wherein each decision node holds a sample feature and the corresponding decision value, and each leaf node corresponds to a producing-area prediction result.
In this embodiment, after generating the plurality of decision trees, the method further includes:
in the training process of the decision trees, the splitting features are selected by evaluating the impurity reduction with either the information gain ratio or the Gini index, specifically:
when the splitting features are selected by evaluating the impurity reduction with the information gain ratio, the feature f is divided into m value intervals, and the sample set X at the node is split by this feature into m branch nodes, where the sample set contained in the j-th branch node is

X_j = the subset of samples of X whose value on the feature f falls in the j-th value interval;

the information gain obtained by dividing the sample set X using the feature f is:

Gain(X, f) = Ent(X) − Σ_{j=1}^{m} (|X_j| / |X|) · Ent(X_j)

where |X_j| / |X| is the proportion of samples of X that fall in the subset X_j, and Ent(·) denotes the information entropy;

given the feature f, the classification criterion with the largest information gain is used for feature selection:

f* = arg max_f Gain(X, f)
when the splitting features are selected by evaluating the impurity reduction with the Gini index, the Gini index of the feature f with respect to the sample set X is:

Gini_index(X, f) = Σ_{j=1}^{m} (|X_j| / |X|) · Gini(X_j), with Gini(X) = 1 − Σ_i p_i²

where p_i is the proportion of the samples belonging to class i in the current sample set at the node; given the feature f, feature selection is performed using the Gini criterion:

f* = arg min_f Gini_index(X, f)
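The two split criteria above can be sketched in a few lines (the interval binning and the producing-area labels here are toy examples, not the patent's data):

```python
# Split criteria for the decision trees: information gain and Gini index
# over a list of class labels, assuming discrete value intervals for f.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_scores(values, labels, intervals):
    """values: feature values; intervals: half-open [lo, hi) bins for f."""
    n = len(labels)
    subsets = [[y for v, y in zip(values, labels) if lo <= v < hi]
               for lo, hi in intervals]
    gain = entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets if s)
    gini_idx = sum(len(s) / n * gini(s) for s in subsets if s)
    return gain, gini_idx

# a perfectly separating split: gain = Ent(X), weighted Gini index = 0
vals = [0.1, 0.2, 0.8, 0.9]
labs = ["Myanmar", "Myanmar", "Mozambique", "Mozambique"]
gain, gidx = split_scores(vals, labs, [(0.0, 0.5), (0.5, 1.0)])
print(round(gain, 3), round(gidx, 3))  # → 1.0 0.0
```

The tree would pick the feature maximizing `gain` (or minimizing `gini_idx`) at each decision node.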
in this embodiment, when each decision tree is trained, if the amount of feature data at a leaf node of the decision tree exceeds the set minimum branching number, that node is further split according to the training strategy; here the minimum branching number is 5. Depending on the training set, adaptive weights are applied to the different features, with key features given larger weights than the other features.
Step S102: acquiring the feature data of a sample to be tested and inputting the feature data into the decision trees, so that from the producing-area prediction results output by all the decision trees, the first N predicted producing areas ranked by count in descending order are screened out as the coarse classification result, where N is a positive integer.
In this embodiment, the number of data records of the sample to be tested, the number of trace elements, and the trace element contents are read; for each feature of the sample to be tested, the feature data are distributed to the decision trees that use that feature, and each decision tree is run to obtain its classification result.
In this embodiment, from the producing-area prediction results of all the decision trees, the first N predicted producing areas ranked by count in descending order are screened out as the coarse classification result, specifically:
the prediction results of all decision trees are input into a combiner for decision voting; the feature data of the sample to be tested is x, and the random forest model formed by all the decision trees is H = {h_k, k = 1, …, K}; the output of the k-th decision tree h_k of the random forest H on the feature data x is h_k(x), and the total number of votes for category i of the producing-area prediction result is:

votes(i) = Σ_{k=1}^{K} w_{k,i} · I(h_k(x) = i)

where w_{k,i} is the weight with which the corresponding decision tree k votes on category i of the producing-area prediction result and I(·) is the indicator function;
and the first N predicted producing areas ranked by vote count in descending order are selected as the coarse classification result.
In this embodiment, the top 5 predicted producing areas ranked by count in descending order among the producing-area prediction results of the K decision trees are selected as the coarse classification result.
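The weighted vote and top-N screening can be sketched as follows (tree outputs, weights and producing-area names are made up for illustration):

```python
# Weighted decision vote over the trees' producing-area predictions,
# keeping the top N areas by total vote weight.
from collections import defaultdict

def coarse_classify(tree_outputs, weights, n=5):
    """tree_outputs[k]: origin predicted by tree k;
    weights[(k, origin)]: vote weight of tree k for that origin."""
    votes = defaultdict(float)
    for k, origin in enumerate(tree_outputs):
        votes[origin] += weights.get((k, origin), 1.0)
    return sorted(votes, key=votes.get, reverse=True)[:n]

outputs = ["Myanmar", "Myanmar", "Myanmar",
           "Mozambique", "Mozambique", "Thailand"]
w = {(k, o): 1.0 for k, o in enumerate(outputs)}
print(coarse_classify(outputs, w, n=2))  # → ['Myanmar', 'Mozambique']
```

With N = 5, as in this embodiment, the five highest-voted producing areas would pass to the fine-classification stage.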
Step S103: inputting the feature data of the sample to be tested into the corresponding N perceptrons according to the coarse classification result to obtain N origin determination results; calculating the loss function of each of the N perceptrons from the feature data of the sample to be tested, and selecting the origin determination result output by the perceptron with the smallest loss function value as the origin classification result.
In this embodiment, the feature data of the sample to be tested are input into the corresponding N perceptrons to obtain N origin determination results, specifically:

the loss functions of the perceptrons that judge the different producing areas are calculated from the training set data; the gradient of each hidden layer of the corresponding perceptron is calculated from its loss function, and gradient descent is performed on the parameters of each hidden layer; the loss function of each perceptron and the gradients of its hidden layers are updated with the data of the different training sets until the loss function of each perceptron converges; this yields a set of perceptrons that judge the different producing areas, each perceptron judging one producing area for the input data;

the feature data of the sample to be tested are input into the corresponding N perceptrons, and a multilayer fully connected neural network with a sigmoid activation function, combined with chain-rule differentiation in the neural network back-propagation algorithm, is used to judge whether the sample to be tested originates from each predicted producing area, giving N origin determination results;
wherein the single-layer fully connected transformation in the perceptron is:

y = f(W x) = f(Σ_i w_i x_i)

where f is the activation function, the sample to be tested is x with i-th component x_i, the place of origin is i, W is the single-layer fully connected transformation matrix corresponding to the sample x, and w_i is the i-th column of the transformation matrix;

the neural network back-propagation objective function is:

E(x) = ½ Σ_i (t_i − z_i)²

where x is the sample to be tested, t is the expected output, and z is the actual output.
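A minimal sketch of one such binary-origin perceptron — a single fully connected layer with sigmoid activation, the squared-error objective, and one chain-rule gradient-descent step (the weights, inputs and step size are illustrative assumptions, not the patent's trained parameters or architecture):

```python
# One origin perceptron: z = sigmoid(w · x), E = 1/2 (t - z)^2,
# updated by a single gradient-descent step via the chain rule.
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# 10 element-feature weights for one producing-area perceptron (illustrative)
w = [0.05 * (i % 3 - 1) for i in range(10)]
x = [0.1 * (i + 1) for i in range(10)]
t = 1.0  # expected output: "sample is from this producing area"

def forward(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def loss(z, t):
    return 0.5 * (t - z) ** 2  # squared-error objective E

z = forward(w, x)
# chain rule: dE/dw_i = (z - t) * z * (1 - z) * x_i
grad = [(z - t) * z * (1 - z) * xi for xi in x]
w = [wi - 0.5 * gi for wi, gi in zip(w, grad)]  # one gradient-descent step
assert loss(forward(w, x), t) < loss(z, t)  # the step reduces the loss
```

The full perceptron of the patent stacks several such fully connected layers and iterates the gradient updates until the loss converges.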
The architecture of the perceptron and the sizes of its different hidden layers are as shown in the accompanying figures (images not reproduced in this text extraction).
in this embodiment, according to the 5 predicted producing areas of the coarse classification result, the corresponding 5 producing-area perceptrons are selected; the data to be tested are input into the 5 perceptrons, the loss function of each is calculated, and the producing area corresponding to the perceptron with the smallest loss function is taken as the origin determination result. The origin obtained in the previous step is then mapped to the actual producing-area name using the producing-area number mapping and written into a fixed text file as the origin record; modification records of this origin file are monitored, and if a modification record exists, the corresponding field is updated and displayed to complete the prediction.
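The fine-classification step above reduces to a minimum-loss selection plus a number-to-name lookup; a sketch (the loss values, origin IDs and names are invented for illustration):

```python
# Fine classification: keep the producing area whose perceptron reports
# the smallest loss on the sample under test, then map ID -> name.
def fine_classify(losses, id_to_name):
    """losses: dict origin_id -> perceptron loss on the sample under test."""
    best = min(losses, key=losses.get)
    return id_to_name[best]

losses = {3: 0.42, 1: 0.07, 7: 0.55, 2: 0.31, 5: 0.18}
id_to_name = {1: "Myanmar", 2: "Mozambique", 3: "Thailand",
              5: "Madagascar", 7: "Vietnam"}
print(fine_classify(losses, id_to_name))  # → Myanmar
```

The resulting name is what the embodiment writes to the origin file for display.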
The embodiment of the invention has the following effects:
the method uses the lossless quantitative curves of the training samples as training data, giving a high degree of quantification in the detection of the ruby component elements and improving the accuracy and repeatability of that detection; on the basis of the component-element detection, coarse classification is performed with a random forest algorithm and fine classification with a deep neural network and multilayer perceptron, achieving the goal of ruby origin classification; by screening and analyzing the ruby component-element features with artificial-intelligence algorithms, automatic determination of the ruby origin is realized and the precision of ruby origin classification is improved.
Example two
Referring to fig. 2, a device for classifying a producing area of a ruby according to an embodiment of the present invention includes: a training module 201, a rough classification module 202 and a classification judgment module 203;
the training module 201 is configured to obtain training sample data to generate a training set, where the training sample data includes a ruby training sample, corresponding origin classification information, and a corresponding lossless quantitative curve, and the ruby training sample uses corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature sets comprise the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
the rough classification module 202 is configured to acquire the feature data of a sample to be tested and input the feature data into the decision trees, so that from the producing-area prediction results output by all the decision trees, the first N predicted producing areas ranked by count in descending order are screened out as the rough classification result, where N is a positive integer;
the classification judgment module 203 is configured to input the feature data of the sample to be tested into the corresponding N perceptrons according to the rough classification result to obtain N origin determination results; to calculate the loss function of each of the N perceptrons from the feature data of the sample to be tested; and to select the origin determination result output by the perceptron with the smallest loss function value as the origin classification result.
In this embodiment, the coarse classification module includes: a decision voting unit and a classification result unit;
the decision voting unit is configured to input the producing-area prediction results of all decision trees into a combiner for decision voting; the feature data of the sample to be tested is x, and the random forest model formed by all the decision trees is H = {h_k, k = 1, …, K}; the output of the k-th decision tree h_k on the feature data x is h_k(x), and the total number of votes for category i of the producing-area prediction result is:

votes(i) = Σ_{k=1}^{K} w_{k,i} · I(h_k(x) = i)

where w_{k,i} is the weight with which the corresponding decision tree k votes on category i of the producing-area prediction result;
and the classification result unit is configured to select the first N predicted producing areas ranked by vote count in descending order as the rough classification result.
In this embodiment, the classification judgment module comprises: a perceptron calculation unit and a judgment result unit;

the perceptron calculation unit is configured to calculate the loss functions of the perceptrons that judge the different producing areas from the training set data; to calculate the gradient of each hidden layer of the corresponding perceptron from its loss function and perform gradient descent on the parameters of each hidden layer; to update the loss function of each perceptron and the gradients of its hidden layers with the data of the different training sets until the loss function of each perceptron converges; and thereby to obtain a set of perceptrons that judge the different producing areas, each perceptron judging one producing area for the input data;

the judgment result unit is configured to input the feature data of the sample to be tested into the corresponding N perceptrons and to judge, using a multilayer fully connected neural network with a sigmoid activation function combined with chain-rule differentiation in the neural network back-propagation algorithm, whether the sample to be tested originates from each predicted producing area, giving N origin determination results;
wherein the single-layer fully connected transformation in the perceptron is:

y = f(W x) = f(Σ_i w_i x_i)

where f is the activation function, the sample to be tested is x with i-th component x_i, the place of origin is i, W is the single-layer fully connected transformation matrix corresponding to the sample x, and w_i is the i-th column of the transformation matrix;

the neural network back-propagation objective function is:

E(x) = ½ Σ_i (t_i − z_i)²

where x is the sample to be tested, t is the expected output, and z is the actual output.

The architecture of the perceptron and the sizes of its different hidden layers are as shown in the accompanying figures (images not reproduced in this text extraction).
the above device for classifying the origin of a ruby can implement the method for classifying the origin of a ruby of the method embodiment. The options in the above method embodiment also apply to this embodiment; for the remainder, reference may be made to the content of the method embodiment, which is not repeated here.
The embodiment of the invention has the following effects:
the training module of the ruby origin classification device uses the lossless quantitative curves of the training samples as training data, giving a high degree of quantification in the detection of the ruby component elements and improving the accuracy and repeatability of that detection; on the basis of the component-element detection, the rough classification module performs coarse classification with a random forest algorithm and the classification judgment module performs fine classification with a deep neural network and multilayer perceptron, achieving the goal of ruby origin classification; by screening and analyzing the ruby component-element features with artificial-intelligence algorithms, automatic determination of the ruby origin is realized and the precision of ruby origin classification is improved.
EXAMPLE III
Accordingly, the present invention also provides a computer readable storage medium comprising a stored computer program, wherein the computer program when executed controls an apparatus in which the computer readable storage medium is located to perform a method for classification of ruby origin as described in any one of the above embodiments.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or any conventional processor; it is the control center of the terminal device and connects the various parts of the whole terminal device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, the application programs required for at least one function, and the like, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier-wave signals, telecommunications signals, a software distribution medium, and the like.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.

Claims (10)

1. A method of classifying the origin of a ruby, comprising:
obtaining training sample data to generate a training set, wherein the training sample data comprises a ruby training sample, corresponding origin classification information and a corresponding lossless quantitative curve, and the ruby training sample takes corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature set comprises the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
acquiring characteristic data of a sample to be tested, inputting the characteristic data into a plurality of decision trees so that a producing area prediction result output by each decision tree screens out the first N predicted producing areas with the number of the predicted producing areas ordered from large to small as a coarse classification result from the producing area prediction results of all the decision trees, wherein N is a positive integer;
calculating the loss functions of the perceptrons that judge different producing areas according to the data of the training set; calculating the gradient of each hidden layer of the corresponding perceptron according to the loss function, and performing gradient descent on the parameters of each hidden layer; updating the loss function of each perceptron and the gradients of each perceptron's hidden layers according to the data of the different training sets until the loss function of each perceptron converges; obtaining a plurality of perceptrons that judge different producing areas, wherein each perceptron judges one producing area for the input data;
inputting the characteristic data of the sample to be detected into corresponding N perceptrons according to the rough classification result to obtain N origin judgment results, which specifically comprise:
inputting the feature data of the sample to be tested into the corresponding N perceptrons, and judging whether the sample to be tested originates from each predicted producing area by using a multilayer fully connected neural network and a sigmoid activation function combined with chain-rule differentiation of the neural network back-propagation algorithm, obtaining N producing-area determination results;
and respectively calculating the loss functions of the N perceptrons according to the feature data of the sample to be tested, and selecting the origin determination result output by the perceptron with the smallest loss function value as the origin classification result.
2. The method according to claim 1, wherein the obtaining training sample data generates a training set, specifically:
obtaining a large number of ruby training samples of known production places; classifying the ruby training samples according to the producing areas to obtain corresponding producing area classification information; determining characteristic chemical elements of the ruby training sample, and establishing a corresponding lossless quantitative curve in a lossless component analysis instrument, wherein the characteristic chemical elements comprise silicon, magnesium, potassium, calcium, titanium, vanadium, chromium, iron, gallium and zinc, and the lossless quantitative curve comprises the linear combination of trace element content, trace element content ratio and trace element content of ruby;
and generating a training set by taking the ruby training sample, the corresponding origin classification information and the corresponding lossless quantitative curve as training sample data, wherein the ruby training sample takes corresponding characteristic chemical elements as the characteristics of the training sample.
3. A method for classifying a region of origin of a ruby according to claim 1, wherein said generating a plurality of decision trees from a plurality of said feature sets comprises:
for each decision tree, the feature selected by calculating the Gini index serves as the root node, the non-leaf nodes of each decision tree are set as decision nodes, and the leaf nodes of each decision tree are set as output units, wherein each decision node holds a sample feature and the corresponding decision value, and each leaf node corresponds to a producing-area prediction result.
4. A method for ruby origin classification according to claim 3, wherein after said generating a plurality of decision trees, further comprising:
in the training process of the decision trees, the splitting features are selected by evaluating the impurity reduction with either the information gain ratio or the Gini index, specifically:
when the splitting features are selected by evaluating the impurity reduction with the information gain ratio, the feature f is divided into m value intervals, and the sample set X at the node is split by this feature into m branch nodes, where the sample set contained in the j-th branch node is

X_j = the subset of samples of X whose value on the feature f falls in the j-th value interval;

the information gain obtained by dividing the sample set X using the feature f is:

Gain(X, f) = Ent(X) − Σ_{j=1}^{m} (|X_j| / |X|) · Ent(X_j)

where |X_j| / |X| is the proportion of samples of X that fall in the subset X_j, and Ent(X_j) is the information entropy of the sample set X_j;

given the feature f, the classification criterion with the largest information gain is used for feature selection:

f* = arg max_f Gain(X, f)
when the splitting features are selected by evaluating the impurity reduction with the Gini index, the Gini index of the feature f with respect to the sample set X is:

Gini_index(X, f) = Σ_{j=1}^{m} (|X_j| / |X|) · Gini(X_j), with Gini(X) = 1 − Σ_i p_i²

where p_i is the proportion of the number of samples belonging to class i in the current sample set at the node to the total number of samples; given the feature f, feature selection is performed using the Gini criterion:

f* = arg min_f Gini_index(X, f)
5. The method for classifying ruby origins according to claim 3, wherein the first N predicted producing areas ranked by count in descending order are screened out from the producing-area prediction results of all the decision trees as the coarse classification result, specifically:
the producing-area prediction results of all decision trees are input into a combiner for decision voting; the feature data of the sample to be tested is x, and the random forest model formed by all the decision trees is H = {h_k, k = 1, …, K}, where K is the number of decision trees in the random forest model and h_k is the k-th decision tree of the random forest model H; the output of the k-th decision tree on the feature data x is h_k(x), and the total number of votes for category i of the producing-area prediction result is:

votes(i) = Σ_{k=1}^{K} w_{k,i} · I(h_k(x) = i)

where w_{k,i} is the weight with which the corresponding decision tree k votes on category i of the producing-area prediction result, I(·) is the indicator function, and K is the number of decision trees in the random forest model;
and the first N predicted producing areas ranked by vote count in descending order are selected as the coarse classification result.
6. The method for classifying a ruby origin according to claim 1, wherein the feature data of the sample to be tested are input into the corresponding N perceptrons to obtain N origin determination results, specifically:
the single-layer fully connected transformation in the perceptron is:

y = f(W x) = f(Σ_i w_i x_i)

where f is the activation function, the sample to be tested is x, the place of origin is i, W is the single-layer fully connected transformation matrix corresponding to the sample x to be tested, w_i is the i-th column of the transformation matrix, and x_i is the i-th component of x;
the neural network back-propagation objective function is:

E(x) = ½ Σ_i (t_i − z_i)²

where x is the sample to be tested, t is the expected output, and z is the actual output;
the architecture of the perceptron and the sizes of its different hidden layers are as given in the accompanying figures.
7. a device for classifying the origin of ruby, comprising: the device comprises a training module, a rough classification module and a classification judgment module;
the training module is used for acquiring training sample data to generate a training set, wherein the training sample data comprises a ruby training sample, corresponding origin classification information and a corresponding lossless quantitative curve, and the ruby training sample takes corresponding characteristic chemical elements as sample characteristics; randomly selecting a plurality of sample features in the training set to generate a feature set, and repeating the random selection operation for a plurality of times to obtain a plurality of feature sets, wherein the feature sets comprise the sample features and the classification information of the origin; generating a plurality of decision trees according to a plurality of feature sets;
the rough classification module is used for acquiring feature data of a sample to be detected, inputting the feature data into a plurality of decision trees, so that a producing area prediction result output by each decision tree screens out the first N predicted producing areas with the number sorted from large to small from the producing area prediction results of all the decision trees as a rough classification result, wherein N is a positive integer;
the classification judgment module is configured to calculate the loss functions of the perceptrons that judge different producing areas according to the data of the training set; calculate the gradient of each hidden layer of the corresponding perceptron according to the loss function, and perform gradient descent on the parameters of each hidden layer; update the loss function of each perceptron and the gradients of each perceptron's hidden layers according to the data of the different training sets until the loss function of each perceptron converges; and obtain a plurality of perceptrons that judge different producing areas, wherein each perceptron judges one producing area for the input data;
inputting the feature data of the sample to be tested into the corresponding N perceptrons according to the rough classification result to obtain N origin determination results, specifically: inputting the feature data of the sample to be tested into the corresponding N perceptrons, judging whether the sample to be tested originates from each predicted producing area by using a multilayer fully connected neural network and a sigmoid activation function combined with chain-rule differentiation of the neural network back-propagation algorithm, and obtaining N producing-area determination results; and respectively calculating the loss functions of the N perceptrons according to the feature data of the sample to be tested, and selecting the origin determination result output by the perceptron with the smallest loss function value as the origin classification result.
8. A ruby origin classification apparatus according to claim 7, wherein said rough classification module comprises: a decision voting unit and a classification result unit;
the decision voting unit is used for inputting the origin prediction results of all decision trees into the combiner for decision voting; the characteristic data of the sample to be tested is x, and the random forest model formed by all the decision trees is

H = {h_1, h_2, …, h_K}

wherein K is the number of decision trees in the random forest model and h_k is the k-th decision tree in the random forest model H; the output of the feature data x on decision tree h_j in the random forest H is h_j(x), and the total vote count for origin prediction category i is:

S_i = Σ_{j=1}^{K} w_{ij} · I(h_j(x) = i)

wherein w_{ij} is the voting weight of the corresponding decision tree j for origin prediction category i, I(·) is the indicator function, and K is the number of decision trees in the random forest model;
and the classification result unit is used for selecting the top N predicted origins, sorted by vote count in descending order, as the rough classification result.
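The weighted vote S_i and the top-N selection performed by these two units can be sketched as follows; the tree predictions, weights, and class count are hypothetical toy values, not data from the patent:

```python
import numpy as np

def weighted_vote_topn(tree_preds, weights, n_classes, n_top):
    """Combiner for decision voting over K decision trees.

    tree_preds: length-K array of predicted origin categories h_j(x).
    weights:    (K, n_classes) array, weights[j, i] = voting weight of
                tree j for origin category i.
    Returns the top n_top origin categories ranked by total vote
    S_i = sum_j weights[j, i] * I(h_j(x) == i).
    """
    votes = np.zeros(n_classes)
    for j, pred in enumerate(tree_preds):
        votes[pred] += weights[j, pred]   # only the indicated class scores
    # stable sort keeps tie handling deterministic
    return np.argsort(votes, kind="stable")[::-1][:n_top]

# toy example: K = 5 trees, 3 candidate origins, equal weights
preds = np.array([0, 2, 2, 0, 2])
w = np.ones((5, 3))
print(weighted_vote_topn(preds, w, n_classes=3, n_top=2))  # [2 0]
```

With equal weights this reduces to plain majority voting; unequal per-class weights let more reliable trees, or trees better at particular origins, count for more, as the formula in the claim allows.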
9. A ruby origin classification device according to claim 7, wherein said classification judgment module comprises a judgment result unit, which specifically comprises:
the single-layer fully-connected layer in the perceptron is:

y = f(Wx) = f(Σ_i W_i x_i)

wherein f is an activation function, the sample to be detected is x, and the origin is i; W is the fully-connected transformation matrix corresponding to the sample x to be measured, W_i is the i-th column of the transformation matrix, and x_i is the i-th component of x;

the neural network back-propagation objective function is:

E(x) = ½ Σ_k (t_k − z_k)²

wherein x is the sample to be measured, t is the expected output, and z is the actual output;

the architecture of the perceptron and the sizes of its different hidden layers are specified in formula images not reproduced in this text.
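The single-layer forward pass y = f(Wx), the squared-error objective, and one chain-rule gradient-descent step can be sketched together. The layer width, learning rate, feature values, and the choice of sigmoid as the activation f are illustrative assumptions, not parameters disclosed by the patent:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(W, x):
    # y = f(Wx) = f(sum_i W_i * x_i), with W_i the i-th column of W
    return sigmoid(W @ x)

def loss(t, z):
    # back-propagation objective E = 1/2 * sum_k (t_k - z_k)^2
    return 0.5 * float(np.sum((t - z) ** 2))

def grad_step(W, x, t, lr=0.5):
    # chain rule for a sigmoid layer: dE/dW = ((z - t) * z * (1 - z)) outer x
    z = forward(W, x)
    delta = (z - t) * z * (1.0 - z)   # dE/da for pre-activation a = W x
    return W - lr * np.outer(delta, x)

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(1, 4))     # single "is this origin?" output unit
x = np.array([0.2, 0.5, 0.1, 0.9])    # hypothetical feature vector
t = np.array([1.0])                   # expected output: sample is from this origin
for _ in range(500):
    W = grad_step(W, x, t)
print(loss(t, forward(W, x)))         # small: the objective has converged
```

Repeating such steps over different training samples until E stops decreasing is the convergence criterion the claims describe for each per-origin perceptron.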
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform a method of classification of ruby origin according to any one of claims 1 to 6.
CN202211107096.2A 2022-09-13 2022-09-13 Method, device and storage medium for classifying ruby producing areas Active CN115186776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211107096.2A CN115186776B (en) 2022-09-13 2022-09-13 Method, device and storage medium for classifying ruby producing areas

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211107096.2A CN115186776B (en) 2022-09-13 2022-09-13 Method, device and storage medium for classifying ruby producing areas

Publications (2)

Publication Number Publication Date
CN115186776A CN115186776A (en) 2022-10-14
CN115186776B true CN115186776B (en) 2022-12-13

Family

ID=83524524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211107096.2A Active CN115186776B (en) 2022-09-13 2022-09-13 Method, device and storage medium for classifying ruby producing areas

Country Status (1)

Country Link
CN (1) CN115186776B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115618282B (en) * 2022-12-16 2023-06-06 国检中心深圳珠宝检验实验室有限公司 Identification method, device and storage medium for synthetic precious stone

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489088A (en) * 2013-09-17 2014-01-01 北京农业信息技术研究中心 Method and device for collecting and processing loading and unloading goods information
CN110412115A (en) * 2019-07-30 2019-11-05 浙江省农业科学院 Unknown time green tea source area prediction technique based on stable isotope and multielement
CN110569581A (en) * 2019-08-27 2019-12-13 中国检验检疫科学研究院 Method for distinguishing production places of Chinese wolfberry based on multi-element combination random forest algorithm
CN112666119A (en) * 2020-12-03 2021-04-16 山东省科学院自动化研究所 Method and system for detecting ginseng tract geology based on terahertz time-domain spectroscopy
WO2021211840A1 (en) * 2020-04-15 2021-10-21 Chan Zuckerberg Biohub, Inc. Local-ancestry inference with machine learning model
CN114112983A (en) * 2021-10-18 2022-03-01 中国科学院西北高原生物研究所 Python data fusion-based Tibetan medicine all-leaf artemisia rupestris L producing area distinguishing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744869B (en) * 2021-09-07 2024-03-26 中国医科大学附属盛京医院 Method for establishing early screening light chain type amyloidosis based on machine learning and application thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Review of Analytical Methods Used in Geographic Origin Determination of Gemstones; Lee A. Groat et al.; Gems & Gemology; 2019; Vol. 55, No. 4; pp. 512-535 *
Study on Fungal Community Diversity in Wuyi Rock Tea and Its Characteristic Information for Origin Identification; Lou Yunxiao; China Master's Theses Full-text Database (Engineering Science and Technology I); 2019-03-15 (No. 03); B024-161 *

Also Published As

Publication number Publication date
CN115186776A (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN113792825B (en) Fault classification model training method and device for electricity information acquisition equipment
CN112215696A (en) Personal credit evaluation and interpretation method, device, equipment and storage medium based on time sequence attribution analysis
CN112559900B (en) Product recommendation method and device, computer equipment and storage medium
CN112598294A (en) Method, device, machine readable medium and equipment for establishing scoring card model on line
CN115186776B (en) Method, device and storage medium for classifying ruby producing areas
CN109345050A (en) A kind of quantization transaction prediction technique, device and equipment
CN110930038A (en) Loan demand identification method, loan demand identification device, loan demand identification terminal and loan demand identification storage medium
CN115618282B (en) Identification method, device and storage medium for synthetic precious stone
CN111860698A (en) Method and device for determining stability of learning model
CN111105041B (en) Machine learning method and device for intelligent data collision
CN111815209A (en) Data dimension reduction method and device applied to wind control model
Al-Fraihat et al. Hyperparameter Optimization for Software Bug Prediction Using Ensemble Learning
CN114418748A (en) Vehicle credit evaluation method, device, equipment and storage medium
CN112434862B (en) Method and device for predicting financial dilemma of marketing enterprises
CN112232724B (en) Quantitative evaluation method, system, equipment and storage medium for personnel ability
CN117235633A (en) Mechanism classification method, mechanism classification device, computer equipment and storage medium
CN111582647A (en) User data processing method and device and electronic equipment
CN109858541A (en) A kind of specific data self-adapting detecting method based on data integration
CN115423600A (en) Data screening method, device, medium and electronic equipment
CN114021716A (en) Model training method and system and electronic equipment
CN111612626A (en) Method and device for preprocessing bond evaluation data
US20240160696A1 (en) Method for Automatic Detection of Pair-Wise Interaction Effects Among Large Number of Variables
CN112184708B (en) Sperm survival rate detection method and device
CN113240353B (en) Cross-border e-commerce oriented export factory classification method and device
Chang et al. Applying Decision Tree to Detect Credit Card Fraud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant