CN115374763B - System for acquiring user priority - Google Patents

System for acquiring user priority

Info

Publication number
CN115374763B
Authority
CN
China
Prior art keywords
decision tree
tree model
target
list
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211300026.9A
Other languages
Chinese (zh)
Other versions
CN115374763A (en)
Inventor
赵洲洋
石江枫
王全修
于伟
靳雯
王林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rizhao Ruian Information Technology Co ltd
Beijing Rich Information Technology Co ltd
Original Assignee
Rizhao Ruian Information Technology Co ltd
Beijing Rich Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rizhao Ruian Information Technology Co ltd, Beijing Rich Information Technology Co ltd filed Critical Rizhao Ruian Information Technology Co ltd
Priority to CN202211300026.9A priority Critical patent/CN115374763B/en
Publication of CN115374763A publication Critical patent/CN115374763A/en
Application granted granted Critical
Publication of CN115374763B publication Critical patent/CN115374763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/194Calculation of difference between files

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a system for obtaining user priorities. The system obtains the target text features and the target decision tree model features corresponding to a target event, calculates the similarity between the target text features and the target decision tree model features to obtain an intermediate decision tree model list, takes the union of the intermediate decision tree model lists to obtain a key decision tree model list, and obtains the operation results of the key decision tree models to derive the priority corresponding to the target event.

Description

System for acquiring user priority
Technical Field
The invention relates to the technical field of decision tree models, in particular to a system for acquiring user priorities.
Background
In the prior art, when a target event needs to be judged, a plurality of decision tree models are formed from judgment conditions, all features of the text corresponding to the target event are extracted, and all decision tree models are operated on those features. As a result, a large number of decision tree model operations must be performed, which places an excessive load on the processor, prolongs the processor's operation time, and lowers operation efficiency.
In addition, when the decision tree models need to be maintained, all decision tree models usually have to be traversed to locate the ones requiring maintenance. Moreover, when a feature to be maintained applies to several decision tree models at once, each of those models still has to be searched for and maintained individually; decision tree models of the same type that need the same feature maintained are not maintained together. This makes maintenance inefficient, burdens the processor heavily, and consumes a large amount of time.
Disclosure of Invention
To address the above technical problems, the invention adopts the following technical solution:
A system for obtaining user priorities, the system comprising: a database, a processor, and a memory storing a computer program, wherein the database comprises a target rule list C = {C_1, C_2, ..., C_g, ..., C_z}, where the g-th target rule C_g = {C_g1, ..., C_gx, ..., C_gqg}, C_gx is the x-th target rule feature corresponding to C_g, x takes values from 1 to qg, qg is the number of target rule features corresponding to C_g, g takes values from 1 to z, and z is the number of target rules. When the computer program is executed by the processor, the following steps are implemented:
S100, obtaining a target text list A = {A_1, A_2, ..., A_j, ..., A_m} corresponding to a target event, where the j-th target text A_j = {A_j1, A_j2, ..., A_jr, ..., A_jsj}, A_jr is the r-th target text feature of A_j, r takes values from 1 to sj, sj is the number of target text features corresponding to A_j, j takes values from 1 to m, and m is the number of target texts.
S200, constructing a target decision tree model list B = {B_1, B_2, ..., B_g, ..., B_z} based on C, where the g-th target decision tree model B_g = {B_g1, B_g2, ..., B_gx, ..., B_gqg}, and B_gx is the target decision tree model feature corresponding to C_gx.
S300, calculating the similarity between A_j and B_g in turn, and obtaining an intermediate decision tree model list Z = {Z_1, Z_2, ..., Z_j, ..., Z_m} based on the similarity between A_j and B_g, where the j-th intermediate decision tree model list Z_j = {Z_j1, Z_j2, ..., Z_ja, ..., Z_jcj}, Z_ja is the a-th intermediate decision tree model corresponding to A_j, a takes values from 1 to cj, cj is the number of intermediate decision tree models corresponding to A_j, and an intermediate decision tree model is a target decision tree model whose similarity with A_j is greater than the preset similarity threshold B_g^0 corresponding to B_g.
S400, obtaining a key decision tree model list Bʹ = {Bʹ_1, Bʹ_2, ..., Bʹ_t, ..., Bʹ_k}, where Bʹ_t is the t-th key decision tree model, t takes values from 1 to k, k is the number of key decision tree models, and Bʹ = Z_1 ∪ ... ∪ Z_j ∪ ... ∪ Z_m.
S500, obtaining a designated decision tree model list D = {D_1, D_2, ..., D_t, ..., D_k} according to Bʹ, where D_t is the designated decision tree model corresponding to Bʹ_t, a designated decision tree model is a key decision tree model whose operation result is the first key decision tree model result, and the first key decision tree model result is labeled 0.
S600, obtaining the priority S_0 corresponding to the target event according to D, where S_0 meets the following condition:
[Formula available only as an image (DEST_PATH_IMAGE002) in the original publication.]
where w_t is the weight corresponding to D_t.
The invention has at least the following beneficial effects:
(1) The target text features and the target decision tree model features corresponding to a target event are obtained; the similarity between the target text features and the target decision tree model features is calculated to obtain an intermediate decision tree model list; the union of the intermediate decision tree model lists is taken to obtain a key decision tree model list; and the operation results of the key decision tree models are obtained to derive the priority corresponding to the target event. Therefore, only the decision tree models in the key decision tree model list need to be operated, rather than all target decision tree models, which reduces the number of decision tree models the processor must operate, improves the processor's operation efficiency, and saves processor operation time.
(2) When a new decision text is added and needs to be judged with the target decision tree models, features are extracted from the intermediate text to obtain an intermediate text feature list, the similarity between the intermediate text features and each target decision tree model is calculated, and the target decision tree models whose similarity with the intermediate text features is greater than the preset similarity threshold are selected to generate the final decision tree model list.
(3) A classified maintenance mode for the target decision tree models is provided: a first label list and a second label list are obtained, and bucket-division processing is performed according to the first label list and the second label list to obtain the target decision tree model category list.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a flowchart of a system for obtaining user priorities executed by a processor according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention provides a system for acquiring user priority, which comprises: a database, a processor, and a memory storing a computer program, wherein the database comprises a target rule list C = {C_1, C_2, ..., C_g, ..., C_z}, where the g-th target rule C_g = {C_g1, ..., C_gx, ..., C_gqg}, C_gx is the x-th target rule feature corresponding to C_g, x takes values from 1 to qg, qg is the number of target rule features corresponding to C_g, g takes values from 1 to z, and z is the number of target rules. When the computer program is executed by the processor, as shown in fig. 1, the following steps are implemented:
S100, obtaining a target text list A = {A_1, A_2, ..., A_j, ..., A_m} corresponding to a target event, where the j-th target text A_j = {A_j1, A_j2, ..., A_jr, ..., A_jsj}, A_jr is the r-th target text feature of A_j, r takes values from 1 to sj, sj is the number of target text features corresponding to A_j, j takes values from 1 to m, and m is the number of target texts.
Specifically, the target text includes text describing the target event, text used to judge different stages of the target event, and the like.
Further, target text features are extracted from the target text based on OCR (Optical Character Recognition).
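As an illustration only (the embodiment does not prescribe a particular OCR implementation), the minimal Python sketch below assumes the target texts arrive as scanned images, uses Tesseract via the pytesseract package for recognition, and treats unique word-like tokens as the target text features; the function name and the tokenization scheme are assumptions, not part of the patent.

```python
# Minimal sketch: OCR one scanned target text and extract keyword-style features.
# Assumptions (not from the patent): pytesseract/Tesseract as the OCR engine,
# regex tokenization as the feature-extraction step.
import re
import pytesseract
from PIL import Image

def extract_target_text_features(image_path: str, lang: str = "chi_sim") -> list:
    """Return the feature list A_j = {A_j1, ..., A_jsj} for one target text."""
    text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
    tokens = re.findall(r"\w+", text)
    seen, features = set(), []
    for tok in tokens:            # keep unique tokens, preserving first-seen order
        if tok not in seen:
            seen.add(tok)
            features.append(tok)
    return features
```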
S200, constructing a target decision tree model list B = {B_1, B_2, ..., B_g, ..., B_z} based on C, where the g-th target decision tree model B_g = {B_g1, B_g2, ..., B_gx, ..., B_gqg}, and B_gx is the target decision tree model feature corresponding to C_gx.
Specifically, the execution of a target decision tree model can be understood as imitating, on the basis of a tree structure, the way humans make decisions in real life from the available material.
Further, a target decision tree model feature is a condition for judging a target text feature, obtained from the corresponding target rule feature of the target rule to which the target decision tree model corresponds.
Specifically, those skilled in the art know that any method for obtaining a target decision tree model according to target rule features falls within the scope of the present invention, and will not be described herein again.
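As a concrete but non-limiting illustration, one simple way to realize a target decision tree model B_g is as a set of boolean conditions, one per target rule feature C_gx, evaluated over the feature list of a target text. The class below is a sketch under that assumption and is not the construction prescribed by the patent; later sketches in this description reuse it.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TargetDecisionTreeModel:
    """Sketch of B_g: one condition per target rule feature C_gx (assumed form)."""
    model_id: str
    features: List[str]                               # B_g1 .. B_gqg, here keyword conditions
    conditions: List[Callable[[list], bool]] = field(default_factory=list)

    def run(self, text_features: list) -> int:
        """Return 1 (second result) if every condition holds, else 0 (first result)."""
        checks = self.conditions or [
            (lambda feats, f=f: f in feats) for f in self.features
        ]
        return 1 if all(check(text_features) for check in checks) else 0
```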
Further, each target decision tree model executes on the target text independently; that is, the target decision tree models exist independently of one another, do not affect each other, and operate separately.
S300, calculating the similarity between A_j and B_g in turn, and obtaining an intermediate decision tree model list Z = {Z_1, Z_2, ..., Z_j, ..., Z_m} based on the similarity between A_j and B_g, where the j-th intermediate decision tree model list Z_j = {Z_j1, Z_j2, ..., Z_ja, ..., Z_jcj}, Z_ja is the a-th intermediate decision tree model corresponding to A_j, a takes values from 1 to cj, cj is the number of intermediate decision tree models corresponding to A_j, and an intermediate decision tree model is a target decision tree model whose similarity with A_j is greater than the preset similarity threshold B_g^0 corresponding to B_g.
Specifically, those skilled in the art know that any way of calculating the similarity between the target decision tree model and the target text falls within the scope of the present invention, and will not be described herein again.
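As one possible choice (the patent leaves the similarity measure open), the sketch below uses Jaccard similarity between the feature set of a target text A_j and the feature set of a target decision tree model B_g, and keeps the models whose similarity exceeds the threshold B_g^0. It continues the TargetDecisionTreeModel sketch above; the threshold mapping keyed by model id is an assumed illustration.

```python
def jaccard_similarity(text_features: list, model_features: list) -> float:
    """One possible similarity measure between A_j and B_g (assumed, not prescribed)."""
    a, b = set(text_features), set(model_features)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def intermediate_models(text_features: list, models: list, thresholds: dict) -> list:
    """Z_j: target decision tree models whose similarity with A_j exceeds B_g^0."""
    return [
        m for m in models
        if jaccard_similarity(text_features, m.features) > thresholds[m.model_id]
    ]
```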
Further, every key decision tree model in Bʹ belongs to B, which can be understood as k ≤ z.
Further, B_1^0 = B_2^0 = ... = B_g^0 = ... = B_z^0.
Preferably, the preset similarity thresholds corresponding to different target decision tree models are different, and a person skilled in the art can set the corresponding preset similarity threshold according to the characteristics of each target decision tree model, which is not described herein again.
S400, obtaining a key decision tree model list Bʹ = {Bʹ_1, ..., Bʹ_t, ..., Bʹ_k}, where Bʹ_t is the t-th key decision tree model, t takes values from 1 to k, k is the number of key decision tree models, and Bʹ = Z_1 ∪ ... ∪ Z_j ∪ ... ∪ Z_m.
Specifically, in this embodiment of the present invention, the user may also designate key decision tree models. When the user designates key decision tree models, only those designated by the user are operated in the key decision tree model operation stage; when the user does not designate any, all key decision tree models are operated in that stage. This reduces the number of decision tree models the processor must operate, improves the processor's operation efficiency, and saves processor operation time.
Further, the user can select the key decision tree models to be operated by clicking, as shown in the sketch below.
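A short continuation of the sketches above for S400: the key decision tree model list Bʹ is the union Z_1 ∪ ... ∪ Z_m, optionally restricted to the models the user designated by clicking. The deduplication by model id is an implementation assumption.

```python
def key_models(intermediate_lists: list, user_selected_ids: set = None) -> list:
    """B' = Z_1 ∪ ... ∪ Z_m, optionally restricted to user-designated models."""
    union = {}
    for z_j in intermediate_lists:
        for model in z_j:
            union[model.model_id] = model            # dedupe by model id
    models = list(union.values())
    if user_selected_ids:                            # run only what the user clicked
        models = [m for m in models if m.model_id in user_selected_ids]
    return models
```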
Further, in another embodiment of the present invention, after S400, the method further includes the following steps:
S410, when an intermediate text is received, acquiring an intermediate text feature set X = {X_1, ..., X_i, ..., X_n}, where X_i is the i-th intermediate text feature, i takes values from 1 to n, and n is the number of intermediate text features corresponding to the intermediate text.
S420, traversing B, calculating the similarity between B_g and X, and obtaining a final decision tree model list B″ = {B″_1, B″_2, ..., B″_u, ..., B″_w}, where B″_u is the u-th final decision tree model, u takes values from 1 to w, w is the number of final decision tree models, and a final decision tree model is a target decision tree model whose similarity between X_i and B_gx is greater than the preset similarity threshold B_g^0 corresponding to B_g.
In this embodiment of the invention, the user may also designate final decision tree models. When the user designates final decision tree models, only those designated by the user are operated in the final decision tree model operation stage; when the user does not designate any, all final decision tree models are operated in that stage. This reduces the number of decision tree models the processor must operate, improves the processor's operation efficiency, and saves processor operation time.
In steps S410 to S420 above, features are extracted from the intermediate text to obtain the intermediate text feature list, the similarity between the intermediate text features and each target decision tree model is calculated, and the target decision tree models whose similarity is greater than the preset similarity threshold are selected to generate the final decision tree model list.
S500, obtaining a designated decision tree model list D = {D_1, D_2, ..., D_t, ..., D_k} according to Bʹ, where D_t is the designated decision tree model corresponding to Bʹ_t, a designated decision tree model is a key decision tree model whose operation result is the first key decision tree model result, and the first key decision tree model result is labeled 0.
Specifically, the first key decision tree model result being labeled 0 means that the first key decision tree model result is a key decision tree result whose execution result is wrong.
S600, obtaining the priority S_0 corresponding to the target event according to D, where S_0 meets the following condition:
[Formula available only as an image (DEST_PATH_IMAGE004) in the original publication.]
where w_t is the weight corresponding to D_t.
Specifically, w_1 = w_2 = ... = w_t = ... = w_k, which makes it convenient to compute the priority corresponding to the target event.
Further, those skilled in the art may set the weight corresponding to the designated decision tree according to actual requirements, which is not described herein again.
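Since the S600 formula itself is available only as an image in the original publication, the aggregation below is purely an assumed illustration: it builds D as the key decision tree models whose operation result is 0 (S500) and sums their weights w_t. It should not be read as the patent's actual formula.

```python
def designated_models(key_model_list: list, text_features: list) -> list:
    """D: key decision tree models whose operation result is the first result (0)."""
    return [m for m in key_model_list if m.run(text_features) == 0]

def priority_weighted(designated: list, weights: dict) -> float:
    """Assumed aggregation for S600: the sum of the weights w_t of the models in D.
    The real formula is rendered as an image in the original text; illustrative only."""
    return sum(weights[m.model_id] for m in designated)
```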
Further, in another embodiment of the present invention, the priority S_0 corresponding to the target event may also be obtained through the following steps:
S501, obtaining a key decision tree model result list S = {S_1, S_2, ..., S_t, ..., S_k} corresponding to the key decision tree model list, where S_t is the key decision tree model result corresponding to Bʹ_t; a key decision tree model result is either a first key decision tree model result or a second key decision tree model result, the first key decision tree result being labeled 0 and the second key decision tree result being labeled 1;
S502, obtaining S_01 and S_02 according to S, where S_01 is the number of 0s in S and S_02 is the number of 1s in S;
S503, obtaining the priority S_0 corresponding to the target event according to S_01 and S_02, where S_0 meets the following condition:
S_0 = S_01 / S_02 × 100%.
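A short sketch of the alternative computation in S501-S503, continuing the model class above: run each key decision tree model, count the results labeled 0 (S_01) and 1 (S_02), and take their ratio. Note that the formula divides by S_02, so at least one result of 1 is assumed.

```python
def priority_from_results(key_model_list: list, text_features: list) -> float:
    """S_0 = S_01 / S_02 * 100%, with S_01 the count of 0s and S_02 the count of 1s."""
    results = [m.run(text_features) for m in key_model_list]   # S_t for each B'_t
    s01 = results.count(0)
    s02 = results.count(1)
    if s02 == 0:
        raise ValueError("S_02 is zero; S_01 / S_02 is undefined")
    return s01 / s02 * 100.0
```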
The invention provides a system for obtaining the priority corresponding to a target event. The system obtains the target text features and the target decision tree model features corresponding to the target event, calculates the similarity between them to obtain an intermediate decision tree model list, takes the union of the intermediate decision tree model lists to obtain a key decision tree model list, and obtains the operation results of the key decision tree models to derive the priority corresponding to the target event.
Further, in an embodiment of the present invention, after S200, the method further includes the following steps for classifying the target decision tree models:
S10, obtaining a first label list L = {L_1, L_2, ..., L_α, ..., L_β}, where L_α is the α-th first label, α takes values from 1 to β, β is the number of first labels, and a first label is a data type on which a target decision tree model depends.
Specifically, the target data type is a data type on which a target decision tree depends, and the target data types may include: event data types, e.g., the target event type, the target event phase, and the time when the target event was established; text data types, e.g., the target text type, the event established by the target text, and specific structured fields of the target text; and intelligence information types, such as fingerprint type and signature information type.
S20, obtaining a second label list Y = {Y_1, Y_2, ..., Y_δ, ..., Y_ε}, where Y_δ is the δ-th second label, δ takes values from 1 to ε, ε is the number of second labels, and a second label is a detection mode of a target decision tree model.
Specifically, the second label is the checking mode of a target decision tree, and the checking modes include: judging whether the two types of text corresponding to the target event exist at the same time, and judging whether the establishment times of the two types of text corresponding to the target event differ by no more than a preset time threshold.
S30, performing bucket-division processing on the target decision tree models according to L and Y to obtain a target decision tree model category list K = {K_1, K_2, ..., K_α, ..., K_β}, where K_α = (K_α1, K_α2, ..., K_αη, ..., K_αθα), K_αη is the η-th target decision tree model category in the α-th target decision tree model category list, η takes values from 1 to θα, θα is the number of target decision tree model categories in the α-th target decision tree model category list, and a target decision tree model category is a group of target decision tree models having the same first label.
In steps S10-S30 above, the first label list and the second label list are obtained and bucket-division processing is performed according to them to obtain the target decision tree model category list, so that when target decision tree models need to be maintained, only the rule templates corresponding to target decision tree models of the same category need to be maintained, which saves maintenance time and improves maintenance efficiency.
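As an illustration of the bucket-division in S10-S30, the sketch below groups the target decision tree models by their first label (the data type they depend on) and, within each first label, by their second label (their detection mode), so that one maintenance pass covers every model of the same category. The label mappings passed in are assumed inputs, not structures defined by the patent.

```python
from collections import defaultdict

def bucket_models(models: list, first_label_of: dict, second_label_of: dict) -> dict:
    """K: for each first label, the list of categories (models sharing both labels)."""
    buckets = defaultdict(lambda: defaultdict(list))    # K[alpha][eta] -> models
    for m in models:
        buckets[first_label_of[m.model_id]][second_label_of[m.model_id]].append(m)
    # Convert to: first label -> list of categories K_α1 .. K_αθα
    return {fl: list(by_second.values()) for fl, by_second in buckets.items()}
```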
The present specification provides method steps as described in the examples or flowcharts, but may include more or fewer steps based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. In practice, the system or server product may be implemented in a sequential or parallel manner (e.g., parallel processor or multi-threaded environment) according to the embodiments or methods shown in the figures.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus and computer device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple and reference may be made to some of the description of the method embodiments for related points.
Although some specific embodiments of the present invention have been described in detail by way of illustration, it should be understood by those skilled in the art that the above illustration is only for the purpose of illustration and is not intended to limit the scope of the invention. It will also be appreciated by those skilled in the art that various modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (8)

1. A system for obtaining user priorities, the system comprising: a database, a processor, and a memory storing a computer program, wherein the database comprises a target rule list C = {C_1, C_2, ..., C_g, ..., C_z}, where the g-th target rule C_g = {C_g1, ..., C_gx, ..., C_gqg}, C_gx is the x-th target rule feature corresponding to C_g, x takes values from 1 to qg, qg is the number of target rule features corresponding to C_g, g takes values from 1 to z, and z is the number of target rules; when the computer program is executed by the processor, the following steps are implemented:
S100, obtaining a target text list A = {A_1, A_2, ..., A_j, ..., A_m} corresponding to a target event, where the j-th target text A_j = {A_j1, A_j2, ..., A_jr, ..., A_jsj}, A_jr is the r-th target text feature of A_j, r takes values from 1 to sj, sj is the number of target text features corresponding to A_j, j takes values from 1 to m, and m is the number of target texts;
S200, constructing a target decision tree model list B = {B_1, B_2, ..., B_g, ..., B_z} based on C, where the g-th target decision tree model B_g = {B_g1, B_g2, ..., B_gx, ..., B_gqg}, and B_gx is the target decision tree model feature corresponding to C_gx;
S300, calculating the similarity between A_j and B_g in turn, and obtaining an intermediate decision tree model list Z = {Z_1, Z_2, ..., Z_j, ..., Z_m} based on the similarity between A_j and B_g, where the j-th intermediate decision tree model list Z_j = {Z_j1, Z_j2, ..., Z_ja, ..., Z_jcj}, Z_ja is the a-th intermediate decision tree model corresponding to A_j, a takes values from 1 to cj, cj is the number of intermediate decision tree models corresponding to A_j, and an intermediate decision tree model is a target decision tree model whose similarity with A_j is greater than the preset similarity threshold B_g^0 corresponding to B_g;
S400, obtaining a key decision tree model list Bʹ = {Bʹ_1, Bʹ_2, ..., Bʹ_t, ..., Bʹ_k} according to Z, where Bʹ_t is the t-th key decision tree model, t takes values from 1 to k, k is the number of key decision tree models, and Bʹ = Z_1 ∪ ... ∪ Z_j ∪ ... ∪ Z_m;
S500, obtaining a designated decision tree model list D = {D_1, D_2, ..., D_t, ..., D_k} according to Bʹ, where D_t is the designated decision tree model corresponding to Bʹ_t, a designated decision tree model is a key decision tree model whose operation result is the first key decision tree model result, and the first key decision tree model result is labeled 0;
S600, obtaining the priority S_0 corresponding to the target event according to D, where S_0 meets the following condition:
[Formula available only as an image (DEST_PATH_IMAGE001) in the original publication.]
where w_t is the weight corresponding to D_t.
2. The system for obtaining user priority according to claim 1, wherein in S200, each target decision tree model is a decision tree model that executes the target text independently.
3. The system for obtaining user priority according to claim 1, wherein in S100, the target text features are extracted from the target text by an optical character recognition technique.
4. The system for obtaining user priority according to claim 1, wherein in S300, B_1^0 = B_2^0 = ... = B_g^0 = ... = B_z^0.
5. The system for obtaining user priority according to claim 1, further comprising the following steps after S400:
S410, when an intermediate text is received, acquiring an intermediate text feature set X = {X_1, X_2, ..., X_i, ..., X_n}, where X_i is the i-th intermediate text feature, i takes values from 1 to n, and n is the number of intermediate text features corresponding to the intermediate text;
S420, traversing B, calculating the similarity between B_g and X, and obtaining a final decision tree model list B″ = {B″_1, B″_2, ..., B″_u, ..., B″_v}, where B″_u is the u-th final decision tree model, u takes values from 1 to v, v is the number of final decision tree models, and a final decision tree model is a target decision tree model whose similarity between X_i and B_gx is greater than the preset similarity threshold B_g^0 corresponding to B_g.
6. The system for obtaining user priority according to claim 1, further comprising, after S200, the following steps for classifying the target decision tree models:
S10, obtaining a first label list L = {L_1, L_2, ..., L_α, ..., L_β}, where L_α is the α-th first label, α takes values from 1 to β, β is the number of first labels, and a first label is a data type on which a target decision tree model depends;
S20, obtaining a second label list Y = {Y_1, Y_2, ..., Y_δ, ..., Y_ε}, where Y_δ is the δ-th second label, δ takes values from 1 to ε, ε is the number of second labels, and a second label is a detection mode of a target decision tree model;
S30, performing bucket-division processing on each target decision tree model B_g according to L and Y to obtain a target decision tree model category list K = {K_1, K_2, ..., K_α, ..., K_β}, where K_α = (K_α1, K_α2, ..., K_αη, ..., K_αθα), K_αη is the η-th target decision tree model category in the α-th target decision tree model category list, η takes values from 1 to θα, θα is the number of target decision tree model categories in the α-th target decision tree model category list, and any target decision tree model category is a group of target decision tree models having the same first label.
7. The system for acquiring user priority according to claim 1, wherein in S600, w_1 = w_2 = ... = w_t = ... = w_k.
8. The system for obtaining user priorities according to claim 1, wherein S_0 is also obtained through the following steps:
S501, obtaining a key decision tree model result list S = {S_1, S_2, ..., S_t, ..., S_k} corresponding to the key decision tree model list Bʹ, where S_t is the key decision tree model result corresponding to Bʹ_t, the key decision tree model result is a first key decision tree model result or a second key decision tree model result, and the second key decision tree model result is labeled 1;
S502, obtaining S_01 and S_02 according to S, where S_01 is the number of 0s in S and S_02 is the number of 1s in S;
S503, obtaining the priority S_0 corresponding to the target event according to S_01 and S_02, where S_0 meets the following condition:
S_0 = S_01 / S_02 × 100%.
CN202211300026.9A 2022-10-24 2022-10-24 System for acquiring user priority Active CN115374763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211300026.9A CN115374763B (en) 2022-10-24 2022-10-24 System for acquiring user priority

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211300026.9A CN115374763B (en) 2022-10-24 2022-10-24 System for acquiring user priority

Publications (2)

Publication Number Publication Date
CN115374763A CN115374763A (en) 2022-11-22
CN115374763B true CN115374763B (en) 2022-12-23

Family

ID=84074085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211300026.9A Active CN115374763B (en) 2022-10-24 2022-10-24 System for acquiring user priority

Country Status (1)

Country Link
CN (1) CN115374763B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116167595B (en) * 2023-04-25 2023-08-11 天信达信息技术有限公司 Method for determining personnel group decision variables, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930924A (en) * 2016-04-15 2016-09-07 中国电力科学研究院 Power distribution network situation sensing method based on complex event processing technology and decision tree
WO2017167097A1 (en) * 2016-03-31 2017-10-05 阿里巴巴集团控股有限公司 Method and apparatus for training model based on random forest
CN112329843A (en) * 2020-11-03 2021-02-05 中国平安人寿保险股份有限公司 Call data processing method, device, equipment and storage medium based on decision tree
CN115115004A (en) * 2022-07-21 2022-09-27 中国平安财产保险股份有限公司 Decision tree model construction and application method, device and related equipment

Also Published As

Publication number Publication date
CN115374763A (en) 2022-11-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant