CN115391620A - Model operation method, device, equipment, storage medium and program product - Google Patents


Publication number
CN115391620A
Authority
CN
China
Prior art keywords
model
processing object
query statement
processing
type
Prior art date
Legal status
Pending
Application number
CN202211183707.1A
Other languages
Chinese (zh)
Inventor
林乐凝
陈桂花
刘锦山
徐蔚峰
Current Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202211183707.1A priority Critical patent/CN115391620A/en
Publication of CN115391620A publication Critical patent/CN115391620A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/9032 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/90335 Query processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a model operation method, apparatus, device, storage medium and program product. The method includes: acquiring a first query statement set corresponding to a first type of model; comparing the execution template of each first query statement in the first query statement set with preset exception templates, and screening out N non-abnormal first query statements according to the comparison result; screening out, from the N non-abnormal first query statements, a second query statement with the shortest query time; executing the second query statement to acquire a first processing object corresponding to the first type of model; and inputting the first processing object into the first type of model to obtain an operation result of the first type of model. The embodiments of the application can improve the running efficiency of batch running machines.

Description

Model operation method, device, equipment, storage medium and program product
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a model operation method, apparatus, device, storage medium, and program product.
Background
"Batch running" is also called "batch processing" or "bulk processing" and is one of the most common services in IT systems today; according to statistics, 70% of the operations in a business system are completed by batch running. Put simply, batch running "accumulates" business of the same type until a certain amount has been collected (a batch of the same business), then starts automatic processing at a specified time point, thereby simplifying operations and improving efficiency. Analyzing this process, the characteristics of a batch service are easy to summarize: the processing volume is large (batch), and because there is a specific trigger time (the designated time point), the processing can be automated.
At present, however, batch tasks for the different businesses in a banking system still require manually configured batch parameters, and the different batch tasks are distributed to different models. For a bank with a large number of customers, both the allocation of batch tasks and the model computation consume a great deal of time, while the bank's equipment resources are usually very limited; manually allocating batch tasks therefore makes the batch running machines inefficient.
Disclosure of Invention
The embodiments of the application provide a model operation method, apparatus, device, storage medium and program product, which can solve the problem that existing batch running machines run inefficiently.
In a first aspect, an embodiment of the present application provides a model operation method, where the method includes:
acquiring a first query statement set corresponding to the first type model;
comparing the execution template of each first query statement in the first query statement set with preset exception templates, and screening out N non-abnormal first query statements according to the comparison result;
screening out a second query statement with the shortest query time from the N first query statements without abnormality;
executing the second query statement to obtain a first processing object corresponding to the first type of model;
and inputting the first processing object into the first-class model to obtain an operation result of the first-class model.
In some embodiments, the first type of model comprises at least one model;
the inputting the first processing object into the first type of model to obtain the operation result of the first type of model includes:
splitting the first processing object to obtain at least one group, wherein each group comprises at least one processing object in the first processing object;
determining a model corresponding to each group in the at least one group in the first type of model;
and inputting the groups into corresponding models respectively to obtain the prediction probability value of each group.
In some embodiments, the inputting the groups into the corresponding models respectively to obtain the predicted probability values of the groups includes:
and sequentially inputting the at least one processing object in each group into the corresponding model to obtain a prediction probability value corresponding to each processing object in the at least one processing object one to one.
In some embodiments, after the sequentially inputting the at least one processing object in each of the groups into the corresponding model to obtain a prediction probability value corresponding one to one to each processing object in the at least one processing object, the method further includes:
determining standardized scores corresponding to the processing objects one by one according to the prediction probability values;
determining a contribution score of each feature of each processing object in the standardized score according to the prediction probability value and the standardized score;
analyzing the processing object based on the normalized score and the contribution score.
In some embodiments, the determining the normalized scores of the processing objects in a one-to-one correspondence according to the predicted probability values includes:
determining the bad-to-good ratio of the processing object according to the predicted probability value;
and determining the standardized score according to the bad-to-good ratio.
In some embodiments, the splitting the first processing object to obtain at least one group includes:
acquiring the number of processing objects included in the first processing object;
acquiring the number of processing objects included in each preset group of the first type model;
and splitting the first processing object according to the number of the processing objects included in the first processing object and the number of the processing objects included in each preset group to obtain at least one group.
In a second aspect, an embodiment of the present application provides a model running apparatus, including:
the acquisition module is used for acquiring a first query statement set corresponding to the first type of model;
the first screening module is used for comparing the execution template of each first query statement in the first query statement set with each preset exception template, and screening out N non-abnormal first query statements according to the comparison result;
the second screening module is used for screening out, from the N non-abnormal first query statements, a second query statement with the shortest query time;
the execution module is used for executing the second query statement and acquiring a first processing object corresponding to the first type of model;
and the operation module is used for inputting the first processing object into the first type model to obtain an operation result of the first type model.
In a third aspect, an embodiment of the present application provides a model running device, where the model running device includes: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the model operation method described above.
In a fourth aspect, the present application provides a computer storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the model operation method described above is implemented.
In a fifth aspect, the present application provides a computer program product comprising computer program instructions; when the computer program instructions are executed by a processor, the model operation method described above is implemented.
In the method, correspondences are established between different query statement sets and different types of models. The execution template of each query statement in a set is compared with the preset exception templates, abnormal query statements are identified from the comparison result, and the normal query statement with the shortest query time is then selected, so that the most suitable query statement for each type of model is chosen. Batch tasks are acquired from the database automatically based on that query statement, which avoids allocating batch tasks to the batch running machines manually, shortens the allocation and running time of the batch tasks, and improves the running efficiency of the batch running machines.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating a method for operating a model according to an embodiment of the present disclosure;
fig. 2 is a schematic hardware structure diagram of a model running device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a model operating apparatus according to an embodiment of the present application.
Detailed Description
Features of various aspects and exemplary embodiments of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of, and not restrictive on, the present application. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The embodiments will be described in detail below with reference to the accompanying drawings.
"Batch running" is also called "batch processing" or "bulk processing" and is one of the most common services in IT systems today; according to statistics, 70% of the operations in a business system are completed by batch running. Put simply, batch running "accumulates" business of the same type until a certain amount has been collected (a batch of the same business), then starts automatic processing at a specified time point, thereby simplifying operations and improving efficiency. Analyzing this process, the characteristics of a batch service are easy to summarize: the processing volume is large (batch), and because there is a specific trigger time (the designated time point), the processing can be automated.
In general, for models that do not run batches in real time, the allocation of batch tasks and the model calculations do not consume many resources. At present, however, the different batch tasks of the different businesses in a banking system still require manually configured batch parameters, and the different batch tasks are distributed to different models. When a bank has a large number of customers, allocating the batch tasks and computing the models consume a large amount of resources, while the bank's equipment resources are usually very limited; the practice of manually allocating batch tasks therefore leads to a low resource utilization rate of the batch running machines.
To solve these problems, the application establishes correspondences between different query statements and different types of models, and uses the different query statements to distribute batch tasks to the different types of models, so that batch tasks no longer need to be allocated to the batch running machines manually, which improves the running efficiency of the batch running machines.
Specifically, in order to solve the problem in the prior art, embodiments of the present application provide a model running method, apparatus, device, storage medium, and program product. The following first describes a model operation method provided in the embodiment of the present application.
Fig. 1 shows a schematic flow chart of a model operation method according to an embodiment of the present application. The method comprises the following steps:
s110, a first query statement set corresponding to the first type model is obtained.
In this embodiment, the batch running machine hosts at least one type of model; the first type of model is any one of these types, and each type may include at least one model. The models may include, for example, the XGBoost model and the LightGBM model. Models of different types correspond to different first query statement sets in the database, and the database stores the allocated batch tasks. The first type of model can thus obtain the first query statement set corresponding to it.
The first query statement set comprises at least one first query statement, and the first type model can query any one of the first query statements to obtain the required first processing object. And the first type of model only needs to apply one first query statement at a time, so that one first query statement can be screened out from the first query statement set for query.
S120, comparing the execution template of each first query statement in the first query statement set with each preset exception template, and screening out N non-abnormal first query statements according to the comparison result.
In this embodiment, the execution template refers to the statement obtained by abstracting and normalizing the specific parameter values of a query statement, i.e., its variables. The execution templates of first query statements that have exhibited various exceptions can be stored as exception templates. Before the first query statements are screened each time, the execution template of each first query statement in the set is compared with the stored exception templates; if an execution template matches an exception template, the corresponding first query statement is filtered out. In this way, the N non-abnormal first query statements are determined.
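As an illustrative sketch (not part of the patent text), the template comparison in S120 could be implemented along the following lines in Python. The regular expressions, the normalization rules, and the function names are assumptions; the patent does not prescribe a concrete normalization.

```python
import re

def execution_template(query: str) -> str:
    """Normalize a query statement into an execution template by replacing
    the specific parameter values (the variables) with placeholders."""
    template = re.sub(r"'[^']*'", "?", query)             # quoted string literals -> ?
    template = re.sub(r"\b\d+(\.\d+)?\b", "?", template)  # numeric literals -> ?
    return re.sub(r"\s+", " ", template).strip().lower()

def filter_abnormal(statements, abnormal_statements):
    """Keep only the statements whose execution template matches none of the
    preset exception templates (the N non-abnormal first query statements)."""
    abnormal_templates = {execution_template(s) for s in abnormal_statements}
    return [s for s in statements
            if execution_template(s) not in abnormal_templates]
```

Two statements that differ only in parameter values share one execution template, so a single stored exception template filters all of its instances.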
S130, screening out a second query statement with the shortest query time from the N non-abnormal first query statements;
in this embodiment, in order to improve the efficiency of the batch running task and save the batch running time, the second query statement with the shortest query time consumption needs to be screened from the N non-abnormal first query statements for querying the first-class model.
In an embodiment, the average time consumption of the previous three queries of each of the N non-abnormal first query statements may be obtained, and the non-abnormal first query statement with the shortest average time consumption is determined as the second query statement.
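A minimal sketch of this selection step (illustrative only; the function and parameter names are assumptions, not the patent's):

```python
def pick_second_statement(candidates, query_times):
    """candidates: the N non-abnormal first query statements.
    query_times: maps each statement to the durations of its previous
    queries. Returns the statement whose previous three queries have the
    lowest average time consumption, i.e. the second query statement."""
    return min(candidates, key=lambda s: sum(query_times[s][-3:]) / 3.0)
```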
S140, executing the second query statement, and acquiring a first processing object corresponding to the first type of model.
In this embodiment, after the second query statement corresponding to the first type of model is obtained, the first processing object can be acquired from the database through that second query statement; the first processing object is the batch task assigned to the first type of model.
In an embodiment, a server responsible for distributing batch tasks distributes the batch tasks to the databases of different batch running machines according to a preset allocation rule. Each batch running machine is provided with at least one first type of model, each first type of model has a corresponding first query statement in the database, and the first type of model obtains its corresponding first processing object from the database through that first query statement.
In an embodiment, the different batch tasks in the database have identifiers corresponding to them one to one; the first query statement can retrieve the identifiers corresponding to the batch tasks and call the required first processing object from the database based on those identifiers.
When each batch running machine can execute at most q batch tasks, the preset allocation rule may be to traverse all the batch running machines, monitoring each in turn: if fewer than q batch tasks are being executed on a machine, a batch task is sent to it; if q or more batch tasks are being executed on it, no batch task is allocated to it. If the number of tasks being executed on every batch running machine is greater than or equal to q, the server enters a dormant state and, after a first preset time interval, traverses all the batch running machines again and allocates tasks according to the preset allocation rule.
Alternatively, the preset allocation rule may be to allocate the batch tasks equally among all the batch running machines.
S150, inputting the first processing object into the first type model to obtain an operation result of the first type model.
In this embodiment, after the first processing object is obtained, it is input into the first type of model corresponding to it, and the first type of model outputs the predicted probability value of the first processing object; in other words, the operation result of the first type of model is the predicted probability value of the first processing object.
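Putting S110 through S150 together, the flow can be sketched as follows (illustrative only; the function names and stubbed dependencies are assumptions, and the trivial `template_of` stands in for the real template normalization):

```python
def template_of(statement):
    """Placeholder normalization: in practice the parameter values would be
    abstracted out of the statement (illustrative only)."""
    return statement.lower()

def run_first_type_model(statements, abnormal_templates, avg_query_time,
                         execute_query, model_predict):
    # S120: keep the N non-abnormal first query statements
    normal = [s for s in statements if template_of(s) not in abnormal_templates]
    # S130: the second query statement is the one with the shortest query time
    second = min(normal, key=avg_query_time)
    # S140: execute it to acquire the first processing object (the batch task)
    first_processing_object = execute_query(second)
    # S150: input the first processing object into the first type of model
    return model_predict(first_processing_object)
```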
According to the embodiments of the application, correspondences are established between different query statement sets and different types of models. The execution template of each query statement in a set is compared with the preset exception templates, abnormal query statements are identified from the comparison result, and the normal query statement with the shortest query time is then selected, so that the most suitable query statement for each type of model is chosen. Batch tasks are acquired from the database automatically based on that query statement, which avoids allocating batch tasks to the batch running machines manually, shortens the allocation and running time of the batch tasks, and improves the resource utilization rate of the batch running machines.
As an alternative embodiment, the first type of model includes at least one model. In order to match the processing objects with the models, S150 may include:
s210, splitting the first processing object to obtain at least one group, wherein each group comprises at least one processing object in the first processing object;
s220, determining a model corresponding to each group in the at least one group in the first type of model;
and S230, inputting the groups into corresponding models respectively to obtain the prediction probability values of the groups.
In this embodiment, during batch processing of the batch tasks, the same batch running machine contains models of different types, and the same type of model includes a plurality of models with different model parameters. Although these models belong to the same type, their different parameters lead to different algorithms, so the processing objects corresponding to them also differ.
Since the first processing object includes a plurality of processing objects, those processing objects need to be input into different models. Therefore, in this embodiment, the first processing object is split into at least one group, each group includes at least one processing object of the first processing object, and the processing objects in the different groups are input into their corresponding models to obtain the predicted probability values of the groups.
In one embodiment, the number of processing objects in each group is the maximum number of processing objects that can be processed by the same model at the same time. In this way, it is ensured that different processing objects are input into the corresponding models, and that the models can be maximized in efficiency.
As an alternative embodiment, the step S230 may further include:
and S310, sequentially inputting the at least one processing object in each group into the corresponding model to obtain the prediction probability value corresponding to each processing object in the at least one processing object one to one.
In this embodiment, at least one processing object exists in each group, and the prediction probability values corresponding to the processing objects can be obtained independently by inputting each processing object into the corresponding model.
What the predicted probability value specifically represents is determined by the model algorithm and the features contained in the processing object; it may, for example, represent the probability that a user repays a loan on time, or the probability that a user takes out a loan.
As an alternative embodiment, after the step S310, the method may further include:
s410, determining standardized scores corresponding to the processing objects one by one according to the prediction probability values;
s420, determining a contribution score of each feature of each processing object in the standardized score according to the prediction probability value and the standardized score;
s430, analyzing the processing object based on the standardized score and the contribution score.
In this embodiment, standardization refers to mapping the operation result of the model, i.e. the predicted probability value, through a specific bad-to-good ratio to a standardized score; this is done to facilitate business management of the model results.
The standardized score represents the contribution of all the features in the processing object to the predicted probability value. After the standardized score is obtained, the contribution score of each feature in the processing object to the predicted probability value can be calculated based on it, and the importance of each feature is analyzed based on the proportion of the feature's contribution score within the standardized score.
As an alternative embodiment, the step S410 may include:
s510, determining the bad-to-good ratio of the processing object based on the predicted probability value;
s520, determining the standardized score based on the natural logarithm of the good-bad ratio.
In this embodiment, the conversion formula between the predicted probability value and the standardized score is:
score = A + B * ln(p / (1 - p))
where score is the standardized score, p is the predicted probability value, A and B are both user-set constants (A being a first preset constant and B a second preset constant), and p / (1 - p) is the bad-to-good ratio.
In one embodiment, ln(p / (1 - p)) may be taken as an interpretable value corresponding to the processing object, and the predicted probability value is converted into the standardized score through this interpretable value:
shap = ln(p / (1 - p))
where shap is the interpretable value corresponding to the processing object and p is the predicted probability value of the processing object.
Furthermore, each feature in the processing object has a corresponding interpretable value, and the interpretable value of the processing object is the sum of a base score and the interpretable values of its features. The contribution score of each feature can be determined from that feature's interpretable value, specifically as follows:
shap = base + shap1 + shap2 + … + shapk
where shap is the interpretable value corresponding to the processing object, the processing object contains k features, shapk is the interpretable value corresponding to the k-th feature, and base is the base score.
After the interpretable values of the k features are calculated, the contribution score of each feature can be calculated from them as follows:
scorek = 72.13 * shapk
where scorek is the contribution score of the k-th feature and shapk is the interpretable value corresponding to the k-th feature (here, 72.13 corresponds to the second preset constant B, approximately 50 / ln 2).
Therefore, in this embodiment, the normalized score of the processing object is the sum of the first preset constant and the contribution score corresponding to each feature.
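The score conversion and per-feature contribution scores described above can be sketched as follows (illustrative only; the log-odds form and the constants A and B follow the description, while the function names are assumptions):

```python
import math

B = 72.13  # second preset constant; approximately 50 / ln 2

def normalized_score(p, A):
    """Map the predicted probability value p to the standardized score via
    the natural logarithm of the bad-to-good ratio p / (1 - p)."""
    return A + B * math.log(p / (1 - p))

def contribution_scores(shap_values):
    """Scale each feature's interpretable value by B to obtain its
    contribution score (scorek = 72.13 * shapk)."""
    return [B * s for s in shap_values]
```

For example, p = 0.5 gives odds of 1, so the standardized score equals the first preset constant A.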
As an alternative embodiment, in order to group the processing objects, the step S210 includes:
s610, acquiring the number of processing objects included in the first processing object;
s620, acquiring the number of processing objects included in each preset group of the first type model;
s630, splitting the first processing object according to the number of processing objects included in the first processing object and the number of processing objects included in each preset group, so as to obtain at least one group.
In this embodiment, the number of groups can be calculated by the following formula according to the number of processing objects included in the first processing object and the number of processing objects included in each group:
GN=ceiling(N/BN)
wherein N is the number of processing objects included in the first processing object, BN is the number of processing objects included in each preset group, GN is the number of groups, and ceiling () is rounding up.
Through the grouping mode, processing objects corresponding to different algorithms can be divided into different groups, and the number of the processing objects processed by the model at the same time can be ensured not to exceed BN.
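The grouping rule GN = ceiling(N / BN) can be sketched as follows (illustrative only; the function name is an assumption):

```python
import math

def split_into_groups(processing_objects, BN):
    """Split the N processing objects of the first processing object into
    GN = ceiling(N / BN) groups, each containing at most BN objects."""
    GN = math.ceil(len(processing_objects) / BN)
    return [processing_objects[i * BN:(i + 1) * BN] for i in range(GN)]
```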
Based on the model operation method provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the model operation device. Please see the examples below.
Referring first to fig. 2, a model operating apparatus 200 provided in an embodiment of the present application includes the following modules:
an obtaining module 201, configured to obtain a first query statement set corresponding to the first type of model;
a first screening module 202, configured to compare the execution template of each first query statement in the first query statement set with each preset exception template, and screen out N first query statements without exception according to a comparison result;
a second screening module 203, configured to screen out, from the N non-abnormal first query statements, a second query statement with the shortest query time;
an executing module 204, configured to execute the second query statement and obtain a first processing object corresponding to the first type of model;
the running module 205 is configured to input the first processing object into the first class model, so as to obtain a running result of the first class model.
The apparatus establishes correspondences between different query statement sets and different types of models. The execution template of each query statement in a set is compared with the preset exception templates, abnormal query statements are identified from the comparison result, and the normal query statement with the shortest query time is then screened out, so that the most suitable query statement for each type of model is chosen. Batch tasks are acquired from the database automatically based on that query statement, which avoids allocating batch tasks to the batch running machines manually, shortens the allocation and running time of the batch tasks, and improves the resource utilization rate of the batch running machines.
As an implementation manner of the present application, in order to match the processing objects with the models, the running module 205 may further include:
a grouping unit, configured to split the first processing object to obtain at least one group, where each group includes at least one processing object in the first processing object;
a matching unit, configured to determine a model corresponding to each of the at least one group in the first type of model;
and the prediction unit is used for inputting the groups into corresponding models respectively to obtain the prediction probability values of the groups.
As an implementation manner of the present application, the prediction unit may further include:
and the predicting subunit is used for sequentially inputting the at least one processing object in each group into the corresponding model to obtain a prediction probability value corresponding to each processing object in the at least one processing object one to one.
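The per-group prediction performed by the prediction subunit can be sketched as follows. The dictionary-based grouping and the callable model objects are assumptions made for illustration, not the patent's actual interface.

```python
def run_groups(groups, group_to_model):
    """Feed each group's processing objects, in order, to the model matched
    to that group; return one prediction per object, keyed like `groups`."""
    results = {}
    for name, objects in groups.items():
        model = group_to_model[name]  # model matched to this group
        results[name] = [model(obj) for obj in objects]
    return results
```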
As an implementation manner of the present application, in order to analyze the processing objects, the model operating apparatus 200 may further include:
a normalized score determining unit configured to determine normalized scores corresponding to the processing objects one to one according to the prediction probability values;
a contribution score determining unit, configured to determine a contribution score of each feature of each processing object in the normalized score according to the prediction probability value and the normalized score;
an analysis unit configured to analyze the processing object based on the normalized score and the contribution score.
As an implementation manner of the present application, the normalized score determining unit may further include:
a bad-good ratio determination unit configured to determine a bad-good ratio of the processing object based on the prediction probability value;
a normalized score determining subunit for determining the normalized score based on a natural logarithm of the bad-good ratio.
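The normalized-score units read like a standard log-odds (scorecard) transformation. The sketch below is one plausible reading under that assumption; the constants `base` and `scale` are illustrative and do not come from the patent.

```python
import math

def bad_good_ratio(p_bad):
    """Odds of a bad outcome implied by the predicted probability of bad."""
    return p_bad / (1.0 - p_bad)

def normalized_score(p_bad, base=600.0, scale=50.0):
    """Map the natural logarithm of the bad-good ratio onto a score scale;
    higher scores correspond to a lower predicted bad probability."""
    return base - scale * math.log(bad_good_ratio(p_bad))
```

With these constants, a predicted bad probability of 0.5 (odds of 1) maps exactly to the base score.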
As an implementation manner of the present application, the running module 205 may further include:
a first acquisition unit configured to acquire the number of processing objects included in the first processing object;
a second obtaining unit, configured to obtain the number of processing objects included in each preset group of the first type model;
and the splitting subunit is configured to split the first processing object according to the number of the processing objects included in the first processing object and the number of the processing objects included in each preset group, so as to obtain at least one group.
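The splitting subunit amounts to chunking the first processing object by the preset per-group size. A minimal sketch, assuming the processing objects arrive as a list and `group_size` is the preset number of objects per group:

```python
def split_into_groups(objects, group_size):
    """Split processing objects into consecutive groups of at most
    `group_size` objects each; the final group may be smaller."""
    if group_size <= 0:
        raise ValueError("group_size must be positive")
    return [objects[i:i + group_size]
            for i in range(0, len(objects), group_size)]
```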
Fig. 3 shows a hardware structure diagram of a model operating device provided in an embodiment of the present application.
The device may comprise a processor 301 and a memory 302 in which computer program instructions are stored.
Specifically, the processor 301 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 302 may include removable or non-removable (or fixed) media, where appropriate. The memory 302 may be internal or external to the model running device, where appropriate. In a particular embodiment, the memory 302 is a non-volatile solid-state memory.
The memory may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, and electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., a memory device) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the method according to an aspect of the present disclosure.
The processor 301 implements any of the above-described model execution methods in the embodiments by reading and executing computer program instructions stored in the memory 302.
In one example, the model execution device may also include a communication interface 303 and a bus 310. As shown in fig. 3, the processor 301, the memory 302, and the communication interface 303 are connected via a bus 310 to complete communication therebetween.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present application.
Bus 310 comprises hardware, software, or both coupling the components of the model running device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the present application, any suitable buses or interconnects are contemplated.
Based on the above embodiments, the model running device can implement the model operation method and apparatus described above.
In addition, in combination with the model running method in the foregoing embodiments, an embodiment of the present application may provide a computer storage medium for implementation. The computer storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any one of the model operation methods in the embodiments described above and achieve the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium may include a non-transitory computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and is not limited herein.
In addition, the present application also provides a computer program product, which includes computer program instructions, and when the computer program instructions are executed by a processor, the steps and the corresponding contents of the foregoing method embodiments can be implemented.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions, or change the order between the steps, after comprehending the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, a block may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed at the same time.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As will be apparent to those skilled in the art, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (10)

1. A method of model operation, comprising:
acquiring a first query statement set corresponding to the first type model;
comparing the execution template of each first query statement in the first query statement set with each preset exception template, and screening out N first query statements without exception according to a comparison result;
screening out a second query statement with the shortest query time from the N first query statements without abnormality;
executing the second query statement to obtain a first processing object corresponding to the first type of model;
and inputting the first processing object into the first type model to obtain an operation result of the first type model.
2. The model operation method according to claim 1, wherein the first type of model includes at least one model;
the inputting the first processing object into the first class model to obtain the operation result of the first class model includes:
splitting the first processing object to obtain at least one group, wherein each group comprises at least one processing object in the first processing object;
determining a model corresponding to each group in the at least one group in the first type of model;
and inputting the groups into corresponding models respectively to obtain the prediction probability value of each group.
3. The method of claim 2, wherein the inputting the groups into the corresponding models to obtain the predicted probability values of the groups comprises:
and sequentially inputting the at least one processing object in each group into the corresponding model to obtain a prediction probability value corresponding to each processing object in the at least one processing object one to one.
4. The method for operating a model according to claim 3, wherein after the at least one processing object in each of the groups is sequentially input to the corresponding model to obtain the predicted probability values corresponding to the processing objects in the at least one processing object one to one, the method further comprises:
determining standardized scores corresponding to the processing objects one by one according to the predicted probability values;
determining a contribution score of each feature of each processing object in the standardized score according to the prediction probability value and the standardized score;
analyzing the processing object based on the normalized score and the contribution score.
5. The method of claim 4, wherein said determining a one-to-one normalized score for each of said processing objects based on said predicted probability values comprises:
determining a bad-good ratio of the processing object according to the prediction probability value;
and determining the standardized score according to the bad-good ratio.
6. The model execution method of claim 2, wherein the splitting the first processing object into at least one group comprises:
acquiring the number of processing objects included in the first processing object;
acquiring the number of processing objects included in each preset group of the first type model;
and splitting the first processing object according to the number of the processing objects included in the first processing object and the number of the processing objects included in each preset group to obtain at least one group.
7. A model running apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first query statement set corresponding to the first type of model;
the first screening module is used for comparing the execution template of each first query statement in the first query statement set with each preset abnormal template, and screening N abnormal first query statements according to the comparison result;
the second screening module is used for screening out a second query statement with the shortest query time from the N first query statements without abnormality;
the execution module is used for executing the second query statement and acquiring a first processing object corresponding to the first type of model;
and the operation module is used for inputting the first processing object into the first type model to obtain an operation result of the first type model.
8. A model execution apparatus characterized by comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the model operation method of any of claims 1-6.
9. A computer storage medium, characterized in that the computer storage medium has stored thereon computer program instructions which, when executed by a processor, implement the model execution method according to any one of claims 1-6.
10. A computer program product, characterized in that it comprises computer program instructions which, when executed by a processor, implement the model execution method of any one of claims 1-6.
CN202211183707.1A 2022-09-27 2022-09-27 Model operation method, device, equipment, storage medium and program product Pending CN115391620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211183707.1A CN115391620A (en) 2022-09-27 2022-09-27 Model operation method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN115391620A true CN115391620A (en) 2022-11-25

Family

ID=84128311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211183707.1A Pending CN115391620A (en) 2022-09-27 2022-09-27 Model operation method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115391620A (en)

Similar Documents

Publication Publication Date Title
CN114757587B (en) Product quality control system and method based on big data
CN112364014A (en) Data query method, device, server and storage medium
CN116664335B (en) Intelligent monitoring-based operation analysis method and system for semiconductor production system
CN113543117B (en) Prediction method and device for number portability user and computing equipment
CN111414528B (en) Method and device for determining equipment identification, storage medium and electronic equipment
CN112035286A (en) Method and device for determining fault cause, storage medium and electronic device
CN115391620A (en) Model operation method, device, equipment, storage medium and program product
CN113094415B (en) Data extraction method, data extraction device, computer readable medium and electronic equipment
CN111459540B (en) Hardware performance improvement suggestion method and device and electronic equipment
CN113656046A (en) Application deployment method and device
CN114330720A (en) Knowledge graph construction method and device for cloud computing and storage medium
CN113112102A (en) Priority determination method, device, equipment and storage medium
CN113239236B (en) Video processing method and device, electronic equipment and storage medium
CN110717077B (en) Resource scoring and processing method and device, electronic equipment and medium
CN115563279A (en) Vehicle machine function evaluation method and device, electronic equipment and storage medium
CN117807056A (en) Data auditing method and device, electronic equipment and storage medium
CN118152134A (en) Resource adjustment method, device, equipment, medium and program product
CN114153748A (en) Data generation method, data recommendation method, data testing method, electronic device, and storage medium
CN115292606A (en) Information pushing method, device, equipment and medium
CN117454174A (en) Anomaly detection model training and data detection methods, devices, equipment and media
CN116821721A (en) Method, device, equipment and medium for identifying cross-city network about car
CN117171020A (en) Test case ordering method, device, equipment and medium based on ordering learning
CN116502841A (en) Event processing method and device, electronic equipment and medium
CN117520601A (en) Graph database query method and device, storage medium, equipment and product
CN115935078A (en) Application recommendation method, device, equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination