CN112579429A - Problem positioning method and device - Google Patents

Problem positioning method and device

Info

Publication number
CN112579429A
Authority
CN
China
Prior art keywords
log
training
classification model
clustering
consistency
Prior art date
Legal status
Pending
Application number
CN201910943664.4A
Other languages
Chinese (zh)
Inventor
Wei Qiao (魏乔)
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910943664.4A
Publication of CN112579429A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3692 Test management for test results analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a problem location method and device, relating to the field of computer technology. One embodiment of the method comprises: obtaining a training set from labeled and unlabeled log sample sets and performing a first training of a problem classification model; taking logs of a preset number of problem scenes as a test set, performing a consistency evaluation of the first-trained problem classification model, performing a second training of the first-trained model according to the consistency evaluation result, and repeating the consistency evaluation and the second training until a final consistency evaluation result meets a preset condition, yielding a trained problem classification model used to determine the problem category in the log data of a problem to be located. This embodiment can locate and classify bugs automatically and accurately, help testers avoid low-level problems, enhance the independence of testers, and save developers the cost of repairing problems, while requiring few test resources, incurring low computational cost, and having strong adaptive capability.

Description

Problem positioning method and device
Technical Field
The invention relates to the technical field of computers, in particular to a problem positioning method and device.
Background
At present, most test-environment background bug (problem) location is performed by testers manually searching logs. Locating a bug generally depends on the tester's own technical level and accumulated experience; when testers cannot locate a bug, they pass screenshots of the abnormal logs to R&D, and developers must analyze the code to locate the problem. When a system is maintained by multiple people and its logic is complex, the difficulty and cost of problem location rise greatly. Automatic location of background bugs in the test environment is therefore very important.
Existing bug detection based on path analysis and iterative metamorphic testing mainly achieves full path coverage of test cases under a white-box criterion; it requires continuous iteration over the test cases and places high demands on test resources. Software fault-location models based on the back-propagation (BP) neural network have high computational cost and poor adaptive capability. Software defect-location methods based on code structure information, although somewhat more accurate, remain biased toward developers' diagnosis of defects in code logic, structure, and framework: they can be applied in the developers' self-test phase, or to code analysis after testers submit specific bugs, but no method has yet addressed automatically locating bugs in a test environment.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
bugs cannot be located automatically and accurately in the test environment, and existing methods suffer from high demands on test resources, high algorithmic computational cost, poor adaptive capability, and other defects.
Disclosure of Invention
In view of this, embodiments of the present invention provide a problem location method and apparatus that can locate and classify bugs automatically and accurately, help testers avoid some low-level problems, enhance the independence of testers, and save developers the cost of repairing problems, while requiring few test resources, incurring low computational cost, and having strong adaptive capability.
To achieve the above object, according to an aspect of an embodiment of the present invention, a problem location method is provided.
A problem location method, comprising: obtaining a training set according to a first log sample set with labels and a second log sample set without labels, and performing a first training of a problem classification model, wherein the labels are used to indicate problem categories; taking logs of a preset number of problem scenes as a test set, performing a consistency evaluation of the first-trained problem classification model using the test set, performing a second training of the first-trained problem classification model according to the consistency evaluation result, and repeating the consistency evaluation and the second training until a final consistency evaluation result meets a preset condition, at which point the second training is no longer continued, to obtain a trained problem classification model; and inputting the log data of a problem to be located into the trained problem classification model so as to determine the problem category in the log data.
Optionally, the problem classification model is a clustering model, a training set is obtained according to a first log sample set with a label and a second log sample set without a label, and a step of performing a first training on the problem classification model includes: determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster; and continuously iterating the processes of calculating the Euclidean distances, clustering the log samples and re-determining the clustering centers until the difference value between the sum of the Euclidean distances obtained by the k +1 iteration and the sum of the Euclidean distances obtained by the k iteration is smaller than the set precision, stopping the iteration to finish the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to the clustering centers after the iteration.
Optionally, the step of performing consistency evaluation on the first trained problem classification model by using the test set, and performing second training on the first trained problem classification model according to a consistency evaluation result includes: clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result; generating a cross classification table by using the problem clustering result and the problem classification expected result of the inspection centralized log; calculating a consistency coefficient according to the cross classification table, wherein the consistency coefficient represents the consistency of the problem clustering result and the problem classification expected result; and if the consistency coefficient does not meet the preset condition, adding the log in the inspection set, of which the problem clustering result is inconsistent with the expected problem classification result, into the training set, and performing second training on the first trained problem classification model.
Optionally, the tagged first set of log samples and the untagged second set of log samples are obtained by: extracting a plurality of abnormal log samples with defective keywords from the effective log sample set, wherein the abnormal log samples are effective log samples with problems; respectively extracting a plurality of characteristics of the defect keywords from each abnormal log sample, and carrying out scaling and normalization processing on each characteristic to obtain a characteristic vector corresponding to each abnormal log sample; and obtaining the first log sample set according to the feature vector of the abnormal log sample of the known problem category, and obtaining the second log sample set according to the feature vector of the abnormal log sample of the unknown problem category.
Optionally, the method further comprises: configuring a system needing monitoring logs to collect logs from the system; filtering the collected log to obtain the effective log sample set comprising a plurality of effective log samples.
According to another aspect of embodiments of the present invention, a problem locating device is provided.
A problem locating device comprising: the system comprises a first training module, a consistency evaluation module, a second training module and a problem category prediction module, wherein: the first training module is used for obtaining a training set according to a first log sample set with a label and a second log sample set without the label, and performing first training on a problem classification model, wherein the label is used for indicating a problem category; the consistency evaluation module is used for taking logs of a preset number of problem scenes as a test set and utilizing the test set to carry out consistency evaluation on the problem classification model after the first training; the second training module is used for carrying out second training on the problem classification model after the first training according to the consistency evaluation result; repeating the processes of consistency evaluation and second training through the consistency evaluation module and the second training module until the final consistency evaluation result meets a preset condition, and not continuing the second training to obtain a trained problem classification model; and the problem category prediction module is used for inputting the log data of the problem to be positioned into the trained problem classification model so as to determine the problem category in the log data.
Optionally, the problem classification model is a clustering model, and the first training module is further configured to: determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster; and continuously iterating the processes of calculating the Euclidean distances, clustering the log samples and re-determining the clustering centers until the difference value between the sum of the Euclidean distances obtained by the k +1 iteration and the sum of the Euclidean distances obtained by the k iteration is smaller than the set precision, stopping the iteration to finish the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to the clustering centers after the iteration.
Optionally, the consistency evaluation module is further configured to: clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result; generating a cross classification table by using the problem clustering result and the problem classification expected result of the inspection centralized log; calculating a consistency coefficient according to the cross classification table, wherein the consistency coefficient represents the consistency of the problem clustering result and the problem classification expected result; and if the consistency coefficient does not meet the preset condition, adding the log in the inspection set, of which the problem clustering result is inconsistent with the expected problem classification result, into the training set, so that the second training module performs second training on the first trained problem classification model.
Optionally, the method further includes a training set generation module, configured to obtain a first labeled log sample set and a second unlabeled log sample set, where: extracting a plurality of abnormal log samples with defective keywords from the effective log sample set, wherein the abnormal log samples are effective log samples with problems; respectively extracting a plurality of characteristics of the defect keywords from each abnormal log sample, and carrying out scaling and normalization processing on each characteristic to obtain a characteristic vector corresponding to each abnormal log sample; and obtaining the first log sample set according to the feature vector of the abnormal log sample of the known problem category, and obtaining the second log sample set according to the feature vector of the abnormal log sample of the unknown problem category.
Optionally, the system further comprises a system configuration module, configured to configure a system that needs to monitor the log; the log acquisition module is used for acquiring logs from the system; and the log cleaning module is used for filtering the collected logs to obtain the effective log sample set comprising a plurality of effective log samples.
According to yet another aspect of an embodiment of the present invention, an electronic device is provided.
An electronic device, comprising: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the problem location method provided by the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the problem localization method provided by the present invention.
One embodiment of the above invention has the following advantages or benefits: a first training is performed on the problem classification model; logs of a preset number of problem scenes are taken as a test set for a consistency evaluation of the first-trained problem classification model; a second training is performed on the first-trained model according to the consistency evaluation result; and the consistency evaluation and the second training are repeated until a final consistency evaluation result meets a preset condition, yielding a trained problem classification model used to determine the problem category in the log data of a problem to be located. This makes it possible to locate and classify bugs automatically and accurately, helps testers avoid some low-level problems, enhances the independence of testers, and saves developers the cost of repairing problems, while requiring few test resources, incurring low computational cost, and having strong adaptive capability.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a problem location method according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a problem classification model training flow according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a consistency evaluation process of a problem classification model according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of the main modules of a problem locating device according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of an overall framework of a problem locating device according to one embodiment of the present invention;
FIG. 6 is a schematic diagram of the classification of a testing environment bug according to one embodiment of the invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
FIG. 1 is a schematic diagram of the main steps of a problem location method according to one embodiment of the present invention.
As shown in fig. 1, the problem location method according to an embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: and obtaining a training set according to the first log sample set with the label and the second log sample set without the label, and performing first training on the problem classification model.
Wherein the label is used to indicate the question category.
A system whose logs need to be monitored can be configured so that logs are collected from it; the collected logs are filtered to obtain a valid log sample set comprising a plurality of valid log samples, including abnormal log samples, where an abnormal log sample is a valid log sample that has a problem (i.e., a bug). Some of the abnormal log samples are of known problem categories, and the others are of unknown problem categories. For an abnormal log sample of a known category, the bug has been analyzed manually to determine the problem category.
By extracting defect keywords from the valid log samples, abnormal log samples can be obtained. Each type of problem can be identified by its corresponding defect keyword; a defect keyword represents a bug, and its presence indicates a bug. For example, the defect keyword java.lang.NullPointerException represents a null-pointer problem, and No live provider represents a connection problem in upstream and downstream interface calls. When a valid log sample contains a defect keyword, it can be extracted as an abnormal log sample.
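As a rough illustration of this keyword screening step, the following sketch filters valid log lines against a defect-keyword list; the keyword list and log lines are hypothetical examples, not taken from the patent.

```python
# Sketch of defect-keyword screening; the keyword list and log lines
# below are hypothetical examples, not taken from the patent.
DEFECT_KEYWORDS = ["java.lang.NullPointerException", "No live provider"]

def extract_abnormal_samples(valid_logs):
    """Return the valid log samples that contain any defect keyword."""
    return [log for log in valid_logs
            if any(kw in log for kw in DEFECT_KEYWORDS)]

valid_logs = [
    "2019-09-30 INFO order created",
    "2019-09-30 ERROR java.lang.NullPointerException at OrderService.get",
    "2019-09-30 ERROR No live provider for interface com.example.Pay",
]
print(len(extract_abnormal_samples(valid_logs)))  # → 2
```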
The labeled first set of log samples and the unlabeled second set of log samples may be obtained by: extracting a plurality of abnormal log samples with defective keywords from the effective log sample set; respectively extracting a plurality of characteristics of the defect keywords from each abnormal log sample, and carrying out scaling and normalization processing on each characteristic to obtain a characteristic vector corresponding to each abnormal log sample; and obtaining a first log sample set according to the feature vector of the abnormal log sample of the known problem category, and obtaining a second log sample set according to the feature vector of the abnormal log sample of the unknown problem category.
The steps of scaling and normalizing each feature specifically include: converting the obtained features of all the defect keywords into scalars, for example by numbering the features of the defect keywords so that each number corresponds to one scalar; each feature is thereby quantized, giving a scalar data set A. The scalar data set A is then normalized to [0,1] using the min-max method:

x' = (x - min(A)) / (max(A) - min(A))

where x is a scalar value in A and x' is the value of x after normalization. The features of each abnormal log sample after scaling and normalization form the feature vector of that abnormal log sample.
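The quantization and min-max normalization described above can be sketched as follows; the scalar ids assigned to the keyword features are an assumption for illustration.

```python
# Min-max normalization of a quantized feature set A to [0, 1]; the
# scalar ids below are an assumed numbering of defect-keyword features.
def min_max_normalize(values):
    """x' = (x - min(A)) / (max(A) - min(A)) for each x in values."""
    lo, hi = min(values), max(values)
    if hi == lo:                  # degenerate case: all features identical
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

A = [2, 5, 11]                    # hypothetical keyword-feature scalars
print(min_max_normalize(A))       # → [0.0, 0.3333333333333333, 1.0]
```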
The problem classification model can be a clustering model, specifically a k-means clustering algorithm model, and can also be other clustering algorithm models.
Step S101 may specifically include: determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster; and continuously iterating the processes of calculating the Euclidean distances, clustering the log samples and re-determining the clustering centers until the difference value between the sum of the Euclidean distances obtained by the k +1 th iteration and the sum of the Euclidean distances obtained by the k th iteration is smaller than the set precision, stopping iteration to finish the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to the clustering centers after the iteration.
Step S102: and taking logs of a preset number of problem scenes as a test set, performing consistency evaluation on the first trained problem classification model by using the test set, performing second training on the first trained problem classification model according to a consistency evaluation result, repeating the consistency evaluation and second training processes until the final consistency evaluation result meets a preset condition, and not continuing the second training to obtain the trained problem classification model.
The consistency evaluation is performed on the first trained problem classification model by using the inspection set, and according to the consistency evaluation result, the second training is performed on the first trained problem classification model, which specifically includes: clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result; generating a cross classification table by using the obtained problem clustering result and a problem classification expected result of the inspection centralized log; calculating a consistency coefficient according to the cross classification table, wherein the consistency coefficient reflects the consistency of the problem clustering result and the problem classification expected result; and if the consistency coefficient does not meet the preset condition, adding the log in the inspection set with the inconsistent problem clustering result and the expected problem classification result into the training set, and performing second training on the first trained problem classification model.
The preset condition that the final consistency evaluation result satisfies, that is, the preset condition on the consistency coefficient, is specifically: the consistency coefficient after each training must make the probability that the problem clustering result is completely consistent with the problem classification expected result reach a preset value, for example 95%, i.e., kappa = 1 within the 95% confidence interval, where kappa is the consistency coefficient.
The second training follows the same steps as the first training, except that part of the test set is added to the training set used in the first training; for the specific steps of the second training, refer to the description of the first training, which is not repeated here.
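The overall evaluate-and-retrain loop formed by the first and second training might be sketched as below; the model_fit and evaluate callables and the 0.95 threshold are placeholders standing in for the clustering and consistency-evaluation steps, not the patent's actual implementation.

```python
# High-level sketch of the evaluate / retrain loop. model_fit and
# evaluate are placeholder callables for the clustering and kappa steps;
# the 0.95 threshold is an illustrative preset condition.
def train_until_consistent(model_fit, evaluate, train_set, test_set,
                           threshold=0.95, max_rounds=10):
    model = model_fit(train_set)                  # first training
    for _ in range(max_rounds):
        score, inconsistent = evaluate(model, test_set)
        if score >= threshold:                    # preset condition met
            break
        train_set = train_set + inconsistent      # add mismatched logs
        model = model_fit(train_set)              # second training
    return model
```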
Step S103: and inputting the log data of the problem to be positioned into the trained problem classification model so as to determine the problem category in the log data.
The embodiment of the invention can also synchronize the located problems and their categories to the current tester so that they are handled in time.
The distribution of problems over a period of time can be displayed visually and concretely by generating graphs, realizing the collection and analysis of problems over that period; such analysis and summarization can assist both testers and developers.
The logs related to the located problems and the corresponding problem categories can also be summarized to establish a problem knowledge base, making it convenient for testers to recognize bugs in a targeted way.
The located problems, their category information, and the complete log information within one month are stored in a distributed storage mode, providing data support for generating log reports and the problem knowledge base.
FIG. 2 is a schematic diagram of a problem classification model training process according to an embodiment of the present invention.
Because test-environment bugs arise from various causes that cannot all be enumerated, one embodiment of the invention classifies bug categories using k-means clustering and constructs a semi-supervised problem classification model based on learning from a small number of bug logs of known classification. This overcomes the drawbacks of supervised learning, which requires a large amount of historical data at high cost, as well as those of unsupervised learning, which has no supervision at all, yields unsatisfactory results, and easily misclassifies defects.
As shown in fig. 2, the problem classification model training process includes steps S201 to S206.
Step S201: a first set of labeled log samples L and a second set of unlabeled log samples U are obtained.
The first log sample set L = [X_1, Y_1] and the second log sample set U = [X_2], where X = X_1 + X_2 denotes the set of feature vectors of the abnormal log samples: X_1 is the set of feature vectors of the abnormal log samples of known problem categories, X_2 is the set of feature vectors of the abnormal log samples of unknown problem categories, and Y_1 is the set of labels (i.e., problem categories) of the feature vectors in X_1.
Step S202: and configuring the cluster number.
The number of clusters is l = len(unique(Y_1)), where len(unique(Y_1)) denotes the number of distinct values in Y_1. Step S202 may be performed before step S201.
Step S203: and determining a plurality of initial clustering centers and obtaining an initial clustering set.
In the k-means algorithm, different choices of initial cluster centers lead to different final clustering results, and the initial centers can seriously affect clustering accuracy. In the unsupervised k-means algorithm the initial cluster centers are selected randomly, which easily causes defect misclassification; in this embodiment, the l initial cluster centers are therefore determined from the labeled first log sample set. The initial cluster center for category y_i is:

c_i = ( Σ_{L(x_j)=y_i} x_j ) / |{ x_j : L(x_j) = y_i }|

where the numerator is the sum of the feature vectors of all abnormal log samples in the category, and the denominator is the number of abnormal log samples in the category. Here x_j denotes the j-th feature vector in X_1, y_i denotes the i-th label in Y_1 (i.e., the i-th category), and L(x_j) = y_i indicates that the abnormal log sample x_j falls within category y_i.

Grouping all x_j whose labels have the same value y_i into one class yields the initial cluster set Φ = [C_1, C_2, ..., C_l], where C_1, C_2, ..., C_l denote the l initial clusters.
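A minimal sketch of computing the initial cluster centers as per-category means of the labeled feature vectors; the feature values and category names here are made up for illustration.

```python
# Initial cluster centers as per-category means of labeled feature
# vectors; feature values and category names are made up for illustration.
from collections import defaultdict

def initial_centers(X1, Y1):
    """c_i = mean of the feature vectors in X1 whose label is y_i."""
    groups = defaultdict(list)
    for x, y in zip(X1, Y1):
        groups[y].append(x)
    return {y: [sum(col) / len(vecs) for col in zip(*vecs)]
            for y, vecs in groups.items()}

X1 = [[0.0, 0.2], [0.2, 0.0], [0.9, 1.0]]
Y1 = ["null_pointer", "null_pointer", "no_live_provider"]
print(initial_centers(X1, Y1)["null_pointer"])  # → [0.1, 0.1]
```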
Step S204: and calculating Euclidean distances from each point in the set X of the feature vectors of the abnormal log samples to each cluster center.
Each point in X corresponds to one feature vector x_i in X.
Step S205: and adjusting the clustering center of each cluster according to the Euclidean distance and clustering.
The above-described process of calculating Euclidean distances, clustering the log samples, and re-determining the cluster centers is iterated continuously, with the cluster center of each cluster adjusted according to the principle of minimizing D_k, where D_k denotes the sum of Euclidean distances from each log sample to the center of its cluster after the k-th iteration.
For any iteration, the sum D of Euclidean distances from each log sample to the center of its cluster is:

D = Σ_{i=1}^{l} Σ_{x ∈ C_i} ||x - c_i||

where c_i is the center of cluster C_i.
Step S206: judge whether the difference between the sum of Euclidean distances obtained in the (k+1)-th iteration and the sum obtained in the k-th iteration is smaller than the set precision δ; if not, return to step S204; if so, stop iterating, i.e., Φ_{k+1} = Φ_k, where Φ_{k+1} and Φ_k denote the cluster sets obtained in the (k+1)-th and the k-th iterations, respectively.
FIG. 3 is a flowchart illustrating a consistency evaluation process of a problem classification model according to an embodiment of the present invention.
According to the embodiment of the invention, the problem classification model is continuously optimized through consistency evaluation so as to improve the clustering precision, thereby improving the accuracy of problem (bug) positioning. As shown in fig. 3, the consistency evaluation flow includes steps S301 to S304.
Step S301: collecting logs of n question scenes as a check set in a local environment, and manually setting expected results of question classification of each log.
The expected result of problem classification is the expected bug classification.
After the test set is collected, a preprocessing operation such as cleaning may be performed on the test set, and the operation of step S302 may be performed using the preprocessed test set.
Step S302: and clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result.
The first trained problem classification model is a problem classification model trained by a training flow shown in fig. 2, in which a first labeled log sample set L and a second unlabeled log sample set U are used as training sets. And inputting the test set into the first trained problem classification model, and outputting a bug classification result of the test set, namely a problem clustering result.
Step S303: and combining the obtained problem clustering result and the problem classification expected result of the log in the inspection set into a cross classification table, and calculating a consistency coefficient kappa.
The consistency coefficient, i.e., the kappa coefficient, is calculated as follows:
$$\kappa = \frac{N \sum_{i} A_{ii} - \sum_{i} A_{i\cdot}\, A_{\cdot i}}{N^{2} - \sum_{i} A_{i\cdot}\, A_{\cdot i}}$$
where N is the total of all entries in the cross-classification table (i.e., the number of checked log samples), A_ii is an element at a diagonal position of the table, A_i· is the sum of the elements in row i, and A_·j is the sum of the elements in column j.
Generally, the closer the kappa coefficient is to 1, the better the consistency between the problem clustering result and the expected problem-classification result. The form of the cross-classification table is shown in Table 1, where A_ij (1 ≤ i ≤ 5, 1 ≤ j ≤ 5) denotes the frequency of each case; for example, A_11 indicates the frequency with which the null-pointer problem is both predicted and expected, where "prediction" refers to the problem clustering result described above and "expectation" refers to the expected problem-classification result described above.
TABLE 1
[Table 1: a 5 × 5 cross-classification table of predicted versus expected problem categories; rendered as an image in the original.]
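A hedged sketch of the kappa computation in step S303 (Python/NumPy assumed; the 3 × 3 toy table is illustrative — the patent's table is 5 × 5):

```python
import numpy as np

def kappa_from_table(A):
    """Cohen's kappa from a cross-classification table A, where rows are
    predicted (clustered) categories and columns are expected categories."""
    A = np.asarray(A, dtype=float)
    N = A.sum()                      # total number of checked log samples
    diag = np.trace(A)               # agreements A_ii on the diagonal
    chance = (A.sum(axis=1) * A.sum(axis=0)).sum()  # sum over i of A_i. * A_.i
    return (N * diag - chance) / (N * N - chance)

# toy table: strong but imperfect agreement between prediction and expectation
A = [[8, 1, 0],
     [0, 9, 1],
     [1, 0, 10]]
k = kappa_from_table(A)  # roughly 0.85, i.e. close to 1
```

A perfectly diagonal table yields κ = 1, matching the interpretation that values near 1 indicate high consistency.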
Step S304: judge whether the consistency coefficient meets the preset condition; if not, add the logs of the n problem scenarios whose problem classification is inconsistent with the expected result to the training set and continue the second training, terminating the second training only once the preset condition is met.
Judging whether the consistency coefficient meets the preset condition means judging whether it currently lies within the 95% confidence interval of κ = 1. If not, the logs (specifically, in feature-vector form) of the n problem scenarios whose problem classification is inconsistent with the expected result are added to the training set, and training continues according to the training flow of fig. 2 (i.e., the second training). The second training terminates only when the coefficient is judged to lie within the 95% confidence interval of κ = 1, which means the probability that the problem clustering result is completely consistent with the expected problem-classification result reaches 95%, i.e., 95 out of 100 trainings reach κ = 1.
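The retraining loop of steps S301–S304 might be sketched as follows (an assumption-laden illustration in Python/NumPy: a nearest-center classifier stands in for the problem classification model, and a simple "all check logs consistent" test stands in for the 95%-confidence-interval criterion on κ = 1):

```python
import numpy as np

def fit_centers(X, y):
    """Per-category mean feature vectors (the model's cluster centers)."""
    labels = sorted(set(y))
    return labels, np.array([X[np.array(y) == c].mean(axis=0) for c in labels])

def predict(centers, labels, x):
    """Assign a feature vector to the category of its nearest center."""
    return labels[int(np.argmin(np.linalg.norm(centers - x, axis=1)))]

def second_training(X, y, check_X, expected, rounds=5):
    """While any check log is classified inconsistently with its expected
    category, add those logs (as feature vectors) to the training set and
    re-train; stop once the consistency criterion is met."""
    for _ in range(rounds):
        labels, centers = fit_centers(X, y)
        pred = [predict(centers, labels, x) for x in check_X]
        wrong = [i for i, (p, e) in enumerate(zip(pred, expected)) if p != e]
        if not wrong:                       # stand-in for the kappa condition
            return labels, centers, pred
        X = np.vstack([X] + [check_X[i:i + 1] for i in wrong])
        y = list(y) + [expected[i] for i in wrong]
    return labels, centers, pred

X = np.array([[0.0], [0.2], [1.0]])         # training feature vectors
y = [0, 0, 1]                               # known problem categories
check_X = np.array([[0.55], [0.9]])         # check-set feature vectors
labels, centers, pred = second_training(X, y, check_X, expected=[1, 1])
```

In this toy run the first round misclassifies the borderline sample, which is then absorbed into the training set, after which the model agrees with the expected results.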
Through semi-supervised learning and consistency evaluation, the embodiment of the invention effectively improves the accuracy of the k-means clustering results and achieves accurate classification of bugs. It is not only suitable for automatically locating bugs in a test environment, but can also be extended to a production environment, helping operation and maintenance personnel locate online problems. The embodiment of the invention places low demands on test resources, has low computational cost, and has strong adaptive capability.
FIG. 4 is a schematic diagram of the main modules of a problem locating device according to one embodiment of the present invention.
As shown in FIG. 4, the problem locating apparatus 400 according to one embodiment of the present invention mainly includes: a first training module 401, a consistency evaluation module 402, a second training module 403, and a problem category prediction module 404.
The first training module 401 is configured to obtain a training set according to a first log sample set with a label and a second log sample set without the label, perform first training on the problem classification model, and use the label to indicate a problem category.
The problem locating apparatus 400 may further include a system configuration module for configuring the system whose logs need to be monitored; specifically, this may include configuring information such as the IP of the system whose logs are currently to be monitored and the log rollback period. For the functionality of the system configuration module, see the detailed description of the system configuration module 502 below.
The problem locating apparatus 400 may also include a log collection module for collecting logs from the system to be monitored; for example, a syslog service built on a Linux server may collect the logs. For the functionality of the log collection module, see the detailed description of the log collection module 503 below.
The problem locating apparatus 400 may further include a log cleaning module for filtering the collected logs to obtain an effective log sample set including a plurality of effective log samples. For the functionality of the log cleaning module, reference may be made to the detailed description of the log cleaning module 504 below.
The problem locating apparatus 400 may further include a training set generation module for obtaining a first set of labeled log samples and a second set of unlabeled log samples, wherein: extracting a plurality of abnormal log samples with defective keywords from the effective log sample set, wherein the abnormal log samples are effective log samples with problems; respectively extracting a plurality of characteristics of the defect keywords from each abnormal log sample, and carrying out scaling and normalization processing on each characteristic to obtain a characteristic vector corresponding to each abnormal log sample; and obtaining a first log sample set according to the feature vector of the abnormal log sample of the known problem category, and obtaining a second log sample set according to the feature vector of the abnormal log sample of the unknown problem category.
The problem classification model may be a clustering model, specifically a k-means clustering algorithm model, or other clustering algorithm models.
The first training module 401 may specifically be configured to: determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster; and continuously iterating the processes of calculating the Euclidean distances, clustering the log samples and re-determining the clustering centers until the difference value between the sum of the Euclidean distances obtained by the k +1 iteration and the sum of the Euclidean distances obtained by the k iteration is smaller than the set precision, stopping iteration to finish the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to the clustering centers after the iteration.
And a consistency evaluation module 402, configured to use logs of a preset number of problem scenes as a test set, and perform consistency evaluation on the first trained problem classification model by using the test set.
The consistency evaluation module 402 may be specifically configured to: clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result; generating a cross classification table by using the problem clustering result and the problem classification expected result of the inspection centralized log; calculating a consistency coefficient according to the cross classification table, wherein the consistency coefficient reflects the consistency of the problem clustering result and the problem classification expected result; and if the consistency coefficient does not meet the preset condition, adding the log in the inspection set, of which the problem clustering result is inconsistent with the expected problem classification result, into the training set, so that a second training module performs second training on the first trained problem classification model.
And a second training module 403, configured to perform second training on the problem classification model after the first training according to the consistency evaluation result.
The consistency evaluation module 402 and the second training module 403 repeat the consistency evaluation and the second training process until the final consistency evaluation result meets the preset condition, and the second training is not continued to obtain the trained problem classification model.
And the problem category prediction module 404 is configured to input the log data of the problem to be located into the trained problem classification model to determine a problem category in the log data.
FIG. 5 is a schematic diagram of the overall framework of a problem locating device according to one embodiment of the present invention.
As shown in fig. 5, the overall frame of the problem locating apparatus according to an embodiment of the present invention includes: a UI (user interface) interaction module 501, a system configuration module 502, a log collection module 503, a log cleaning module 504, a bug identification module 505, an intelligent analysis module 506 and a distributed storage module 507.
The UI interaction module 501 mainly relates to an anomaly monitoring alarm, a report generation, a bug knowledge base and the like, wherein the anomaly monitoring alarm function mainly synchronizes captured bugs and classification types thereof to current testers for timely processing; the report generation function mainly provides a bug summary analysis function in a period of time for a user, vividly and specifically shows bug distribution in the near period of time in a graph generation mode, and helps testers and research and development personnel to grow together through analysis summary; the bug knowledge base summarizes and summarizes related logs of the bug located by the log and corresponding classifications of the logs, and therefore testers can conveniently and specifically recognize the bug quickly.
The system configuration module 502 mainly includes user management, log source system related configuration, parameter setting, etc., and a tester can set information such as a system IP of a current log to be monitored, and can also set a log rollback period of a system, generally, positioning of problems is required to be real-time, summary of the problems is asynchronous, and mass log data is not necessarily stored completely.
The log collection module 503 can use a Linux server to build a syslog service to collect logs, and provide a data source for subsequent intelligent analysis of the logs. Meanwhile, the uploading of log files is supported, a large number of abnormal log samples of known problem categories are needed in the training process of the problem classification model, and a large number of locally stored historical records and expected results obtained through analysis can be uploaded to serve as data sources of a training set.
The log cleaning module 504 filters out fixed-output system logs. Because the logs obtained by the log collection module 503 contain many repeated system logs, which at times far outnumber the effective logs, cleaning away the fixed-output system logs yields the effective logs, reduces the volume of data participating in intelligent analysis, and improves processing efficiency. These effective logs may be used as effective log samples to form the effective log sample set.
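A minimal sketch of such a cleaning filter (Python assumed; the patterns and sample log lines are invented placeholders — a real deployment would match its own fixed-output system logs):

```python
import re

# Hypothetical patterns for fixed, routinely repeated system-log lines.
SYSTEM_LOG_PATTERNS = [
    re.compile(r"heartbeat ok"),
    re.compile(r"health-check .* 200"),
    re.compile(r"session keepalive"),
]

def clean_logs(lines):
    """Drop fixed-output system log lines so only effective logs remain,
    shrinking the data volume passed on to intelligent analysis."""
    def is_system(line):
        return any(p.search(line) for p in SYSTEM_LOG_PATTERNS)
    return [line for line in lines if not is_system(line)]

raw = [
    "2024-01-01 12:00:00 INFO heartbeat ok",
    "2024-01-01 12:00:01 ERROR java.lang.NullPointerException at OrderService",
    "2024-01-01 12:00:02 INFO health-check /ping 200",
]
effective = clean_logs(raw)  # only the error line survives
```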
The bug identification module 505 may act as the training set generation module (described in detail above). It is mainly configured to identify and extract the logs in which bugs exist; the extracted logs may be used as abnormal log samples. Bug identification from test-environment logs strongly depends on how completely developers control log output when coding, so agreement with the developers is needed to print logs as completely as possible, allowing a problem to be diagnosed by analyzing the logs when it occurs. The classification of test-environment bugs is shown in fig. 6, which only shows some common classifications by way of example. Each type of bug can be identified by its corresponding defect keyword; for example, a null-pointer bug can be identified by the keyword java.lang.NullPointerException.
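Identification by defect keyword might look like the following sketch (Python assumed; the keyword-to-category map is illustrative and does not reproduce the patent's fig. 6 categories):

```python
# Hypothetical keyword-to-category map; entries are illustrative only.
DEFECT_KEYWORDS = {
    "java.lang.NullPointerException": "null pointer",
    "java.lang.OutOfMemoryError": "out of memory",
    "SQLException": "database error",
    "Connection timed out": "timeout",
}

def identify_bugs(effective_logs):
    """Extract abnormal log samples: effective logs containing a defect
    keyword, tagged with the matching bug category."""
    samples = []
    for line in effective_logs:
        for keyword, category in DEFECT_KEYWORDS.items():
            if keyword in line:
                samples.append((line, category))
                break
    return samples

logs = [
    "ERROR java.lang.NullPointerException at OrderService.create",
    "INFO request served in 12 ms",
    "ERROR SQLException: lock wait timeout exceeded",
]
abnormal = identify_bugs(logs)  # two abnormal samples, one clean log dropped
```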
The embodiment of the invention mainly uses a defect-keyword extraction method to extract and analyze defect keywords from the logs in the effective log sample set, thereby obtaining a plurality of abnormal log samples. All bug information in the abnormal log samples is then preprocessed to turn the features obtained from all defect keywords into scalars.
In order to facilitate the calculation of the subsequent clustering algorithm, the obtained scalar data set A is normalized to [0, 1] using the min-max method, with the following formula:

$$x' = \frac{x - \min(A)}{\max(A) - \min(A)}$$
where x' is the scalar value after normalizing x. The normalized data set can serve as the direct data source of the intelligent analysis module 506: the normalized scalars of each abnormal log sample's features form a feature vector, the labeled first log sample set is obtained from the feature vectors of abnormal log samples of known problem categories, and the unlabeled second log sample set is obtained from the feature vectors of abnormal log samples of unknown problem categories, yielding the training set.
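The min-max normalization above can be sketched as follows (Python/NumPy assumed; the sample feature values are invented):

```python
import numpy as np

def min_max_normalize(A):
    """Scale a scalar feature data set A into [0, 1]:
    x' = (x - min(A)) / (max(A) - min(A))."""
    A = np.asarray(A, dtype=float)
    lo, hi = A.min(), A.max()
    return (A - lo) / (hi - lo)

# e.g. one scalar feature (defect-keyword frequency) over 4 abnormal samples
freq = [2, 4, 6, 10]
normalized = min_max_normalize(freq)  # [0.0, 0.25, 0.5, 1.0]
```

Normalizing each feature this way keeps features on comparable scales, so no single feature dominates the Euclidean distances used in clustering.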
The intelligent analysis module 506 mainly classifies bugs using semi-supervised k-means clustering: it first constructs a problem classification model by learning from a small number of logs of known bug classes, and then continuously optimizes the model through consistency evaluation. Effective log output is a key factor in a system's maintainability; whether in a production environment or a test environment, logs are crucial for monitoring the system and for analyzing and locating problems, and the embodiment of the invention locates and classifies bugs mainly on the basis of intelligent log analysis. The intelligent analysis module 506 is subdivided into a first training module, a consistency evaluation module, a second training module, and a problem category prediction module, whose functions are respectively the same as those of the first training module 401, the consistency evaluation module 402, the second training module 403, and the problem category prediction module 404, and are not described again here.
The distributed storage module 507 stores the collected log information in a distributed storage manner. Because the data volume of the log information is large, the contents to be stored mainly comprise the located bug and the category information thereof and complete log information within one month, and the data support can be provided for the generation of log reports and the bug knowledge base.
In addition, the detailed implementation of the problem locating device in the embodiment of the present invention has been described in detail in the problem locating method, and therefore, the repeated content will not be described again.
FIG. 7 illustrates an exemplary system architecture 700 in which the problem locating method or problem locating apparatus of embodiments of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. The terminal devices 701, 702, 703 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 701, 702, 703. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the problem location method provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the problem location apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use in implementing a terminal device or server of an embodiment of the present application. The terminal device or the server shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present application when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a first training module, a consistency evaluation module, a second training module, and a problem category prediction module. Where the names of these modules do not in some cases constitute a limitation on the module itself, for example, a first training module may also be described as a "module for first training a problem classification model based on a first set of labeled log samples and a second set of unlabeled log samples to obtain a training set".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: obtaining a training set according to a first log sample set with a label and a second log sample set without the label, and performing first training on a problem classification model, wherein the label is used for indicating a problem category; taking logs of a preset number of problem scenes as a test set, performing consistency evaluation on a first trained problem classification model by using the test set, performing second training on the first trained problem classification model according to a consistency evaluation result, repeating the consistency evaluation and the second training until a final consistency evaluation result meets a preset condition, and not continuing the second training to obtain a trained problem classification model; and inputting the log data of the problem to be positioned into the trained problem classification model so as to determine the problem category in the log data.
According to the technical scheme of the embodiment of the invention, a problem classification model is subjected to first training, logs of a preset number of problem scenes are used as a test set, consistency evaluation is carried out on the problem classification model after the first training, second training is carried out on the problem classification model after the first training according to a consistency evaluation result, the consistency evaluation and the second training process are repeated until a final consistency evaluation result meets a preset condition, and the trained problem classification model is obtained and used for determining the problem category in log data of a problem to be positioned. The bug automatic positioning and classification method can realize automatic accurate positioning and classification of bugs, help testers to avoid some low-level problems, enhance independence of the testers, save cost for repairing problems for developers, and has low requirement on testing resources, low calculation cost and strong self-adaptive capacity.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A problem location method, comprising:
obtaining a training set according to a first log sample set with a label and a second log sample set without the label, and performing first training on a problem classification model, wherein the label is used for indicating a problem category;
taking logs of a preset number of problem scenes as a test set, performing consistency evaluation on a first trained problem classification model by using the test set, performing second training on the first trained problem classification model according to a consistency evaluation result, repeating the consistency evaluation and the second training until a final consistency evaluation result meets a preset condition, and not continuing the second training to obtain a trained problem classification model;
and inputting the log data of the problem to be positioned into the trained problem classification model so as to determine the problem category in the log data.
2. The method of claim 1, wherein the problem classification model is a clustering model,
obtaining a training set according to a first log sample set with a label and a second log sample set without a label, and performing a first training step on the problem classification model, wherein the first training step comprises the following steps:
determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster;
and continuously iterating the processes of calculating the Euclidean distances, clustering the log samples and re-determining the clustering centers until the difference value between the sum of the Euclidean distances obtained by the k +1 iteration and the sum of the Euclidean distances obtained by the k iteration is smaller than the set precision, stopping the iteration to finish the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to the clustering centers after the iteration.
3. The method of claim 1, wherein the step of performing a consistency evaluation on a first trained problem classification model using the test set and performing a second training on the first trained problem classification model based on a consistency evaluation result comprises:
clustering the logs in the inspection set by using the first trained problem classification model to obtain a problem clustering result;
generating a cross classification table by using the problem clustering result and the problem classification expected result of the inspection centralized log;
calculating a consistency coefficient according to the cross classification table, wherein the consistency coefficient represents the consistency of the problem clustering result and the problem classification expected result;
and if the consistency coefficient does not meet the preset condition, adding the log in the inspection set, of which the problem clustering result is inconsistent with the expected problem classification result, into the training set, and performing second training on the first trained problem classification model.
4. The method of claim 1, wherein the tagged first set of log samples and the untagged second set of log samples are obtained by:
extracting a plurality of abnormal log samples containing defect keywords from an effective log sample set, wherein an abnormal log sample is an effective log sample in which a problem occurred;
extracting a plurality of features of the defect keywords from each abnormal log sample, and scaling and normalizing each feature to obtain a feature vector corresponding to each abnormal log sample;
and obtaining the first log sample set from the feature vectors of the abnormal log samples whose problem categories are known, and obtaining the second log sample set from the feature vectors of the abnormal log samples whose problem categories are unknown.
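The claim leaves the concrete features of the defect keywords unspecified. A hypothetical featureization, assuming occurrence count and first position per keyword followed by min-max normalization (all illustrative choices, not from the patent):

```python
def feature_vector(log_text, defect_keywords):
    """Illustrative feature extraction: per defect keyword, the occurrence
    count and the position of the first occurrence, then min-max normalized."""
    raw = []
    for kw in defect_keywords:
        count = log_text.count(kw)
        first = log_text.find(kw)  # -1 when the keyword is absent
        raw.extend([count, max(first, 0)])
    # Scale and normalize to [0, 1], as the claim requires scaling and
    # normalization before forming the feature vector.
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [0.0] * len(raw)
    return [(v - lo) / (hi - lo) for v in raw]
```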
5. The method of claim 4, further comprising: configuring a system whose logs need to be monitored so as to collect logs from the system; and filtering the collected logs to obtain the effective log sample set comprising a plurality of effective log samples.
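The filtering rules in claim 5 are likewise unspecified. As an illustration only, one might keep lines of a minimum length that carry a recognized log level; the `min_length` and `levels` criteria below are assumptions, not from the patent:

```python
def filter_valid_logs(raw_lines, min_length=10, levels=("ERROR", "WARN", "INFO")):
    """Illustrative filtering step: keep non-trivial lines that carry a
    recognized log level; everything else is discarded."""
    valid = []
    for line in raw_lines:
        line = line.strip()
        if len(line) >= min_length and any(lv in line for lv in levels):
            valid.append(line)
    return valid
```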
6. A problem locating device, comprising: a first training module, a consistency evaluation module, a second training module and a problem category prediction module, wherein:
the first training module is configured to obtain a training set from a first log sample set with labels and a second log sample set without labels, and to perform a first training on a problem classification model, wherein the labels indicate problem categories;
the consistency evaluation module is configured to take logs of a preset number of problem scenarios as a test set and to use the test set to perform a consistency evaluation on the first trained problem classification model;
the second training module is configured to perform a second training on the first trained problem classification model according to the consistency evaluation result;
the consistency evaluation module and the second training module repeat the processes of consistency evaluation and second training until the consistency evaluation result meets a preset condition, whereupon the second training stops and a trained problem classification model is obtained;
and the problem category prediction module is configured to input log data of a problem to be located into the trained problem classification model to determine the problem category in the log data.
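The module interplay above (first training, then alternating consistency evaluation and second training until the preset condition holds) can be sketched as an orchestration loop. The callables and the 0.75 threshold are illustrative stand-ins, not taken from the patent:

```python
def train_problem_classifier(train_set, test_set, first_train, evaluate,
                             second_train, threshold=0.75, max_rounds=10):
    """Orchestration sketch: first training, then repeated evaluation and
    second training until the consistency coefficient meets the condition."""
    model = first_train(train_set)
    for _ in range(max_rounds):
        coefficient, hard_logs = evaluate(model, test_set)
        if coefficient >= threshold:       # preset condition met: stop training
            break
        train_set = train_set + hard_logs  # fold disagreeing logs back in
        model = second_train(model, train_set)
    return model
```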
7. The apparatus of claim 6, wherein the problem classification model is a clustering model,
the first training module is further to:
determining a plurality of initial clustering centers according to the first log sample set, calculating Euclidean distances from each log sample in the first log sample set and the second log sample set to each initial clustering center so as to cluster each log sample, and re-determining the clustering center of each cluster;
and iterating the processes of calculating the Euclidean distances, clustering the log samples, and re-determining the cluster centers until the difference between the sum of the Euclidean distances obtained at the (k+1)-th iteration and the sum of the Euclidean distances obtained at the k-th iteration is smaller than a set precision, then stopping the iteration to complete the first training, wherein the sum of the Euclidean distances is the sum of the Euclidean distances from the log samples to their cluster centers after an iteration.
8. The apparatus of claim 6, wherein the consistency evaluation module is further configured to:
clustering the logs in the test set by using the first trained problem classification model to obtain a problem clustering result;
generating a cross classification table from the problem clustering result and the expected problem classification result of the logs in the test set;
calculating a consistency coefficient from the cross classification table, wherein the consistency coefficient represents the degree of consistency between the problem clustering result and the expected problem classification result;
and if the consistency coefficient does not meet a preset condition, adding the logs in the test set whose problem clustering results are inconsistent with the expected problem classification results to the training set, so that the second training module performs the second training on the first trained problem classification model.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910943664.4A 2019-09-30 2019-09-30 Problem positioning method and device Pending CN112579429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910943664.4A CN112579429A (en) 2019-09-30 2019-09-30 Problem positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910943664.4A CN112579429A (en) 2019-09-30 2019-09-30 Problem positioning method and device

Publications (1)

Publication Number Publication Date
CN112579429A true CN112579429A (en) 2021-03-30

Family

ID=75116576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910943664.4A Pending CN112579429A (en) 2019-09-30 2019-09-30 Problem positioning method and device

Country Status (1)

Country Link
CN (1) CN112579429A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254329A (en) * 2021-04-30 2021-08-13 展讯通信(天津)有限公司 Bug processing method, system, equipment and storage medium based on machine learning

Similar Documents

Publication Publication Date Title
CN111178456B (en) Abnormal index detection method and device, computer equipment and storage medium
CN111352971A (en) Bank system monitoring data anomaly detection method and system
CN110335168B (en) Method and system for optimizing power utilization information acquisition terminal fault prediction model based on GRU
CN113792825A (en) Fault classification model training method and device for electricity information acquisition equipment
CN114328198A (en) System fault detection method, device, equipment and medium
CN108985279A (en) The method for diagnosing faults and device of double-unit traction controller waveform
CN111796957B (en) Transaction abnormal root cause analysis method and system based on application log
CN113704389A (en) Data evaluation method and device, computer equipment and storage medium
US11645540B2 (en) Deep graph de-noise by differentiable ranking
CN115204536A (en) Building equipment fault prediction method, device, equipment and storage medium
CN112966957A (en) Data link abnormity positioning method and device, electronic equipment and storage medium
CN117060353A (en) Fault diagnosis method and system for high-voltage direct-current transmission system based on feedforward neural network
CN116361147A (en) Method for positioning root cause of test case, device, equipment, medium and product thereof
CN117952100A (en) Data processing method, device, electronic equipment and storage medium
CN117880062A (en) Hybrid cloud fault automatic diagnosis method based on self-updating GPT model
CN112579429A (en) Problem positioning method and device
CN117493797A (en) Fault prediction method and device of Internet of things equipment, electronic equipment and storage medium
CN114077663A (en) Application log analysis method and device
CN113590484B (en) Algorithm model service testing method, system, equipment and storage medium
CN113052509B (en) Model evaluation method, model evaluation device, electronic apparatus, and storage medium
CN116155541A (en) Automatic machine learning platform and method for network security application
CN114706856A (en) Fault processing method and device, electronic equipment and computer readable storage medium
CN115765153A (en) Method and system for fusion monitoring of Internet of things and online monitoring data of primary electric power equipment
CN113110984B (en) Report processing method, report processing device, computer system and readable storage medium
CN111798237B (en) Abnormal transaction diagnosis method and system based on application log

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination