WO2022121656A1 - 一种系统性能优化方法、装置、电子设备及其可读介质 - Google Patents

一种系统性能优化方法、装置、电子设备及其可读介质 Download PDF

Info

Publication number
WO2022121656A1
WO2022121656A1 · PCT/CN2021/131517 · CN2021131517W
Authority
WO
WIPO (PCT)
Prior art keywords
performance
feature information
nlp
performance optimization
present application
Prior art date
Application number
PCT/CN2021/131517
Other languages
English (en)
French (fr)
Inventor
刘博
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2022121656A1 publication Critical patent/WO2022121656A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations

Definitions

  • the present application relates to the field of computers, and in particular, to a system performance optimization method, apparatus, electronic device, and readable medium thereof.
  • Embodiments of the present application provide a system performance optimization method, apparatus, electronic device, and readable medium thereof.
  • The system performance optimization method provided by the embodiments of the present application includes: extracting system state feature information from the state information output by the system; obtaining the type and probability of performance problems in the system according to the system state feature information; and displaying the type and probability of the performance problems.
  • The embodiments of the present application also provide a system performance optimization device, including: a system state feature extraction module, which filters and extracts the state feature information of the system in a running state; an NLP performance optimization model, which obtains the type and probability of performance problems in the system according to the system state feature information; an early warning processing module, which displays the types of performance problems and their probabilities and lets the user select corresponding processing; and a solution processing setting module, which receives user input and sets the processing method for each performance problem type.
  • Embodiments of the present application further provide an electronic device, including a processor; and a memory arranged to store computer-executable instructions, which, when executed, cause the processor to perform the steps of the above-mentioned system performance optimization method.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores one or more programs, and the one or more programs, when executed by an electronic device including multiple application programs, cause the electronic device to execute the steps of the foregoing system performance optimization method.
  • FIG. 1 is a schematic flowchart of a method for optimizing system performance according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of system performance optimization based on an NLP performance optimization model according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a state feature extraction module according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a training flow of an NLP performance optimization model according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of an early warning processing module according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a solution processing setting module according to an embodiment of the present application.
  • FIG. 7 is a flowchart of sample and model training based on Android system state information and optimization strategies according to an embodiment of the present application
  • FIG. 8 is a schematic diagram of the operation flow of the automatic performance optimization system based on the Android system according to an embodiment of the present application.
  • FIG. 9 is a structural frame diagram of a system performance optimization apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for optimizing system performance according to an embodiment of the present application. Referring to FIG. 1 , the method for optimizing system performance in an embodiment of the present application will be described in detail below.
  • step 101 system state feature information is extracted from the state information output by the system.
  • NLP (Natural Language Processing) is a sub-field of AI.
  • When the system state information data is first received, it is converted into a binary stream, because a binary stream is processed more efficiently than other representations.
  • Because the system status data may contain a lot of invalid (meaningless) information, such as performance-independent parameters and special symbols, the stream codes corresponding to this invalid information are filtered out of the source system status information and eliminated, in order to improve the efficiency of system state feature extraction.
  • The extraction of system state feature information depends on the system state feature structure, which constrains the information content that a system state feature needs to contain. Since different types of performance problems require different system state feature information to be extracted, the content of this structure supports expansion.
  • the attributes of the system state feature structure include content, length, and value.
  • the timestamp in the content is essential. It is the root information of the system state feature. The training and prediction of the model depend on this value.
  • The feature information in the content should not have too many entries (too many entries reduce model training efficiency, and the training cost grows exponentially with their number).
  • the content can include performance indicators such as memory occupancy, CPU occupancy, and I/O throughput.
  • the content length is usually variable, because its value can be a number, a single word, a token, etc., and the content of these values is handled as a string object.
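  • The feature structure described above (content with a mandatory timestamp, variable length, string-typed values) might be sketched as a small data class. This is a minimal illustration; the class and field names are assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StateFeature:
    """One system-state feature record; the timestamp is the root information
    that model training and prediction key on."""
    timestamp: str
    # performance indicators, e.g. memory occupancy, CPU occupancy, I/O throughput;
    # values may be numbers, words, or tokens, so they are all kept as strings
    content: Dict[str, str] = field(default_factory=dict)

    @property
    def length(self) -> int:
        # the content length is variable: it is simply the number of entries
        return len(self.content)

f = StateFeature(timestamp="2021-11-18T12:00:00",
                 content={"mem_occupancy": "63%",
                          "cpu_occupancy": "41%",
                          "io_throughput": "120MB/s"})
print(f.length)
```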
  • step 102 the type and probability of system performance problems are acquired according to the system state feature information.
  • the PO-NLP model is trained based on the NLP model in the AI field.
  • The above-mentioned vector objects are called samples in AI model training. The content of a sample includes the running state of the system and the methods or strategies used by developers to improve system performance in that state, such as the process optimization strategy of the Android system, the cache strategy of a server system, or the I/O scheduling optimization strategy of a database system.
  • Before PO-NLP is commercialized, it is trained on samples of known performance problems and the optimization strategies corresponding to those problems (as shown in FIG. 3). First, the system states of system products with performance problems in the testing phase (such as error reporting, crashes, and memory leaks) are classified; all system state information and corresponding performance optimization strategies for each performance problem are used as samples of the PO-NLP model. Then, following the traditional AI model training steps, training samples are input and trained cyclically by running model inference, calculating the loss, and adjusting the model parameters; when the value of the loss function reaches the optimal solution, training stops, and the model is able to identify that type of performance problem. In the same way, samples of the other performance problem types are trained, until the PO-NLP model can identify the various performance problem types and solve the corresponding problems.
  • step 103 the types and probabilities of performance problems are presented.
  • the type of the performance problem and the probability of the performance problem are displayed to the user through the early warning processing module, and then corresponding operations are performed according to the user's selection.
  • FIG. 2 is a schematic diagram of a system performance optimization flowchart of an NLP performance optimization model according to an embodiment of the present application. Referring to FIG. 2 , the AI-based system performance optimization of an embodiment of the present application will be described in detail below.
  • step 201 the solution processing setting module is configured and the automatic performance optimization system is run.
  • the solution processing setting module is set.
  • The user can customize and extend the processing methods. The system performance optimization system is then started to ensure its normal operation.
  • step 202 the system status is monitored.
  • The state information of the system is written into the cache area by segmented reading. The processing strategy of the cache area is: if the stored system state information reaches a saturated state, step 203 is executed; otherwise, reading of the system state information continues.
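  • The segmented-reading strategy above can be sketched as a small loop; the chunk size and saturation threshold here are illustrative assumptions:

```python
import io

CACHE_CAPACITY = 64  # bytes; illustrative saturation threshold

def fill_cache(source, cache: bytearray) -> bool:
    """Read system-state output in segments until the cache saturates.
    Returns True when the cache is full and ready for feature extraction
    (the step 203 analogue); False means keep monitoring for more output."""
    while len(cache) < CACHE_CAPACITY:
        chunk = source.read(16)   # segmented reading
        if not chunk:
            return False          # no more state output yet
        cache.extend(chunk)
    return True

src = io.BytesIO(b"ts=1;mem=63;cpu=41;" * 8)  # stand-in for live state output
cache = bytearray()
print(fill_cache(src, cache))
```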
  • step 203 the system state feature extraction module.
  • The content of the system state information is read from the buffer area, the feature information of the system state is extracted by the system state feature extraction module, and the feature information is converted into a vector object that the PO-NLP model (that is, the NLP performance optimization model) can process.
  • step 204 the system state is analyzed by the NLP performance optimization model.
  • The NLP performance optimization model is the PO-NLP model.
  • After the model receives the vector object passed in by the system state feature extraction module, it uses the forward propagation algorithm and classification layers such as soft-max to classify the vectorized feature information of the system, and outputs the performance problem type to which the system state feature information belongs and its corresponding occurrence probability value.
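  • The final soft-max classification step can be illustrated in a few lines; the logits and problem-type labels are invented for the example:

```python
import math

PROBLEM_TYPES = ["memory_leak", "crash", "ANR", "overheating"]  # illustrative labels

def softmax(logits):
    """Map raw scores to probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    """Pair each performance-problem type with its occurrence probability,
    most probable first (the model's output list in step 204)."""
    probs = softmax(logits)
    return sorted(zip(PROBLEM_TYPES, probs), key=lambda t: -t[1])

result = classify([2.0, 0.5, 0.1, -1.0])
print(result[0][0])  # → memory_leak
```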
  • step 205 check whether the system performance is abnormal.
  • In step 205, after the data list of performance problem types and their probability values returned by the PO-NLP model is received, if the probability value of one or more of the performance problems is greater than 0, step 206 is executed; otherwise, step 207 is executed.
  • step 206 an early warning processing module.
  • the early warning processing module is called to display the performance problem type and its probability value, and if there are multiple performance problems, it is displayed in the form of a table.
  • Step 207 is executed after the user has completed the respective processing.
  • step 207 the system state feature information is released.
  • The vector object in the cache, that is, the feature information, is released to relieve the operating pressure on the RAM.
  • step 208 the automated performance optimization system is terminated.
  • In step 209, it is determined whether to stop the system performance optimization system; if so, step 208 is executed; otherwise, step 202 is executed.
  • step 301 the system state is read into the cache.
  • the system state information is read, and the encoding format of the information is unified, for example, UTF-8 or GBK encoding.
  • step 302 it is converted into a binary stream.
  • The system state information is converted into a binary stream, because other methods, such as byte streams, are less efficient than binary streams for extracting features and producing vectors.
  • step 303 invalid content is filtered.
  • invalid information in the system state information is filtered out, including meaningless system parameters, invalid words, uniformly encoded garbled characters, and information content without time stamps, and the filtering method may use regular expressions.
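  • A minimal sketch of such regular-expression filtering follows; the patterns (a timestamp check and a non-printable-character strip) are illustrative assumptions, not the patent's actual rules:

```python
import re

# illustrative patterns for invalid content: garbled/non-printable characters,
# and lines carrying no timestamp (unusable as feature roots)
TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}")
GARBAGE = re.compile(r"[^\x20-\x7e\n]")  # strip anything outside printable ASCII

def filter_state_lines(raw: str) -> list:
    """Drop garbled characters, then keep only timestamped state lines."""
    cleaned = GARBAGE.sub("", raw)
    return [ln for ln in cleaned.splitlines() if TIMESTAMP.search(ln)]

log = "12:00:01 mem=63%\n???\ufffd noise\n12:00:02 cpu=41%\nno-stamp line\n"
print(filter_state_lines(log))
```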
  • step 304 feature information is created according to the feature structure.
  • feature information is extracted from the binary stream.
  • the types of feature information can be numerical values, single words, symbols, etc., and the extraction method can use regular expressions or other text processing methods.
  • the extracted information will be stored as data structure objects, which are called characteristic information of the system state.
  • step 305 the feature information is converted into a vector.
  • word2vec, FastText or other model tools are used to convert the system state feature information into vector objects (ie, mathematical symbols).
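  • word2vec and FastText are external model tools; as a self-contained stand-in, the conversion from feature tokens to a fixed-length vector can be sketched with a simple hashing trick (purely illustrative, not the embedding the patent trains):

```python
import hashlib

DIM = 8  # illustrative vector dimension

def vectorize(tokens):
    """Map feature tokens to a fixed-length numeric vector by hashing each
    token into a bucket; word2vec/FastText would instead learn dense embeddings."""
    vec = [0.0] * DIM
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

v = vectorize(["mem=63%", "cpu=41%", "io=120MB/s"])
print(len(v))  # → 8
```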
  • FIG. 4 is a schematic diagram of a training process of an NLP performance optimization model according to an embodiment of the present application. The following will describe the NLP performance optimization model training in an embodiment of the present application in detail with reference to FIG. 4 .
  • step 401 the system states and optimization strategies for different system performance problems are classified and formed into a system state set.
  • The system state information and corresponding optimization strategies of different performance problem types are classified (assuming there are n types), and the system state information and corresponding optimization strategies of each performance problem type are formed into a set, denoted Sn.
  • A training mark is added to each set Si (i = 1, ..., n); if the mark is 1, training is completed; if the mark is 0, the set has not been trained.
  • step 403 the system state file set of the single-type problem is retrieved.
  • That is, a state set file that has not yet been trained, denoted Lk, is selected.
  • In step 404, the training mark of Lk is checked. If the mark value is 0, step 405 is executed; if the mark value is 1, step 401 is executed.
  • step 405 the system state feature extraction module.
  • Lk enters the system state feature extraction module as a text file or other carrier, and is output in the form of vector objects (referred to as vec-objs).
  • the NLP model is trained.
  • model parameters are initialized, vec-objs are input, the NLP inference model is executed on vec-objs, and the loss value of the loss function is calculated.
  • the model parameters are updated.
  • the model parameters are updated by methods such as gradient descent, so as to minimize the loss.
  • When the loss value reaches the optimal solution, training of the model on Lk is stopped, the training flag value of Lk is set to 1, and step 404 is executed.
  • the model training is stopped, and the model after the training is completed is called a PO-NLP model, that is, an NLP performance optimization model.
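  • The training cycle above (initialize parameters, infer, compute the loss, update by gradient descent, stop when the loss is near-optimal) can be sketched on a toy binary classifier; the data, learning rate, and stopping threshold are illustrative assumptions:

```python
import math

# toy samples: (feature vector, label); label 1 marks a known performance problem
data = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.9, 0.3], 1), ([0.2, 0.8], 0)]
w = [0.0, 0.0]  # model parameters, initialized
b = 0.0
lr = 0.5        # learning rate for the gradient-descent update

def forward(x):
    """Inference: sigmoid over a linear score."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(200):  # cyclic training
    loss = 0.0
    for x, y in data:
        p = forward(x)
        loss += -(y * math.log(p + 1e-9) + (1 - y) * math.log(1 - p + 1e-9))
        g = p - y                      # gradient of the log loss w.r.t. the score
        for i in range(len(w)):
            w[i] -= lr * g * x[i]      # parameter update by gradient descent
        b -= lr * g
    if loss < 0.05:                    # loss deemed near-optimal: stop training
        break

print(forward([1.0, 0.2]) > 0.5)  # the high-risk sample is now recognized
```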
  • FIG. 5 is a schematic flowchart of an early warning processing module according to an embodiment of the present application.
  • The early warning processing module according to the embodiment of the present application will be described in detail below with reference to FIG. 5 .
  • step 501 a warning prompt box is popped up to the user.
  • The result list of performance problem types output by the PO-NLP model is read, and if the probability of occurrence of one or more performance problems is greater than 0, a warning prompt box pops up.
  • step 502 the details of the hidden dangers existing in the system performance at this time are displayed.
  • the content prompted by the early warning prompt box includes the name of the performance problem type, the probability of occurrence, the processing method, and the ignore button. If the user clicks the ignore button, the prompt box closes and all content of this alert is skipped.
  • step 503 the user chooses whether to ignore this warning.
  • If the user clicks the ignore button, step 504 is executed; otherwise, step 506 is executed.
  • the prompt box is closed and no processing is performed to ensure the normal operation of the system.
  • step 505 the system operates normally.
  • step 506 various processing schemes are proposed for the user to choose.
  • These processing schemes are provided by the solution processing setting module.
  • FIG. 6 is a schematic diagram of a scheme processing setting module according to an embodiment of the present application.
  • the scheme processing setting module of the embodiment of the present application will be described in detail below with reference to FIG. 6 .
  • the intelligent processing 601 is the default setting.
  • The user can use this option to hand processing over to the performance optimization system, which will handle the problem automatically.
  • the performance optimization system will select the best optimization strategy for optimization according to the current system state.
  • Stop problem task 602 is a recommended setting, suitable for faults such as performance abnormalities caused by a running task. This strategy is recommended when it is detected that one or more subtasks of the system cause abnormal system performance: an abnormal subtask will, with high probability, lead to abnormal performance of the entire system, and stopping the subtask reduces the risk of system failure.
  • Release RAM space 603 is a recommended setting. This strategy is recommended when it is detected that the system is running out of memory. Insufficient system memory will cause the system to run slowly, or even lead to serious problems such as crashes and restarts. Actively stopping some background processes with low activity to free up RAM space is a better approach.
  • Disconnect network connection 604 is a recommended setting. This strategy is recommended when it is detected that the system is, or may be, subject to network attacks, virus intrusions, and the like. Network intrusion can damage user data and even steal user privacy, so disconnecting and repairing the network in time is the best approach.
  • Force-stop the software product 605 is a recommended setting. This processing strategy is recommended when system failures such as memory leaks and crashes are detected. Such problems are difficult to resolve in time, and their consequences are relatively serious and may cause losses to users; forcibly stopping and restarting the system is the best approach.
  • the user can customize the processing methods according to other types of performance problems. Due to the variety of types of performance issues and the way they are handled, an extensible setup capability is necessary.
  • FIG. 7 is a training flow chart based on Android system state information and optimization strategy samples and models. The following will describe the training process based on Android system state information and optimization strategy samples and models in detail with reference to FIG. 7 .
  • step 701 the Android system terminal stress test is performed.
  • a stress test is performed on a mobile phone based on the Android system, and the purpose of the stress test is to perform a monkey test on the mobile phone for a long time without interruption, so as to detect the stability of the mobile phone during the stress test.
  • step 702 whether the system has a performance problem.
  • In step 702, it is detected whether a performance problem occurs in the mobile phone during the test.
  • Performance problems include crashes, ANRs, phone freezes, and phone overheating. If at least one of the problem types occurs, step 703 is performed; otherwise, step 701 is performed.
  • step 703 the status information of the Android system terminal is extracted.
  • the system state information of the mobile phone operating system at this time is extracted.
  • the system state information may include the CPU, RAM, ROM space, process queue, and system log of the mobile phone.
  • In step 704, the optimization strategies used by the developer for the performance problem are collected.
  • an optimization strategy for solving the problem is collected.
  • the optimization strategy mainly refers to the development experience of developers to solve such problems, which can be abstracted into machine instructions or command scripts.
  • step 705 the system state information and optimization strategy are sampled.
  • The system state information collected for a performance problem type and the corresponding optimization strategy are combined into a training file, which can be called a state file and is the basic unit for making AI training samples. All state files of a type are called a state set. Finally, the system state feature structure and the system state feature extraction module are used to turn these state sets into samples for AI model training.
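  • The pairing of state information with its optimization strategy into state files and state sets can be sketched as follows; the record layout and field names are illustrative assumptions:

```python
import json

def make_state_file(problem_type, state_info, strategy):
    """Combine system-state information and its optimization strategy into one
    training record (a 'state file'); all records of one problem type
    together form that type's state set."""
    return {"problem_type": problem_type,
            "state_info": state_info,
            "strategy": strategy}

state_set = [
    make_state_file("crash", "12:00:01 mem=98% cpu=95%", "restart the faulty process"),
    make_state_file("crash", "12:05:40 mem=97% cpu=90%", "restart the faulty process"),
]
print(json.dumps(state_set[0], indent=2))
```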
  • step 706 an Android-based NLP performance optimization model is trained.
  • the training is performed according to the AI model training process shown in FIG. 4 .
  • The model is an Android-based NLP performance optimization model, which can be used on the Android mobile phone system and can monitor the performance status of the mobile phone.
  • FIG. 8 is a schematic diagram of the operation flow of the automatic performance optimization system based on the Android system according to the embodiment of the present application.
  • the operation flow of the automatic performance optimization method based on the Android system according to the embodiment of the present application will be described in detail below with reference to FIG. 8 .
  • step 801 an automated performance optimization system is run on the Android terminal.
  • the automatic performance optimization system is run on the Android mobile phone.
  • the core of this step is to integrate the trained NLP performance optimization model into the mobile phone and run it normally.
  • step 802 the Android system status is monitored.
  • the operating state of the mobile phone system is monitored.
  • the optimization system will collect system performance status information in real time, and the status information will be temporarily stored in the cache area.
  • step 803 Android system state information is extracted.
  • The system state feature extraction module is used to filter out the invalid information in the system state information and convert it into vector parameters identifiable by the model.
  • step 804 the performance state of the Android system is analyzed using the NLP performance optimization model.
  • the NLP performance optimization model is used to calculate the system state information.
  • the model can infer whether there is a performance problem in the system at this time, and when inferring the existence of a performance problem, it can give the type of problem and its probability of occurrence.
  • step 805 check whether the system performance is abnormal.
  • the output of the model includes the type of performance problem and its probability, as well as the corresponding optimization strategy. If yes, go to step 806; otherwise go to step 802.
  • step 806 the Android system is automatically optimized or a recommended processing method is provided to the user.
  • an optimization strategy is used to perform automatic optimization according to the type of problem that occurs, or a recommended processing method is given.
  • the optimization strategy is provided by the performance optimization model of NLP.
  • The system is optimized according to the optimization strategy provided by the system or the processing method selected by the user. After the optimization is completed, go to step 802.
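  • The monitor-extract-infer-warn-optimize cycle of steps 802 to 806 can be sketched as a single loop; all of the stub components and names below are illustrative stand-ins for the real modules:

```python
def run_optimizer(monitor, extract, model, warn, optimize, max_cycles=3):
    """Illustrative main loop: monitor -> extract features -> infer -> warn/optimize.
    Bounded by max_cycles here for demonstration; the real system loops until stopped."""
    for _ in range(max_cycles):
        state = monitor()                  # step 802: collect system state info
        vec = extract(state)               # step 803: features -> vector parameters
        problems = model(vec)              # step 804: PO-NLP inference
        abnormal = [(t, p) for t, p in problems if p > 0]
        if abnormal:                       # step 805: abnormality detected
            choice = warn(abnormal)        # step 806: warn user / recommend handling
            optimize(choice)
    return "stopped"

# stub components standing in for the real modules
print(run_optimizer(
    monitor=lambda: "mem=97%",
    extract=lambda s: [1.0],
    model=lambda v: [("memory_leak", 0.9 if v[0] > 0.5 else 0.0)],
    warn=lambda probs: probs[0][0],
    optimize=lambda c: None,
))  # → stopped
```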
  • FIG. 9 is a structural frame diagram of an apparatus for optimizing system performance according to an embodiment of the present application. The following will describe the apparatus for optimizing system performance in an embodiment of the present application in detail with reference to FIG. 9 .
  • First, the system state feature extraction module 901 .
  • After receiving the system state information data, the module first converts it into a binary stream, because a binary stream is processed more efficiently than other representations. Secondly, because the system status data may contain a lot of invalid (meaningless) information, such as performance-independent parameters and special symbols, the stream codes corresponding to this invalid information are filtered out of the source system status information and eliminated, in order to improve the efficiency of system state feature extraction.
  • the extraction of system state feature information depends on the system state feature structure, which constrains the information content that the system state feature needs to contain. Since different types of performance problems need to extract different system state feature information, the content structure of this structure supports expansion.
  • the attributes of the system state feature structure include content, length, and value.
  • the timestamp in the content is essential. It is the root information of the system state feature. The training and prediction of the model depend on this value.
  • The feature information in the content should not have too many entries (too many entries reduce model training efficiency, and the training cost grows exponentially with their number).
  • the content can include performance indicators such as memory occupancy, CPU occupancy, and I/O throughput.
  • the content length is usually variable, because its value can be a number, a single word, a token, etc., and the content of these values is handled as a string object.
  • the performance optimization model of NLP is a PO-NLP model trained based on the NLP model in the AI field.
  • The above-mentioned vector objects are called samples in AI model training. The content of a sample includes the running state of the system and the methods or strategies used by developers to improve system performance in that state, such as the process optimization strategy of the Android system, the cache strategy of a server system, or the I/O scheduling optimization strategy of a database system.
  • Before PO-NLP is commercialized, it is trained on samples of known performance problems and the optimization strategies corresponding to those problems (as shown in FIG. 3). First, the system states of system products with performance problems in the testing phase (such as error reporting, crashes, and memory leaks) are classified; all system state information and corresponding performance optimization strategies for each performance problem are used as samples of the PO-NLP model. Then, following the traditional AI model training steps, training samples are input and trained cyclically by running model inference, calculating the loss, and adjusting the model parameters; when the value of the loss function reaches the optimal solution, training stops, and the model is able to identify that type of performance problem. In the same way, samples of the other performance problem types are trained, until the PO-NLP model can identify the various performance problem types and solve the corresponding problems.
  • Early warning processing module 903 .
  • A warning prompt interface pops up; the interface displays the performance problem type and its corresponding probability value, and under each performance problem type one or more processing methods are provided. The contents of these processing methods all come from the solution processing setting module.
  • the prompt interface also provides an ignore function, the user can select this function, and the system will continue to run; if one or more processing methods are selected, the module will process according to the selected processing methods.
  • Solution processing setting module 904.
  • For each performance problem type, one or more processing methods are provided by default for the user to select.
  • When the present application detects a performance risk in the system at some moment, the settings of this module are displayed to the user as an interface.
  • The default is intelligent processing, that is, the performance optimization system optimizes the current performance problem according to the learned optimization strategy (which strategy applies to which problem is learned during AI model training).
  • The module also provides some recommended optimization solutions: when serious problems such as memory leaks and crashes are detected, the user is advised to forcibly stop the software running in the system; when a subtask is detected to be causing a system performance anomaly, the user is advised to stop that subtask; when problems such as insufficient memory are detected, the user is advised to release memory; and when a possible network attack or network virus intrusion is detected, the user is advised to disconnect the network connection.
  • The module also supports expansion: when there is a new performance problem type, the user can add it and define a custom processing method for that performance problem type.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • As shown in FIG. 10, at the hardware level the electronic device includes a processor and may also include an internal bus, a network interface, and memory. The memory may include main memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk storage. The electronic device may also include hardware required for other services.
  • The processor, the network interface, and the memory may be connected to one another through the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bidirectional arrow is shown in FIG. 10, but this does not mean there is only one bus or one type of bus.
  • The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions.
  • The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming a shared resource access control apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the steps of the system performance optimization method described above.
  • An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including a plurality of application programs, enable the portable electronic device to perform the methods of the embodiments shown in the accompanying drawings, and specifically to perform the steps of the system performance optimization method described above.
  • The system performance optimization method of the embodiments of the present application effectively solves the problem that performance optimization cannot be automated, and unifies and integrates the optimization schemes of different systems, thereby also solving the cross-platform problem. By monitoring the system's performance state in real time, it optimizes against the system's existing vulnerabilities, unreasonable running states, or hidden dangers such as system crashes.
  • The present application trains on the performance states of running systems and the optimization strategies developers have devised for different systems, so that the NLP performance optimization model acquires a system optimization capability similar to a developer's. By monitoring the system's performance state in real time, it computes the probability that the system currently has a performance problem and, using the learned optimization strategies, provides corresponding solutions so as to tune system parameters, prevent system performance degradation, and adjust the system state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A system performance optimization method, apparatus, electronic device, and readable medium thereof, the method comprising: extracting system state feature information from state information output by a system (101); obtaining, according to the system state feature information, the type and probability of a performance problem occurring in the system (102); and displaying the type of the performance problem and the probability of the performance problem (103).

Description

System performance optimization method and apparatus, electronic device, and readable medium thereof
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 202011443796.X, filed on December 8, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of computers, and in particular to a system performance optimization method, apparatus, electronic device, and readable medium thereof.
BACKGROUND
Mobile terminals, servers, embedded system products, and the like all depend on the support of an operating system. The various software running in a system consumes system resources (such as CPU, RAM, and storage space). Because software products vary in quality, hidden performance risks inevitably arise, such as system crashes caused by resource exhaustion. These risks undermine the robustness of enterprise products, leading to a poor user experience and system instability. Reducing fatal defects that damage the enterprise's image is therefore imperative, and performance optimization is a top priority.
However, current system optimization schemes mostly adopt fixed optimization values. As the system runs, its operating conditions change; different parameters not only affect one another but also jointly affect system performance. Consequently, the optimal values selected during the system development phase can hardly satisfy the system's running requirements over time.
SUMMARY
Embodiments of the present application provide a system performance optimization method, apparatus, electronic device, and readable medium thereof.
The system performance optimization method provided by the embodiments of the present application includes: extracting system state feature information from state information output by a system; obtaining, according to the system state feature information, the type and probability of a performance problem occurring in the system; and displaying the type of the performance problem and the probability of the performance problem.
Embodiments of the present application further provide a system performance optimization apparatus, including a system state feature extraction module, an NLP performance optimization model, an early warning processing module, and a solution processing setting module. The system state feature extraction module filters and extracts state feature information of a running system; the NLP performance optimization model obtains, according to the system state feature information, the type and probability of a performance problem occurring in the system; the early warning processing module displays the type and probability of the performance problem and can perform corresponding processing according to the user's selection; and the solution processing setting module receives user input and sets processing methods for performance problem types.
Embodiments of the present application further provide an electronic device, including a processor and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the above system performance optimization method.
Embodiments of the present application further provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the steps of the above system performance optimization method.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a system performance optimization method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of system performance optimization based on an NLP performance optimization model according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a state feature extraction module according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of training an NLP performance optimization model according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of an early warning processing module according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a solution processing setting module according to an embodiment of the present application;
FIG. 7 is a flowchart of sample creation and model training based on Android system state information and optimization strategies according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of the operation of an Android-based automatic performance optimization system according to an embodiment of the present application;
FIG. 9 is a structural block diagram of a system performance optimization apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
DETAILED DESCRIPTION
Some embodiments of the present application are described below with reference to the accompanying drawings. It should be understood that the embodiments described here are only intended to illustrate and explain the present application, not to limit it.
Embodiment 1
FIG. 1 is a schematic flowchart of a system performance optimization method according to an embodiment of the present application. The method is described in detail below with reference to FIG. 1.
First, in step 101, system state feature information is extracted from the state information output by the system.
In this embodiment, because the volume of system state information is too large, the key information needs to be extracted and converted into an object type that the NLP performance optimization model can process directly, ensuring efficient real-time monitoring.
In this embodiment, NLP (Natural Language Processing) is a subfield of AI.
In this embodiment, the received system state information is first converted into a binary stream, because a binary stream can be processed more efficiently than other representations. Next, since the system state data may contain much invalid (meaningless) information, such as performance-irrelevant parameters and special symbols, the stream codes corresponding to this invalid information are filtered out of the source system state information stream to improve the efficiency of feature extraction. The extraction of system state feature information depends on a system state feature structure, which constrains what information a system state feature must contain. Because different performance problem types require different feature information, the content of this structure is extensible. The attributes of the structure include content, length, and value. The timestamp in the content is indispensable: it is the root information of a system state feature, on which both model training and prediction depend. The number of feature entries in the content should not be too large (too many entries degrade model training efficiency, whose cost grows exponentially); the content may include performance indicators such as memory usage, CPU usage, and I/O throughput. The content length is usually variable, because a value may be a number, a single word, a marker symbol, and so on, and all values are handled as string objects. Once the feature structure is defined, the matching content is extracted from the system state information stream according to the definition and encapsulated into data structure objects. Finally, all encapsulated data structure objects are converted by a model such as word2vec into vector objects that an AI model can process.
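The extraction pipeline described above (filter out invalid content, pull out fields that match a feature structure with a mandatory timestamp, then vectorize) can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the field names, regular expressions, and the hashing-based stand-in for word2vec are all assumptions.

```python
import hashlib
import re

# Hypothetical feature structure: field name -> extraction pattern.
# The timestamp is mandatory and acts as the root of the feature.
FEATURE_STRUCT = {
    "timestamp": r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\]",
    "cpu": r"cpu=(\d+)%",
    "mem": r"mem=(\d+)%",
}

def extract_features(status_line: str):
    """Extract one feature object from a line of system status text.

    Returns None when the mandatory timestamp is missing, i.e. the
    line is treated as invalid content and filtered out.
    """
    feature = {}
    for name, pattern in FEATURE_STRUCT.items():
        m = re.search(pattern, status_line)
        if m:
            feature[name] = m.group(1)  # all values handled as strings
    if "timestamp" not in feature:
        return None
    return feature

def vectorize(feature: dict, dim: int = 8):
    """Toy stand-in for word2vec: hash each string value into a bucket
    of a fixed-size vector so an AI model can consume the feature."""
    vec = [0.0] * dim
    for value in feature.values():
        h = int(hashlib.md5(value.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

line = "[2020-12-08 10:00:00] cpu=93% mem=88% noise=@@@"
feat = extract_features(line)
vec = vectorize(feat)
```

In a real system the vectorization step would use a trained embedding model such as word2vec or FastText, as the description notes; the hash trick above only preserves the shape of the data flow.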
In step 102, the type and probability of a performance problem occurring in the system are obtained according to the system state feature information.
In this embodiment, the PO-NLP model is a model trained on the basis of NLP models in the AI field.
In this embodiment, the above vector objects are called samples in AI model training. A sample contains the running state of the system and the methods or strategies developers use to improve system performance in that state, such as process optimization strategies on the Android system, cache strategies of a server system, and I/O scheduling optimization strategies of a database system. Before commercial use, PO-NLP is a finished product trained on samples of known performance problems and the optimization strategies corresponding to those problems (as shown in FIG. 3): first, the system states in which the product exhibited performance problems during the testing phase are classified (such as errors, crashes, and memory leaks); all system state information for each performance problem, together with the corresponding performance optimization strategy, is used as a sample for the PO-NLP model. Training then follows the conventional AI model training steps in a loop (inputting training samples, running the inference model on them, computing the loss, and adjusting the model parameters) and stops when the loss function reaches its optimal value, at which point the model can recognize that performance problem type. Samples of other performance problem types are trained in the same way, so that the final PO-NLP model can recognize multiple performance problem types and solve the corresponding problems.
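The training loop named above (input samples, run inference, compute the loss, adjust parameters, stop at the optimum) can be sketched with a tiny pure-Python logistic classifier. The data, learning rate, and binary setup are illustrative assumptions; the patent's PO-NLP model is a multi-class NLP model, not this toy.

```python
import math
import random

def train_classifier(samples, epochs=200, lr=0.5):
    """Loop of the conventional training steps: forward inference,
    loss computation, parameter adjustment, repeated until the loss
    stops improving. Binary logistic regression for brevity."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    prev_loss = float("inf")
    for _ in range(epochs):
        loss = 0.0
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))              # inference
            loss += -(y * math.log(p + 1e-9)
                      + (1 - y) * math.log(1 - p + 1e-9))  # loss
            g = p - y                                   # gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g                                 # adjust params
        if prev_loss - loss < 1e-6:  # loss has reached its optimum
            break
        prev_loss = loss
    return w, b

# Hypothetical samples: feature vector -> 1 for a "memory leak" state,
# 0 for a normal state (first component marks the leak signature).
random.seed(0)
data = [([1.0, random.random()], 1) for _ in range(20)] + \
       [([0.0, random.random()], 0) for _ in range(20)]
w, b = train_classifier(data)
```

Training one such classifier per performance problem type, as the description suggests, would give the model the ability to recognize each type in turn.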
In step 103, the type and probability of the performance problem are displayed.
In this embodiment, the early warning processing module presents the type and probability of the performance problem to the user, and corresponding operations are then performed according to the user's selection.
Embodiment 2
FIG. 2 is a schematic flowchart of system performance optimization based on the NLP performance optimization model according to an embodiment of the present application. AI-based system performance optimization is described in detail below with reference to FIG. 2.
First, in step 201, the solution processing setting module is configured, and the automated performance optimization system is run.
In this embodiment, the solution processing setting module is configured; in addition to the default processing methods, the user can define custom extended processing methods. The system performance optimization system is then started and kept running normally.
In step 202, the system state is monitored.
In this embodiment, the system's state information is written into a buffer by segmented reads. The buffer's policy is: if the stored state information has reached saturation, step 203 is executed; otherwise, reading of system state information continues.
In step 203, the system state feature extraction module runs.
In this embodiment, the system state information is read from the buffer, its feature information is extracted by the system state feature extraction module, and the features are converted into vector objects that the PO-NLP model (i.e., the NLP performance optimization model) can process.
In step 204, the NLP performance optimization model runs.
In this embodiment, the NLP performance optimization model is the PO-NLP model. After receiving the vector objects passed in by the system state feature extraction module, it classifies the feature information represented by the vectors using a forward propagation algorithm and a classification layer such as soft-max, and outputs the performance problem type to which the system state feature information belongs along with the corresponding probability of occurrence.
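The soft-max classification layer mentioned in this step can be sketched as follows: raw scores from forward propagation are turned into a probability per performance problem type. The label names and logit values are illustrative assumptions.

```python
import math

PROBLEM_TYPES = ["crash", "memory_leak", "anr"]  # illustrative labels

def softmax(logits):
    """Convert raw scores from forward propagation into probabilities
    that sum to 1 (subtracting the max for numerical stability)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Pair each performance problem type with its probability,
    most likely problem first, mirroring the model's output list."""
    probs = softmax(logits)
    return sorted(zip(PROBLEM_TYPES, probs), key=lambda t: -t[1])

result = classify([0.2, 2.1, -1.0])
```

The sorted (type, probability) list is the shape of data the early warning processing module consumes in step 205.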
In step 205, it is determined whether a system performance anomaly exists.
In this embodiment, after the data list of performance problem types and probability values returned by the PO-NLP model is received, if the occurrence probability of one or more performance problems is greater than 0, step 206 is executed; otherwise, step 207 is executed.
In step 206, the early warning processing module runs.
In this embodiment, the early warning processing module is called to display the performance problem types and their probability values; multiple performance problems are displayed in a table. After the user has handled each of them, step 207 is executed.
In step 207, the system state feature information is released.
In this embodiment, the vector objects in the cache, i.e., the feature information, are released to relieve RAM pressure.
In step 208, the automated performance optimization system is ended.
In this embodiment, it is determined whether to stop the system performance optimization system; if so, step 209 is executed, otherwise step 202 is executed.
In step 209, the flow ends.
In this embodiment, the above flow ends.
Embodiment 3
FIG. 3 is a schematic flowchart of the state feature extraction module according to an embodiment of the present application, described in detail below with reference to FIG. 3.
First, in step 301, the system state is read into a buffer.
In this embodiment, the system state information is read and its character encoding is unified, for example to UTF-8 or GBK.
In step 302, the information is converted into a binary stream.
In this embodiment, the system state information is converted into a binary stream, because other representations such as byte streams are less efficient for feature extraction and vector creation.
In step 303, invalid content is filtered out.
In this embodiment, invalid information in the system state information is filtered out, including meaningless system parameters, invalid words, garbled characters produced by encoding unification, and content without a timestamp; regular expressions, among other methods, can be used for filtering.
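The regex-based filtering of step 303 can be sketched as follows. The two patterns (a timestamp requirement and a check for Unicode replacement characters left by encoding unification) are illustrative assumptions about what "invalid content" looks like.

```python
import re

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")
GARBLED = re.compile(r"\ufffd")  # replacement chars from re-encoding

def filter_invalid(lines):
    """Keep only lines that carry a timestamp and contain no garbled
    characters; everything else is treated as invalid content."""
    kept = []
    for line in lines:
        if not TIMESTAMP.search(line):
            continue  # content without a timestamp is dropped
        if GARBLED.search(line):
            continue  # mojibake produced by encoding unification
        kept.append(line)
    return kept

raw = [
    "2020-12-08 10:00:01 cpu=41%",
    "free-form banner without time",
    "2020-12-08 10:00:02 \ufffd\ufffd corrupted",
]
clean = filter_invalid(raw)
```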
In step 304, feature information is created according to the feature structure.
In this embodiment, feature information is extracted from the binary stream. Its type may be a number, a single word, a marker symbol, and so on, and extraction may use regular expressions or other text processing methods. The extracted information is stored as data structure objects, which are called the feature information of the system state.
In step 305, the feature information is converted into vectors.
In this embodiment, word2vec, FastText, or another model tool is used to convert the system state feature information into vector objects (i.e., mathematical symbols). At this point, the module has completed the extraction of system state features and has output vector objects that the AI model can process directly.
Embodiment 4
FIG. 4 is a schematic flowchart of training the NLP performance optimization model according to an embodiment of the present application, described in detail below with reference to FIG. 4.
First, in step 401, the system states and optimization strategies for different system performance problems are classified and assembled into system state sets.
In this embodiment, the system state information and corresponding optimization strategies for the different performance problem types are classified (assume there are n types), and the state information and corresponding strategy for each performance problem type form a set, denoted S_i. A training flag is added to each set S_i (i = 1, ..., n): a flag of 1 means training is complete, and a flag of 0 means the set has not been trained.
In step 402, it is checked whether the system state sets contain an untrained subset.
In this embodiment, the training flag of S_i is checked; if the flag value is 0, step 403 is executed; if the flags of all S_i (i = 1, ..., n) are 1, step 408 is executed.
In step 403, the set of system state files for a single problem type is taken out.
In this embodiment, all system state information of the set S_i is taken out; its quantity is denoted q, and each piece of state information is denoted L_k (k = 1, ..., q). A training flag is added to each L_k: 1 means training is complete, 0 means not yet trained.
In step 404, it is checked whether any file in the state set has not been trained.
In this embodiment, the training flag of L_k is checked; if the flag value is 0, step 405 is executed; if it is 1, step 401 is executed.
In step 405, the system state feature extraction module runs.
In this embodiment, L_k enters the system state feature extraction module as a text file or another carrier and is output as vector objects (denoted vec-objs).
In step 406, the NLP model is trained.
In this embodiment, the model parameters are initialized, vec-objs are input, the NLP inference model is run on vec-objs, and the loss value of the loss function is computed.
In step 407, the model parameters are updated.
In this embodiment, the model parameters are updated by methods such as gradient descent to minimize the loss. When the loss value reaches its optimal solution, training of the model on L_k stops, and the training flag of L_k is set to 1. Step 404 is then executed.
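The gradient-descent update of step 407 is the rule theta <- theta - lr * grad(theta), applied until the loss stops moving. A one-dimensional sketch on a toy quadratic loss (my own example, not from the patent) makes the stopping condition concrete:

```python
def gradient_descent(grad, theta0, lr=0.1, tol=1e-8, max_steps=10_000):
    """Repeatedly apply theta <- theta - lr * grad(theta), stopping
    once the update no longer changes the parameter, which mirrors
    'stop when the loss reaches its optimal solution' above."""
    theta = theta0
    for _ in range(max_steps):
        step = lr * grad(theta)
        if abs(step) < tol:
            break
        theta -= step
    return theta

# Toy loss L(theta) = (theta - 3)^2, gradient 2 * (theta - 3);
# the optimum is theta = 3.
theta_opt = gradient_descent(lambda t: 2 * (t - 3), theta0=0.0)
```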
In step 408, training is complete.
In this embodiment, model training stops. The trained model is called the PO-NLP model, i.e., the NLP performance optimization model.
Embodiment 5
FIG. 5 is a schematic flowchart of the early warning processing module according to an embodiment of the present application, described in detail below with reference to FIG. 5.
First, in step 501, a warning prompt box is popped up for the user.
In this embodiment, the performance problem type result list passed in by the PO-NLP model is read; if the occurrence probability of one or more performance problems is greater than 0, the warning prompt box pops up.
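The trigger condition of step 501 (pop up the box only when some problem's probability is greater than 0) reduces to a simple filter over the model's result list. The data below is illustrative.

```python
def pending_warnings(results):
    """Return the problems whose occurrence probability is greater
    than 0, i.e. the condition that pops up the warning prompt box."""
    return [(name, p) for name, p in results if p > 0]

# Illustrative PO-NLP output: (problem type, probability) pairs.
model_output = [("crash", 0.0), ("memory_leak", 0.72), ("anr", 0.05)]
alerts = pending_warnings(model_output)
show_prompt = bool(alerts)  # pop up the box only when something matched
```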
In step 502, the details of the current hidden performance risks are displayed.
In this embodiment, the warning prompt box shows the performance problem type name, the occurrence probability, the processing methods, and an ignore button. If the user clicks the ignore button, the prompt box closes and everything in this warning is skipped.
In step 503, the user chooses whether to ignore the warning.
In this embodiment, if the user clicks the ignore button, step 504 is executed; otherwise, step 506 is executed.
In step 504, no processing is performed.
In this embodiment, the prompt box is closed without any processing, and the system is kept running normally.
In step 505, the system runs normally.
In this embodiment, the system continues to run normally.
In step 506, multiple processing solutions are offered for the user to choose from.
In this embodiment, after the user selects a processing method, corresponding processing is performed according to its content. Once all processing methods have been executed, the prompt box is closed.
In step 507, the solution processing setting module runs.
In this embodiment, multiple processing solutions are displayed through the solution processing setting module for the user to choose from.
Embodiment 6
FIG. 6 is a schematic diagram of the solution processing setting module according to an embodiment of the present application, described in detail below with reference to FIG. 6.
In this embodiment, intelligent processing 601 is the default setting. With this option the user delegates handling to the performance optimization system, which handles problems automatically by selecting the best optimization strategy for the current system state.
In this embodiment, stopping the problem task 602 is a recommended setting, suitable for faults such as performance anomalies caused by a running task. This strategy is recommended when one or more subtasks of the system are detected to be causing a performance anomaly. A subtask anomaly is very likely to make the performance of the whole system abnormal, and stopping the subtask reduces the risk of a system fault.
In this embodiment, releasing RAM space 603 is a recommended setting. This strategy is recommended when insufficient running memory is detected. Insufficient memory slows the system down and can even cause serious problems such as freezes and restarts; proactively stopping some background processes with low activity to free RAM space is a good approach.
In this embodiment, disconnecting the network connection 604 is a recommended setting. This strategy is recommended when it is detected that the system is suffering, or may suffer, problems such as a network attack or virus intrusion. Network intrusions can damage user data and even steal user privacy; disconnecting promptly and then repairing is the best approach.
In this embodiment, forcibly stopping the software product 605 is a recommended setting. This strategy is recommended when faults such as memory leaks and crashes are detected. Such problems are hard to solve in time and have relatively serious consequences that may cause losses to the user; forcibly stopping and restarting the system is the best approach.
In this embodiment, ...... 606: beyond the recommended processing methods above, the user can define custom processing methods for other performance problem types. Because performance problem types are diverse, processing methods also vary, so an extensible setting function is necessary.
Embodiment 7
FIG. 7 is a flowchart of sample creation and model training based on Android system state information and optimization strategies, described in detail below with reference to FIG. 7.
First, in step 701, a stress test is run on an Android system terminal.
In this embodiment, a stress test is performed on an Android-based mobile phone. The purpose of the stress test is to run monkey testing on the phone continuously for a long time to check the phone's stability during the test.
In step 702, it is checked whether the system exhibits a performance problem.
In this embodiment, it is checked whether the phone exhibits a performance problem during the test. Performance problems include crashes, ANRs, phone lag, phone overheating, and other types. If at least one of these problem types occurs, step 703 is executed; otherwise, step 701 is executed.
In step 703, the state information of the Android system terminal is extracted.
In this embodiment, the system state information of the phone's operating system at that moment is extracted. The system state information may include the phone's CPU, RAM, ROM space, process queue, and system log.
In step 704, the optimization strategies developers used for the performance problem are collected.
In this embodiment, optimization strategies for solving the performance problem exhibited by the phone system are collected. An optimization strategy mainly refers to the development experience developers apply to solve this class of problem; it can be abstracted into machine instructions or command scripts.
In step 705, the system state information and optimization strategies are made into samples.
In this embodiment, the system state information collected for the performance problem type and the corresponding optimization strategy are combined into a training file, which may be called a state file, the basic unit for creating AI training samples. All state files of the same problem type are called a state set. Finally, the system state feature structure and the system state feature extraction module are used to turn these state sets into samples for AI model training.
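The state file and state set of step 705 can be sketched as plain records: one file pairs the observed system state with the strategy that fixed it, and all files of one problem type form a set. The JSON representation and field names are my own assumptions for illustration.

```python
import json
from collections import defaultdict

def make_state_file(problem_type, state_info, strategy):
    """One training file ('state file'): the system state observed for
    a performance problem plus the optimization strategy applied."""
    return json.dumps({
        "problem_type": problem_type,
        "state": state_info,
        "strategy": strategy,
    })

def group_into_state_sets(state_files):
    """All state files of the same problem type form one state set,
    the unit handed to the feature extraction module for sampling."""
    sets = defaultdict(list)
    for raw in state_files:
        record = json.loads(raw)
        sets[record["problem_type"]].append(record)
    return dict(sets)

files = [
    make_state_file("anr", {"cpu": "97%"}, "lower thread priority"),
    make_state_file("anr", {"cpu": "95%"}, "move work off main thread"),
    make_state_file("crash", {"mem": "99%"}, "restart the app"),
]
state_sets = group_into_state_sets(files)
```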
In step 706, the Android-based NLP performance optimization model is trained.
In this embodiment, training follows the AI model training flow shown in FIG. 4. The resulting model is the Android-based NLP performance optimization model, which can be deployed on an Android phone system to monitor the phone's performance state.
Embodiment 8
FIG. 8 is a schematic flowchart of the operation of the Android-based automatic performance optimization system according to an embodiment of the present application, described in detail below with reference to FIG. 8.
First, in step 801, the automated performance optimization system is run on the Android terminal.
In this embodiment, the automatic performance optimization system is run on an Android phone. The core of this step is integrating the trained NLP performance optimization model onto the phone and running it normally.
In step 802, the Android system state is monitored.
In this embodiment, the running state of the phone system is monitored. While the phone is running, the optimization system collects the system's performance state information in real time, and the state information is stored temporarily in a buffer area.
In step 803, the Android system state information is extracted.
In this embodiment, the system state feature extraction module filters invalid information out of the system state information and converts it into vector parameters that the model can recognize.
In step 804, the NLP performance optimization model analyzes the performance state of the Android system.
In this embodiment, the NLP performance optimization model performs inference on the system state information. The model can infer whether the system currently has a performance problem and, when it infers that one exists, can give the problem type and its probability of occurrence.
In step 805, it is determined whether a system performance anomaly exists.
In this embodiment, after the result from the NLP performance optimization model is received, it is determined whether a performance problem exists. The model's output includes the performance problem type, its probability, and the corresponding optimization strategy. If a problem exists, step 806 is executed; otherwise, step 802 is executed.
In step 806, the Android system is optimized automatically, or recommended processing methods are offered to the user.
In this embodiment, the problem type that occurred is optimized automatically using an optimization strategy, or a recommended processing method is given. The optimization strategy is provided by the NLP performance optimization model.
In step 807, optimization is complete.
In this embodiment, the system is optimized according to the optimization strategy provided by the system or the processing method selected by the user. After optimization is complete, the flow returns to step 802.
Embodiment 9
FIG. 9 is a structural block diagram of the system performance optimization apparatus according to an embodiment of the present application, described in detail below with reference to FIG. 9.
First, the system state feature extraction module 901.
In this embodiment, because the volume of system state information is too large, the key information needs to be extracted and converted into an object type that the AI model can process directly, ensuring efficient real-time monitoring.
In this embodiment, after this module receives the system state information data, it first converts the data into a binary stream, because a binary stream can be processed more efficiently than other representations. Next, since the system state data may contain much invalid (meaningless) information, such as performance-irrelevant parameters and special symbols, the stream codes corresponding to this invalid information are filtered out of the source system state information stream to improve the efficiency of feature extraction. The extraction of system state feature information depends on a system state feature structure, which constrains what information a system state feature must contain. Because different performance problem types require different feature information, the content of this structure is extensible. The attributes of the structure include content, length, and value. The timestamp in the content is indispensable: it is the root information of a system state feature, on which both model training and prediction depend. The number of feature entries in the content should not be too large (too many entries degrade model training efficiency, whose cost grows exponentially); the content may include performance indicators such as memory usage, CPU usage, and I/O throughput. The content length is usually variable, because a value may be a number, a single word, a marker symbol, and so on, and all values are handled as string objects. Once the feature structure is defined, the matching content is extracted from the system state information stream according to the definition and encapsulated into data structure objects. Finally, all encapsulated data structure objects are converted by a model such as word2vec into vector objects that an AI model can process.
The NLP performance optimization model 902.
In this embodiment, the NLP performance optimization model is a PO-NLP model trained on the basis of NLP models in the AI field.
In this embodiment, the above vector objects are called samples in AI model training. A sample contains the running state of the system and the methods or strategies developers use to improve system performance in that state, such as process optimization strategies on the Android system, cache strategies of a server system, and I/O scheduling optimization strategies of a database system. Before commercial use, PO-NLP is a finished product trained on samples of known performance problems and the optimization strategies corresponding to those problems (as shown in FIG. 3): first, the system states in which the product exhibited performance problems during the testing phase are classified (such as errors, crashes, and memory leaks); all system state information for each performance problem, together with the corresponding performance optimization strategy, is used as a sample for the PO-NLP model. Training then follows the conventional AI model training steps in a loop (inputting training samples, running the inference model on them, computing the loss, and adjusting the model parameters) and stops when the loss function reaches its optimal value, at which point the model can recognize that performance problem type. Samples of other performance problem types are trained in the same way, so that the final PO-NLP model can recognize multiple performance problem types and solve the corresponding problems.
The early warning processing module 903.
In this embodiment, after receiving the probability values provided by the PO-NLP model, this module pops up a warning prompt interface. The interface displays the performance problem types and their corresponding probability values, and offers one or more processing methods under each performance problem type; the content of these processing methods all comes from the solution processing setting module. The prompt interface also provides an ignore function: if the user selects it, the system continues to run; if the user selects one or more processing methods, the module processes according to the selected methods.
The solution processing setting module 904.
In this embodiment, one or more processing methods are provided by default for each performance problem type for the user to choose from. When the present application detects a performance risk in the system at some moment, the settings of this module are displayed to the user as an interface. The default is intelligent processing: for the current performance problem, the performance optimization system optimizes according to the learned optimization strategy (which strategy applies to which problem is learned during AI model training). The module also provides some recommended optimization solutions: when serious problems such as memory leaks and crashes are detected, the user is advised to forcibly stop the software running in the system; when a subtask is detected to be causing a system performance anomaly, the user is advised to stop that subtask; when problems such as insufficient memory are detected, the user is advised to release memory; and when a possible network attack or network virus intrusion is detected, the user is advised to disconnect the network connection. The module also supports extension: when a new performance problem type appears, the user can add it along with a custom processing method for that performance problem type.
Embodiment 10
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 10, at the hardware level the electronic device includes a processor and may also include an internal bus, a network interface, and memory. The memory may include main memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to one another through the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bidirectional arrow is shown in FIG. 10, but this does not mean there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions.
The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming a shared resource access control apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the steps of the system performance optimization method described above.
Embodiment 11
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including a plurality of application programs, enable the portable electronic device to perform the methods of the embodiments shown in the accompanying drawings, and specifically to perform the steps of the system performance optimization method described above.
The system performance optimization method of the embodiments of the present application effectively solves the problem that performance optimization cannot be automated, and unifies and integrates the optimization schemes of different systems, thereby also solving the cross-platform problem. By monitoring the system's performance state in real time, it optimizes against the system's existing vulnerabilities, unreasonable running states, or hidden dangers such as system crashes.
The present application trains on the performance states of running systems and the optimization strategies developers have devised for different systems, so that the NLP performance optimization model acquires a system optimization capability similar to a developer's. By monitoring the system's performance state in real time, it computes the probability that the system currently has a performance problem and, using the learned optimization strategies, provides corresponding solutions so as to tune system parameters, prevent system performance degradation, and adjust the system state.
Although the embodiments disclosed in the present application are as above, their content is only an implementation adopted to facilitate understanding of the present application and is not intended to limit it. Any person skilled in the art to which this application belongs may make modifications and changes in the form and details of implementation without departing from the scope disclosed by this application, but the scope of patent protection of this application shall still be subject to the scope defined by the appended claims.

Claims (10)

  1. A system performance optimization method, comprising:
    extracting system state feature information from state information output by a system;
    obtaining, according to the system state feature information, the type and probability of a performance problem occurring in the system; and
    displaying the type of the performance problem and the probability of the performance problem.
  2. The system performance optimization method of claim 1, wherein before the extracting of state feature information from the state information output by the system, the method further comprises: setting processing methods for performance problem types;
    wherein the processing methods for performance problem types comprise: intelligent processing, forcibly stopping the system, releasing memory, and extended processing.
  3. The system performance optimization method of claim 1, wherein the system state feature information comprises: content, length, and value;
    the content comprises a timestamp, memory usage, CPU usage, I/O throughput, a process queue, and a system log;
    the length is variable; and
    the value is a number, a single word, or a marker symbol.
  4. The system performance optimization method of claim 1, wherein the extracting of system state feature information from the state information output by the system further comprises:
    converting the data of the state feature information into a binary stream; and
    filtering invalid information out of the system state information and converting it into vector parameters recognizable by an NLP performance optimization model.
  5. The system performance optimization method of claim 1, wherein the obtaining, according to the system state feature information, of the type and probability of a performance problem occurring in the system further comprises:
    obtaining, based on the system feature information accepted by the NLP performance optimization model, the subset of the performance problem feature set to which the system feature information belongs; and
    calculating the probability of the system exhibiting a performance problem according to the coverage of the subset of the performance problem feature set.
  6. The system performance optimization method of claim 5, wherein before the obtaining, based on the system feature information accepted by the NLP performance optimization model, of the subset of the performance problem feature set to which the system feature information belongs, the method further comprises:
    collecting, for each system performance problem type, optimization strategies for solving that system performance problem type; and
    assembling, by the state feature extraction module, the system state feature information of the same performance problem type and the corresponding optimization strategies into training samples for the NLP performance optimization model.
  7. The system performance optimization method of claim 5, wherein the obtaining, based on the system feature information accepted by the NLP performance optimization model, of the subset of the performance problem feature set to which the system feature information belongs further comprises:
    determining, based on the system feature information received by the NLP performance optimization model and through a forward propagation algorithm, to which subsets of the performance problem feature sets the current feature information belongs, thereby obtaining the subset of the performance problem feature set to which the system feature information belongs.
  8. A system performance optimization apparatus, comprising:
    a system state feature extraction module, an NLP performance optimization model, an early warning processing module, and a solution processing setting module;
    wherein the system state feature extraction module filters and extracts state feature information of a running system;
    the NLP performance optimization model obtains, according to the system state feature information, the type and probability of a performance problem occurring in the system;
    the early warning processing module displays the type of the performance problem and the probability of the performance problem, and can perform corresponding processing according to the user's selection; and
    the solution processing setting module receives user input and sets processing methods for performance problem types.
  9. An electronic device, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions, wherein the executable instructions, when executed, cause the processor to perform the steps of the system performance optimization method of any one of claims 1-7.
  10. A computer-readable storage medium storing one or more programs, wherein the one or more programs, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the steps of the system performance optimization method of any one of claims 1-7.
PCT/CN2021/131517 2020-12-08 2021-11-18 一种系统性能优化方法、装置、电子设备及其可读介质 WO2022121656A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011443796.X 2020-12-08
CN202011443796.XA CN114611743A (zh) 2020-12-08 2020-12-08 一种系统性能优化方法、装置、电子设备及其可读介质

Publications (1)

Publication Number Publication Date
WO2022121656A1 true WO2022121656A1 (zh) 2022-06-16

Family

ID=81855804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131517 WO2022121656A1 (zh) 2020-12-08 2021-11-18 一种系统性能优化方法、装置、电子设备及其可读介质

Country Status (2)

Country Link
CN (1) CN114611743A (zh)
WO (1) WO2022121656A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115296991A (zh) * 2022-08-02 2022-11-04 广东电网有限责任公司 一种网元性能的计算方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653444A (zh) * 2015-12-23 2016-06-08 北京大学 基于互联网日志数据的软件缺陷故障识别方法和系统
CN106156401A (zh) * 2016-06-07 2016-11-23 西北工业大学 基于多组合分类器的数据驱动系统状态模型在线辨识方法
US20180239688A1 (en) * 2017-02-22 2018-08-23 Webomates LLC Method and system for real-time identification of anomalous behavior in a software program
CN111198817A (zh) * 2019-12-30 2020-05-26 武汉大学 一种基于卷积神经网络的SaaS软件故障诊断方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653444A (zh) * 2015-12-23 2016-06-08 北京大学 基于互联网日志数据的软件缺陷故障识别方法和系统
CN106156401A (zh) * 2016-06-07 2016-11-23 西北工业大学 基于多组合分类器的数据驱动系统状态模型在线辨识方法
US20180239688A1 (en) * 2017-02-22 2018-08-23 Webomates LLC Method and system for real-time identification of anomalous behavior in a software program
CN111198817A (zh) * 2019-12-30 2020-05-26 武汉大学 一种基于卷积神经网络的SaaS软件故障诊断方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115296991A (zh) * 2022-08-02 2022-11-04 广东电网有限责任公司 一种网元性能的计算方法及装置
CN115296991B (zh) * 2022-08-02 2023-07-14 广东电网有限责任公司 一种网元性能的计算方法及装置

Also Published As

Publication number Publication date
CN114611743A (zh) 2022-06-10

Similar Documents

Publication Publication Date Title
CN106227780B (zh) 一种海量网页的自动化截图取证方法和系统
CN110515793B (zh) 系统性能监控方法、装置、设备及存储介质
WO2022121656A1 (zh) 一种系统性能优化方法、装置、电子设备及其可读介质
CN107544832A (zh) 一种虚拟机进程的监控方法、装置和系统
CN112954031B (zh) 一种基于云手机的设备状态通知方法
CN113760652B (zh) 基于应用的全链路监控的方法、系统、设备和存储介质
CN109194739A (zh) 一种文件上传方法、存储介质和服务器
CN107645546A (zh) 基于安卓系统的文件监听方法、智能设备及存储介质
CN111669281A (zh) 告警分析方法、装置、设备及存储介质
CN111130867B (zh) 一种基于物联网的智能家居设备告警方法及装置
CN115718674A (zh) 一种数据容灾恢复方法及装置
CN115794472A (zh) 芯片的错误收集及错误处理方法、装置及存储介质
CN115357450A (zh) 基于人工智能的节点维护方法、装置、计算机设备及介质
CN113656252B (zh) 故障定位方法、装置、电子设备以及存储介质
CN112363841B (zh) 应用进程的查杀方法、装置、电子设备及存储介质
CN114064402A (zh) 服务器系统监控方法
CN109189652A (zh) 一种封闭网络终端行为数据的采集方法及系统
CN112910733A (zh) 一种基于大数据的全链路监控系统及方法
CN113411224B (zh) 数据处理方法、装置、电子设备及存储介质
CN115525392A (zh) 容器监控方法、装置、电子设备及存储介质
CN115209452A (zh) 核心网隐患排查方法、装置、电子设备和存储介质
CN113656239A (zh) 针对中间件的监控方法、装置及计算机程序产品
CN111694705A (zh) 监控方法、装置、设备及计算机可读存储介质
CN116701127B (zh) 一种基于大数据的应用性能监控方法及平台
CN108234188B (zh) 一种业务平台资源调度处理方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.10.2023)