CN114741192A - Stutter prediction method, apparatus, terminal device and computer-readable storage medium - Google Patents

Stutter prediction method, apparatus, terminal device and computer-readable storage medium

Info

Publication number
CN114741192A
Authority
CN
China
Prior art keywords
rendering
trained
time period
time
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210355749.2A
Other languages
Chinese (zh)
Inventor
黄文涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210355749.2A
Publication of CN114741192A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining

Abstract

The application is applicable to the field of computer technology, and provides a stutter prediction method, a stutter prediction apparatus, a terminal device and a computer-readable storage medium. The method comprises: acquiring a computing capability parameter of the terminal device in a first time period and a rendering workload in a second time period, wherein the first time period and the second time period are consecutive; and obtaining, through a trained stutter prediction model, a predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period. Whether the terminal device will stutter in the second time period can thus be predicted in advance, and when a stutter in the second time period is predicted, the specific stutter duration can be accurately predicted from the predicted rendering time, so that the terminal device can respond in advance according to that duration, the stutter is avoided, and the flexibility with which the terminal device handles an imminent stutter is improved.

Description

Stutter prediction method, apparatus, terminal device and computer-readable storage medium
Technical Field
The present application belongs to the field of computer technologies, and in particular relates to a stutter prediction method, an apparatus, a terminal device, and a computer-readable storage medium.
Background
With the rapid progress of technology and living standards, electronic devices such as personal computers and smartphones have become common consumer electronics in daily life. Currently, an electronic device is usually equipped with a Graphics Processing Unit (GPU), so that the electronic device can perform image rendering and thus has a certain image processing capability. However, the performance of the graphics processor of the electronic device is limited; when the rendering workload in a short time is too large, the graphics processor cannot complete the image rendering in time and output the result to the display module of the electronic device, so the display of the electronic device tends to stutter.
At present, methods for detecting whether an electronic device stutters generally collect display information to analyze the cause only after the stutter has occurred; the electronic device cannot respond in advance to an imminent stutter, so the stutter cannot be avoided. Therefore, how to predict accurately and in advance whether the electronic device will stutter has become a problem to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide a stutter prediction method to solve the problem that, when the rendering workload is too large in a short time, the graphics processor cannot complete image rendering in time and output the result to the display module of the electronic device, causing the display of the electronic device to stutter.
A first aspect of the embodiments of the present application provides a stutter prediction method, applied to a terminal device, the method comprising:
acquiring a computing capability parameter of the terminal device in a first time period and a rendering workload in a second time period; wherein the first time period and the second time period are consecutive;
and obtaining, through a trained stutter prediction model, a predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
The stutter prediction method provided by the first aspect of the embodiments of the present application enables the terminal device to predict in advance whether a stutter will occur in the second time period, and when a stutter in the second time period is predicted, the specific stutter duration can be accurately predicted from the predicted rendering time, so that the terminal device can respond in advance according to that duration, thereby avoiding the stutter and improving the flexibility with which the terminal device handles an imminent stutter.
A second aspect of the embodiments of the present application provides a stutter prediction apparatus, including:
an acquisition module, configured to acquire the computing capability parameter of the terminal device in a first time period and the rendering workload in a second time period; wherein the first time period and the second time period are consecutive;
and a prediction module, configured to obtain, through a trained stutter prediction model, the predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the stutter prediction method provided by the first aspect of the embodiments of the present application when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the stutter prediction method provided by the first aspect of the embodiments of the present application.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a first stutter prediction method provided by an embodiment of the present application;
fig. 3 is a timing diagram of predicting a rendering time through a trained stutter prediction model while a terminal device outputs rendered frames, provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of a second stutter prediction method provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of a third stutter prediction method provided by an embodiment of the present application;
fig. 6 is a schematic flowchart of a fourth stutter prediction method provided by an embodiment of the present application;
fig. 7 is an architecture diagram of the construction, training, screening and application of the stutter prediction model provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a stutter prediction apparatus provided by an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In application, current methods for detecting whether an electronic device stutters collect display information to analyze the cause only after the stutter has occurred; the electronic device cannot respond in advance to an imminent stutter, so the stutter cannot be avoided. Therefore, how to predict accurately and in advance whether the electronic device will stutter has become a problem to be solved.
In order to solve the above technical problem, an embodiment of the present application provides a stutter prediction method, in which the computing capability parameter of the terminal device in a first time period and the rendering workload in a second time period are acquired, wherein the first time period and the second time period are consecutive; and the predicted rendering time of the terminal device in the second time period is obtained through a trained stutter prediction model according to the computing capability parameter and the rendering workload in the first time period. Whether the terminal device will stutter in the second time period can thus be predicted in advance, and when a stutter is predicted, the specific stutter duration can be accurately predicted from the predicted rendering time, so that the terminal device can respond in advance according to that duration, the stutter is avoided, and the flexibility with which the terminal device handles an imminent stutter is improved.
The stutter prediction method provided by the embodiments of the present application can be applied to any terminal device equipped with a graphics processor. The terminal device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like; the specific type of terminal device is not limited in this embodiment.
Fig. 1 schematically shows the structure of a terminal device 1. The terminal device 1 may include a processor 10, a graphics processor 11, a memory 20, a power module 30, an audio module 40, a camera module 50, a sensor module 60, an input module 70, a display module 80, a wireless communication module 90, and the like. The audio module 40 may include a speaker 41, a microphone 42, and the like; the camera module 50 may include a short-focus camera 51, a long-focus camera 52, a flash 53, and the like; the sensor module 60 may include an infrared sensor 61, an acceleration sensor 62, a position sensor 63, a fingerprint sensor 64, an iris sensor 65, a gyroscope sensor 66, and the like; the input module 70 may include a touch panel 71 and an external input unit 72; and the wireless communication module 90 may include wireless communication units such as Bluetooth, ZigBee, Optical Wireless Communication, Wireless Local Area Network (WLAN), Near Field Communication (NFC), and the like.
In application, the processor 10 may be a Central Processing Unit (CPU), and the processor 10 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In application, the graphics processor 11 may be integrated graphics or a discrete graphics card, depending on how it is connected to the motherboard, and may be a personal computer graphics processor, a server graphics processor, or a mobile graphics processor, depending on the type of terminal device in which it is mounted; the graphics processor 11 may also be a microprocessor that runs graphics operations. When the graphics processor 11 is integrated graphics, it may be integrated into the motherboard of the terminal device, or integrated into the processor 10 of the terminal device. The embodiment of the present application does not limit the specific type of the graphics processor 11.
In application, in some embodiments the memory 20 may be an internal storage unit of the terminal device, for example a hard disk or an internal memory of the terminal device. In other embodiments, the memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device. Further, the memory may include both an internal storage unit of the terminal device and an external storage device. The memory is used to store the operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of a computer program; the memory may also be used to temporarily store data that has been output or is to be output.
It is to be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal device 1. In other embodiments of the present application, the terminal device 1 may include more or fewer components than shown, combine some components, or use different components, and may further include, for example, an output device, a network access device, and the like. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
As shown in fig. 2, the stutter prediction method provided by the embodiment of the present application, applied to a terminal device, includes the following steps S201 and S202:
step S201, acquiring an operational capability parameter of the terminal equipment in a first time period and a rendering operation amount in a second time period; wherein the first time period and the second time period are consecutive.
In application, the first time period is a time period in which the terminal device outputs rendered frames. While the terminal device outputs rendered frames in real time in the first time period, the computing capability parameter of the terminal device in the first time period can be acquired. The computing capability parameter may include the CPU usage, CPU frequency, GPU usage, GPU frequency, and the like, and the terminal device can acquire the computing capability parameter by reading the operating status of the CPU and the GPU.
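Purely as an illustrative sketch (not part of the patent), the computing capability parameters named above could be sampled on a desktop system with the third-party psutil package; the GPU readings are platform-specific, so they are represented here by a hypothetical placeholder helper.

```python
import psutil  # third-party package, assumed available

def read_gpu_stats():
    # Hypothetical helper: real GPU usage/frequency would come from a
    # vendor-specific driver interface; fixed placeholder values are
    # returned here purely for illustration.
    return {"gpu_usage": 0.0, "gpu_freq_mhz": 0.0}

def sample_computing_capability():
    """Collect the computing capability parameters listed in the text."""
    freq = psutil.cpu_freq()
    gpu = read_gpu_stats()
    return {
        "cpu_usage": psutil.cpu_percent(interval=0.1),  # CPU usage (%)
        "cpu_freq_mhz": freq.current if freq else 0.0,  # CPU frequency (MHz)
        "gpu_usage": gpu["gpu_usage"],                  # GPU usage (%)
        "gpu_freq_mhz": gpu["gpu_freq_mhz"],            # GPU frequency (MHz)
    }

if __name__ == "__main__":
    print(sample_computing_capability())
```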
In application, the second time period is consecutive with, and follows, the first time period. While the terminal device outputs rendered frames in real time in the first time period, the frames to be rendered in the second time period are sent to the GPU, and the GPU can obtain the rendering workload from the frames to be rendered in the second time period. Specifically, the GPU can first determine the number of frames to be rendered in the second time period, obtain the rendering workload for rendering each frame, and add the per-frame rendering workloads to obtain the rendering workload of the second time period. The CPU of the terminal device can obtain the rendering workload of the second time period by reading the operating status of the GPU, or the GPU can be configured to send the rendering workload to the CPU after it has been obtained.
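As a minimal sketch (an illustration, not the patent's code), the rendering workload of the second time period could be accumulated from the per-frame workloads as follows; the unit of workload is left abstract.

```python
def total_rendering_workload(per_frame_workloads):
    """Sum the rendering workload of every frame queued for the second
    time period (the unit, e.g. shader operations or draw calls, is an
    assumption and left abstract here)."""
    return sum(per_frame_workloads)

# Example: five frames queued for the second time period (made-up values)
frames = [1.2e6, 1.1e6, 1.4e6, 1.3e6, 1.2e6]
print(total_rendering_workload(frames))  # total workload: 6.2e6 operations
```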
It should be noted that the time lengths of the first time period and the second time period may be equal or unequal, and the specific time lengths of the first time period and the second time period may be set according to actual needs.
Step S202: obtaining, through the trained stutter prediction model, the predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
In application, the inputs of the trained stutter prediction model are the computing capability parameter and the rendering workload, and its output is the predicted rendering time. The rendering workload is positively correlated with the predicted rendering time, and the computing capability parameter is negatively correlated with the predicted rendering time; that is, the stronger the computing capability of the terminal device, the faster the rendering. The trained stutter prediction model can determine the rendering speed of the terminal device from the computing capability parameter, and thus obtain the predicted rendering time from the current rendering speed and the rendering workload.
In application, the computing capability parameter of the first time period and the rendering workload of the second time period are input into the trained stutter prediction model to obtain the predicted rendering time of the second time period. The working principle is as follows: assuming that the computing capability parameter of the terminal device in the second time period is consistent with that in the first time period, the rendering time required to complete the rendering workload of the second time period is obtained, so that the rendering time of the second time period is predicted during the first time period and whether the terminal device will stutter is predicted in advance. The stutter duration of the electronic device can also be accurately predicted from the predicted rendering time. Specifically, the predicted stutter duration of the terminal device in the second time period can be obtained from the time difference between the predicted rendering time and a first preset time: when the time difference is greater than 0, the GPU cannot finish rendering within the second time period, the terminal device will stutter in the second time period, and the larger the time difference, the longer the predicted stutter duration; when the time difference is less than or equal to 0, the GPU can finish rendering within the second time period, and the terminal device will not stutter in the second time period. The first preset time may be determined according to the duration for which the terminal device outputs rendered frames, and may specifically be determined according to the second time period (for example, set equal to the second time period).
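The following is a minimal sketch of this working principle (the function name and the numbers are assumptions for illustration only): the predicted stutter duration is the positive part of the difference between the predicted rendering time and the first preset time.

```python
def predicted_stutter_duration(predicted_render_time, first_preset_time):
    """Return the predicted stutter duration for the second time period.

    A positive difference means the GPU cannot finish rendering in time
    and the device is expected to stutter for that long; a zero or
    negative difference means no stutter is expected.
    """
    time_difference = predicted_render_time - first_preset_time
    return max(time_difference, 0.0)

# Illustrative numbers: a 20 ms budget with 26 ms of predicted rendering time
print(predicted_stutter_duration(0.026, 0.020))  # 0.006 s of predicted stutter
```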
Fig. 3 exemplarily shows a timing diagram of predicting the rendering time through the trained stutter prediction model while the terminal device outputs rendered frames. It should be noted that the embodiment of the present application only exemplarily describes predicting, in the first time period, the rendering time of the second time period; the stutter prediction method for adjacent time periods (for example, predicting, in the second time period, the rendering time of a third time period) may refer to the above description and is not repeated here.
In application, the computing capability parameter of the terminal device in the first time period and the rendering workload in the second time period are acquired, wherein the first time period and the second time period are consecutive; the predicted rendering time of the terminal device in the second time period is obtained through the trained stutter prediction model according to the computing capability parameter and the rendering workload in the first time period. Whether the terminal device will stutter in the second time period can thus be predicted in advance, and when a stutter is predicted, the specific stutter duration can be accurately predicted from the predicted rendering time, so that the terminal device can respond in advance according to that duration, the stutter is avoided, and the flexibility with which the terminal device handles an imminent stutter is improved.
As shown in fig. 4, based on the embodiment corresponding to fig. 2, in one embodiment, the method includes the following steps S401 to S405:
the following describes the construction, training, and screening method of the katon prediction model through steps S401 to S403:
step S401, performing rendering operation according to a training task, and acquiring first training information to construct at least one Kanton prediction model to be trained; the first training information comprises hardware information of the terminal device, a first operational capability parameter, a first rendering operand and first rendering time.
In application, the training task may be to control the terminal device to render a preset image, or to render an image specified by a user during the daily operation of the terminal device (for example, drawing an image, running a game, or playing a video, all of which require the graphics processor to render images).
In application, when the terminal device performs the rendering operation according to the training task, the operating status of the GPU may be collected to obtain training information, which may include the computing capability parameter, the rendering workload, the rendering time, and other parameters. According to its purpose, the training information can be divided into first training information and second training information: the first training information is used to construct the stutter prediction model, and the second training information is used to train it. It should be noted that the hardware information of the terminal device also needs to be obtained when constructing the stutter prediction model, so the first training information further includes the hardware information of the terminal device, which may specifically include the memory (RAM) capacity, memory frequency, CPU model, GPU model, and motherboard model of the terminal device. Performance information such as the core count, frequency range and cache capacity of the CPU can be determined from the CPU model; similarly, performance information such as the core count, frequency range and cache capacity of the GPU can be determined from the GPU model; and performance information such as the CPU supply voltage, GPU supply voltage and memory supply voltage of the motherboard can be determined from the motherboard model.
In application, the hardware information of the terminal device represents the basic computing capability of the terminal device and is a constant in the stutter prediction model; the first/second computing capability parameter represents the actual computing capability of the terminal device when performing the rendering operation. The first/second computing capability parameter and the first rendering workload are independent variables in the stutter prediction model, and the first rendering time is the dependent variable. The hardware information, first computing capability parameter, first rendering workload and first rendering time of the terminal device are input into a machine learning model to preliminarily determine the model parameters of the stutter prediction model, completing its construction. For the specific construction method, refer to the description of steps S501 and S502 below.
Step S402: performing a rendering operation according to the training task, and acquiring second training information to train the at least one stutter prediction model to be trained; the second training information includes a second computing capability parameter, a second rendering workload and a second rendering time.
In application, after the construction of the stutter prediction model is completed, the stutter prediction model to be trained needs to be trained. Specifically, the second computing capability parameter and the second rendering workload can be input into the stutter prediction model as independent variables, and the model can be trained according to the second rendering time and the predicted rendering time output as the dependent variable, so as to improve the accuracy of the predicted rendering time output by the stutter prediction model. For the specific training method, refer to the description of steps S503 to S506 below.
Step S403: obtaining the comprehensive performance score of the at least one stutter prediction model to be trained, and screening to obtain the trained stutter prediction model.
In application, during the training of each stutter prediction model to be trained, or after its number of training iterations reaches the preset number of optimizations, the comprehensive performance score of each stutter prediction model to be trained can be obtained, and the one with the highest comprehensive performance score is selected as the trained stutter prediction model. In this way, the stutter prediction model that predicts rendering time best is obtained from the various candidates, improving prediction accuracy. For the specific screening method, refer to the description of steps S507 to S509 below.
Step S404: acquiring the computing capability parameter of the terminal device in a first time period and the rendering workload in a second time period; wherein the first time period and the second time period are consecutive;
Step S405: obtaining, through the trained stutter prediction model, the predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
In application, steps S404 and S405 are consistent with the stutter prediction method provided in steps S201 and S202, and are not described here again.
In application, the rendering operation is performed according to the training task and the first and second training information are obtained. By constructing at least one stutter prediction model to be trained from the hardware information, first computing capability parameter, first rendering workload and first rendering time of the terminal device, the stutter prediction model can be constructed on the basis of the actual hardware information of the terminal device, so that it is applicable to terminal devices on different hardware platforms, improving its applicability. By training the at least one stutter prediction model to be trained with the second computing capability parameter, second rendering workload and second rendering time, the stutter prediction model can be trained on second training information produced by the terminal device during actual rendering, which ensures the training effect and improves the accuracy of the predicted rendering time obtained by the trained model. By obtaining the comprehensive performance score of the at least one stutter prediction model to be trained and screening accordingly, the stutter prediction model with the best comprehensive performance can be selected from the various candidates, further improving the accuracy of the predicted rendering time.
As shown in fig. 5, based on the embodiment corresponding to fig. 4, in one embodiment, the following steps S501 to S511 are included:
step S501, rendering operation is executed according to the first training task to obtain hardware information, a first operational capacity parameter, a first rendering operand and first rendering time of the terminal device;
step S502, taking hardware information, a first operational capability parameter and a first rendering operand of the terminal equipment as independent variables, taking first rendering time as a dependent variable, and constructing at least one stuck prediction model to be trained by adopting at least one machine learning model; a machine learning model is used for constructing a Katon prediction model to be trained.
In application, when a stutter prediction model to be trained is constructed, the hardware information, first computing capability parameter and first rendering workload of the terminal device are used as independent variables, the first rendering time is used as the dependent variable, and the model structure of the stutter prediction model to be trained is determined according to the selected Machine Learning model; specifically, different types of machine learning models may be used, such as a Linear Regression (LR) model, a Polynomial Regression (PR) model or a Recurrent Neural Network (RNN) model.
In one embodiment, a linear regression model is used to construct the stutter prediction model to be trained, with the relation:

Time = α1·Configuration + α2·Computing + α3·Calculation + ε;

wherein Time represents the first rendering time, Configuration represents the value corresponding to the hardware information of the terminal device, Computing represents the first computing capability parameter, Calculation represents the first rendering workload, α1 represents the model parameter of the value corresponding to the hardware information of the terminal device, α2 represents the model parameter of the first computing capability parameter, α3 represents the model parameter of the first rendering workload, and ε represents a first preset model parameter.
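Purely as a sketch under assumptions (made-up feature values, scikit-learn as the fitting tool; the patent does not prescribe an implementation), such a linear relation could be fitted from logged first training information as follows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [Configuration, Computing, Calculation]; target: rendering time in ms.
# All numbers are placeholders standing in for logged first training information.
X = np.array([
    [8.0, 0.45, 1.2e6],
    [8.0, 0.60, 1.6e6],
    [8.0, 0.75, 2.1e6],
    [8.0, 0.55, 1.4e6],
])
y = np.array([14.0, 17.5, 22.0, 15.8])

model = LinearRegression()   # learns alpha1..alpha3 plus the intercept epsilon
model.fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict([[8.0, 0.50, 1.5e6]]))  # predicted rendering time for new inputs
```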
In one embodiment, a polynomial regression model is used to construct the stutter prediction model to be trained, with the relation:

[Polynomial regression relation, given as an equation image in the original publication]

wherein β0 represents a second preset model parameter, β1 represents a first combined model parameter, β2 represents a second combined model parameter, βn represents an n-th combined model parameter, and n is a positive integer greater than or equal to 1.
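Since the exact polynomial form is only available as an image, the sketch below merely illustrates, under assumption, how a polynomial-regression variant could be fitted on the same made-up features using a degree-2 feature expansion.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Same placeholder training samples as in the linear sketch above.
X = np.array([
    [8.0, 0.45, 1.2e6],
    [8.0, 0.60, 1.6e6],
    [8.0, 0.75, 2.1e6],
    [8.0, 0.55, 1.4e6],
])
y = np.array([14.0, 17.5, 22.0, 15.8])

# Degree-2 polynomial expansion of the features, then a linear fit on top.
poly_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_model.fit(X, y)
print(poly_model.predict([[8.0, 0.50, 1.5e6]]))
```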
It should be noted that the above relational expressions of the stutter prediction model to be trained are exemplary; the embodiments of the present application do not limit the type of the stutter prediction model to be trained, the type of machine learning model used, or the specific relational expression.
Step S503: performing a rendering operation according to a second training task to obtain the second computing capability parameter, second rendering workload and second rendering time of the terminal device;
Step S504: for each stutter prediction model to be trained, obtaining the predicted rendering time according to the second computing capability parameter and the second rendering workload through the stutter prediction model to be trained.
In application, for each type of stutter prediction model to be trained, after its construction is completed, the second computing capability parameter and second rendering workload can be input into it so that it outputs a predicted rendering time. Since the hardware information of the terminal device is a constant, the independent variables input into the stutter prediction model to be trained are the second computing capability parameter and the second rendering workload, and the output dependent variable is the predicted rendering time.
Step S505: constructing a loss function according to the predicted rendering time and the second rendering time.
In application, each input set of second computing capability parameter and second rendering workload has a corresponding second rendering time and a corresponding predicted rendering time. For each set of second training information, the corresponding second rendering time is taken as the true value and the corresponding predicted rendering time as the predicted value, and a loss function is constructed from them to quantify the accuracy of the predicted rendering time output by the current stutter prediction model to be trained.
In application, the loss function may be of different types, such as a 0-1 Loss Function, an L1-norm Loss Function (Mean Absolute Error Loss Function), a Logarithmic Loss Function (Logistic Loss Function), a Quadratic Loss Function, an Exponential Loss Function, or a Cross-Entropy Loss Function, and the type of loss function may be selected according to actual needs.
Step S506: optimizing the stutter prediction model to be trained according to the loss function.
In application, for each type of stutter prediction model to be trained, a corresponding loss function can be obtained by inputting a set of second training information into it, and the current stutter prediction model to be trained can be optimized on the basis of this loss function so as to adjust its model parameters and continuously improve the accuracy of the predicted rendering time it outputs.
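As an illustrative sketch only (the patent does not fix the loss or the optimizer), an L1-norm loss between predicted and actual second rendering times could be computed and used for a single gradient-descent update of a linear model; the scaling of the sample values is an assumption.

```python
import numpy as np

def mae_loss(predicted_times, actual_times):
    """L1-norm (mean absolute error) loss between predicted and actual
    second rendering times."""
    return float(np.mean(np.abs(np.asarray(predicted_times) - np.asarray(actual_times))))

def sgd_step(weights, features, actual_time, lr=1e-3):
    """One gradient-descent update of a linear stutter prediction model on a
    single sample; the sign of the error drives the L1-loss gradient."""
    predicted = float(np.dot(weights, features))
    grad = np.sign(predicted - actual_time) * np.asarray(features)
    return weights - lr * grad

print(mae_loss([14.2, 18.0], [15.0, 17.1]))   # loss over two samples
weights = np.zeros(3)                         # [hardware, capability, workload] coefficients
sample = np.array([1.0, 0.5, 1.5])            # normalized second training info (made up)
weights = sgd_step(weights, sample, actual_time=16.0)
print(weights)
```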
Step S507: for each stutter prediction model to be trained, after its number of optimizations reaches the preset number of optimizations, obtaining the predicted rendering times from the most recent k optimizations;
Step S508: obtaining the comprehensive performance score of the stutter prediction model to be trained according to the second rendering times and the predicted rendering times from the most recent k optimizations.
In application, the training requirement of the stutter prediction model to be trained can be set by specifying a preset number of optimizations, which determines the target number of optimizations, or by specifying a preset accuracy for the predicted rendering time output by the stutter prediction model to be trained.
In application, when the target number of optimizations is determined by the preset number of optimizations, after the number of optimizations reaches the preset number, the k predicted rendering times output during the most recent k optimizations are obtained, and k error durations are obtained by subtracting each predicted rendering time from the corresponding second rendering time, so that the comprehensive performance score of the stutter prediction model to be trained is obtained from these k error durations.
In one embodiment, step S508 includes:
obtaining a stability performance score and an accuracy performance score of the stutter prediction model to be trained according to the predicted rendering times from the most recent k optimizations and the corresponding second rendering times;
and calculating the comprehensive performance score of the stutter prediction model to be trained according to its stability performance score and accuracy performance score.
In application, after the k error durations are obtained, their variance or standard deviation can be calculated to reflect how much the k error durations fluctuate and to obtain the stability performance score of the stutter prediction model to be trained, where the variance or standard deviation of the k error durations is negatively correlated with the stability performance score, i.e. the smaller the fluctuation of the k error durations, the higher the stability performance score. The mean error duration can be obtained by averaging the k error durations to obtain the accuracy performance score, where the mean error duration is negatively correlated with the accuracy performance score, i.e. the smaller the mean error duration, the higher the accuracy performance score.
In application, the relation for calculating the comprehensive performance score of the stutter prediction model to be trained from its stability performance score and accuracy performance score is:

Synthesize = η1·Stability + η2·Veracity;

wherein Synthesize represents the comprehensive performance score, Stability represents the stability performance score, Veracity represents the accuracy performance score, η1 represents the weighting coefficient of the stability performance score, and η2 represents the weighting coefficient of the accuracy performance score. η1 may be equal to 0.3 and η2 may be equal to 0.7; the values of η1 and η2 can be set according to actual needs, and the embodiment of the present application does not limit them in any way.
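A minimal sketch of this scoring follows (an assumption: the patent only requires that the stability score fall as the variance of the k errors grows and the accuracy score fall as their mean grows; the 1/(1+x) mapping and the numbers are illustrative).

```python
import numpy as np

def composite_performance_score(predicted_times, actual_times, eta1=0.3, eta2=0.7):
    """Combine a stability score and an accuracy score for the last k
    optimizations into a weighted comprehensive score."""
    errors = np.abs(np.asarray(predicted_times) - np.asarray(actual_times))
    stability = 1.0 / (1.0 + np.var(errors))   # lower variance of errors -> higher score
    veracity = 1.0 / (1.0 + np.mean(errors))   # lower mean error -> higher score
    return eta1 * stability + eta2 * veracity

# Last k = 5 predictions versus ground truth (made-up millisecond values)
print(composite_performance_score([15.8, 16.2, 17.0, 16.5, 15.9],
                                  [16.0, 16.0, 16.8, 16.7, 16.1]))
```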
Step S509: screening out the trained stutter prediction model according to the comprehensive performance score of each type of stutter prediction model to be trained.
In application, the comprehensive performance score of each stutter prediction model to be trained can be obtained, and the one with the highest comprehensive performance score is selected as the trained stutter prediction model, so that the stutter prediction model that predicts rendering time best is obtained from the various candidates, improving prediction accuracy.
Step S510: acquiring the computing capability parameter of the terminal device in a first time period and the rendering workload in a second time period; wherein the first time period and the second time period are consecutive;
Step S511: obtaining, through the trained stutter prediction model, the predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
In application, steps S510 and S511 are consistent with the stutter prediction method provided in steps S201 and S202, and are not described here again.
In application, multiple types of stutter prediction models to be trained can be constructed and trained, and the one with the best comprehensive performance can be screened out from them, improving the accuracy of the predicted rendering time output by the stutter prediction model.
As shown in fig. 6, based on the embodiment corresponding to fig. 5, in one embodiment, the following steps S601 to S613 are included:
step S601, performing rendering operation according to the first training task to obtain hardware information, a first operational capability parameter, a first rendering operand and first rendering time of the terminal equipment;
step S602, taking hardware information, a first operational capability parameter and a first rendering operand of the terminal equipment as independent variables, taking first rendering time as a dependent variable, and constructing at least one stuck prediction model to be trained by adopting at least one machine learning model; a machine learning model is used for constructing a Katon prediction model to be trained.
Step S603, performing rendering operation according to the second training task to obtain a second operational capability parameter, a second rendering operand and second rendering time of the terminal equipment;
step S604, for each Kanton prediction model to be trained, obtaining predicted rendering time according to a second operational capability parameter and a second rendering operand through the Kanton prediction model to be trained;
step S605, constructing a loss function according to the predicted rendering time and the second rendering time;
and S606, optimizing the Katon prediction model to be trained according to the loss function.
Step S607, for each Kayton prediction model to be trained, obtaining the prediction rendering time after the latest k is suboptimal after the optimization times of the Kayton prediction model to be trained reach the preset optimization times;
step S608, acquiring comprehensive performance scores of the Katon prediction model to be trained according to the second rendering time and the predicted rendering time after the k is optimized recently;
step S609, according to the comprehensive performance score of each Kanton prediction model to be trained, screening to obtain a trained Kanton prediction model;
step S610, acquiring an operational capability parameter of the terminal equipment in a first time period and a rendering operation amount in a second time period; wherein the first time period and the second time period are consecutive;
step S611, obtaining a predicted rendering time of the terminal device in the second time period according to the computation capability parameter and the rendering computation amount in the first time period through the trained katon prediction model.
In application, steps S601 to S611 are consistent with the stutter prediction method provided in steps S501 to S511, and are not repeated here.
Step S612: obtaining a frame rate stability score for the second time period according to the predicted rendering time of each frame in the second time period.
In application, after the predicted rendering time of the second time period is obtained through the trained stutter prediction model, the predicted rendering time of each frame in the second time period can be obtained and the frame rate stability score for the second time period calculated. Specifically, the frame rate stability score for the second time period may be obtained by calculating the variance or standard deviation of the per-frame predicted rendering times in the second time period, so as to reflect the frame rate stability in that period, where the frame rate stability score is negatively correlated with the variance or standard deviation of the per-frame predicted rendering times.
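A minimal sketch of such a score follows (the 1/(1+x) mapping is an assumption; the text above only requires a negative correlation with the variance or standard deviation of the per-frame predicted rendering times).

```python
import numpy as np

def frame_rate_stability_score(per_frame_predicted_times):
    """Higher when the per-frame predicted rendering times of the second
    time period fluctuate less."""
    return 1.0 / (1.0 + float(np.std(per_frame_predicted_times)))

# Made-up per-frame predictions (ms) for the frames of the second time period
print(frame_rate_stability_score([16.1, 16.4, 15.9, 16.2]))  # steady -> high score
print(frame_rate_stability_score([12.0, 25.0, 14.0, 30.0]))  # jittery -> low score
```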
Step S613: when the frame rate stability score is smaller than a first preset stability score, or the predicted rendering time is greater than a first preset time, adjusting the computing capability parameter to increase the computing capability of the terminal device in the second time period.
In application, whether the terminal device will stutter can be quantified by setting preset stability scores and a first preset time. Specifically, when the frame rate stability score is smaller than the first preset stability score, or the predicted rendering time is greater than the first preset time, it can be determined that the terminal device will stutter; at this time, the computing capability of the terminal device in the second time period can be increased by adjusting the computing capability parameter, specifically by raising any one or more of the CPU usage, CPU frequency, GPU usage and GPU frequency.
In application, after it is determined that the terminal device will stutter, the stability score difference between the frame rate stability score and the first preset stability score can be calculated, and the stability-score-difference level to which it belongs can be determined; each stability-score-difference level has a corresponding preset computing capability parameter level, and each preset computing capability parameter level specifies any one or more of a specific CPU usage, CPU frequency, GPU usage or GPU frequency. Similarly, the time difference between the predicted rendering time and the first preset time can be calculated and the time-difference level to which it belongs determined, each time-difference level having a corresponding preset computing capability parameter level. This realizes multi-level adjustment of the computing capability parameters, so that adjustments of different magnitudes can be made according to the actual stutter situation of the terminal device, eliminating the stutter while reducing power consumption and achieving a balance between performance and power consumption.
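The level lookup could be sketched as follows (the thresholds, the number of levels and the target frequencies are assumptions; the text above only requires that a larger difference select a more aggressive preset).

```python
def select_capability_level(score_gap, level_thresholds, capability_levels):
    """Map a stability-score (or time) difference to a preset computing
    capability parameter level."""
    for threshold, level in zip(level_thresholds, capability_levels):
        if score_gap <= threshold:
            return level
    return capability_levels[-1]   # largest gap -> most aggressive preset

# Hypothetical three-level table: difference -> target GPU frequency (MHz)
thresholds = [0.05, 0.15]
levels = [{"gpu_freq_mhz": 600}, {"gpu_freq_mhz": 750}, {"gpu_freq_mhz": 900}]
print(select_capability_level(0.12, thresholds, levels))  # {'gpu_freq_mhz': 750}
```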
In application, when the computing capability parameter is adjusted, a smooth parameter adjustment sequence can be formed between the current computing capability parameter and the target computing capability parameter through interpolation, so that the current computing capability parameter approaches the target computing capability parameter frame by frame. This ensures that the displayed image does not change abruptly and improves the display effect of the terminal device.
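A minimal sketch of such a smoothing sequence, assuming linear interpolation (the text above only specifies an interpolation-based smooth transition; the values are illustrative):

```python
def smooth_adjustment_sequence(current, target, steps):
    """Linearly interpolate from the current computing capability value to
    the target over a number of frames, so the change is applied gradually
    rather than as an abrupt jump."""
    return [current + (target - current) * (i + 1) / steps for i in range(steps)]

# Example: ramp the GPU frequency from 600 MHz to 900 MHz over 6 frames
print(smooth_adjustment_sequence(600.0, 900.0, 6))
# [650.0, 700.0, 750.0, 800.0, 850.0, 900.0]
```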
In one embodiment, after step S612, the method further includes:
when the frame rate stability score is greater than or equal to a second preset stability score and the predicted rendering time of the second time period is less than or equal to a second preset time, adjusting the computing capability parameter to reduce the computing capability of the terminal device in the second time period.
In application, when the frame rate stability score is greater than or equal to the second preset stability score and the predicted rendering time is less than or equal to the second preset time, it can be determined that the terminal device will not stutter; at this time, the computing capability of the terminal device in the second time period can be reduced by lowering the computing capability parameter, thereby reducing the power consumption of the terminal device.
It should be noted that the specific values of the first preset stability score, the second preset stability score, the first preset time and the second preset time may be set according to actual needs, and that the number of stability-score-difference levels, preset computing capability parameter levels and time-difference levels, as well as the specific values at each level, may also be set according to actual needs.
Fig. 7 exemplarily shows an architecture diagram of the construction, training, screening and application of the stutter prediction model. For the specific method of obtaining the training information, refer to steps S601 and S603; for constructing and training the stutter prediction model, refer to step S602 and steps S604 to S606; for screening the stutter prediction model with the best comprehensive performance, refer to steps S607 to S609; for predicting the rendering time with the trained stutter prediction model, refer to steps S610 and S611; and for adjusting the computing capability according to the stutter situation, refer to steps S612 and S613. These are not repeated here.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
As shown in fig. 8, an embodiment of the present application further provides a stutter prediction apparatus 8, configured to perform the steps in the above embodiments of the stutter prediction method applied to the terminal device. The stutter prediction apparatus may be a virtual appliance in the terminal device, executed by the processor of the terminal device, or may be the terminal device itself.
As shown in fig. 8, the stutter prediction apparatus 8 provided by the embodiment of the present application includes:
an acquisition module 81, configured to acquire the computing capability parameter of the terminal device in a first time period and the rendering workload in a second time period; wherein the first time period and the second time period are consecutive;
and a prediction module 82, configured to obtain, through a trained stutter prediction model, the predicted rendering time of the terminal device in the second time period according to the computing capability parameter and the rendering workload in the first time period.
In application, each module of the stutter prediction apparatus 8 may be a software program module, may be implemented by different logic circuits integrated in a processor, or may be implemented by a plurality of distributed processors.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as that of the embodiment of the method of the present application, specific functions and technical effects thereof may be specifically referred to a part of the embodiment of the method, and details are not described here.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module, and the integrated module may be implemented in a form of hardware, or in a form of software functional module. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the present application. For the specific working process of the modules in the system, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above embodiments of the stutter prediction method.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk.
In the above embodiments, the descriptions of the respective embodiments have their respective emphases; for parts that are not described or detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative: the division into modules is only a logical functional division, and other divisions are possible in actual implementation; for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.

Claims (10)

1. A Caton prediction method applied to a terminal device, the method comprising:
acquiring an operational capability parameter of the terminal device in a first time period and a rendering operation amount in a second time period; wherein the first time period and the second time period are consecutive;
and obtaining, through a trained Caton prediction model, a predicted rendering time of the terminal device in the second time period according to the operational capability parameter in the first time period and the rendering operation amount.
2. The Caton prediction method of claim 1, wherein before the obtaining, through the trained Caton prediction model, the predicted rendering time of the terminal device in the second time period according to the operational capability parameter in the first time period and the rendering operation amount, the method further comprises:
performing a rendering operation according to a training task and acquiring first training information to construct at least one Caton prediction model to be trained; wherein the first training information comprises hardware information of the terminal device, a first operational capability parameter, a first rendering operation amount and a first rendering time;
performing a rendering operation according to the training task and acquiring second training information to train the at least one Caton prediction model to be trained; wherein the second training information comprises a second operational capability parameter, a second rendering operation amount and a second rendering time;
and acquiring a comprehensive performance score of the at least one Caton prediction model to be trained, and screening to obtain the trained Caton prediction model.
3. The Caton prediction method of claim 2, wherein the performing a rendering operation according to a training task and acquiring first training information to construct at least one Caton prediction model to be trained comprises:
executing a rendering operation according to a first training task to obtain the hardware information, the first operational capability parameter, the first rendering operation amount and the first rendering time of the terminal device;
and taking the hardware information of the terminal device, the first operational capability parameter and the first rendering operation amount as independent variables and the first rendering time as a dependent variable, and constructing at least one Caton prediction model to be trained by adopting at least one machine learning model; wherein each machine learning model is used to construct one Caton prediction model to be trained.
4. The Caton prediction method of claim 2, wherein the performing a rendering operation according to the training task and acquiring second training information to train the at least one Caton prediction model to be trained comprises:
executing a rendering operation according to a second training task to obtain the second operational capability parameter, the second rendering operation amount and the second rendering time of the terminal device;
for each Caton prediction model to be trained, obtaining a predicted rendering time through the Caton prediction model to be trained according to the second operational capability parameter and the second rendering operation amount;
constructing a loss function according to the predicted rendering time and the second rendering time;
and optimizing the Caton prediction model to be trained according to the loss function.
5. The Caton prediction method of claim 2, wherein the acquiring the comprehensive performance score of the at least one Caton prediction model to be trained and screening to obtain the trained Caton prediction model comprises:
for each Caton prediction model to be trained, after the number of optimizations of the Caton prediction model to be trained reaches a preset number of optimizations, obtaining the predicted rendering times of the last k optimizations;
acquiring the comprehensive performance score of the Caton prediction model to be trained according to the second rendering time and the predicted rendering times of the last k optimizations;
and screening to obtain the trained Caton prediction model according to the comprehensive performance score of each Caton prediction model to be trained;
wherein k is a positive integer.
6. The Caton prediction method of claim 5, wherein the acquiring the comprehensive performance score of the Caton prediction model to be trained according to the second rendering time and the predicted rendering times of the last k optimizations comprises:
obtaining a stability performance score and an accuracy performance score of the Caton prediction model to be trained according to the predicted rendering times of the last k optimizations and the second rendering time;
and calculating the comprehensive performance score of the Caton prediction model to be trained according to the stability performance score and the accuracy performance score of the Caton prediction model to be trained.
7. The Caton prediction method of any one of claims 1 to 6, wherein after the obtaining, through the trained Caton prediction model, the predicted rendering time of the terminal device in the second time period according to the operational capability parameter in the first time period and the rendering operation amount, the method further comprises:
acquiring a frame rate stability score for the second time period according to the predicted rendering time of each frame in the second time period;
and when the frame rate stability score is smaller than a first preset stability score or the predicted rendering time is greater than a first preset time, adjusting the operational capability parameter so as to improve the operational capability of the terminal device in the second time period.
8. A Caton prediction apparatus, comprising:
an acquisition module configured to acquire an operational capability parameter of a terminal device in a first time period and a rendering operation amount in a second time period; wherein the first time period and the second time period are consecutive;
and a prediction module configured to obtain, through a trained Caton prediction model, a predicted rendering time of the terminal device in the second time period according to the operational capability parameter in the first time period and the rendering operation amount.
9. A terminal device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the Caton prediction method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the Caton prediction method according to any one of claims 1 to 7.
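The following is a non-limiting sketch, in Python, of the training, model-selection and runtime-adjustment flow recited in claims 2 to 7. The scikit-learn regressor types, the 1/(1 + x) form of the stability and accuracy scores, the equal score weights, the 16.7 ms frame budget and every identifier are assumptions chosen for readability; the hardware information of claim 3 is omitted from the feature vector for brevity.

```python
# Non-limiting sketch of the flow in claims 2-7.  Regressor choices, the
# 1/(1 + x) score shapes, the weights and all names below are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression


def build_candidates():
    # Claim 3: one Caton prediction model to be trained per machine learning model.
    return [LinearRegression(),
            RandomForestRegressor(n_estimators=100),
            GradientBoostingRegressor()]


def composite_score(actual_times, predicted_times, w_stable=0.5, w_accurate=0.5):
    # Claims 5 and 6: combine a stability score and an accuracy score into a
    # comprehensive performance score.
    err = np.abs(np.asarray(actual_times) - np.asarray(predicted_times))
    accuracy_score = 1.0 / (1.0 + err.mean())    # lower mean error   -> higher score
    stability_score = 1.0 / (1.0 + err.std())    # lower error spread -> higher score
    return w_stable * stability_score + w_accurate * accuracy_score


def select_model(candidates, X_train, y_train, X_val, y_val):
    # Claim 4: optimize each candidate on the second training information, then
    # (claim 5) screen out the candidate with the best comprehensive score.
    best_model, best_score = None, float("-inf")
    for model in candidates:
        model.fit(X_train, y_train)              # loss minimized internally by fit()
        score = composite_score(y_val, model.predict(X_val))
        if score > best_score:
            best_model, best_score = model, score
    return best_model


def raise_operational_capability(capability_mhz, step_mhz=100):
    # Hypothetical stand-in for the platform call that boosts CPU/GPU frequency.
    return capability_mhz + step_mhz


def react_to_prediction(model, capability_mhz, render_loads,
                        min_stability=0.8, max_render_time_ms=16.7):
    # Claims 1 and 7: predict the per-frame rendering time for the second time
    # period and raise the operational capability when a stutter is expected.
    X = np.column_stack([np.full(len(render_loads), capability_mhz), render_loads])
    predicted_times = model.predict(X)
    frame_rate_stability = 1.0 / (1.0 + predicted_times.std())  # illustrative score
    if frame_rate_stability < min_stability or predicted_times.max() > max_render_time_ms:
        capability_mhz = raise_operational_capability(capability_mhz)
    return predicted_times, capability_mhz
```

With this shape, the screening of claim 5 reduces to keeping the candidate whose composite score on a held-out validation set (standing in for the last k optimizations) is highest, and the response of claim 7 reduces to bumping the operational capability whenever the predicted per-frame times look unstable or exceed the frame budget.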
CN202210355749.2A 2022-04-06 2022-04-06 Caton prediction method, device, terminal equipment and computer readable storage medium Pending CN114741192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210355749.2A CN114741192A (en) 2022-04-06 2022-04-06 Caton prediction method, device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210355749.2A CN114741192A (en) 2022-04-06 2022-04-06 Caton prediction method, device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114741192A true CN114741192A (en) 2022-07-12

Family

ID=82278930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210355749.2A Pending CN114741192A (en) 2022-04-06 2022-04-06 Caton prediction method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114741192A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115495321A (en) * 2022-11-18 2022-12-20 天河超级计算淮海分中心 Automatic identification method for use state of super-computation node

Similar Documents

Publication Publication Date Title
US20220188840A1 (en) Target account detection method and apparatus, electronic device, and storage medium
US10514799B2 (en) Deep machine learning to perform touch motion prediction
CN110163405B (en) Method, device, terminal and storage medium for determining transit time
US11921561B2 (en) Neural network inference circuit employing dynamic memory sleep
US11392799B2 (en) Method for improving temporal consistency of deep neural networks
EP2557495A1 (en) Information processing terminal and control method therefor
CN111738488A (en) Task scheduling method and device
CN110288614A (en) Image processing method, device, equipment and storage medium
CN104704469A (en) Dynamically rebalancing graphics processor resources
CN115199240B (en) Shale gas well yield prediction method, shale gas well yield prediction device and storage medium
CN111902790B (en) Frequency modulation method, frequency modulation device and computer readable storage medium
CN114741192A (en) Caton prediction method, device, terminal equipment and computer readable storage medium
CN115205925A (en) Expression coefficient determining method and device, electronic equipment and storage medium
CN102812428B (en) The information processing terminal and control method thereof
CN107729144B (en) Application control method and device, storage medium and electronic equipment
CN111627029B (en) Image instance segmentation result acquisition method and device
CN112966592A (en) Hand key point detection method, device, equipment and medium
CN108038563A (en) A kind of data predication method, server and computer-readable recording medium
EP4131852A1 (en) Automated pausing of audio and/or video during a conferencing session
CN112949850B (en) Super-parameter determination method, device, deep reinforcement learning framework, medium and equipment
CN113762585B (en) Data processing method, account type identification method and device
CN115079832A (en) Virtual reality scene display processing method and virtual reality equipment
CN111722693B (en) Power consumption adjusting method and device, storage medium, server and terminal
CN113989121A (en) Normalization processing method and device, electronic equipment and storage medium
KR20210156538A (en) Method and appratus for processing data using neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination