CN112130940B - Terminal control method and device, storage medium and electronic equipment - Google Patents

Terminal control method and device, storage medium and electronic equipment

Info

Publication number
CN112130940B
CN112130940B (application CN202010876059.2A)
Authority
CN
China
Prior art keywords
display interface
target
parameters
control
training
Prior art date
Legal status
Active
Application number
CN202010876059.2A
Other languages
Chinese (zh)
Other versions
CN112130940A (en)
Inventor
刘任
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010876059.2A
Publication of CN112130940A
Application granted
Publication of CN112130940B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 Learning methods
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 Features of games using an electronically generated display having two or more dimensions characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 Details of the user interface


Abstract

The disclosure relates to a terminal control method and device, a storage medium, and an electronic apparatus. The method includes: obtaining a target drawing parameter corresponding to a target display interface frame, where the target drawing parameter indicates time information for drawing the target display interface frame and the target display interface frame is the interface displayed by the terminal at the current moment; determining a control parameter through a pre-trained processing model according to the target drawing parameter and a drawing parameter set, where the drawing parameter set includes the drawing parameter corresponding to each of a preset number of historical display interface frames before the target display interface frame; and controlling the frequency of a processor of the terminal according to the control parameter. Because the control parameter is determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor can be controlled dynamically and a potential stutter of the terminal can be handled in advance, so the power consumption of the terminal is reduced while stuttering of the display interface frames is also reduced.

Description

Terminal control method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of terminals, and in particular relates to a control method and device of a terminal, a storage medium and electronic equipment.
Background
With the continuous development of terminal technology, more and more software is installed on terminals, and to improve the user experience of this software, ever higher demands are placed on the processing capability of the terminal's CPU (Central Processing Unit). As a result, the terminal may stutter when the CPU is heavily loaded, affecting normal use. To ensure normal use, the operating performance of the terminal can be improved by raising the working frequency of the CPU, thereby avoiding stutters. However, keeping the working frequency of the CPU permanently raised avoids stutters only at the cost of excessive terminal power consumption.
In the related art, the current load of the CPU can be monitored, and the working frequency of the CPU is raised if the load is heavy. However, this approach has significant hysteresis and easily leads to stutters in scenarios with high real-time requirements (for example, game scenarios). Alternatively, the drawing duration of the current display interface frame of the terminal can be monitored, and the working frequency of the CPU is raised if that duration is too long. However, by the time an overlong drawing duration is observed under increased CPU load, the affected display interface frame has already stuttered, which affects normal use.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a control method, apparatus, storage medium, and electronic device for a terminal.
According to a first aspect of an embodiment of the present disclosure, there is provided a control method of a terminal, the method including:
obtaining a target drawing parameter corresponding to a target display interface frame; the target drawing parameters are used for indicating time information for drawing the target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment;
determining control parameters through a pre-trained processing model according to the target drawing parameters and the drawing parameter sets; the drawing parameter set comprises drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, wherein the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frames;
and controlling the frequency of a processor of the terminal according to the control parameter.
Optionally, the processing model is a supervised learning model, and the determining, according to the target drawing parameters and the drawing parameter set, control parameters through a pre-trained processing model includes:
inputting the target drawing parameters and the drawing parameter set into the supervised learning model to obtain the control parameters output by the supervised learning model; the control parameters are used for representing the degree of stutter of the next display interface frame after the target display interface frame;
the controlling the frequency of the processor of the terminal according to the control parameter comprises the following steps:
and if the control parameter meets a preset condition, increasing the frequency of the processor.
Optionally, if the control parameter meets a preset condition, increasing the frequency of the processor includes:
determining a target frequency according to the control parameter, and controlling the frequency of the processor to rise to the target frequency; and/or,
determining a target duration according to the control parameter, and controlling the frequency of the processor to rise and remain raised for the target duration.
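Concretely, the mapping from the control parameter to a target frequency and target duration could be as simple as a lookup table. A minimal sketch follows; the labels, frequencies, and durations are illustrative assumptions, not values from the patent:

```python
# Hypothetical policy table mapping the model's stutter label to a CPU
# frequency target (kHz) and a boost hold duration (seconds). All values
# here are illustrative assumptions, not taken from the patent.
FREQ_POLICY = {
    0: None,                 # no stutter predicted: keep the current frequency
    1: (1_800_000, 0.1),     # mild stutter: boost to 1.8 GHz for 100 ms
    2: (2_400_000, 0.3),     # severe stutter: boost to 2.4 GHz for 300 ms
}

def plan_boost(stutter_label):
    """Return (target_freq_khz, hold_seconds), or None to leave the frequency alone."""
    return FREQ_POLICY.get(stutter_label)
```

A table like this keeps the "and/or" structure of the claim: each entry can carry a target frequency, a duration, or both.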
Optionally, the supervised learning model is trained by:
obtaining a sample input set and a sample output set, wherein each sample input in the sample input set comprises a training drawing parameter and a training drawing parameter set corresponding to the training drawing parameter, the sample output set comprises a sample output corresponding to each sample input, and each sample output comprises a training control parameter corresponding to the corresponding training drawing parameter;
and taking the sample input set as the input of a preset long short-term memory (LSTM) network and the sample output set as the output of the LSTM, so as to obtain the supervised learning model.
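As a sketch of how the sample input set and sample output set might be assembled: each sample input pairs the drawing parameters of a window of history frames with the current frame's parameter, and the sample output labels whether the next frame exceeded the drawing budget. The window size, the use of a single drawing-time feature per frame, and the 16.67 ms budget are assumptions for illustration:

```python
# Build (sample input, sample output) pairs from a log of per-frame drawing
# times. Each input is the drawing-parameter set of `window` history frames
# plus the current frame's parameter; the label marks whether the *next*
# frame exceeded the drawing budget (i.e. stuttered). Illustrative sketch.
def build_samples(frame_times_ms, window=3, budget_ms=16.67):
    samples = []
    for i in range(window, len(frame_times_ms) - 1):
        history = frame_times_ms[i - window:i]              # training drawing parameter set
        current = frame_times_ms[i]                         # training drawing parameter
        label = 1 if frame_times_ms[i + 1] > budget_ms else 0   # next-frame stutter?
        samples.append((history + [current], label))
    return samples
```

Pairs built this way can then be fed to the LSTM as the sample input set and sample output set.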
Optionally, the processing model is a reinforcement learning model, and the determining, according to the target drawing parameters and the drawing parameter set, control parameters through a pre-trained processing model includes:
inputting the target drawing parameters and the drawing parameter sets into the reinforcement learning model to obtain the control parameters output by the reinforcement learning model; wherein the control parameters comprise control time and control duration;
the controlling the frequency of the processor of the terminal according to the control parameter comprises the following steps:
and raising the frequency of the processor to a target frequency at the control time, and maintaining it for the control duration.
Optionally, the reinforcement learning model is trained by:
inputting training drawing parameters and training drawing parameter sets corresponding to display interface frames into a preset DQN network within a specified time range, so as to control the frequency of the processor according to the output of the DQN network, wherein the specified time range comprises a plurality of time windows;
acquiring the output of the DQN network in each time window, together with the stutter count and the total frequency-boost duration corresponding to each time window;
adjusting the DQN network according to the output of the DQN network in a target time window, wherein the stutter count and the total frequency-boost duration corresponding to the target time window meet preset conditions;
and repeatedly executing the steps of inputting the training drawing parameters and training drawing parameter sets corresponding to the display interface frames into the preset DQN network within the specified time range, controlling the frequency of the processor according to the output of the DQN network, and adjusting the DQN network according to the output of the DQN network in the target time window, so as to obtain the reinforcement learning model.
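The per-window evaluation above, stutter count versus total boost time, can be folded into a scalar reward for the DQN. The weights below are assumptions chosen only to show the trade-off, not values from the patent:

```python
def window_reward(stutter_count, boost_time_s, w_stutter=10.0, w_power=1.0):
    """Higher is better: penalize both the stutters in a time window and the
    time spent at a boosted frequency, so the agent learns to boost only when
    a stutter is otherwise likely. The weights are illustrative assumptions."""
    return -(w_stutter * stutter_count + w_power * boost_time_s)
```

Under such a reward, a window with one stutter scores worse than a window with several seconds of boost and no stutter, which biases the trained model toward preemptive but short boosts.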
Optionally, the inputting the training drawing parameters and the training drawing parameter sets corresponding to the display interface frame into a preset DQN network in the specified time range, so as to control the frequency of the processor according to the output of the DQN network, including:
inputting a first training drawing parameter and a first training drawing parameter set corresponding to a first display interface frame into the DQN network to obtain a first training control parameter, wherein the first display interface frame is any frame of the interface displayed by the terminal within the specified time range;
Controlling the frequency of the processor according to the first training control parameter to obtain a second training drawing parameter and a second training drawing parameter set corresponding to a second display interface frame, wherein the second display interface frame is the next display interface frame of the first display interface frame;
updating the first display interface frame to the second display interface frame;
and repeating the steps from inputting the first training drawing parameter and the first training drawing parameter set corresponding to the first display interface frame within the specified time range into the DQN network to obtain the first training control parameter, through updating the first display interface frame to the second display interface frame.
According to a second aspect of embodiments of the present disclosure, there is provided a control apparatus of a terminal, the apparatus including:
the acquisition module is configured to acquire target drawing parameters corresponding to the target display interface frame; the target drawing parameters are used for indicating time information for drawing the target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment;
a determining module configured to determine control parameters by a pre-trained process model according to the target drawing parameters and the drawing parameter set; the drawing parameter set comprises drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, wherein the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frames;
And the control module is configured to control the frequency of the processor of the terminal according to the control parameter.
Optionally, the processing model is a supervised learning model, and the determining module is configured to:
inputting the target drawing parameters and the drawing parameter set into the supervised learning model to obtain the control parameters output by the supervised learning model; the control parameters are used for representing the degree of stutter of the next display interface frame after the target display interface frame;
the control module is configured to:
and if the control parameter meets a preset condition, increasing the frequency of the processor.
Optionally, the control module is configured to:
determining a target frequency according to the control parameter, and controlling the frequency of the processor to rise to the target frequency; and/or,
determining a target duration according to the control parameter, and controlling the frequency of the processor to rise and remain raised for the target duration.
Optionally, the supervised learning model is trained by:
obtaining a sample input set and a sample output set, wherein each sample input in the sample input set comprises a training drawing parameter and a training drawing parameter set corresponding to the training drawing parameter, the sample output set comprises a sample output corresponding to each sample input, and each sample output comprises a training control parameter corresponding to the corresponding training drawing parameter;
and taking the sample input set as the input of a preset long short-term memory (LSTM) network and the sample output set as the output of the LSTM, so as to obtain the supervised learning model.
Optionally, the processing model is a reinforcement learning model, and the determining module is configured to:
inputting the target drawing parameters and the drawing parameter sets into the reinforcement learning model to obtain the control parameters output by the reinforcement learning model; wherein the control parameters comprise control time and control duration;
the control module is configured to:
and raising the frequency of the processor to a target frequency at the control time, and maintaining it for the control duration.
Optionally, the reinforcement learning model is trained by:
inputting training drawing parameters and training drawing parameter sets corresponding to display interface frames into a preset DQN network within a specified time range, so as to control the frequency of the processor according to the output of the DQN network, wherein the specified time range comprises a plurality of time windows;
acquiring the output of the DQN network in each time window, together with the stutter count and the total frequency-boost duration corresponding to each time window;
adjusting the DQN network according to the output of the DQN network in a target time window, wherein the stutter count and the total frequency-boost duration corresponding to the target time window meet preset conditions;
and repeatedly executing the steps of inputting the training drawing parameters and training drawing parameter sets corresponding to the display interface frames into the preset DQN network within the specified time range, controlling the frequency of the processor according to the output of the DQN network, and adjusting the DQN network according to the output of the DQN network in the target time window, so as to obtain the reinforcement learning model.
Optionally, the inputting the training drawing parameters and the training drawing parameter sets corresponding to the display interface frame into a preset DQN network in the specified time range, so as to control the frequency of the processor according to the output of the DQN network, including:
inputting a first training drawing parameter and a first training drawing parameter set corresponding to a first display interface frame into the DQN network to obtain a first training control parameter, wherein the first display interface frame is any frame of the interface displayed by the terminal within the specified time range;
controlling the frequency of the processor according to the first training control parameter to obtain a second training drawing parameter and a second training drawing parameter set corresponding to a second display interface frame, wherein the second display interface frame is the next display interface frame of the first display interface frame;
updating the first display interface frame to the second display interface frame;
and repeating the steps from inputting the first training drawing parameter and the first training drawing parameter set corresponding to the first display interface frame within the specified time range into the DQN network to obtain the first training control parameter, through updating the first display interface frame to the second display interface frame.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the control method of the terminal provided in the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the control method of a terminal provided by the first aspect of the present disclosure.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects: a target drawing parameter corresponding to a target display interface frame is first obtained, where the target drawing parameter indicates time information for drawing the target display interface frame and the target display interface frame is the interface displayed by the terminal at the current moment; a control parameter is then determined through a pre-trained processing model according to the target drawing parameter and a drawing parameter set, where the drawing parameter set includes the drawing parameter corresponding to each of a preset number of historical display interface frames before the target display interface frame and each drawing parameter indicates time information for drawing the corresponding historical display interface frame; finally, the frequency of a processor of the terminal is controlled according to the control parameter. Because the control parameter is determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor is controlled dynamically and a potential stutter of the terminal is handled in advance, so the power consumption of the terminal is reduced while stuttering of the display interface frames is also reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a control method of a terminal according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating another control method of a terminal according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating the training of a supervised learning model according to the embodiment shown in FIG. 1.
Fig. 4 is a flowchart illustrating yet another control method of a terminal according to an exemplary embodiment.
FIG. 5 is a flow chart illustrating the training of a reinforcement learning model according to the embodiment shown in FIG. 1.
Fig. 6 is a block diagram illustrating a control apparatus of a terminal according to an exemplary embodiment.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Before describing the terminal control method, device, storage medium, and electronic equipment provided by the present disclosure, the application scenario involved in the embodiments of the present disclosure is first described. The scenario includes a terminal provided with a processor, and the processor may be, for example, a processing unit with a picture-drawing function, such as a CPU or an MCU (Microcontroller Unit). The processor may draw a corresponding display interface frame for software running on the terminal so that the terminal can display the display interface frame. The display interface frame can be understood as a picture displayed by the software on the terminal. For example, when the software is a game APP (application), the display interface frame may be the screen corresponding to the game being run on the terminal. The terminal may be a mobile terminal such as a smartphone, a tablet computer, a smart watch, a smart bracelet, or a PDA (Personal Digital Assistant), or a stationary terminal such as a desktop computer.
Fig. 1 is a flowchart illustrating a control method of a terminal according to an exemplary embodiment. As shown in fig. 1, the method may include the steps of:
In step 101, a target rendering parameter corresponding to a target display interface frame is acquired.
The target drawing parameters are used for indicating time information for drawing a target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment.
For example, a stutter in the terminal's picture is actually the phenomenon that occurs when the time the terminal's processor takes to draw a display interface frame exceeds a standard drawing duration. The display interface frame is the interface displayed while the terminal runs target software, the target software being software that needs to display an interface while running; the displayed interface may be a UI (User Interface). The standard drawing duration is determined by the refresh rate of the terminal screen; for example, when the refresh rate is 60 Hz, the standard drawing duration is 16.67 ms. To avoid stutters, while the terminal runs the target software the processor can obtain, in real time, the target drawing parameters corresponding to the target display interface frame that indicate the time information for drawing that frame, so that a potential stutter of the terminal can be predicted from the target drawing parameters. The target drawing parameters may include: the running time on the processor of one or more threads used to draw the target display interface frame, the IO (Input/Output) blocking time, and the drawing duration of the target display interface frame.
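The standard drawing duration above follows directly from the refresh rate (1000 ms divided by frames per second), which can be sketched as:

```python
def standard_draw_budget_ms(refresh_hz):
    """Per-frame drawing budget implied by the screen refresh rate:
    a frame that takes longer than this to draw will miss its vsync."""
    return 1000.0 / refresh_hz
```

A 60 Hz screen gives roughly 16.67 ms per frame, matching the figure in the text; a 90 Hz screen tightens the budget to about 11.1 ms.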
In step 102, control parameters are determined from the target rendering parameters and the set of rendering parameters by a pre-trained process model.
The drawing parameter set comprises drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, wherein the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frame.
For example, a pre-trained processing model may be deployed in the terminal. After obtaining the target drawing parameters, the processor may input them, together with the drawing parameter set formed from the drawing parameters corresponding to each of a preset number of historical display interface frames before the target display interface frame, into the processing model, and the processing model predicts a potential stutter of the terminal and generates the control parameters. The drawing parameter set can be understood as a preset number of drawing parameters collected by the processor before the current moment, each corresponding to one historical display interface frame. For example, the processor may allocate a memory space for the drawing parameter set to store the preset number of drawing parameters. It should be noted that after step 103 is performed, the processor may delete the drawing parameter with the earliest acquisition time from the drawing parameter set and store the target drawing parameter into it; that is, the drawing parameter set always holds the preset number of drawing parameters acquired most recently before the current moment.
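The evict-oldest, append-newest behavior described above is exactly a fixed-size sliding window; in Python, `collections.deque` with `maxlen` gives it directly. The window size of 4 and the per-frame drawing times are illustrative:

```python
from collections import deque

# Fixed-size drawing-parameter window: appending beyond maxlen evicts the
# oldest entry automatically, mirroring the delete-then-store step above.
param_window = deque(maxlen=4)          # preset number of history frames (assumed)
for draw_time_ms in [12.0, 15.5, 17.1, 13.0, 16.8]:
    param_window.append(draw_time_ms)   # oldest value is dropped once full
```

After the loop the window holds only the four most recent drawing parameters, which is the state the model would see at the next control step.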
The control parameter determined through the processing model can reflect the prediction of a potential stutter of the terminal, so that the terminal adjusts the frequency of the processor to handle the potential stutter in advance. The control parameter may be a specific parameter for adjusting the frequency of the processor; for example, it may include a control time and a control duration for adjusting the frequency. The control parameter may also be a parameter characterizing the stutter condition of the next display interface frame after the target display interface frame, or an instruction controlling the processor to raise its frequency; the present disclosure does not particularly limit this. The processing model may be a supervised learning model, a reinforcement learning model, or the like.
In step 103, the frequency of the processor of the terminal is controlled according to the control parameters.
In this step, if the prediction is that the terminal may stutter, the processor may raise its frequency to a specified frequency according to the control parameter. For example, when the control parameter includes a control time and a control duration for adjusting the frequency, the processor may raise its frequency to the specified frequency at the control time and maintain it for the control duration. If the prediction is that the terminal will not stutter, the processor may keep its current frequency unchanged.
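The control step can be sketched as producing a small timeline of frequency changes. `boost_schedule` and its millisecond/kHz units are illustrative names; an actual implementation would drive a platform frequency interface (such as the CPU governor) rather than return events:

```python
def boost_schedule(now_ms, control_time_ms, hold_ms, base_khz, boost_khz):
    """Events (time_ms, freq_khz): raise the frequency at the control time
    (clamped so we never schedule in the past), then restore the base
    frequency once the control duration has elapsed. Illustrative sketch."""
    start = max(now_ms, control_time_ms)
    return [(start, boost_khz), (start + hold_ms, base_khz)]
```

If the model predicts no stutter, no schedule is produced and the frequency simply stays at its current value.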
In summary, the present disclosure first obtains a target drawing parameter corresponding to a target display interface frame, where the target drawing parameter indicates time information for drawing the target display interface frame and the target display interface frame is the interface displayed by the terminal at the current moment; then determines control parameters from the target drawing parameter and a drawing parameter set through a pre-trained processing model, where the drawing parameter set includes drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame; and finally controls the frequency of the processor of the terminal according to the control parameters. Because the control parameters are determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor can be dynamically controlled and a possible stutter of the terminal can be handled in advance, reducing the power consumption of the terminal while reducing stutter of display interface frames.
Fig. 2 is a flowchart illustrating another control method of a terminal according to an exemplary embodiment. As shown in fig. 2, the process model is a supervised learning model, and step 102 may be implemented by:
And inputting the target drawing parameters and the drawing parameter groups into a supervised learning model to obtain control parameters output by the supervised learning model.
The control parameters characterize the stutter degree of the display interface frame following the target display interface frame.
In one scenario, the processing model may be a supervised learning model, for example an LSTM (Long Short-Term Memory) model. After the processor acquires the target drawing parameters, it may convert the drawing parameters corresponding to each historical display interface frame in the drawing parameter set into first vectors, convert the target drawing parameters into a second vector, and then splice the first vectors and the second vector in sequence to obtain a spliced target vector. The target vector is then input into the supervised learning model to obtain control parameters, output by the supervised learning model, that characterize the stutter degree of the display interface frame following the target display interface frame. That is, through the drawing parameters corresponding to each of the preset number (for example, 12) of historical display interface frames and the target drawing parameters corresponding to the target display interface frame, the supervised learning model may predict, by supervised learning, the stutter degree of the next display interface frame after the target display interface frame. The control parameters may include one control tag of a plurality of preset control tags, where each control tag corresponds to a different stutter level; for example, the stutter levels may be divided into: no stutter, light stutter, and severe stutter. Taking control parameters that include a first control tag and a second control tag as an example, the stutter degree corresponding to the first control tag may be that stutter exists (represented by 1, for example), and the stutter degree corresponding to the second control tag may be no stutter (represented by 0, for example).
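The vector-splicing step described above can be sketched as follows: each historical frame's drawing parameters become a "first vector", the target drawing parameters become the "second vector", and all vectors are concatenated in order into one target vector for the supervised learning model. The 4-feature tuple layout (two thread run times, IO blocking time, drawing time) is an assumption for illustration.

```python
def to_vector(params):
    # params: (thread1_ms, thread2_ms, io_block_ms, draw_ms) -- assumed layout
    return [float(x) for x in params]

def splice(history_params, target_params):
    """Concatenate the first vectors (history) then the second vector (target)."""
    target_vector = []
    for p in history_params:           # preset number of historical frames
        target_vector.extend(to_vector(p))
    target_vector.extend(to_vector(target_params))
    return target_vector

history = [(15.35, 13.45, 1.45, 0.45)] * 12   # preset number = 12
target = (5.35, 3.45, 1.45, 0.45)
vec = splice(history, target)
# 12 history frames + 1 target frame, 4 features each -> 52 values
```

The resulting flat vector (or, equivalently, the 13x4 sequence before flattening) is what would be fed to the LSTM to obtain a control tag.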
Further, step 103 may be implemented by:
if the control parameter meets the preset condition, the frequency of the processor is increased.
For example, after determining the control parameters, if they meet the preset condition, the processor may raise its frequency to increase the execution speed of the one or more threads used for drawing the next display interface frame after the target display interface frame, thereby avoiding a possible stutter. The preset condition may be that the stutter degree corresponding to the control tag included in the control parameters indicates that stutter exists, that is, the supervised learning model predicts that the next display interface frame after the target display interface frame will stutter.
One way to raise the frequency of the processor is: the processor determines a target frequency according to the control parameters and raises its frequency to the target frequency. The target frequency may be a preset fixed frequency, in which case the processor's frequency is raised directly to that fixed target frequency. The target frequency may also be determined from the control parameters through a first preset relationship, which is a correspondence between control tags and frequencies. It can be understood that setting target frequencies of different levels according to the stutter level corresponding to the control tag allows the frequency of the processor to be adjusted more flexibly.
Another way to raise the frequency of the processor is: the processor determines a target time according to the control parameters and raises its frequency, maintaining it for the target time. The target time may be a preset fixed time, in which case the processor's frequency is raised directly to a specified frequency and maintained for that fixed target time. The target time may also be determined from the control parameters through a second preset relationship, which is a correspondence between control tags and times. It can be understood that setting target times of different levels according to the stutter level corresponding to the control tag allows the frequency of the processor to be adjusted more flexibly. The frequency of the processor may also be raised as follows: the processor determines both a target frequency and a target time according to the control parameters, raises its frequency to the target frequency, and maintains it for the target time.
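The first and second preset relationships above can be sketched as lookup tables mapping a control tag (the predicted stutter level) to a target frequency and a hold time. The tag names and the frequency/time values are illustrative assumptions, not values taken from the patent.

```python
# First preset relationship: control tag -> target frequency (MHz); None means
# "no stutter predicted, keep the current frequency".
FREQ_BY_TAG = {
    "none": None,
    "light": 1800,
    "severe": 2400,
}
# Second preset relationship: control tag -> time to maintain the raised
# frequency (ms).
TIME_BY_TAG = {
    "none": 0,
    "light": 50,
    "severe": 200,
}

def plan_frequency(control_tag):
    """Return (target_frequency, hold_time_ms) for a predicted stutter level."""
    return FREQ_BY_TAG[control_tag], TIME_BY_TAG[control_tag]

plan = plan_frequency("severe")
# A severe prediction maps to the highest boost held for the longest time.
```

Grading both tables by stutter level is what lets the processor frequency be adjusted "more flexibly", as the paragraphs above note.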
FIG. 3 is a flowchart illustrating training of the supervised learning model according to the embodiment of FIG. 1. As shown in fig. 3, the supervised learning model may be trained by:
in step 201, a sample input set and a sample output set are obtained.
Each sample input in the sample input set comprises a training drawing parameter and a training drawing parameter set corresponding to the training drawing parameter, the sample output set comprises a sample output corresponding to each sample input, and each sample output comprises a training control parameter corresponding to the corresponding training drawing parameter.
In step 202, the sample input set is used as the input of a preset LSTM, and the sample output set is used as the output of the LSTM, so as to obtain the supervised learning model.
For example, when training the supervised learning model, the training drawing parameters and training drawing parameter set included in each sample input in the sample input set may be used in turn as inputs of the LSTM, and the training control parameters included in the corresponding sample output in the sample output set as outputs of the LSTM, so as to train the LSTM into the supervised learning model. The training drawing parameter set includes a preset number of historical training drawing parameters, which may be the preset number of drawing parameters collected before the training drawing parameters. The training drawing parameters may include: the running time of the one or more threads used for drawing the display interface frame on the processor, the IO blocking time, and the drawing time of the display interface frame; the training control parameters include a training control tag. Take as an example a preset number of 3 and a standard drawing time of 16.67 ms (one frame at 60 fps), where the drawing parameters include the running times of rendering thread 1 and rendering thread 2 on the processor, the IO blocking time, and the drawing time of the display interface frame, as shown in Table 1. The drawing time of a display interface frame indicates whether that frame stutters: if the drawing time is less than 16.67 ms the frame does not stutter, and if it is greater than 16.67 ms the frame stutters.
TABLE 1
The training drawing parameters "5.35ms 3.45ms 1.45ms 0.45ms" corresponding to display interface frame 4 and the training drawing parameter set "15.35ms 13.45ms 1.45ms 0.45ms; 5.35ms 3.45ms 1.45ms 0.45ms; 23.35ms 13.45ms 9.45ms 0.45ms" corresponding to display interface frames 1-3 may be used as one sample input to predict the display interface frame following frame 4 (that is, display interface frame 5). As shown in Table 1, display interface frame 5 stutters, so the first control tag, indicating that the frame stutters, may be used as the training control parameter corresponding to this training drawing parameter.
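The sample construction above can be sketched as a sliding window over the per-frame drawing parameters: the window plus the current frame form the sample input, and the next frame's drawing time, compared against the 16.67 ms standard, yields the training control label. The column order inside each tuple and the values for frame 5 (which Table 1 does not reproduce here) are illustrative assumptions.

```python
STANDARD_DRAW_MS = 16.67  # 60 fps frame budget

def label_for(draw_ms):
    # 1 = stutter (first control tag), 0 = no stutter (second control tag)
    return 1 if draw_ms > STANDARD_DRAW_MS else 0

def make_sample(frames, i, window=3):
    """Sample input: frames[i-window:i] plus frames[i]; sample output: the
    label of the frame after frames[i]."""
    history = frames[i - window:i]       # training drawing parameter set
    target = frames[i]                   # training drawing parameters
    next_draw_ms = frames[i + 1][0]      # assume drawing time is column 0
    return (history, target), label_for(next_draw_ms)

frames = [
    (15.35, 13.45, 1.45, 0.45),
    (5.35, 3.45, 1.45, 0.45),
    (23.35, 13.45, 9.45, 0.45),
    (5.35, 3.45, 1.45, 0.45),
    (20.10, 14.00, 2.00, 0.50),  # frame 5: assumed values; stutters (>16.67)
]
sample_in, sample_out = make_sample(frames, 3, window=3)
# sample_out == 1: frame 5 stutters, so the first control tag applies
```

Sliding this window across a recorded trace would produce the full sample input set and sample output set of steps 201-202.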
Fig. 4 is a flowchart illustrating a control method of yet another terminal according to an exemplary embodiment. As shown in fig. 4, the process model is a reinforcement learning model, and step 102 may be implemented by:
inputting the target drawing parameters and the drawing parameter groups into the reinforcement learning model to obtain control parameters output by the reinforcement learning model. Wherein the control parameters include control time and control duration.
In another scenario, the processing model may be a reinforcement learning model, for example a DQN (Deep Q-Network). After the processor acquires the target drawing parameters, it may convert the drawing parameters corresponding to each historical display interface frame in the drawing parameter set into first vectors, convert the target drawing parameters into a second vector, and splice the first vectors and the second vector in sequence to obtain a spliced target vector. The processor may then input the target vector into the reinforcement learning model to obtain control parameters, output by the reinforcement learning model, that include a control time and a control duration. That is, through the drawing parameters corresponding to each historical display interface frame in the preset number of historical display interface frames and the target drawing parameters corresponding to the target display interface frame, the reinforcement learning model may predict a possible stutter of the terminal and intelligently select the control time and control duration for adjusting the frequency of the processor, so that the power consumption of the terminal is lower while stutter of display interface frames is reduced.
Further, step 103 may be implemented by:
raising the frequency of the processor to the target frequency at the control time and maintaining the control duration.
For example, after determining the control parameters, the processor may raise its frequency to the target frequency at the control time and maintain it for the control duration. For example, where the control time is 10 ms and the control duration is 100 ms, the processor may raise its frequency to the target frequency 10 ms after starting to draw the target display interface frame and hold it for 100 ms. The target frequency may be a preset frequency.
FIG. 5 is a flowchart illustrating training of the reinforcement learning model according to the embodiment shown in FIG. 1. As shown in fig. 5, the reinforcement learning model may be trained by:
in step 301, training drawing parameters and training drawing parameter sets corresponding to the display interface frames are input into a preset DQN network within a specified time range, so as to control the frequency of the processor according to the output of the DQN network, wherein the specified time range includes a plurality of time windows.
For example, when training the reinforcement learning model, the terminal may first continuously run target software within a specified time range comprising a plurality of time windows (for example, each time window may be 1 hour and the specified time range N hours, so that the specified time range includes N time windows, where N is a positive integer). The target software may be preset software for which stutter of display interface frames needs to be reduced, for example a game APP. The processor then sequentially inputs the training drawing parameters corresponding to each display interface frame acquired within the specified time range, together with the corresponding training drawing parameter sets, into the DQN network, and controls the frequency of the processor according to the output of the DQN network (the output of the DQN network being the training control parameters, including control time and control duration, corresponding to each training drawing parameter within the specified time range); that is, the frequency of the processor is controlled in turn according to each training control parameter output by the DQN network within the specified time range. The training drawing parameters may include: the running time of the one or more threads used for drawing the display interface frame on the processor, the IO blocking time, and the drawing time of the display interface frame.
Specifically, step 301 may be implemented as follows. First, the processor inputs a first training drawing parameter and a first training drawing parameter set corresponding to a first display interface frame into the DQN network to obtain a first training control parameter, where the first display interface frame is any frame interface displayed by the terminal within the specified time range, and the first training control parameter may be generated randomly by the DQN network. Then, the processor controls its frequency according to the first training control parameter so as to obtain a second training drawing parameter and a second training drawing parameter set corresponding to a second display interface frame, where the second display interface frame is the display interface frame following the first display interface frame. Finally, the first display interface frame is updated to the second display interface frame, and the steps from inputting the first training drawing parameter and parameter set into the DQN network through updating the first display interface frame are repeated within the specified time range.
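The step-301 interaction loop can be sketched as follows: feed the current frame's parameters to the policy, apply the returned (control time, control duration) to the processor, observe the next frame, and repeat. The policy and the processor/frame model here are stand-in stubs, not the real DQN or frequency driver; only the loop structure mirrors the text above.

```python
import random

def dqn_policy(state):
    # Stand-in for the DQN: returns (control_time_ms, control_duration_ms).
    # Seeded per-state so the sketch is deterministic.
    random.seed(sum(state))
    return random.choice([(0, 0), (10, 100), (5, 50)])

def apply_and_observe(action):
    # Stand-in for "control the processor, then draw the next frame":
    # longer boosts here yield shorter next-frame drawing times.
    _, duration = action
    draw_ms = max(4.0, 20.0 - duration * 0.1)
    return (draw_ms, 3.0, 1.0, 0.5)  # next frame's drawing parameters

def run_time_range(first_frame, steps):
    trajectory = []
    frame = first_frame
    for _ in range(steps):
        action = dqn_policy(frame)              # first training control parameter
        next_frame = apply_and_observe(action)  # second training drawing params
        trajectory.append((frame, action, next_frame))
        frame = next_frame                      # update first -> second frame
    return trajectory

traj = run_time_range((15.35, 13.45, 1.45, 0.45), steps=4)
```

Each loop iteration corresponds to one pass through the "input, control, observe, update" cycle of step 301.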
In step 302, the output of the DQN network in each time window, and the stutter count and total frequency-raising time corresponding to each time window, are obtained.
In this step, according to the training drawing parameters corresponding to each display interface frame obtained by the processor in each time window, the number of display interface frames whose drawing time, as included in the training drawing parameters, exceeds the standard drawing time is determined and used as the stutter count for that time window. Meanwhile, the training control parameters corresponding to each training drawing parameter output by the DQN network in each time window may be obtained, and the sum of the control durations included in those training control parameters is used as the total frequency-raising time for that time window.
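The step-302 bookkeeping can be sketched directly: within one time window, count the frames whose drawing time exceeds the 16.67 ms standard (the stutter count) and sum the control durations the network emitted (the total frequency-raising time). The flat-list data layout is an illustrative assumption.

```python
STANDARD_DRAW_MS = 16.67

def window_metrics(draw_durations_ms, control_durations_ms):
    """Return (stutter_count, total_boost_ms) for one time window."""
    stutter_count = sum(1 for d in draw_durations_ms if d > STANDARD_DRAW_MS)
    total_boost_ms = sum(control_durations_ms)
    return stutter_count, total_boost_ms

stutters, boost = window_metrics(
    draw_durations_ms=[15.35, 23.35, 5.35, 20.1],   # per-frame drawing times
    control_durations_ms=[0, 100, 0, 50],           # per-frame boost durations
)
# 2 stuttered frames, 150 ms of raised frequency in this window
```

These two per-window numbers are exactly what step 303 compares across windows when selecting the target time window.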
In step 303, the DQN network is adjusted according to the output of the DQN network in a target time window, where the stutter count and total frequency-raising time corresponding to the target time window satisfy a preset condition.
Steps 301 to 303 are repeatedly performed to obtain the reinforcement learning model.
For example, after the stutter count and total frequency-raising time corresponding to each time window are obtained, a target time window whose stutter count and total frequency-raising time satisfy the preset condition may be determined. The preset condition may be selecting, from the plurality of time windows, the time window with the minimum stutter count and minimum total frequency-raising time as the target time window. The weight parameters of the DQN network can then be adjusted according to the output of the DQN network in the target time window (that is, the training control parameters corresponding to each training drawing parameter within the target time window). The preset condition can be understood as follows: when the DQN network satisfies the preset condition within the target time window, a reward can be given to the DQN network, through which the network learns how to intelligently select the most reasonable training control parameters from the training drawing parameters, that is, how to select control parameters that minimize the power consumption of the terminal while reducing stutter of display interface frames. Finally, the above steps are repeated until the DQN network meets a preset training target, yielding the trained reinforcement learning model. The training target may be repeating the training for a preset number of specified time ranges, for example 10,000. The training target may also be a target duration of training; for example, with a specified time range of 10 hours and a time window of 1 hour, the target duration may be one week, meaning the DQN network is trained continuously for one week.
Alternatively, the training target may be that the DQN network training is determined to be complete when the stutter counts corresponding to the time windows remain stable, or the total frequency-raising times corresponding to the time windows remain consistent; this is not specifically limited in the present disclosure.
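The window selection of step 303 can be sketched as a scoring function over the per-window metrics: pick the window that best trades off stutter count against total frequency-raising time. The patent only requires the window with the minimum stutter count and minimum total raising time; the combined-cost weighting below is an illustrative assumption for the case where no single window minimizes both.

```python
def select_target_window(windows):
    """windows: list of (stutter_count, total_boost_ms) per time window.
    Returns the index of the window minimizing a combined cost."""
    def cost(metrics):
        stutters, boost_ms = metrics
        # Weight stutters heavily: avoiding jank matters more than a few
        # hundred ms of extra boost time in this sketch.
        return stutters * 1000.0 + boost_ms
    return min(range(len(windows)), key=lambda i: cost(windows[i]))

best = select_target_window([(5, 300.0), (1, 900.0), (1, 400.0), (3, 100.0)])
# Window 2 wins: one stutter, and the least boost time among the
# one-stutter windows.
```

The DQN outputs from the selected window are then the ones rewarded when adjusting the network weights.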
In summary, the present disclosure first obtains a target drawing parameter corresponding to a target display interface frame, where the target drawing parameter indicates time information for drawing the target display interface frame and the target display interface frame is the interface displayed by the terminal at the current moment; then determines control parameters from the target drawing parameter and a drawing parameter set through a pre-trained processing model, where the drawing parameter set includes drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame; and finally controls the frequency of the processor of the terminal according to the control parameters. Because the control parameters are determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor can be dynamically controlled and a possible stutter of the terminal can be handled in advance, reducing the power consumption of the terminal while reducing stutter of display interface frames.
Fig. 6 is a block diagram illustrating a control apparatus of a terminal according to an exemplary embodiment. As shown in fig. 6, the apparatus 400 includes an acquisition module 401, a determination module 402, and a control module 403.
The obtaining module 401 is configured to obtain a target drawing parameter corresponding to the target display interface frame.
The target drawing parameters are used for indicating time information for drawing a target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment.
A determination module 402 configured to determine control parameters from the target rendering parameters and the set of rendering parameters by a pre-trained process model.
The drawing parameter set comprises drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, wherein the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frame.
A control module 403 configured to control the frequency of the processor of the terminal according to the control parameters.
Optionally, the processing model is a supervised learning model, and the determining module 402 is configured to:
and inputting the target drawing parameters and the drawing parameter groups into a supervised learning model to obtain control parameters output by the supervised learning model.
The control parameters characterize the stutter degree of the display interface frame following the target display interface frame.
The control module 403 is configured to:
if the control parameter meets the preset condition, the frequency of the processor is increased.
Optionally, the control module 403 is configured to:
and determining a target frequency according to the control parameter, and controlling the frequency of the processor to rise to the target frequency. And/or the number of the groups of groups,
and determining target time according to the control parameters, and controlling the frequency of the processor to rise and maintain the target time.
Optionally, the supervised learning model is trained by:
a sample input set and a sample output set are obtained.
Each sample input in the sample input set comprises a training drawing parameter and a training drawing parameter set corresponding to the training drawing parameter, the sample output set comprises a sample output corresponding to each sample input, and each sample output comprises a training control parameter corresponding to the corresponding training drawing parameter.
And taking the sample input set as the input of a preset long-short-term memory network LSTM, and taking the sample output set as the output of the LSTM so as to obtain a supervised learning model.
Optionally, the processing model is a reinforcement learning model, and the determining module 402 is configured to:
inputting the target drawing parameters and the drawing parameter groups into the reinforcement learning model to obtain control parameters output by the reinforcement learning model.
Wherein the control parameters include control time and control duration.
The control module 403 is configured to:
Raising the frequency of the processor to the target frequency at the control time and maintaining the control duration.
Optionally, the reinforcement learning model is trained by:
and inputting training drawing parameters and training drawing parameter sets corresponding to the display interface frames into a preset DQN network in a specified time range so as to control the frequency of a processor according to the output of the DQN network, wherein the specified time range comprises a plurality of time windows.
And obtaining the output of the DQN network in each time window, and the stutter count and total frequency-raising time corresponding to each time window.
And adjusting the DQN network according to the output of the DQN network in a target time window, where the stutter count and total frequency-raising time corresponding to the target time window satisfy a preset condition.
And repeatedly executing the steps of inputting training drawing parameters and training drawing parameter sets corresponding to the display interface frames into a preset DQN network in a specified time range, controlling the frequency of a processor according to the output of the DQN network, and adjusting the DQN network according to the output of the DQN network in a target time window so as to obtain a reinforcement learning model.
Optionally, inputting training drawing parameters and training drawing parameter sets corresponding to the display interface frame into a preset DQN network within a specified time range, so as to control the frequency of the processor according to the output of the DQN network, including:
Inputting a first training drawing parameter and a first training drawing parameter group corresponding to a first display interface frame into the DQN network to obtain a first training control parameter, wherein the first display interface frame is any frame interface displayed by a terminal in a specified time range.
And controlling the frequency of the processor according to the first training control parameters to obtain a second training drawing parameter and a second training drawing parameter set corresponding to a second display interface frame, wherein the second display interface frame is the next display interface frame of the first display interface frame.
And updating the first display interface frame into a second display interface frame.
And repeatedly executing the steps of inputting the first training drawing parameters and the first training drawing parameter groups corresponding to the first display interface frame into the DQN network within the appointed time range to obtain the first training control parameters, and updating the first display interface frame into the second display interface frame.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be repeated here.
In summary, the present disclosure first obtains a target drawing parameter corresponding to a target display interface frame, where the target drawing parameter indicates time information for drawing the target display interface frame and the target display interface frame is the interface displayed by the terminal at the current moment; then determines control parameters from the target drawing parameter and a drawing parameter set through a pre-trained processing model, where the drawing parameter set includes drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame; and finally controls the frequency of the processor of the terminal according to the control parameters. Because the control parameters are determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor can be dynamically controlled and a possible stutter of the terminal can be handled in advance, reducing the power consumption of the terminal while reducing stutter of display interface frames.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the control method of the terminal provided by the present disclosure.
Fig. 7 is a block diagram of an electronic device 800, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the control method of the terminal described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the terminal control method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the electronic device 800 to perform the terminal control method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions which, when executed by the programmable apparatus, perform the terminal control method described above.
In summary, the present disclosure first obtains a target drawing parameter corresponding to a target display interface frame, where the target drawing parameter indicates time information for drawing the target display interface frame, and the target display interface frame is the interface displayed by the terminal at the current moment; then determines a control parameter through a pre-trained processing model according to the target drawing parameter and a drawing parameter set, where the drawing parameter set includes the drawing parameter corresponding to each of a preset number of historical display interface frames preceding the target display interface frame; and finally controls the frequency of a processor of the terminal according to the control parameter. Because the control parameter is determined by the processing model from the target drawing parameter and the drawing parameter set, the frequency of the processor can be controlled dynamically according to the control parameter, and possible stuttering of the terminal is handled in advance, so that the power consumption of the terminal is reduced while stuttering of display interface frames is also reduced.
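As an illustration only, and not the disclosed implementation, the control loop summarized above can be sketched in Python. The frame budget, history-window size, 0.9 threshold, and the simple blending heuristic standing in for the pre-trained processing model are all assumptions introduced here:

```python
from collections import deque

FRAME_BUDGET_MS = 16.7   # assumed 60 Hz vsync deadline
WINDOW = 8               # assumed "preset number" of historical frames

def predict_stutter(target_ms, history_ms):
    """Heuristic stand-in for the pre-trained processing model: blends the
    current frame's draw time with the historical average and expresses the
    result as a fraction of the frame budget (a "stutter degree")."""
    avg_hist = sum(history_ms) / len(history_ms) if history_ms else target_ms
    return (0.7 * target_ms + 0.3 * avg_hist) / FRAME_BUDGET_MS

def control_step(target_ms, history, boost):
    """One control iteration: when the model output crosses the preset
    condition, raise the processor frequency before the next frame."""
    if predict_stutter(target_ms, history) > 0.9:   # assumed threshold
        boost()   # e.g. request a higher CPU frequency from the governor
    history.append(target_ms)

history = deque(maxlen=WINDOW)   # the drawing parameter set (sliding window)
boosted = []
for t in [8.0, 9.0, 15.5, 16.2, 17.0]:   # per-frame draw times in ms
    control_step(t, history, lambda: boosted.append(True))
# Only the final, near-deadline frame triggers a boost.
```

In the disclosure, the roles of the heuristic and threshold are played by the trained processing model and its control parameters; the sketch only shows the shape of the per-frame loop.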
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for controlling a terminal, the method comprising:
obtaining a target drawing parameter corresponding to a target display interface frame; the target drawing parameters are used for indicating time information for drawing the target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment;
determining control parameters through a pre-trained processing model according to the target drawing parameters and a drawing parameter set; wherein the drawing parameter set comprises the drawing parameter corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, and the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frames;
And controlling the frequency of a processor of the terminal according to the control parameter.
2. The method of claim 1, wherein the processing model is a supervised learning model, and wherein the determining control parameters through a pre-trained processing model according to the target drawing parameters and the drawing parameter set comprises:
inputting the target drawing parameters and the drawing parameter set into the supervised learning model to obtain the control parameters output by the supervised learning model; wherein the control parameters are used for representing the stuttering degree of the display interface frame following the target display interface frame;
the controlling the frequency of the processor of the terminal according to the control parameter comprises the following steps:
and if the control parameter meets a preset condition, increasing the frequency of the processor.
3. The method of claim 2, wherein increasing the frequency of the processor if the control parameter satisfies a preset condition comprises:
determining a target frequency according to the control parameter, and controlling the frequency of the processor to rise to the target frequency; and/or,
determining a target time according to the control parameter, and controlling the frequency of the processor to rise and remain raised for the target time.
4. A method according to claim 2 or 3, characterized in that the supervised learning model is trained by:
obtaining a sample input set and a sample output set, wherein each sample input in the sample input set comprises a training drawing parameter and a training drawing parameter set corresponding to the training drawing parameter, the sample output set comprises a sample output corresponding to each sample input, and each sample output comprises a training control parameter corresponding to the corresponding training drawing parameter;
and taking the sample input set as the input of a preset long short-term memory (LSTM) network, and taking the sample output set as the output of the LSTM, so as to obtain the supervised learning model.
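For illustration, a minimal NumPy sketch of the inference side of such an LSTM-based model is given below. The single cell, its randomly initialised gate weights, the hidden size, and the normalisation by an assumed 16.7 ms frame budget are all illustrative stand-ins; training per claim 4 would fit the weights to the sample input and output sets:

```python
import numpy as np

rng = np.random.default_rng(0)
H, D = 4, 1   # illustrative hidden size; one draw time per time step

# Randomly initialised gate weights; actual training would fit these to the
# sample input/output sets, e.g. by backpropagation through time.
W = {g: rng.normal(scale=0.1, size=(H, H + D)) for g in "ifco"}
b = {g: np.zeros(H) for g in "ifco"}
w_out = rng.normal(scale=0.1, size=H)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_stutter_degree(draw_times_ms):
    """Run a sequence of per-frame draw times through one LSTM cell and map
    the final hidden state to a scalar "stutter degree" in (0, 1)."""
    h, c = np.zeros(H), np.zeros(H)
    for t in draw_times_ms:
        x = np.concatenate([h, [t / 16.7]])   # normalise by frame budget
        i = sigmoid(W["i"] @ x + b["i"])      # input gate
        f = sigmoid(W["f"] @ x + b["f"])      # forget gate
        g = np.tanh(W["c"] @ x + b["c"])      # candidate cell state
        o = sigmoid(W["o"] @ x + b["o"])      # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return float(sigmoid(w_out @ h))

degree = lstm_stutter_degree([8.0, 9.5, 15.8, 16.4])
```

The scalar output plays the role of the control parameter of claim 2: a value crossing a preset condition would trigger the frequency increase.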
5. The method of claim 1, wherein the processing model is a reinforcement learning model, and wherein the determining control parameters through a pre-trained processing model according to the target drawing parameters and the drawing parameter set comprises:
inputting the target drawing parameters and the drawing parameter set into the reinforcement learning model to obtain the control parameters output by the reinforcement learning model; wherein the control parameters comprise a control time and a control duration;
The controlling the frequency of the processor of the terminal according to the control parameter comprises the following steps:
raising the frequency of the processor to a target frequency at the control time, and maintaining it for the control duration.
6. The method of claim 5, wherein the reinforcement learning model is trained by:
within a specified time range, inputting training drawing parameters and a training drawing parameter set corresponding to a display interface frame into a preset DQN network, so as to control the frequency of the processor according to the output of the DQN network, wherein the specified time range comprises a plurality of time windows;
acquiring the output of the DQN network in each time window, and the stutter count and total frequency-boost duration corresponding to each time window;
adjusting the DQN network according to the output of the DQN network in a target time window, wherein the stutter count and total frequency-boost duration corresponding to the target time window satisfy a preset condition;
and repeatedly executing the steps of inputting training drawing parameters and training drawing parameter sets corresponding to the display interface frames into a preset DQN network in a specified time range, controlling the frequency of the processor according to the output of the DQN network, and adjusting the DQN network according to the output of the DQN network in a target time window so as to obtain the reinforcement learning model.
7. The method of claim 6, wherein the inputting, within the specified time range, training drawing parameters and a training drawing parameter set corresponding to a display interface frame into a preset DQN network, so as to control the frequency of the processor according to the output of the DQN network, comprises:
inputting a first training drawing parameter and a first training drawing parameter set corresponding to a first display interface frame into the DQN network to obtain a first training control parameter, wherein the first display interface frame is any frame of the interface displayed by the terminal within the specified time range;
controlling the frequency of the processor according to the first training control parameter to obtain a second training drawing parameter and a second training drawing parameter set corresponding to a second display interface frame, wherein the second display interface frame is the next display interface frame of the first display interface frame;
updating the first display interface frame to the second display interface frame;
and repeatedly executing, within the specified time range, the steps from inputting the first training drawing parameters and the first training drawing parameter set corresponding to the first display interface frame into the DQN network to obtain the first training control parameters, to updating the first display interface frame to the second display interface frame.
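The training loop of claims 6 and 7 can be illustrated with a deliberately simplified stand-in: tabular Q-learning in place of the DQN, a toy frame simulator in place of the terminal, and assumed reward weights that jointly penalise stutters and boost time, in the spirit of the preset condition on stutter count and total frequency-boost duration:

```python
import random

random.seed(0)

ACTIONS = (0, 1)     # 0: keep current frequency, 1: boost
BUDGET = 16.7        # assumed frame budget in ms (60 Hz display)

def simulate_frame(load_ms, boosted):
    """Toy environment standing in for the terminal: boosting the
    processor shortens the frame's draw time."""
    draw = load_ms * (0.6 if boosted else 1.0)
    return draw > BUDGET   # True means the frame stuttered

def reward(stuttered, boosted):
    # Penalise stutters heavily and boosting mildly, mirroring the joint
    # constraint on stutter count and total boost duration per window.
    return -10.0 * stuttered - 2.0 * boosted

def state_of(load_ms):
    return min(int(load_ms // 4), 5)   # coarse bucket of the draw time

q = {}   # (state, action) -> value; tabular stand-in for the DQN
alpha, gamma, eps = 0.2, 0.9, 0.2

loads = [random.uniform(6.0, 22.0) for _ in range(5000)]
for k in range(len(loads) - 1):
    s = state_of(loads[k])
    if random.random() < eps:                         # explore
        a = random.choice(ACTIONS)
    else:                                             # exploit
        a = max(ACTIONS, key=lambda a_: q.get((s, a_), 0.0))
    r = reward(simulate_frame(loads[k], a == 1), a == 1)
    s2 = state_of(loads[k + 1])                       # next frame's state
    best_next = max(q.get((s2, a_), 0.0) for a_ in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# After training, slow recent frames should favour boosting.
prefers_boost = q.get((5, 1), 0.0) > q.get((5, 0), 0.0)
```

The per-step structure mirrors claim 7: feed the current frame's parameters to the network, act on the processor, observe the next frame, and repeat; a real implementation would replace the table with a neural Q-function and use real draw-time measurements.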
8. A control apparatus of a terminal, the apparatus comprising:
the acquisition module is configured to acquire target drawing parameters corresponding to the target display interface frame; the target drawing parameters are used for indicating time information for drawing the target display interface frame, and the target display interface frame is an interface displayed by the terminal at the current moment;
a determining module configured to determine control parameters by a pre-trained process model according to the target drawing parameters and the drawing parameter set; the drawing parameter set comprises drawing parameters corresponding to each historical display interface frame in a preset number of historical display interface frames before the target display interface frame, wherein the drawing parameters are used for indicating time information for drawing the corresponding historical display interface frames;
and the control module is configured to control the frequency of the processor of the terminal according to the control parameter.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1-7.
CN202010876059.2A 2020-08-25 2020-08-25 Terminal control method and device, storage medium and electronic equipment Active CN112130940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010876059.2A CN112130940B (en) 2020-08-25 2020-08-25 Terminal control method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112130940A CN112130940A (en) 2020-12-25
CN112130940B true CN112130940B (en) 2023-11-17

Family

ID=73848631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010876059.2A Active CN112130940B (en) 2020-08-25 2020-08-25 Terminal control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112130940B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667407B (en) * 2021-01-18 2023-09-19 成都国科微电子有限公司 Processor parameter adjusting method and device, electronic equipment and storage medium
CN115994007A (en) * 2021-10-18 2023-04-21 华为技术有限公司 Animation effect display method and electronic equipment
CN115097927A (en) * 2022-06-27 2022-09-23 中国银行股份有限公司 Android system-oriented processor frequency modulation method and device
CN116737357A (en) * 2022-09-28 2023-09-12 荣耀终端有限公司 Method and device for adjusting processor frequency
CN117687495A (en) * 2024-02-04 2024-03-12 荣耀终端有限公司 Data acquisition method, training method and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020108309A1 (en) * 2018-11-27 2020-06-04 深圳前海达闼云端智能科技有限公司 Method and apparatus for controlling device movement, storage medium, and electronic device
CN111338838A (en) * 2020-03-23 2020-06-26 Oppo广东移动通信有限公司 Method for controlling frequency of central processing unit and related device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Frame drawing time prediction for a parallel rendering system based on Caffe; 丁祝祥; 应三丛; Computer Applications (Issue S2); full text *
Design of a multi-channel ultra-high-definition video encoding system based on a many-core platform; 姚佳; 高志勇; 张小云; Video Engineering (Issue 04); full text *

Also Published As

Publication number Publication date
CN112130940A (en) 2020-12-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant