CN113655880A - Interface rendering method and device, terminal equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113655880A
CN113655880A (application CN202110887978.4A)
Authority
CN
China
Prior art keywords
rendering
control object
display interface
interaction
virtual control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110887978.4A
Other languages
Chinese (zh)
Inventor
迟民强
商泽利
邹良辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110887978.4A priority Critical patent/CN113655880A/en
Publication of CN113655880A publication Critical patent/CN113655880A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/325Power saving in peripheral device
    • G06F1/3265Power saving in display device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

The embodiments of the present application disclose an interface rendering method and apparatus, a terminal device, and a computer-readable storage medium. The method includes: determining an interaction state of a virtual control object in a display interface, where the interaction state includes a first interaction state or a second interaction state, the first interaction state being a state in which the interaction data satisfy a first threshold condition, and the second interaction state being a state in which the interaction data satisfy a second threshold condition; and, if the virtual control object is in the first interaction state, rendering the display interface according to a rendering strategy corresponding to the first interaction state, where the rendering picture quality of the display interface when the virtual control object is in the first interaction state is lower than when it is in the second interaction state. The interface rendering method and apparatus, terminal device, and computer-readable storage medium can reduce the power consumption generated by rendering the display interface.

Description

Interface rendering method and device, terminal equipment and computer readable storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to an interface rendering method and apparatus, a terminal device, and a computer-readable storage medium.
Background
With the rapid development of terminal technology, many terminal devices (such as mobile phones, tablet computers, and smart wearable devices) have powerful image processing capabilities and can render and display high-quality interface images. However, rendering high-quality interface images also places a heavy power-consumption burden on the terminal device.
Disclosure of Invention
The embodiments of the present application disclose an interface rendering method and apparatus, a terminal device, and a computer-readable storage medium, which can reduce the power consumption generated by rendering a display interface.
The embodiment of the application discloses an interface rendering method, which comprises the following steps:
determining an interaction state of a virtual control object in a display interface, wherein the interaction state comprises a first interaction state or a second interaction state, the first interaction state is a state in which interaction data meet a first threshold condition, the second interaction state is a state in which interaction data meet a second threshold condition, and a first threshold in the first threshold condition is less than or equal to a second threshold in the second threshold condition;
if the virtual control object is in a first interaction state, rendering the display interface according to a rendering strategy corresponding to the first interaction state;
and when the virtual control object is in the first interaction state, the rendering picture quality of the display interface is lower than that of the display interface when the virtual control object is in the second interaction state.
The embodiments of the present application disclose an interface rendering apparatus, including:
the state determination module is used for determining an interaction state of a virtual control object in a display interface, wherein the interaction state comprises a first interaction state or a second interaction state, the first interaction state is a state in which interaction data meet a first threshold condition, the second interaction state is a state in which the interaction data meet a second threshold condition, and a first threshold value in the first threshold condition is smaller than or equal to a second threshold value in the second threshold condition;
the rendering module is used for rendering the display interface according to a rendering strategy corresponding to the first interaction state if the virtual control object is in the first interaction state;
and when the virtual control object is in the first interaction state, the rendering picture quality of the display interface is lower than that of the display interface when the virtual control object is in the second interaction state.
The embodiments of the present application disclose a terminal device, which includes a memory and a processor, where the memory stores a computer program which, when executed by the processor, causes the processor to implement the method described above.
An embodiment of the application discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method as described above.
According to the interface rendering method and apparatus, terminal device, and computer-readable storage medium described above, the interaction state of the virtual control object in the display interface is determined, and when the virtual control object is in the first interaction state, the display interface is rendered according to the rendering strategy corresponding to the first interaction state, where the rendering picture quality of the display interface when the virtual control object is in the first interaction state is lower than when it is in the second interaction state. Thus, when the virtual control object is in the first interaction state, the power consumption generated by rendering the display interface is reduced by reducing its rendering picture quality.
In addition, because the rendering picture quality of the display interface is reduced only when the virtual control object is in the first interaction state, in which interaction is infrequent, the adverse effect of the quality reduction on the user can be kept small.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a diagram of an application scenario of an interface rendering method in one embodiment;
FIG. 2 is a flow diagram of a method for interface rendering in one embodiment;
FIG. 3 is a flow diagram for determining that a virtual control object is in a first interaction state, under an embodiment;
FIG. 4 is a schematic illustration of a game interface in one embodiment;
FIG. 5 is a schematic diagram of a rendering pipeline in one embodiment;
FIG. 6 is a flowchart of an interface rendering method in another embodiment;
FIG. 7 is a block diagram of an interface rendering apparatus in one embodiment;
fig. 8 is a block diagram of a terminal device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising", and any variations thereof, in the examples and figures of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first core may be referred to as a second core, and similarly, a second core may be referred to as a first core, without departing from the scope of the present application. The first core and the second core are both cores in the processor, but are not the same core.
Fig. 1 is an application scene diagram of an interface rendering method in an embodiment. As shown in fig. 1, a display interface may be displayed on a screen of a terminal device 10, and a user may interact with a virtual control object in the display interface to control that virtual control object. The terminal device 10 may determine the interaction state of the virtual control object in the display interface. The interaction state may be a first interaction state or a second interaction state, where the first interaction state is a state in which the interaction data satisfy a first threshold condition, and the second interaction state is a state in which the interaction data satisfy a second threshold condition. If the virtual control object is in the first interaction state, in which there is little interaction with the virtual control object, the display interface is rendered according to a rendering strategy corresponding to the first interaction state. When the virtual control object is in the first interaction state, the rendering picture quality of the display interface of the terminal device 10 is lower than when the virtual control object is in the second interaction state. By reducing the rendering picture quality of the display interface, the power consumption generated by rendering the display interface can be reduced, while the adverse effect of the quality reduction on the user is kept as small as possible.
As shown in fig. 2, in one embodiment, an interface rendering method is provided, which may be applied to a terminal device, which may include, but is not limited to, a mobile phone, a smart wearable device, a tablet Computer, a notebook Computer, a Personal Computer (PC), a vehicle-mounted terminal, a smart television, and the like. The method may comprise the steps of:
step 210, determining the interaction state of the virtual control object in the display interface.
The virtual control object may refer to a virtual object capable of interacting with a user, that is, a virtual object that the user can control through interactive operations. As an optional implementation, it may first be determined whether the first application program to which the display interface belongs is a target application program, i.e., an application program that generates relatively high power consumption when rendering its display interface. For example, target application programs may include, but are not limited to, applications with rich picture content and high picture-quality requirements, such as game applications, drawing applications, design applications, and AR (Augmented Reality)/VR (Virtual Reality) applications. If the first application program to which the display interface belongs is a target application program, the interaction state of the virtual control object in the display interface may be determined.
The display interface may include one or more virtual control objects, and the user may select one virtual control object to control, or control a plurality of virtual control objects simultaneously, through interactive operations. The virtual control object may be, but is not limited to, a virtual character, a virtual building, a virtual object, a virtual interactive component, and the like.
The above interaction operations may include, but are not limited to, touch interaction operations, interaction operations performed through an input device (such as a remote controller, a mouse, a keyboard, and the like), gesture interaction operations, line-of-sight interaction operations, and the like, and the specific operation manner of the interaction operations is not limited in the embodiments of the present application.
The terminal device may determine the interaction state of the virtual control object in the display interface. The interaction state may include a second interaction state or a first interaction state, where the interaction frequency corresponding to the first interaction state may be lower than that corresponding to the second interaction state. The second interaction state may refer to a state in which the user interacts with the virtual control object frequently, and the first interaction state may refer to a state in which the user does not interact with the virtual control object, or interacts with it only infrequently; for example, a game character in a game application that is in an idle (AFK) state may belong to the first interaction state.
The first interactive state is a state in which the interactive data meets a first threshold condition, and the second interactive state is a state in which the interactive data meets a second threshold condition.
As one implementation, the terminal device may obtain the interaction data in real time and determine whether the interaction data satisfy the first threshold condition or the second threshold condition. The interaction data may include, but are not limited to, one or more of operation trigger data, the interaction priority corresponding to the first application program, interaction time, and the like.
The operation trigger data refers to data generated when the user triggers interactive operations, and may be determined from the collected operation data; it may include the number of operation triggers, the operation trigger frequency, and the like. Different operation modes produce different operation data. For example, if the interactive operation is a touch interactive operation, the operation data may be touch data (such as touch position, touch frequency, and touch direction) collected by the touch screen of the terminal device; if the interactive operation is a line-of-sight interactive operation, the operation data may be eye data (such as gaze direction and number of blinks) collected by a gaze tracking device; if the interactive operation is a gesture interactive operation, the operation data may be gesture motion data (such as gesture shape and gesture movement direction) collected by a gesture tracking device (such as a gesture tracking sensor or a camera). Optionally, if no operation data is collected within a certain time period (e.g., 30 seconds, 20 seconds, or 35 seconds), indicating that the user performed no interactive operation during that period, it may be determined that the virtual control object is in the first interaction state.
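As an illustrative sketch only (the patent contains no code, and the window length and function name here are assumptions), the "no operation data within a certain time period" check described above might look like:

```python
# Hypothetical sketch: if no operation data has been collected for a full
# idle window, the virtual control object is treated as being in the first
# interaction state. The 30-second window is an illustrative assumption.
IDLE_WINDOW_SECONDS = 30.0

def in_first_state_by_idle(last_operation_time: float, now: float) -> bool:
    """True if no interactive operation was collected within the idle window."""
    return (now - last_operation_time) >= IDLE_WINDOW_SECONDS
```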
The interaction priority corresponding to the first application program may be determined according to the task priority corresponding to the first application program. Optionally, the task priority is determined by the urgency of the first application program while it is running, and a more urgent application program may have a higher task priority. For example, if the first application program is a game application and the terminal device is not currently running any other application program, the game application may have the highest task priority; if the terminal device is also running a call application, the task priority of the game application is lower than that of the call application, because the call application is more urgent than the game application.
In some embodiments, the interaction priority corresponding to the first application program may directly be its task priority; alternatively, a correspondence between interaction priorities and task priorities may be established, and after the task priority corresponding to the first application program is determined, the corresponding interaction priority is determined based on that correspondence.
The interaction time may include at least one of the operation time of each interactive operation performed by the user, the duration of the interactive operations, and the like. If the interval between two interactive operations is within an interval threshold, the operations may be considered to be performed continuously; for example, if the user performs two interactive operations within 0.5 seconds, the two operations may be considered continuous.
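The continuity check above can be sketched as follows (a minimal illustration; the function name is an assumption, and the 0.5-second threshold follows the example in the text):

```python
# Hypothetical sketch: two interactive operations whose time interval is
# within the interval threshold are treated as performed continuously.
INTERVAL_THRESHOLD = 0.5  # seconds, per the example in the text

def operations_are_continuous(t1: float, t2: float) -> bool:
    """True if two operation timestamps fall within the interval threshold."""
    return abs(t2 - t1) <= INTERVAL_THRESHOLD
```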
Different first thresholds and second thresholds may be set for different kinds of interaction data, and for the first threshold and second threshold corresponding to the same kind of interaction data, the first threshold included in the first threshold condition is smaller than or equal to the second threshold included in the second threshold condition. For example, the number of operation triggers may correspond to a first count threshold and a second count threshold, with the first count threshold smaller than or equal to the second count threshold; the operation trigger frequency may correspond to a first frequency threshold and a second frequency threshold, with the first frequency threshold smaller than or equal to the second frequency threshold; the interaction priority may correspond to a first priority threshold and a second priority threshold, with the first priority threshold smaller than or equal to the second priority threshold; and so on. The first threshold condition corresponding to the first interaction state may be set according to the first thresholds of one or more kinds of interaction data, and the second threshold condition corresponding to the second interaction state may be set according to the second thresholds of one or more kinds of interaction data.
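As a hypothetical sketch of the threshold comparison for one kind of interaction data (operation trigger frequency): the concrete threshold values below are illustrative assumptions; the patent only requires that the first threshold be less than or equal to the second, and the handling of frequencies between the two thresholds is likewise an assumption.

```python
# First threshold < second threshold, as required above; values are invented.
FIRST_FREQ_THRESHOLD = 1.0   # triggers/second, first threshold condition
SECOND_FREQ_THRESHOLD = 3.0  # triggers/second, second threshold condition

def classify_state(trigger_frequency: float, previous: str = "second") -> str:
    """Map an operation trigger frequency to a 'first' or 'second' state."""
    if trigger_frequency < FIRST_FREQ_THRESHOLD:
        return "first"   # infrequent interaction -> reduced picture quality
    if trigger_frequency >= SECOND_FREQ_THRESHOLD:
        return "second"  # frequent interaction -> full picture quality
    return previous      # between thresholds: keep previous state (assumption)
```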
And step 220, if the virtual control object is in the first interactive state, rendering the display interface according to a rendering strategy corresponding to the first interactive state.
When the virtual control object is in different interaction states, different rendering strategies can be adopted for rendering the display interface, so as to balance the visual experience that the display interface brings to the user against power consumption. The rendering picture quality of the display interface when the virtual control object is in the first interaction state is lower than when it is in the second interaction state; that is, the power consumption of rendering the display interface in the first interaction state is lower than in the second interaction state. When the virtual control object is in the second interaction state, the user is paying attention to the displayed picture, so a display interface of higher picture quality can be rendered, bringing a better visual experience during interaction. When the virtual control object is in the first interaction state, the user's attention to the displayed picture has decreased, and rendering the display interface at high picture quality would waste power unnecessarily, which is detrimental to the battery life of the terminal device.
When the virtual control object is in the first interaction state, the display interface may be rendered according to a rendering strategy corresponding to the first interaction state, which may be a strategy that reduces the rendering picture quality. For example, the rendering strategy may include, but is not limited to, at least one of: a strategy of reducing the rendering frame rate, i.e., reducing the number of display frames rendered per second; a strategy of reducing rendering precision, i.e., reducing the fineness of the picture content; a strategy of reducing the rendered content of the display interface, i.e., omitting some content (such as content far from the screen or unimportant scene content); a strategy of reducing the picture resolution, i.e., lowering the resolution of the displayed picture; and a strategy of reducing image-quality effects, i.e., reducing effects such as light and shadow effects or skill effects in a game picture.
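A hypothetical per-state strategy table, for illustration only: all parameter names and values below are invented, since the patent lists only the kinds of reductions (frame rate, precision, content, resolution, image-quality effects) without concrete numbers.

```python
# Hypothetical rendering strategies keyed by interaction state.
RENDER_STRATEGIES = {
    "first": {                    # reduced-quality, low-power strategy
        "frame_rate": 30,         # fewer rendered frames per second
        "resolution_scale": 0.5,  # lower picture resolution
        "shadow_effects": False,  # drop light/shadow and skill effects
        "cull_distance": 50.0,    # skip content far from the screen
    },
    "second": {                   # full-quality strategy
        "frame_rate": 60,
        "resolution_scale": 1.0,
        "shadow_effects": True,
        "cull_distance": 200.0,
    },
}

def strategy_for(state: str) -> dict:
    """Select the rendering strategy for the current interaction state."""
    return RENDER_STRATEGIES[state]
```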
In the embodiments of the present application, the rendering picture quality of the display interface when the virtual control object is in the first interaction state is lower than when it is in the second interaction state. When the virtual control object is in the first interaction state, reducing the rendering picture quality of the display interface reduces the power consumption generated by rendering it without affecting the normal display of the picture; and because the virtual control object is in the first interaction state, the adverse effect of the quality reduction on the user can be kept small.
As shown in FIG. 3, in one embodiment, the step of determining the interaction state of the virtual control object in the display interface may comprise the steps of:
step 302, operation trigger data within a first time period is acquired.
The terminal device may obtain the operation data within the first time period and analyze them to obtain the operation trigger data for that period. The first time period may be set according to actual requirements, for example, 20 seconds, 30 seconds, or 27 seconds. The operation trigger data may be used to characterize how interactive operations are triggered. Optionally, the operation trigger data may include at least one of the number of operation triggers and the operation trigger frequency, where the number of operation triggers may refer to the total number of times interactive operations are triggered within the first time period, and the operation trigger frequency may refer to the number of times interactive operations are triggered per unit time (e.g., 1 second, 3 seconds, etc.).
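Deriving the trigger count and trigger frequency from raw operation timestamps might be sketched as follows (names and the timestamp representation are assumptions for illustration):

```python
# Hypothetical sketch: count operation triggers in the first time period and
# derive the per-second trigger frequency from the window duration.
def trigger_stats(timestamps, window_start: float, window_end: float):
    """Return (trigger count, triggers per second) for one time window."""
    in_window = [t for t in timestamps if window_start <= t < window_end]
    duration = window_end - window_start
    return len(in_window), len(in_window) / duration
```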
Step 304, if the operation trigger data meets a first threshold condition, determining that the virtual control object in the display interface is in a first interaction state.
After the terminal device obtains the operation trigger data in the first time period, it can judge whether the operation trigger data satisfy a first threshold condition. If they do, it indicates that the user is not currently paying attention to the display interface, and it can then be determined that the virtual control object in the display interface is in the first interaction state. Optionally, the first threshold condition may include at least one of the number of operation triggers being below a first count threshold and the operation trigger frequency being below a first frequency threshold. The first count threshold and the first frequency threshold may be set according to actual requirements; for example, the first count threshold may be 2 times, 3 times, etc., and the first frequency threshold may be 1 time/second, etc., but they are not limited thereto.
In some embodiments, different first count thresholds and first frequency thresholds may be set for different application programs, according to the application type of the application program corresponding to the display interface. For example, for a game application, the user needs to perform interactive operations frequently to control the virtual control object, so a larger first count threshold and first frequency threshold may be set; for a drawing application, the user typically does not interact very frequently, so a smaller first count threshold and first frequency threshold may be set. Dynamically setting the first thresholds in the first threshold condition according to the application type allows the interaction state of the virtual control object to be determined more accurately.
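A per-application-type threshold lookup could be sketched as below; all threshold values and the type labels are illustrative assumptions, following only the direction stated above (larger thresholds for game applications, smaller ones for drawing applications):

```python
# Hypothetical first-threshold values chosen by application type.
FIRST_THRESHOLDS_BY_APP_TYPE = {
    "game":    {"count": 5, "frequency": 2.0},  # frequent taps are normal
    "drawing": {"count": 2, "frequency": 0.5},  # sparse interaction is normal
}
DEFAULT_FIRST_THRESHOLDS = {"count": 3, "frequency": 1.0}

def first_thresholds_for(app_type: str) -> dict:
    """Look up the first count/frequency thresholds for an application type."""
    return FIRST_THRESHOLDS_BY_APP_TYPE.get(app_type, DEFAULT_FIRST_THRESHOLDS)
```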
In some embodiments, after the terminal device acquires the operation trigger data in the first time period, it may compare them with historical operation trigger data to obtain the data difference between the two, where the historical operation trigger data may be the operation trigger data acquired in the same-length time period immediately before the first time period. The first threshold condition may also include the data difference between the operation trigger data in the first time period and the historical operation trigger data being greater than a first difference threshold. For example, if the terminal device obtains the operation trigger frequency between 1 minute 30 seconds and 2 minutes, it may compare it with the historical operation trigger frequency between 1 minute and 1 minute 30 seconds; if the frequency in the later window is less than that in the earlier window and the difference between the two is greater than or equal to 20 times/second, it may be determined that the virtual control object in the display interface is in the first interaction state. In other embodiments, the historical operation trigger data may also be operation trigger data acquired by the terminal device over a historical time period for the user's use of the application program corresponding to the display interface, for example, the operation trigger data for that application program over the past week. Comparing real-time operation trigger data with historical operation trigger data to determine the interaction state of the virtual control object can improve the accuracy of interaction-state identification.
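The historical-comparison variant of the first threshold condition can be sketched as follows (the function name is an assumption; the 20 times/second difference threshold follows the example in the text):

```python
# Hypothetical sketch: the condition holds when the current trigger frequency
# has dropped below the historical one by at least the difference threshold.
def meets_history_condition(current_freq: float, historical_freq: float,
                            diff_threshold: float = 20.0) -> bool:
    """True if current frequency fell at least diff_threshold below history."""
    return (current_freq < historical_freq
            and historical_freq - current_freq >= diff_threshold)
```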
In some embodiments, the operation trigger data may be trigger data of effective interactive operations, and may include at least one of the number of effective-operation triggers and the effective-operation trigger frequency, where an effective operation refers to an interactive operation that can be used to control the virtual control object. In some embodiments, the display interface may include an effective interaction area and an ineffective interaction area. When an interactive operation by the user is detected in the effective interaction area, the virtual control object may be controlled to perform an interactive action, and such an operation belongs to the effective interactive operations; when an interactive operation by the user is detected in the ineffective interaction area, no control of the virtual control object is triggered, and such an operation belongs to the ineffective interactive operations.
For example, as shown in fig. 4, take the display interface as the game application interface 410 and the virtual control object as the virtual vehicle 412 in the game application interface 410. The user may control the driving direction, driving speed, and so on of the virtual vehicle 412 by touching the interaction controls arranged in the effective interaction area 414 of the game application interface 410. When a touch operation on an interaction control in the effective interaction area 414 is detected, it may be determined that the touch operation belongs to the effective interactive operations. When a touch operation by the user is detected in an ineffective interaction area of the game application interface 410 (e.g., scene areas such as buildings and trees on both sides of the road), which does not control the driving direction, driving speed, or the like of the virtual vehicle 412, it may be determined that the touch operation belongs to the ineffective interactive operations.
The first threshold condition may include at least one of: the number of times effective operations are triggered is below a first count threshold, the frequency at which effective operations are triggered is below a first frequency threshold, no effective operation is detected, and so on. By detecting how the user triggers effective interactive operations on the virtual control object of the display interface, ineffective and effective interactive operations can be distinguished, and the interaction state of the virtual control object can be identified more accurately.
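As a minimal sketch of the check described above — with all function names and threshold values being illustrative assumptions, not taken from the patent — the first threshold condition over effective-operation trigger data might look like:

```python
# Illustrative sketch: deciding whether the virtual control object is in the
# first (low-attention) interaction state from effective-operation trigger
# data. Names and threshold values are hypothetical.

def is_first_interaction_state(trigger_count, trigger_frequency,
                               count_threshold=5, frequency_threshold=0.5):
    """Return True if the trigger data satisfies the first threshold
    condition: no effective operation, a trigger count below the first
    count threshold, or a trigger frequency below the first frequency
    threshold."""
    if trigger_count == 0:                       # no effective operation detected
        return True
    if trigger_count < count_threshold:          # below first count threshold
        return True
    if trigger_frequency < frequency_threshold:  # below first frequency threshold
        return True
    return False
```

Any one satisfied sub-condition suffices here, matching the "at least one of" wording above.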
In some embodiments, the step of determining the interaction state of the virtual control object in the display interface may include: acquiring a first interaction priority corresponding to a first application program to which a display interface belongs; and if the first interaction priority meets a first threshold condition, determining that the virtual control object in the display interface is in a first interaction state.
The higher the first interaction priority corresponding to the first application program to which the display interface belongs, the higher the possibility that the user pays attention to the picture quality of the display interface. The first threshold condition corresponding to the interaction priority comprises that the first interaction priority corresponding to the first application program to which the display interface belongs is smaller than a first priority threshold, or the first interaction priority is smaller than the second interaction priority corresponding to other currently running application programs.
When the first interaction priority is smaller than the first priority threshold, the user is relatively unlikely to be paying attention to the display interface; therefore, it can be determined that the virtual control object is in the first interaction state, and the rendering picture quality of the display interface is reduced to save power. If the terminal device is currently running other applications with higher task priorities, the second interaction priorities corresponding to those applications are higher than the first interaction priority corresponding to the first application to which the display interface belongs, and the user is more likely to be attending to those applications; therefore, it can likewise be determined that the virtual control object is in the first interaction state, and the rendering picture quality of the display interface is reduced to save power.
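The priority-based determination above can be sketched as follows; the function name and the shape of the priority values are assumptions for illustration only:

```python
# Illustrative sketch of the priority-based state check. Priorities are
# assumed to be comparable numbers; larger means higher priority.

def is_first_state_by_priority(first_priority, priority_threshold,
                               other_priorities=()):
    """First interaction state if the interface's first interaction
    priority is below the first priority threshold, or below the
    second interaction priority of any other running application."""
    if first_priority < priority_threshold:
        return True
    return any(first_priority < p for p in other_priorities)
```

Either branch alone is enough to select the first interaction state, mirroring the "or" in the threshold condition.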
In some embodiments, the step of determining the interaction state of the virtual control object in the display interface may include: if a hosting instruction for a virtual control object in the display interface is received, the virtual control object is determined to be in a first interaction state.
The hosting instruction can be used to instruct the terminal device to take over the control authority of the virtual control object, that is, the terminal device controls the interaction of the virtual control object and the user does not participate in the control. In some embodiments, a hosting button may be disposed in the display interface; when the hosting button is detected to be triggered by the user, a hosting instruction may be generated, and the virtual control object is determined to enter a hosting state according to the hosting instruction. For example, while using a game application on the terminal device, if the user temporarily encounters a situation (such as needing to leave, needing to talk with another person, or wanting to rest) but does not want to exit the game application directly, the user may trigger the hosting button, and the terminal device takes over control of the virtual control object.
Upon receiving a hosting instruction for a virtual control object in the display interface, it can be inferred that the user is not paying attention to the display interface, and therefore the virtual control object may be determined to be in the first interaction state.
It should be noted that other manners may also be adopted to determine the interaction state of the virtual control object, which is not limited to the manners described above. For example, a user image may be collected by a camera, and if the user image shows that the user is not near the terminal device, it may be determined that the virtual control object is in the first interaction state. For another example, a gaze tracking device may track the user's gaze focus, and if the gaze focus is not on the display screen of the terminal device for a certain duration, it may be determined that the virtual control object is in the first interaction state. The interaction state of the virtual control object may also be determined by a combination of any of the above manners.
In the embodiment of the application, the interaction state of the virtual control object in the display interface can be accurately determined, so that the display interface can be rendered more accurately, and the visual experience and the power consumption brought to a user by the display interface are considered.
In one embodiment, the step of rendering the display interface according to the rendering policy corresponding to the first interaction state may include: determining a rendering level corresponding to the first interaction state according to the interaction data; and rendering the display interface according to the rendering strategy matched with the rendering level.
A plurality of rendering levels can be preset, with different rendering levels matched to different rendering strategies and different rendering picture qualities. For example, a higher rendering level may correspond to higher rendering picture quality and a lower rendering level to lower quality; alternatively, the mapping may be reversed, with a higher rendering level corresponding to lower rendering picture quality.
When the virtual control object is in the first interaction state, the rendering level may be determined according to the interaction data, as an embodiment, if the first threshold condition only includes one first threshold, the rendering level may be determined according to a difference between the interaction data and the first threshold, and if the difference is larger, the rendering level with lower rendering picture quality may be selected. For example, the first threshold condition only includes a first frequency threshold, and if the acquired operation trigger frequency is less than the first frequency threshold and the difference between the acquired operation trigger frequency and the first frequency threshold is larger, which indicates that the user is less interested in the picture quality of the display interface, the rendering level with lower rendering picture quality may be selected.
In another embodiment, if the first threshold condition includes a plurality of first thresholds, the rendering level may be determined by combining a plurality of kinds of interaction data. Optionally, the more kinds of interaction data that satisfy their corresponding first threshold conditions, the more likely the user is to be indifferent to the picture quality of the display interface, so a rendering level with lower picture quality may be selected for rendering. For example, the first threshold condition may include a first frequency threshold, a first count threshold, a first priority threshold, and the like. If the acquired number of operation triggers is less than the first count threshold, the operation trigger frequency is less than the first frequency threshold, and the interaction priority is lower than the first priority threshold, a first rendering level may be selected; if the number of operation triggers is less than the first count threshold and the operation trigger frequency is less than the first frequency threshold, but the interaction priority is not lower than the first priority threshold, a second rendering level may be selected, where the rendering picture quality of the first rendering level may be lower than that of the second rendering level.
Optionally, the rendering level may also be determined in combination with a device state of the terminal device, which may include the remaining battery power of the terminal device and the like. With the same acquired interaction data, if the remaining power is lower, a rendering level with lower rendering picture quality may be selected to increase the power savings.
By dividing a plurality of rendering levels, the rendering strategy selection of the display interface can be more flexible, and the power consumption saving effect is improved.
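The level-selection logic above — more satisfied threshold conditions and lower remaining battery both pushing toward a lower-quality level — can be sketched as follows. The level numbering direction, thresholds, and battery cutoff are all illustrative assumptions:

```python
# Illustrative sketch of rendering-level selection. Here level 1 is the
# lowest picture quality (largest power saving) and level 3 the highest;
# the direction of the mapping is a design choice, as noted in the text.

def choose_rendering_level(conditions_met, battery_pct, low_battery_pct=20):
    """More satisfied first-threshold conditions, or a low battery,
    selects a rendering level with lower picture quality."""
    if conditions_met >= 3 or battery_pct < low_battery_pct:
        return 1   # lowest quality: most conditions met or battery low
    if conditions_met == 2:
        return 2   # intermediate quality reduction
    return 3       # mildest reduction
```

With identical interaction data, a lower battery percentage alone can still drop the level, matching the device-state variant above.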
In this embodiment of the present application, when the virtual control object is in the first interaction state, the display interface may be rendered according to a rendering policy corresponding to the first interaction state. In some embodiments, the rendering policy may include a frame-down rendering policy, and the step of rendering the display interface according to the rendering policy corresponding to the first interaction state may include: and reading rendering resources of the display interface according to a first time interval, and rendering the display interface according to the rendering resources, wherein the first time interval is greater than a rendering time interval corresponding to the display interface when the virtual control object is in the second interaction state.
When the virtual control object is in the first interaction state, the rendering frame rate of the display interface may be smaller than the rendering frame rate of the display interface when the virtual control object is in the second interaction state. The first time interval may refer to a rendering time interval of the display interface when the virtual control object is in the first interaction state, and may be a ratio of the unit time to the rendering frame rate. For example, when the virtual control object is in the first interaction state, the rendering frame rate is 30 frames/second, and the first time interval may be a ratio of 1 second to 30 frames/second, i.e., 33.33 milliseconds.
Rendering resources of the display interface may be read from the rendering buffer at a first time interval and the display interface may be rendered according to the rendering resources. The rendering resources may include vertex data, which may include vertex coordinates of virtual content of the display interface in a virtual space, vertex colors, texture coordinates corresponding to the vertex coordinates, and the like, and texture data, which may include surface texture maps of the virtual content. By lengthening the time interval for reading the rendering resources, the rendering frame rate of the display interface can be reduced, and the power consumption generated by rendering the display interface can be reduced.
In some embodiments, different first time intervals may be set for different rendering levels, and the lower the rendering picture quality corresponding to a rendering level, the longer the corresponding first time interval may be. The first time interval corresponding to the determined rendering level can be obtained, and the rendering resources of the display interface are read according to that first time interval.
Optionally, when it is determined that the virtual control object is in the first interaction state, it may be determined whether the current rendering frame rate of the display interface is greater than a frame rate threshold. If so, a first time interval may be determined according to the frame rate threshold, and rendering resources of the display interface may be read according to the first time interval, so that the rendering frame rate of the display interface is adjusted down to the frame rate threshold. The frame rate threshold may be set according to actual requirements, such as 30 frames/second, 28 frames/second, 35 frames/second, and so on. Optionally, if the current rendering frame rate is not greater than the frame rate threshold, the rendering frame rate of the display interface may be left unchanged, to avoid noticeable stuttering of the display interface caused by too low a rendering frame rate.
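The interval arithmetic and the down-only clamping described above can be sketched as follows (function names are illustrative; the 30 frames/second figure mirrors the example in the text):

```python
# Illustrative sketch of the frame-rate-reduction strategy.

def first_time_interval_ms(target_fps):
    """The rendering time interval is unit time divided by the frame
    rate, e.g. 1 second / 30 fps = 33.33 ms."""
    return 1000.0 / target_fps

def clamp_frame_rate(current_fps, fps_threshold=30):
    """Lower the rendering frame rate to the threshold only when it is
    above the threshold; otherwise leave it unchanged, so that an
    already-low frame rate is not reduced into visible stutter."""
    return fps_threshold if current_fps > fps_threshold else current_fps
```

For example, a 60 fps interface would be clamped to 30 fps (a 33.33 ms read interval), while a 24 fps interface would be left alone.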
In the embodiment of the application, by reducing the rendering frame rate of the display interface when the virtual control object is in the first interaction state, the power consumption generated by rendering the display interface in that state can be reduced, improving the battery life of the terminal device.
In some embodiments, the rendering policy corresponding to the first interaction state may include a reduced-precision rendering policy, and the step of rendering the display interface according to the rendering policy corresponding to the first interaction state may include: and rendering the virtual control object according to the first rendering precision, and rendering the virtual scene according to the second rendering precision to obtain a display interface.
The display interface can comprise a virtual control object and a virtual scene, the virtual control object is a virtual object which can be controlled by a user and interacts with the user, and the virtual scene refers to virtual content which cannot be controlled by the user in the display interface. Taking the game interface 410 of fig. 4 as an example, wherein the virtual control object is a virtual vehicle 412, the virtual scene may include virtual contents such as roads, buildings beside the roads, trees, clouds, etc. in the game interface 410.
The rendering precision may reflect a degree of fineness in the screen content presented to the user, and the user tends to pay more attention to the virtual control object than to the virtual scene, and therefore, when the virtual control object is in the first interaction state, the virtual control object may be rendered according to the first rendering precision, and the virtual scene may be rendered according to the second rendering precision, and optionally, the first rendering precision may be greater than the second rendering precision. The first rendering precision may be a rendering precision of the virtual control object when the virtual control object is in the second interaction state, that is, the rendering precision of the virtual control object in the first interaction state is the same as the rendering precision in the second interaction state, so that the visual effect of the virtual control object can be ensured. The second rendering precision of the virtual scene when the virtual control object is in the first interactive state may be smaller than the rendering precision of the virtual scene when the virtual control object is in the second interactive state, that is, the rendering precision of the virtual scene is reduced.
In some embodiments, different second rendering accuracies may be respectively set for different rendering levels, and the lower the rendering picture quality corresponding to a rendering level, the lower the corresponding second rendering accuracy may be. A second rendering precision corresponding to the rendering level can be obtained according to the determined rendering level, and the virtual scene is rendered according to the second rendering precision.
As an embodiment, the rendering precision may include a mapping precision, which may include at least one of a mapping size, a mapping resolution, and the like. When the virtual control object is determined to be in the first interaction state, first texture data with first mapping precision corresponding to the virtual control object and second texture data with second mapping precision corresponding to the virtual scene can be obtained, the virtual control object is rendered according to the first texture data, and the virtual scene is rendered according to the second texture data, so that a display interface is obtained.
The first mapping precision may be the mapping precision of the virtual control object when it is in the second interaction state, and the second mapping precision may be smaller than the mapping precision of the virtual scene when the virtual control object is in the second interaction state. In some embodiments, a plurality of precision levels can be set, and the mapping precision and the precision level may be negatively correlated (the greater the precision level, the lower the mapping precision) or positively correlated (the greater the precision level, the higher the mapping precision). Texture data of the display interface at each precision level can be stored in advance, and the map size and map resolution can differ across precision levels.
First texture data of a first precision level corresponding to the virtual control object and second texture data of a second precision level corresponding to the virtual scene can be obtained, where the first texture data of the first precision level corresponds to the first mapping precision and the second texture data of the second precision level corresponds to the second mapping precision. For example, precision levels 0 to 7 may be set, with level 0 corresponding to the finest map and level 7 to the coarsest; the virtual control object may then use texture data of precision level 1 whether it is in the second interaction state or the first interaction state, while the virtual scene may use texture data of precision level 2 when the virtual control object is in the second interaction state and texture data of precision level 4 when it is in the first interaction state.
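The 0–7 precision-level example above can be sketched directly; the function name is illustrative, and the specific levels (1, 2, 4) are the ones used in the text's example:

```python
# Illustrative sketch of texture precision-level selection, mirroring the
# 0-7 example (level 0 = finest map, level 7 = coarsest).

def select_precision_levels(first_state):
    """The virtual control object keeps precision level 1 in either
    interaction state; the virtual scene drops from level 2 to the
    coarser level 4 in the first interaction state."""
    object_level = 1
    scene_level = 4 if first_state else 2
    return object_level, scene_level
```

Keeping the control object's level fixed while coarsening only the scene is what preserves the perceived quality of the content the user actually watches.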
As shown in fig. 5, in some embodiments, the rendering of the display interface by the terminal device may be performed according to the stages of a rendering pipeline, which may include a vertex shader, a rasterization stage, a fragment shader, and a blending-and-test stage. The vertex shader mainly performs coordinate transformations, producing data such as vertex positions, vertex colors, and corresponding texture coordinates; rasterization converts the vertex shader's output into fragments, that is, converts geometric primitives into two-dimensional pixels; and the fragment shader computes the color of each rasterized pixel. The blending-and-test stage may include a testing stage and a blending stage: testing may include at least one of a scissor test, an alpha test, a stencil test, a depth test, and the like, and the tested fragments may then be blended, for example according to their alpha values to produce a semi-transparent effect. Finally, the generated rendering result may be placed in the frame buffer.
In the embodiment of the application, when the virtual control object is in the first interaction state, the virtual scene map with a smaller map size and a smaller map resolution can be obtained, so that the calculation amount of the fragment shader in calculating the color of the pixel point corresponding to the virtual scene can be reduced, the power consumption of the terminal device can be reduced, and the memory and the bandwidth of data transmission can be reduced.
In some embodiments, a distance between scene content of a virtual scene in the display interface and the virtual camera may be determined, and a second mapping precision corresponding to each scene content may be determined according to the distance. The virtual camera is determined when the display interface is rendered, and can be used for simulating a visual angle for observing virtual content of the display interface in a virtual space. As an embodiment, the virtual camera may be used as an origin of a world coordinate system in the virtual space, vertex coordinates of each scene content in the display interface are converted into the world coordinate system, and a distance between each scene content and the virtual camera is determined according to the converted vertex coordinates. The farther away the scene content is, the lower the corresponding second mapping accuracy may be, and the closer the scene content is, the higher the corresponding second mapping accuracy may be. For example, when the virtual control object is in the second interaction state, the scene content closer to the virtual control object may use texture data of accuracy level 2, and the scene content farther from the virtual control object may use texture data of accuracy level 5; when the virtual control object is in the first interaction state, the scene content at a close distance can use the texture data with the precision level of 4, and the scene content at a far distance can use the texture data with the precision level of 7, so that the whole picture effect of a display interface can be ensured, and the power consumption can be reduced.
For example, taking the game interface 410 in fig. 4 as an example, if a tree on the left side of the game interface 410 is closer to the virtual camera, and a cloud on the upper right side of the game interface 410 is farther from the virtual camera, the mapping accuracy of the tree may be higher than that of the cloud.
In some embodiments, the distance between the scene content of the virtual scene and the central area of the display interface may also be determined, and the second mapping precision corresponding to each piece of scene content determined according to that distance. The smaller the distance between the scene content and the central area, the higher the corresponding second mapping precision can be; the larger the distance, the lower the corresponding second mapping precision can be, that is, scene content closer to the edge of the display interface uses a coarser map. In this way the overall picture effect of the display interface can be preserved while power consumption is reduced.
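The camera-distance variant above can be sketched as a simple distance-to-level mapping. The near/far cutoffs are illustrative assumptions; the returned levels (4, 5, 7) echo the first-interaction-state levels in the text's example:

```python
import math

# Illustrative sketch: choosing the second mapping precision for scene
# content from its distance to the virtual camera (first interaction
# state). Cutoff distances are hypothetical.

def scene_precision_level(vertex, camera=(0.0, 0.0, 0.0),
                          near=10.0, far=50.0):
    """Closer scene content gets a finer map (lower precision level,
    with 0 = finest); farther content gets a coarser one."""
    d = math.dist(vertex, camera)   # Euclidean distance in world space
    if d < near:
        return 4   # close content: finer map
    if d < far:
        return 5
    return 7       # distant content: coarsest map
```

In the fig. 4 example, the nearby tree would land in the first branch and the distant cloud in the last, giving the tree the higher mapping precision.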
In the embodiment of the application, when the virtual control object is in the first interaction state, the mapping precision of the virtual scene in the display interface is reduced while the mapping precision of the virtual control object can be kept unchanged, so that the overall picture effect of the display interface is preserved, a noticeable drop in image quality for the user is avoided, and the power consumption generated by rendering the display interface is reduced.
As one embodiment, the rendering precision may include model precision, and the model precision may include the number of vertices contained in the virtual model, the number of triangle surfaces composed of vertices, and the like, and the greater the number of vertices contained in the virtual model, the greater the number of triangle surfaces composed of vertices, and the richer the level of detail of the virtual model, the higher the model precision. When the virtual control object is determined to be in the first interaction state, the virtual control object can be rendered according to the first model precision, and the virtual scene can be rendered according to the second model precision, so that a display interface is obtained. Alternatively, the first model accuracy may be greater than the second model accuracy. The first model precision can be the model precision of the virtual control object when the virtual control object is in the second interaction state, namely the model precision of the virtual control object can be kept unchanged; the second model accuracy may be less than the model accuracy of the virtual scene when the virtual control object is in the second interaction state, i.e., the model accuracy of the virtual scene is reduced.
In some embodiments, the model precision may be expressed in terms of LOD (Levels of Detail): the greater the LOD, the more faces and detail describe the model. When the virtual control object is in the first interaction state, in the vertex shader stage of the rendering pipeline the virtual scene may use a lower-LOD model than when the virtual control object is in the second interaction state, reducing the number of model faces and details of the virtual scene and thus the rendering computation. In some embodiments, a plurality of LOD levels may be provided, with the model precision positively correlated with the LOD level: the greater the LOD level, the higher the model precision.
When the virtual control object is in the first interaction state, the virtual control object may be rendered according to a first LOD level and the virtual scene according to a second LOD level, where the first LOD level is the LOD level of the virtual control object in the second interaction state, and the second LOD level may be smaller than the LOD level of the virtual scene in the second interaction state. For example, LOD levels 1-5 may be set: the virtual control object may always be rendered at LOD level 5, while the virtual scene may be rendered at LOD level 4 when the virtual control object is in the second interaction state and at LOD level 2 when the virtual control object is in the first interaction state.
In some embodiments, a distance between scene content of a virtual scene in the display interface and the virtual camera may be determined, and a second model accuracy corresponding to each scene content may be determined according to the distance, and the farther the scene content is from the virtual camera, the lower the corresponding second model accuracy may be. As another embodiment, the distance between the scene content of the virtual scene in the display interface and the central area of the display interface may be determined, and the second model accuracy corresponding to each scene content may be determined according to the distance between the scene content and the central area, and the farther the scene content is from the central area, the lower the corresponding second model accuracy may be. The power consumption of the rendering display interface can be reduced while the whole picture effect of the display interface is ensured.
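The LOD selection described above can be sketched as follows; the function name is illustrative, and the LOD values (5, 4, 2) are the ones from the 1-5 example in the text:

```python
# Illustrative sketch of LOD-level selection (LOD 1-5, higher = more
# model faces and detail).

def select_lod(first_state, is_control_object,
               object_lod=5, scene_lod_active=4, scene_lod_idle=2):
    """The virtual control object always keeps its full LOD; the
    virtual scene drops to a lower LOD in the first interaction
    state, reducing vertex-shader work."""
    if is_control_object:
        return object_lod
    return scene_lod_idle if first_state else scene_lod_active
```

A distance-aware variant, as in the previous paragraph, would additionally lower `scene_lod_idle` for content far from the virtual camera or the central area.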
In some embodiments, when the virtual control object is in the first interaction state, the mapping precision and the model precision of the virtual scene may also be reduced at the same time, and the amount of computation is reduced at the vertex shader stage and the fragment shader stage at the same time, so as to reduce the power consumption for rendering the display interface.
In other embodiments, when the virtual control object is in the first interaction state, other rendering strategies may also be used to reduce the rendering effect of the virtual scene, for example, a part of scene content in the virtual scene may be selected to be not rendered.
In the embodiment of the application, when the virtual control object is in the first interaction state, the model precision of the virtual scene in the display interface is reduced while the model precision of the virtual control object can be kept unchanged, so that the overall picture effect of the display interface is preserved, a noticeable drop in image quality for the user is avoided, and the power consumption generated by rendering the display interface is reduced.
In another embodiment, as shown in fig. 6, an interface rendering method is provided, which is applicable to the terminal device, and the method may include the following steps:
step 602, determining an interaction state of a virtual control object in a display interface.
The description of step 602 may refer to the related descriptions in the above embodiments, and will not be repeated herein.
Step 604, if the virtual control object is in the first interactive state, the rendering process corresponding to the display interface is switched from the first core of the processor to the second core for operation.
The processor of the terminal device may be a multi-core processor, such as a 4-core processor, an 8-core processor, and the like, and the processor may include at least a first core and a second core, where a maximum operating frequency of the first core may be greater than a maximum operating frequency of the second core, and the operating frequency may refer to a clock frequency of the cores, which reflects a processing speed of the cores, and the processing performance of the first core may be better than that of the second core, and the power consumption of the second core may be lower than that of the first core.
The rendering process of the display interface is used for managing rendering work of the display interface, the rendering process can be operated in a first core of the processor by default, when the virtual control object is detected to be in the first interaction state, the rendering process corresponding to the display interface can be switched to be operated in a second core from the first core of the processor, and the second core with lower power consumption is adopted to operate the rendering process, so that the power consumption can be reduced.
Step 606, rendering the display interface according to the rendering strategy corresponding to the first interaction state through the rendering process.
After the rendering process corresponding to the display interface is switched from the first core of the processor to the second core, the display interface can be rendered through the rendering process according to the rendering strategy corresponding to the first interaction state. The manner of rendering the display interface according to that rendering strategy may refer to the rendering manners described in the above embodiments and is not repeated here. When the virtual control object is in the first interaction state, the rendering process corresponding to the display interface is switched from the first core of the processor to the lower-power second core; although the second core's processing speed is lower than the first core's, the rendering picture quality of the display interface is also reduced, which lowers the processing-performance demand on the processor, so no noticeable display stutter occurs when the rendering process running on the second core renders the display interface.
If the virtual control object is detected to reenter the second interactive state, the rendering process of the display interface can be switched back to the first core from the second core for operation, and the display interface is rendered according to the rendering strategy corresponding to the second interactive state, so that the rendering picture quality of the display interface is ensured.
In some embodiments, the rendering process of the display interface may include a first rendering sub-process for managing rendering of the virtual scene and a second rendering sub-process for managing rendering of the virtual control object. The first rendering sub-process corresponding to the virtual scene may be switched from the first core of the processor to the second core, while the second rendering sub-process corresponding to the virtual control object still runs in the first core. Then, according to the rendering strategy corresponding to the first interaction state, the virtual scene is rendered through the first rendering sub-process and the virtual control object is rendered through the second rendering sub-process running in the first core, so as to obtain the display interface.
Under the rendering strategies of some embodiments, when the virtual control object is in the first interaction state, the rendering precision of the virtual control object remains unchanged while the rendering precision of the virtual scene is reduced. The first rendering sub-process corresponding to the virtual scene can therefore be switched to run in the second core, which has lower processing performance but also lower power consumption, without affecting normal rendering of the virtual scene, thereby reducing power consumption.
In the embodiments of the application, a suitable processor core for running the rendering process can be selected according to the interaction state of the virtual control object. When the virtual control object is in the first interaction state, a core with lower power consumption is selected to run the rendering process, further reducing power consumption in that state. Because the rendering picture quality of the display interface is reduced, switching to the lower-power core does not affect normal rendering of the display interface and does not cause obvious display stutter, so the display effect of the display interface is also taken into account.
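The patent does not tie the core switch to any particular operating-system API. As one illustrative sketch (in Python, assuming a Linux-style scheduler and a hypothetical core layout in which CPUs 0-1 are the high-frequency "first" cores and CPUs 2-3 the low-power "second" cores), the switch described above could be expressed as a CPU-affinity change:

```python
import os

# Hypothetical core layout; which CPU ids are performance vs. efficiency
# cores is platform-specific and not specified by the patent.
PERFORMANCE_CORES = {0, 1}
EFFICIENCY_CORES = {2, 3}

def cores_for_state(interaction_state):
    """Pick the core set for the rendering process based on the
    interaction state of the virtual control object."""
    if interaction_state == "first":
        # First interaction state: run on low-power cores; the reduced
        # picture quality keeps the load within their capability.
        return EFFICIENCY_CORES
    # Second interaction state: run on high-frequency cores to
    # guarantee rendering picture quality.
    return PERFORMANCE_CORES

def apply_affinity(pid, interaction_state):
    """Bind the rendering process to the chosen cores (Linux only)."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, cores_for_state(interaction_state))
```

When the object re-enters the second interaction state, calling `apply_affinity` again with `"second"` moves the process back, matching the switch-back behavior described above.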
As shown in fig. 7, in an embodiment, an interface rendering apparatus 700 is provided, which can be applied to the terminal device described above. The interface rendering apparatus 700 can include a state determining module 710 and a rendering module 720.
The state determining module 710 is configured to determine an interaction state of a virtual control object in a display interface. The interaction state includes a first interaction state or a second interaction state; the first interaction state is a state in which the interaction data meet a first threshold condition, the second interaction state is a state in which the interaction data meet a second threshold condition, and a first threshold in the first threshold condition is less than or equal to a second threshold in the second threshold condition.
The rendering module 720 is configured to render the display interface according to a rendering policy corresponding to the first interaction state if the virtual control object is in the first interaction state.
When the virtual control object is in the first interaction state, the rendering picture quality of the display interface is lower than when the virtual control object is in the second interaction state.
In the embodiments of the application, the power consumption generated by rendering the display interface is reduced by lowering its rendering picture quality when the virtual control object is in the first interaction state. Because the virtual control object is in the first interaction state, the adverse effect of the reduced rendering picture quality on the user is small.
In one embodiment, the interaction data comprise operation trigger data. The state determination module 710 is further configured to obtain operation trigger data in a first time period and, if the operation trigger data meet a first threshold condition, determine that the virtual control object in the display interface is in the first interaction state. The operation trigger data include at least one of a number of operation triggers and an operation trigger frequency, and the first threshold condition includes at least one of the number of operation triggers being lower than a first count threshold and the operation trigger frequency being lower than a first frequency threshold.
In one embodiment, the interaction data include an interaction priority. The state determining module 710 is further configured to obtain a first interaction priority corresponding to a first application program to which the display interface belongs and, if the first interaction priority meets a first threshold condition, determine that the virtual control object in the display interface is in the first interaction state. Here the first threshold condition includes the first interaction priority being smaller than a first priority threshold, or the first interaction priority being smaller than a second interaction priority corresponding to another currently running application program.
In one embodiment, the state determination module 710 is further configured to determine that the virtual control object is in the first interaction state if a hosting instruction for the virtual control object in the display interface is received.
In the embodiments of the application, the interaction state of the virtual control object in the display interface can be determined accurately, so the display interface can be rendered more precisely, balancing the visual experience the display interface brings to the user against its power consumption.
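The state-determination conditions above can be sketched as a single classifier. This is an illustrative Python sketch with hypothetical threshold values (`first_count_threshold`, `first_frequency_threshold`); the patent only constrains the first threshold to be less than or equal to the second, and the hosting-instruction branch mirrors the embodiment in which a hosting instruction directly implies the first interaction state:

```python
def determine_interaction_state(trigger_count, trigger_frequency,
                                hosted=False,
                                first_count_threshold=5,
                                first_frequency_threshold=1.0):
    """Classify the virtual control object's interaction state from
    operation trigger data gathered over a first time period.

    Threshold values here are hypothetical examples, not from the patent.
    """
    # A hosting (automatic control) instruction implies the first state.
    if hosted:
        return "first"
    # First threshold condition: at least one of trigger count or
    # trigger frequency falls below its first threshold.
    if (trigger_count < first_count_threshold
            or trigger_frequency < first_frequency_threshold):
        return "first"
    return "second"
```

A low number of triggers or a low trigger frequency in the sampling window classifies the object as in the first interaction state; otherwise it stays in the second.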
In an embodiment, the rendering module 720 is further configured to determine, if the virtual control object is in the first interaction state, a rendering level corresponding to the first interaction state according to the interaction data, and render the display interface according to a rendering policy matched with the rendering level.
In one embodiment, the rendering policy comprises a down frame rendering policy. The rendering module 720 is further configured to, if the virtual control object is in the first interaction state, read rendering resources of the display interface according to a first time interval, and render the display interface according to the rendering resources, where the first time interval is greater than a rendering time interval corresponding to the display interface when the virtual control object is in the second interaction state.
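The down-frame strategy amounts to reading rendering resources at a longer interval in the first interaction state. A minimal sketch, assuming hypothetical frame rates of 60 fps (second state) and 30 fps (first state), which the patent does not specify:

```python
def rendering_interval(interaction_state, normal_fps=60, reduced_fps=30):
    """Return the interval (seconds) at which rendering resources of the
    display interface are read.

    The patent only requires the first-state interval to be greater than
    the second-state interval; 60/30 fps are hypothetical example values.
    """
    fps = reduced_fps if interaction_state == "first" else normal_fps
    return 1.0 / fps
```

A render loop would sleep for `rendering_interval(state)` between reads of the rendering resources, halving the frame work in the first interaction state.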
In one embodiment, the rendering strategy includes a reduced precision rendering strategy. The rendering module 720 is further configured to, if the virtual control object is in the first interaction state, render the virtual control object according to the first rendering precision, and render the virtual scene according to the second rendering precision, so as to obtain a display interface; the first rendering precision is the rendering precision of the virtual control object when the virtual control object is in the second interaction state; the second rendering precision is less than the rendering precision of the virtual scene when the virtual control object is in the second interaction state.
In an embodiment, the rendering module 720 is further configured to, if the virtual control object is in the first interaction state, obtain first texture data of a first mapping precision corresponding to the virtual control object, and obtain second texture data of a second mapping precision corresponding to the virtual scene; and rendering the virtual control object according to the first texture data, and rendering the virtual scene according to the second texture data to obtain a display interface. The first mapping precision is mapping precision of the virtual control object when the virtual control object is in the second interaction state; the second mapping accuracy is less than the mapping accuracy of the virtual scene when the virtual control object is in the second interaction state.
In an embodiment, the rendering module 720 is further configured to, if the virtual control object is in the first interaction state, render the virtual control object according to the first model precision, and render the virtual scene according to the second model precision, so as to obtain the display interface. The first model precision is the model precision of the virtual control object when the virtual control object is in the second interaction state; the second model accuracy is less than the model accuracy of the virtual scene when the virtual control object is in the second interaction state.
In the embodiments of the application, when the virtual control object is in the first interaction state, the rendering precision of the virtual scene in the display interface is reduced while the rendering precision of the virtual control object can be kept unchanged. This preserves the overall picture effect of the display interface, avoids a particularly obvious image-quality drop in the user's visual experience, and reduces the power consumption generated by rendering the display interface.
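The reduced-precision strategy, across its mapping-precision and model-precision variants, follows one pattern: the virtual control object keeps its second-state precision, and only the virtual scene is reduced. An illustrative sketch, with hypothetical precision levels expressed as fractions of full detail:

```python
def rendering_precisions(interaction_state,
                         full_precision=1.0, reduced_precision=0.5):
    """Return per-element rendering precision (e.g. mapping/texture or
    model precision) as a fraction of full detail.

    The virtual control object keeps full (second-state) precision in
    both states; only the virtual scene is reduced in the first state.
    The 1.0 / 0.5 levels are hypothetical example values.
    """
    scene = (reduced_precision if interaction_state == "first"
             else full_precision)
    return {"virtual_control_object": full_precision,
            "virtual_scene": scene}
```

The same selector applies whether "precision" is realized as texture/mapping precision (choosing mipmap level or texture resolution) or model precision (choosing a level-of-detail mesh).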
In an embodiment, the rendering module 720 is further configured to, if the virtual control object is in the first interaction state, switch the rendering process corresponding to the display interface from the first core of the processor to the second core, and render the display interface through the rendering process according to the rendering policy corresponding to the first interaction state. The maximum operating frequency of the first core is greater than that of the second core.
In an embodiment, the rendering module 720 is further configured to, if the virtual control object is in the first interaction state, switch the first rendering sub-process corresponding to the virtual scene from the first core of the processor to the second core, render the virtual scene through the first rendering sub-process, and render the virtual control object through the second rendering sub-process running in the first core, so as to obtain the display interface.
In the embodiments of the application, a suitable processor core for running the rendering process can be selected according to the interaction state of the virtual control object. When the virtual control object is in the first interaction state, a core with lower power consumption is selected to run the rendering process, further reducing power consumption in that state. Because the rendering picture quality of the display interface is reduced, switching to the lower-power core does not affect normal rendering of the display interface and does not cause obvious display stutter, so the display effect of the display interface is also taken into account.
Fig. 8 is a block diagram of a terminal device in one embodiment. As shown in fig. 8, terminal device 800 may include one or more of the following components: a processor 810 and a memory 820 coupled to the processor 810, where the memory 820 may store one or more computer programs that, when executed by the one or more processors 810, implement the methods described in the above embodiments.
Processor 810 may include one or more processing cores. The processor 810 connects the various parts of the terminal device 800 through various interfaces and lines, and performs the various functions of the terminal device 800 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 820 and by calling data stored in the memory 820. Optionally, the processor 810 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 810 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 810 and may instead be implemented by a separate communication chip.
The Memory 820 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 820 may be used to store instructions, programs, code sets, or instruction sets. The memory 820 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The storage data area may also store data created by the terminal device 800 in use, and the like.
It is understood that the terminal device 800 may include more or fewer components than shown in the above structural block diagram, for example a power module, physical buttons, a WiFi (Wireless Fidelity) module, a speaker, a Bluetooth module, sensors, and the like; no limitation is imposed here.
The embodiments of the application disclose a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the above embodiments.
Embodiments of the present application further disclose a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium; when the program is executed, the processes of the above method embodiments can be included. The storage medium may be a magnetic disk, an optical disc, a ROM, or the like.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The interface rendering method, interface rendering apparatus, terminal device, and computer-readable storage medium disclosed in the embodiments of the present application are described in detail above, and specific examples are used herein to explain the principles and implementations of the present application. For those skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (14)

1. An interface rendering method, comprising:
determining an interaction state of a virtual control object in a display interface, wherein the interaction state comprises a first interaction state or a second interaction state, the first interaction state is a state in which interaction data meet a first threshold condition, the second interaction state is a state in which interaction data meet a second threshold condition, and a first threshold included in the first threshold condition is smaller than or equal to a second threshold included in the second threshold condition;
if the virtual control object is in the first interaction state, rendering the display interface according to a rendering strategy corresponding to the first interaction state;
and when the virtual control object is in the first interaction state, the rendering picture quality of the display interface is lower than that of the display interface when the virtual control object is in the second interaction state.
2. The method of claim 1, wherein the interaction data comprises operation trigger data; the determining the interaction state of the virtual control object in the display interface comprises:
acquiring operation trigger data in a first time period;
if the operation trigger data meet a first threshold condition, determining that a virtual control object in a display interface is in a first interaction state; the operation trigger data comprises at least one of operation trigger times and operation trigger frequencies, and the first threshold condition comprises at least one of the operation trigger times being lower than a first time threshold and the operation trigger frequencies being lower than a first frequency threshold.
3. The method of claim 1, wherein the interaction data comprises an interaction priority; the determining the interaction state of the virtual control object in the display interface comprises:
acquiring a first interaction priority corresponding to a first application program to which a display interface belongs;
and if the first interaction priority meets a first threshold condition, determining that the virtual control object in the display interface is in a first interaction state, wherein the first threshold condition comprises that the first interaction priority is smaller than a first priority threshold, or the first interaction priority is smaller than a second interaction priority corresponding to other currently running application programs.
4. The method of claim 1, wherein the rendering the display interface in accordance with the rendering policy corresponding to the first interaction state comprises:
determining a rendering level corresponding to the first interaction state according to the interaction data;
and rendering the display interface according to the rendering strategy matched with the rendering grade.
5. The method of claim 1, wherein determining the interaction state of the virtual control object in the display interface comprises:
if a hosting instruction for a virtual control object in a display interface is received, determining that the virtual control object is in a first interaction state.
6. The method according to any one of claims 1 to 5, wherein the rendering strategy comprises a down-frame rendering strategy; rendering the display interface according to the rendering strategy corresponding to the first interaction state comprises:
and reading rendering resources of the display interface according to a first time interval, and rendering the display interface according to the rendering resources, wherein the first time interval is greater than a rendering time interval corresponding to the display interface when the virtual control object is in a second interaction state.
7. The method according to any one of claims 1 to 5, wherein the rendering strategy comprises a reduced precision rendering strategy; the display interface comprises the virtual control object and a virtual scene; rendering the display interface according to the rendering strategy corresponding to the first interaction state comprises:
rendering the virtual control object according to a first rendering precision, and rendering the virtual scene according to a second rendering precision to obtain the display interface;
the first rendering precision is the rendering precision of the virtual control object when the virtual control object is in a second interaction state;
the second rendering precision is less than the rendering precision of the virtual scene when the virtual control object is in the second interaction state.
8. The method of claim 7, wherein rendering the virtual control object at a first rendering precision and rendering the virtual scene at a second rendering precision to obtain the display interface comprises:
acquiring first texture data of first mapping precision corresponding to the virtual control object and acquiring second texture data of second mapping precision corresponding to the virtual scene; the first mapping precision is mapping precision of the virtual control object when the virtual control object is in a second interaction state; the second mapping precision is smaller than the mapping precision of the virtual scene when the virtual control object is in a second interaction state;
and rendering the virtual control object according to the first texture data, and rendering the virtual scene according to the second texture data to obtain the display interface.
9. The method of claim 7, wherein rendering the virtual control object at a first rendering precision and rendering the virtual scene at a second rendering precision to obtain the display interface comprises:
rendering the virtual control object according to the first model precision, and rendering the virtual scene according to the second model precision to obtain the display interface; the first model precision is the model precision of the virtual control object when the virtual control object is in a second interaction state; the second model accuracy is less than the model accuracy of the virtual scene when the virtual control object is in the second interaction state.
10. The method according to any one of claims 1 to 5, wherein the rendering the display interface according to the rendering policy corresponding to the first interaction state comprises:
switching a rendering process corresponding to the display interface from a first core of a processor to a second core for operation, wherein the maximum working frequency of the first core is greater than that of the second core;
and rendering the display interface according to a rendering strategy corresponding to the first interaction state through the rendering process.
11. The method according to any one of claims 1 to 5, wherein the display interface comprises the virtual control object and a virtual scene; rendering the display interface according to the rendering strategy corresponding to the first interaction state comprises:
switching a first rendering sub-process corresponding to the virtual scene from a first core of a processor to a second core for running, wherein the maximum working frequency of the first core is greater than that of the second core;
and according to a rendering strategy corresponding to the first interaction state, rendering the virtual scene through the first rendering sub-process, and rendering the virtual control object through a second rendering sub-process running in the first core to obtain the display interface.
12. An interface rendering apparatus, comprising:
the state determination module is used for determining an interaction state of a virtual control object in a display interface, wherein the interaction state comprises a first interaction state or a second interaction state, the first interaction state is a state in which interaction data meet a first threshold condition, the second interaction state is a state in which the interaction data meet a second threshold condition, and a first threshold value in the first threshold condition is smaller than or equal to a second threshold value in the second threshold condition;
the rendering module is used for rendering the display interface according to a rendering strategy corresponding to the first interaction state if the virtual control object is in the first interaction state;
and when the virtual control object is in the first interaction state, the rendering picture quality of the display interface is lower than that of the display interface when the virtual control object is in the second interaction state.
13. A terminal device comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 11.
CN202110887978.4A 2021-08-03 2021-08-03 Interface rendering method and device, terminal equipment and computer readable storage medium Pending CN113655880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110887978.4A CN113655880A (en) 2021-08-03 2021-08-03 Interface rendering method and device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110887978.4A CN113655880A (en) 2021-08-03 2021-08-03 Interface rendering method and device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113655880A true CN113655880A (en) 2021-11-16

Family

ID=78490623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110887978.4A Pending CN113655880A (en) 2021-08-03 2021-08-03 Interface rendering method and device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113655880A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104602116A (en) * 2014-12-26 2015-05-06 北京农业智能装备技术研究中心 Interactive media-rich visual rendering method and system
CN106776259A (en) * 2017-01-10 2017-05-31 广东欧珀移动通信有限公司 A kind of mobile terminal frame rate detection method, device and mobile terminal
CN106936995A (en) * 2017-03-10 2017-07-07 广东欧珀移动通信有限公司 A kind of control method of mobile terminal frame per second, device and mobile terminal
CN109499059A (en) * 2018-11-15 2019-03-22 腾讯科技(深圳)有限公司 The rendering method and device of object, storage medium, electronic device
CN111381885A (en) * 2018-12-29 2020-07-07 畅想芯科有限公司 Asymmetric multi-core heterogeneous parallel processing system
CN111643901A (en) * 2020-06-02 2020-09-11 三星电子(中国)研发中心 Method and device for intelligently rendering cloud game interface
CN113126741A (en) * 2019-12-26 2021-07-16 深圳市万普拉斯科技有限公司 Method and device for controlling frame rate of mobile terminal, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN107154063B (en) Method and device for setting shape of image display area
KR20150047091A (en) Techniques for determining an adjustment for a visual output
CN111147749A (en) Photographing method, photographing device, terminal and storage medium
EP4231244A1 (en) Method, apparatus and device for selecting anti-aliasing algorithm and readable storage medium
US20220012529A1 (en) Map Display Method And Apparatus
EP4213102A1 (en) Rendering method and apparatus, and device
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN115482325A (en) Picture rendering method, device, system, equipment and medium
CN113838184A (en) Rendering method, device and system
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
CN109448123B (en) Model control method and device, storage medium and electronic equipment
CN114842120A (en) Image rendering processing method, device, equipment and medium
RU2666300C2 (en) Technologies of reducing pixel shading
CN113655880A (en) Interface rendering method and device, terminal equipment and computer readable storage medium
US11830125B2 (en) Ray-guided water caustics
US20220076482A1 (en) Ray-tracing for auto exposure
CN115330925A (en) Image rendering method and device, electronic equipment and storage medium
CN115761091A (en) Game picture rendering method and device, electronic equipment and storage medium
CN112973121B (en) Reflection effect generation method and device, storage medium and computer equipment
CN109814703B (en) Display method, device, equipment and medium
CN111524240A (en) Scene switching method and device and augmented reality equipment
CN110941389A (en) Method and device for triggering AR information points by focus
US20240153159A1 (en) Method, apparatus, electronic device and storage medium for controlling based on extended reality
CN115761123B (en) Three-dimensional model processing method, three-dimensional model processing device, electronic equipment and storage medium
WO2024088141A1 (en) Special-effect processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination