CN114741193A - Scene rendering method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN114741193A
Authority
CN
China
Prior art keywords
rendering
core unit
thread
scene
image processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210355815.6A
Other languages
Chinese (zh)
Inventor
张俊辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210355815.6A priority Critical patent/CN114741193A/en
Publication of CN114741193A publication Critical patent/CN114741193A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a scene rendering method, a scene rendering device, a computer readable medium and an electronic device, and relates to the technical field of image processing. The method is applied to a central processing unit comprising at least two core unit groups, and comprises the following steps: acquiring a rendering task corresponding to a target scene, and splitting the rendering task by using a first rendering thread of a first core unit group to obtain M rendering instruction lists; and submitting the M rendering instruction lists to the image processor in parallel by using the first rendering thread and N second rendering threads corresponding to N second core unit groups, so that the image processor renders the target scene. By dispersing the heavy-load task originally borne by the first core unit group, where the first rendering thread resides, across a plurality of core unit groups, the disclosure can effectively reduce the load of the first core unit group while making full use of the second core unit groups, thereby reducing power consumption to a certain extent, improving performance, and avoiding CPU heating.

Description

Scene rendering method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a scene rendering method, a scene rendering apparatus, a computer readable medium, and an electronic device.
Background
Realistic scene rendering is widely used in film and television special effects, virtual reality, games, and other fields. In the related art, virtual scenes are generally rendered by computer in order to quickly obtain various special-effect pictures. However, when rendering a virtual three-dimensional scene, the rendering thread often has to carry a large number of tasks. For example, in the game field, the rendering thread typically performs scene traversal, scene culling, and rendering-command submission. The rendering thread thus bears a heavy load while the game runs, so the CPU core on which the thread resides consumes a large amount of computing resources, which in turn causes frequency increases, excessive power consumption, performance degradation, and heating of the CPU.
Disclosure of Invention
The present disclosure aims to provide a scene rendering method, a scene rendering apparatus, a computer readable medium, and an electronic device, so as to reduce power consumption, improve performance, and avoid heat generation of a CPU at least to a certain extent.
According to a first aspect of the present disclosure, there is provided a scene rendering method applied to a central processing unit including at least two core unit groups, the method including: acquiring a rendering task corresponding to a target scene, and splitting the rendering task by using a first rendering thread of a first core unit group to obtain M rendering instruction lists; wherein M is an integer greater than 1; the first rendering thread and N second rendering threads corresponding to the N second core unit groups are utilized to submit M rendering instruction lists to the image processor in parallel so that the image processor can render the target scene; where N is equal to M minus 1.
According to a second aspect of the present disclosure, there is provided a scene rendering apparatus applied to a central processor including at least two core unit groups, the apparatus including: the list splitting module is used for obtaining a rendering task corresponding to a target scene, splitting the rendering task by using a first rendering thread of a first core unit group, and obtaining M rendering instruction lists; wherein M is an integer greater than 1; the scene rendering module is used for utilizing the first rendering thread and the N second rendering threads corresponding to the N second core unit groups to submit the M rendering instruction lists to the image processor in parallel so that the image processor can render the target scene; where N is equal to M minus 1.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the above-mentioned method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising: a processor; and memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the above-described method.
According to the scene rendering method provided by the embodiment of the disclosure, after the rendering task corresponding to the target scene is obtained, the rendering task can be split by using the first rendering thread of the first core unit group of the central processing unit, so as to obtain M rendering instruction lists; and then respectively submitting M rendering instruction lists to an image processor through a first rendering thread of a first core unit group in the central processing unit and a second rendering thread of each second core unit group in the N second core unit groups, so as to achieve the purpose of submitting the rendering instruction lists in parallel, and further render the target scene through the image processor. According to the technical scheme, the rendering task is split through the first rendering thread, the M rendering instruction lists obtained after splitting are distributed to the second rendering thread of the second core unit group to be submitted in parallel, the heavy-load task borne by the first core unit group where the first rendering thread is located can be dispersed to be borne by the plurality of core unit groups, the load of the first core unit group is effectively reduced, meanwhile, the second core unit group is fully utilized, the power consumption is reduced to a certain extent, the performance is improved, and CPU heating is avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow chart of a method of scene rendering in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a rendering task splitting method in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a split render instruction in an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating instruction flow for processing rendering instructions by an image processor in an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating instruction flow for processing rendering instructions by another image processor in an exemplary embodiment of the present disclosure;
FIG. 7 is a diagram schematically illustrating a scene rendering method in the field of games in an exemplary embodiment of the present disclosure;
FIG. 8 is a diagram schematically illustrating another scene rendering method in the field of games in an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a composition diagram of a scene rendering apparatus in an exemplary embodiment of the present disclosure;
fig. 10 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a scene rendering method and apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a central processor 101 and an image processor 102, wherein the central processor 101 may include at least 2 core unit groups. The central processor 101 and the image processor 102 may belong to various terminal devices having an image processing function, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of central processors, core unit groups, and image processors in fig. 1 is merely illustrative. There may be any number of central processors and image processors, and more than 2 core unit groups, as desired for implementation.
The scene rendering method provided by the embodiment of the present disclosure is generally executed by the central processing unit 101 and the image processor 102 of the terminal device, and accordingly, the scene rendering apparatus is generally disposed in the central processing unit 101 and the image processor 102 of the terminal device. However, it is easily understood by those skilled in the art that the rendering method provided in the embodiment of the present disclosure may also be executed by the central processor 101 and the image processor 102 of the server, and accordingly, the rendering apparatus may also be disposed in the server, which is not particularly limited in the exemplary embodiment.
The present example embodiment provides a scene rendering method. The scene rendering method can be applied to the fields of movie and television special effect making, virtual reality, games and the like, and is not particularly limited in the exemplary embodiment. Referring to fig. 2, the scene rendering method may include the following steps S210 and S220:
in step S210, a rendering task corresponding to the target scene is obtained, and the rendering task is split by using the first rendering thread of the first core unit group, so as to obtain M rendering instruction lists.
The target scene may be a scene corresponding to an image, or a scene corresponding to a frame in a video or a game. The rendering task comprises all tasks that must be executed to display the target scene on a display device such as a screen. The first core unit group may be any one of the core unit groups in a central processing unit that includes at least two core unit groups. The first rendering thread may be a rendering thread in the first core unit group that executes the rendering task corresponding to the application; for example, in the game field, a game thread is a thread that performs game-logic operations, while a rendering thread is a thread that performs rendering tasks such as scene traversal and scene culling. M is an integer larger than 1 and can be set according to the number of core unit groups in the central processing unit and the specific application scene.
In an exemplary embodiment, when the rendering task corresponding to the target scene is obtained, the rendering task can be obtained in a scene traversal manner. Specifically, all elements in the target scene may be traversed by using the first rendering thread to obtain all rendering tasks that need to be executed to render the target scene.
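The scene-traversal step described above can be sketched as a depth-first walk over a simple scene graph. This is an illustrative sketch only; the names (`SceneNode`, `collect_render_tasks`) are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the scene-traversal step: walk a scene graph
# with the first rendering thread and collect every element that must
# be drawn for the target scene.

class SceneNode:
    def __init__(self, name, drawable=True, children=None):
        self.name = name
        self.drawable = drawable
        self.children = children or []

def collect_render_tasks(root):
    """Depth-first traversal: return the names of all drawable elements."""
    tasks = []
    stack = [root]
    while stack:
        node = stack.pop()
        if node.drawable:
            tasks.append(node.name)
        # push children in reverse so they are visited in declared order
        stack.extend(reversed(node.children))
    return tasks
```

A real engine would traverse typed scene-graph nodes and emit draw calls rather than names, but the traversal shape is the same.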
In an exemplary embodiment, after all rendering tasks are obtained, scene culling may also be performed on them. Specifically, elements can be culled from the scene by judging the distance between an object and the camera (distance culling), judging whether an object is occluded by other objects within the camera's visible range, judging whether an object is inside the camera's view frustum (frustum culling), and so on, effectively reducing unnecessary rendering. In practice, scene culling of the rendering task can be implemented with algorithms such as distance culling, frustum culling, viewport culling, back-face culling, and occlusion query.
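The distance and frustum tests mentioned above can be sketched in two dimensions. This is a minimal illustration under simplifying assumptions (a 2D scene, a symmetric horizontal field of view); the function name `cull` and its parameters are not from the patent.

```python
import math

# Illustrative culling sketch: drop objects farther than a distance
# threshold (distance culling) or outside a horizontal view cone
# centred on the camera's view direction (simplified frustum culling).

def cull(objects, cam_pos, view_dir, max_dist=100.0,
         half_fov=math.radians(45)):
    visible = []
    for name, pos in objects:
        dx, dy = pos[0] - cam_pos[0], pos[1] - cam_pos[1]
        if math.hypot(dx, dy) > max_dist:      # distance culling
            continue
        # signed angle between the view direction and the object direction
        angle = math.atan2(dy, dx) - math.atan2(view_dir[1], view_dir[0])
        angle = (angle + math.pi) % (2 * math.pi) - math.pi
        if abs(angle) > half_fov:              # view-cone culling
            continue
        visible.append(name)
    return visible
```

A production renderer would instead test bounding volumes against the six frustum planes in 3D, but the filtering logic is analogous.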
In an exemplary embodiment, when the rendering task is split by the first rendering thread of the first core unit group, as shown in fig. 3, the following steps S310 and S320 may be included:
in step S310, the first rendering thread splits the rendering task into K rendering instructions according to the independence of the rendering contexts.
Wherein K may be an integer greater than 1. Specifically, the rendering task usually needs to be implemented by executing a series of steps, and therefore, after the rendering task is obtained, the rendering task can be split into K rendering instructions that can be executed independently according to the independence of a rendering context (OpenGL context).
In step S320, the K rendering instructions are recombined to obtain M rendering instruction lists.
In an exemplary embodiment, after the rendering task is split into K rendering instructions that can be executed independently, the K rendering instructions may be reassembled into M rendering instruction lists. It should be noted that a customized reorganization rule may be applied during reassembly: for example, adding instructions to the M lists in turn according to the split order, or adding them to the M lists at random, which is not particularly limited in this disclosure. The split order is the order in which the rendering instructions were split out of the task.
For example, referring to fig. 4, suppose a rendering task is split into 10 independently executable rendering instructions, numbered rendering instruction 1 to rendering instruction 10 in split order. If they need to be reorganized into 2 rendering instruction lists, the 10 instructions can be added to the 2 lists alternately in split order, yielding the following 2 rendering instruction lists: rendering instruction 1, rendering instruction 3, rendering instruction 5, rendering instruction 7, rendering instruction 9; and rendering instruction 2, rendering instruction 4, rendering instruction 6, rendering instruction 8, rendering instruction 10.
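The alternating reorganization rule in the example above amounts to a round-robin distribution of the K instructions over the M lists. A minimal sketch (the name `reorganize` is illustrative):

```python
# Round-robin reorganization: distribute K split rendering instructions
# over M rendering instruction lists in split order, as in fig. 4.

def reorganize(instructions, m):
    lists = [[] for _ in range(m)]
    for i, instr in enumerate(instructions):
        lists[i % m].append(instr)
    return lists
```

With 10 instructions and M = 2 this reproduces the two lists given in the example.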
In step S220, the first rendering thread and the N second rendering threads corresponding to the N second core unit groups are utilized to submit M rendering instruction lists to the image processor in parallel, so that the image processor renders the target scene.
The second core unit group may be any core unit group in the central processing unit other than the first core unit group; N is equal to M minus 1. It should be noted that the number M of rendering instruction lists split by the first rendering thread should not exceed the number of core unit groups in the central processing unit, so that parallel submission is never attempted without an available core unit group.
For example, when the central processing unit includes 4 core unit groups, any one of the core unit groups serves as a first core unit group, and a first rendering thread of the first core unit group may split the rendering task to obtain 2, 3, or 4 rendering instruction lists. When the number of the rendering instruction lists is 2, submitting a rendering instruction list 1 based on a first rendering thread of a first core unit group, and submitting a rendering instruction list 2 in parallel based on a second rendering thread of 1 second core unit group; when the number of the rendering instruction lists is 3, submitting a rendering instruction list 1 based on a first rendering thread of a first core unit group, and respectively submitting a rendering instruction list 2 and a rendering instruction list 3 in parallel based on a second rendering thread of 2 second core unit groups; when the number of the rendering instruction lists is 4, the rendering instruction list 1 may be submitted based on a first rendering thread of the first core unit group, and the rendering instruction list 2, the rendering instruction list 3, and the rendering instruction list 4 may be submitted in parallel based on second rendering threads of the 3 second core unit groups, respectively.
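The parallel-submission step above can be sketched with OS threads standing in for the rendering threads of the core unit groups. This is an assumption-laden sketch: `submit_list` is a placeholder for the real driver call, and Python threads do not model CPU core-group affinity.

```python
import threading

# Minimal sketch of parallel submission: the first rendering thread
# submits list 0, while N = M - 1 "second" rendering threads submit
# the remaining lists concurrently.

def submit_all(instruction_lists, submit_list):
    threads = []
    for lst in instruction_lists[1:]:      # N second rendering threads
        t = threading.Thread(target=submit_list, args=(lst,))
        t.start()
        threads.append(t)
    submit_list(instruction_lists[0])      # first rendering thread
    for t in threads:
        t.join()
```

In a real implementation each worker thread would be pinned to its own core unit group so the load is genuinely dispersed across the CPU.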
In an exemplary embodiment, when rendering instruction lists are submitted to the image processor, the image processor typically operates based on the submission order. To keep the image processor's operation order consistent with the rendering order corresponding to the rendering task, and thus avoid erroneous rendering results caused by an incorrect order, the image processor needs to operate on the rendering instructions in the M rendering instruction lists according to the rendering order corresponding to the rendering task, and then render the target scene based on the operation result. The rendering order refers to the execution order that the split rendering instructions must follow to achieve the effect intended by the rendering task.
For example, suppose the first rendering thread submits rendering instruction list 1 (rendering instruction 1 and rendering instruction 3) while 1 second rendering thread submits rendering instruction list 2 (rendering instruction 2 and rendering instruction 4) in parallel, giving the submission order: rendering instruction 1, rendering instruction 2, rendering instruction 3, rendering instruction 4. If the rendering order corresponding to the rendering task is rendering instruction 2, rendering instruction 1, rendering instruction 4, rendering instruction 3, the image processor performs its operations in that rendering order and then renders the target scene based on the operation result.
In an exemplary embodiment, when rendering instructions in the M rendering instruction lists are concurrently committed by the first rendering thread and the second rendering thread, it is likely that the commit order is different from the rendering order. In order to enable the image processor to operate according to the rendering sequence, the rendering order marking can be performed on the rendering instructions obtained by splitting according to the rendering sequence corresponding to the rendering task after the splitting is finished, so that the decoupling of the operation sequence of the image processor and the submission sequence of the central processing unit is realized.
For example, 10 rendering instructions, rendering instruction 1 to rendering instruction 10, are split into 2 rendering instruction lists in split order: rendering instruction 1, rendering instruction 3, rendering instruction 5, rendering instruction 7, rendering instruction 9; and rendering instruction 2, rendering instruction 4, rendering instruction 6, rendering instruction 8, rendering instruction 10. Assuming the data volume of rendering instruction 1 is smaller than that of rendering instruction 2 and the two threads submit at the same rate, rendering instruction 1 arrives before rendering instruction 2 when the lists are submitted in parallel. Referring to fig. 5, in a typical scenario the image processor would process the instructions in submission order, i.e., rendering instruction 1 then rendering instruction 2. In the rendering task, however, their rendering order is rendering instruction 2 then rendering instruction 1. Therefore, each instruction can be marked with its position in the rendering order, i.e., rendering instruction 2 is marked 1 and rendering instruction 1 is marked 2, so that the image processor's operation order (rendering instruction 2, rendering instruction 1) is decoupled from the central processor's submission order (rendering instruction 1, rendering instruction 2), as shown in fig. 6.
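The rendering-order marking described above can be sketched as tagging each instruction with its target position and sorting on the receiving side. This is an illustrative sketch; the function names are assumptions, and a real GPU driver would consume the tags incrementally rather than sorting a complete batch.

```python
# Sketch of the rendering-order flag: each instruction carries the
# position it must occupy in the rendering sequence, so the image-
# processor side can reorder arrivals regardless of submission order.

def tag_instructions(instructions, rendering_order):
    # rendering_order[i] is the position instruction i must run at
    return [(order, instr) for instr, order in zip(instructions, rendering_order)]

def execute_in_render_order(received):
    # the image-processor side sorts by the order tag before operating
    return [instr for _, instr in sorted(received)]
```

This is what decouples the CPU submission order from the GPU operation order: the tags, not arrival time, decide execution.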
In addition, the operation sequence of the image processor may be defined in other manners, for example, by sending complete rendering instruction sequence information to the image processor, so that the image processor may render according to the rendering instruction sequence information in the rendering sequence.
In an exemplary embodiment, after the rendering order is marked on the split rendering instruction, correspondingly, when the image processor performs operation, the rendering instructions in the M rendering instruction lists may be directly operated according to the rendering order mark corresponding to the rendering instruction, and then the target scene is rendered based on the operation result. By the method, the submission sequence of the central processing unit and the processing sequence of the image processor can be decoupled, the image processor can calculate according to the actual required sequence, and the problems of rendering result errors and the like are avoided.
Further, in an exemplary embodiment, in order to avoid the problem that the multithread submission causes concurrent calls in the image processor and thus causes data confusion, a semaphore mechanism may be established between the rendering thread and the image processor. At this time, the image processor may operate on rendering instructions in the M rendering instruction lists based on a semaphore mechanism.
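The semaphore mechanism above can be sketched as follows, with a proxy object standing in for the image processor. The class name `GpuProxy` and its interface are illustrative assumptions; the point shown is only that a semaphore serializes submissions so concurrent calls cannot interleave their data.

```python
import threading

# Hedged sketch of the semaphore mechanism between rendering threads
# and the image processor: at most one thread may submit at a time,
# so each thread's instructions reach the processor as one unbroken run.

class GpuProxy:
    def __init__(self):
        self._sem = threading.Semaphore(1)   # one submission at a time
        self.log = []                        # simulated instruction stream

    def submit(self, thread_name, instructions):
        with self._sem:
            for instr in instructions:
                self.log.append((thread_name, instr))
```

Without the semaphore, the two threads' appends could interleave arbitrarily; with it, each list arrives intact.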
The following takes the scene rendering of a game application with 2 rendering instruction lists as an example, and describes the technical solution of the embodiment of the present disclosure in detail with reference to fig. 7 and 8:
in a game-like application, the central processor may generally implement the running of the game application based on the game thread and the rendering thread. Referring to fig. 7, the game thread is mainly used for executing game logic operations, and the rendering thread needs to perform scene traversal, scene culling, and submit rendering instructions to the image processor.
In an exemplary embodiment, referring to fig. 8, when the rendering thread executes the task of submitting rendering instructions to the image processor, the rendering logic of the current game-scene frame may first be split into a plurality of rendering instructions according to the independence of rendering contexts. The single rendering list containing those instructions, which would originally be submitted by the rendering thread of one core unit group, is then reorganized into 2 rendering instruction lists and distributed to the rendering threads (the first rendering thread and 1 second rendering thread) of 2 core unit groups (the first core unit group and 1 second core unit group) for parallel processing and parallel submission to the image processor.
Referring to fig. 6, in order to guarantee the execution sequence of the image processor, a semaphore mechanism is established among the first rendering thread, the second rendering thread and the image processor, and meanwhile, a rendering sequence flag is added to the rendering instruction, so that the execution sequence of the rendering instruction on the image processor is decoupled from the submission sequence of the central processor, and the image processor is guaranteed to execute the rendering instruction according to the rendering sequence corresponding to the rendering task.
In summary, the exemplary embodiment reduces the load of the first core unit group where the first rendering thread is located while making full use of the second core unit groups, so that for rendering-thread loads on different hardware platforms and scenes, power consumption can be reduced to a certain extent and performance improved. On this basis, splitting the rendering thread's work at a finer granularity can reduce the overall load on the image processor and improve the utilization of image-processor resources; in addition, establishing a semaphore mechanism between the rendering threads and the image processor and marking the rendering order of the rendering instructions ensures data safety and reliability.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 9, in an exemplary embodiment of the present disclosure, a scene rendering apparatus 900 is provided, which is applied to a central processor including at least two core unit groups, and includes a list splitting module 910 and a scene rendering module 920. Wherein:
the list splitting module 910 may be configured to obtain a rendering task corresponding to a target scene, and split the rendering task by using a first rendering thread of a first core unit group to obtain M rendering instruction lists; wherein M is an integer greater than 1;
the scene rendering module 920 may be configured to submit M rendering instruction lists to the image processor in parallel by using the first rendering thread and N second rendering threads corresponding to the N second core unit groups, so that the image processor renders the target scene; where N is equal to M minus 1.
In an exemplary embodiment, the list splitting module 910 may be configured to split the rendering task into K rendering instructions using a first rendering thread according to independence of rendering contexts; wherein K is an integer greater than 1; and recombining the K rendering instructions to obtain M rendering instruction lists.
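A minimal sketch of this regrouping step, under the assumption (not stated in the patent) that "independence of rendering contexts" means instructions sharing a context must stay in the same list; the context names and balancing heuristic are hypothetical:

```python
from collections import defaultdict

def regroup_by_context(instructions, m):
    """Regroup K (context_id, command) rendering instructions into m lists.
    Instructions that share a rendering context stay in one list, so the
    m lists are mutually independent and can be submitted in parallel."""
    by_context = defaultdict(list)
    for ctx, cmd in instructions:
        by_context[ctx].append(cmd)
    lists = [[] for _ in range(m)]
    # Place whole context groups, largest first, to roughly balance the lists.
    groups = sorted(by_context.values(), key=len, reverse=True)
    for i, cmds in enumerate(groups):
        lists[i % m].extend(cmds)
    return lists

# K = 6 instructions spread over three independent rendering contexts.
instructions = [("ctx_a", "draw_terrain"), ("ctx_b", "draw_ui"),
                ("ctx_a", "draw_water"), ("ctx_c", "draw_shadow"),
                ("ctx_b", "draw_hud"), ("ctx_a", "draw_sky")]
lists = regroup_by_context(instructions, m=2)
assert sum(len(lst) for lst in lists) == len(instructions)
```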
In an exemplary embodiment, the list splitting module 910 may be configured to traverse elements in the target scene by using a first rendering thread to obtain a rendering task corresponding to the target scene.
In an exemplary embodiment, the list splitting module 910 may be configured to perform scene culling on the rendering task, so as to obtain a culled rendering task.
In an exemplary embodiment, the list splitting module 910 may be configured to mark the rendering order of the split rendering instructions.
In an exemplary embodiment, the scene rendering module 920 may be configured to operate, by the image processor, the rendering instructions in the M rendering instruction lists according to the rendering order flags corresponding to the rendering instructions, so as to render the target scene based on the operation result.
In an exemplary embodiment, the scene rendering module 920 may be configured to obtain rendering instructions in the M rendering instruction lists for operation based on a semaphore mechanism by using the image processor.
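One way such a semaphore mechanism could work is sketched below (an assumed, simplified model, not the patent's implementation): each rendering-order mark gets a semaphore, and the image-processor side may only operate instruction i after instruction i - 1 has signalled, restoring the marked order even though the M lists arrived in parallel:

```python
import threading

executed = []
lock = threading.Lock()
sems = [threading.Semaphore(0) for _ in range(6)]  # one per rendering-order mark
sems[0].release()  # instruction 0 may run immediately

def run_instruction(order):
    sems[order].acquire()       # wait until the preceding instruction signals
    with lock:
        executed.append(order)  # "operate" the rendering instruction
    if order + 1 < len(sems):
        sems[order + 1].release()  # signal the next instruction in order

# Instructions arrive from the M lists in arbitrary (parallel) order:
threads = [threading.Thread(target=run_instruction, args=(o,))
           for o in (3, 1, 5, 0, 4, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert executed == [0, 1, 2, 3, 4, 5]  # semaphores restored the marked order
```

The key property is that arrival order no longer matters: the per-mark semaphores form a chain, so correctness (the rendering order) is decoupled from the parallel submission.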
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
There is also provided in an exemplary embodiment of the present disclosure an electronic device for implementing a scene rendering method, the electronic device comprising at least a processor and a memory, the memory for storing executable instructions of the processor, the processor being configured to perform the scene rendering method via execution of the executable instructions.
The following takes the mobile terminal 1000 in fig. 10 as an example to illustrate the configuration of the electronic device in the embodiment of the present disclosure. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration of fig. 10 can also be applied to fixed-type devices. In other embodiments, mobile terminal 1000 may include more or fewer components than shown, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the various components is shown schematically and is not meant to be a structural limitation of the mobile terminal 1000. In other embodiments, the mobile terminal 1000 may also adopt an interfacing manner different from that of fig. 10, or a combination of multiple interfacing manners.
As shown in fig. 10, the mobile terminal 1000 may specifically include: a processor 1010, an internal memory 1021, an external memory interface 1022, a Universal Serial Bus (USB) interface 1030, a charging management module 1040, a power management module 1041, a battery 1042, an antenna 1, an antenna 2, a mobile communication module 1050, a wireless communication module 1060, an audio module 1070, a speaker 1071, a receiver 1072, a microphone 1073, an earphone interface 1074, a sensor module 1080, a display 1090, a camera module 1091, an indicator 1092, a motor 1093, a button 1094, a Subscriber Identity Module (SIM) card interface 1095, and the like. Wherein the sensor module 1080 may include a depth sensor 10801, a pressure sensor 10802, a gyroscope sensor 10803, and the like.
Processor 1010 may include one or more processing units, such as: a Central Processing Unit (CPU), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors. The CPU may include a plurality of core unit groups.
A memory is provided in the processor 1010. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and are controlled for execution by processor 1010.
The mobile terminal 1000 implements a display function through the GPU, the display screen 1090, the application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 1090 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1010 may include one or more GPUs that execute program instructions to generate or alter display information.
The mobile terminal 1000 may implement a photographing function through an ISP, a camera module 1091, a video codec, a GPU, a display screen 1090, an application processor, and the like. The ISP is used for processing data fed back by the camera module 1091; the camera module 1091 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 1000 may also support one or more video codecs.
The external memory interface 1022 may be used for connecting an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 1000. The external memory card communicates with the processor 1010 through the external memory interface 1022 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 1021 may be used to store computer-executable program code, which includes instructions. The internal memory 1021 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 1000, and the like. In addition, the internal memory 1021 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like. Processor 1010 executes various functional applications and data processing of mobile terminal 1000 by executing instructions stored in internal memory 1021 and/or instructions stored in a memory provided in the processor.
Furthermore, the exemplary embodiments of the present disclosure also provide a computer-readable storage medium on which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, for example, any one or more of the steps in fig. 2-3, when the program product is run on the terminal device.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A scene rendering method is applied to a central processing unit comprising at least two core unit groups, and the method comprises the following steps:
obtaining a rendering task corresponding to a target scene, and splitting the rendering task by using a first rendering thread of a first core unit group to obtain M rendering instruction lists; wherein M is an integer greater than 1;
utilizing the first rendering thread and N second rendering threads corresponding to the N second core unit groups to submit M rendering instruction lists to an image processor in parallel so that the image processor renders the target scene; wherein N is equal to M minus 1.
2. The method of claim 1, wherein the splitting the rendering task with the first rendering thread of the first core unit group to obtain M rendering instruction lists comprises:
the first rendering thread splits the rendering task into K rendering instructions according to the independence of the rendering context; wherein K is an integer greater than 1;
and recombining the K rendering instructions to obtain M rendering instruction lists.
3. The method according to claim 1, wherein the obtaining of the rendering task corresponding to the target scene includes:
and traversing the elements in the target scene by using the first rendering thread to obtain a rendering task corresponding to the target scene.
4. The method of claim 3, further comprising:
and eliminating scenes of the rendering tasks to obtain the eliminated rendering tasks.
5. The method of claim 1, wherein after the splitting the rendering task with the first rendering thread of the first set of core units, the method further comprises:
and marking the rendering sequence of the rendering instruction obtained by splitting.
6. The method of claim 5, wherein the image processor renders the target scene, comprising:
and the image processor operates the rendering instructions in the M rendering instruction lists according to the rendering sequence marks corresponding to the rendering instructions so as to render the target scene based on the operation result.
7. The method of claim 6, further comprising:
the image processor operates on rendering instructions in the list of M rendering instructions based on a semaphore mechanism.
8. A scene rendering apparatus, applied to a central processing unit including at least two core unit groups, the apparatus comprising:
the list splitting module is used for obtaining a rendering task corresponding to a target scene, and splitting the rendering task by using a first rendering thread of a first core unit group to obtain M rendering instruction lists; wherein M is an integer greater than 1;
the scene rendering module is used for utilizing the first rendering thread and N second rendering threads corresponding to N second core unit groups to submit M rendering instruction lists to an image processor in parallel so that the image processor renders the target scene; wherein N is equal to M minus 1.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-7 via execution of the executable instructions.
CN202210355815.6A 2022-04-06 2022-04-06 Scene rendering method and device, computer readable medium and electronic equipment Pending CN114741193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210355815.6A CN114741193A (en) 2022-04-06 2022-04-06 Scene rendering method and device, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210355815.6A CN114741193A (en) 2022-04-06 2022-04-06 Scene rendering method and device, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114741193A true CN114741193A (en) 2022-07-12

Family

ID=82278766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210355815.6A Pending CN114741193A (en) 2022-04-06 2022-04-06 Scene rendering method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114741193A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116880740A (en) * 2023-09-06 2023-10-13 深圳硬之城信息技术有限公司 Method, device and storage medium for rendering table scrolling
CN116880740B (en) * 2023-09-06 2024-01-02 深圳硬之城信息技术有限公司 Method, device and storage medium for rendering table scrolling

Similar Documents

Publication Publication Date Title
KR101980990B1 (en) Exploiting frame to frame coherency in a sort-middle architecture
CN110413812B (en) Neural network model training method and device, electronic equipment and storage medium
CN110070496B (en) Method and device for generating image special effect and hardware device
CN111026493B (en) Interface rendering processing method and device
CN110007936B (en) Data processing method and device
CN113784049B (en) Camera calling method of android system virtual machine, electronic equipment and storage medium
CN111447504B (en) Three-dimensional video processing method and device, readable storage medium and electronic equipment
CN111324834B (en) Method, device, electronic equipment and computer readable medium for image-text mixed arrangement
CN114741193A (en) Scene rendering method and device, computer readable medium and electronic equipment
CN112929728A (en) Video rendering method, device and system, electronic equipment and storage medium
CN112351221B (en) Image special effect processing method, device, electronic equipment and computer readable storage medium
CN111784811A (en) Image processing method and device, electronic equipment and storage medium
CN109816791B (en) Method and apparatus for generating information
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
CN105727556A (en) Image drawing method, related equipment and system
CN115576470A (en) Image processing method and apparatus, augmented reality system, and medium
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN111813407B (en) Game development method, game running device and electronic equipment
CN113837918A (en) Method and device for realizing rendering isolation by multiple processes
CN112085035A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN112019919B (en) Video sticker adding method and device and electronic equipment
CN116309974B (en) Animation scene rendering method, system, electronic equipment and medium
CN110069570B (en) Data processing method and device
US20240036891A1 (en) Sub-application running method and apparatus, electronic device, program product, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination