CN117008796B - Multi-screen collaborative rendering method, device, equipment and medium - Google Patents


Info

Publication number
CN117008796B
Authority
CN
China
Prior art keywords
rendering
host
screen
arbitrary
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311250622.5A
Other languages
Chinese (zh)
Other versions
CN117008796A (en)
Inventor
黄亚平
余杰敏
黄圣峻
李昱臻
石蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tus Digital Technology Shenzhen Co ltd
Original Assignee
Tus Digital Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tus Digital Technology Shenzhen Co ltd filed Critical Tus Digital Technology Shenzhen Co ltd
Priority to CN202311250622.5A
Publication of CN117008796A
Application granted
Publication of CN117008796B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the technical field of data processing, and provides a multi-screen collaborative rendering method, device, equipment and medium. First, a central control host periodically broadcasts an instruction carrying a frame sequence number, so that frame synchronization is achieved among multiple screens without message waiting, avoiding the extra delay caused by message transmission. Second, tiled rendering is performed by at least one rendering host, so that multiple rendering hosts share the rendering load and rendering performance is not limited by the rendering resolution of a single frame image. Third, based on an off-screen frame buffer and a screen frame buffer, rendering and screen display are processed independently, decoupling rendering from display and giving the rendering process greater flexibility.

Description

Multi-screen collaborative rendering method, device, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a medium for collaborative rendering of multiple screens.
Background
Rendering of three-dimensional scenes relies on the computing power of a physical GPU (Graphics Processing Unit). When the number of scene model vertices inside the virtual camera's view frustum, or the actual rendering size (drawing buffer size) of the rendering pipeline, exceeds a certain threshold, a single GPU device can no longer bear such a high-load rendering task.
Owing to the basic characteristics of the three-dimensional graphics rendering pipeline, most of the real-time rendering load is concentrated in per-fragment computation: after the vertices are processed in the vertex shader (Vertex Shader) stage, primitive assembly is performed, and the primitives inside the view frustum are then divided into fragments, whose number far exceeds the number of vertices. An excessive GPU load therefore degrades the rendering effect.
Therefore, when high-precision real-time three-dimensional scene rendering must be provided for ultra-high-resolution physical large screens in venues and similar scenarios, a new multi-device three-dimensional distributed collaborative rendering solution is needed.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a multi-screen collaborative rendering method, apparatus, device and medium, aiming to solve the problem that excessive GPU load degrades the rendering effect.
The multi-screen collaborative rendering method is applied to a multi-screen collaborative rendering system, and the multi-screen collaborative rendering system comprises a central control host, at least one rendering host and a display screen corresponding to each rendering host in the at least one rendering host; the multi-screen collaborative rendering method comprises the following steps:
The central control host broadcasts instructions to each rendering host at preset time intervals;
when any rendering host in the at least one rendering host receives the instruction, the arbitrary rendering host detects its own rendering state;
when the rendering state of the arbitrary rendering host is in non-rendering, the arbitrary rendering host acquires the frame sequence number carried in the instruction as the current frame sequence number;
the arbitrary rendering host acquires the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number;
the arbitrary rendering host compares the current frame sequence number with the previous frame sequence number;
when the current frame sequence number is the same as the previous frame sequence number, the arbitrary rendering host acquires the tile metadata corresponding to the previous frame sequence number from the off-screen frame buffer, and writes the tile metadata into its own screen frame buffer to perform screen display on the corresponding display screen.
According to a preferred embodiment of the invention, the method further comprises:
the multi-screen collaborative rendering system acquires the geometric structure, the content complexity and the rendering requirement data of the current scene;
the multi-screen collaborative rendering system determines the number of fragments as the number of the display screens according to the geometric structure, the content complexity and the rendering requirement data;
The multi-screen collaborative rendering system configures a corresponding rendering host for each display screen;
the multi-screen collaborative rendering system acquires the computing capacity, network bandwidth and current load of each rendering host;
the multi-screen collaborative rendering system schedules each rendering host according to the computing capacity, network bandwidth and current load of each rendering host.
According to a preferred embodiment of the present invention, the initial value of the frame sequence number in each rendering host is 1 greater than the initial value of the frame sequence number in the central control host; the method further comprises:
when the rendering state of the arbitrary rendering host is in rendering, the arbitrary rendering host determines that the rendering of the arbitrary rendering host is overtime;
and the arbitrary rendering host continues the rendering processing, increases the value of the frame sequence number stored in the self off-screen frame buffer by 1, and gives up the rendering processing of the frame corresponding to the current frame sequence number.
According to a preferred embodiment of the present invention, before the arbitrary rendering host obtains the tile metadata corresponding to the previous frame sequence number from the off-screen frame buffer, the method further includes:
the arbitrary rendering host performs rendering processing on the frame corresponding to the previous frame sequence number to obtain the tile metadata;
the arbitrary rendering host stores the tile metadata to the off-screen frame buffer.
According to a preferred embodiment of the invention, the method further comprises:
when the current frame sequence number is different from the previous frame sequence number, or after the arbitrary rendering host writes the tile metadata into its own screen frame buffer to perform screen display on the corresponding display screen, the arbitrary rendering host increases the frame sequence number stored in its own off-screen frame buffer by 1, and performs rendering processing on the frame corresponding to the current frame sequence number.
According to a preferred embodiment of the invention, the method further comprises:
the arbitrary rendering host maintains the off-screen frame buffer and the screen frame buffer based on the WebGL2 frame buffer object.
According to a preferred embodiment of the invention, the method further comprises:
when the instruction is the first instruction received by the arbitrary rendering host, the arbitrary rendering host stores the initialized 3D context clear-color data in the screen frame buffer.
The multi-screen collaborative rendering device runs in a multi-screen collaborative rendering system, and the multi-screen collaborative rendering system comprises a central control host, at least one rendering host and a display screen corresponding to each rendering host in the at least one rendering host; the multi-screen collaborative rendering apparatus includes:
The central control host is used for broadcasting instructions to each rendering host at preset time intervals;
any rendering host in the at least one rendering host is used for detecting the rendering state of the rendering host when the instruction is received;
the arbitrary rendering host is further configured to obtain, when the rendering state of the arbitrary rendering host is in non-rendering, a frame sequence number carried in the instruction as a current frame sequence number;
the arbitrary rendering host is further configured to obtain a frame sequence number stored in the off-screen frame buffer of the arbitrary rendering host as a previous frame sequence number;
the arbitrary rendering host is further configured to compare the current frame sequence number with the previous frame sequence number;
and the arbitrary rendering host is further configured to acquire, when the current frame sequence number is the same as the previous frame sequence number, the tile metadata corresponding to the previous frame sequence number from the off-screen frame buffer, and write the tile metadata into its own screen frame buffer to perform screen display on the corresponding display screen.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor that executes the instruction stored in the memory to implement the multi-screen collaborative rendering method.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement the multi-screen collaborative rendering method.
According to the technical scheme, first, the central control host periodically broadcasts an instruction carrying a frame sequence number, so that frame synchronization is achieved among multiple screens without message waiting, avoiding the extra delay caused by message transmission; second, tiled rendering is performed by at least one rendering host, so that multiple rendering hosts share the rendering load and rendering performance is not limited by the rendering resolution of a single frame image; third, based on the off-screen frame buffer and the screen frame buffer, rendering and screen display are processed independently, decoupling rendering from display and giving the rendering process greater flexibility.
Drawings
Fig. 1 is a schematic view of an application environment of a multi-screen collaborative rendering method according to the present invention.
FIG. 2 is a flow chart of a preferred embodiment of the multi-screen collaborative rendering method of the present invention.
FIG. 3 is a schematic diagram of a multi-screen collaborative rendering effect of the present invention.
FIG. 4 is a functional block diagram of a multi-screen collaborative rendering apparatus according to a preferred embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a computer device implementing a preferred embodiment of the multi-screen collaborative rendering method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic view of an application environment of the multi-screen collaborative rendering method according to the present invention.
The multi-screen collaborative rendering method is applied to a multi-screen collaborative rendering system. The system includes a central control host, at least one rendering host (fig. 1 takes 4 rendering hosts as an example; actual usage scenarios are not limited to 4), and a display screen corresponding to each rendering host. As shown in fig. 1, rendering host 1 corresponds to display screen 1, rendering host 2 to display screen 2, rendering host 3 to display screen 3, and rendering host 4 to display screen 4. The central control host broadcasts instructions to rendering hosts 1 to 4; after receiving an instruction, each rendering host locally performs rendering processing based on the frame buffering technique, and the final rendered picture is displayed jointly by the multiple display screens. Multi-screen collaborative rendering is thus realized, achieving the animated display effect of an ultra-high-resolution tiled large screen.
FIG. 2 is a flow chart of a multi-screen collaborative rendering method according to a preferred embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The multi-screen collaborative rendering method is applied to one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The computer device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing (Cloud Computing).
The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
Specifically, the multi-screen collaborative rendering method includes:
s10, broadcasting instructions to each rendering host by the central control host at preset time intervals.
The instruction may carry data such as a frame number, a virtual camera type, and parameters (e.g., a rendering range of a virtual camera) for frame rendering and screen display.
The preset time interval corresponds to a locked frame rate and may be custom-configured according to the actual usage scenario, for example: the preset time interval may be configured to lock at 60 frames per second, i.e. the interval between two frames is 1000 ms/60, approximately 16.67 ms.
In this way, the central control host can send instructions to all rendering hosts through WebSocket at a fixed interval of 1000 ms/60.
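As an illustrative sketch only (function and message-field names are assumptions, not taken from the patent), the timed broadcast with a pre-increment frame sequence number can be modeled as:

```javascript
// Sketch of the central control host's broadcast loop. The patent specifies
// only WebSocket transport and a fixed 1000 ms / 60 interval; everything
// else here (names, message shape) is illustrative.
const FRAME_INTERVAL_MS = 1000 / 60; // locked 60 fps, ~16.67 ms between frames

function makeInstructionBroadcaster(sendFn) {
  let frameSeq = 0; // initial value on the central control host is 0
  return function tick() {
    frameSeq += 1; // incremented before every broadcast, per the scheme
    // The instruction carries the frame sequence number plus virtual camera
    // type/parameters (e.g. each host's rendering range).
    sendFn(JSON.stringify({ frameSeq, camera: { type: 'perspective' } }));
    return frameSeq;
  };
}

// In a real deployment, a timer would drive the tick and a WebSocket server
// would fan the message out to every rendering host, e.g.:
//   setInterval(() => tick(), FRAME_INTERVAL_MS);
```

Because the data flow is one-way, no host ever replies to the broadcast, which is what keeps the message round-trip delay out of the render loop.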
The purpose of the broadcast instruction is to initiate the rendering of the next frame and to display the result of the previous frame's rendering on screen.
It will be appreciated that the frame synchronization of the final multi-screen collaborative rendering depends on the broadcast timing. Instruction data is transmitted as a unidirectional flow, avoiding the communication delay of return messages, which would affect rendering performance.
In this embodiment, the method further includes:
the multi-screen collaborative rendering system acquires the geometric structure, the content complexity and the rendering requirement data of the current scene;
The multi-screen collaborative rendering system determines the number of fragments as the number of the display screens according to the geometric structure, the content complexity and the rendering requirement data;
the multi-screen collaborative rendering system configures a corresponding rendering host for each display screen;
the multi-screen collaborative rendering system acquires the computing capacity, network bandwidth and current load of each rendering host;
the multi-screen collaborative rendering system schedules each rendering host according to the computing capacity, network bandwidth and current load of each rendering host.
In the above embodiment, the scene to be rendered is first partitioned into tiles according to the geometric structure, content complexity and rendering requirement data of the current scene, enabling tiling, rasterization and final rendering. A corresponding rendering host is configured for each tile (i.e. each display screen) according to computing capacity, network bandwidth and current load, and the distributed rendering hosts are reasonably scheduled and controlled, realizing dynamic screen partitioning; each rendering host renders the tile of the scene it is responsible for.
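The patent does not specify a scheduling formula for assigning tiles to hosts. As a purely hypothetical illustration, hosts could be ranked by a simple score combining the three factors named above (computing capacity, network bandwidth, current load):

```javascript
// Illustrative sketch only: the scoring metric below is an assumption,
// not part of the patented scheme.
function pickRenderingHost(hosts) {
  // hosts: [{ id, computeCapacity, bandwidthMbps, currentLoad }, ...]
  let best = null;
  let bestScore = -Infinity;
  for (const h of hosts) {
    // Higher capacity/bandwidth and lower load yield a higher score.
    const score = (h.computeCapacity * h.bandwidthMbps) / (1 + h.currentLoad);
    if (score > bestScore) { bestScore = score; best = h; }
  }
  return best; // null when the list is empty
}
```

Any real scheduler would also need to react to load changes over time; this sketch only shows the one-shot selection step.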
S11, when any rendering host in the at least one rendering host receives the instruction, the any rendering host detects the rendering state of the any rendering host.
Wherein the rendering state may include in-rendering, non-in-rendering.
In this embodiment, the initial value of the frame sequence number in each rendering host is 1 greater than the initial value in the central control host. For example: the initial value of the frame sequence number maintained on the central control host is 0, and the frame sequence number is increased by 1 before the central control host sends an instruction each time. Correspondingly, the initial value of the frame sequence number on each rendering host is 1; when no rendering timeout occurs, the frame sequence number is increased by 1 before each frame's rendering starts. If a rendering timeout is detected, the local frame sequence number is set to the value carried by the current instruction plus 1, thereby ensuring frame synchronization among the multiple screens.
Under normal conditions, given the fixed instruction frequency, the corresponding rendering host should have just completed the rendering of the previous frame when the central control host issues the next frame's instruction. Therefore, when the rendering state of the arbitrary rendering host is still in rendering, its rendering has timed out.
In this embodiment, the method further includes:
when the rendering state of the arbitrary rendering host is in rendering, the arbitrary rendering host determines that its rendering has timed out;
and the arbitrary rendering host continues the rendering processing, increases the frame sequence number stored in its own off-screen frame buffer by 1, and abandons the rendering processing of the frame corresponding to the current frame sequence number.
In the above embodiment, when a rendering timeout occurs, the arbitrary rendering host continues to complete the current rendering task and increases the frame sequence number stored in its own off-screen frame buffer by 1, ensuring that the frame sequence number stored on the central control host matches its own when the next instruction is issued; the rendering of the frame corresponding to the current frame sequence number is abandoned. Because the frame lock keeps a suitable interval between two instructions, this produces only slight jitter on the corresponding display screen and does not seriously affect the visual effect. Frame synchronization between the display screens is subsequently maintained without message waiting, effectively avoiding the extra delay caused by message transmission.
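The per-instruction decision logic of steps S11 to S15, including the timeout branch above, can be sketched as a small pure function (all names are illustrative, not from the patent):

```javascript
// Sketch of one rendering host's reaction to a broadcast instruction.
// host.offscreenFrameSeq plays the role of the frame sequence number stored
// in the off-screen frame buffer (initial value 1, i.e. central host + 1).
function onInstruction(host, instrFrameSeq) {
  if (host.rendering) {
    // Still rendering when the next instruction arrives: timeout. Finish the
    // current task, resync the local frame number to instruction + 1, and
    // abandon the frame this instruction asked for.
    host.offscreenFrameSeq = instrFrameSeq + 1;
    return { action: 'skip-frame' };
  }
  if (instrFrameSeq === host.offscreenFrameSeq) {
    // In sync: present the finished tile from the off-screen buffer, then
    // advance and render the frame named by the current instruction.
    host.offscreenFrameSeq += 1;
    return { action: 'display-then-render' };
  }
  // Out of sync: do not display; just advance and render the current frame.
  host.offscreenFrameSeq += 1;
  return { action: 'render-only' };
}
```

Tracing from the initial values (central host 0, rendering host 1): the first instruction carries frame 1, matches the host's stored 1, and triggers display of the initialized clear color before normal rendering begins.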
And S12, when the rendering state of the arbitrary rendering host is in non-rendering, the arbitrary rendering host acquires the frame sequence number carried in the instruction as the current frame sequence number.
In this embodiment, when the rendering state of the arbitrary rendering host is in non-rendering, it is indicated that the arbitrary rendering host has completed the rendering task currently being executed, and the subsequent operation may be continued.
Specifically, the arbitrary rendering host acquires the frame sequence number carried in the instruction as the current frame sequence number for use in subsequent comparison.
S13, the arbitrary rendering host acquires the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number.
In this embodiment, the off-screen frame buffer is configured to store data of a previous frame, including a sequence number of the previous frame, tile metadata obtained after rendering of the previous frame is completed, and so on. That is, the data in the off-screen frame buffer is the rendering result image data corresponding to the last time the rendering host received the instruction.
Specifically, the off-screen frame buffer is implemented with a WebGL2 framebuffer object.
S14, the arbitrary rendering host compares the current frame sequence number with the previous frame sequence number.
In this embodiment, the arbitrary rendering host compares the current frame sequence number with the previous frame sequence number, so as to perform different processing according to the comparison result.
S15, when the current frame sequence number is the same as the previous frame sequence number, the arbitrary rendering host acquires the tile metadata corresponding to the previous frame sequence number from the off-screen frame buffer, and writes the tile metadata into its own screen frame buffer to display on the corresponding display screen.
In this embodiment, when the current frame sequence number is the same as the previous frame sequence number, frame synchronization is maintained between the central control host and the arbitrary rendering host. The tile metadata stored in the off-screen frame buffer after the previous frame was rendered can therefore be displayed: it is written into the screen frame buffer and shown on the display screen corresponding to the arbitrary rendering host.
The data stored in the screen frame buffer determines the display content of the corresponding display screen; it is the color frame buffer used for rendering to the current display screen.
In this embodiment, before the arbitrary rendering host obtains the tile metadata corresponding to the previous frame sequence number from the off-screen frame buffer, the method further includes:
the arbitrary rendering host performs rendering processing on the frame corresponding to the previous frame sequence number to obtain the tile metadata;
the arbitrary rendering host stores the tile metadata to the off-screen frame buffer.
The off-screen frame buffer stores the tile metadata of the frame whose rendering has completed, while the screen frame buffer stores the data currently to be displayed on screen. Rendering and screen display are thus processed independently through the two buffers, decoupling rendering from display and giving the rendering process greater flexibility.
In this embodiment, the method further includes:
when the current frame sequence number is different from the previous frame sequence number, or after the arbitrary rendering host writes the tile metadata into its own screen frame buffer to display on the corresponding display screen, the arbitrary rendering host increases the frame sequence number stored in its own off-screen frame buffer by 1, and performs rendering processing on the frame corresponding to the current frame sequence number.
In the above embodiment, increasing the frame sequence number stored in the off-screen frame buffer by 1 keeps subsequent frames synchronized, and the frame corresponding to the current frame sequence number is rendered for subsequent screen display.
In this embodiment, the method further includes:
The arbitrary rendering host maintains the off-screen frame buffer and the screen frame buffer based on the WebGL2 frame buffer object.
In the above embodiment, compared with solutions based on local native platforms such as DirectX or OpenGL, the WebGL2 technology based on the Web (World Wide Web) platform supports cross-platform, multi-terminal use and lightweight deployment, and thus supports the synchronous collaborative processing of multiple rendering hosts.
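A minimal sketch of maintaining the two buffers with WebGL2 framebuffer objects follows (helper names are illustrative; `gl` is a WebGL2RenderingContext, and the default framebuffer plays the role of the screen frame buffer):

```javascript
// Create the off-screen frame buffer: a framebuffer object with a color
// texture that receives the tile rendered off screen.
function createOffscreenTarget(gl, width, height) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, width, height);
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, tex };
}

// Copy the finished tile from the off-screen FBO into the default (screen)
// frame buffer, which is the step that decouples rendering from display.
function presentToScreen(gl, offscreen, width, height) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, offscreen.fbo);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null); // null = screen frame buffer
  gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);
}
```

Rendering a new frame then targets the off-screen FBO, while `presentToScreen` is only invoked when the frame sequence numbers match, so a slow frame never partially appears on the display.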
In this embodiment, the method further includes:
when the instruction is the first instruction received by the arbitrary rendering host, the arbitrary rendering host stores the initialized 3D context clear-color data in the screen frame buffer.
Further, the display pictures of the display screens corresponding to the rendering hosts are stitched together, so that the rendered picture of the physically tiled large screen is obtained and synchronously displayed on the logical screen of the central control host.
In this embodiment, virtual cameras with different parameters may be configured on the multiple rendering hosts to partition the world space (World space) of the same three-dimensional scene, achieving a tiled division of the final drawn image. Fig. 3 is a schematic diagram of the multi-screen collaborative rendering effect of the present invention. Taking 4 display screens corresponding to 4 rendering hosts as an example, the upper part of fig. 3 shows each display screen independently displaying the tile it is responsible for, and the lower part shows the display effect of the corresponding logical large screen. The rendering load is distributed to the 4 rendering hosts, and the display pictures of the 4 display screens are composited, finally realizing the effect of displaying all tiles on one logical large screen.
Moreover, by merely adapting the projection transformation matrix and the corresponding virtual camera parameters, the tiling can accommodate both orthographic projection (Orthographic projection) and perspective projection (Perspective projection); in essence, the two differ only in how the three-dimensional space is partitioned.
In particular, the key to tiling a three-dimensional scene under perspective projection is configuring the corresponding projection matrix. Common vector/matrix operation libraries implement a method named after the frustum, and the viewing frustum can be split by setting that method's left, right, bottom and top parameters. The scene partitioning technique based on view-frustum splitting shares the GPU (Graphics Processing Unit) load, providing a Web-platform solution for real-time rendering of large scenes at ultra-high resolution.
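Assuming a grid of equally sized tiles, the sub-frustum for one screen can be derived from the full frustum as follows (illustrative helper; the fields mirror the left/right/bottom/top/near/far arguments of the frustum method found in common matrix libraries such as gl-matrix's `mat4.frustum`):

```javascript
// Split one viewing frustum into the sub-frustum for tile (col, row)
// of a cols x rows grid of display screens. Near and far planes are
// shared by all tiles; only the near-plane rectangle is subdivided.
function splitFrustum(full, cols, rows, col, row) {
  const w = (full.right - full.left) / cols;
  const h = (full.top - full.bottom) / rows;
  return {
    left: full.left + col * w,
    right: full.left + (col + 1) * w,
    // row 0 is the bottom row here; invert if screens are indexed top-down
    bottom: full.bottom + row * h,
    top: full.bottom + (row + 1) * h,
    near: full.near,
    far: full.far,
  };
}
```

Feeding each result into the library's frustum method yields one off-axis projection matrix per rendering host, so the four images line up seamlessly on the tiled large screen.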
According to the technical scheme, first, the central control host periodically broadcasts an instruction carrying a frame sequence number, achieving frame synchronization across multiple screens without message waiting and thus avoiding the extra delay of message transmission; second, tiled rendering on at least one rendering host lets multiple rendering hosts share the rendering load, so rendering performance is not limited by the rendering resolution of a single frame image; third, rendering and on-screen display are processed independently via the off-screen frame buffer and the screen frame buffer, decoupling rendering from display and giving the rendering process greater flexibility.
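The off-screen/on-screen decoupling can be sketched with WebGL2 framebuffer objects, the mechanism the embodiments name. This is a hedged sketch under stated assumptions: the function names and the renderbuffer-based setup are illustrative, and `gl` is assumed to be a WebGL2RenderingContext:

```javascript
// Create an off-screen framebuffer with a color renderbuffer attachment.
// The scene is rendered into this buffer, independent of when it is shown.
function createOffscreen(gl, width, height) {
  const fbo = gl.createFramebuffer();
  const color = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, color);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA8, width, height);
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                             gl.RENDERBUFFER, color);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, width, height };
}

// "Present" step: copy the off-screen frame into the screen frame buffer
// (the default framebuffer, bound as null) with a single blit.
function presentOffscreen(gl, offscreen) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, offscreen.fbo);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
  gl.blitFramebuffer(0, 0, offscreen.width, offscreen.height,
                     0, 0, offscreen.width, offscreen.height,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);
}
```

Because presenting is just a blit, the host can display the last completed frame on each broadcast tick regardless of how long the next frame takes to render.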
Fig. 4 is a functional block diagram of a multi-screen collaborative rendering apparatus according to a preferred embodiment of the present invention. The multi-screen collaborative rendering apparatus 11 includes a central control host 110 and an arbitrary rendering host 111, each of which is a series of computer program segments that are stored in a memory, can be executed by a processor, and perform a fixed function. In this embodiment, their specific functions are described in detail in the following embodiments.
In this embodiment, the multi-screen collaborative rendering apparatus 11 operates in a multi-screen collaborative rendering system, which includes the central control host 110, at least one rendering host (including the arbitrary rendering host 111), and a display screen corresponding to each of the at least one rendering host; the device comprises:
the central control host 110 is configured to broadcast an instruction to each rendering host at preset time intervals;
any rendering host 111 of the at least one rendering host is configured to detect its own rendering state when the instruction is received;
the arbitrary rendering host 111 is further configured to, when its rendering state is not rendering, obtain the frame sequence number carried in the instruction as the current frame sequence number;
the arbitrary rendering host 111 is further configured to obtain the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number;
the arbitrary rendering host 111 is further configured to compare the current frame sequence number with the previous frame sequence number;
the arbitrary rendering host 111 is further configured to, when the current frame sequence number is the same as the previous frame sequence number, obtain the fragment data corresponding to the previous frame sequence number from the off-screen frame buffer and write the fragment data into its own screen frame buffer for display on the corresponding display screen.
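The per-instruction decision logic of the rendering-host modules above can be sketched as follows. This is an illustrative paraphrase, not the patent's code: the exact off-by-one bookkeeping of the claims is simplified, and rendering is made synchronous for clarity (in the real system it is asynchronous):

```javascript
class RenderHost {
  constructor() {
    // Per the claims, each host's initial frame sequence number is 1 greater
    // than the central control host's initial value.
    this.offscreen = { frameNo: 1, tile: null };
    this.rendering = false; // true while a frame render is in flight
  }

  // Called when the central control host's broadcast instruction arrives.
  // `present` writes a tile into the screen frame buffer; `renderFrame`
  // renders one frame off-screen.
  onInstruction(currentFrameNo, present, renderFrame) {
    if (this.rendering) {
      // Still rendering when the tick arrives: this frame has timed out.
      // Keep rendering, bump the stored number, skip presenting this tick.
      this.offscreen.frameNo += 1;
      return 'timeout';
    }
    const lastFrameNo = this.offscreen.frameNo;
    if (currentFrameNo === lastFrameNo) {
      // The off-screen buffer holds exactly the requested frame: display it.
      present(this.offscreen.tile);
    }
    // Advance the stored sequence number and render the next frame off-screen.
    this.offscreen.frameNo = lastFrameNo + 1;
    this.offscreen.tile = renderFrame(this.offscreen.frameNo);
    return currentFrameNo === lastFrameNo ? 'presented' : 'skipped';
  }
}
```

Because a host only ever compares the broadcast number to its stored number, no reply messages are needed and all screens flip in the same tick.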
According to the technical scheme, first, the central control host periodically broadcasts an instruction carrying a frame sequence number, achieving frame synchronization across multiple screens without message waiting and thus avoiding the extra delay of message transmission; second, tiled rendering on at least one rendering host lets multiple rendering hosts share the rendering load, so rendering performance is not limited by the rendering resolution of a single frame image; third, rendering and on-screen display are processed independently via the off-screen frame buffer and the screen frame buffer, decoupling rendering from display and giving the rendering process greater flexibility.
Fig. 5 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention for implementing the multi-screen collaborative rendering method.
The computer device 1 may comprise a memory 12, a processor 13, and a bus, and may further comprise a computer program stored in the memory 12 and executable on the processor 13, such as a multi-screen collaborative rendering program.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation of it. The computer device 1 may have a bus-type or star-type structure, and may include more or fewer hardware or software components than illustrated, or a different arrangement of components; for example, it may further comprise input/output devices, network access devices, and the like.
It should be noted that the computer device 1 is only an example; other existing or future electronic products that can be adapted to the present invention are also included within the scope of protection of the present invention and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, such as a removable hard disk of the computer device 1. The memory 12 may in other embodiments also be an external storage device of the computer device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only for storing application software installed in the computer device 1 and various types of data, such as codes of a multi-screen collaborative rendering program, etc., but also for temporarily storing data that has been output or is to be output.
The processor 13 may, in some embodiments, be composed of integrated circuits, for example a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 13 is the control unit (Control Unit) of the computer device 1: it connects the components of the entire computer device 1 using various interfaces and lines, and executes the various functions of the computer device 1 and processes data by running or executing the programs or modules stored in the memory 12 (for example, executing the multi-screen collaborative rendering program) and calling the data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps in the various multi-screen collaborative rendering method embodiments described above, such as the steps shown in fig. 2.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specified functions, the instruction segments describing the execution of the computer program in the computer device 1. For example, the computer program may be partitioned into the central control host 110 and the arbitrary rendering host 111.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional module is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a processor (processor) to execute portions of the multi-screen collaborative rendering method according to various embodiments of the present invention.
The modules/units integrated in the computer device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on this understanding, the present invention may also be implemented by a computer program for instructing a relevant hardware device to implement all or part of the procedures of the above-mentioned embodiment method, where the computer program may be stored in a computer readable storage medium and the computer program may be executed by a processor to implement the steps of each of the above-mentioned method embodiments.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. The blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks generated in association with one another by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in Fig. 5, but this does not mean there is only one bus or one type of bus. The bus is arranged to enable communication between the memory 12, the at least one processor 13, and other components.
Although not shown, the computer device 1 may further comprise a power source (such as a battery) for powering the various components, preferably the power source may be logically connected to the at least one processor 13 via a power management means, whereby the functions of charge management, discharge management, and power consumption management are achieved by the power management means. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The computer device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described in detail herein.
Further, the computer device 1 may also comprise a network interface, optionally comprising a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the computer device 1 and other computer devices.
The computer device 1 may optionally further comprise a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard), or a standard wired interface or wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also appropriately be referred to as a display screen or display unit, and is used for displaying information processed in the computer device 1 and for displaying a visual user interface.
It should be understood that the described embodiments are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
Fig. 5 shows only a computer device 1 with components 12-13. It will be understood by those skilled in the art that the structure shown in Fig. 5 does not limit the computer device 1; it may include fewer or more components than shown, combine certain components, or arrange the components differently.
With reference to Fig. 2, the memory 12 in the computer device 1 stores a plurality of instructions for implementing a multi-screen collaborative rendering method, which the processor 13 can execute to implement the following:
the central control host broadcasts instructions to each rendering host at preset time intervals;
when any rendering host among the at least one rendering host receives the instruction, that rendering host detects its own rendering state;
when the rendering state of the arbitrary rendering host is not rendering, the arbitrary rendering host obtains the frame sequence number carried in the instruction as the current frame sequence number;
the arbitrary rendering host obtains the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number;
the arbitrary rendering host compares the current frame sequence number with the previous frame sequence number;
when the current frame sequence number is the same as the previous frame sequence number, the arbitrary rendering host obtains the fragment data corresponding to the previous frame sequence number from the off-screen frame buffer and writes the fragment data into its own screen frame buffer for display on the corresponding display screen.
Specifically, for the processor 13's implementation of the above instructions, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 2, which is not repeated here.
All data involved in this case were obtained legally.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The invention is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. The units or means stated in the invention may also be implemented by one unit or means, either by software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. A multi-screen collaborative rendering method, applied to a multi-screen collaborative rendering system, the multi-screen collaborative rendering system comprising a central control host, at least one rendering host, and a display screen corresponding to each rendering host among the at least one rendering host; the multi-screen collaborative rendering method comprising the following steps:
the central control host broadcasts instructions to each rendering host at preset time intervals;
when any rendering host among the at least one rendering host receives the instruction, that rendering host detects its own rendering state;
when the rendering state of the arbitrary rendering host is not rendering, the arbitrary rendering host obtains the frame sequence number carried in the instruction as the current frame sequence number;
the arbitrary rendering host obtains the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number;
the arbitrary rendering host compares the current frame sequence number with the previous frame sequence number;
when the current frame sequence number is the same as the previous frame sequence number, the arbitrary rendering host obtains the fragment data corresponding to the previous frame sequence number from the off-screen frame buffer, and writes the fragment data into its own screen frame buffer for display on the corresponding display screen;
wherein the initial value of the frame sequence number in each rendering host is 1 greater than the initial value of the frame sequence number in the central control host; when the rendering state of the arbitrary rendering host is rendering, the arbitrary rendering host determines that its own rendering has timed out; the arbitrary rendering host then continues the rendering processing, increases the frame sequence number stored in its own off-screen frame buffer by 1, and abandons rendering the frame corresponding to the current frame sequence number.
2. A multi-screen collaborative rendering method according to claim 1, wherein the method further comprises:
The multi-screen collaborative rendering system acquires the geometric structure, the content complexity and the rendering requirement data of the current scene;
the multi-screen collaborative rendering system determines the number of fragments as the number of the display screens according to the geometric structure, the content complexity and the rendering requirement data;
the multi-screen collaborative rendering system configures a corresponding rendering host for each display screen;
the multi-screen collaborative rendering system acquires the computing capacity, network bandwidth and current load of each rendering host;
the multi-screen collaborative rendering system schedules each rendering host according to the computing capacity, network bandwidth and current load of each rendering host.
3. The multi-screen collaborative rendering method according to claim 1, wherein before the arbitrary rendering host obtains the fragment data corresponding to the previous frame sequence number from the off-screen frame buffer, the method further comprises:
the arbitrary rendering host performs rendering processing on the frame corresponding to the previous frame sequence number to obtain the fragment data;
the arbitrary rendering host stores the fragment data in the off-screen frame buffer.
4. A multi-screen collaborative rendering method according to claim 1, wherein the method further comprises:
when the current frame sequence number is different from the previous frame sequence number, or after the arbitrary rendering host writes the fragment data into its own screen frame buffer for display on the corresponding display screen, the arbitrary rendering host increases the frame sequence number stored in its own off-screen frame buffer by 1 and performs rendering processing on the frame corresponding to the current frame sequence number.
5. A multi-screen collaborative rendering method according to claim 1, wherein the method further comprises:
the arbitrary rendering host maintains the off-screen frame buffer and the screen frame buffer based on the WebGL2 frame buffer object.
6. A multi-screen collaborative rendering method according to claim 1, wherein the method further comprises:
when the instruction is the first instruction received by the arbitrary rendering host, the arbitrary rendering host stores initialized 3D context clearing color data in the screen frame buffer.
7. A multi-screen collaborative rendering apparatus, running in a multi-screen collaborative rendering system, the multi-screen collaborative rendering system comprising a central control host, at least one rendering host, and a display screen corresponding to each rendering host among the at least one rendering host; the multi-screen collaborative rendering apparatus comprising:
The central control host is used for broadcasting instructions to each rendering host at preset time intervals;
any rendering host among the at least one rendering host is configured to detect its own rendering state when the instruction is received;
the arbitrary rendering host is further configured to, when its rendering state is not rendering, obtain the frame sequence number carried in the instruction as the current frame sequence number;
the arbitrary rendering host is further configured to obtain the frame sequence number stored in its own off-screen frame buffer as the previous frame sequence number;
the arbitrary rendering host is further configured to compare the current frame sequence number with the previous frame sequence number;
the arbitrary rendering host is further configured to, when the current frame sequence number is the same as the previous frame sequence number, obtain the fragment data corresponding to the previous frame sequence number from the off-screen frame buffer and write the fragment data into its own screen frame buffer for display on the corresponding display screen;
wherein the initial value of the frame sequence number in each rendering host is 1 greater than the initial value of the frame sequence number in the central control host; when the rendering state of the arbitrary rendering host is rendering, the arbitrary rendering host determines that its own rendering has timed out; the arbitrary rendering host then continues the rendering processing, increases the frame sequence number stored in its own off-screen frame buffer by 1, and abandons rendering the frame corresponding to the current frame sequence number.
8. A computer device, the computer device comprising:
a memory storing at least one instruction; and
A processor executing instructions stored in the memory to implement the multi-screen collaborative rendering method of any one of claims 1-6.
9. A computer-readable storage medium, wherein at least one instruction is stored in the computer-readable storage medium, the at least one instruction being executed by a processor in a computer device to implement the multi-screen collaborative rendering method of any one of claims 1-6.
CN202311250622.5A 2023-09-26 2023-09-26 Multi-screen collaborative rendering method, device, equipment and medium Active CN117008796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311250622.5A CN117008796B (en) 2023-09-26 2023-09-26 Multi-screen collaborative rendering method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN117008796A CN117008796A (en) 2023-11-07
CN117008796B true CN117008796B (en) 2023-12-26

Family

ID=88567490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311250622.5A Active CN117008796B (en) 2023-09-26 2023-09-26 Multi-screen collaborative rendering method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117008796B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491317A (en) * 2013-09-06 2014-01-01 北京东方艾迪普科技发展有限公司 Three-dimensional figure and image multi-screen synchronous broadcasting method, device and system
CN105635805A (en) * 2015-12-18 2016-06-01 歌尔声学股份有限公司 Method and apparatus for optimizing motion image in virtual reality scene
CN109343774A (en) * 2018-10-29 2019-02-15 广东明星创意动画有限公司 A kind of rapid file pretreatment rendering system
CN110806847A (en) * 2019-10-30 2020-02-18 支付宝(杭州)信息技术有限公司 Distributed multi-screen display method, device, equipment and system
CN112347408A (en) * 2021-01-07 2021-02-09 北京小米移动软件有限公司 Rendering method, rendering device, electronic equipment and storage medium
CN113778362A (en) * 2021-09-16 2021-12-10 平安银行股份有限公司 Screen control method, device and equipment based on artificial intelligence and storage medium
CN114297746A (en) * 2021-12-06 2022-04-08 万翼科技有限公司 Rendering method and device of building information model, electronic equipment and storage medium
CN115543244A (en) * 2022-09-23 2022-12-30 瑞芯微电子股份有限公司 Multi-screen splicing method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116821040B (en) * 2023-08-30 2023-11-21 西安芯云半导体技术有限公司 Display acceleration method, device and medium based on GPU direct memory access



Similar Documents

Publication Publication Date Title
US7782327B2 (en) Multiple parallel processor computer graphics system
KR101563098B1 (en) Graphics processing unit with command processor
RU2677584C1 (en) Exploiting frame to frame coherency in architecture of image construction with primitives sorting at intermediate stage
EP2068279B1 (en) System and method for using a secondary processor in a graphics system
US9479570B2 (en) System and method for processing load balancing of graphic streams
CN111754614A (en) Video rendering method and device based on VR (virtual reality), electronic equipment and storage medium
CN102609971A (en) Quick rendering system using embedded GPU (Graphics Processing Unit) for realizing 3D-GIS (Three Dimensional-Geographic Information System)
US20200020067A1 (en) Concurrent binning and rendering
CN115034990A (en) Image defogging processing method, device, equipment and medium in real-time scene
TW202141418A (en) Methods and apparatus for handling occlusions in split rendering
CN112316433A (en) Game picture rendering method, device, server and storage medium
US20180285129A1 (en) Systems and methods for providing computer interface interaction in a virtualized environment
CN117008796B (en) Multi-screen collaborative rendering method, device, equipment and medium
JP2021190098A (en) Image preprocessing method, device, electronic apparatus, and storage medium
CN102087465B (en) Method for directly assigning and displaying true three-dimensional simulation regions
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
Nonaka et al. Hybrid hardware-accelerated image composition for sort-last parallel rendering on graphics clusters with commodity image compositor
CN115705668A (en) View drawing method and device and storage medium
US10678553B2 (en) Pro-active GPU hardware bootup
CN117032618B (en) Animation rotation method, equipment and medium based on multiple screens
CN117032617B (en) Multi-screen-based grid pickup method, device, equipment and medium
US20130106887A1 (en) Texture generation using a transformation matrix
EP4258218A1 (en) Rendering method, device, and system
US8587599B1 (en) Asset server for shared hardware graphic data
CN102368211A (en) Method for implementing hardware mouse by using acceleration of OSD (on-screen display)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant