CN109727315B - One-to-many cluster rendering method, device, equipment and storage medium - Google Patents

One-to-many cluster rendering method, device, equipment and storage medium

Info

Publication number
CN109727315B
CN109727315B (application CN201811642965.5A)
Authority
CN
China
Prior art keywords
display screen
determining
configuration file
display
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811642965.5A
Other languages
Chinese (zh)
Other versions
CN109727315A (en)
Inventor
周清会
梁志祥
于丽莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Manheng Digital Technology Co ltd
Original Assignee
Shanghai Manheng Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Manheng Digital Technology Co ltd filed Critical Shanghai Manheng Digital Technology Co ltd
Priority to CN201811642965.5A
Publication of CN109727315A
Application granted
Publication of CN109727315B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a one-to-many cluster rendering method, apparatus, device and storage medium. The method comprises: determining a current simulation environment according to a configuration file; determining the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file; and rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera. By determining the working state of each simulation camera in the device from the correspondence between the device and the display screens, and rendering onto the display screen corresponding to each simulation camera, one device drives the output display of a plurality of display screens, rendering efficiency is improved, hardware resources are fully utilized, and synchronized spliced three-dimensional display across multiple display screens is realized through synchronization of the pictures output by the devices.

Description

One-to-many cluster rendering method, device, equipment and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of cluster rendering, and in particular to a one-to-many cluster rendering method, apparatus, device and storage medium.
Background
The one-to-many cluster rendering technique uses a series of optical methods to create a time difference between the images received by the left and right eyes of a viewer, so that the two eyes receive different pictures and a stereoscopic effect is formed in the brain. By wearing a 3D glasses device, the user can see a stereoscopic image and feel personally on the scene. At present, the technique is widely applied in venues such as movie theatres and game halls.
However, in the current one-to-many cluster rendering technology, the viewing angle of the displayed image cannot follow real-time changes in the user's position, and the position and angle of the displayed picture cannot be adaptively adjusted to the user's position and angle; the picture is fixed and single, and cannot give the user an immersive experience. In addition, in current multi-screen LED display environments, the conventional cluster rendering technology cannot make one computer output pictures for a plurality of screens, cannot save hardware cost, and cannot realize combined multi-screen display.
Disclosure of Invention
Embodiments of the invention provide a one-to-many cluster rendering method, apparatus, device and storage medium, which realize picture output from one device to a plurality of display screens, improve the efficiency of picture output, make full and reasonable use of hardware resources, and realize spliced display across the plurality of display screens.
In a first aspect, an embodiment of the present invention provides a one-to-many cluster rendering method, including:
determining a current simulation environment according to a configuration file;
determining the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
and rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
In a second aspect, an embodiment of the present invention further provides a one-to-many cluster rendering apparatus, where the apparatus includes:
a simulation environment determining module, configured to determine a current simulation environment according to a configuration file;
a determining module, configured to determine the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
and a rendering module, configured to render the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the one-to-many cluster rendering methods of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the one-to-many cluster rendering methods of the embodiments of the present invention.
In the embodiments of the invention, a current simulation environment is determined according to a configuration file; the working state of each simulation camera in a device is determined according to the correspondence between the device and each display screen in the configuration file; and the picture within the viewing frustum whose apex is the simulation camera is rendered onto the display screen corresponding to each simulation camera, so that one device drives the output display of a plurality of display screens, rendering efficiency is improved, hardware resources are fully utilized, and synchronized three-dimensional and spliced display across a plurality of display screens is realized.
Drawings
FIG. 1 is a flow chart of a one-to-many cluster rendering method in accordance with a first embodiment of the present invention;
FIG. 2 is a schematic view of real display environment types in accordance with the first embodiment of the present invention;
FIG. 3 is a schematic diagram of a G-Cave environment in accordance with a first embodiment of the present invention;
FIG. 4 is a flow chart of a one-to-many cluster rendering method in a second embodiment of the invention;
FIG. 5 is a diagram of a screen index value according to a second embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a one-to-many cluster rendering apparatus in a third embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a device according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a one-to-many cluster rendering method according to a first embodiment of the present invention. The one-to-many cluster rendering method provided in this embodiment is applicable to the case where pictures are output to a plurality of display screens through one device, where the device may be a computer, a computer device, or the like. The method may be performed by a one-to-many cluster rendering apparatus, which may be implemented by software and/or hardware and may be integrated in the device. The method provided by the embodiment of the invention may be realized through SVR (Sai-VR, virtual reality bridging software). Referring to fig. 1, the method of the present embodiment specifically includes the following steps:
s110, determining the current simulation environment according to the configuration file.
For example, a display environment is set up for the user to view stereoscopic images; the user wears a 3D head-mounted display device in the display environment and can then view three-dimensional images. The display environment may be an arc screen environment, a straight screen environment, a three-fold screen environment, a G-Float environment, a G-Discover environment or a G-Cave environment; schematic diagrams of these environments are shown in FIG. 2. The rendering method applied in each environment is not limited: several computers may be selected for cluster rendering, one computer may perform one-to-many rendering, or one-to-many cluster rendering may be realized by several computers with each computer outputting pictures to a plurality of display screens. In the embodiment of the invention, picture output from one device to a plurality of display screens is realized by building the hardware environment and configuring the SVR software.
Specifically, the current simulation environment is determined according to the configuration file. The simulation environment is identical to the real display environment: the size of each display screen and the included angles between the display screens are set to be the same as those in the real display environment.
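As a minimal Python sketch of this idea, assuming a hypothetical configuration format (the field names, units, sizes and IP addresses below are illustrative assumptions, not the actual SVR configuration), each real display screen can be described once and the same geometry reused to build the simulation environment:

from dataclasses import dataclass

@dataclass
class ScreenConfig:
    name: str            # e.g. "left", "right", "front", "ground"
    width_m: float       # physical width of the display screen, in metres
    height_m: float      # physical height of the display screen, in metres
    angle_deg: float     # included angle relative to the front screen
    resolution: tuple    # (horizontal pixels, vertical pixels)
    device_ip: str       # IP address of the computer that drives this screen

# Hypothetical configuration mirroring the G-Cave example used in this embodiment.
EXAMPLE_CONFIG = [
    ScreenConfig("left",   3.0, 2.25,  90.0, (1024, 768), "192.168.1.10"),
    ScreenConfig("right",  3.0, 2.25, -90.0, (1024, 768), "192.168.1.10"),
    ScreenConfig("front",  3.0, 2.25,   0.0, (1024, 768), "192.168.1.11"),
    ScreenConfig("ground", 3.0, 2.25,   0.0, (1024, 768), "192.168.1.11"),
]

def build_simulation_environment(config):
    # One simulated screen per configured real screen, with the same size and angle,
    # so that the simulation environment matches the real display environment.
    return {screen.name: screen for screen in config}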
S120, determining the working state of each simulation camera in the device according to the correspondence between the device and each display screen in the configuration file.
For example, fig. 3 is a schematic diagram of a G-Cave environment in the first embodiment of the present invention. The display environment in the G-Cave environment of fig. 3 includes a left screen, a right screen, a front screen and a ground screen. Optionally, a first computer is set to correspond to the left screen and the right screen, and a second computer is set to correspond to the front screen and the ground screen. The working state of each simulation camera in the computer is then determined according to this correspondence between the computers and the display screens: in the project running on the first computer, the left-screen camera and the right-screen camera are enabled and the other cameras are disabled, so that the left screen and the right screen are rendered through the left-screen camera and the right-screen camera; in the project running on the second computer, the front-screen camera and the ground-screen camera are enabled and the other cameras are disabled, so that the front screen and the ground screen are rendered through the front-screen camera and the ground-screen camera.
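A rough sketch of this step, assuming a hypothetical screen-to-device mapping read from the configuration file (the mapping, IP addresses and helper names are illustrative assumptions): each running project compares its own IP address with the mapping and enables only the cameras of its own screens.

import socket

SCREEN_TO_DEVICE_IP = {        # hypothetical mapping taken from the configuration file
    "left":   "192.168.1.10",
    "right":  "192.168.1.10",
    "front":  "192.168.1.11",
    "ground": "192.168.1.11",
}

def local_ip():
    # One common way to discover the outgoing IP address of this machine.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))   # no packet is actually sent by a UDP connect
        return s.getsockname()[0]
    finally:
        s.close()

def camera_states(screen_to_ip, this_ip=None):
    # True means the simulated camera for that screen is enabled on this device.
    this_ip = this_ip or local_ip()
    return {name: ip == this_ip for name, ip in screen_to_ip.items()}

# On the first computer (192.168.1.10) this would yield
# {'left': True, 'right': True, 'front': False, 'ground': False}.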
S130, rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
For example, the left-screen camera faces the left screen, and the picture within the viewing frustum whose apex is the left-screen camera is rendered onto the left screen; the right-screen camera faces the right screen, and the picture within the viewing frustum whose apex is the right-screen camera is rendered onto the right screen; the other cameras work in the same way. Optionally, the method further comprises: determining the resolution of the output picture of a device according to the arrangement of the display screens corresponding to the device and the resolution of each display screen. For example, according to the display screen arrangement in the G-Cave environment, the left screen and the right screen are arranged horizontally and the front screen and the ground screen are arranged vertically. When the resolutions of the left screen and the right screen are each (1024×768) dpi, the resolution of the picture output by the first computer is [(1024+1024)×768] dpi, that is, (2048×768) dpi; when the resolutions of the front screen and the ground screen are each (1024×768) dpi, the resolution of the picture output by the second computer is [1024×(768+768)] dpi, that is, (1024×1536) dpi.
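The resolution rule described above can be summarised in a short sketch (an illustration of the arithmetic only; the function and parameter names are assumptions): for one device, horizontally arranged screens add their widths and vertically arranged screens add their heights.

def combined_resolution(screen_resolutions, arrangement):
    # screen_resolutions: list of (width, height) pixel pairs handled by one device.
    widths = [w for w, _ in screen_resolutions]
    heights = [h for _, h in screen_resolutions]
    if arrangement == "horizontal":
        return (sum(widths), max(heights))   # e.g. 1024 + 1024 = 2048 wide, 768 high
    if arrangement == "vertical":
        return (max(widths), sum(heights))   # e.g. 1024 wide, 768 + 768 = 1536 high
    raise ValueError("arrangement must be 'horizontal' or 'vertical'")

print(combined_resolution([(1024, 768), (1024, 768)], "horizontal"))  # (2048, 768)
print(combined_resolution([(1024, 768), (1024, 768)], "vertical"))    # (1024, 1536)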
According to the technical solution of this embodiment, the current simulation environment is determined according to the configuration file; the working state of each simulation camera in the device is determined according to the correspondence between the device and each display screen in the configuration file; and the picture within the viewing frustum whose apex is the simulation camera is rendered onto the display screen corresponding to each simulation camera. By determining the working state of each simulation camera in the device from the correspondence between the device and the display screens, and rendering onto the display screen corresponding to each simulation camera, output display from one device to a plurality of display screens is realized, rendering efficiency is improved, hardware resources are fully utilized, and synchronized three-dimensional display across multiple display screens is realized.
Example 2
Fig. 4 is a flowchart of a one-to-many cluster rendering method in a second embodiment of the present invention. The present embodiment is optimized on the basis of the above embodiment; details not described in this embodiment are given in the above embodiment. Referring to fig. 4, the one-to-many cluster rendering method provided in this embodiment includes:
s210, a configuration file is established, and a real display environment is built in the configuration file according to the length and the width of at least two display screens and the included angle between the display screens.
For example, a configuration file is established, and parameters such as the name, number, position, angle, size and resolution of at least two display screens are configured in the configuration file; the real display environment is then built in the configuration file according to the length and width of the at least two display screens, the included angles between the display screens, the positions of the display screens, and so on.
S220, determining the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device.
The correspondence between each display screen and its device is configured in the configuration file. Optionally, determining the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device includes: determining a row index and a column index of each display screen according to the arrangement of the display screens, the row index and the column index forming the index value of the display screen. The display screens in the G-Cave environment are divided into a left screen, a right screen, a front screen and a ground screen. The left screen and the right screen are arranged horizontally and their output is controlled by the first computer; the front screen and the ground screen are arranged vertically and their output is controlled by the second computer. The index value of each display screen is expressed as (x, y), where x is the row index of the current display screen in the arrangement and y is its column index. Thus, for the horizontal arrangement, the index value of the left screen is (0, 0) and that of the right screen is (0, 1); for the vertical arrangement, the index value of the front screen is (0, 0) and that of the ground screen is (1, 0). FIG. 5 is a schematic diagram of the screen index values in the second embodiment of the invention. The display screen of each index value designates its corresponding computer, and the correspondence is established through the IP address of the computer, thereby determining the correspondence between each display screen and the computers.
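The index assignment can be illustrated with a short sketch (the layout literals, IP addresses and helper name are assumptions made for illustration): screens are listed row by row in their physical arrangement, each receives a (row, column) index value, and the indexed screens are then associated with the IP address of the computer that projects them.

def index_screens(rows):
    # rows: list of rows, each row a list of screen names in left-to-right order.
    # Returns {screen_name: (row_index, column_index)}.
    return {name: (r, c) for r, row in enumerate(rows) for c, name in enumerate(row)}

# First computer: left and right screens arranged horizontally (a single row).
print(index_screens([["left", "right"]]))       # {'left': (0, 0), 'right': (0, 1)}
# Second computer: front and ground screens arranged vertically (a single column).
print(index_screens([["front"], ["ground"]]))   # {'front': (0, 0), 'ground': (1, 0)}

SCREEN_OWNER = {"left": "192.168.1.10", "right": "192.168.1.10",
                "front": "192.168.1.11", "ground": "192.168.1.11"}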
S230, reading the configuration file, and determining the simulation environment which is the same as the real display environment built in the configuration file according to the length and the width of at least two display screens and the included angle between the display screens.
The simulation environment is built according to the configuration file and is identical to the real display environment that was built from the length and width of the display screens and the included angles between the display screens recorded in the configuration file.
S240, determining the working state of each simulation camera in the device according to the correspondence between the device and each display screen in the configuration file.
S250, determining one of the devices as the master device, and synchronously sending the position information and angle information of the first person to the other devices in real time through the master device.
When cluster rendering is performed, one of the plurality of rendering ends is taken as the master rendering end. The master rendering end synchronously sends the position and angle information of the first person in the display environment to the other rendering ends through the UDP protocol. This ensures that, at the same moment, the position and angle of the first person in the project program of each device are the same in the simulation environment, so that the cameras shoot from the same position in the simulation environment and the pictures output by the rendering ends can be spliced and displayed stereoscopically.
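As a rough sketch of this synchronization (the port number, packet layout and addresses are assumptions; the embodiment only specifies that the UDP protocol is used), the master rendering end can pack the first-person position and angles into a small datagram each frame, and the other rendering ends apply the latest values they receive:

import socket
import struct

PORT = 45000                       # hypothetical synchronization port
POSE = struct.Struct("<6f")        # x, y, z, pitch, yaw, roll as little-endian floats

def send_pose(sock, target_ips, position, rotation):
    payload = POSE.pack(*position, *rotation)
    for ip in target_ips:
        sock.sendto(payload, (ip, PORT))

def receive_pose(sock):
    data, _ = sock.recvfrom(POSE.size)
    values = POSE.unpack(data)
    return values[:3], values[3:]  # (position, rotation)

# Master rendering end, once per frame:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_pose(sock, ["192.168.1.11"], (0.0, 1.7, 0.0), (0.0, 90.0, 0.0))
# Other rendering ends:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", PORT))
#   position, rotation = receive_pose(sock)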
S260, rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
According to the technical solution of this embodiment, a configuration file is established, and the real display environment is built in the configuration file according to the length and width of at least two display screens and the included angles between the display screens; the display screen onto which each device projects is determined according to the correspondence between the index value of each display screen and the IP address of the device; the configuration file is read, and a simulation environment identical to the real display environment built in the configuration file is determined according to the length and width of the at least two display screens and the included angles between the display screens; the working state of each simulation camera in the device is determined according to the correspondence between the device and each display screen in the configuration file; and the picture within the viewing frustum whose apex is the simulation camera is rendered onto the display screen corresponding to each simulation camera. Thus one device outputs and displays pictures for a plurality of display screens, rendering efficiency is improved, hardware resources are fully utilized, and synchronized three-dimensional display across a plurality of display screens is realized.
Example 3
Fig. 6 is a schematic structural diagram of a one-to-many cluster rendering apparatus according to a third embodiment of the present invention. The apparatus is suitable for the case where a plurality of display screens are cluster-rendered through one control end; it may be implemented by software and/or hardware, and may be integrated in a device. Referring to fig. 6, the apparatus specifically includes:
a simulation environment determining module 310, configured to determine a current simulation environment according to a configuration file;
a determining module 320, configured to determine the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
and a rendering module 330, configured to render the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
Optionally, the apparatus further comprises:
a configuration file building module, configured to build the real display environment in the configuration file according to the length and width of at least two display screens and the included angles between the display screens;
and a correspondence module, configured to determine the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device.
Optionally, the correspondence module is specifically configured to:
determine a row index and a column index of each display screen according to the arrangement of the display screens, the row index and the column index forming the index value of each display screen.
Optionally, the apparatus further comprises: a resolution determining module, configured to determine the resolution of the output picture of the device according to the arrangement of the display screens corresponding to the device and the resolution of each display screen.
Optionally, the simulation camera includes a left eye camera and a right eye camera.
Optionally, the simulation environment determining module 310 is specifically configured to:
and reading the configuration file, and determining the simulation environment which is the same as the real display environment built in the configuration file according to the length and the width of at least two display screens and the included angle between the display screens.
According to the technical solution of this embodiment, the simulation environment determining module determines the current simulation environment according to the configuration file; the determining module determines the working state of each simulation camera in the device according to the correspondence between the device and each display screen in the configuration file; and the rendering module renders the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera. By determining the working state of each simulation camera in the device from the correspondence between the device and the display screens, and rendering onto the display screen corresponding to each simulation camera, output display from one device to a plurality of display screens is realized, rendering efficiency is improved, hardware resources are fully utilized, and synchronized three-dimensional display across multiple display screens is realized.
Example 4
Fig. 7 is a schematic structural view of an apparatus according to a fourth embodiment of the present invention. Fig. 7 shows a block diagram of an exemplary device 412 suitable for use in implementing embodiments of the invention. The device 412 shown in fig. 7 is only an example and should not be construed as limiting the functionality and scope of use of embodiments of the invention.
As shown in fig. 7, device 412 takes the form of a general purpose computing device. Components of device 412 may include, but are not limited to: one or more processors or processing units 416, a system memory 428, and a bus 418 that connects the various system components (including the system memory 428 and the processor 416).
Bus 418 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Device 412 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. Device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard disk drive"). Although not shown in fig. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 418 via one or more data medium interfaces. Memory 428 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored in, for example, memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 442 generally perform the functions and/or methodologies in the described embodiments of the invention.
The device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), one or more devices that enable a user to interact with the device 412, and/or any devices (e.g., network card, modem, etc.) that enable the device 412 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 422. Also, device 412 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 420. As shown, network adapter 420 communicates with other modules of device 412 over bus 418. It should be appreciated that although not shown in fig. 7, other hardware and/or software modules may be used in connection with device 412, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 416 performs various functional applications and data processing by executing programs stored in the system memory 428, for example to implement the one-to-many cluster rendering method provided by the embodiment of the present invention, which includes:
determining a current simulation environment according to a configuration file;
determining the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
and rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
Example 5
The fifth embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a one-to-many cluster rendering method:
determining a current simulation environment according to a configuration file;
determining the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
and rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or device. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made by those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to these embodiments and may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A one-to-many cluster rendering method, the method comprising:
determining a current simulation environment according to a configuration file;
determining the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
determining one of the devices as a master device, and synchronously sending position information and angle information of the first person to the other devices in real time through the master device;
and rendering the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera;
wherein before determining the current simulation environment according to the configuration file, the method comprises:
establishing the configuration file, and building a real display environment in the configuration file according to the length and width of at least two display screens and the included angles between the display screens;
and determining the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device.
2. The method of claim 1, wherein determining the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device comprises:
determining a row index and a column index of each display screen according to the arrangement of the display screens, wherein the row index and the column index form the index value of each display screen.
3. The method of claim 1, further comprising:
determining the resolution of the output picture of the device according to the arrangement of the display screens corresponding to the device and the resolution of each display screen.
4. The method of claim 1, wherein the simulated camera comprises a left eye camera and a right eye camera.
5. The method of claim 1, wherein determining the current simulation environment according to the configuration file comprises:
reading the configuration file, and determining a simulation environment identical to the real display environment built in the configuration file according to the length and width of the at least two display screens and the included angles between the display screens.
6. A one-to-many cluster rendering apparatus, comprising:
a simulation environment determining module, configured to determine a current simulation environment according to a configuration file;
a determining module, configured to determine the working state of each simulation camera in a device according to the correspondence between the device and each display screen in the configuration file;
the determining module being further configured to determine one of the devices as a master device and to synchronously send position information and angle information of the first person to the other devices in real time through the master device;
and a rendering module, configured to render the picture within the viewing frustum whose apex is the simulation camera onto the display screen corresponding to each simulation camera;
the apparatus further comprising:
a configuration file establishing module, configured to build a real display environment in the configuration file according to the length and width of at least two display screens and the included angles between the display screens, before the simulation environment determining module determines the current simulation environment according to the configuration file;
and a correspondence module, configured to determine the display screen onto which each device projects according to the correspondence between the index value of each display screen and the IP address of the device.
7. An apparatus, the apparatus comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the one-to-many cluster rendering method of any of claims 1-5.
8. A computer readable storage medium, having stored thereon a computer program, which when executed by a processor implements a one-to-many cluster rendering method as claimed in any of claims 1-5.
CN201811642965.5A 2018-12-29 2018-12-29 One-to-many cluster rendering method, device, equipment and storage medium Active CN109727315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811642965.5A CN109727315B (en) 2018-12-29 2018-12-29 One-to-many cluster rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811642965.5A CN109727315B (en) 2018-12-29 2018-12-29 One-to-many cluster rendering method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109727315A (en) 2019-05-07
CN109727315B (en) 2023-08-22

Family

ID=66299379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811642965.5A Active CN109727315B (en) 2018-12-29 2018-12-29 One-to-many cluster rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109727315B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256317B (en) * 2020-10-21 2022-07-29 上海曼恒数字技术股份有限公司 Rapid construction method, medium and equipment of virtual reality immersion type large-screen tracking system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530495A (en) * 2012-06-29 2014-01-22 迪士尼企业公司 Augmented reality simulation continuum
CN106600672A (en) * 2016-11-29 2017-04-26 上海金陵电子网络股份有限公司 Network-based distributed synchronous rendering system and method
CN107728986A (en) * 2017-11-07 2018-02-23 北京小鸟看看科技有限公司 The display methods and display device of a kind of double-display screen
CN107783306A (en) * 2017-11-28 2018-03-09 河南新汉普影视技术有限公司 A kind of bore hole 3D immersions exchange method and system based on simulated teaching Training Room

Also Published As

Publication number Publication date
CN109727315A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
US11272165B2 (en) Image processing method and device
US11100664B2 (en) Depth-aware photo editing
CN107251567B (en) Method and apparatus for generating annotations of a video stream
KR102463304B1 (en) Video processing method and device, electronic device, computer-readable storage medium and computer program
US11533468B2 (en) System and method for generating a mixed reality experience
US11589027B2 (en) Methods, systems, and media for generating and rendering immersive video content
CN111654746A (en) Video frame insertion method and device, electronic equipment and storage medium
US11044398B2 (en) Panoramic light field capture, processing, and display
CN114175630A (en) Methods, systems, and media for rendering immersive video content using a point of gaze grid
KR20190018914A (en) Server, display apparatus and control method thereof
CN111095348A (en) Transparent display based on camera
CN109727315B (en) One-to-many cluster rendering method, device, equipment and storage medium
CN107580228B (en) Monitoring video processing method, device and equipment
CN112565883A (en) Video rendering processing system and computer equipment for virtual reality scene
US10482671B2 (en) System and method of providing a virtual environment
CN109859328B (en) Scene switching method, device, equipment and medium
WO2022011817A1 (en) Three-dimensional sphere-oriented visualization system
TWM630947U (en) Stereoscopic image playback apparatus
CN108920598B (en) Panorama browsing method and device, terminal equipment, server and storage medium
JP7172036B2 (en) SYSTEM, METHOD, AND PROGRAM FOR INTERVIEWING 3DCG SPACE VIEWING CONDITIONS
US20210297649A1 (en) Image data output device, content creation device, content reproduction device, image data output method, content creation method, and content reproduction method
CN110430417B (en) Multi-view stereoscopic image generation method and device, computer equipment and storage medium
US20240078743A1 (en) Stereo Depth Markers
WO2022033314A1 (en) 3d display method, 3d display apparatus and 3d display device
CN118068952A (en) Barrage moving method and device and virtual reality equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant