WO2024060663A1 - Three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product - Google Patents

Three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product

Info

Publication number
WO2024060663A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional model
interaction
user
model scene
rendering
Prior art date
Application number
PCT/CN2023/096436
Other languages
English (en)
French (fr)
Inventor
郑秋宏
张志超
魏莱
丁鹏
沈云
Original Assignee
中国电信股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国电信股份有限公司 filed Critical 中国电信股份有限公司
Publication of WO2024060663A1 publication Critical patent/WO2024060663A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Definitions

  • the present invention relates to the field of computer technology, and specifically to three-dimensional model scene interaction methods, systems, equipment, devices, storage media and computer program products.
  • Digital Twin, also known as digital mapping or digital mirroring, refers to simulating a physical entity, process or system within an information platform, similar to a twin of the physical system inside the platform. With the help of a digital twin, the status of the physical entity can be understood on the information platform, and even predefined interface components inside the physical entity can be controlled, thereby helping organizations monitor operations, perform predictive maintenance and improve processes.
  • the essence of digital twins is information modeling, which aims to build, in the digital virtual world, digital models that are fully consistent with physical objects in the real world.
  • the information modeling involved in digital twins is no longer based on traditional underlying information-transmission formats; it is an overall abstract description of the external form, internal mechanism and operating relationships of the physical object, and its difficulty and application effect grow exponentially compared with traditional modeling.
  • a digital twin can have multiple incarnations, that is, digital models of different forms can be constructed for different uses and scenarios.
  • the purpose of the present invention is to provide a three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product, which overcome the difficulties of the prior art and can obtain high-quality 3D model scenes while reducing the hardware requirements for user equipment.
  • Embodiments of the present invention provide a three-dimensional model scene interaction method, which is provided on a server.
  • the three-dimensional model scene interaction method includes:
  • receiving a 3D model scene interaction request submitted by a user from a client; in response to the request, obtaining user interaction data and running the 3D model rendering engine based on the user interaction data to obtain the rendering stream of the 3D model scene; and pushing the rendering stream to the client for screen display.
  • receiving a user-submitted three-dimensional model scene interaction request from the client includes:
  • in the case that a data interaction channel based on WebRTC technology has been established with the client, the rendering stream is pushed to the client for screen display through the data interaction channel.
  • in response to the 3D model scene interaction request, user interaction data is obtained and the 3D model rendering engine is run based on the user interaction data to obtain a rendering stream of the 3D model scene; this includes: acquiring the user interaction data and parsing it to obtain the interaction data type, format and the access path of the 3D model rendering engine; matching the 3D model rendering engine accordingly and reconstructing the user interaction data based on the data format of the 3D model rendering engine;
  • the 3D model rendering engine is called to parse the reconstructed user interaction data. Based on the parsed user interaction data, the encapsulated extended function component is called to perform corresponding functions on the 3D model scene to obtain a rendering stream.
  • before receiving the 3D model scene interaction request submitted by the user from the client, the 3D model scene interaction method further includes: receiving a 3D model rendering engine access request submitted by the client; obtaining user configuration information in response to the access request, and establishing a data interaction channel based on the user configuration information, the rendering stream being pushed to the client through the data interaction channel.
  • before receiving the 3D model scene interaction request submitted by the user from the client, the 3D model scene interaction method further includes:
  • the access configuration information is obtained, and the initial rendering of the 3D model scene is performed based on the access configuration information, and the initial rendering stream of the 3D model scene is obtained and pushed to the client through the data interaction channel.
  • obtaining user configuration information in response to a three-dimensional model rendering engine access request, and establishing a data interaction channel based on the user configuration information, includes: in the case that access requests submitted by multiple clients are received, obtaining user configuration information in response to each access request and establishing a corresponding data interaction channel based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
  • Embodiments of the present invention also provide a three-dimensional model scene interaction system, which includes:
  • the client is set to receive user input, respond to the user's input, send a 3D model scene interaction request to the server, and when receiving the rendering stream from the server, parse the rendering stream to obtain and display the 3D model scene picture;
  • the server is configured to respond to the 3D model scene interaction request, obtain user interaction data, run the 3D model rendering engine based on the user interaction data, obtain the rendering stream of the 3D model scene, and push the rendering stream to the client.
  • Embodiments of the present invention also provide a three-dimensional model scene interaction device, which is provided on the server.
  • the three-dimensional model scene interaction device includes:
  • the receiving module is configured to receive the 3D model scene interaction request submitted by the user from the client;
  • the rendering module is configured to respond to the 3D model scene interaction request, obtain user interaction data, and run the 3D model rendering engine based on the user interaction data to obtain the rendering stream of the 3D model scene;
  • the push module is set to push the rendering stream to the client for screen display.
  • An embodiment of the present invention further provides an electronic device, including:
  • a processor;
  • a memory in which executable instructions of the processor are stored;
  • the processor is configured to execute the steps of the above three-dimensional model scene interaction method by executing executable instructions.
  • Embodiments of the present invention also provide a computer-readable storage medium for storing a program. When the program is executed, the steps of the above three-dimensional model scene interaction method are implemented.
  • a computer program product including a computer program that implements any one of the above three-dimensional model scene interaction methods when executed by a processor.
  • the three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product provided by the embodiments of the present disclosure obtain user interaction data in response to the three-dimensional model scene interaction request, run the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the 3D model scene, and push the rendering stream to the client for screen display.
  • the 3D model scene rendering process based on the digital twin application is placed on the server.
  • compared with the user's local device, the server can provide more powerful GPU computing capability and stronger 3D model scene rendering capability, significantly improving the image quality of the 3D model scene on the client side while effectively reducing the hardware requirements on the user-side device and its operating load.
  • Figure 1 is an architectural diagram of a three-dimensional model scene interaction system provided by an embodiment of the present disclosure
  • Figure 2 is a sequence diagram of a three-dimensional model scene interaction method provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of the principle of a three-dimensional model scene interaction system provided by an embodiment of the present disclosure
  • Figure 4 is a flow chart of a three-dimensional model scene interaction method provided by an embodiment of the present disclosure
  • Figure 5 is a schematic module structure diagram of a three-dimensional model scene interaction device provided by an embodiment of the present disclosure
  • Figure 6 is a schematic diagram of the operation of the electronic device of the present invention.
  • Figure 7 shows a schematic diagram of a storage medium according to an embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • high-quality rendering of complex 3D model scenes relies on the computing power of high-performance GPU hosts, but ordinary users' devices usually do not have local rendering conditions.
  • the embodiment of the present disclosure proposes a three-dimensional model scene construction method based on digital twin application.
  • the inventive idea is to deploy the three-dimensional model rendering engine on the server, interact with the server through the digital twin application, and use the three-dimensional model rendering engine to render the three-dimensional model scene.
  • the server can provide more powerful GPU computing power, have more powerful 3D model scene rendering capabilities, and significantly improve the image quality of the 3D model scene.
  • Figure 1 shows an architectural diagram of a three-dimensional model scene rendering system provided by an embodiment of the present disclosure.
  • the three-dimensional model scene rendering system includes a client 11 and a server 12.
  • the client 11 may be a digital twin application, a browser, or a user device running a digital twin application.
  • Figure 2 shows a timing diagram based on a three-dimensional model scene rendering method.
  • the three-dimensional model scene rendering method includes the following steps:
  • Step 210 The client 11 receives the user's input
  • Step 220 In response to the user's input, the client 11 sends a three-dimensional model scene interaction request to the server 12;
  • Step 230 The server 12 responds to the three-dimensional model scene interaction request, obtains user interaction data, and runs the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene;
  • Step 240 Server 12 pushes the rendering stream to client 11;
  • Step 250 The client 11 parses the rendering stream to obtain and display the picture of the three-dimensional model scene.
  • the three-dimensional model scene is rendered on the server 12 side, and the rendering stream can be sent to the client 11 in the form of an HTML file.
  • after the client 11 parses this HTML text, the desired DOM tree, that is, the picture of the three-dimensional model scene, can be built and displayed on the page directly, without executing any JavaScript script.
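  • A minimal, purely illustrative TypeScript sketch of this client-side step; the assumption that the rendering stream arrives as an HTML string and the "scene-view" element id are not specified in this publication:

```typescript
// Minimal client-side sketch (assumptions: the rendering stream arrives as an
// HTML string over an already-established channel, and the page contains a
// container element with id "scene-view"; both are illustrative only).
function displayRenderedScene(htmlText: string): void {
  // Parse the received HTML text into a DOM tree without executing any scripts.
  const doc = new DOMParser().parseFromString(htmlText, "text/html");

  // Replace the current scene view with the freshly parsed picture of the 3D model scene.
  const container = document.getElementById("scene-view");
  if (container) {
    container.replaceChildren(...Array.from(doc.body.childNodes));
  }
}
```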
  • the 3D model scene rendering process based on the digital twin application is placed on the server.
  • the server can provide more powerful GPU computing power and more powerful 3D model scene rendering capabilities, significantly improving the image quality of the 3D model scene on the client side. At the same time, it effectively reduces the hardware requirements for user equipment and reduces its operating pressure.
  • the server 12 may be a cloud server or a local physical server, such as a remote high-performance server.
  • a data interaction channel can be established between the client 11 and the server 12 based on WebRTC technology, and the server 12 pushes the rendering stream to the client 11 through the data interaction channel.
  • WebRTC (Web Real-Time Communications) is a real-time communication technology that allows web applications or sites to establish peer-to-peer connections between browsers without intermediaries, enabling the transmission of video streams and/or audio streams or other arbitrary data.
  • in this case, using WebRTC, immediacy of the rendering stream can be achieved between the client 11 and the server 12.
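  • A hedged client-side sketch of setting up such a WebRTC-based data interaction channel; the signaling endpoint, the channel label and the SDP exchange shown here are illustrative assumptions, not details given in this publication:

```typescript
// Client-side sketch of a WebRTC data interaction channel. The "/signaling/offer"
// endpoint and the "interaction" channel label are hypothetical.
async function connectToRenderingServer(videoEl: HTMLVideoElement): Promise<RTCDataChannel> {
  const pc = new RTCPeerConnection();

  // Channel used to send user interaction data to the server-side rendering engine.
  const interactionChannel = pc.createDataChannel("interaction");

  // The server pushes the rendered 3D model scene as a media track; attach it for display.
  pc.ontrack = (event) => {
    videoEl.srcObject = event.streams[0];
  };
  pc.addTransceiver("video", { direction: "recvonly" });

  // Exchange SDP with the server through a hypothetical signaling endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const resp = await fetch("/signaling/offer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pc.localDescription),
  });
  await pc.setRemoteDescription(await resp.json());

  return interactionChannel;
}
```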
  • the server 3 is also deployed with an interaction module 30.
  • the interaction module 30 includes an interaction capability encapsulation sub-module 31 and an interaction data isolation sub-module 32;
  • the interaction capability encapsulation sub-module 31 is configured to encapsulate one or more user interaction capabilities of the three-dimensional model scene in the digital twin application, obtaining extended function components and implementing them in the three-dimensional model rendering engine, for example encapsulating multiple user interaction capabilities one by one.
  • user interaction capabilities can include 3D positioning within the 3D model scene, 3D trajectory drawing, 3D space camera behavior control, 3D model disassembly and the like, which are not limited here.
  • the user interaction capability is encapsulated to obtain an extended functional component.
  • This extended functional component is relative to the basic functional component inside the 3D model rendering engine.
  • the 3D model rendering engine can be constructed from the basic functional components; on this basis, functional components can be extended beyond the 3D model rendering engine.
  • this embodiment obtains a universal extended function component that can be called by any three-dimensional model rendering engine or user, thereby improving the application scope of this embodiment.
  • the user interaction capability encapsulation method through the interaction capability encapsulation sub-module 31 includes the following steps:
  • defining the type and format of the user-facing interaction data and the interaction request path specification according to the requirements of the user interaction capabilities of the 3D model scene; matching the data format of the 3D model rendering engine, parsing and reconstructing the interaction data for the engine and sending it to the corresponding engine; and then extracting and parsing the reconstructed user interaction data through the 3D model rendering engine and encapsulating the extended functional components based on the parsed user interaction data.
  • the interaction request path includes: 3D rendering engine address + 3D model scene id/port. Therefore, the interaction request path can be the access path of the 3D rendering engine.
  • the interaction data includes the following fields:
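  • The actual field table is not reproduced in this text; the following TypeScript sketch is a purely hypothetical illustration of what such interaction data and request path might look like, and every field name is an assumption:

```typescript
// Hypothetical illustration only: the publication's field table is not available
// here, so each field name below is an assumption.
interface InteractionRequest {
  userId: string;                       // identity of the interacting user
  sceneId: string;                      // which 3D model scene on the rendering engine is targeted
  capability: string;                   // e.g. "positioning", "trajectory", "cameraControl", "disassembly"
  format: "json";                       // declared format of the payload
  payload: Record<string, unknown>;     // capability-specific parameters
}

// Interaction request path: 3D rendering engine address + 3D model scene id/port,
// which can also serve as the engine's access path.
function buildRequestPath(engineAddress: string, sceneId: string, port: number): string {
  return `${engineAddress}/${sceneId}/${port}`;
}
```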
  • by encapsulating the universal user interaction capabilities of digital twin applications, users can be supported in calling the 3D model rendering engine in a cross-platform manner, which greatly simplifies the construction of digital twin applications, improves their construction efficiency, and can provide the vertical industries of various types of digital twin applications with a set of flexible and convenient plug-and-play capability services, decoupling the interaction between users and the 3D model rendering engine, enhancing the convenience and flexibility of interaction, and facilitating the generalization of interaction capabilities.
  • WebRTC technology can realize real-time interactive operation between the user and the server-side 3D model scene, making full use of the GPU computing power of the cloud server or remote high-performance host to render the 3D model scene, so that the feedback of the client's remote interaction with the 3D model rendering engine is the same as that of direct local interaction, which effectively reduces the hardware requirements on the user-side device and its operating load.
  • the user calls the interaction module 3 to initiate an interaction request, and the interaction capability encapsulation sub-module 31 reconstructs the interaction data according to the defined interaction data type, format requirements and interaction request path specification, and transfers to the 3D model rendering engine an interaction request carrying the reconstructed interaction data.
  • after receiving the interaction request, the three-dimensional model rendering engine extracts and parses the interaction data reconstructed by the interaction capability encapsulation sub-module 31, and calls the encapsulated extended function components according to the format, type and configuration logic of the scene configuration data to implement the interaction function. After the processing is completed, the 3D model rendering engine pushes the rendering stream to complete the display update of the 3D model scene picture on the user side.
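  • A hedged server-side sketch of this dispatch flow; the component registry, the type names and the pushRenderingStream callback are illustrative assumptions rather than APIs defined in this publication:

```typescript
// Hedged server-side sketch: registry of encapsulated extended function components
// and dispatch of reconstructed interaction data. All names are assumptions.
interface SceneHandle { sceneId: string; }

interface ExtendedComponent {
  // Executes one encapsulated user interaction capability on the 3D model scene.
  execute(scene: SceneHandle, params: Record<string, unknown>): void;
}

const componentRegistry = new Map<string, ExtendedComponent>();

// Encapsulated capabilities are registered one by one as extended function components.
componentRegistry.set("positioning",   { execute: (_scene, _params) => { /* 3D positioning */ } });
componentRegistry.set("trajectory",    { execute: (_scene, _params) => { /* 3D trajectory drawing */ } });
componentRegistry.set("cameraControl", { execute: (_scene, _params) => { /* camera behavior control */ } });
componentRegistry.set("disassembly",   { execute: (_scene, _params) => { /* 3D model disassembly */ } });

function handleInteractionRequest(
  reconstructed: { sceneId: string; capability: string; payload: Record<string, unknown> },
  pushRenderingStream: (sceneId: string) => void,
): void {
  // Parse the reconstructed interaction data and look up the encapsulated component.
  const component = componentRegistry.get(reconstructed.capability);
  if (!component) {
    throw new Error(`No extended function component for "${reconstructed.capability}"`);
  }
  // Perform the corresponding function on the 3D model scene...
  component.execute({ sceneId: reconstructed.sceneId }, reconstructed.payload);
  // ...then push the updated rendering stream to complete the client-side display update.
  pushRenderingStream(reconstructed.sceneId);
}
```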
  • embodiments of the present disclosure provide an interface calling method to interactively operate the 3D model scene rendered by the 3D model rendering engine, which can easily support the implementation of rich interactive functions.
  • the interaction module 3 also includes an interaction data isolation sub-module 32, and the interaction data isolation sub-module 32 is configured to isolate the interaction data of different users for the same 3D model scene from one another, so that one user's interactive operation does not change the rendering stream obtained by another user.
  • before establishing a connection with the three-dimensional model rendering engine, the user performs user configuration and access configuration; the user configuration specifies the user's identity and permission type.
  • the access configuration specifies the access path of the 3D model rendering engine, the number of the 3D model scene (the same 3D model rendering engine may carry multiple 3D model scenes at the same time), the image quality parameters of the 3D model scene, etc.
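  • Illustrative TypeScript shapes for the user configuration and access configuration described above; the exact field names are assumptions made for illustration:

```typescript
// Illustrative configuration structures based on the fields described in the text;
// the concrete field names are assumptions.
interface UserConfig {
  userId: string;          // user identity
  permissionType: string;  // permission type, e.g. "viewer" or "operator"
}

interface AccessConfig {
  enginePath: string;      // access path of the 3D model rendering engine
  sceneNumber: number;     // scene number, since one engine may carry several scenes
  imageQuality: {          // image quality parameters of the 3D model scene
    width: number;
    height: number;
    maxBitrateKbps: number;
  };
}
```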
  • after receiving the access request, the 3D model rendering engine automatically checks the user configuration information and access configuration information. If the user configuration information is correct and the user is a newly accessed user, the interaction data isolation sub-module 32 establishes an exclusive data interaction channel for the user to realize isolation of the interactive rendering streams. The corresponding 3D model data is retrieved according to the access configuration information for 3D model scene rendering, and the rendering stream is pushed through the user's exclusive data interaction channel. At this point, the user side can successfully load the initial picture of the 3D model scene.
  • the user calls the interaction capability encapsulation sub-module 31 to initiate a three-dimensional model scene interaction request to complete the update of the user-side three-dimensional model scene screen.
  • by setting an exclusive data interaction channel for each user, the interaction data can be effectively isolated, avoiding mutual interference between different users' interactive operations on the same 3D model scene.
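  • A hedged sketch of this access and isolation flow, assuming simple channel and configuration types for illustration only:

```typescript
// Hedged sketch of per-user channel isolation and initial rendering on access.
// The DataChannel type and the render/open helpers are assumptions.
interface DataChannel { send(renderingStream: Uint8Array): void; }
interface AccessConfig { enginePath: string; sceneNumber: number; }

const exclusiveChannels = new Map<string, DataChannel>(); // keyed by user id

function handleEngineAccessRequest(
  userId: string,
  access: AccessConfig,
  openChannel: (userId: string) => DataChannel,
  renderInitialScene: (access: AccessConfig) => Uint8Array,
): void {
  // A newly accessed user with valid configuration gets an exclusive data
  // interaction channel, so rendering streams of different users stay isolated.
  if (!exclusiveChannels.has(userId)) {
    exclusiveChannels.set(userId, openChannel(userId));
  }

  // Retrieve the corresponding 3D model data per the access configuration,
  // perform the initial rendering, and push the initial rendering stream
  // only through this user's exclusive channel.
  const initialStream = renderInitialScene(access);
  exclusiveChannels.get(userId)!.send(initialStream);
}
```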
  • FIG. 4 is a sequence diagram of a specific three-dimensional model scene interaction method provided by an embodiment of the present disclosure.
  • the execution subject of this method is a server corresponding to the digital twin application.
  • the three-dimensional model scene interaction method includes the following steps:
  • Step 410 Receive the three-dimensional model scene interaction request submitted by the user from the client;
  • Step 420 Respond to the three-dimensional model scene interaction request, obtain user interaction data, and run the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene;
  • Step 430 Push the rendering stream to the client for screen display.
  • the 3D model scene rendering process based on the digital twin application is placed on the server.
  • compared with the user's local device, the server can provide more powerful GPU computing capability and stronger 3D model scene rendering capability, significantly improving the image quality of the 3D model scene on the client side while effectively reducing the hardware requirements on the user-side device and its operating load.
  • a data interaction channel can be established between the client and the server based on WebRTC technology, and the server pushes the rendering stream to the client through the data interaction channel.
  • a communication connection can be established between the server and the client through other communication methods, which is not limited here.
  • in response to the three-dimensional model scene interaction request, user interaction data is obtained and a three-dimensional model rendering engine is run based on the user interaction data to obtain a rendering stream of the three-dimensional model scene; this includes: acquiring the user interaction data and parsing it to obtain the interaction data type, format and access path of the 3D model rendering engine; matching the 3D model rendering engine accordingly and reconstructing the user interaction data based on the engine's data format;
  • the 3D model rendering engine is called to parse the reconstructed user interaction data. Based on the parsed user interaction data, the encapsulated extended function component is called to perform corresponding functions on the 3D model scene to obtain a rendering stream.
  • the server side defines in advance interactive module information such as user-oriented interactive data types and formats, and access paths to the 3D model rendering engine, so that users can call the interactive module to initiate 3D model scene interaction requests based on interaction requirements.
  • a specific three-dimensional model rendering engine is matched, and the user interaction data is reconstructed through the data format of the three-dimensional model rendering engine.
  • the reconstructed user interaction data can be recognized and parsed by the three-dimensional model rendering engine, so that the corresponding extended function component is called based on the parsed data.
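  • A hedged sketch of reconstructing generic user interaction data into the data format expected by the matched rendering engine; the format descriptor and field mapping below are assumptions for illustration:

```typescript
// Hedged sketch: rewrite generic interaction data into an engine-specific shape.
// EngineFormat and its fields are hypothetical.
interface EngineFormat {
  accessPath: string;                    // access path of the matched engine
  fieldMapping: Record<string, string>;  // generic field name -> engine field name
}

function reconstructForEngine(
  generic: Record<string, unknown>,
  format: EngineFormat,
): { targetPath: string; body: Record<string, unknown> } {
  const body: Record<string, unknown> = {};
  // Rename each generic field to the name the engine's parser expects.
  for (const [from, to] of Object.entries(format.fieldMapping)) {
    if (from in generic) {
      body[to] = generic[from];
    }
  }
  // The reconstructed request is addressed to the engine's access path, where it
  // can be recognized, parsed, and dispatched to an extended function component.
  return { targetPath: format.accessPath, body };
}
```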
  • the extended function component is obtained by encapsulating the user interaction capabilities in the digital twin application in advance.
  • with the help of the encapsulated extended function components, users can call the 3D model rendering engine in a cross-platform manner, decoupling the interaction between the user and the 3D model rendering engine, enhancing the convenience and flexibility of interaction, and facilitating the generalization of interaction capabilities.
  • before receiving the 3D model scene interaction request submitted by the user from the client, the 3D model scene interaction method further includes: receiving a 3D model rendering engine access request submitted by the client, obtaining user configuration information in response to the access request, and establishing a data interaction channel based on the user configuration information, through which the rendering stream is pushed to the client.
  • the data interaction channel is a dedicated data interaction channel between the client and the server, which can improve data security and interference prevention.
  • in the case that 3D model rendering engine access requests submitted by multiple clients are received, user configuration information is obtained in response to each access request, and a corresponding data interaction channel is established based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
  • the multiple data interaction channels are isolated from each other, and each user's interactive operation on the three-dimensional model scene will not change the rendering stream obtained by other users, and the influence between the respective interactive operations can be eliminated.
  • before receiving the 3D model scene interaction request submitted by the user from the client, the 3D model scene interaction method further includes:
  • the access configuration information is obtained, and the initial rendering of the 3D model scene is performed based on the access configuration information, and the initial rendering stream of the 3D model scene is obtained and pushed to the client through the data interaction channel.
  • the method shown in Figure 1 can be used to further render and update the 3D model scene.
  • the three-dimensional model scene interaction method shown in Figure 4 can be implemented based on the three-dimensional model scene interaction system shown in Figure 1 or Figure 3, or by other systems, which is not limited here.
  • FIG. 5 is a schematic module diagram of an embodiment of the three-dimensional model scene interaction device provided by the present disclosure. As shown in Figure 5, the three-dimensional model scene interaction device 500 includes but is not limited to the following modules:
  • the receiving module 510 is configured to receive a three-dimensional model scene interaction request submitted by the user from the client;
  • the rendering module 520 is configured to obtain user interaction data in response to a three-dimensional model scene interaction request, and run the three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
  • the push module 530 is configured to push the rendering stream to the client for screen display.
  • the receiving module 510 is specifically configured to: establish a communication connection with the client through WebRTC technology, and receive, through the communication connection, the 3D model scene interaction request submitted by the user from the client.
  • the rendering module 520 is specifically configured to:
  • the 3D model rendering engine is called to parse the reconstructed user interaction data. Based on the parsed user interaction data, the encapsulated extended function component is called to perform corresponding functions on the 3D model scene to obtain a rendering stream.
  • the receiving module 510 is specifically configured to:
  • User configuration information is obtained in response to the 3D model rendering engine access request, and a data interaction channel is established based on the user configuration information.
  • the 3D model scene interaction request and the rendering stream are both transmitted through the data interaction channel.
  • the rendering module 520 is specifically configured to:
  • the access configuration information is obtained in response to the 3D model rendering engine access request, the initial rendering of the 3D model scene is performed based on the access configuration information to obtain an initial rendering stream of the 3D model scene, and the initial rendering stream is pushed to the client through the data interaction channel.
  • the receiving module 510 is specifically configured to: in the case that 3D model rendering engine access requests submitted by multiple clients are received, obtain user configuration information in response to each access request and establish a corresponding data interaction channel based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
  • the 3D model scene rendering process based on the digital twin application is placed on the server.
  • the server can provide more powerful GPU computing power and has more powerful 3D model scene rendering capabilities, which significantly improves the image quality of the 3D model scene on the client side.
  • at the same time, it effectively reduces the hardware requirements on the user-side device and reduces its operating load.
  • An embodiment of the present invention also provides an electronic device, including a processor and a memory which stores executable instructions of the processor.
  • the processor is configured to execute the steps of the three-dimensional model scene interaction method by executing executable instructions.
  • the electronic device of the embodiment of the present disclosure places the 3D model scene rendering process based on the digital twin application on the server.
  • compared with the user's local device, the server can provide more powerful GPU computing capability and stronger 3D model scene rendering capability, significantly improving the image quality of 3D model scenes on the client side while effectively reducing the hardware requirements on the user-side device and its operating load.
  • FIG. 6 is a schematic diagram of the structure of an electronic device of the present invention.
  • the electronic device 600 according to this embodiment of the present invention is described below with reference to FIG. 6.
  • the electronic device 600 shown in FIG. 6 is only an example and should not bring any limitation to the functions and scope of use of the embodiment of the present invention.
  • the electronic device 600 is presented in the form of a general-purpose computing device.
  • the components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 connecting different platform components (including the storage unit 620 and the processing unit 610), a display unit 640, etc.
  • the storage unit stores program codes, which can be executed by the processing unit 610, so that the processing unit 610 executes the steps of various exemplary embodiments of the present invention described in the three-dimensional model scene interaction method section of this specification. For example, the processing unit 610 can execute the steps shown in FIG. 4.
  • the storage unit 620 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 621 and/or a cache storage unit 622, and may further include a read-only storage unit (ROM) 623.
  • Storage unit 620 may also include a program/utility 624 having a set of (at least one) program modules 625, such program modules 625 including, but not limited to: a processing system, one or more applications, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • Bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • Electronic device 600 may also communicate with one or more external devices 60 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with electronic device 600, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. This communication may occur through an input/output (I/O) interface 650.
  • the electronic device 600 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 660.
  • Network adapter 660 may communicate with other modules of electronic device 600 via bus 630.
  • other hardware and/or software modules may be used in conjunction with electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
  • the process described above with reference to the flowchart can be implemented as a computer program product.
  • the computer program product includes a computer program that, when executed by a processor, implements the above-mentioned three-dimensional model scene interaction method.
  • Embodiments of the present invention also provide a computer-readable storage medium, which is configured to store a program, the steps of the three-dimensional model scene interaction method being implemented when the program is executed.
  • various aspects of the present invention can also be implemented in the form of a program product, which includes program code.
  • when the program product is run on a terminal device, the program code is configured to cause the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the three-dimensional model scene interaction method section of this specification.
  • a program product 700 configured to implement the above method according to an embodiment of the present disclosure is described.
  • the program product configured to implement the above method according to an embodiment of the present invention may adopt a portable compact disk read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer.
  • the program product of the present invention is not limited thereto.
  • a readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus or device.
  • the Program Product may take the form of one or more readable media in any combination.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave carrying the readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a readable storage medium may also be any readable medium other than a readable storage medium that can transmit, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a readable storage medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
  • Program code for performing the processes of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • the purpose of the present invention is to provide a 3D model scene interaction method, system, equipment, device, storage medium and computer program product, by responding to a 3D model scene interaction request, obtaining user interaction data, and running a 3D model rendering engine based on the user interaction data, obtaining a rendering stream of the 3D model scene, and pushing the rendering stream to the client for screen display.
  • the 3D model scene rendering process based on the digital twin application is placed on the server.
  • the server can provide more powerful GPU computing power and has more powerful 3D model scene rendering capabilities, significantly improving the image quality of the 3D model scene on the client side. At the same time, it effectively reduces the hardware requirements for the user-side device and reduces its operating pressure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a three-dimensional model scene interaction method, system, equipment, device and storage medium. User interaction data is obtained in response to a three-dimensional model scene interaction request, a three-dimensional model rendering engine is run based on the user interaction data to obtain a rendering stream of the three-dimensional model scene, and the rendering stream is pushed to a client for screen display. In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on a server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side while effectively reducing the hardware requirements on the user-side device and its operating load.

Description

Three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product
Cross-reference to related applications
The present disclosure claims priority to Chinese patent application No. 202211167495.8, filed on September 23, 2022 and entitled "Three-dimensional model scene interaction method, system, equipment, device and storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of computer technology, and in particular to a three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product.
Background art
Digital twin, also known as digital mapping or digital mirroring, refers to simulating a physical entity, process or system within an information platform, similar to a twin of the physical system inside the information platform. With the help of a digital twin, the state of the physical entity can be understood on the information platform, and even predefined interface components inside the physical entity can be controlled, thereby helping organizations monitor operations, perform predictive maintenance and improve processes.
The essence of a digital twin is information modeling, which aims to build, in the digital virtual world, digital models that are fully consistent with physical objects in the real world. However, the information modeling involved in digital twins is no longer modeling based on traditional underlying information-transmission formats, but an overall abstract description of the external form, internal mechanism and operating relationships of the physical object; its difficulty and application effect grow exponentially compared with traditional modeling. This is mainly reflected in the fact that a digital twin can have multiple incarnations, that is, digital models of different forms can be constructed for different uses and scenarios.
Digital twin applications rely on three-dimensional model scenes with high visualization capability. How to obtain three-dimensional model scenes with high image quality is therefore a topic widely considered in the industry.
It should be noted that the information disclosed in the above background section is only used to enhance the understanding of the background of the present invention, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Summary of the invention
In view of the problems in the prior art, the purpose of the present invention is to provide a three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product, which overcome the difficulties of the prior art and can obtain three-dimensional model scenes with high image quality while reducing the hardware requirements on user equipment.
An embodiment of the present invention provides a three-dimensional model scene interaction method, which is provided on a server. The three-dimensional model scene interaction method includes:
receiving, from a client, a three-dimensional model scene interaction request submitted by a user;
in response to the three-dimensional model scene interaction request, obtaining user interaction data, and running a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
pushing the rendering stream to the client for screen display.
In some embodiments, receiving, from the client, the three-dimensional model scene interaction request submitted by the user includes:
in the case that a data interaction channel based on WebRTC technology is established with the client, pushing the rendering stream to the client for screen display through the data interaction channel.
In some embodiments, in response to the three-dimensional model scene interaction request, obtaining user interaction data and running the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene includes:
in response to the three-dimensional model scene interaction request, acquiring the user interaction data and parsing it to obtain the interaction data type, the format and the access path of the three-dimensional model rendering engine;
matching the three-dimensional model rendering engine according to the interaction data type, the format and the access path of the three-dimensional model rendering engine, and reconstructing the user interaction data based on the data format of the three-dimensional model rendering engine;
calling the three-dimensional model rendering engine to parse the reconstructed user interaction data, and, based on the parsed user interaction data, calling an encapsulated extended function component to perform a corresponding function on the three-dimensional model scene to obtain the rendering stream.
In some embodiments, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further includes:
receiving a three-dimensional model rendering engine access request submitted by the client;
obtaining user configuration information in response to the three-dimensional model rendering engine access request, and establishing a data interaction channel based on the user configuration information, wherein the rendering stream is pushed to the client through the data interaction channel.
In some embodiments, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further includes:
in response to the three-dimensional model rendering engine access request, obtaining access configuration information, performing initial rendering of the three-dimensional model scene based on the access configuration information to obtain an initial rendering stream of the three-dimensional model scene, and pushing it to the client through the data interaction channel.
In some embodiments, obtaining user configuration information in response to the three-dimensional model rendering engine access request and establishing the data interaction channel based on the user configuration information includes:
in the case that three-dimensional model rendering engine access requests submitted by multiple clients are received, obtaining user configuration information in response to each three-dimensional model rendering engine access request, and establishing a corresponding data interaction channel based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
An embodiment of the present invention also provides a three-dimensional model scene interaction system, which includes:
a client, configured to receive a user's input, send a three-dimensional model scene interaction request to a server in response to the user's input, and, in the case that a rendering stream is received from the server, parse the rendering stream to obtain and display a picture of the three-dimensional model scene;
the server, configured to obtain user interaction data in response to the three-dimensional model scene interaction request, run a three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene, and push the rendering stream to the client.
An embodiment of the present invention also provides a three-dimensional model scene interaction device, which is provided on a server. The three-dimensional model scene interaction device includes:
a receiving module, configured to receive, from a client, a three-dimensional model scene interaction request submitted by a user;
a rendering module, configured to obtain user interaction data in response to the three-dimensional model scene interaction request, and run a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
a push module, configured to push the rendering stream to the client for screen display.
An embodiment of the present invention also provides an electronic device, including:
a processor;
a memory in which executable instructions of the processor are stored;
wherein the processor is configured to execute the steps of the above three-dimensional model scene interaction method by executing the executable instructions.
An embodiment of the present invention also provides a computer-readable storage medium for storing a program, wherein the steps of the above three-dimensional model scene interaction method are implemented when the program is executed.
According to another aspect of the present disclosure, a computer program product is also provided, including a computer program that, when executed by a processor, implements any one of the above three-dimensional model scene interaction methods.
The three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product provided by the embodiments of the present disclosure obtain user interaction data in response to a three-dimensional model scene interaction request, run a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene, and push the rendering stream to the client for screen display. In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side while effectively reducing the hardware requirements on the user-side device and its operating load.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings.
Figure 1 is an architectural diagram of a three-dimensional model scene interaction system provided by an embodiment of the present disclosure;
Figure 2 is a sequence diagram of a three-dimensional model scene interaction method provided by an embodiment of the present disclosure;
Figure 3 is a schematic diagram of the principle of a three-dimensional model scene interaction system provided by an embodiment of the present disclosure;
Figure 4 is a flow chart of a three-dimensional model scene interaction method provided by an embodiment of the present disclosure;
Figure 5 is a schematic module structure diagram of a three-dimensional model scene interaction device provided by an embodiment of the present disclosure;
Figure 6 is a schematic diagram of the operation of the electronic device of the present invention;
Figure 7 is a schematic diagram of a storage medium according to an embodiment of the present disclosure.
Detailed description of the embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present invention will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art.
The drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities; these functional entities may be implemented in software, in one or more hardware forwarding modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In addition, the flows shown in the drawings are only exemplary illustrations and do not necessarily include all steps. For example, some steps may be decomposed, and some steps may be merged or partially merged, and the actual execution order may change according to the actual situation. The terms "first", "second" and similar words used in the detailed description do not denote any order, quantity or importance, but are only used to distinguish different components. It should be noted that the embodiments of the present invention and the features in different embodiments may be combined with each other without conflict.
In the related art, high-image-quality rendering of complex three-dimensional model scenes relies on the computing power of high-performance GPU hosts, but the devices of ordinary users usually do not have local rendering conditions.
An embodiment of the present disclosure proposes a three-dimensional model scene construction method based on a digital twin application. The inventive idea is to deploy the three-dimensional model rendering engine on a server, interact with the server through the digital twin application, and use the three-dimensional model rendering engine to render the three-dimensional model scene. Compared with solutions in the related art that deploy the three-dimensional model rendering engine on user equipment, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene.
Figure 1 shows an architectural diagram of a three-dimensional model scene rendering system provided by an embodiment of the present disclosure. As shown in Figure 1, the three-dimensional model scene rendering system includes a client 11 and a server 12, where the client 11 may be a digital twin application, a browser, or a user device running a digital twin application.
Based on the system shown in Figure 1, Figure 2 shows a sequence diagram of a three-dimensional model scene rendering method. As shown in Figure 2, the three-dimensional model scene rendering method includes the following steps:
Step 210: the client 11 receives the user's input;
Step 220: in response to the user's input, the client 11 sends a three-dimensional model scene interaction request to the server 12;
Step 230: the server 12 responds to the three-dimensional model scene interaction request, obtains user interaction data, and runs the three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
Step 240: the server 12 pushes the rendering stream to the client 11;
Step 250: the client 11 parses the rendering stream to obtain and display the picture of the three-dimensional model scene.
In this embodiment, the three-dimensional model scene is rendered on the server 12 side, and the rendering stream may be sent to the client 11 in the form of an HTML file. After the client 11 parses this HTML text, the desired DOM tree, that is, the picture of the three-dimensional model scene, can be built and displayed on the page directly, without executing any JavaScript script.
In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side, while effectively reducing the hardware requirements on the user-side device and its operating load.
In the embodiment of the present disclosure, the server 12 may be a cloud server or a local physical server, such as a remote high-performance server.
In the embodiment of the present disclosure, a data interaction channel may be established between the client 11 and the server 12 based on WebRTC technology, and the server 12 pushes the rendering stream to the client 11 through the data interaction channel.
WebRTC (Web Real-Time Communications) is a real-time communication technology that allows web applications or sites to establish peer-to-peer connections between browsers without intermediaries, enabling the transmission of video streams and/or audio streams or other arbitrary data.
In this case, using WebRTC, immediacy of the rendering stream can be achieved between the client 11 and the server 12.
In the embodiment of the present disclosure, as shown in Figure 3, the server 3 is also deployed with an interaction module 30, and the interaction module 30 includes an interaction capability encapsulation sub-module 31 and an interaction data isolation sub-module 32.
The interaction capability encapsulation sub-module 31 is configured to encapsulate one or more user interaction capabilities of the three-dimensional model scene in the digital twin application, obtaining extended function components and implementing them in the three-dimensional model rendering engine, for example encapsulating multiple user interaction capabilities one by one.
The user interaction capabilities may include three-dimensional positioning of the three-dimensional model scene, three-dimensional trajectory drawing, three-dimensional space camera behavior control, three-dimensional model disassembly and the like, which are not limited here.
An extended function component is obtained by encapsulating a user interaction capability. The extended function component is defined relative to the basic function components inside the three-dimensional model rendering engine: the three-dimensional model rendering engine can be constructed from basic function components, and on this basis function components can be extended beyond the three-dimensional model rendering engine.
Therefore, through encapsulation, this embodiment obtains universal extended function components that can be called by any three-dimensional model rendering engine or user, which broadens the application scope of this embodiment.
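The following hedged TypeScript sketch illustrates how a client might invoke one such encapsulated capability through the interaction module; the module API, the "trajectory" capability name and its parameters are assumptions made for illustration, not details given in this publication.

```typescript
// Hypothetical client-side usage of one encapsulated capability through the
// interaction module; the InteractionModule API and capability name are assumptions.
interface InteractionModule {
  invoke(capability: string, params: Record<string, unknown>): Promise<void>;
}

async function drawInspectionPath(interaction: InteractionModule): Promise<void> {
  // Ask the server-side engine, via the encapsulated "trajectory" component,
  // to draw a 3D trajectory in the scene; the updated picture arrives later
  // as a pushed rendering stream, not as a return value here.
  await interaction.invoke("trajectory", {
    points: [
      { x: 0, y: 0, z: 0 },
      { x: 5, y: 0, z: 2 },
      { x: 10, y: 3, z: 2 },
    ],
    color: "#ff6600",
  });
}
```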
In this embodiment, the user interaction capability encapsulation method performed by the interaction capability encapsulation sub-module 31 includes the following steps:
defining, according to the requirements of the user interaction capabilities of the three-dimensional model scene, interaction data such as the type and format of the user-facing interaction data and the interaction request path specification;
matching the data format of the three-dimensional model rendering engine, parsing and reconstructing the interaction data for the three-dimensional model rendering engine, and then sending it to the corresponding three-dimensional model rendering engine;
extracting and parsing the reconstructed user interaction data through the three-dimensional model rendering engine, and encapsulating the extended function components according to the parsed user interaction data.
The interaction request path includes: the address of the three-dimensional rendering engine + the three-dimensional model scene id/port; therefore, the interaction request path can be the access path of the three-dimensional rendering engine. In one example, the interaction data includes the following fields:
By encapsulating the universal user interaction capabilities of digital twin applications, users can be supported in calling the three-dimensional model rendering engine in a cross-platform manner, which greatly simplifies the construction of digital twin applications, improves their construction efficiency, and can provide the vertical industries of various types of digital twin applications with a set of flexible and convenient plug-and-play capability services, decoupling the interaction between users and the three-dimensional model rendering engine, enhancing the convenience and flexibility of interaction, and facilitating the generalization of interaction capabilities.
Moreover, using WebRTC technology, real-time interactive operation between the user and the server-side three-dimensional model scene can be achieved, making full use of the GPU computing power of the cloud server or remote high-performance host to render the three-dimensional model scene, so that the feedback of the client's remote interaction with the three-dimensional model rendering engine is the same as that of direct local interaction, effectively reducing the hardware requirements on the user-side device and its operating load.
In this case, the user calls the interaction module 3 to initiate an interaction request, and the interaction capability encapsulation sub-module 31 reconstructs the interaction data according to the defined interaction data type, format requirements and interaction request path specification, and transfers to the three-dimensional model rendering engine an interaction request carrying the reconstructed interaction data.
After receiving the interaction request, the three-dimensional model rendering engine extracts and parses the interaction data reconstructed by the interaction capability encapsulation sub-module 31, and calls the encapsulated extended function components according to the format, type and configuration logic of the scene configuration data to implement the interaction function. After the processing is completed, the three-dimensional model rendering engine pushes the rendering stream to complete the display update of the three-dimensional model scene picture on the user side.
Therefore, by encapsulating the user interaction capabilities, the embodiment of the present disclosure provides an interface-calling way to interactively operate the three-dimensional model scene rendered by the three-dimensional model rendering engine, which easily supports the implementation of rich interaction functions.
In the embodiment of the present disclosure, the interaction module 3 further includes an interaction data isolation sub-module 32, and the interaction data isolation sub-module 32 is configured to:
implement mutual isolation between the interaction data of different users for the same three-dimensional model scene, that is, user 1's interactive operation on the three-dimensional model scene will not change the rendering stream obtained by user 2, so that the influence between their respective operations can be eliminated.
Specifically, before establishing a connection with the three-dimensional model rendering engine, the user performs user configuration and access configuration. The user configuration specifies the user's identity and permission type; the access configuration specifies the access path of the three-dimensional model rendering engine, the number of the three-dimensional model scene (the same three-dimensional model rendering engine may carry multiple three-dimensional model scenes at the same time), the image quality parameters of the three-dimensional model scene, and the like.
After receiving the access request, the three-dimensional model rendering engine automatically checks the user configuration information and the access configuration information. If the user configuration information is correct and the user is a newly accessed user, the interaction data isolation sub-module 32 establishes an exclusive data interaction channel for the user to realize isolation of the interactive rendering streams. The corresponding three-dimensional model data is retrieved according to the access configuration information for three-dimensional model scene rendering, and the rendering stream is pushed through the user's exclusive data interaction channel. At this point, the user side can successfully load the initial picture of the three-dimensional model scene.
Afterwards, the user calls the interaction capability encapsulation sub-module 31 to initiate a three-dimensional model scene interaction request to complete the update of the three-dimensional model scene picture on the user side.
Therefore, by setting an exclusive data interaction channel for each user, the interaction data can be effectively isolated, and mutual interference between different users' interactive operations on the same three-dimensional model scene can be avoided.
Figure 4 is a sequence diagram of a specific three-dimensional model scene interaction method provided by an embodiment of the present disclosure. The execution subject of this method is the server corresponding to the digital twin application. As shown in Figure 4, the three-dimensional model scene interaction method includes the following steps:
Step 410: receiving, from a client, a three-dimensional model scene interaction request submitted by a user;
Step 420: in response to the three-dimensional model scene interaction request, obtaining user interaction data, and running a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
Step 430: pushing the rendering stream to the client for screen display.
In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side, while effectively reducing the hardware requirements on the user-side device and its operating load.
In the embodiment of the present disclosure, a data interaction channel may be established between the client and the server based on WebRTC technology, and the server pushes the rendering stream to the client through the data interaction channel.
This can improve the immediacy of interaction with the three-dimensional model scene.
In other embodiments of the present disclosure, a communication connection may also be established between the server and the client through other communication methods, which is not limited here.
In the embodiment of the present disclosure, in response to the three-dimensional model scene interaction request, obtaining user interaction data and running the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene includes:
in response to the three-dimensional model scene interaction request, acquiring the user interaction data and parsing it to obtain the interaction data type, format and the access path of the three-dimensional model rendering engine;
matching the three-dimensional model rendering engine according to the interaction data type, format and access path of the three-dimensional model rendering engine, and reconstructing the user interaction data based on the data format of the three-dimensional model rendering engine;
calling the three-dimensional model rendering engine to parse the reconstructed user interaction data, and, based on the parsed user interaction data, calling the encapsulated extended function components to perform the corresponding functions on the three-dimensional model scene to obtain the rendering stream.
In this embodiment, the server side defines in advance interaction module information such as the user-facing interaction data types and formats and the access path of the three-dimensional model rendering engine, so that the user can call the interaction module to initiate a three-dimensional model scene interaction request according to interaction requirements.
A specific three-dimensional model rendering engine is matched in response to the three-dimensional model scene interaction request, and the user interaction data is reconstructed according to the data format of that three-dimensional model rendering engine; the reconstructed user interaction data can be recognized and parsed by the three-dimensional model rendering engine, so that the corresponding extended function components are called according to the parsed data. The extended function components are obtained by encapsulating the user interaction capabilities in the digital twin application in advance; for the encapsulation method, reference may be made to the above content, which is not limited here.
With the help of the encapsulated extended function components, the user can call the three-dimensional model rendering engine in a cross-platform manner, which decouples the interaction between the user and the three-dimensional model rendering engine, enhances the convenience and flexibility of interaction, and facilitates the generalization of interaction capabilities.
In the embodiment of the present disclosure, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further includes:
receiving a three-dimensional model rendering engine access request submitted by the client;
obtaining user configuration information in response to the three-dimensional model rendering engine access request, and establishing a data interaction channel based on the user configuration information, wherein the rendering stream is pushed to the client through the data interaction channel.
In this embodiment, the data interaction channel is a data interaction channel dedicated to the client and the server, which can improve data security and interference resistance.
In the embodiment of the present disclosure, in the case that three-dimensional model rendering engine access requests submitted by multiple clients are received, user configuration information is obtained in response to each three-dimensional model rendering engine access request, and a corresponding data interaction channel is established based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
In this embodiment, the multiple data interaction channels are isolated from each other, and each user's interactive operation on the three-dimensional model scene will not change the rendering stream obtained by other users, so that the influence between the respective interactive operations can be eliminated.
In the embodiment of the present disclosure, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further includes:
in response to the three-dimensional model rendering engine access request, obtaining access configuration information, performing initial rendering of the three-dimensional model scene based on the access configuration information to obtain an initial rendering stream of the three-dimensional model scene, and pushing it to the client through the data interaction channel.
In this embodiment, in the case that the client displays the initialized three-dimensional model scene, the method shown in Figure 1 may be used to further render and update the three-dimensional model scene.
The three-dimensional model scene interaction method shown in Figure 4 of the embodiment of the present disclosure may be implemented based on the three-dimensional model scene interaction system shown in Figure 1 or Figure 3, or may be implemented by other systems, which is not limited here.
Figure 5 is a schematic module diagram of an embodiment of the three-dimensional model scene interaction device provided by the present disclosure. As shown in Figure 5, the three-dimensional model scene interaction device 500 includes, but is not limited to, the following modules:
a receiving module 510, configured to receive, from a client, a three-dimensional model scene interaction request submitted by a user;
a rendering module 520, configured to obtain user interaction data in response to the three-dimensional model scene interaction request, and run a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of the three-dimensional model scene;
a push module 530, configured to push the rendering stream to the client for screen display.
For the implementation principle of the above modules, reference may be made to the relevant introduction in the three-dimensional model scene interaction method shown in Figure 4, which will not be repeated here.
In an optional embodiment, the receiving module 510 is specifically configured to:
establish a communication connection with the client through WebRTC technology, and receive, from the client through the communication connection, the three-dimensional model scene interaction request submitted by the user.
In an optional embodiment, the rendering module 520 is specifically configured to:
in response to the three-dimensional model scene interaction request, acquire the user interaction data and parse it to obtain the interaction data type, format and the access path of the three-dimensional model rendering engine;
match the three-dimensional model rendering engine according to the interaction data type, format and access path of the three-dimensional model rendering engine, and reconstruct the user interaction data based on the data format of the three-dimensional model rendering engine;
call the three-dimensional model rendering engine to parse the reconstructed user interaction data, and, based on the parsed user interaction data, call the encapsulated extended function components to perform the corresponding functions on the three-dimensional model scene to obtain the rendering stream.
In an optional embodiment, the receiving module 510 is specifically configured to:
before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, receive a three-dimensional model rendering engine access request submitted by the client;
obtain user configuration information in response to the three-dimensional model rendering engine access request, and establish a data interaction channel based on the user configuration information, wherein both the three-dimensional model scene interaction request and the rendering stream are transmitted through the data interaction channel.
In an optional embodiment, the rendering module 520 is specifically configured to:
before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, obtain access configuration information in response to the three-dimensional model rendering engine access request, perform initial rendering of the three-dimensional model scene based on the access configuration information to obtain an initial rendering stream of the three-dimensional model scene, and push it to the client through the data interaction channel.
In an optional embodiment, the receiving module 510 is specifically configured to:
in the case that three-dimensional model rendering engine access requests submitted by multiple clients are received, obtain user configuration information in response to each three-dimensional model rendering engine access request, and establish a corresponding data interaction channel based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side, while effectively reducing the hardware requirements on the user-side device and its operating load.
An embodiment of the present invention also provides an electronic device, including a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to execute the steps of the three-dimensional model scene interaction method by executing the executable instructions.
As described above, the electronic device of the embodiment of the present disclosure places the rendering process of the three-dimensional model scene based on the digital twin application on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side, while effectively reducing the hardware requirements on the user-side device and its operating load.
Those skilled in the art can understand that various aspects of the present invention may be implemented as a system, a method or a program product. Therefore, various aspects of the present invention may be embodied in the following forms: a fully hardware implementation, a fully software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may be collectively referred to herein as a "circuit", "module" or "platform".
Figure 6 is a schematic structural diagram of the electronic device of the present invention. The electronic device 600 according to this embodiment of the present invention is described below with reference to Figure 6. The electronic device 600 shown in Figure 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Figure 6, the electronic device 600 is presented in the form of a general-purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 connecting different platform components (including the storage unit 620 and the processing unit 610), a display unit 640, and the like.
The storage unit stores program code, which can be executed by the processing unit 610, so that the processing unit 610 executes the steps according to the various exemplary embodiments of the present invention described in the three-dimensional model scene interaction method section of this specification. For example, the processing unit 610 can execute the steps shown in Figure 4.
The storage unit 620 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 621 and/or a cache memory 622, and may further include a read-only memory (ROM) 623.
The storage unit 620 may also include a program/utility 624 having a set of (at least one) program modules 625, such program modules 625 including, but not limited to: a processing system, one or more application programs, other program modules and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 600 may also communicate with one or more external devices 60 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may be carried out through an input/output (I/O) interface 650.
Moreover, the electronic device 600 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 through the bus 630. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer program product, which includes a computer program that, when executed by a processor, implements the above three-dimensional model scene interaction method.
An embodiment of the present invention also provides a computer-readable storage medium configured to store a program, wherein the steps of the three-dimensional model scene interaction method are implemented when the program is executed. In some possible implementations, various aspects of the present invention may also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code is configured to cause the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the above three-dimensional model scene interaction method section of this specification.
Referring to Figure 7, a program product 700 configured to implement the above method according to an embodiment of the present disclosure is described. The program product configured to implement the above method according to an embodiment of the present invention may adopt a portable compact disc read-only memory (CD-ROM) and include program code, and may run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device.
The program product may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The readable storage medium may also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the readable storage medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
The program code for carrying out the processes of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In summary, the purpose of the present invention is to provide a three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product: user interaction data is obtained in response to a three-dimensional model scene interaction request, a three-dimensional model rendering engine is run based on the user interaction data to obtain a rendering stream of the three-dimensional model scene, and the rendering stream is pushed to the client for screen display. In this embodiment, the rendering process of the three-dimensional model scene based on the digital twin application is placed on the server. Compared with the user's local device, the server can provide more powerful GPU computing capability and stronger three-dimensional model scene rendering capability, significantly improving the image quality of the three-dimensional model scene on the client side, while effectively reducing the hardware requirements on the user-side device and its operating load.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (11)

  1. A three-dimensional model scene interaction method, provided on a server, the three-dimensional model scene interaction method comprising:
    receiving, from a client, a three-dimensional model scene interaction request submitted by a user;
    in response to the three-dimensional model scene interaction request, obtaining user interaction data, and running a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of a three-dimensional model scene;
    pushing the rendering stream to the client for screen display.
  2. The three-dimensional model scene interaction method according to claim 1, wherein pushing the rendering stream to the client for screen display comprises:
    in the case that a data interaction channel based on WebRTC technology is established with the client, pushing the rendering stream to the client for screen display through the data interaction channel.
  3. The three-dimensional model scene interaction method according to claim 1, wherein, in response to the three-dimensional model scene interaction request, obtaining user interaction data and running the three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene comprises:
    in response to the three-dimensional model scene interaction request, acquiring the user interaction data and parsing it to obtain an interaction data type, a format and an access path of the three-dimensional model rendering engine;
    matching the three-dimensional model rendering engine according to the interaction data type, the format and the access path of the three-dimensional model rendering engine, and reconstructing the user interaction data based on a data format of the three-dimensional model rendering engine;
    calling the three-dimensional model rendering engine to parse the reconstructed user interaction data, and, based on the parsed user interaction data, calling an encapsulated extended function component to perform a corresponding function on the three-dimensional model scene to obtain the rendering stream.
  4. The three-dimensional model scene interaction method according to claim 1, wherein, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further comprises:
    receiving a three-dimensional model rendering engine access request submitted by the client;
    obtaining user configuration information in response to the three-dimensional model rendering engine access request, and establishing a data interaction channel based on the user configuration information, wherein the rendering stream is pushed to the client through the data interaction channel.
  5. The three-dimensional model scene interaction method according to claim 4, wherein, before receiving, from the client, the three-dimensional model scene interaction request submitted by the user, the three-dimensional model scene interaction method further comprises:
    in response to the three-dimensional model rendering engine access request, obtaining access configuration information, performing initial rendering of the three-dimensional model scene based on the access configuration information to obtain an initial rendering stream of the three-dimensional model scene, and pushing it to the client through the data interaction channel.
  6. The three-dimensional model scene interaction method according to claim 4, wherein obtaining user configuration information in response to the three-dimensional model rendering engine access request and establishing the data interaction channel based on the user configuration information comprises:
    in the case that three-dimensional model rendering engine access requests submitted by multiple clients are received, obtaining user configuration information in response to each three-dimensional model rendering engine access request, and establishing a corresponding data interaction channel based on each piece of user configuration information, so that multiple data interaction channels are established corresponding to the multiple clients.
  7. A three-dimensional model scene interaction system, comprising:
    a client, configured to receive a user's input, send a three-dimensional model scene interaction request to a server in response to the user's input, and, in the case that a rendering stream is received from the server, parse the rendering stream to obtain and display a picture of a three-dimensional model scene;
    the server, configured to obtain user interaction data in response to the three-dimensional model scene interaction request, run a three-dimensional model rendering engine based on the user interaction data to obtain the rendering stream of the three-dimensional model scene, and push the rendering stream to the client.
  8. A three-dimensional model scene interaction device, provided on a server, the three-dimensional model scene interaction device comprising:
    a receiving module, configured to receive, from a client, a three-dimensional model scene interaction request submitted by a user;
    a rendering module, configured to obtain user interaction data in response to the three-dimensional model scene interaction request, and run a three-dimensional model rendering engine based on the user interaction data to obtain a rendering stream of a three-dimensional model scene;
    a push module, configured to push the rendering stream to the client for screen display.
  9. An electronic device, comprising:
    a processor;
    a memory in which executable instructions of the processor are stored;
    wherein the processor is configured to execute the steps of the three-dimensional model scene interaction method according to any one of claims 1 to 6 by executing the executable instructions.
  10. A computer-readable storage medium for storing a program, characterized in that, when the program is executed by a processor, the steps of the three-dimensional model scene interaction method according to any one of claims 1 to 6 are implemented.
  11. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the three-dimensional model scene interaction method according to any one of claims 1 to 6.
PCT/CN2023/096436 2022-09-23 2023-05-26 Three-dimensional model scene interaction method, system, equipment, device, storage medium and computer program product WO2024060663A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211167495.8A CN115550687A (zh) 2022-09-23 2022-09-23 三维模型场景交互方法、系统、设备、装置及存储介质
CN202211167495.8 2022-09-23

Publications (1)

Publication Number Publication Date
WO2024060663A1 true WO2024060663A1 (zh) 2024-03-28

Family

ID=84728761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/096436 WO2024060663A1 (zh) 2022-09-23 2023-05-26 三维模型场景交互方法、系统、设备、装置、存储介质及计算机程序产品

Country Status (2)

Country Link
CN (1) CN115550687A (zh)
WO (1) WO2024060663A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115550687A (zh) * 2022-09-23 2022-12-30 中国电信股份有限公司 Three-dimensional model scene interaction method, system, equipment, device and storage medium
CN116492675B (zh) * 2023-04-13 2024-04-16 因子(深圳)艺术科技有限公司 3D model real-time rendering method, computer device and storage medium
CN117093793B (zh) * 2023-08-25 2024-05-28 江西格如灵科技股份有限公司 Method and system for two-dimensional display of a web 3D scene

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105263050A (zh) * 2015-11-04 2016-01-20 山东大学 Cloud-platform-based real-time rendering system for mobile terminals and method thereof
CN110753218A (zh) * 2019-08-21 2020-02-04 佳都新太科技股份有限公司 Digital twin system, method and computer device
CN113902866A (zh) * 2021-09-24 2022-01-07 广州市城市规划勘测设计研究院 Dual-engine-driven digital twin system
CN114708371A (zh) * 2022-04-12 2022-07-05 联通(广东)产业互联网有限公司 Three-dimensional scene model rendering and display method, apparatus, system and electronic device
WO2022183519A1 (zh) * 2021-03-05 2022-09-09 艾迪普科技股份有限公司 Real-time interactive three-dimensional graphics and image player
CN115550687A (zh) * 2022-09-23 2022-12-30 中国电信股份有限公司 Three-dimensional model scene interaction method, system, equipment, device and storage medium

Also Published As

Publication number Publication date
CN115550687A (zh) 2022-12-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23866960

Country of ref document: EP

Kind code of ref document: A1