CN114356096A - XR (extended reality) technology-based new-dimension space construction method, system and platform - Google Patents


Info

Publication number
CN114356096A
Authority
CN
China
Prior art keywords: scene, user, instance, new, scenes
Prior art date
Legal status: Granted
Application number
CN202210022608.9A
Other languages: Chinese (zh)
Other versions: CN114356096B (en)
Inventor
蔡铁峰 (Cai Tiefeng)
Current Assignee: Shenzhen Polytechnic
Original Assignee: Shenzhen Polytechnic
Priority date
Filing date
Publication date
Application filed by Shenzhen Polytechnic
Priority to CN202210022608.9A
Publication of CN114356096A
Application granted
Publication of CN114356096B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of virtual reality and provides a new dimension space construction method, system and platform based on XR (extended reality) technology, wherein the method comprises the following steps: step S1: pre-constructing the root scene configuration of a new dimension space; step S2: initializing a login user; step S3: dynamically allocating scene instances to users and managing communication connections between the scene instances; step S4: updating the communication connection between the user and the scene instances; step S5: updating the states of the instance of the user's current experience scene and the instances of its descendant scenes. The method solves the technical problems that, in the prior art, virtual reality technology cannot realize high-density deployment of a large number of XR application scenes with the scenes overlapped in space, so that a user cannot intuitively and stereoscopically know the real-time operation conditions of all XR application scenes in an area, and the coverage range is limited.

Description

XR (extended reality) technology-based new-dimension space construction method, system and platform
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a new dimension space construction method, system and platform based on an XR technology.
Background
At present, the rapid development of networks has made more and more people interested in the virtual world, but current virtual reality technology cannot deploy a large number of XR application scenes at high density with the scenes overlapped in space, so a user cannot intuitively and stereoscopically know the real-time operation conditions of all XR application scenes in an area, and the coverage range is limited.
Disclosure of Invention
The invention aims to provide a new dimension space construction method, system and platform based on XR technology, so as to solve the technical problems that, in the prior art, virtual reality technology cannot realize high-density deployment of a large number of XR application scenes with the scenes overlapped in space, so that a user cannot intuitively and stereoscopically know the real-time running conditions of all XR application scenes in an area, and the coverage range is limited.
The invention is realized in such a way that a new dimension space construction method based on XR technology comprises the following steps:
step S1: pre-constructing root scene configuration of a new dimension space;
step S2: initializing a login user;
step S3: dynamically allocating scene instances to users and managing communication connections between the scene instances;
step S4: updating a communication connection between the user and the scene instance;
step S5: updating the state of the instance of the current experience scene of the user and the instance of the descendant scene of the current experience scene of the user;
step S6: calculating the pose values of the user under the coordinate systems of the scene instances;
step S7: respectively rendering pictures by the current experience scene instance and the descendant scene instance of the user;
step S8: cutting and splicing the rendered pictures of the scene instances to generate a new dimension space experience picture of a user;
step S9: the current experience scene example of the user sends a new dimension space experience picture to the user terminal;
step S10: destroy the scene instance.
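The step S1 to S10 flow above can be sketched as a control loop. This is an illustrative sketch only; the class and function names are assumptions for exposition, not part of the patent.

```python
# Toy sketch of the S1-S10 control flow; all names are illustrative, not the patent's.

class NewDimensionSpace:
    """Stand-in that records which step runs, in order."""
    def __init__(self):
        self.log = []

    def step(self, name):
        self.log.append(name)

def run_once(space, users):
    """Run one pass of the S1-S10 pipeline and return the ordered step log."""
    space.step("S1: load root scene configuration")
    for u in users:
        space.step(f"S2: initialize login for {u}")
    for u in users:
        space.step(f"S3: allocate scene instances for {u}")
        space.step(f"S4: update user-instance connection for {u}")
        space.step(f"S5: update instance states for {u}")
        space.step(f"S6: compute poses in each instance frame for {u}")
        space.step(f"S7: render scene instances for {u}")
        space.step(f"S8: crop and stitch experience picture for {u}")
        space.step(f"S9: send picture to terminal of {u}")
    space.step("S10: destroy unused scene instances")
    return space.log
```

With a single user this yields exactly ten steps, S1 first and S10 last, mirroring the enumeration above.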
The further technical scheme of the invention is as follows: the specific step of step S2 is that after any user applies to the new dimension space management control center for login, the management control center initializes the pose of the user in the new dimension space, counts the scenes in the user's visual range, allocates scene instances to the user, establishes communication connections between the scene instances according to their parent-child relationships, and establishes a communication connection between the user side and the root scene instance of the new dimension space.
The further technical scheme of the invention is as follows: the specific step of step S3 is that, due to the movement of the user and changes in the deployment of the new dimension space scenes, the scenes in the user's visual range may increase or decrease, so the new dimension space control management center needs to dynamically manage computing resources in real time, dynamically allocate scene instances to the user, and dynamically manage the communication connections between scene instances.
The further technical scheme of the invention is as follows: the step S2 further includes the steps of:
step S21: initializing the pose of a user in a new dimension space;
step S22: counting a scene set in a user visual range;
step S23: allocating scene instances to users;
step S24: establishing communication connection between scene instances according to the parent-child relationship;
step S25: and establishing communication connection between the user side and the new dimension space root scene instance.
The further technical scheme of the invention is as follows: the step S3 further includes the steps of:
step S31: updating a scene set in a visual range of a user;
step S32: distributing the new scene generation instance to a user;
step S33: establishing connection between the new scene instance and the parent scene instance thereof;
step S34: destroying the communication connection of the disappearing scene;
step S35: destroy the disappearing scene instance.
The further technical scheme of the invention is as follows: the specific step of step S21 is that the user side of any user sends a login application to the new dimension space management control center, and the management control center sets the pose of the user in the new dimension space through the positioning information transmitted from the user side. The specific step of step S22 is that, as in real life, the user has a certain visual range in the new dimension space; for a single user, the system only needs to render the scenes within that visual range, and users at different positions have different scenes, and different numbers of scenes, within their visual ranges. The specific step of step S23 is that if a scene is within the visual range of multiple users at the same time, the scene needs to be displayed to those users simultaneously, so scene pictures under each user's viewing angle need to be generated separately; a single instance of the scene often cannot meet such a requirement, so for the same scene the system allocates an instance of the scene to each user separately. The new dimension space management control center sends an instruction to an XR application server with idle computing resources and allocates one scene instance for each scene in each user's visual range; the XR application server loads the corresponding XR application executable file from the XR application library according to the instruction and generates the scene instance. The specific step of step S24 is that a scene instance allocated to a user renders only the picture of its own scene, not the pictures of descendant scenes deployed inside it, which avoids overloading the rendering workload of a single scene instance when a large number of descendant scenes are deployed in the scene; conversely, since the pictures of descendant scenes are not rendered, in order to generate a picture that includes descendant scenes, the scene needs to send user pose and interaction information data to the descendant scenes and receive the pictures they generate, so a communication connection is established between any scene instance allocated to the user and the parent scene instance allocated to the same user. The specific step of step S25 is that when the user is in the new dimension space, the default interactive experience scene is the root scene of the new dimension space; during initialization, the user side directly establishes a communication connection with the root scene instance, the user side transmits positioning information and interactive operation information to the root scene instance, and the root scene instance directly transmits the spliced new dimension space experience picture to the user side.
The further technical scheme of the invention is as follows: the specific step of step S31 is that, due to the displacement of the user in the new dimension space and changes of the scene deployment in the new dimension space, new scenes appear in the user's visual range, or some of the original scenes disappear, and at any time the system needs to count the scene set in the user's visual range again. The specific step of step S32 is that for any scene newly appearing in the user's visual range, the system allocates computing resources to generate an instance of this scene. The specific step of step S33 is to traverse the scene instances allocated to the user: the system retrieves the parent scene instance of any new scene, and the new scene instance establishes a communication connection with its parent scene instance. The specific step of step S34 is to destroy the communication connection with the parent scene instance for any scene that disappears from the user's visual range. The specific step of step S35 is to destroy any scene instance that disappears from the user's visual range.
The further technical scheme of the invention is as follows: the specific step of step S1 is to configure the new dimension space, where the entire new dimension space itself can be regarded as a scene and is the root scene of the other scenes deployed in it; the new dimension space management control center reads the new dimension space configuration table, determines which sub-scenes are deployed in the root scene of the new dimension space, configures the poses of the sub-scenes and the states of objects in the sub-scenes, and configures the three-dimensional display interval of each sub-scene; further, it configures the grandchild scenes deployed in each sub-scene, the other scenes deployed in the grandchild scenes, and so on, wherein the three-dimensional display interval of any scene necessarily contains the three-dimensional display intervals of its descendant scenes. The specific step of step S4 is that when the user is in the new dimension space, the default interactive experience scene is the root scene of the new dimension space, and the user can switch to any scene in the new dimension space for interactive experience by entering the scene's three-dimensional display interval or by interactive selection; when the user newly enters a scene for interactive experience, the user terminal directly establishes a communication connection with the current experience scene instance allocated to the user, the user side transmits positioning information and interactive operation information to the current experience scene instance, and the current experience scene instance transmits the generated new dimension space experience picture to the user side. The specific step of step S5 is that the current experience scene instance receives the interaction commands sent by the user terminal and may also send control commands to its descendant scene instances; on this basis, each scene instance performs a state update according to its own operating logic. The specific step of step S6 is that the user terminal transmits the positioning information and the interactive operation information related to pose transformation to the scene instance the user currently experiences; the current experience scene instance calculates the pose of the user in the new dimension space and transmits the user's pose information to its child scene instances, the child scene instances transmit the pose information to the grandchild scene instances, and so on, and the pose values of the user in the current experience scene and in each descendant scene are calculated respectively according to the transformation relation between each scene instance and the new dimension space coordinate system. The specific step of step S7 is to render a picture of the current experience scene instance at the user's pose and viewing angle, and to render pictures of the descendant scene instances at the corresponding pose viewing angles according to the user's pose values in the coordinate systems of the descendant scenes of the current experience scene; when each scene instance renders, the descendant scene instances deployed inside it are not rendered; the current experience scene of the user is not constrained by the three-dimensional display interval and its picture is rendered completely, while each descendant scene is rendered incompletely, only within the three-dimensional display interval corresponding to that scene. The specific step of step S8 is to start traversal from the lowest descendant node of the current scene instance: any descendant scene instance receives the color images and depth images sent by all of its own descendant scene instances, splices them with its rendered image to generate color and depth images, then sends these to its parent node, and so on, finally cropping and splicing to generate the new dimension space experience picture output to the user.
For the pixel set P_k imaged by each display interval, the depth values of corresponding pixels in P_k and in the picture ψ are compared, and the RGB value and depth value of each pixel are replaced according to the occlusion relation of those depth values. The specific step of step S9 is that the scene instance of the current user experience sends the cropped and spliced picture to the user terminal through the communication connection established with the user terminal. The specific step of step S10 is that when the user exits the new dimension space, the system destroys all scene instances allocated to the user and the communication connections associated with these scene instances.
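The pose propagation of step S6 can be sketched as composing coordinate transforms down the scene tree. This is an illustrative 2D sketch under assumed pose conventions (x, y, heading angle); the function names and tree encoding are not the patent's.

```python
import math

def world_to_scene(user_pose, scene_pose):
    """Convert a pose (x, y, theta) expressed in a parent frame into a scene's
    local frame, given the scene's pose in that parent frame (2D for brevity)."""
    ux, uy, ut = user_pose
    sx, sy, st = scene_pose
    dx, dy = ux - sx, uy - sy
    c, s = math.cos(-st), math.sin(-st)
    return (c * dx - s * dy, s * dx + c * dy, ut - st)

def propagate_pose(user_pose_in_root, scene_tree):
    """Walk the scene tree (name -> (pose_in_parent, children_subtree)) and
    return the user's pose in every scene's coordinate system, as each scene
    instance passes pose data down to its child instances in step S6."""
    poses = {}
    def walk(pose_in_parent_frame, subtree):
        for name, (scene_pose, children) in subtree.items():
            local = world_to_scene(pose_in_parent_frame, scene_pose)
            poses[name] = local
            walk(local, children)    # children receive the pose in their parent's frame
    walk(user_pose_in_root, scene_tree)
    return poses
```

For example, with scene a at (10, 0, 0) in the root and grandchild scene b at (2, 0, 0) inside a, a user standing at (12, 0, 0) in the root frame is at (2, 0, 0) in a's frame and at the origin of b's frame.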
Another objective of the present invention is to provide a new dimension space system based on XR technology. The system is composed of hardware and software: the system hardware includes multiple XR application servers, a storage server and a management server connected to the XR application servers, and terminals connected to the management server wirelessly or by wire; the system software functional modules include an XR application server side, and a new dimension space management control center, an XR application library, a new dimension space configuration table and a user side connected to the XR application server side, wherein the new dimension space management control center is connected with the new dimension space configuration table and the user side.
Another objective of the present invention is to provide a new dimension space platform based on XR technology, in which multiple XR application scenes can be deployed at high density in the root scene of the platform's new dimension space and the application scenes can overlap in space; a three-dimensional display interval is set for each application scene, a scene can contain child scenes and the child scenes can contain grandchild scenes, and all scenes in a user's field of view are displayed to the user at the same time. When the user selects to enter any scene to experience it, the display of that scene is not restricted by its display interval, while the child scenes contained in the scene are displayed to the user only within their respective display intervals.
The invention has the beneficial effects that: the invention constructs a new dimension space based on XR technology, characterized in that a large number of XR application scenes can be deployed at high density in the new dimension space, the scenes can overlap in space, and a user entering the new dimension space can intuitively and stereoscopically know the real-time operation conditions of all the XR application scenes in the area; the new dimension space can cover a classroom, a campus, a community, or a city; when the new dimension space contains a large number of scenes, rendering many simultaneously running scenes into the same picture requires extremely high graphics computing resources.
Drawings
FIG. 1 illustrates a new dimension space in which a large number of scenes can be deployed at high density, according to an embodiment of the present invention;
fig. 2 is a flowchart of a new dimension space construction method based on XR technology according to an embodiment of the present invention;
FIG. 3 shows a deployment of the new dimension space root scene provided by an embodiment of the present invention;
FIG. 4 is a block diagram of the process of step S2 for initializing the login user according to an embodiment of the present invention;
FIG. 5 is a view of a scene in a visual range after a user logs in a new dimension space according to an embodiment of the present invention;
FIG. 6 is an allocation diagram for the scenario example of FIG. 5 according to an embodiment of the present invention;
FIG. 7 is a diagram of example communication connections for the scenario of FIG. 5 provided by an embodiment of the present invention;
fig. 8 is a flowchart illustrating establishing a communication connection between a user side and a new dimensional space root scene instance according to an embodiment of the present invention;
FIG. 9 is a block diagram of the flow chart of step S3 for dynamically allocating scenario instances to users and managing communication connections between scenario instances according to an embodiment of the present invention;
FIG. 10 is a diagram of communication connection between the updated user and the scene instance after switching the current experience scene in step S4 according to an embodiment of the present invention;
FIG. 11 is a hierarchical relationship diagram of image cropping and stitching in step S8 according to an embodiment of the present invention;
FIG. 12 is a block diagram of software functional modules of a new dimension space system based on XR technology according to an embodiment of the present invention;
fig. 13 is a hardware configuration diagram of a new dimension space platform based on XR technology according to an embodiment of the present invention.
Detailed Description
Extended reality (XR) technology is a synthesis of virtual reality (VR), augmented reality (AR), mixed reality (MR) and other technologies, and a new dimension space fusing the virtual and the real can be constructed in the real world based on XR technology. The virtual-real fusion mode of the new dimension space can be based on MR technology, with the space simultaneously containing real-world objects and virtual objects; the new dimension space may also be based on VR technology, with the space containing only virtual objects.
In the new dimension space, a massive number of different XR application scenes can be deployed at high density at the same time; other XR application scenes can be deployed as child scenes inside the deployed XR application scenes, grandchild scenes can be deployed in the child scenes, further scenes can be deployed in the grandchild scenes, and so on, so that when a user enters the new dimension space based on an XR terminal, the user can select any XR application scene for interactive experience.
The new dimension space can set three-dimensional display intervals for all scenes and descendant scenes deployed in it. After a user selects any scene for interactive experience, the experienced scene is no longer constrained by its three-dimensional display interval for that user, descendant scenes deployed in the scene are displayed only within their respective display intervals, and other scenes are not displayed.
When a plurality of users simultaneously select to perform interactive experience on the same scene, the users can select to perform experience process cooperation with other people, or not to perform the experience process cooperation.
Taking fig. 1 as an example, fig. 1(a), (b), and (c) are three different application scenarios, which are deployed at high density in a new dimension space root scenario (the root scenario is a large scenario corresponding to the whole new dimension space), and these three scenarios overlap in space, as shown in fig. 1(d). The present invention sets 3 three-dimensional display sections in the new dimension space, as shown in fig. 1(e). Along the user's line of sight, because of the front-back relationship of the three-dimensional display sections, their display areas on the screen have an occlusion relationship; under the constraint of the three-dimensional display sections, the display effect is as shown in fig. 1(f). When the user selects a certain scenario for interactive experience, the system displays that scenario completely, as shown in fig. 1(a), fig. 1(b) or fig. 1(c).
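The occlusion relation between display areas reduces, per pixel, to a depth comparison like the one used when splicing in step S8: whichever image is nearer at a pixel wins. A minimal sketch follows (illustrative only; images are flat lists, and the function name is an assumption, not the patent's pipeline):

```python
def composite(base_rgb, base_depth, overlay_rgb, overlay_depth):
    """Per-pixel occlusion test: for each pixel, keep the RGB and depth of
    whichever image is nearer (smaller depth value). All four inputs are
    flat, equal-length lists; returns the composited (rgb, depth) lists."""
    out_rgb, out_depth = [], []
    for brgb, bd, orgb, od in zip(base_rgb, base_depth, overlay_rgb, overlay_depth):
        if od < bd:                      # overlay pixel is in front of the base pixel
            out_rgb.append(orgb)
            out_depth.append(od)
        else:                            # base pixel occludes the overlay pixel
            out_rgb.append(brgb)
            out_depth.append(bd)
    return out_rgb, out_depth
```

For two one-row images, a "blue" pixel at depth 2.0 covers a "red" pixel at depth 5.0, while a "red" pixel at depth 1.0 survives against a "blue" pixel at depth 3.0.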
Fig. 2 shows a new dimensional space construction method based on XR technology, which includes the following steps:
step S1: pre-constructing root scene configuration of a new dimension space; specifically, a new dimension space is configured, the whole new dimension space can be regarded as a scene which is a root scene of other scenes deployed in the new dimension space, a new dimension space management control center reads a new dimension space configuration table, sub-scenes deployed in the root scene of the new dimension space are configured, the poses (positions and pose angles) of all sub-scenes are configured, the size of each scene scale is configured, the state of each object in each scene is configured, and a three-dimensional display interval of each sub-scene is configured; further, configuring grandchild scenes deployed in each sub-scene, configuring other scenes deployed in the grandchild scenes, and so on, wherein the three-dimensional display interval of any scene necessarily includes the three-dimensional display interval of the descendant scene.
Taking fig. 3 as an example, the root scene of the new dimension space is deployed with sub-scenes a, c, and d, the sub-scene a is also deployed with a grandchild scene b, and all scenes except the root scene are set with three-dimensional display intervals.
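The fig. 3 deployment and the containment rule of step S1 (any scene's three-dimensional display interval must contain those of its descendants) can be sketched with a nested configuration. The coordinates, field names, and box representation below are illustrative assumptions, not values from the patent:

```python
# Illustrative configuration for the fig. 3 deployment: the root contains
# sub-scenes a, c, d, and sub-scene a contains grandchild scene b. Display
# intervals are axis-aligned boxes given as (min corner, max corner).
SCENE_CONFIG = {
    "root": {"children": ["a", "c", "d"], "interval": ((0, 0, 0), (100, 100, 100))},
    "a":    {"children": ["b"],           "interval": ((10, 10, 0), (40, 40, 30))},
    "b":    {"children": [],              "interval": ((15, 15, 0), (25, 25, 20))},
    "c":    {"children": [],              "interval": ((50, 10, 0), (80, 40, 30))},
    "d":    {"children": [],              "interval": ((10, 60, 0), (40, 90, 30))},
}

def contains(outer, inner):
    """True if box `outer` fully contains box `inner` along every axis."""
    (omin, omax), (imin, imax) = outer, inner
    return all(a <= b for a, b in zip(omin, imin)) and \
           all(b <= a for a, b in zip(omax, imax))

def check_intervals(config, name="root"):
    """Verify recursively that each scene's display interval contains the
    display intervals of all of its descendant scenes."""
    node = config[name]
    for child in node["children"]:
        if not contains(node["interval"], config[child]["interval"]):
            return False
        if not check_intervals(config, child):
            return False
    return True
```

A configuration in which grandchild b's interval poked outside sub-scene a's interval would fail this check.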
Step S2: initializing a login user; after any user applies for login to the new-dimension space management control center, the management control center initializes the position of the user in the new-dimension space, counts scenes in the visual range of the user, distributes scene examples for the user, establishes communication connection between the scene examples according to the parent-child relationship, and establishes communication connection between the user side and the root scene example of the new-dimension space.
As shown in fig. 4, the step S2 further includes the following steps:
step S21: initializing the pose of a user in a new dimension space; specifically, the user side of any user p_k sends a login application to the new dimension space management control center, and the management control center sets the pose of the user in the new dimension space through the positioning information and the like transmitted by the user side.
Taking fig. 5 as an example, after the user 1 and the user 2 log in the new dimensional space, the management control center calculates the pose value (position and direction angle) of the user in the new dimensional space according to the conversion relationship between the user positioning pose value and the pose value of the new dimensional space.
Step S22: counting a scene set in a user visual range; specifically, a user has a certain visual range in a new dimension space, as in real life, for a single user, the system only needs to render scenes in the visual range to the user, and the user is in different positions or different positions, and the scenes and the number of the scenes in the visual range are different.
Taking the distance value as a main parameter, the management control center gives a visual range of the user (for example, a hemispherical area with the user as a central point), and counts the scenes contained in the visual range (for example, the set of scenes whose three-dimensional display intervals intersect the visual range). At initial login time, the set of scenes contained in the visual range of any user p_k is S_k = {S_k,0, S_k,1, …}, wherein S_k,0 is the root scene S'.
Taking fig. 5 as an example, the scenes visible in the visible range of the user 1 are scenes a, b, and c, and the scenes visible in the visible range of the user 2 are scenes c and d.
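Counting the scenes in a user's visual range amounts to an intersection test between the range and each scene's display interval. A minimal sketch follows, using a spherical range and the standard sphere/box test (the patent describes a hemispherical region; the sphere and all names here are simplifying assumptions):

```python
def sphere_intersects_box(center, radius, box):
    """Standard sphere/AABB intersection test: the distance from the sphere
    centre to the nearest point of the box must not exceed the radius."""
    bmin, bmax = box
    d2 = 0.0
    for c, lo, hi in zip(center, bmin, bmax):
        nearest = min(max(c, lo), hi)    # clamp centre coordinate into the box
        d2 += (c - nearest) ** 2
    return d2 <= radius * radius

def visible_scenes(user_pos, view_radius, intervals):
    """Step S22 sketch: collect every scene whose three-dimensional display
    interval intersects the user's (spherical) visual range."""
    return {name for name, box in intervals.items()
            if sphere_intersects_box(user_pos, view_radius, box)}
```

A user standing near scene a's interval sees a but not a far-away scene d, and vice versa.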
Step S23: allocating scene instances to users; specifically, if a scene is in a visual range of multiple users at the same time, the scene needs to be displayed to the users at the same time, so that scene pictures at the visual angles of the users need to be generated respectively, a single instance of the scene often cannot meet the requirement, therefore, in the same scene, the system allocates the instance of the scene to the users independently, so that the new dimension space management control center sends an instruction to the XR application server with idle computing resources, one scene instance is allocated to each scene in the visual range of each user respectively, and the XR application server loads a corresponding XR application executable file from the XR application library according to the instruction to generate the scene instance.
For any user p_k, the management control center allocates computing resources (with the emphasis on graphics rendering computing resources) to each scene in the visible-range scene set S_k and generates scene instances; the set of scene instances provided to the user is A_k = {A_k,0, A_k,1, …}, wherein A_k,l is the instance of scene S_k,l. According to the new dimension space configuration table, the three-dimensional display intervals specified for all scene instances form the set Ψ_k = {ψ_k,0, ψ_k,1, …}, wherein ψ_k,l is the three-dimensional display section of A_k,l.
Taking fig. 5 as an example, the scene c is within the visual range of both the user 1 and the user 2; therefore, as shown in fig. 6, the system allocates to the user 1 the scene instances: root scene instance 1, scene a instance 1, scene b instance 1 and scene c instance 1, and allocates to the user 2 the scene instances: root scene instance 2, scene c instance 2 and scene d instance 1.
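The per-user allocation rule, one instance per (user, scene) pair even when scenes are shared across visual ranges, can be sketched as follows (identifier format and function name are illustrative assumptions):

```python
def allocate_instances(visible):
    """Step S23 sketch: map each user's visible scene list to a dict of
    scene -> dedicated instance id. Two users who both see scene c receive
    two distinct instances of c, as in the fig. 5 / fig. 6 example."""
    return {user: {scene: f"{scene}-instance-for-{user}" for scene in scenes}
            for user, scenes in visible.items()}
```

With user 1 seeing {root, a, b, c} and user 2 seeing {root, c, d}, user 1 receives four instances, user 2 three, and their instances of scene c are distinct.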
Step S24: establishing communication connection between scene instances according to the parent-child relationship; specifically, a scene instance distributed to a user by the system renders only the picture of its own scene and does not render the pictures of descendant scenes deployed in it, which avoids overloading the rendering workload of a single scene instance when a large number of descendant scenes are deployed in the scene; conversely, since the pictures of the descendant scenes are not rendered, in order to generate a picture containing the descendant scenes, the scene needs to send data such as user pose and interaction information to the descendant scenes and receive the pictures generated by them, so any scene instance distributed to the user establishes a communication connection with the parent scene instance.
The management control center traverses the set A_k of scene instances allocated to user p_k; for any instance A_k,l other than the root instance, it retrieves in A_k the instance of that scene's parent scene, and A_k,l establishes a communication connection with the parent scene instance. The set of communication connections between scene instances for user p_k is denoted C_k.
Still taking fig. 5 as an example, as shown in fig. 7(a), the system establishes communication connections for the user 1 between the root scenario instance 1 and the scenario a instance 1, between the root scenario instance 1 and the scenario c instance 1, and between the scenario a instance 1 and the scenario b instance 1, respectively; the system also establishes communication connections for the user 2 between the root scenario instance 2 and the scenario c instance 2, and between the root scenario instance 2 and the scenario d instance 1, respectively.
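The connection-building pass of step S24 can be sketched as one traversal over a user's instance set, linking each non-root instance to the instance of its parent scene (the mapping and naming conventions below are illustrative assumptions):

```python
def connect_instances(parent_of, instances):
    """Step S24 sketch for one user: `parent_of` maps scene name -> parent
    scene name (None for the root); `instances` maps scene name -> instance
    id allocated to this user. Returns the (parent_instance, child_instance)
    communication-connection pairs."""
    connections = set()
    for scene, instance in instances.items():
        parent = parent_of.get(scene)
        if parent is not None and parent in instances:
            connections.add((instances[parent], instance))
    return connections
```

For user 1 in the fig. 5 example this yields exactly the three connections shown in fig. 7(a): root instance 1 to scene a instance 1, scene a instance 1 to scene b instance 1, and root instance 1 to scene c instance 1.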
In addition, for any scene, the user may select to share the interactive experience process with other users, and at this time, communication connection needs to be established between scene instances of the user of the scene distributed to the shared experience for synchronizing scene state data. Taking fig. 5 as an example, when the user 1 and the user 2 share the interactive experience of the scene c, as shown in fig. 7(b), the scene c establishes a communication connection between the instance 1 and the instance 2, and the two scene instances synchronize the states of all objects of the scene in real time.
Step S25: and establishing communication connection between the user side and the new dimension space root scene instance. Specifically, when a user is in a new dimensional space, a default interactive experience scene is a root scene of the new dimensional space, when the user is initialized, the user side directly establishes communication connection with a root scene example, the user side can transmit positioning information and interactive operation information to the root scene example, and the root scene example directly transmits a new dimensional space experience picture generated through splicing to the user side.
Let the current interactive-experience scene of an arbitrary user p_k be the root scene instance A_{k,0}. The user establishes a communication connection with root scene instance A_{k,0}.
Taking fig. 8 as an example, the user side 1 establishes a communication connection with the new-dimension root scene instance 1, and the user side 2 establishes a communication connection with the new-dimension root scene instance 2.
Step S3: dynamically allocating scene instances to users and managing communication connections between the scene instances; specifically, due to factors such as movement of a user, change of scene deployment of a new dimension space and the like, scenes in a visual range of the user are increased or decreased, the new dimension space control management center needs to dynamically manage computing resources in real time, dynamically allocate scene instances to the user, and dynamically manage communication connection between the scene instances.
As shown in fig. 9, the step S3 further includes the following steps:
step S31: updating a scene set in a visual range of a user; specifically, owing to the user's displacement in the new dimension space, changes in the scene deployment of the new dimension space, and similar factors, new scenes appear within the user's visual range or some previously visible scenes disappear, so the system must recount the scene set within the user's visual range at any time. At any time t_j, the set of scenes within the user's visual range is obtained by counting anew.
step S32: generating instances of new scenes and allocating them to the user; specifically, for any scene newly appearing within the user's visual range, the system allocates computing resources and generates an instance of that scene.
Step S33: establishing connection between the new scene instance and the parent scene instance thereof; specifically, traversing the scene instances assigned to the user, the system retrieves the parent scene instance of any new scene, and the new scene instance establishes a communication connection with the parent scene instance.
step S34: destroying the communication connections of vanished scenes; specifically, for any scene that disappears from the user's visual range, the communication connection between its instance and the parent scene instance is destroyed.
step S35: destroying vanished scene instances; specifically, any scene instance that has disappeared from the user's visual range is destroyed.
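The five sub-steps above amount to diffing the user's visible scene set against the currently allocated instances. A minimal sketch, assuming a hypothetical manager API in which `create`/`destroy` callbacks stand in for the XR application server:

```python
def update_user_scenes(visible_now, allocated, parent_of, create, destroy):
    """One pass of steps S31-S35 for a single user.

    visible_now: set of scene names now in the user's visual range (S31).
    allocated:   dict scene -> instance currently allocated to the user
                 (mutated in place).
    parent_of:   dict scene -> parent scene name.
    create(scene) -> new instance; destroy(instance) tears one down.
    Returns the new parent-child connections established in S33.
    """
    appeared = visible_now - set(allocated)   # scenes that just appeared
    vanished = set(allocated) - visible_now   # scenes that just disappeared

    for scene in appeared:                    # S32: allocate new instances
        allocated[scene] = create(scene)

    connections = set()
    for scene in appeared:                    # S33: connect to parent instance
        parent = parent_of.get(scene)
        if parent in allocated:
            connections.add((allocated[parent], allocated[scene]))

    for scene in vanished:                    # S34/S35: destroy connection+instance
        destroy(allocated.pop(scene))
    return connections
```

In practice this loop would run whenever the user moves or the new-dimension-space deployment changes; the sketch only captures the set arithmetic.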
Step S4: updating a communication connection between the user and the scene instance. Specifically, when a user newly enters a scene's interactive experience, the user terminal directly establishes a communication connection with the current experience scene instance allocated to the user; the user side transmits positioning and interaction-operation information to the current experience scene instance, and the current scene instance transmits the generated new-dimension-space experience picture to the user side.
Let the current scene an arbitrary user p_k enters for interactive experience be A_{k,σ}; the user terminal establishes a communication connection with scene instance A_{k,σ}.
Taking fig. 10 as an example, the user 1 selects to enter the scene b for interactive experience, and the user 2 selects to enter the scene d for interactive experience, so that the user 1 terminal establishes communication connection with the scene b instance 1, and the user 2 terminal establishes communication connection with the scene d instance 1.
Step S5: updating the state of the instance of the current experience scene of the user and the instances of its descendant scenes. Specifically, the current experience scene instance receives interaction commands sent by the user terminal and may also send control commands to its descendant scene instances; on this basis, each scene instance updates its state according to its own running logic. Mainstream XR engines on the market, typified by Unity and Unreal, provide real-time updating of scene-instance state, and the system integrates these basic engine functions.
Step S6: calculating the pose values of the user under the coordinate systems of the scene instances; specifically, a user terminal transmits positioning information and interactive operation information related to pose transformation to a scene instance currently experienced by a user, the current experience scene instance calculates the pose of the user in a root scene of a new dimension space, the current experience scene instance transmits the pose information of the user in the root scene of the new dimension space to a sub-scene instance thereof, the sub-scene instance transmits the pose information of the user to a grandchild scene instance, and so on, and the pose values (values of position and pose angle) of the user in the current experience scene and each grandchild scene are respectively calculated according to the transformation relation between each scene instance and a new dimension space coordinate system.
At any time t_j, scene instance A_{k,σ} receives the user terminal's positioning information and pose-transformation interaction operations and computes the user's pose in the new-dimension-space root scene, consisting of coordinate values (position) and rotation-angle values (attitude). From the transformation between the coordinate system of A_{k,σ} and the coordinate system of the new-dimension-space root scene, the user terminal's pose values in the A_{k,σ} coordinate system are obtained. A_{k,σ} then passes the root-scene pose values down to each descendant scene in turn, and the user's pose values in each descendant scene's coordinate system are obtained from the transformation between that scene's coordinate system and the root-scene coordinate system. Deploying any scene into the new dimension space essentially fixes the transformation from the scene's coordinate system to the new-dimension-space root-scene coordinate system, so this transformation can be obtained by retrieving or computing the scene's deployment.
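The pose propagation of step S6 can be illustrated under a simplifying assumption: each scene's deployment is a planar translation plus a yaw rotation (a full 6-DoF version would use 4x4 homogeneous matrices in exactly the same way). All function names here are illustrative, not from the patent:

```python
import math

def pose_in_scene(root_pose, deploy):
    """Map the user's root-scene pose into one scene's coordinate system.

    root_pose: (x, y, yaw) of the user in root-scene coordinates.
    deploy:    (tx, ty, yaw) of the scene frame expressed in root coordinates
               (i.e. the scene-to-root deployment transform).
    """
    x, y, yaw = root_pose
    tx, ty, dyaw = deploy
    dx, dy = x - tx, y - ty
    # Apply the inverse rotation to express the offset in the scene frame.
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    return (c * dx - s * dy, s * dx + c * dy, yaw - dyaw)

def propagate(root_pose, children, deploys, scene):
    """Yield (scene, pose-in-scene) for `scene` and all of its descendants,
    mirroring how the root pose is handed down the scene tree."""
    yield scene, pose_in_scene(root_pose, deploys[scene])
    for child in children.get(scene, ()):
        yield from propagate(root_pose, children, deploys, child)
```

For example, a user standing at root position (2, 0) is at (1, 0) in a child scene deployed at offset (1, 0) with no rotation.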
Step S7: the user's current experience scene instance and each of its descendant scene instances render pictures respectively. Specifically, the current scene instance A_{k,σ} renders its picture from the viewing angle of the user's pose, and each descendant scene instance renders its picture at the corresponding pose viewing angle according to the user terminal's pose value in that scene's coordinate system. When any scene instance renders, it does not render the descendant scene instances deployed inside it. The user's current experience scene is not constrained by a three-dimensional display interval, so its picture is rendered completely; each descendant scene is rendered incompletely, with only the three-dimensional display interval corresponding to that scene being rendered.
For an arbitrary scene A_{k,l}, let the picture generated after rendering be I_{k,l}, where coordinate [x, y] has pixel value I_{k,l}(x, y), and let the generated depth image be D_{k,l}, where I_{k,l}(x, y) has the depth value D_{k,l}(x, y). From these, the imaging area of I_{k,l} within its three-dimensional display interval is computed as a pixel set P_{k,l}, in which an arbitrary pixel p_m has pixel coordinate [x_m, y_m].
Step S8: cropping and stitching the rendered pictures of all scene instances to generate a new-dimension-space experience picture. Specifically, traversal starts from the lowest descendant nodes of the current scene instance: any descendant scene instance receives the color images and depth images sent by all of its child scene instances, stitches them with its own rendered image to generate a color image and a depth image, and then sends these to its parent node; and so on, until the picture output to the user is finally generated by cropping and stitching. The picture-stitching method for any scene instance is: traverse the color and depth images transmitted by all child scenes of the scene; under the constraint of each child scene display interval's imaging area, compare the pixel depth values of the scene instance and the child scene instance, and replace pixel RGB values and depth values according to the occlusion relationship.
Let scene instance A_{k,σ} have a set of child scene instances, a set of grandchild scene instances, and in general a set of x-th-generation descendant scene instances, with m_{k,σ} generations of descendants in total. The m_{k,σ}-th-generation descendants send their rendered color images and depth images to their parent scene instances. Starting from the (m_{k,σ}-1)-th-generation descendant scene instances of A_{k,σ}, each instance must crop and stitch its own rendered image together with the images sent by its child scenes. The (m_{k,σ}-1)-th-generation descendant scenes of A_{k,σ} send the color and depth images generated by cropping and stitching to the (m_{k,σ}-2)-th-generation descendant scenes, which in turn send their results to the (m_{k,σ}-3)-th generation, and so on; finally A_{k,σ} crops and stitches to generate the picture output to the user.
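The generation-by-generation image flow described above is equivalent to a post-order traversal of the scene-instance tree: every child is processed before its parent, so each instance stitches only images its children have already sent up. A sketch with a hypothetical dict-based tree encoding:

```python
def stitch_order(children, root):
    """Return the order in which scene instances crop-and-stitch.

    children: dict scene -> list of child scenes.
    Deepest descendants come first; the root (the user's current
    experience scene instance) comes last.
    """
    order = []
    def visit(scene):
        for child in children.get(scene, ()):
            visit(child)          # all descendants finish before the parent
        order.append(scene)
    visit(root)
    return order
```

Each instance in this order has, by construction, already received the stitched color and depth images from everything below it.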
Let ψ be A_{k,σ} or any of its descendant scene instances, with several child scenes deployed inside it, each child scene allocated a three-dimensional display interval. ψ receives a color image and a depth image from each of its child scenes. The color image I and depth image D obtained by ψ's own rendering must be cropped and stitched together with the child-scene images, according to D, the child-scene depth images, and the set of pixels of each child scene satisfying its three-dimensional display interval constraint, to generate a new color image and a corresponding new depth image. Fig. 11 shows the hierarchical relationship used for image cropping and stitching. The specific algorithm is as follows:
For an arbitrary child scene of ψ, let P_k be the set of pixels satisfying its three-dimensional display interval constraint. For an arbitrary pixel p_m in P_k, with pixel coordinate [x_m, y_m]: if the child scene's depth value at p_m is smaller than D(x_m, y_m), then set I(x_m, y_m) to the child scene's RGB value at p_m and set D(x_m, y_m) to the child scene's depth value at p_m. Here D(x_m, y_m) is the depth value of pixel p_m in ψ and I(x_m, y_m) is the RGB value of pixel p_m in ψ; the child-scene image and ψ's image have the same resolution and corresponding field angle, so the child-scene pixel and p_m have the same image coordinates along the user's line of sight.
Applying the above calculation while traversing all child-scene images of ψ, and traversing each pixel of each child-scene image that satisfies the three-dimensional display interval constraint, the finally computed color image I and depth image D are exactly the new color image and corresponding new depth image to be solved. ψ then sends the new color image and corresponding new depth image to its parent scene instance.
And by analogy, the final new dimension space experience picture in the current interactive experience scene of the user can be obtained through calculation.
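The per-pixel cropping-and-stitching rule can be sketched as a depth test restricted to each child scene's display-interval pixel set. Images are modeled here as dicts keyed by pixel coordinate purely for illustration; a real implementation would operate on GPU color and depth buffers:

```python
def composite(I, D, child_I, child_D, P_k):
    """Stitch one child scene's image into the parent image in place.

    I, D:             parent color/depth images, dicts keyed by (x, y).
    child_I, child_D: child-scene color/depth images at the same
                      resolution and field angle.
    P_k:              pixels satisfying the child's three-dimensional
                      display interval constraint.
    """
    for (x, y) in P_k:
        # The child pixel wins only where it is closer to the viewer.
        if child_D[(x, y)] < D[(x, y)]:
            I[(x, y)] = child_I[(x, y)]
            D[(x, y)] = child_D[(x, y)]
    return I, D
```

Running this for every child scene of ψ, child by child, yields the new color and depth images that ψ forwards to its parent instance.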
Step S9: the scene example of the current user experience sends the cut and spliced new dimension space experience picture to a user terminal; specifically, a scene example of the current user experience sends a cut and spliced new dimension space experience picture to a user terminal through a communication connection established with the user terminal.
Step S10: destroy the scene instance. Specifically, when the user exits the new dimension space, the system destroys all scene instances assigned to the user and the communication connections associated with the scene instances.
Fig. 12 shows the software composition of the XR-technology-based new-dimension-space system. The system's software functional modules include an XR application server side; a new-dimension-space management control center, an XR application library, a new-dimension-space configuration table, and a user side connected to the XR application server side; and the new-dimension-space management control center is connected to the new-dimension-space configuration table and the user side. The new-dimension-space management control center is installed on a management server in the server cluster; its functions include receiving user login and logout applications, new-dimension-space configuration management, scene state management, XR application server computing-resource management, allocating scene instances to users, and managing communication connections between users and scene instances and between scene instances. The XR application library is installed on a storage server in the server cluster; it contains a large number of XR application executable files shared with all XR application servers, and each executable file can construct one or more XR interactive-experience scenes. The new-dimension-space configuration table is deployed on the management server; it configures which XR scenes are deployed in the new dimension space, the parent-child relationships among scenes, the poses of scenes, the three-dimensional display intervals of scenes, the initial states of scenes, and so on. The XR application server side is installed on each XR application server; its main functions are generating and destroying scene instances and generating and destroying the communication connections between scene instances and with the user side. The user side is installed on the terminal; it collects the terminal's positioning data and interaction data, sends them to the scene instance, and receives and displays the new-dimension-space experience picture.
Fig. 13 shows the hardware composition of the XR-technology-based new-dimension-space system. The system hardware includes at least one XR application server; a storage server and a management server connected to the XR application server; and XR terminals connected to the management server wirelessly or by wire. The XR application servers must have high-performance graphics computing capability, the servers must be interconnected, and a user's XR terminal must be able to access the management server and all XR application servers in a wired or wireless manner.
The method is characterized in that a new-dimension-space platform is constructed based on XR technology: a large number of XR application scenes can be deployed at high density in the platform's new-dimension-space root scene, and these application scenes may overlap in space; a display interval is set for each application scene; a scene may contain child scenes, and child scenes may contain grandchild scenes; all scenes within the user's field of view are displayed to the user simultaneously; and when the user chooses to enter any scene to experience it, the display of that scene is no longer limited by its display interval, while the child scenes it contains are displayed to the user only within their corresponding display intervals.
The invention constructs a new dimension space based on XR technology, and the new dimension space is characterized in that: a large number of XR application scenes can be deployed at high density in the new dimension space, the scenes can be overlapped in space, and a user can visually and stereoscopically know real-time operation conditions of all the XR application scenes in the area when entering the new dimension space; the new dimension space can cover a classroom, or a campus, a community, or a city; when the new dimension space contains a large number of scenes, a plurality of scenes which run simultaneously are rendered to the same picture, and extremely high graphic computing resources are needed.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A new dimension space construction method based on an XR technology is characterized by comprising the following steps:
step S1: pre-constructing root scene configuration of a new dimension space;
step S2: initializing a login user;
step S3: dynamically allocating scene instances to users and managing communication connections between the scene instances;
step S4: updating a communication connection between the user and the scene instance;
step S5: updating the state of the instance of the current experience scene of the user and the instance of the descendant scene of the current experience scene of the user;
step S6: calculating the pose values of the user under the coordinate systems of the scene instances;
step S7: respectively rendering pictures by the current experience scene instance and the descendant scene instance of the user;
step S8: cutting and splicing the rendering picture pictures of the scene instances to generate a new dimension space experience picture of a user;
step S9: the current experience scene example of the user sends a new dimension space experience picture to the user terminal;
step S10: destroy the scene instance.
2. The method for constructing the new-dimensional space according to claim 1, wherein the specific step of step S2 is that after any user applies for login to the new-dimensional space management control center, the management control center initializes the position of the user in the new-dimensional space, counts the scenes within the visual range of the user, distributes scene instances to the user, establishes communication connection between the scene instances according to the parent-child relationship, and establishes communication connection between the user side and the root scene instance of the new-dimensional space.
3. The method for constructing the new dimensional space according to claim 2, wherein the specific step of step S3 is that, due to the movement of the user and the deployment and change factors of the scene in the new dimensional space, the scene in the visual range of the user increases or decreases, the new dimensional space control and management center needs to dynamically manage computing resources in real time, dynamically allocate scene instances to the user, and dynamically manage the communication connection between the scene instances.
4. The method for constructing a new dimensional space according to claim 3, wherein the step S2 further comprises the steps of:
step S21: initializing the pose of a user in a new dimension space;
step S22: counting a scene set in a user visual range;
step S23: allocating scene instances to users;
step S24: establishing communication connection between scene instances according to the parent-child relationship;
step S25: and establishing communication connection between the user side and the new dimension space root scene instance.
5. The method for constructing a new dimensional space according to claim 4, wherein the step S3 further comprises the steps of:
step S31: updating a scene set in a visual range of a user;
step S32: distributing the new scene generation instance to a user;
step S33: establishing connection between the new scene instance and the parent scene instance thereof;
step S34: destroying the communication connection of the disappearing scene;
step S35: destroy the disappearing scene instance.
6. The method for constructing the new-dimensional space according to claim 5, wherein the specific step of step S21 is that a user side of any user sends a login application to a new-dimensional space management control center, and the management control center sets a pose of the user in the new-dimensional space through positioning information transmitted from the user side; the specific step of step S22 is that the user has a certain visual range in the new dimensional space, as in real life, for a single user, the system only needs to render the scenes in the visual range to the user, and the users at different positions or different positions have different scenes and number of scenes in the visual range; the specific step of step S23 is that if a scene is in the visual range of multiple users at the same time, the scene needs to be displayed to the users at the same time, and therefore scene images under the visual angles of the users need to be generated respectively, a single instance of the scene often cannot meet such a requirement, and thus, in the same scene, the system allocates an instance of the scene to the users separately, so that, the new dimension space management control center sends an instruction to the XR application server having idle computing resources, and allocates a scene instance to each scene in the visual range of each user respectively, and the XR application server loads a corresponding XR application executable file from the XR application library according to the instruction, and generates a scene instance; the specific step of step S24 is that, for a scene instance allocated to a user by the system, it only renders the picture of its own scene, but does not render the picture of a descendant scene disposed inside it, so that it can be avoided that the rendering calculation amount of a single scene instance is overloaded after a large number of descendant scenes are disposed in the scene, and conversely, since the picture of a 
descendant scene is not rendered, in order to generate a picture including a descendant scene, the scene needs to send user pose and interaction information data to the descendant scene, and receive the picture generated by the descendant scene, so that a communication connection is established between any scene instance allocated to the user and the parent scene instance allocated to the user; the specific step of step S25 is that, when the user is in the new dimensional space, the default interactive experience scene is the root scene of the new dimensional space, and during initialization, the user directly establishes communication connection with the root scene instance, the user transmits the positioning information and the interactive operation information to the root scene instance, and the root scene instance directly transmits the experience picture of the new dimensional space generated by splicing to the user.
7. The method for constructing the new-dimensional space according to claim 6, wherein the specific step of step S31 is that a new scene appears in the visual range of the user or an original partial scene disappears due to the displacement of the user in the new-dimensional space and the change of the scene deployment in the new-dimensional space, and for any time, the system needs to count the scene set in the visual range of the user again; the specific step of step S32 is that for any scene that newly appears in the visual range of the user, the system allocates computing resources to generate an instance of this scene; the specific step of step S33 is to traverse the scene instances allocated to the user, and the system retrieves the parent scene instance of any new scene, and the new scene instance establishes a communication connection with its parent scene instance; the specific step of step S34 is to destroy the communication connection with the parent scene instance for any scene that disappears from the user's visible range; the specific step of step S35 is to destroy any scene instance that disappears from the user's visible range.
8. The method for constructing a new-dimensional space according to claim 7, wherein the specific step of step S1 is to configure the new-dimensional space, the whole new-dimensional space being viewed as a root scene: the new dimension space management control center reads a new dimension space configuration table, configures the positions and states of all sub-scenes and of all objects in the sub-scenes, and configures the three-dimensional display interval of each sub-scene; further, grandchild scenes deployed in each sub-scene are configured, other scenes deployed in the grandchild scenes are configured, and so on, the three-dimensional display interval of any scene necessarily containing the three-dimensional display intervals of its descendant scenes; the specific step of step S4 is that the user is in the new dimensional space, the default interactive experience scene is the root scene of the new dimensional space, and the user can switch to any scene in the new dimensional space for interactive experience by entering the scene's three-dimensional display interval or by interactive selection; when the user newly enters a scene's interactive experience, the user terminal directly establishes a communication connection with the current experience scene instance allocated to the user, the user side transmits the positioning information and the interactive operation information to the current experience scene instance, and the current experience scene instance transmits the generated new-dimensional-space experience picture to the user side; the specific step of step S5 is that the current experience scene instance receives interaction commands sent by the user terminal and may also send control commands to its descendant scene instances, on which basis each scene instance performs state updates according to its own operating logic; the specific step of step S6 is that the user terminal transmits the positioning information and the interactive operation information related to pose transformation to the scene instance currently experienced by the user, the currently experienced scene instance calculates the user's pose in the new dimension space, the currently experienced scene instance transmits the user pose information to its sub-scene instances, the sub-scene instances transmit the user pose information to the grandchild scene instances, and so on, the user's pose values in the currently experienced scene and in each descendant scene being calculated respectively according to the transformation relationship between each scene instance and the new-dimension-space coordinate system; the specific step of step S7 is to render a picture of the current experience scene instance under the user's pose viewing angle and to render pictures of the descendant scenes at the corresponding poses according to the pose values of the user terminal in the coordinate systems of the descendant scenes of the current experience scene; when each scene instance is rendered, the descendant scene instances deployed within it are not rendered, the user's current experience scene is not constrained by the three-dimensional display interval and its picture is rendered completely, while each descendant scene is rendered incompletely, only the three-dimensional display interval corresponding to that scene being rendered; the specific step of step S8 is to start traversal from the lowest descendant node of the current scene instance: any descendant scene instance receives all color images and depth images sent by its child scene instances, stitches them with its own rendered image to generate a color image and a depth image, then sends these to its own parent node, and so on, a new-dimension-space experience picture output to the user finally being generated by cropping and stitching; the picture-stitching method for any scene instance ψ is, for each of its child scenes, to take the pixel set P_k imaged within the child scene's display interval, compare the depth values of the pixels in P_k with those of the corresponding pixels in ψ, and replace the RGB values and depth values of the pixels in ψ according to the occlusion relationship; the specific step of step S9 is that, through the communication connection established with the user terminal, the scene instance of the current user experience sends the new-dimensional-space experience picture generated by cropping and stitching to the user terminal; the specific step of step S10 is that, when the user exits the new dimension space, the system destroys all scene instances allocated to the user and the communication connections associated with these scene instances.
9. The system of claim 1, wherein the system comprises system hardware and software, the system hardware comprises at least one XR application server, a storage server and a management server connected to the XR application server, and a terminal connected to the management server wirelessly or through a wire, and the system software functional module comprises an XR application server side, and a new-dimension space management control center, an XR application library, a new-dimension space configuration table, and a user side connected to the XR application server side; and the new dimension space management control center is connected with the new dimension space configuration table and the user side.
10. The new-dimensional space platform of the new-dimensional space construction method according to claim 1, wherein a large number of XR application scenes can be deployed at high density in a root scene of the platform's new-dimensional space, the application scenes may overlap in space, a three-dimensional display interval is set for each application scene, a scene may contain child scenes and the child scenes may contain grandchild scenes, all scenes in the user's field of view are displayed to the user simultaneously, and when the user selects to enter any scene to experience it, the display of that scene is not restricted by its display interval, while the child scenes contained in that scene are displayed to the user only within the display intervals corresponding to them.
CN202210022608.9A 2022-01-10 2022-01-10 XR (X-ray diffraction) technology-based new-dimension space construction method, system and platform Active CN114356096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210022608.9A CN114356096B (en) 2022-01-10 2022-01-10 XR (extended reality) technology-based new-dimension space construction method, system and platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210022608.9A CN114356096B (en) 2022-01-10 2022-01-10 XR (extended reality) technology-based new-dimension space construction method, system and platform

Publications (2)

Publication Number Publication Date
CN114356096A true CN114356096A (en) 2022-04-15
CN114356096B CN114356096B (en) 2022-09-02

Family

ID=81109136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210022608.9A Active CN114356096B (en) 2022-01-10 2022-01-10 XR (extended reality) technology-based new-dimension space construction method, system and platform

Country Status (1)

Country Link
CN (1) CN114356096B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844689A (en) * 2016-03-29 2016-08-10 Inspur (Suzhou) Financial Technology Service Co., Ltd. Method of using dimensional space technology for multidimensional data synchronization management
CN107402633A (en) * 2017-07-25 2017-11-28 Shenzhen Yingshuo Technology Co., Ltd. Safety education system based on image simulation technology
US20190005733A1 (en) * 2017-06-30 2019-01-03 Paul Alexander Wehner Extended reality controller and visualizer
CN110971678A (en) * 2019-11-21 2020-04-07 Shenzhen Polytechnic Immersive visual campus system based on 5G network
CN111163286A (en) * 2018-11-08 2020-05-15 Beijing Aerospace Changfeng Science and Technology Industry Group Co., Ltd. Panoramic monitoring system based on mixed reality and video intelligent analysis technology
CN111199561A (en) * 2020-01-14 2020-05-26 Shanghai ManHeng Digital Technology Co., Ltd. Multi-person cooperative positioning method and system for virtual reality equipment
US20210142578A1 (en) * 2019-11-11 2021-05-13 Aveva Software, Llc Computerized system and method for an extended reality (xr) progressive visualization interface
CN113360042A (en) * 2021-06-21 2021-09-07 Suzhou Xinkandian Information Technology Co., Ltd. Virtual reality scene browsing method
CN113628284A (en) * 2021-08-10 2021-11-09 Shenzhen Institute of Artificial Intelligence and Robotics for Society Pose calibration data set generation method, device and system, electronic equipment and medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YANXIANG ZHANG ET AL.: ""Through the Solar System": an XR science education system based on multiple monitors", IEEE Xplore *
LIU Xiuling et al.: "Collaborative control of distributed multi-interactive virtual scene rendering", Computer Engineering and Applications *
HONG Tao et al.: "Research on construction methods for interactive geological scene models based on XR technology", Geological Bulletin of China *
CHEN Ruihao et al.: "Construction and application of a '5G+XR' teaching system in vocational colleges: the case of Shenzhen Polytechnic", Journal of Guangxi Vocational and Technical College *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998063A (en) * 2022-04-22 2022-09-02 Shenzhen Polytechnic XR (extended reality) technology-based immersive class construction method and system and storage medium
CN115808974A (en) * 2022-07-29 2023-03-17 Shenzhen Polytechnic Immersive command center construction method and system and storage medium
CN115808974B (en) * 2022-07-29 2023-08-29 Shenzhen Polytechnic Immersive command center construction method, immersive command center construction system and storage medium
CN116860112A (en) * 2023-08-16 2023-10-10 Shenzhen Polytechnic Combined scene experience generation method, system and medium based on XR technology
CN116860112B (en) * 2023-08-16 2024-01-23 Shenzhen Polytechnic University Combined scene experience generation method, system and medium based on XR technology

Also Published As

Publication number Publication date
CN114356096B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN114356096B (en) XR (extended reality) technology-based new-dimension space construction method, system and platform
EP2490179B1 (en) Method and apparatus for transmitting and receiving a panoramic video stream
KR101559838B1 (en) Visualizaion method and system, and integrated data file generating method and apparatus for 4d data
Royan et al. Network-based visualization of 3d landscapes and city models
CN109996055A (en) Position zero time delay
CN102450011A (en) Methods and apparatus for efficient streaming of free view point video
KR102389157B1 (en) Method and apparatus for providing 6-dof omni-directional stereoscopic image based on layer projection
CN115830199B (en) XR technology-based ubiquitous training campus construction method, system and storage medium
CN114998063A (en) XR (extended reality) technology-based immersive class construction method and system and storage medium
CN114926612A (en) Aerial panoramic image processing and immersive display system
CN116325769A (en) Panoramic video streaming scenes from multiple viewpoints
US11710256B2 (en) Free-viewpoint method and system
US7091991B2 (en) 3D stereo browser for the internet
US11910054B2 (en) Method and apparatus for decoding a 3D video
CN115423916A (en) XR (extended reality) technology-based immersive interactive live broadcast construction method, system and medium
WO2001065854A1 (en) Interactive navigation through real-time live video space created in a given remote geographic location
CN116860113B (en) XR combined scene experience generation method, system and storage medium
EP3564905A1 (en) Conversion of a volumetric object in a 3d scene into a simpler representation model
CN116860112B (en) Combined scene experience generation method, system and medium based on XR technology
US7190371B2 (en) 3D stereo browser for the internet
RU2771957C2 (en) Device and method for generating mosaic representation of three-dimensional scene image
US20230401752A1 (en) Techniques using view-dependent point cloud renditions
US20220165020A1 (en) Apparatus and method of generating an image signal
CN115808974A (en) Immersive command center construction method and system and storage medium
CN116405642A (en) Method and device for fusing video and live-action three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant