CN115423920B - VR scene processing method, device and storage medium - Google Patents


Info

Publication number
CN115423920B
CN115423920B (application CN202211134294.8A; published as CN115423920A)
Authority
CN
China
Prior art keywords
rendering
scene
point
points
user
Prior art date
Legal status
Active
Application number
CN202211134294.8A
Other languages
Chinese (zh)
Other versions
CN115423920A (en)
Inventor
杨光
白杰
李成杰
李勇
Current Assignee
You Can See Beijing Technology Co ltd AS
Original Assignee
You Can See Beijing Technology Co ltd AS
Priority date
Filing date
Publication date
Application filed by You Can See Beijing Technology Co ltd AS
Priority to CN202211134294.8A
Publication of CN115423920A
Priority to PCT/CN2022/140018 (published as WO2024055462A1)
Application granted
Publication of CN115423920B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure disclose a VR scene processing method, device, and storage medium. The method includes: in response to a VR viewing request sent by a user's terminal device, determining an initial VR scene corresponding to the request, where the initial VR scene is the initial scene to be rendered; determining a first point location of the initial VR scene; rendering the first point location to obtain a first rendering result, which includes the rendered first VR scene corresponding to the first point location; and sending the first rendering result to the terminal device so that the terminal device displays the first VR scene corresponding to the first point location to the user. Because the first point location is rendered preferentially and its rendering result is sent to the user as soon as rendering completes, the user can quickly view the VR scene from the first point location's viewing angle. This effectively reduces the user's waiting time, improves the user experience, and solves the long user waiting time of the prior art.

Description

VR scene processing method, device and storage medium
Technical Field
The disclosure relates to virtual reality technology, and in particular to a VR scene processing method, device, and storage medium.
Background
VR (Virtual Reality) scenes, also called panoramic scenes, are interactive three-dimensional scenes that fuse multi-source information and are constructed from panoramic images through computer image processing. They present a stereoscopic scene vividly and comprehensively through a 720-degree viewing angle and are widely used in many fields, such as furniture display, tourist attraction display, virtual exhibition halls, digital museums, VR automobiles, and VR houses. Rendering is an important technology for giving the user a realistic VR scene display effect. In the related art, when a user views a VR scene, it is usually rendered in real time and the corresponding rendered VR scene is then displayed to the user, but real-time rendering forces the user to wait a long time.
Disclosure of Invention
The embodiments of the disclosure provide a VR scene processing method, device, and storage medium that effectively reduce the user's waiting time and improve the user experience.
In one aspect of the embodiments of the present disclosure, there is provided a processing method of a VR scene, including:
responding to a VR viewing request sent by terminal equipment of a user, and determining an initial VR scene corresponding to the VR viewing request, wherein the initial VR scene is an initial scene to be rendered;
Determining a first point position of the initial VR scene;
rendering the first point location to obtain a first rendering result corresponding to the first point location, wherein the first rendering result comprises a rendered first VR scene corresponding to the first point location;
and sending the first rendering result to the terminal equipment so that the terminal equipment displays the first VR scene corresponding to the first point location to a user.
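As an illustration only (the patent provides no code), the four steps above can be sketched in Python; all names (Scene, render_point, handle_vr_view_request) and the request fields are hypothetical stand-ins:

```python
# Illustrative sketch only: all names and request fields are
# hypothetical stand-ins, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str                                    # e.g. a house-type identifier
    points: list = field(default_factory=list)   # roaming point locations

def determine_initial_scene(request: dict) -> Scene:
    # In practice this would look up a pre-built to-be-rendered scene
    # by the house-type information carried in the VR viewing request.
    return Scene(name=request["house_type"], points=request["points"])

def render_point(scene: Scene, point: str) -> dict:
    # Stand-in for the real renderer; produces the "rendered VR scene".
    return {"point": point, "scene": f"rendered:{scene.name}:{point}"}

def handle_vr_view_request(request: dict) -> dict:
    scene = determine_initial_scene(request)         # step 1: initial VR scene
    first_point = scene.points[0]                    # step 2: first point location
    first_result = render_point(scene, first_point)  # step 3: render it first
    return first_result                              # step 4: send to the terminal

result = handle_vr_view_request(
    {"house_type": "A1", "points": ["living_room", "bedroom", "kitchen"]}
)
```

The point of the sketch is that the response is produced after rendering only one point location; the remaining points are handled afterwards.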
In an embodiment of the present disclosure, after the first point location is rendered and the first rendering result corresponding to the first point location is obtained, the method further includes:
rendering other points of the initial VR scene except the first point to obtain second rendering results corresponding to the other points respectively, wherein the second rendering results corresponding to the other points comprise rendered second VR scenes corresponding to the other points;
and sending the second rendering results corresponding to the other points to the terminal equipment.
In an embodiment of the present disclosure, the rendering the other points of the initial VR scene except the first point location to obtain second rendering results corresponding to the other points respectively includes:
rendering the other points in ascending order of their connectivity distance from the first point location, to obtain the second rendering results respectively corresponding to the other points;
the sending the second rendering results corresponding to the other points to the terminal device includes:
and sending the second rendering result of each of the other points to the terminal device as soon as that point's rendering is completed.
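A minimal sketch of this distance-ordered variant (function and variable names are assumptions; `send` stands in for the network push to the terminal device):

```python
# Illustrative sketch: render the remaining points in ascending order of
# their distance to the first point and push each result immediately.
def render(point: str) -> str:
    return f"rendered:{point}"          # stand-in for the real renderer

def render_other_points(distances: dict, send) -> None:
    """distances maps each other point to its connectivity distance
    from the first point location (an assumed representation)."""
    for point in sorted(distances, key=distances.get):   # nearest first
        send(point, render(point))      # send as soon as it is ready

sent = []
render_other_points(
    {"kitchen": 3.0, "bedroom": 1.5, "balcony": 6.2},
    lambda point, result: sent.append(point),
)
```

Rendering nearest points first matches the intuition that the user is most likely to roam to a point close to the current one.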
In an embodiment of the present disclosure, the rendering the other points of the initial VR scene except the first point location to obtain second rendering results corresponding to the other points respectively includes:
and rendering other points of the initial VR scene in parallel to obtain second rendering results corresponding to the other points.
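The parallel variant can be sketched with a thread pool (a hypothetical illustration; the patent does not specify the concurrency mechanism):

```python
# Illustrative sketch: render all remaining points concurrently.
from concurrent.futures import ThreadPoolExecutor

def render(point: str) -> dict:
    return {"point": point, "scene": f"rendered:{point}"}  # renderer stand-in

def render_points_in_parallel(points: list) -> list:
    # map() preserves input order even though rendering runs concurrently.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(render, points))

results = render_points_in_parallel(["bedroom", "kitchen", "balcony"])
```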
In an embodiment of the present disclosure, after the other points of the initial VR scene are rendered and their second rendering results are obtained, the method further includes:
performing three-dimensional model rendering based on the planar floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point location, and the object space-occupancy information, to obtain a third rendering result, where the third rendering result includes a rendered target three-dimensional model;
and, in response to a three-dimensional model viewing request sent by the terminal device, sending the third rendering result to the terminal device so as to show the target three-dimensional model to the user.
In an embodiment of the present disclosure, the method further includes:
receiving a viewing request, sent by the terminal device, for a first other point location other than the first point location;
in response to rendering of the first other point location being incomplete, sending loading prompt information corresponding to the first other point location to the terminal device, so that the terminal device displays a loading prompt at the first other point location based on that information;
and, in response to rendering of the first other point location being complete, sending the second rendering result corresponding to the first other point location to the terminal device, so as to display the second VR scene corresponding to the first other point location to the user.
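The two responses above can be sketched as a single handler that checks whether the requested point has finished rendering (all names are illustrative assumptions):

```python
# Illustrative sketch: reply with a loading prompt until the requested
# point's second rendering result is available.
completed = {}   # point -> second rendering result, filled as rendering finishes

def handle_point_view_request(point: str) -> dict:
    if point not in completed:
        return {"status": "loading", "point": point}     # loading prompt
    return {"status": "ready", "point": point, "result": completed[point]}

resp_before = handle_point_view_request("kitchen")       # not rendered yet
completed["kitchen"] = "rendered:kitchen"                # rendering finishes
resp_after = handle_point_view_request("kitchen")
```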
In an embodiment of the disclosure, the VR viewing request includes target house-type information and region-of-interest information selected by the user;
the determining the initial VR scene corresponding to the VR viewing request includes:
determining the initial VR scene based on the target house-type information;
the determining the first point location of the initial VR scene includes:
taking the point location corresponding to the region-of-interest information as the first point location of the initial VR scene.
In an embodiment of the disclosure, the VR viewing request further includes decoration style information selected by the user;
the rendering the first point location to obtain the first rendering result corresponding to the first point location includes:
rendering the first point location based on the decoration style information to obtain the first rendering result corresponding to the first point location.
In another aspect of the embodiments of the present disclosure, there is provided a processing apparatus for VR scenes, including:
the first determining module is used for responding to a VR viewing request sent by terminal equipment of a user, determining an initial VR scene corresponding to the VR viewing request, wherein the initial VR scene is an initial scene to be rendered;
the second determining module is used for determining the first point position of the initial VR scene;
the first processing module is used for rendering the first point location to obtain a first rendering result corresponding to the first point location, wherein the first rendering result comprises a rendered first VR scene corresponding to the first point location;
and the first sending module is used for sending the first rendering result to the terminal device, so that the first VR scene corresponding to the first point location is displayed to the user.
In an embodiment of the disclosure, the apparatus further comprises:
the second processing module is used for rendering other points of the initial VR scene except the first point, so as to obtain second rendering results corresponding to the other points respectively, wherein the second rendering results corresponding to the other points comprise rendered second VR scenes corresponding to the other points;
and the second sending module is used for sending the second rendering results corresponding to the other points to the terminal equipment.
In an embodiment of the disclosure, the second processing module is specifically configured to:
rendering the other points in ascending order of their connectivity distance from the first point location, to obtain the second rendering results respectively corresponding to the other points;
the second sending module is specifically configured to:
send the second rendering result of each of the other points to the terminal device as soon as that point's rendering is completed.
In an embodiment of the disclosure, the second processing module is specifically configured to:
and rendering other points of the initial VR scene in parallel to obtain second rendering results corresponding to the other points.
In an embodiment of the disclosure, the apparatus further comprises:
the third processing module is used for performing three-dimensional model rendering based on the planar floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point location, and the object space-occupancy information, to obtain a third rendering result, where the third rendering result includes a rendered target three-dimensional model;
and the third sending module is used for sending, in response to a three-dimensional model viewing request sent by the terminal device, the third rendering result to the terminal device so as to show the target three-dimensional model to the user.
In an embodiment of the disclosure, the apparatus further comprises:
the first receiving module is used for receiving a viewing request, sent by the terminal device, for a first other point location other than the first point location;
a fourth sending module, configured to send, in response to rendering of the first other point location being incomplete, loading prompt information corresponding to the first other point location to the terminal device, so that the terminal device displays a loading prompt at the first other point location based on that information;
and a fifth sending module, configured to send, in response to rendering of the first other point location being complete, the second rendering result corresponding to the first other point location to the terminal device, so as to display the second VR scene corresponding to the first other point location to the user.
In an embodiment of the disclosure, the VR viewing request includes target house-type information and region-of-interest information selected by the user;
the first determining module is specifically configured to: determine the initial VR scene based on the target house-type information;
the second determining module is specifically configured to: take the point location corresponding to the region-of-interest information as the first point location of the initial VR scene.
In an embodiment of the disclosure, the VR viewing request further includes decoration style information selected by the user;
the first processing module is specifically configured to: render the first point location based on the decoration style information to obtain the first rendering result corresponding to the first point location.
According to yet another aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method according to any of the above embodiments of the present disclosure.
According to the VR scene processing method, device, and storage medium of the disclosure, when the user requests VR viewing, the first point location of the initial VR scene is rendered preferentially and its rendering result is sent to the user as soon as rendering completes. The user can therefore quickly view the VR scene from the first point location's viewing angle, which effectively reduces the user's waiting time, improves the user experience, and solves problems of the prior art such as long user waiting time.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
fig. 1 is an exemplary application scenario of a VR scenario processing method provided by the present disclosure;
fig. 2 is a flowchart illustrating a processing method of a VR scene according to an exemplary embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a processing method of a VR scene according to another exemplary embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a processing method of a VR scene provided in still another exemplary embodiment of the present disclosure;
fig. 5 is a flowchart of a processing method of a VR scene provided in yet another exemplary embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a processing method of a VR scene provided in still another exemplary embodiment of the present disclosure;
FIG. 7 is a flow chart of step 203 provided by an exemplary embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a processing device for VR scenes according to an exemplary embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of a processing device for VR scenes provided in another exemplary embodiment of the present disclosure;
fig. 10 is a schematic diagram showing the structure of an application embodiment of the electronic device.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the front and rear association objects are an or relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that, for convenience of description, the sizes of the various parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Summary of the disclosure
In the process of implementing the present disclosure, the inventors found that rendering is an important technology for providing the user with a realistic VR (Virtual Reality) scene display effect. In the related art, when a user views a VR scene, the initial VR scene to be viewed is generally rendered in real time and the corresponding rendered VR scene is then displayed to the user, but real-time rendering forces the user to wait a long time.
Exemplary overview
Fig. 1 is an exemplary application scenario of a VR scenario processing method provided in the present disclosure.
In the real-estate field, when a user wants to buy or rent a house, the user can view it through an application (APP) installed on a terminal device and, via VR, see a realistic stereoscopic scene of its interior. With the VR scene processing method of the disclosure, after the user selects a house type to view on the terminal device, the user can trigger VR house viewing. The terminal device captures the user's operation and sends a VR viewing request, which may include the target house-type information selected by the user. The server determines the corresponding initial VR scene from the target house-type information in the request. The initial VR scene may be a to-be-rendered VR scene obtained after object placement, personalized processing, and determination of the point locations in the scene, where placement refers to determining the positions occupied by objects in the scene, personalized processing refers to determining the style of those objects and other related information, and a point location is a roaming point, i.e., a virtual position within the scene that the user can move to. The server then determines the first point location of the initial VR scene, renders it, and sends the rendered first VR scene corresponding to the first point location to the terminal device, which displays it to the user. The user can thus quickly view the scene, which effectively shortens the waiting time and improves the user experience.
For the other points besides the first point location, rendering can be carried out while the user is viewing the first point location's VR scene, and each result can be sent to the terminal device directly once rendering finishes. By the time the user has finished browsing the first point location's VR scene, the other points may already be rendered, so the user can view them directly; the fully rendered VR scene is thus provided without the user perceiving any wait.
The VR scene processing method is not limited to the real-estate field and can be applied to any other field involving VR scenes, such as tourist attraction display, virtual exhibition halls, digital museums, furniture display, VR automobiles, and VR decoration, and can be set according to actual requirements.
Exemplary method
Fig. 2 is a flowchart illustrating a processing method of a VR scene according to an exemplary embodiment of the present disclosure. The method comprises the following steps:
Step 201, in response to a VR viewing request sent by a terminal device of a user, determining an initial VR scene corresponding to the VR viewing request, where the initial VR scene is an initial scene to be rendered.
A user may view a VR scene in any field: a house-viewing user views the VR scene of a house, a tourist views the VR scene of a tourist attraction, a visitor views the VR scene of a virtual exhibition hall, and so on. The terminal device can be any device supporting VR scene display, such as a mobile phone or tablet. The initial VR scene may be a to-be-rendered VR scene obtained after object placement, personalized processing, and determination of the point locations in the scene, where placement refers to determining the positions occupied by objects in the scene, personalized processing refers to determining the style of those objects and other related information, and point locations are roaming points, i.e., virtual positions within the scene that the user can move to. The initial VR scene may be obtained in advance and stored in correspondence with its actual scene. For example, in the real-estate field, the initial VR scene corresponding to each house type can be obtained in advance and stored against the house-type information. When a user selects a house whose VR scene is to be viewed, the corresponding initial VR scene can be determined from the house-type or house information; this can be set according to actual requirements. The user may trigger sending a VR viewing request to the device or server executing the methods of the present disclosure through the terminal device.
The VR viewing request may include association information corresponding to a VR scene to be viewed by a user, where the association information may be set according to actual needs, for example, the association information may be target house information or target house type information, and the association information may also be virtual exhibition hall identification information, tourist attraction identification information, digital museum identification information, or the like, so long as an initial VR scene corresponding to the VR viewing request can be determined based on the association information, which is not specifically limited.
Step 202, determining the first point location of the initial VR scene.
The first point location may be determined according to the user's actual requirement, or according to a preset rule that can be set as needed. For example, the first point location may be chosen randomly, i.e., one of the multiple points of the initial VR scene is taken at random as the first point location; or a preset first point location may be used, e.g., the point location corresponding to a certain functional area of the house (such as the living room, bedroom, dining room, or kitchen). The user's actual requirement may be specified when the user triggers the VR viewing request, or may be determined by analyzing user information authorized by the user or the user's historical operations, with the first point location then determined from that requirement. The specific way of determining the first point location is not limited.
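The selection rules described above (region of interest, preset rule, random fallback) might look like the following sketch; the preference order is an invented example, not from the patent:

```python
# Illustrative sketch of first-point-location selection.
import random

PREFERRED_ORDER = ["living_room", "bedroom", "kitchen"]  # assumed preset rule

def choose_first_point(points: list, region_of_interest: str = None) -> str:
    if region_of_interest in points:       # user's stated requirement wins
        return region_of_interest
    for preferred in PREFERRED_ORDER:      # otherwise apply the preset rule
        if preferred in points:
            return preferred
    return random.choice(points)           # finally, pick one at random

first = choose_first_point(["bedroom", "kitchen"], region_of_interest="kitchen")
```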
Step 203, rendering the first point location to obtain a first rendering result corresponding to the first point location, where the first rendering result includes a rendered first VR scene corresponding to the first point location.
After the first point location is determined, it can be rendered preferentially, and the rendered first VR scene corresponding to the first point location is obtained once rendering finishes. Rendering refers to the computation that produces an image effect after attributes such as material, color, and illumination are assigned to the elements of a scene. The specific rendering mode may be any implementation; the disclosure does not limit it.
In an alternative example, panoramic images corresponding to real scenes (such as houses, virtual exhibition halls, or tourist attractions) may be generated in advance; the panoramic imagery of each real scene may include a panoramic image for every point location of its corresponding VR scene, from which the panoramic image of the first point location can be obtained. Rendering the first point location's panoramic image may include assigning attributes to each target element in it. For example, if the target element is a wall and the target attribute is yellow, the wall is yellow in the rendered panoramic image. Rendering every element of the first point location's panoramic image yields the rendered panoramic image, which forms the rendered first VR scene of the first point location.
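The per-element attribute assignment in this example can be sketched as a lookup over the elements detected in the panoramic image (the element names and attribute table are invented for illustration):

```python
# Illustrative sketch: assign material/colour attributes to each target
# element of the first point location's panoramic image.
ATTRIBUTES = {
    "wall": {"color": "yellow"},   # e.g. the wall is rendered yellow
    "floor": {"color": "oak"},
}

def render_panorama(elements: list) -> dict:
    # Elements without configured attributes keep a default appearance.
    return {e: ATTRIBUTES.get(e, {"color": "default"}) for e in elements}

rendered = render_panorama(["wall", "floor", "ceiling"])
```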
In an alternative example, the panoramic image may be rendered based on pre-configured rendering data, where the rendering data includes attribute information corresponding to each element, and may specifically be set according to actual requirements.
In an optional example, the panoramic images of the point locations may also be rendered according to a style specified by the user, which can be set according to actual requirements.
Step 204, sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point location to the user.
After rendering of the first point location is completed, the first rendering result corresponding to it can be sent to the terminal device promptly, and the terminal device can display the first VR scene of the first point location to the user. The user can thus quickly view the first VR scene without waiting too long, which effectively reduces the waiting time and improves the user experience.
According to the VR scene processing method of this embodiment, when the user requests VR viewing, the first point location of the initial VR scene is rendered preferentially and its rendering result is sent to the user as soon as rendering completes. The user can quickly view the VR scene from the first point location's viewing angle, which effectively reduces the user's waiting time, improves the user experience, and solves problems of the prior art such as long user waiting time.
Fig. 3 is a flowchart illustrating a processing method of a VR scene according to another exemplary embodiment of the present disclosure.
In an optional example, after rendering the first point in step 203 to obtain the first rendering result corresponding to the first point, the method further includes:
step 205, rendering other points of the initial VR scene except the first point, to obtain second rendering results corresponding to the other points, where the second rendering results corresponding to the other points include the rendered second VR scene corresponding to the other points.
Because the initial VR scene corresponding to each real scene generally includes a plurality of point locations, after the rendering of the first point location is completed with priority, rendering of the other point locations can continue, yielding the second rendering results corresponding to the other point locations. The rendering principle for the other point locations is the same as for the first point location and is not repeated here.
In this embodiment, the other point locations are rendered while the user views the first VR scene of the first point location, so the rendering of the other point locations may finish before the user has even finished browsing the first VR scene. A complete VR scene is thus provided to the user without the user perceiving any delay.
Step 205 and step 204 are not performed in a fixed order; either may precede the other.
Step 206, sending the second rendering results corresponding to the other points to the terminal device.
After the second rendering results corresponding to the other point locations are obtained, they can be sent to the terminal device. When the user clicks another point location in the first VR scene, the terminal device can directly use the received second rendering result of that point location and, based on it, display the corresponding second VR scene to the user.
According to the method and the device, the first rendering result of the first point location is sent to the terminal device first for the user to view. While the user browses the first VR scene corresponding to the first point location, the other point locations are rendered, and each second rendering result is sent directly to the terminal device for storage as soon as it is obtained. When the user clicks another point location in the first VR scene, the second VR scene of that point location can be displayed quickly, so the user perceives only the waiting time of the first point location while being provided with the whole VR scene. This effectively reduces the user's waiting time and improves the user experience. Moreover, because no rendering needs to be processed in advance, the consumption of server resources and the associated cost can be reduced.
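Steps 205-206 can be sketched as a loop over the remaining point locations; the function and parameter names are assumptions for illustration.

```python
def render_remaining_points(points, first_point, render_point, send_to_device, device):
    """After the first point location has been served, render every other
    point location (step 205) and push each second rendering result to the
    terminal device for caching as soon as it is ready (step 206)."""
    results = {}
    for point in points:
        if point == first_point:
            continue  # the first point location was already rendered with priority
        results[point] = render_point(point)
        send_to_device(device, point, results[point])
    return results
```

In a real server this loop would run asynchronously alongside the user's browsing of the first VR scene, as the embodiment describes.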
In an optional example, after receiving the VR viewing request sent by the terminal device, the server may further send rendering-wait interface data to the terminal device, so that the terminal device displays a rendering-wait interface to the user; the specific content of the rendering-wait interface may be set according to actual requirements, and the disclosure is not limited in this respect. The first VR scene corresponding to the first point location is displayed to the user once the rendering of the first point location is completed. Alternatively, after the rendering of the first point location is completed, an enter-VR confirmation interface may be displayed to the user, and the user enters the first VR scene corresponding to the first point location by clicking a button with a function such as "Enter VR" or "Confirm" in the confirmation interface; this may be set according to actual requirements.
Fig. 4 is a flowchart illustrating a processing method of a VR scene according to still another exemplary embodiment of the present disclosure.
In an optional example, the rendering the other points of the initial VR scene except the first point in step 205 to obtain second rendering results corresponding to the other points respectively includes:
Step 2051a, rendering the other points in ascending order of their communication distance to the first point, to obtain the second rendering results corresponding to the other points.
The communication distance between each other point location and the first point location can be determined in advance when the point locations are determined, so the rendering order of the other point locations can be derived from the communication distances, and each other point location is rendered in that order to obtain its second rendering result. For the specific rendering operation of each point location, reference is made to the aforementioned first point location, and details are not repeated here.
Step 206 of sending the second rendering results corresponding to the other points to the terminal device, including:
In step 2061a, each time the rendering of one of the other point locations is completed, the second rendering result of that point location is sent to the terminal device.
While the other point locations are rendered in order, in order to let the user quickly view the point locations near the first point location, the second rendering result of each other point location is sent to the terminal device as soon as its rendering is completed. When the user clicks such a point location in the first VR scene, the user can quickly move to it and browse the corresponding second VR scene. For example, if the first point location is a living room, the other point locations within the panoramic view range of the first point location are displayed in the first VR scene of the living room. If the rendering of one of these point locations is not yet complete, the user cannot enter its second VR scene by clicking it; instead, a loading prompt can be displayed at that point location. For instance, point locations may be represented by circles in the first VR scene, and when the user clicks a point location whose rendering is not complete, the circle is shown rotating to indicate loading; this may be set according to actual requirements. After the rendering of that point location is completed, the terminal device receives its second rendering result, ends the loading prompt, and switches to the second VR scene of that point location so the user can browse it.
According to the method and the device, the other point locations are rendered in ascending order of their communication distance to the first point location. Because the communication distance reflects the order in which the user can move from the first point location to the other point locations, the user can quickly move to the point locations near the first point location, and so on: the point locations near the point location the user is currently browsing are rendered first, so the user rarely needs to wait for loading, which further improves the user experience.
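The distance-ordered variant of steps 2051a/2061a reduces to a sort over pre-computed distances. The names below (`render_in_distance_order`, the `distance_to_first` table) are illustrative assumptions.

```python
def render_in_distance_order(other_points, distance_to_first, render_point,
                             send_to_device, device):
    """Render the other point locations in ascending order of their
    pre-computed communication distance to the first point location,
    sending each second rendering result as soon as it completes."""
    for point in sorted(other_points, key=lambda p: distance_to_first[p]):
        send_to_device(device, point, render_point(point))

# Example: the nearby bedroom (distance 1) is rendered before the balcony (distance 3).
order = []
render_in_distance_order(
    ["balcony", "bedroom", "kitchen"],
    {"balcony": 3, "bedroom": 1, "kitchen": 2},
    lambda p: "scene:" + p,
    lambda device, point, result: order.append(point),
    "device-1",
)
print(order)  # → ['bedroom', 'kitchen', 'balcony']
```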
Fig. 5 is a flowchart illustrating a processing method of a VR scene according to another exemplary embodiment of the present disclosure.
In an optional example, the rendering the other points of the initial VR scene except the first point in step 205 to obtain second rendering results corresponding to the other points respectively includes:
step 2051b, rendering each other point of the initial VR scene in parallel to obtain a second rendering result corresponding to each other point.
In order to further improve the user experience, after the rendering of the first point location is completed, the other point locations can be rendered in parallel, so that the rendering of all point locations finishes quickly and the VR scenes of all point locations are provided to the user as soon as possible, allowing the user to move and browse freely among the point locations. In the parallel rendering process, the rendering principle of each point location is the same as for the first point location and is not repeated here.
According to the method and the device, after the rendering of the first point position is completed, the rendering of all other point positions is performed in parallel, so that the rendering efficiency is further improved, the rendering time is reduced, and the user experience is further improved.
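The parallel variant of step 2051b can be sketched with a thread pool; the worker-pool choice and all names here are assumptions of this sketch, not the disclosure's implementation.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def render_points_in_parallel(other_points, render_point, send_to_device, device):
    """Render all remaining point locations in parallel and send each
    second rendering result to the terminal device as it completes."""
    results = {}
    with ThreadPoolExecutor() as pool:
        # Submit every remaining point location as an independent rendering job.
        futures = {pool.submit(render_point, p): p for p in other_points}
        # Forward each result in completion order, not submission order.
        for future in as_completed(futures):
            point = futures[future]
            results[point] = future.result()
            send_to_device(device, point, results[point])
    return results
```

Because results are forwarded with `as_completed`, point locations that finish early reach the terminal device without waiting for slower ones, matching the goal of minimizing perceived waiting time.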
In an optional example, after rendering the other points of the initial VR scene except the first point in step 205, to obtain the second rendering results corresponding to the other points, the method further includes:
step 301, performing three-dimensional model rendering based on the planar floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point position, and the object space occupying information, to obtain a third rendering result, where the third rendering result includes the rendered target three-dimensional model.
The planar floor plan may be obtained and stored in advance, and the panoramic image corresponding to each point location may be the rendered panoramic image generated during the point-location rendering process, which can be obtained directly in this step. The object occupancy information may be obtained in an earlier stage and may include the position of each object in the three-dimensional model as well as the object's shape, size, and so on, so as to describe the spatial area the object occupies in the three-dimensional model. Specifically, corresponding three-dimensional model data, including a set of three-dimensional coordinate points, can be determined based on the planar floor plan; a three-dimensional scene of the floor plan is constructed from this model data; three-dimensional scenes of the objects are constructed from the panoramic images of the point locations and the object occupancy information; and the rendered target three-dimensional model is obtained by combining the three-dimensional scene of the floor plan with the three-dimensional scenes of the objects. The specific rendering principle of the three-dimensional model is not described in detail here.
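The combination step 301 describes can be sketched as assembling two kinds of scene data; the dictionary keys and the function name `build_target_model` are hypothetical, chosen only to mirror the inputs named above.

```python
def build_target_model(floor_plan, panoramas, occupancy_info):
    """Combine the three-dimensional scene built from the planar floor plan
    with per-object scenes built from the panoramic images and the object
    occupancy information (step 301)."""
    # Scene of the floor plan: the set of three-dimensional coordinate points.
    room_scene = {"coordinate_points": floor_plan["coordinate_points"]}
    # Scene of each object: where it sits, its extent, and its appearance
    # taken from the panorama of the associated point location.
    object_scenes = [
        {
            "position": occ["position"],
            "shape": occ["shape"],
            "size": occ["size"],
            "texture": panoramas.get(occ["point"]),
        }
        for occ in occupancy_info
    ]
    return {"rooms": room_scene, "objects": object_scenes}
```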
Step 302, in response to the three-dimensional model viewing request sent by the terminal device, sending the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
After the rendering of all point locations is completed but before the rendering of the three-dimensional model is completed, a radar-map entry to the three-dimensional model can be displayed on the VR scene interface shown by the user's terminal device, for example in the upper-right corner, although clicks on it are not yet handled at that time. After the rendering of the three-dimensional model is completed, if the user clicks the radar map, the third rendering result can be sent to the terminal device, and the terminal device can display the target three-dimensional model of the currently viewed house for the user to view.
In an alternative example, after the user clicks the radar map, the user may enter a three-dimensional model interface or a planar floor plan interface, where the three-dimensional model interface displays the target three-dimensional model and the planar floor plan interface displays the planar floor plan; this may be set according to actual requirements.
According to the method and the device, the three-dimensional model rendering is performed after the rendering of all the points is completed, so that a three-dimensional model browsing function is provided for a user, the user can browse the global three-dimensional space layout, and the user experience is further improved.
Fig. 6 is a flowchart illustrating a processing method of a VR scene according to still another exemplary embodiment of the present disclosure.
In one optional example, the method of the present disclosure further comprises:
step 401, receiving a viewing request of a first other point location except a first point location sent by a terminal device.
The first other point location may be any of the other point locations except the first point location. When the user browses the first VR scene of the first point location, the other point locations visible within the panoramic view angle of the first point location are displayed in the first VR scene, and the user may click any of them; the clicked point location is referred to as the first other point location. The terminal device captures the user's click operation; if the second rendering result corresponding to the first other point location has already been received, the terminal device can immediately display the corresponding second VR scene to the user, and otherwise it is triggered to send a viewing request for the first other point location to the server.
Step 402, in response to the rendering of the first other point location being incomplete, sending a loading prompt corresponding to the first other point location to the terminal device, so that the terminal device displays the loading prompt at the first other point location.
The specific content of the loading prompt may be set according to actual requirements and may be text, pictures, animations, and the like, as long as it prompts the user that loading is in progress; the disclosure is not limited in this respect. If the rendering of the first other point location has not finished, the loading prompt corresponding to the first other point location is sent to the terminal device, and the terminal device displays it at the first other point location to prompt the user to wait for loading.
Step 403, in response to the completion of rendering of the first other point location, sending a second rendering result corresponding to the first other point location to the terminal device, so as to display a second VR scene corresponding to the first other point location to the user.
After the rendering of the first other point location is completed, its second rendering result is promptly sent to the terminal device, and the terminal device promptly displays the second VR scene corresponding to the first other point location to the user.
In an optional example, the loading prompts of the other point locations may also be sent to the terminal device in advance, so that when the user clicks an other point location whose rendering is not complete, the terminal device autonomously displays the loading prompt of that point location to the user.
According to the method and the device, when the user clicks a point location that has not been rendered, a loading prompt is displayed to inform the user that the point location is being loaded; the user can view other point locations first or wait at that one. After the rendering of the point location is completed, its second rendering result is promptly sent to the terminal device and the second VR scene of the point location is displayed to the user, which further improves the user experience.
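The decision in steps 401-403 is a simple lookup against the set of finished renderings; the message shapes and names below are illustrative assumptions.

```python
def handle_point_viewing_request(point, finished_results, send_to_device, device):
    """When a first other point location is requested (step 401), send its
    second rendering result if rendering is complete (step 403); otherwise
    send a loading prompt for the terminal device to display (step 402)."""
    if point in finished_results:
        send_to_device(device, {"type": "scene", "point": point,
                                "result": finished_results[point]})
        return "scene"
    # Rendering not complete yet: prompt the user that loading is in progress.
    send_to_device(device, {"type": "loading", "point": point})
    return "loading"
```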
In one optional example, the VR viewing request includes user-selected target household type information and region of interest information; step 201, in response to a VR viewing request sent by a terminal device of a user, determining an initial VR scene corresponding to the VR viewing request, including:
in step 2011, an initial VR scene is determined based on target household type information included in a VR viewing request in response to the VR viewing request sent by the terminal device of the user.
The target house type information may include a target house type identifier or a target house type graph, which may be set according to actual requirements; the target house type identifier may be unique identification information set in advance for each house type, such as a house type ID, and the specific identification mode is not limited. The region of interest information may include the functional room the user specifies as being of interest, such as a kitchen, living room, or master bedroom, without limitation. The target house type information may be the house type information corresponding to a target house the user selects while browsing a house list. The region of interest information may be collected through a filtering function offered for selection while the user browses the house list, or through a pop-up interface for selecting the region of interest when the user clicks to view the VR scene after selecting the target house; the specific triggering mode may be set according to actual requirements, and the disclosure is not limited in this respect. A corresponding initial VR scene is pre-configured for each house type and stored in association with the house type information, so after the target house type information is obtained, the corresponding initial VR scene can be determined based on it.
Step 202 of determining the first point location of the initial VR scene includes:
in step 2021, the point location corresponding to the region of interest information included in the VR viewing request is taken as the first point location of the initial VR scene.
For example, when the attention area of the user is a kitchen, the point corresponding to the kitchen is used as the first point of the initial VR scene, and rendering is performed preferentially.
According to the method and the device, the first point position is determined based on the user attention area, so that the user can quickly and accurately check VR scenes among the functions of attention, and user experience is further improved.
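Steps 2011 and 2021 amount to two lookups; the request fields and mapping structure below are hypothetical, mirroring the kitchen example above.

```python
def choose_initial_scene_and_first_point(request, scenes_by_house_type):
    """Select the initial VR scene by the target house type identifier
    (step 2011) and take the point location matching the user's region of
    interest as the first point location (step 2021)."""
    scene = scenes_by_house_type[request["house_type_id"]]
    first_point = scene["point_by_region"][request["region_of_interest"]]
    return scene, first_point
```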
In one optional example, the VR viewing request further includes user-selected decoration style information; rendering the first point location in step 203 to obtain a first rendering result corresponding to the first point location, including:
step 2031a, rendering the first point location based on the decoration style information included in the VR viewing request, to obtain a first rendering result corresponding to the first point location.
The decoration style information is similar to the aforementioned region of interest information: a selection function can be offered to the user at any practicable stage to obtain the decoration style selected by the user, such as a European style or a modern style. Rendering the first point location based on the decoration style information may mean rendering the panoramic image of the first point location with the rendering data corresponding to that decoration style, so that the rendered VR scene meets the user's decoration style requirements.
In an alternative example, for each real scene, panoramic images of different decoration styles of the real scene may be obtained in advance, and during rendering, panoramic images of decoration styles required by a user may be obtained to form VR scenes of corresponding points.
In an optional example, rendering data of the corresponding style may be obtained and stored in advance for each decoration style. During rendering, the initial panoramic image is rendered based on the rendering data of the decoration style required by the user, yielding a rendered panoramic image that meets the user's style requirement and in turn forming the VR scene of the corresponding point location; this may be set according to actual requirements.
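Under the per-style rendering-data variant just described, style selection is one extra lookup before the element-wise rendering. The function name and the style/attribute values are illustrative assumptions only.

```python
def render_panorama_with_style(elements, rendering_data_by_style, style):
    """Look up the rendering data stored for the user's decoration style
    and apply it to each element of the initial panoramic image."""
    rendering_data = rendering_data_by_style[style]
    return {name: rendering_data.get(name) for name in elements}

# Example: the same wall rendered differently under two decoration styles.
styles = {
    "european": {"wall": "cream"},
    "modern": {"wall": "grey"},
}
print(render_panorama_with_style(["wall"], styles, "modern"))  # → {'wall': 'grey'}
```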
Fig. 7 is a flow chart of step 203 provided by an exemplary embodiment of the present disclosure.
In an optional example, the rendering the first point in step 203, to obtain a first rendering result corresponding to the first point, includes:
step 2031b, obtaining a target panoramic image corresponding to the first point location.
Step 2032b, rendering the first point location based on the target panoramic image, to obtain a first rendering result corresponding to the first point location.
The rendering of the first point location based on the target panoramic image means that a rendered panoramic image corresponding to the first point location is formed by rendering the target panoramic image, and then a first rendering result corresponding to the first point location is formed.
Any of the VR scene processing methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to terminal devices and servers. Alternatively, any of the VR scene processing methods provided in the embodiments of the present disclosure may be executed by a processor; for example, the processor may execute any of the VR scene processing methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. This is not repeated below.
Exemplary apparatus
Fig. 8 is a schematic structural diagram of a processing device for VR scenes according to an exemplary embodiment of the present disclosure. The apparatus of this embodiment may be used to implement a corresponding method embodiment of the present disclosure, where the apparatus shown in fig. 8 includes: a first determination module 501, a second determination module 502, a first processing module 503, and a first sending module 504.
A first determining module 501, configured to determine an initial VR scene corresponding to a VR viewing request, where the initial VR scene is an initial scene to be rendered, in response to a VR viewing request sent by a terminal device of a user; a second determining module 502, configured to determine a first point location of the initial VR scene; a first processing module 503, configured to render the first point location, obtain a first rendering result corresponding to the first point location, where the first rendering result includes a rendered first VR scene corresponding to the first point location; and the first sending module 504 is configured to send the first rendering result to the terminal device for displaying to the user.
Fig. 9 is a schematic structural diagram of a processing device for VR scenes according to another exemplary embodiment of the present disclosure.
In an alternative example, the apparatus of the present disclosure further comprises: a second processing module 505 and a second transmitting module 506.
A second processing module 505, configured to render other points of the initial VR scene except the first point, obtain second rendering results corresponding to the other points, where the second rendering result corresponding to each other point includes a rendered second VR scene corresponding to the other point; and a second sending module 506, configured to send the second rendering results corresponding to the other points to the terminal device.
In an alternative example, the second processing module 505 is specifically configured to: render the other points in ascending order of their communication distance to the first point, to obtain the second rendering results respectively corresponding to the other points.
The second sending module 506 is specifically configured to: each time the rendering of one of the other points is completed, send the second rendering result of that point to the terminal device.
In an alternative example, the second processing module 505 is specifically configured to: and rendering other points of the initial VR scene in parallel to obtain second rendering results corresponding to the other points.
In an embodiment of the present disclosure, the apparatus of the present disclosure further comprises: a third processing module 507 and a third transmitting module 508.
The third processing module 507 is configured to perform three-dimensional model rendering based on the planar house type graph corresponding to the initial VR scene, the panoramic image corresponding to each point position, and the object space occupying information, to obtain a third rendering result, where the third rendering result includes a rendered target three-dimensional model; and a third sending module 508, configured to send the third rendering result to the terminal device in response to the three-dimensional model viewing request sent by the terminal device, so as to display the target three-dimensional model to the user.
In an alternative example, the apparatus of the present disclosure further comprises: a first receiving module 601, a fourth transmitting module 602, and a fifth transmitting module 603.
A first receiving module 601, configured to receive a view request of a first other point location except the first point location sent by the terminal device; a fourth sending module 602, configured to send, in response to the incomplete rendering of the first other point location, in-load prompt information corresponding to the first other point location to the terminal device, so that the terminal device displays in-load prompt information at the first other point location based on the in-load prompt information; and a fifth sending module 603, configured to send, in response to the rendering of the first other point location, a second rendering result corresponding to the first other point location to the terminal device, so as to display the second VR scene corresponding to the first other point location to the user.
In an alternative example, the VR viewing request includes target household type information and region of interest information selected by the user; the first determining module 501 is specifically configured to: determining the initial VR scene based on the target household type information; the second determining module 502 is specifically configured to: and taking the point position corresponding to the attention area information as the first point position of the initial VR scene.
In an alternative example, the VR viewing request further includes the user selected decoration style information; the first processing module 503 is specifically configured to: and rendering the first point location based on the decoration style information to obtain the first rendering result corresponding to the first point location.
In addition, the embodiment of the disclosure also provides an electronic device, which comprises:
a memory for storing a computer program;
and a processor, configured to execute a computer program stored in the memory, where the computer program is executed to implement the VR scene processing method according to any one of the foregoing embodiments of the present disclosure.
Fig. 10 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. As shown in fig. 10, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device to perform the desired functions.
The memory may store one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor to implement the VR scene processing methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: input devices and output devices, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, etc., to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 10 for simplicity, components such as buses, input/output interfaces, and the like being omitted. In addition, the electronic device may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present disclosure described in the above section of the specification.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present disclosure described in the above section of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments, so that the same or similar parts among the embodiments may be referred to one another. Since the system embodiments essentially correspond to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for the relevant details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended, mean "including but not limited to," and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatuses, devices, and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (8)

1. A method for processing a VR scene, comprising:
in response to a VR viewing request sent by a terminal device of a user, determining an initial VR scene corresponding to the VR viewing request, wherein the initial VR scene is an initial scene to be rendered;
determining a first point location of the initial VR scene;
rendering the first point location to obtain a first rendering result corresponding to the first point location, wherein the first rendering result comprises a rendered first VR scene corresponding to the first point location; and
sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point location to the user;
wherein after rendering the first point location to obtain the first rendering result corresponding to the first point location, the method further comprises:
rendering other point locations of the initial VR scene except the first point location to obtain second rendering results respectively corresponding to the other point locations, wherein the second rendering results corresponding to the other point locations comprise rendered second VR scenes corresponding to the other point locations; and
sending the second rendering results corresponding to the other point locations to the terminal device;
wherein after rendering the other point locations of the initial VR scene to obtain the second rendering results corresponding to the other point locations, the method further comprises:
performing three-dimensional model rendering based on a planar floor plan corresponding to the initial VR scene, a panoramic image corresponding to each point location, and object space-occupying information, to obtain a third rendering result, wherein the third rendering result comprises a rendered target three-dimensional model; and
in response to a three-dimensional model viewing request sent by the terminal device, and upon determining that the three-dimensional model rendering is completed, sending the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
2. The method of claim 1, wherein rendering the other point locations of the initial VR scene except the first point location to obtain the second rendering results respectively corresponding to the other point locations comprises:
rendering the other point locations sequentially in ascending order of their communication distance from the first point location, to obtain the second rendering results respectively corresponding to the other point locations;
and wherein sending the second rendering results corresponding to the other point locations to the terminal device comprises:
each time rendering of one of the other point locations is completed, sending the second rendering result of that point location to the terminal device.
3. The method of claim 1, wherein rendering the other point locations of the initial VR scene except the first point location to obtain the second rendering results respectively corresponding to the other point locations comprises:
rendering the other point locations of the initial VR scene in parallel to obtain the second rendering results corresponding to the other point locations.
4. The method of claim 1, further comprising:
receiving a viewing request, sent by the terminal device, for a first other point location other than the first point location;
in response to rendering of the first other point location being incomplete, sending loading prompt information corresponding to the first other point location to the terminal device, so that the terminal device displays a loading prompt at the first other point location based on the loading prompt information; and
in response to rendering of the first other point location being completed, sending the second rendering result corresponding to the first other point location to the terminal device, so as to display the second VR scene corresponding to the first other point location to the user.
5. The method of claim 1, wherein the VR viewing request includes target house type information and region-of-interest information selected by the user;
wherein determining the initial VR scene corresponding to the VR viewing request comprises:
determining the initial VR scene based on the target house type information;
and wherein determining the first point location of the initial VR scene comprises:
taking a point location corresponding to the region-of-interest information as the first point location of the initial VR scene.
6. The method of claim 1, wherein the VR viewing request further includes decoration style information selected by the user;
and wherein rendering the first point location to obtain the first rendering result corresponding to the first point location comprises:
rendering the first point location based on the decoration style information to obtain the first rendering result corresponding to the first point location.
7. A processing apparatus for a VR scene, comprising:
a first determining module configured to determine, in response to a VR viewing request sent by a terminal device of a user, an initial VR scene corresponding to the VR viewing request, wherein the initial VR scene is an initial scene to be rendered;
a second determining module configured to determine a first point location of the initial VR scene;
a first processing module configured to render the first point location to obtain a first rendering result corresponding to the first point location, wherein the first rendering result comprises a rendered first VR scene corresponding to the first point location;
a first sending module configured to send the first rendering result to the terminal device for display to the user;
a second processing module configured to render other point locations of the initial VR scene except the first point location to obtain second rendering results respectively corresponding to the other point locations, wherein the second rendering results corresponding to the other point locations comprise rendered second VR scenes corresponding to the other point locations;
a second sending module configured to send the second rendering results corresponding to the other point locations to the terminal device;
a third processing module configured to perform three-dimensional model rendering based on a planar floor plan corresponding to the initial VR scene, a panoramic image corresponding to each point location, and object space-occupying information, to obtain a third rendering result, wherein the third rendering result comprises a rendered target three-dimensional model; and
a third sending module configured to send, in response to a three-dimensional model viewing request sent by the terminal device and upon determining that the three-dimensional model rendering is completed, the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
8. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method for processing a VR scene according to any one of claims 1-6.
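For illustration only, the progressive delivery flow of claims 1 and 2 — render the first point location, send it to the terminal immediately, then render the remaining point locations in ascending order of distance from the first point and send each result as soon as it is ready — can be sketched as below. This is a minimal, hypothetical sketch, not the patented implementation: the names `process_vr_scene`, `render_point`, and the room labels are invented for the example, and the real method operates on panoramas and a 3D pipeline rather than strings.

```python
def render_point(point):
    # Stand-in for the actual renderer; returns a placeholder
    # "rendering result" for the given point location.
    return f"rendered:{point}"

def process_vr_scene(first_point, other_points, distance, send):
    # Step 1 (claim 1): render the first point location and send its
    # result at once, so the user can start viewing without waiting
    # for the whole scene to finish rendering.
    send(first_point, render_point(first_point))

    # Step 2 (claim 2): render the other point locations in ascending
    # order of their distance from the first point, sending each
    # second rendering result as soon as that point completes.
    for point in sorted(other_points, key=lambda p: distance(first_point, p)):
        send(point, render_point(point))

if __name__ == "__main__":
    sent = []
    process_vr_scene(
        first_point="living_room",
        other_points=["bathroom", "kitchen", "bedroom"],
        # Hypothetical distances from the first point.
        distance=lambda a, b: {"kitchen": 1, "bedroom": 2, "bathroom": 3}[b],
        send=lambda point, result: sent.append(point),
    )
    print(sent)  # → ['living_room', 'kitchen', 'bedroom', 'bathroom']
```

The ordering mirrors the design rationale of claim 2: point locations the user is most likely to walk to next (the nearest ones) become viewable first, while claim 3's variant would instead dispatch the renderings concurrently.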
CN202211134294.8A 2022-09-16 2022-09-16 VR scene processing method, device and storage medium Active CN115423920B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211134294.8A CN115423920B (en) 2022-09-16 2022-09-16 VR scene processing method, device and storage medium
PCT/CN2022/140018 WO2024055462A1 (en) 2022-09-16 2022-12-19 Vr scene processing method and apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211134294.8A CN115423920B (en) 2022-09-16 2022-09-16 VR scene processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN115423920A CN115423920A (en) 2022-12-02
CN115423920B true CN115423920B (en) 2024-01-30

Family

ID=84204012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211134294.8A Active CN115423920B (en) 2022-09-16 2022-09-16 VR scene processing method, device and storage medium

Country Status (2)

Country Link
CN (1) CN115423920B (en)
WO (1) WO2024055462A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423920B (en) * 2022-09-16 2024-01-30 如你所视(北京)科技有限公司 VR scene processing method, device and storage medium


Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US10872467B2 (en) * 2018-06-06 2020-12-22 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
CA3134424A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. Virtual interaction with three-dimensional indoor room imagery
CN110866200A (en) * 2019-11-12 2020-03-06 北京城市网邻信息技术有限公司 Service interface rendering method and device
CN111627116B (en) * 2020-05-29 2024-02-27 联想(北京)有限公司 Image rendering control method and device and server
CN114078092A (en) * 2020-08-11 2022-02-22 中兴通讯股份有限公司 Image processing method and device, electronic equipment and storage medium
US20220139026A1 (en) * 2020-11-05 2022-05-05 Facebook Technologies, Llc Latency-Resilient Cloud Rendering
CN113763552A (en) * 2021-09-08 2021-12-07 苏州光格科技股份有限公司 Three-dimensional geographic model display method and device, computer equipment and storage medium
CN114387398A (en) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 Three-dimensional scene loading method, loading device, electronic equipment and readable storage medium
CN114387376A (en) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 Rendering method and device of three-dimensional scene, electronic equipment and readable storage medium
CN115423920B (en) * 2022-09-16 2024-01-30 如你所视(北京)科技有限公司 VR scene processing method, device and storage medium

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
WO2009093136A2 (en) * 2008-01-24 2009-07-30 Areograph Ltd Image capture and motion picture generation
CN103885788A (en) * 2014-04-14 2014-06-25 焦点科技股份有限公司 Dynamic WEB 3D virtual reality scene construction method and system based on model componentization
CN107871338A (en) * 2016-09-27 2018-04-03 重庆完美空间科技有限公司 Real-time, interactive rendering intent based on scene decoration
CN107168780A (en) * 2017-04-06 2017-09-15 北京小鸟看看科技有限公司 Loading method, equipment and the virtual reality device of virtual reality scenario
US10776989B1 (en) * 2019-05-13 2020-09-15 Robert Edwin Douglas Method and apparatus for prioritized volume rendering
CN112948043A (en) * 2021-03-05 2021-06-11 吉林吉动盘古网络科技股份有限公司 Fine-grained Web3D online visualization method for large-scale building scene
CN112891944A (en) * 2021-03-26 2021-06-04 腾讯科技(深圳)有限公司 Interaction method and device based on virtual scene, computer equipment and storage medium
CN114387400A (en) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 Three-dimensional scene display method, display device, electronic equipment and server

Non-Patent Citations (1)

Title
"Streaming Transmission Strategy for Large-Scale 3D City Scene Models"; Ren Huiling et al.; Computer Simulation; Vol. 30, No. 11; pp. 209-213, Sections 1-6 *

Also Published As

Publication number Publication date
WO2024055462A1 (en) 2024-03-21
CN115423920A (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN111127627B (en) Model display method and device in three-dimensional house model
US8989440B2 (en) System and method of room decoration for use with a mobile device
CN111178191B (en) Information playing method and device, computer readable storage medium and electronic equipment
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
US11227437B2 (en) Three-dimensional model constructing method, apparatus, and system
CN103157281B (en) Display method and display equipment of two-dimension game scene
CN115423920B (en) VR scene processing method, device and storage medium
US20190139322A1 (en) Technologies for composing a virtual reality setting in a mobile computing environment
US11734929B2 (en) Enhanced product visualization technology with web-based augmented reality user interface features
CN114511668A (en) Method, device and equipment for acquiring three-dimensional decoration image and storage medium
CN115097975A (en) Method, apparatus, device and storage medium for controlling view angle conversion
CN112465971B (en) Method and device for guiding point positions in model, storage medium and electronic equipment
CN111562845B (en) Method, device and equipment for realizing three-dimensional space scene interaction
WO2016196407A1 (en) Path-linked viewpoints from point of interest
CN114463104B (en) Method, apparatus, and computer-readable storage medium for processing VR scene
CN115512046B (en) Panorama display method and device for points outside model, equipment and medium
CN112651801B (en) Method and device for displaying house source information
CN115079921A (en) Method, device, equipment and storage medium for controlling loading of scene information
CN115454255B (en) Switching method and device for article display, electronic equipment and storage medium
CN114117161A (en) Display method and device
CN113112613B (en) Model display method and device, electronic equipment and storage medium
CN117351177A (en) Virtual object display method, device and storage medium in three-dimensional scene
US20230333727A1 (en) Immersive gallery with linear scroll
AU2021240312A1 (en) Technology configured to enable shared experiences whereby multiple users engage with 2d and/or 3d architectural environments whilst in a common physical location
CN117148966A (en) Control method, control device, head-mounted display device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant