WO2024055462A1 - VR scene processing method and apparatus, electronic device and storage medium - Google Patents

VR scene processing method and apparatus, electronic device and storage medium (Download PDF)

Info

Publication number
WO2024055462A1
WO2024055462A1, PCT/CN2022/140018, CN2022140018W
Authority
WO
WIPO (PCT)
Prior art keywords
point
scene
rendering
initial
terminal device
Prior art date
Application number
PCT/CN2022/140018
Other languages
English (en)
French (fr)
Inventor
杨光
白杰
李成杰
李勇
Original Assignee
如你所视(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 如你所视(北京)科技有限公司
Publication of WO2024055462A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Definitions

  • The present disclosure relates to virtual reality technology, and in particular to a VR scene processing method and apparatus, an electronic device, and a storage medium.
  • A VR (Virtual Reality) scene, also called a panoramic scene, is an interactive three-dimensional scene that integrates multi-source information and is built from panoramas through computer image processing technology. It can present a stereoscopic scene more realistically and comprehensively through a 720° viewing angle and has been widely used in various fields, such as furniture display, tourist attraction display, virtual exhibition halls, and digital museums, as well as VR car showrooms and VR house viewing.
  • To provide users with a more realistic VR scene display, rendering has become an important technology. In the related art, a VR scene is usually rendered in real time when the user views it, and the corresponding rendered VR scene is then displayed to the user; however, real-time rendering forces the user to wait for a long time.
  • Embodiments of the present disclosure provide a VR scene processing method, device, electronic device, and storage medium to effectively reduce user waiting time and improve user experience.
  • One aspect of the embodiments of the present disclosure provides a VR scene processing method, including: in response to a VR viewing request sent by a user's terminal device, determining the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered; determining the first point of the initial VR scene; rendering the first point to obtain a first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point; and sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user.
  • Another aspect of the embodiments of the present disclosure provides a VR scene processing device, including: a first determination module configured to determine, in response to a VR viewing request sent by a user's terminal device, the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered; a second determination module configured to determine the first point of the initial VR scene; a first processing module configured to render the first point and obtain the first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point; and a first sending module configured to send the first rendering result to the terminal device for display to the user.
  • A further aspect of the embodiments of the present disclosure provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the VR scene processing method described in any embodiment of the present disclosure is implemented.
  • Yet another aspect of the embodiments of the present disclosure provides an electronic device, including: a memory configured to store a computer program product; and a processor configured to execute the computer program product stored in the memory, where the VR scene processing method described in any embodiment of the present disclosure is implemented when the computer program product is executed.
  • According to the VR scene processing method, device, electronic device, and storage medium provided by the present disclosure, when the user requests VR viewing, the first point of the initial VR scene is rendered first; once the first point has been rendered, its rendering result is sent to the user, so the user can quickly view the VR scene from the first point of view. This effectively reduces the user's waiting time, improves the user experience, and solves problems such as long user waiting times.
  • Figure 1 is an exemplary application scenario of the VR scene processing method provided by the present disclosure
  • Figure 2 is a schematic flowchart of a VR scene processing method provided by an exemplary embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a VR scene processing method provided by another exemplary embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure
  • Figure 5 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
  • Figure 6 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
  • Figure 7 is a schematic flowchart of step 203 provided by an exemplary embodiment of the present disclosure.
  • Figure 8 is a schematic structural diagram of a VR scene processing device provided by an exemplary embodiment of the present disclosure.
  • Figure 9 is a schematic structural diagram of a VR scene processing device provided by another exemplary embodiment of the present disclosure.
  • Figure 10 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
  • In the embodiments of the present disclosure, "plural" may refer to two or more, and "at least one" may refer to one, two, or more.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems, and so on.
  • Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system executable instructions (such as program modules) being executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., that perform specific tasks or implement specific abstract data types.
  • the computer system/server may be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices linked through a communications network.
  • program modules may be located on local or remote computing system storage media including storage devices.
  • Figure 1 is an exemplary application scenario of the VR scene processing method provided by the present disclosure.
  • In the real estate field, when a user wants to buy or rent a house, the user can view houses through an application (APP) installed on a terminal device and can view the real three-dimensional scene inside a house through VR house viewing. With the VR scene processing method of the present disclosure, after the user selects the house type to view on the terminal device, VR viewing can be triggered; the terminal device obtains the user's operation information and sends a VR viewing request to the server, and the VR viewing request can include the target house type information selected by the user. The server can determine the corresponding initial VR scene based on the target house type information in the user's VR viewing request. The initial VR scene can be a VR scene to be rendered that is obtained after occupancy is determined in advance, personalization is applied, and each point in the scene is determined, where occupancy refers to determining the positions occupied by items in the scene, personalization refers to determining the style and other related information of the items in the scene, and a point is a wandering point in the scene, that is, a virtual position to which the user can move within the scene. The first point of the initial VR scene can then be determined and rendered to obtain the rendered first VR scene corresponding to the first point, which is sent to the terminal device; the terminal device displays the rendered first VR scene corresponding to the first point to the user, so that the user can quickly view the VR scene at the first point, which effectively reduces the user's waiting time and thus improves the user experience. The points other than the first point can be rendered after the rendering of the first point is completed, while the user views the VR scene of the first point, and their results can be sent directly to the terminal device; by the time the user has finished viewing the VR scene of the first point, the rendering of the other points may already be completed, so the user can view them directly, providing the fully rendered VR scene without the user being aware of any wait.
  • The VR scene processing method of the present disclosure is not limited to the real estate field; it can also be applied to any other field involving VR scenes, such as tourist attraction displays, virtual exhibition halls, digital museums, furniture displays, VR car showrooms, VR decoration viewing, and so on, which can be set according to actual needs.
  • FIG. 2 is a schematic flowchart of a VR scene processing method provided by an exemplary embodiment of the present disclosure. The method includes the following steps:
  • Step 201 In response to the VR viewing request sent by the user's terminal device, determine the initial VR scene corresponding to the VR viewing request, and the initial VR scene is the initial scene to be rendered.
  • The user can be a user in any field who needs to view a VR scene, for example a house-hunting user viewing the VR scene of a house through VR house viewing, a tourist viewing the VR scene of a tourist attraction through VR, or a visitor to a virtual exhibition hall viewing its VR scene through VR, and so on.
  • the terminal device can be a mobile phone, tablet, or any other device that supports VR scene display.
  • The initial VR scene can be obtained in advance and stored in correspondence with its actual scene. For example, in the real estate field, a corresponding initial VR scene can be obtained for each house type in advance and stored together with the house type information. When the user selects a house whose VR scene is to be viewed, the corresponding initial VR scene can be determined based on the house type information or the house information, which can be set according to actual needs.
  • the user can trigger through the terminal device to send a VR viewing request to the device or server that executes the method of the present disclosure.
  • The VR viewing request can include associated information corresponding to the VR scene the user wants to view. The associated information can be set according to actual needs; for example, it can be target house information or target house type information, or it can be virtual exhibition hall identification information, tourist attraction identification information, digital museum identification information, and so on. There is no specific restriction, as long as the corresponding initial VR scene can be determined from the associated information.
  • step 201 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first determination module run by the processor.
  • Step 202 Determine the first point of the initial VR scene.
  • the first point can be determined according to the actual needs of the user, or according to preset rules.
  • the preset rules can be set according to actual needs.
  • the first point can be randomly determined, that is, a random point among multiple points in the initial VR scene.
  • As another example, a preset first point can be used, such as the point corresponding to a certain functional room of the house (for example the living room, bedroom, dining room, or kitchen), without specific limitation.
  • the actual needs of the user can be specified when the user triggers the VR viewing request, or the user needs can be determined based on the analysis of user information authorized by the user or user historical operation information, and then the first point is determined based on the user needs.
  • the specific method of determining the first point is not limited.
  • step 202 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by the second determination module run by the processor.
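  • As an illustrative, non-authoritative sketch of how such a selection rule might look in code (the point-to-room mapping, the function name, and the priority order are assumptions for illustration, not part of this disclosure), the first point could be chosen roughly as follows:

```python
import random
from typing import Optional

def choose_first_point(points: dict[str, str],
                       area_of_interest: Optional[str] = None,
                       preset_room: Optional[str] = None) -> str:
    """Pick the first point to render.

    `points` maps a point id to the functional room it belongs to, e.g.
    {"p1": "living_room", "p2": "kitchen"}. Priority follows the text above:
    an area of interest named by the user, then a preset room, then a random point.
    """
    for wanted in (area_of_interest, preset_room):
        if wanted:
            for point_id, room in points.items():
                if room == wanted:
                    return point_id
    return random.choice(list(points))

# Example: the user asked to see the kitchen first.
print(choose_first_point({"p1": "living_room", "p2": "kitchen"},
                         area_of_interest="kitchen"))  # p2
```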
  • Step 203 Render the first point to obtain a first rendering result corresponding to the first point.
  • the first rendering result includes the rendered first VR scene corresponding to the first point.
  • the first point can be rendered first, and after the rendering of the first point is completed, the first rendered VR scene corresponding to the first point is obtained.
  • Rendering refers to assigning certain materials, colors, lighting and other attributes to various elements in the scene and then performing rendering calculations to obtain image effects.
  • the specific rendering method can adopt any implementable method, and is not limited by this disclosure.
  • In an optional example, panoramic images corresponding to each real scene can be generated in advance, and the panoramic images of each real scene can include the panoramic images corresponding to all points of the VR scene of that real scene. The panoramic image corresponding to the first point can then be obtained, and the first rendering result corresponding to the first point can be obtained by rendering the panoramic image of the first point.
  • Rendering the panoramic image at the first point may include assigning certain attributes to each target element in the panoramic image. For example, if the target element is a wall and the target attribute is yellow, then the wall in the rendered panoramic image will be yellow.
  • each element in the panoramic image corresponding to the first point can be rendered to obtain a rendered panoramic image, and the rendered panoramic image corresponding to the first point constitutes the first rendered VR scene of the first point.
  • the rendering of the panoramic image can be implemented based on preconfigured rendering data.
  • the rendering data includes attribute information corresponding to each element, which can be set according to actual needs.
  • the panoramic image at each point can also be rendered according to the style specified by the user, which can be set according to actual needs.
  • step 203 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by the first processing module run by the processor.
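  • The attribute-assignment idea described above can be sketched as follows; the Element structure, the rendering_data dictionary, and the color-only attribute are simplifications assumed for illustration rather than the actual rendering pipeline:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str        # e.g. "wall", "floor"
    color: str = ""  # attribute filled in by rendering

def render_panorama(elements: list[Element],
                    rendering_data: dict[str, str]) -> list[Element]:
    """Assign a pre-configured attribute (here just a color) to each target
    element of a point's panoramic image, mirroring the wall-turns-yellow
    example in the text above."""
    for element in elements:
        element.color = rendering_data.get(element.name, "default")
    return elements

panorama = [Element("wall"), Element("floor")]
rendered = render_panorama(panorama, {"wall": "yellow", "floor": "oak"})
print([(e.name, e.color) for e in rendered])  # [('wall', 'yellow'), ('floor', 'oak')]
```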
  • Step 204 Send the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user.
  • After the rendering of the first point is completed, the first rendering result corresponding to the first point can be sent to the terminal device quickly and promptly. The terminal device can then display the first VR scene corresponding to the first point to the user, allowing the user to quickly view the first VR scene at the first point, which effectively reduces the user's waiting time and improves the user experience.
  • step 204 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first sending module run by the processor.
  • According to the VR scene processing method provided by the embodiments of the present disclosure, when the user requests VR viewing, the first point of the initial VR scene is rendered first; once the first point has been rendered, its rendering result is sent to the user, so the user can quickly view the VR scene from the first point of view. This effectively reduces the user's waiting time, improves the user experience, and solves problems such as the long user waiting times of the prior art.
  • FIG. 3 is a schematic flowchart of a VR scene processing method provided by another exemplary embodiment of the present disclosure.
  • the method further includes:
  • Step 205 Render other points of the initial VR scene except the first point, and obtain second rendering results corresponding to each other point.
  • The second rendering result corresponding to each of the other points includes the rendered second VR scene corresponding to that point. Since the initial VR scene corresponding to each real scene usually includes multiple points, after the rendering of the first point has been completed with priority, the points other than the first point can continue to be rendered to obtain their respective second rendering results; the rendering principle of the other points is the same as that of the first point described above and is not repeated here. In this embodiment, the other points are rendered while the user is viewing the first VR scene of the first point; therefore, the rendering of the other points may already be completed before the user has finished browsing the first VR scene, providing the user with a complete VR scene without the user being aware of it.
  • Step 205 and step 204 may be performed in no particular order.
  • step 205 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a second processing module run by the processor.
  • Step 206 Send the second rendering results corresponding to each other point to the terminal device.
  • After the second rendering results corresponding to the other points have been obtained, they can be sent to the terminal device. When the user clicks another point in the first VR scene, the terminal device can directly use the received second rendering result of that point to display the second VR scene corresponding to that point to the user.
  • step 206 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by the second sending module run by the processor.
  • In the present disclosure, the first rendering result of the first point is sent to the terminal device first for the user to view; while the user browses the first VR scene corresponding to the first point, the other points are rendered, and their second rendering results are sent directly to the terminal device for storage once obtained. When the user clicks another point in the first VR scene, the second VR scene of that point can be displayed to the user quickly, so the user is provided with the overall VR scene while only perceiving the waiting time of the first point, which effectively reduces the user's waiting time and improves the user experience. Moreover, the rendering does not need to be prepared in advance, which can reduce the consumption of server resources and thus reduce costs.
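  • A minimal sketch of this first-point-first scheduling, assuming stand-in render_point and send_to_terminal functions in place of the real rendering and network transfer, might look like this:

```python
import threading

def render_point(point_id: str) -> dict:
    # Stand-in for rendering one point's panorama into a VR scene.
    return {"point": point_id, "scene": f"rendered VR scene for {point_id}"}

def send_to_terminal(result: dict) -> None:
    # Stand-in for pushing a rendering result to the user's terminal device.
    print("sent:", result["point"])

def handle_viewing_request(first_point: str, other_points: list[str]) -> threading.Thread:
    """Render and send the first point immediately, then render the remaining
    points in a background thread, pushing each result as soon as it is ready."""
    send_to_terminal(render_point(first_point))

    def render_rest() -> None:
        for point_id in other_points:
            send_to_terminal(render_point(point_id))

    worker = threading.Thread(target=render_rest)
    worker.start()
    return worker

background = handle_viewing_request("living_room", ["kitchen", "bedroom", "bathroom"])
background.join()  # only so this demo waits for the background rendering to finish
```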
  • In an optional example, after the VR viewing request sent by the terminal device is received, rendering waiting interface data can also be sent to the terminal device, so that the terminal device displays a rendering waiting interface to the user until the first point has been rendered and the first VR scene corresponding to the first point is shown; alternatively, after the first point has been rendered, a confirmation interface can be shown, and the user enters the first VR scene by clicking a button such as "Enter VR" or "Confirm". The specific content of the rendering waiting interface can be set according to actual needs and is not limited by the present disclosure.
  • Figure 4 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
  • step 205 renders other points of the initial VR scene except the first point, and obtains second rendering results corresponding to each other point, including:
  • Step 2051a Render the other points one by one in ascending order of the connectivity distance between each other point and the first point, and obtain the second rendering result corresponding to each other point.
  • The connectivity distance between each other point and the first point can be determined in advance when the points are determined, so the rendering order of the other points can be determined based on the connectivity distances, and the other points are rendered in that order to obtain their respective second rendering results; the specific rendering operation for each point is the same as that of the first point described above and is not repeated here.
  • step 2051a may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by a second processing module run by the processor.
  • Step 206 of sending the second rendering results corresponding to each other point to the terminal device includes:
  • Step 2061a Each time the rendering of another point is completed, the second rendering result of the other point is sent to the terminal device.
  • During the sequential rendering of the other points, in order to allow the user to quickly view the other points near the first point, the second rendering result of each other point is sent to the terminal device as soon as its rendering is completed; when the user clicks that point in the first VR scene, the user can quickly move to it and browse its second VR scene. For example, if the first point is the living room, the other points visible within the panoramic view of the first point are displayed in the first VR scene of the living room; however, while the rendering of such a point has not been completed, the user cannot enter its second VR scene by clicking it, and loading prompt information can be displayed at that point. For example, in the first VR scene a point is represented by a circle, and when the user clicks a point whose rendering is not finished, a rotating circle is displayed there to indicate that it is loading, which can be set according to actual needs. When the rendering of that point is completed and the terminal device receives its second rendering result, the loading prompt is ended and the display switches to the second VR scene of that point, so that the user can browse it.
  • step 2061a may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the second sending module run by the processor.
  • In the present disclosure, the other points are rendered in ascending order of their connectivity distance to the first point. Because the connectivity distance reflects the order in which the user can walk from the first point to the other points, this allows the user to quickly move to the points near the first point, and so on, ensuring that the points near the point the user is currently browsing are rendered first. This minimizes the time the user waits for loading and further improves the user experience.
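  • Assuming the connectivity distances are available as a simple mapping (as the text above suggests they are precomputed when the points are determined), the rendering order could be derived as in this sketch:

```python
def connectivity_order(distances: dict[str, float]) -> list[str]:
    """Sort the other points by their connectivity distance to the first point,
    smallest first; each point would then be rendered in this order and its
    second rendering result sent to the terminal device as soon as it is done."""
    return sorted(distances, key=distances.get)

print(connectivity_order({"kitchen": 2.0, "balcony": 6.5, "bedroom": 3.5}))
# ['kitchen', 'bedroom', 'balcony']
```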
  • Figure 5 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
  • step 205 renders other points of the initial VR scene except the first point, and obtains second rendering results corresponding to each other point, including:
  • Step 2051b Render each other point of the initial VR scene in parallel to obtain second rendering results corresponding to each other point.
  • To further improve the user experience, the present disclosure can also render the other points in parallel after the rendering of the first point is completed, so as to finish rendering them quickly and provide the user with the VR scenes of the other points as soon as possible, allowing the user to move and browse freely among the points. In the parallel rendering process, the rendering principle of each point is the same as that of the first point described above and is not repeated here.
  • step 2051b may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by a second processing module run by the processor.
  • This disclosure further improves rendering efficiency and reduces rendering time by rendering other points in parallel after completing the rendering of the first point, thereby further improving user experience.
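  • A hedged sketch of the parallel variant, using a thread pool as one possible concurrency mechanism (the disclosure does not prescribe one), could be:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def render_point(point_id: str) -> dict:
    # Stand-in for rendering one point's panorama into a VR scene.
    return {"point": point_id, "scene": f"rendered VR scene for {point_id}"}

def render_other_points_in_parallel(other_points: list[str]) -> None:
    """Render all remaining points concurrently and push each second
    rendering result to the terminal device as soon as it completes."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(render_point, p) for p in other_points]
        for future in as_completed(futures):
            print("sent:", future.result()["point"])

render_other_points_in_parallel(["kitchen", "bedroom", "bathroom", "balcony"])
```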
  • In an optional example, after the other points of the initial VR scene have been rendered and the second rendering results corresponding to the other points have been obtained, the method further includes:
  • Step 301 Perform three-dimensional model rendering based on the floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point, and the item occupancy information, and obtain a third rendering result, where the third rendering result includes the rendered target three-dimensional model.
  • the floor plan can be obtained and stored in advance, and the panoramic image corresponding to each point can be a rendered panoramic image generated during the point rendering process, which can be directly obtained in this step.
  • the item occupancy information can be obtained in the early occupancy stage.
  • the item occupancy information may include the position of the item in the three-dimensional model, the shape, size and other information of the item, so as to be able to describe the spatial area occupied by the item in the three-dimensional model.
  • Specifically, corresponding three-dimensional model data can be determined based on the floor plan, where the three-dimensional model data includes a set of three-dimensional coordinate points. The three-dimensional scene of the floor plan can be built from the three-dimensional model data, the three-dimensional scenes of the items can be built from the panoramic image of each point and the item occupancy information, and the floor-plan scene and the item scenes are combined to obtain the rendered target three-dimensional model. The specific rendering principle of the three-dimensional model is not described again here.
  • step 301 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by a third processing module run by the processor.
  • Step 302 In response to the three-dimensional model viewing request sent by the terminal device, send the third rendering result to the terminal device to display the target three-dimensional model to the user.
  • After the rendering of all points is completed and before the three-dimensional model rendering is finished, a radar chart serving as the entrance to the 3D model can be displayed on the VR scene interface shown on the user's terminal device, for example in the upper right corner of the interface; at this stage, the click operation on the radar chart may not yet be responded to. Once the three-dimensional model rendering is completed and the user clicks the radar chart, the third rendering result obtained by the rendering can be sent to the terminal device, and the terminal device can display the target three-dimensional model of the house currently being viewed to the user.
  • step 302 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a third sending module run by the processor.
  • In an optional example, after the user clicks the radar chart, the user can enter either the 3D model interface, which displays the target 3D model, or the floor plan interface, which displays the floor plan; the details can be set according to actual needs.
  • the present disclosure provides users with a three-dimensional model browsing function by rendering the three-dimensional model after completing the rendering of all points, allowing the user to browse the global three-dimensional space layout, further improving the user experience.
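  • Purely for illustration, the combination of floor plan, per-point panoramas, and item occupancy information into a target model might be organized as below; the data structures and the absence of real meshing or texturing are assumptions of this sketch, not the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ItemOccupancy:
    name: str
    position: tuple  # (x, y, z) position of the item in the model
    size: tuple      # (width, depth, height) of the space it occupies

@dataclass
class TargetModel:
    room_mesh: list = field(default_factory=list)    # geometry built from the floor plan
    item_meshes: list = field(default_factory=list)  # item volumes from the occupancy info
    textures: dict = field(default_factory=dict)     # per-point rendered panoramas

def build_target_model(floor_plan_points: list, panoramas: dict, items: list) -> TargetModel:
    """Combine the floor-plan coordinate set, the rendered panoramas of each
    point, and the item occupancy information into a single target 3D model;
    real meshing and texturing are elided."""
    model = TargetModel()
    model.room_mesh = list(floor_plan_points)
    model.item_meshes = [(item.name, item.position, item.size) for item in items]
    model.textures = dict(panoramas)
    return model

model = build_target_model(
    floor_plan_points=[(0, 0, 0), (5, 0, 0), (5, 4, 0), (0, 4, 0)],
    panoramas={"living_room": "rendered living room panorama"},
    items=[ItemOccupancy("sofa", (1.0, 2.0, 0.0), (2.0, 0.9, 0.8))],
)
print(len(model.item_meshes))  # 1
```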
  • FIG. 6 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
  • the method of the present disclosure also includes:
  • Step 401 Receive a viewing request for a first other point other than the first point sent by the terminal device.
  • The first other point can be any of the points other than the first point. When the user is browsing the first VR scene of the first point, the first VR scene displays the other points visible within the panoramic view of the first point; the user can click any of these points, and the clicked point is referred to as the first other point. The terminal device obtains the user's click operation; if it has already received the second rendering result corresponding to the first other point, it can immediately display the second VR scene corresponding to the first other point to the user; otherwise, the terminal device is triggered to send a viewing request for the first other point to the server.
  • step 401 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first receiving module run by the processor.
  • Step 402 In response to the rendering of the first other point not being completed, send loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays the loading prompt information at the first other point based on that information.
  • The specific content of the loading prompt information can be set according to actual needs; for example, it can be text, a picture, or an animated graphic, as long as it can prompt the user that loading is in progress, and the present disclosure does not limit it. If the rendering of the first other point has not been completed, the loading prompt information corresponding to the first other point can be sent to the terminal device, and the terminal device can display it at the first other point to prompt the user to wait for loading.
  • step 402 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the fourth sending module run by the processor.
  • Step 403 In response to the completion of rendering at the first other point, send the second rendering result corresponding to the first other point to the terminal device to display the second VR scene corresponding to the first other point to the user.
  • When the rendering of the first other point is completed, its second rendering result is sent to the terminal device in time, and the terminal device promptly displays the second VR scene corresponding to the first other point to the user.
  • step 403 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the fifth sending module run by the processor.
  • In an optional example, the loading prompt information for each of the other points can also be sent to the terminal device in advance; when the user clicks another point whose rendering has not been completed, the terminal device displays the loading prompt information of that point to the user on its own.
  • In the present disclosure, when the user clicks a point whose rendering has not been completed, loading prompt information is displayed to prompt the user that the current point is loading; the user can view other points first or wait for this point. After the rendering of this point is completed, its second rendering result is sent to the terminal device in a timely manner to display the second VR scene of the point to the user, further improving the user experience.
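  • The server-side branching between a loading prompt and a finished second rendering result could be sketched as follows, with the message format being a hypothetical placeholder:

```python
def handle_other_point_request(point_id: str, finished: dict[str, dict]) -> dict:
    """React to a viewing request for a point other than the first one: if its
    rendering is finished, return the second rendering result; otherwise return
    a loading prompt for the terminal device to show at that point."""
    if point_id in finished:
        return {"type": "second_rendering_result", **finished[point_id]}
    return {"type": "loading_prompt", "point": point_id, "message": "loading..."}

finished = {"kitchen": {"point": "kitchen", "scene": "rendered kitchen scene"}}
print(handle_other_point_request("kitchen", finished)["type"])  # second_rendering_result
print(handle_other_point_request("balcony", finished)["type"])  # loading_prompt
```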
  • the VR viewing request includes the target house type information and area of interest information selected by the user; in step 201, in response to the VR viewing request sent by the user's terminal device, determining the initial VR scene corresponding to the VR viewing request includes:
  • Step 2011 In response to the VR viewing request sent by the user's terminal device, determine the initial VR scene based on the target house type information included in the VR viewing request.
  • The target house type information can include a target house type identifier or a target house type diagram, which can be set according to actual needs. The target house type identifier can be identification information that is set in advance for each house type and uniquely identifies it, such as a house type ID, and the specific identification method is not limited.
  • the area of interest information may include information about functional rooms that the user specifies to pay attention to, such as kitchen, living room, master bedroom, etc., and is not specifically limited.
  • the target house type information can be the house type information corresponding to the target house selected by the user when browsing the house list.
  • The area of interest information can be obtained through a filtering function provided for the user to select while browsing the house list, or, after the user has selected the target house and clicks to view its VR scene, through a pop-up interface that lets the user select the area of interest; the specific triggering method can be set according to actual needs and is not limited by the present disclosure.
  • Each house type is pre-configured with a corresponding initial VR scene, which is stored in association with the target house type information. After obtaining the target house type information, the corresponding initial VR scene can be determined based on the target house type information.
  • step 2011 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first determination module run by the processor.
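  • Assuming the pre-configured initial VR scenes are stored in a simple lookup keyed by a house type ID (the key names and scene structure here are illustrative only), the resolution described above might be sketched as:

```python
INITIAL_SCENES = {
    # hypothetical pre-built, not-yet-rendered initial VR scenes keyed by house type ID
    "type_A": {"points": ["living_room", "kitchen", "bedroom"]},
    "type_B": {"points": ["living_room", "study"]},
}

def initial_scene_for(viewing_request: dict) -> dict:
    """Resolve the stored initial VR scene from the target house type
    information carried in the VR viewing request."""
    return INITIAL_SCENES[viewing_request["target_house_type"]]

print(initial_scene_for({"target_house_type": "type_A", "area_of_interest": "kitchen"}))
```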
  • Determining the first point of the initial VR scene in step 202 includes:
  • Step 2021 Use the point corresponding to the area of interest information included in the VR viewing request as the first point of the initial VR scene.
  • For example, when the user's area of interest is the kitchen, the point corresponding to the kitchen is used as the first point of the initial VR scene and is rendered first.
  • step 2021 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the second determination module run by the processor.
  • By determining the first point based on the user's area of interest, the present disclosure enables the user to quickly and accurately view the VR scene of the functional room they care about, further improving the user experience.
  • the VR viewing request also includes the decorative style information selected by the user; step 203 renders the first point to obtain the first rendering result corresponding to the first point, including:
  • Step 2031a Render the first point based on the decorative style information included in the VR viewing request, and obtain the first rendering result corresponding to the first point.
  • The decorative style information is similar to the above-mentioned area of interest information; a selection function can be provided to the user at any implementable stage to obtain the decorative style information selected by the user, for example a European or American style or a modern style.
  • Rendering the first point based on the decorative style information may refer to using rendering data corresponding to the decorative style information when rendering the panoramic image of the first point, so that the rendered VR scene meets the user's decorative style requirements.
  • step 2031a may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first processing module run by the processor.
  • In an optional example, for each real scene, panoramic images of the real scene in different decorative styles can be obtained in advance; at rendering time, the panoramic images of the decorative style required by the user can be obtained to form the VR scene of the corresponding point.
  • In an optional example, rendering data of each decorative style can also be obtained and stored in advance; at rendering time, the initial panoramic image is rendered based on the rendering data of the decorative style required by the user to obtain a rendered panoramic image that meets the user's style requirements, which then forms the VR scene of the corresponding point. This can be set according to actual needs and is not limited in the embodiments of the present disclosure.
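  • One possible way to keep pre-stored rendering data per decorative style, assuming a simple dictionary keyed by a style name (the style names and attributes are invented for illustration), is:

```python
STYLE_RENDERING_DATA = {
    # hypothetical pre-stored rendering data (element attributes) per decorative style
    "modern":   {"wall": "white paint", "floor": "grey tile"},
    "european": {"wall": "cream wallpaper", "floor": "parquet"},
}

def rendering_data_for(style: str) -> dict:
    """Pick the pre-configured rendering data matching the decorative style in
    the VR viewing request, falling back to a default style if it is unknown."""
    return STYLE_RENDERING_DATA.get(style, STYLE_RENDERING_DATA["modern"])

print(rendering_data_for("european"))
```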
  • Figure 7 is a schematic flowchart of step 203 provided by an exemplary embodiment of the present disclosure.
  • step 203 renders the first point to obtain the first rendering result corresponding to the first point, including:
  • Step 2031b Obtain the target panoramic image corresponding to the first point.
  • Step 2032b Render the first point based on the target panoramic image, and obtain the first rendering result corresponding to the first point.
  • The target panoramic image can be an initial panoramic image of the first point obtained in advance. Rendering the first point based on the target panoramic image means that the rendered panoramic image corresponding to the first point is formed by rendering the target panoramic image, which then forms the first rendering result corresponding to the first point.
  • steps 2031b and 2032b may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the first processing module run by the processor.
  • Any VR scene processing method provided by the embodiments of the present disclosure can be executed by any appropriate device with data processing capabilities, including but not limited to: terminal devices and servers.
  • any of the VR scene processing methods provided by the embodiments of the present disclosure can be executed by the processor.
  • the processor executes any of the VR scene processing methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in the memory. No further details will be given below.
  • Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 8 is a schematic structural diagram of a VR scene processing device provided by an exemplary embodiment of the present disclosure.
  • the device of this embodiment can be used to implement the corresponding method embodiment of the present disclosure.
  • the device shown in Figure 8 includes: a first determination module 501, a second determination module 502, a first processing module 503 and a first sending module 504.
  • the first determination module 501 is used to determine the initial VR scene corresponding to the VR viewing request in response to the VR viewing request sent by the user's terminal device.
  • the initial VR scene is the initial scene to be rendered;
  • the second determination module 502 is used to determine the first point of the initial VR scene;
  • the first processing module 503 is used to render the first point and obtain the first rendering result corresponding to the first point.
  • the first rendering result includes the rendered first VR scene corresponding to the first point.
  • the first sending module 504 is used to send the first rendering result to the terminal device for display to the user.
  • Figure 9 is a schematic structural diagram of a VR scene processing device provided by another exemplary embodiment of the present disclosure.
  • the device of the present disclosure also includes: a second processing module 505 and a second sending module 506.
  • The second processing module 505 is used to render the points of the initial VR scene other than the first point and obtain the second rendering results corresponding to the other points, where the second rendering result corresponding to each other point includes the rendered second VR scene corresponding to that point; the second sending module 506 is used to send the second rendering results corresponding to the other points to the terminal device.
  • In an optional example, the second processing module 505 is specifically configured to render the other points one by one in ascending order of the connectivity distance between each other point and the first point, and obtain the second rendering results corresponding to the other points.
  • the second sending module 506 is specifically configured to: each time the rendering of another point is completed, send the second rendering result of the other point to the terminal device.
  • the second processing module 505 is specifically configured to: render each other point of the initial VR scene in parallel, and obtain second rendering results corresponding to each other point.
  • the device of the present disclosure further includes: a third processing module 507 and a third sending module 508.
  • the third processing module 507 is used to render the three-dimensional model based on the floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point, and the object occupancy information, to obtain a third rendering result, where the third rendering result includes the rendered target three-dimensional model; the third sending module 508 is used to respond to the three-dimensional model viewing request sent by the terminal device, and send the third rendering result to the terminal device to display the target three-dimensional model to the user.
  • the device of the present disclosure also includes: a first receiving module 601, a fourth sending module 602, and a fifth sending module 603.
  • The first receiving module 601 is used to receive a viewing request, sent by the terminal device, for a first other point other than the first point; the fourth sending module 602 is used to, in response to the rendering of the first other point not being completed, send loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays the loading prompt information at the first other point based on that information; the fifth sending module 603 is used to, in response to the rendering of the first other point being completed, send the second rendering result corresponding to the first other point to the terminal device, so as to display the second VR scene corresponding to the first other point to the user.
  • the VR viewing request includes the target house type information and area of interest information selected by the user; the first determination module 501 is specifically used to: determine the initial VR scene based on the target house type information; the second determination module 502 is specifically used to: The point corresponding to the area of interest information is used as the first point of the initial VR scene.
  • the VR viewing request also includes decoration style information selected by the user; the first processing module 503 is specifically used to: render the first point based on the decoration style information to obtain a first rendering result corresponding to the first point.
  • embodiments of the present disclosure also provide an electronic device, including:
  • a memory configured to store a computer program product;
  • a processor configured to execute a computer program stored in the memory, and when the computer program is executed, implement the VR scene processing method described in any of the above embodiments of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. As shown in Figure 10, an electronic device includes one or more processors and memory.
  • the processor may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
  • the memory may store one or more computer program products, and the memory may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache), etc.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc.
  • One or more computer program products may be stored on the computer-readable storage medium, and the processor may run the computer program products to implement the VR scene processing methods of the various embodiments of the present disclosure described above and/or other desired functionality.
  • the electronic device may further include an input device and an output device, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • the input device may also include, for example, a keyboard, a mouse, and the like.
  • the output device can output various information to the outside, including determined distance information, direction information, etc.
  • the output devices may include, for example, displays, speakers, printers, and communication networks and remote output devices to which they are connected, among others.
  • the electronic device may include any other suitable components depending on the specific application.
  • Embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when executed by a processor, cause the processor to perform the steps of the methods of the various embodiments of the present disclosure described above in this specification.
  • The computer program product may be written with program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • Embodiments of the present disclosure may also be a computer-readable storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, cause the processor to perform the steps of the methods of the various embodiments of the present disclosure described above in this specification.
  • the computer-readable storage medium may be any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, for example, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • the methods and apparatus of the present disclosure may be implemented in many ways.
  • the methods and devices of the present disclosure can be implemented through software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order for the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • the present disclosure may also be implemented as programs recorded in recording media, and these programs include machine-readable instructions for implementing methods according to the present disclosure.
  • the present disclosure also covers recording media storing programs for executing methods according to the present disclosure.
  • each component or each step can be decomposed and/or recombined. These decompositions and/or recombinations should be considered equivalent versions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a VR scene processing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a VR viewing request sent by a user's terminal device, determining the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered; determining the first point of the initial VR scene; rendering the first point to obtain a first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point; and sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user. By rendering the first point with priority and sending its rendering result to the user as soon as it is rendered, the user can quickly view the VR scene from the first point of view, which effectively reduces the user's waiting time, improves the user experience, and solves problems in the prior art such as long user waiting times.

Description

VR scene processing method and apparatus, electronic device, and storage medium
The present disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on September 16, 2022, with application number CN202211134294.8 and entitled "VR scene processing method, apparatus and storage medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to virtual reality technology, and in particular to a VR scene processing method and apparatus, an electronic device, and a storage medium.
Background
A VR (Virtual Reality) scene, also called a panoramic scene, is an interactive three-dimensional scene that integrates multi-source information and is built from panoramas through computer image processing technology. It can present a stereoscopic scene more realistically and comprehensively through a 720° viewing angle and has been widely used in various fields, such as furniture display, tourist attraction display, virtual exhibition halls, and digital museums, as well as VR car showrooms and VR house viewing. To provide users with a more realistic VR scene display, rendering has become an important technology. In the related art, a VR scene is usually rendered in real time when the user views it, and the rendered VR scene is then displayed to the user; however, with real-time rendering the user has to wait for a long time.
Summary
Embodiments of the present disclosure provide a VR scene processing method and apparatus, an electronic device, and a storage medium, so as to effectively reduce the user's waiting time and improve the user experience.
One aspect of the embodiments of the present disclosure provides a VR scene processing method, including:
in response to a VR viewing request sent by a user's terminal device, determining the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered;
determining the first point of the initial VR scene;
rendering the first point to obtain a first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point;
sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user.
Another aspect of the embodiments of the present disclosure provides a VR scene processing apparatus, including:
a first determination module configured to determine, in response to a VR viewing request sent by a user's terminal device, the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered;
a second determination module configured to determine the first point of the initial VR scene;
a first processing module configured to render the first point and obtain the first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point;
a first sending module configured to send the first rendering result to the terminal device for display to the user.
A further aspect of the embodiments of the present disclosure provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the VR scene processing method described in any embodiment of the present disclosure is implemented.
Yet another aspect of the embodiments of the present disclosure provides an electronic device, including:
a memory configured to store a computer program product;
a processor configured to execute the computer program product stored in the memory, where the VR scene processing method described in any embodiment of the present disclosure is implemented when the computer program product is executed.
According to the VR scene processing method and apparatus, electronic device, and storage medium provided by the present disclosure, when the user requests VR viewing, the first point of the initial VR scene is rendered first, and once the first point has been rendered, its rendering result is sent to the user, so the user can quickly view the VR scene from the first point of view. This effectively reduces the user's waiting time, improves the user experience, and solves problems such as long user waiting times.
The technical solutions of the present disclosure are described in further detail below through the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings. Obviously, the drawings described below are only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort:
Figure 1 is an exemplary application scenario of the VR scene processing method provided by the present disclosure;
Figure 2 is a schematic flowchart of a VR scene processing method provided by an exemplary embodiment of the present disclosure;
Figure 3 is a schematic flowchart of a VR scene processing method provided by another exemplary embodiment of the present disclosure;
Figure 4 is a schematic flowchart of a VR scene processing method provided by a further exemplary embodiment of the present disclosure;
Figure 5 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure;
Figure 6 is a schematic flowchart of a VR scene processing method provided by still another exemplary embodiment of the present disclosure;
Figure 7 is a schematic flowchart of step 203 provided by an exemplary embodiment of the present disclosure;
Figure 8 is a schematic structural diagram of a VR scene processing apparatus provided by an exemplary embodiment of the present disclosure;
Figure 9 is a schematic structural diagram of a VR scene processing apparatus provided by another exemplary embodiment of the present disclosure;
Figure 10 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specifically stated, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Those skilled in the art can understand that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules, and represent neither any particular technical meaning nor a necessary logical order between them.
It should also be understood that in the embodiments of the present disclosure, "plural" may refer to two or more, and "at least one" may refer to one, two, or more.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure can generally be understood as one or more, unless it is explicitly defined otherwise or the context indicates the contrary.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" in the present disclosure generally indicates an "or" relationship between the associated objects before and after it.
It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between the embodiments; their identical or similar parts can be referred to one another and, for brevity, are not described repeatedly.
At the same time, it should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present disclosure or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further discussed in subsequent drawings.
Embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and so on.
Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system. Generally, program modules can include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types. The computer system/server can be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules can be located on local or remote computing system storage media including storage devices.
Figure 1 is an exemplary application scenario of the VR scene processing method provided by the present disclosure.
In the real estate field, when a user wants to buy or rent a house, the user can view houses through an application (APP) installed on a terminal device and can view the real three-dimensional scene inside a house through VR house viewing. With the VR scene processing method of the present disclosure, after the user selects the house type to view on the terminal device, VR viewing can be triggered; the terminal device obtains the user's operation information and sends a VR viewing request to the server, and the VR viewing request can include the target house type information selected by the user. The server can determine the corresponding initial VR scene based on the target house type information in the user's VR viewing request. The initial VR scene can be a VR scene to be rendered that is obtained after occupancy is determined in advance, personalization is applied, and each point in the scene is determined, where occupancy refers to determining the positions occupied by items in the scene, personalization refers to determining the style and other related information of the items in the scene, and a point is a wandering point in the scene, that is, a virtual position to which the user can move within the scene. The first point of the initial VR scene can then be determined and rendered to obtain the rendered first VR scene corresponding to the first point, which is sent to the terminal device; the terminal device displays the rendered first VR scene corresponding to the first point to the user, so that the user can quickly view the VR scene at the first point, which effectively reduces the user's waiting time and thus improves the user experience. The points other than the first point can be rendered after the rendering of the first point is completed, while the user is viewing the VR scene of the first point, and can be sent directly to the terminal device once rendered; by the time the user has finished viewing the VR scene of the first point, the rendering of the other points may already be completed, so the user can directly view the VR scenes of the other points, providing the user with the fully rendered VR scene without the user being aware of any wait.
The VR scene processing method of the present disclosure is not limited to the real estate field; it can also be applied to any other field involving VR scenes, such as tourist attraction displays, virtual exhibition halls, digital museums, furniture displays, VR car showrooms, VR decoration viewing, and so on, which can be set according to actual needs.
Figure 2 is a schematic flowchart of a VR scene processing method provided by an exemplary embodiment of the present disclosure. The method includes the following steps:
Step 201: in response to a VR viewing request sent by the user's terminal device, determine the initial VR scene corresponding to the VR viewing request, where the initial VR scene is the initial scene to be rendered.
The user can be a user in any field who needs to view a VR scene, for example a house-hunting user viewing the VR scene of a house through VR house viewing, a tourist viewing the VR scene of a tourist attraction through VR, or a visitor to a virtual exhibition hall viewing its VR scene through VR, and so on. The terminal device can be a mobile phone, a tablet, or any other device that supports VR scene display. The initial VR scene can be obtained in advance and stored in correspondence with its actual scene. For example, in the real estate field, a corresponding initial VR scene can be obtained for each house type in advance and stored together with the house type information. When the user selects a house whose VR scene is to be viewed, the corresponding initial VR scene can be determined based on the house type information of the house or the house information, which can be set according to actual needs. The user can trigger, through the terminal device, the sending of a VR viewing request to the apparatus or server that executes the method of the present disclosure. The VR viewing request can include associated information corresponding to the VR scene the user wants to view; the associated information can be set according to actual needs, for example target house information or target house type information, or virtual exhibition hall identification information, tourist attraction identification information, digital museum identification information, and so on, as long as the corresponding initial VR scene can be determined from the associated information, without specific limitation.
In an optional example, step 201 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the first determination module run by the processor.
Step 202: determine the first point of the initial VR scene.
The first point can be determined according to the actual needs of the user, or according to preset rules. The preset rules can be set according to actual needs; for example, the first point can be determined randomly, that is, a random one of the multiple points in the initial VR scene is used as the first point; as another example, a preset first point can be set, such as the point corresponding to a certain functional room of the house (for example the living room, bedroom, dining room, or kitchen), without specific limitation. The actual needs of the user can be specified by the user when triggering the VR viewing request, or the user's needs can be determined by analyzing user information authorized by the user or the user's historical operation information, and the first point is then determined based on the user's needs. The specific way of determining the first point is not limited.
In an optional example, step 202 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second determination module run by the processor.
Step 203: render the first point to obtain a first rendering result corresponding to the first point, where the first rendering result includes the rendered first VR scene corresponding to the first point.
After the first point has been determined, it can be rendered with priority, and after the rendering of the first point is completed, the rendered first VR scene corresponding to the first point is obtained. Rendering refers to assigning attributes such as materials, colors, and lighting to the various elements in the scene and then performing rendering calculations to obtain an image effect. The specific rendering method can adopt any implementable approach and is not limited by the present disclosure.
In an optional example, panoramic images corresponding to each real scene (for example each house, each virtual exhibition hall, each tourist attraction, and so on) can be generated in advance, and the panoramic images of each real scene can include the panoramic images corresponding to all points of the VR scene of that real scene. The panoramic image corresponding to the first point can then be obtained, and the first rendering result corresponding to the first point can be obtained by rendering that panoramic image. Rendering the panoramic image of the first point can include assigning certain attributes to each target element in the panoramic image; for example, if the target element is a wall and the target attribute is yellow, the wall in the rendered panoramic image is yellow. By analogy, rendering each element in the panoramic image corresponding to the first point yields the rendered panoramic image, and the rendered panoramic image corresponding to the first point constitutes the rendered first VR scene of the first point.
In an optional example, the rendering of the panoramic image can be implemented based on pre-configured rendering data, where the rendering data includes the attribute information corresponding to each element and can be set according to actual needs.
In an optional example, the panoramic image of each point can also be rendered according to a style specified by the user, which can be set according to actual needs.
In an optional example, step 203 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the first processing module run by the processor.
Step 204: send the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user.
After the rendering of the first point is completed, the first rendering result corresponding to the first point can be sent to the terminal device quickly and promptly, and the terminal device can display the first VR scene corresponding to the first point to the user, allowing the user to quickly view the first VR scene at the first point, which effectively reduces the user's waiting time and improves the user experience.
In an optional example, step 204 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the first sending module run by the processor.
According to the VR scene processing method provided by the embodiments of the present disclosure, when the user requests VR viewing, the first point of the initial VR scene is rendered first, and once the first point has been rendered, its rendering result is sent to the user, so the user can quickly view the VR scene from the first point of view. This effectively reduces the user's waiting time, improves the user experience, and solves problems such as the long user waiting times of the prior art.
Figure 3 is a schematic flowchart of a VR scene processing method provided by another exemplary embodiment of the present disclosure.
In an optional example, after rendering the first point and obtaining the first rendering result corresponding to the first point in step 203, the method further includes:
Step 205: render the points of the initial VR scene other than the first point, and obtain the second rendering results corresponding to the other points, where the second rendering result corresponding to each other point includes the rendered second VR scene corresponding to that point.
Since the initial VR scene corresponding to each real scene usually includes multiple points, after the rendering of the first point has been completed with priority, the points other than the first point can continue to be rendered to obtain their respective second rendering results; the rendering principle of the other points is the same as that of the first point described above and is not repeated here.
In this embodiment, the other points are rendered while the user is viewing the first VR scene of the first point; therefore, the rendering of the other points may already be completed before the user has finished browsing the first VR scene, providing the user with a complete VR scene without the user being aware of it.
Step 205 and step 204 may be performed in no particular order.
In an optional example, step 205 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second processing module run by the processor.
Step 206: send the second rendering results corresponding to the other points to the terminal device.
After the second rendering results corresponding to the other points have been obtained, they can be sent to the terminal device. When the user clicks another point in the first VR scene, the terminal device can directly use the received second rendering result of that point to display the second VR scene corresponding to that point to the user.
In an optional example, step 206 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second sending module run by the processor.
In the present disclosure, the first rendering result of the first point is sent to the terminal device first for the user to view; while the user browses the first VR scene corresponding to the first point, the other points are rendered, and their second rendering results are sent directly to the terminal device for storage once obtained. When the user clicks another point in the first VR scene, the second VR scene of that point can be displayed to the user quickly, so the user is provided with the overall VR scene while only perceiving the waiting time of the first point, which effectively reduces the user's waiting time and improves the user experience. Moreover, the rendering does not need to be prepared in advance, which can reduce the consumption of server resources and thus reduce costs.
In an optional example, after the VR viewing request sent by the terminal device is received, rendering waiting interface data can also be sent to the terminal device, so that the terminal device displays a rendering waiting interface to the user; the specific content of the rendering waiting interface can be set according to actual needs and is not limited by the present disclosure. The interface is shown until the rendering of the first point is completed and the first VR scene corresponding to the first point is displayed to the user; alternatively, after the rendering of the first point is completed, an enter-VR confirmation interface is shown to the user, and the user clicks a button such as "Enter VR" or "Confirm" in the confirmation interface to enter the first VR scene corresponding to the first point, which can be set according to actual needs.
Figure 4 is a schematic flowchart of a VR scene processing method provided by a further exemplary embodiment of the present disclosure.
In an optional example, rendering the points of the initial VR scene other than the first point and obtaining the second rendering results corresponding to the other points in step 205 includes:
Step 2051a: render the other points one by one in ascending order of the connectivity distance between each other point and the first point, and obtain the second rendering results corresponding to the other points.
The connectivity distance between each other point and the first point can be determined in advance when the points are determined, so the rendering order of the other points can be determined based on the connectivity distances, and the other points are rendered in that order to obtain their respective second rendering results. The specific rendering operation for each point is the same as that of the first point described above and is not repeated here.
In an optional example, step 2051a can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second processing module run by the processor.
Sending the second rendering results corresponding to the other points to the terminal device in step 206 includes:
Step 2061a: each time the rendering of another point is completed, send the second rendering result of that point to the terminal device.
During the sequential rendering of the other points, in order to allow the user to quickly view the other points near the first point, the second rendering result of each other point is sent to the terminal device as soon as its rendering is completed; when the user clicks that point in the first VR scene, the user can quickly move to it and browse its second VR scene. For example, if the first point is the living room, the other points visible within the panoramic view of the first point are displayed in the first VR scene of the living room; however, while the rendering of such a point has not been completed, the user cannot enter its second VR scene by clicking it, and loading prompt information can be displayed at that point. For example, in the first VR scene a point is represented by a circle, and when the user clicks a point whose rendering is not finished, a rotating circle is displayed at that point to indicate that it is loading, which can be set according to actual needs. When the rendering of that point is completed and the terminal device receives its second rendering result, the loading prompt is ended and the display switches to the second VR scene of that point, so that the user can browse it.
In an optional example, step 2061a can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second sending module run by the processor.
In the present disclosure, the other points are rendered in ascending order of their connectivity distance to the first point. Because the connectivity distance reflects the order in which the user can walk from the first point to the other points, this allows the user to quickly move to the points near the first point, and so on, ensuring that the points near the point the user is currently browsing are rendered first, so as to minimize the time the user waits for loading and further improve the user experience.
Figure 5 is a schematic flowchart of a VR scene processing method provided by yet another exemplary embodiment of the present disclosure.
In an optional example, rendering the points of the initial VR scene other than the first point and obtaining the second rendering results corresponding to the other points in step 205 includes:
Step 2051b: render the other points of the initial VR scene in parallel, and obtain the second rendering results corresponding to the other points.
To further improve the user experience, the present disclosure can also render the other points in parallel after the rendering of the first point is completed, so as to finish rendering them quickly and provide the user with the VR scenes of the other points as soon as possible, allowing the user to move and browse freely among the points. In the parallel rendering process, the rendering principle of each point is the same as that of the first point described above and is not repeated here.
In an optional example, step 2051b can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the second processing module run by the processor.
By rendering the other points in parallel after the rendering of the first point is completed, the present disclosure further improves rendering efficiency and reduces rendering time, thereby further improving the user experience.
In an optional example, after rendering the points of the initial VR scene other than the first point and obtaining the second rendering results corresponding to the other points in step 205, the method further includes:
Step 301: perform three-dimensional model rendering based on the floor plan corresponding to the initial VR scene, the panoramic image corresponding to each point, and the item occupancy information, and obtain a third rendering result, where the third rendering result includes the rendered target three-dimensional model.
The floor plan can be obtained and stored in advance, and the panoramic image corresponding to each point can be the rendered panoramic image generated during the point rendering process, which can be obtained directly in this step. The item occupancy information can be obtained in the earlier occupancy stage and can include the position of an item in the three-dimensional model, the shape and size of the item, and other information, so as to describe the spatial region occupied by the item in the three-dimensional model. Specifically, the corresponding three-dimensional model data can be determined based on the floor plan, where the three-dimensional model data includes a set of three-dimensional coordinate points; the three-dimensional scene of the floor plan is built from the three-dimensional model data, the three-dimensional scenes of the items are built from the panoramic image of each point and the item occupancy information, and the floor-plan scene and the item scenes are combined to obtain the rendered target three-dimensional model. The specific rendering principle of the three-dimensional model is not described again.
In an optional example, step 301 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the third processing module run by the processor.
Step 302: in response to a three-dimensional model viewing request sent by the terminal device, send the third rendering result to the terminal device, so as to display the target three-dimensional model to the user.
After the rendering of all points is completed and before the three-dimensional model rendering is finished, a radar chart serving as the entrance to the three-dimensional model can be displayed on the VR scene interface shown on the user's terminal device, for example in the upper right corner of the interface; at this stage, however, the user's click on the radar chart may not yet be responded to. After the three-dimensional model rendering is completed, if the user clicks the radar chart, the third rendering result obtained by the rendering can be sent to the terminal device, and the terminal device can display the target three-dimensional model of the house currently being viewed to the user.
In an optional example, step 302 can be executed by the processor calling corresponding instructions stored in the memory, or can be executed by the third sending module run by the processor.
In an optional example, after the user clicks the radar chart, the user can enter either the three-dimensional model interface, which displays the target three-dimensional model, or the floor plan interface, which displays the floor plan, which can be set according to actual needs.
By performing the three-dimensional model rendering after the rendering of all points is completed, the present disclosure provides the user with a three-dimensional model browsing function, allowing the user to browse the global three-dimensional spatial layout and further improving the user experience.
FIG. 6 is a schematic flowchart of a VR scene processing method provided by a further exemplary embodiment of the present disclosure.
In an optional example, the method of the present disclosure further includes:
Step 401: receiving a viewing request for a first other point, other than the first point, sent by the terminal device.
The first other point may be any one of the points other than the first point. When the user is browsing the first VR scene of the first point, the first VR scene displays the other points visible within the panoramic viewing angle of the first point, and the user may click any of them; the clicked point is referred to as the first other point. When the terminal device detects the user's click, it immediately displays the second VR scene corresponding to the first other point if the second rendering result for that point has already been received; otherwise, the terminal device is triggered to send a viewing request for the first other point to the server.
In an optional example, step 401 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a first receiving module run by the processor.
Step 402: in response to the rendering of the first other point not being completed, sending loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays a loading prompt at the first other point based on the loading prompt information.
The specific content of the loading prompt information may be set according to actual needs; for example, it may be text, a picture, an animation, or the like, as long as it informs the user that loading is in progress, and the present disclosure does not limit it. If the rendering of the first other point has not yet been completed, loading prompt information corresponding to the first other point can be sent to the terminal device, and the terminal device can display the loading prompt at the first other point to prompt the user to wait for loading.
In an optional example, step 402 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a fourth sending module run by the processor.
Step 403: in response to the rendering of the first other point being completed, sending the second rendering result corresponding to the first other point to the terminal device, so as to display the second VR scene corresponding to the first other point to the user.
Once the rendering of the first other point is completed, its second rendering result is promptly sent to the terminal device, and the terminal device promptly displays the second VR scene corresponding to the first other point to the user.
In an optional example, step 403 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a fifth sending module run by the processor.
In an optional example, the loading prompt information for the other points may also be sent to the terminal device in advance, so that when the user clicks a point whose rendering has not been completed, the terminal device displays the loading prompt for that point on its own.
In the present disclosure, when the user clicks a point whose rendering has not been completed, a loading prompt is displayed to inform the user that the point is being loaded; the user may view other points first or wait for that point. Once the rendering of that point is completed, its second rendering result is promptly sent to the terminal device and the second VR scene of that point is displayed to the user, further improving the user experience.
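A server-side sketch of steps 401-402 is shown below, assuming a registry that maps point IDs to completed second rendering results; the registry and the terminal calls are illustrative assumptions. In this sketch, step 403 is carried out by the rendering path itself (see the sketches after steps 2051a and 2051b), which pushes the second rendering result to the terminal device as soon as the clicked point finishes rendering.

```python
def handle_point_view_request(point_id, rendered_results, terminal):
    """Steps 401-402 sketch: rendered_results maps point IDs to second
    rendering results for points whose rendering has finished (assumed)."""
    result = rendered_results.get(point_id)

    if result is None:
        # Step 402: rendering not finished -> send the loading prompt so the
        # terminal device can display it at the clicked point.
        terminal.send_loading_prompt(point_id)
    else:
        # Rendering already finished -> send the second rendering result.
        terminal.send(point_id=point_id, payload=result)
```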
In an optional example, the VR viewing request includes target house-layout information and area-of-interest information selected by the user; step 201 of determining, in response to the VR viewing request sent by the user's terminal device, the initial VR scene corresponding to the VR viewing request includes:
Step 2011: in response to the VR viewing request sent by the user's terminal device, determining the initial VR scene based on the target house-layout information included in the VR viewing request.
The target house-layout information may include a target house-layout identifier or a target floor plan, which may be set according to actual needs. The target house-layout identifier may be identification information set in advance to uniquely identify each house layout, such as a layout ID; the specific identification scheme is not limited. The area-of-interest information may include information on a functional room the user has specified as being of interest, such as the kitchen, living room or master bedroom, without limitation. The target house-layout information may be the layout information of the target house selected by the user while browsing a house list; the area-of-interest information may be selected by the user through a filtering function provided while browsing the house list, or through a pop-up interface presented when the user, having selected a target house, clicks to view the VR scene. The specific triggering method may be set according to actual needs and is not limited by the present disclosure. A corresponding initial VR scene is configured in advance for each house layout and stored in association with the target house-layout information, so that once the target house-layout information is obtained, the corresponding initial VR scene can be determined based on it.
In an optional example, step 2011 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by the first determination module run by the processor.
Step 202 of determining the first point of the initial VR scene includes:
Step 2021: taking the point corresponding to the area-of-interest information included in the VR viewing request as the first point of the initial VR scene.
For example, when the user's area of interest is the kitchen, the point corresponding to the kitchen is taken as the first point of the initial VR scene and is rendered with priority.
In an optional example, step 2021 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by the second determination module run by the processor.
In the present disclosure, the first point is determined based on the user's area of interest, so that the user can quickly and accurately view the VR scene of the functional room of interest, further improving the user experience.
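Steps 2011 and 2021 can be sketched together as follows; the request field names and the lookup table are assumptions made for this sketch rather than identifiers defined in the disclosure.

```python
def resolve_scene_and_first_point(request, scenes_by_layout):
    """Steps 2011 and 2021 sketch: resolve the initial VR scene from the
    target house-layout information and pick the first point from the
    area-of-interest information."""
    layout_id = request["target_layout_id"]         # target house-layout identifier
    area_of_interest = request["area_of_interest"]  # e.g. "kitchen"

    # Step 2011: each house layout has a pre-configured initial VR scene
    # stored in association with its layout information.
    initial_scene = scenes_by_layout[layout_id]

    # Step 2021: the point of the requested functional room becomes the
    # first point, to be rendered with priority.
    first_point = initial_scene.point_for_room(area_of_interest)
    return initial_scene, first_point
```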
In an optional example, the VR viewing request further includes decoration style information selected by the user; step 203 of rendering the first point to obtain the first rendering result corresponding to the first point includes:
Step 2031a: rendering the first point based on the decoration style information included in the VR viewing request, to obtain the first rendering result corresponding to the first point.
Similar to the area-of-interest information described above, the decoration style information may be obtained by providing the user with a selection function at any feasible stage; for example, the user may select a European/American style, a modern style, and so on. Rendering the first point based on the decoration style information may mean that, when the panoramic image of the first point is rendered, the rendering data corresponding to the decoration style information is used, so that the rendered VR scene meets the user's decoration style requirements.
In an optional example, step 2031a may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by the first processing module run by the processor.
In an optional example, panoramic images of each real scene in different decoration styles may be obtained in advance; at rendering time, the panoramic image in the decoration style requested by the user is obtained to form the VR scene of the corresponding point.
In another optional example, rendering data of each decoration style may be obtained and stored in advance; at rendering time, the initial panoramic image is rendered based on the rendering data of the decoration style requested by the user, yielding a rendered panoramic image that meets the user's style requirements and in turn forming the VR scene of the corresponding point. This may be set according to actual needs and is not limited by the embodiments of the present disclosure.
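The two style options above can be sketched in one helper; the styled-panorama table, the per-style rendering data and the apply_style callable are all assumed inputs for illustration only.

```python
def styled_panorama(point, style, styled_panoramas, style_render_data, apply_style):
    """Sketch of the two decoration-style options; all arguments are assumed."""
    # Option 1: panoramas of the real scene were prepared in advance per style.
    key = (point.id, style)
    if key in styled_panoramas:
        return styled_panoramas[key]

    # Option 2: render the initial panorama with the style's rendering data.
    return apply_style(point.initial_panorama, style_render_data[style])
```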
FIG. 7 is a schematic flowchart of step 203 provided by an exemplary embodiment of the present disclosure.
In an optional example, step 203 of rendering the first point to obtain the first rendering result corresponding to the first point includes:
Step 2031b: obtaining a target panoramic image corresponding to the first point.
Step 2032b: rendering the first point based on the target panoramic image, to obtain the first rendering result corresponding to the first point.
The target panoramic image may be the initial panoramic image of the first point obtained in advance. Rendering the first point based on the target panoramic image means rendering the target panoramic image to form the rendered panoramic image corresponding to the first point, which in turn constitutes the first rendering result corresponding to the first point.
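A sketch of steps 2031b and 2032b follows; the panorama_store and render_panorama arguments are placeholders supplied by the caller and are not part of the disclosure.

```python
def render_first_point(scene, first_point, panorama_store, render_panorama):
    """Steps 2031b/2032b sketch: fetch the first point's target panorama and
    render the first point from it."""
    # Step 2031b: obtain the target panoramic image of the first point.
    target_panorama = panorama_store.load(scene.id, first_point.id)

    # Step 2032b: render the panorama to form the rendered panoramic image of
    # the first point, which constitutes the first rendering result.
    rendered = render_panorama(target_panorama)
    return {"point_id": first_point.id, "first_vr_scene": rendered}
```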
In an optional example, steps 2031b and 2032b may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by the first processing module run by the processor.
Any VR scene processing method provided by the embodiments of the present disclosure may be executed by any appropriate device having data processing capability, including but not limited to a terminal device, a server, and the like. Alternatively, any VR scene processing method provided by the embodiments of the present disclosure may be executed by a processor, for example, by the processor invoking corresponding instructions stored in a memory to execute any VR scene processing method mentioned in the embodiments of the present disclosure. This is not repeated below.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
FIG. 8 is a schematic structural diagram of a VR scene processing apparatus provided by an exemplary embodiment of the present disclosure. The apparatus of this embodiment can be used to implement the corresponding method embodiments of the present disclosure. As shown in FIG. 8, the apparatus includes: a first determination module 501, a second determination module 502, a first processing module 503, and a first sending module 504.
The first determination module 501 is configured to determine, in response to a VR viewing request sent by a user's terminal device, an initial VR scene corresponding to the VR viewing request, the initial VR scene being an initial scene to be rendered; the second determination module 502 is configured to determine the first point of the initial VR scene; the first processing module 503 is configured to render the first point to obtain a first rendering result corresponding to the first point, the first rendering result including the rendered first VR scene corresponding to the first point; and the first sending module 504 is configured to send the first rendering result to the terminal device for display to the user.
FIG. 9 is a schematic structural diagram of a VR scene processing apparatus provided by another exemplary embodiment of the present disclosure.
In an optional example, the apparatus of the present disclosure further includes: a second processing module 505 and a second sending module 506.
The second processing module 505 is configured to render the points of the initial VR scene other than the first point to obtain second rendering results respectively corresponding to the other points, where the second rendering result corresponding to each other point includes the rendered second VR scene corresponding to that point; the second sending module 506 is configured to send the second rendering results respectively corresponding to the other points to the terminal device.
In an optional example, the second processing module 505 is specifically configured to: render the other points one by one in ascending order of their connectivity distance from the first point, to obtain the second rendering results respectively corresponding to the other points.
The second sending module 506 is specifically configured to: each time the rendering of one of the other points is completed, send the second rendering result of that point to the terminal device.
In an optional example, the second processing module 505 is specifically configured to: render the other points of the initial VR scene in parallel to obtain the second rendering results respectively corresponding to the other points.
In an implementation of the present disclosure, the apparatus of the present disclosure further includes: a third processing module 507 and a third sending module 508.
The third processing module 507 is configured to perform three-dimensional model rendering based on the floor plan corresponding to the initial VR scene, the panoramic images respectively corresponding to the points, and object occupancy information, to obtain a third rendering result, where the third rendering result includes the rendered target three-dimensional model; the third sending module 508 is configured to send, in response to a three-dimensional model viewing request sent by the terminal device, the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
In an optional example, the apparatus of the present disclosure further includes: a first receiving module 601, a fourth sending module 602, and a fifth sending module 603.
The first receiving module 601 is configured to receive a viewing request, sent by the terminal device, for a first other point other than the first point; the fourth sending module 602 is configured to send, in response to the rendering of the first other point not being completed, loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays the loading prompt at the first other point based on the loading prompt information; and the fifth sending module 603 is configured to send, in response to the rendering of the first other point being completed, the second rendering result corresponding to the first other point to the terminal device, so as to display the second VR scene corresponding to the first other point to the user.
In an optional example, the VR viewing request includes target house-layout information and area-of-interest information selected by the user; the first determination module 501 is specifically configured to: determine the initial VR scene based on the target house-layout information; and the second determination module 502 is specifically configured to: take the point corresponding to the area-of-interest information as the first point of the initial VR scene.
In an optional example, the VR viewing request further includes decoration style information selected by the user; the first processing module 503 is specifically configured to: render the first point based on the decoration style information to obtain the first rendering result corresponding to the first point.
In addition, the embodiments of the present disclosure further provide an electronic device, including:
a memory configured to store a computer program; and
a processor configured to execute the computer program stored in the memory, where when the computer program is executed, the VR scene processing method described in any of the above embodiments of the present disclosure is implemented.
FIG. 10 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. As shown in FIG. 10, the electronic device includes one or more processors and a memory.
The processor may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device to perform desired functions.
The memory may store one or more computer program products and may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program products may be stored on the computer-readable storage medium, and the processor may run the computer program products to implement the VR scene processing methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include an input device and an output device, and these components are interconnected by a bus system and/or another form of connection mechanism (not shown).
In addition, the input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various kinds of information to the outside, including determined distance information, direction information, and the like. The output device may include, for example, a display, a speaker, a printer, a communication network and the remote output devices connected thereto, and the like.
Of course, for simplicity, FIG. 10 shows only some of the components of the electronic device that are relevant to the present disclosure, and components such as buses and input/output interfaces are omitted. In addition, the electronic device may further include any other appropriate components depending on the specific application.
In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product including computer program instructions that, when run by a processor, cause the processor to perform the steps of the methods according to the various embodiments of the present disclosure described in the above parts of this specification.
The computer program product may be written, in any combination of one or more programming languages, with program code for performing the operations of the embodiments of the present disclosure; the programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when run by a processor, cause the processor to perform the steps of the methods according to the various embodiments of the present disclosure described in the above parts of this specification.
The computer-readable storage medium may use any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The basic principles of the present disclosure have been described above with reference to specific embodiments. However, it should be noted that the merits, advantages, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and they should not be regarded as necessarily possessed by each embodiment of the present disclosure. In addition, the specific details disclosed above are provided only for the purpose of illustration and ease of understanding, and are not limiting; they do not limit the present disclosure to being implemented using those specific details.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to one another. As the system embodiments substantially correspond to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for relevant parts.
The block diagrams of the components, apparatuses, devices, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these components, apparatuses, devices, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", and "having" are open-ended terms meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used herein means the phrase "such as but not limited to" and may be used interchangeably therewith.
The methods and apparatuses of the present disclosure may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
It should also be noted that, in the apparatuses, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, alterations, additions, and sub-combinations thereof.

Claims (18)

  1. A VR scene processing method, characterized by comprising:
    determining, in response to a VR viewing request sent by a user's terminal device, an initial VR scene corresponding to the VR viewing request, the initial VR scene being an initial scene to be rendered;
    determining a first point of the initial VR scene;
    rendering the first point to obtain a first rendering result corresponding to the first point, the first rendering result comprising a rendered first VR scene corresponding to the first point; and
    sending the first rendering result to the terminal device, so that the terminal device displays the first VR scene corresponding to the first point to the user.
  2. The method according to claim 1, characterized in that, after rendering the first point and obtaining the first rendering result corresponding to the first point, the method further comprises:
    rendering points of the initial VR scene other than the first point to obtain second rendering results respectively corresponding to the other points, the second rendering result corresponding to each other point comprising a rendered second VR scene corresponding to that other point; and
    sending the second rendering results respectively corresponding to the other points to the terminal device.
  3. The method according to claim 2, characterized in that the rendering points of the initial VR scene other than the first point to obtain second rendering results respectively corresponding to the other points comprises:
    rendering the other points one by one in ascending order of their connectivity distance from the first point, to obtain the second rendering results respectively corresponding to the other points;
    the sending the second rendering results respectively corresponding to the other points to the terminal device comprises:
    each time the rendering of one of the other points is completed, sending the second rendering result of that other point to the terminal device.
  4. The method according to claim 2, characterized in that the rendering points of the initial VR scene other than the first point to obtain second rendering results respectively corresponding to the other points comprises:
    rendering the other points of the initial VR scene in parallel to obtain the second rendering results respectively corresponding to the other points.
  5. The method according to claim 2, characterized in that, after rendering the other points of the initial VR scene and obtaining the second rendering results respectively corresponding to the other points, the method further comprises:
    performing three-dimensional model rendering based on a floor plan corresponding to the initial VR scene, panoramic images respectively corresponding to the points, and object occupancy information, to obtain a third rendering result, the third rendering result comprising a rendered target three-dimensional model; and
    sending, in response to a three-dimensional model viewing request sent by the terminal device, the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
  6. The method according to claim 2, characterized by further comprising:
    receiving a viewing request, sent by the terminal device, for a first other point other than the first point;
    sending, in response to the rendering of the first other point not being completed, loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays a loading prompt at the first other point based on the loading prompt information; and
    sending, in response to the rendering of the first other point being completed, the second rendering result corresponding to the first other point to the terminal device, so as to display the second VR scene corresponding to the first other point to the user.
  7. The method according to claim 1, characterized in that the VR viewing request comprises target house-layout information and area-of-interest information selected by the user;
    the determining the initial VR scene corresponding to the VR viewing request comprises:
    determining the initial VR scene based on the target house-layout information;
    the determining the first point of the initial VR scene comprises:
    taking a point corresponding to the area-of-interest information as the first point of the initial VR scene.
  8. The method according to claim 1, characterized in that the VR viewing request further comprises decoration style information selected by the user;
    the rendering the first point to obtain the first rendering result corresponding to the first point comprises:
    rendering the first point based on the decoration style information, to obtain the first rendering result corresponding to the first point.
  9. A VR scene processing apparatus, characterized by comprising:
    a first determination module configured to determine, in response to a VR viewing request sent by a user's terminal device, an initial VR scene corresponding to the VR viewing request, the initial VR scene being an initial scene to be rendered;
    a second determination module configured to determine a first point of the initial VR scene;
    a first processing module configured to render the first point to obtain a first rendering result corresponding to the first point, the first rendering result comprising a rendered first VR scene corresponding to the first point; and
    a first sending module configured to send the first rendering result to the terminal device for display to the user.
  10. The apparatus according to claim 9, characterized in that the apparatus further comprises:
    a second processing module configured to render points of the initial VR scene other than the first point to obtain second rendering results respectively corresponding to the other points, the second rendering result corresponding to each other point comprising a rendered second VR scene corresponding to that other point; and
    a second sending module configured to send the second rendering results respectively corresponding to the other points to the terminal device.
  11. The apparatus according to claim 10, characterized in that the second processing module is specifically configured to:
    render the other points one by one in ascending order of their connectivity distance from the first point, to obtain the second rendering results respectively corresponding to the other points;
    the second sending module is specifically configured to:
    each time the rendering of one of the other points is completed, send the second rendering result of that other point to the terminal device.
  12. The apparatus according to claim 10, characterized in that the second processing module is specifically configured to:
    render the other points of the initial VR scene in parallel to obtain the second rendering results respectively corresponding to the other points.
  13. The apparatus according to claim 10, characterized in that the apparatus further comprises:
    a third processing module configured to perform three-dimensional model rendering based on a floor plan corresponding to the initial VR scene, panoramic images respectively corresponding to the points, and object occupancy information, to obtain a third rendering result, the third rendering result comprising a rendered target three-dimensional model; and
    a third sending module configured to send, in response to a three-dimensional model viewing request sent by the terminal device, the third rendering result to the terminal device so as to display the target three-dimensional model to the user.
  14. The apparatus according to claim 10, characterized in that the apparatus further comprises:
    a first receiving module configured to receive a viewing request, sent by the terminal device, for a first other point other than the first point;
    a fourth sending module configured to send, in response to the rendering of the first other point not being completed, loading prompt information corresponding to the first other point to the terminal device, so that the terminal device displays a loading prompt at the first other point based on the loading prompt information; and
    a fifth sending module configured to send, in response to the rendering of the first other point being completed, the second rendering result corresponding to the first other point to the terminal device, so as to display the second VR scene corresponding to the first other point to the user.
  15. The apparatus according to claim 9, characterized in that the VR viewing request comprises target house-layout information and area-of-interest information selected by the user;
    the first determination module is specifically configured to: determine the initial VR scene based on the target house-layout information; and
    the second determination module is specifically configured to: take a point corresponding to the area-of-interest information as the first point of the initial VR scene.
  16. The apparatus according to claim 9, characterized in that the VR viewing request further comprises decoration style information selected by the user;
    the first processing module is specifically configured to: render the first point based on the decoration style information, to obtain the first rendering result corresponding to the first point.
  17. A computer-readable storage medium having computer program instructions stored thereon, characterized in that, when the computer program instructions are executed by a processor, the VR scene processing method according to any one of claims 1-8 is implemented.
  18. An electronic device, characterized by comprising:
    a memory configured to store a computer program product; and
    a processor configured to execute the computer program product stored in the memory, wherein, when the computer program product is executed, the VR scene processing method according to any one of claims 1-8 is implemented.
PCT/CN2022/140018 2022-09-16 2022-12-19 Vr场景的处理方法、装置、电子设备和存储介质 WO2024055462A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211134294.8A CN115423920B (zh) 2022-09-16 2022-09-16 Vr场景的处理方法、装置和存储介质
CN202211134294.8 2022-09-16

Publications (1)

Publication Number Publication Date
WO2024055462A1 true WO2024055462A1 (zh) 2024-03-21

Family

ID=84204012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140018 WO2024055462A1 (zh) 2022-09-16 2022-12-19 Vr场景的处理方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN115423920B (zh)
WO (1) WO2024055462A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423920B (zh) * 2022-09-16 2024-01-30 如你所视(北京)科技有限公司 Vr场景的处理方法、装置和存储介质


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2456802A (en) * 2008-01-24 2009-07-29 Areograph Ltd Image capture and motion picture generation using both motion camera and scene scanning imaging systems
CN103885788B (zh) * 2014-04-14 2015-02-18 焦点科技股份有限公司 一种基于模型组件化动态web 3d虚拟现实场景的搭建方法及系统
CN107871338B (zh) * 2016-09-27 2019-12-03 重庆完美空间科技有限公司 基于场景装饰的实时交互渲染方法
CN107168780B (zh) * 2017-04-06 2020-09-08 北京小鸟看看科技有限公司 虚拟现实场景的加载方法、设备及虚拟现实设备
US10872467B2 (en) * 2018-06-06 2020-12-22 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
US10776989B1 (en) * 2019-05-13 2020-09-15 Robert Edwin Douglas Method and apparatus for prioritized volume rendering
EP3948840A4 (en) * 2019-03-18 2023-07-19 Geomagical Labs, Inc. VIRTUAL INTERACTION WITH THREE-DIMENSIONAL INTERIOR ROOM IMAGING
US20220139026A1 (en) * 2020-11-05 2022-05-05 Facebook Technologies, Llc Latency-Resilient Cloud Rendering
CN112948043A (zh) * 2021-03-05 2021-06-11 吉林吉动盘古网络科技股份有限公司 大规模建筑场景的细粒度化Web3D在线可视化方法
CN112891944B (zh) * 2021-03-26 2022-10-25 腾讯科技(深圳)有限公司 基于虚拟场景的互动方法、装置、计算机设备及存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866200A (zh) * 2019-11-12 2020-03-06 北京城市网邻信息技术有限公司 一种业务界面的渲染方法和装置
CN111627116A (zh) * 2020-05-29 2020-09-04 联想(北京)有限公司 图像渲染控制方法、装置及服务器
WO2022033389A1 (zh) * 2020-08-11 2022-02-17 中兴通讯股份有限公司 一种图像处理的方法、装置、电子设备及存储介质
CN113763552A (zh) * 2021-09-08 2021-12-07 苏州光格科技股份有限公司 三维地理模型的展示方法、装置、计算机设备和存储介质
CN114387400A (zh) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 三维场景的显示方法、显示装置、电子设备和服务器
CN114387376A (zh) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 三维场景的渲染方法及装置、电子设备、可读存储介质
CN114387398A (zh) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 三维场景加载方法、加载装置、电子设备和可读存储介质
CN115423920A (zh) * 2022-09-16 2022-12-02 如你所视(北京)科技有限公司 Vr场景的处理方法、装置和存储介质

Also Published As

Publication number Publication date
CN115423920A (zh) 2022-12-02
CN115423920B (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
CN111127627B (zh) 一种三维房屋模型中的模型展示方法及装置
WO2021093416A1 (zh) 信息播放方法、装置、计算机可读存储介质及电子设备
WO2019126002A1 (en) Recommending and presenting products in augmented reality
EP2819035B1 (en) Systems and methods for presentations with live application integration
CN111414225B (zh) 三维模型远程展示方法、第一终端、电子设备及存储介质
US11227437B2 (en) Three-dimensional model constructing method, apparatus, and system
EP2819033A1 (en) Systems and methods for presentations with live application integration
CN112232900A (zh) 一种信息的展示方法和装置
US20220343055A1 (en) Systems and methods for product visualization using a single-page application
WO2024055462A1 (zh) Vr场景的处理方法、装置、电子设备和存储介质
US20220245888A1 (en) Systems and methods to generate an interactive environment using a 3d model and cube maps
WO2023202349A1 (zh) 三维标签的交互呈现方法、装置、设备、介质和程序产品
WO2021228200A1 (zh) 用于实现三维空间场景互动的方法、装置和设备
US20240087004A1 (en) Rendering 3d model data for prioritized placement of 3d models in a 3d virtual environment
CN111562845B (zh) 用于实现三维空间场景互动的方法、装置和设备
US20230394817A1 (en) Enhanced product visualization technology with web-based augmented reality user interface features
US9891791B1 (en) Generating an interactive graph from a building information model
WO2023197657A1 (zh) 用于处理vr场景的方法、装置和计算机程序产品
CN112344932A (zh) 室内导航方法、装置、设备和存储介质
CN112651801B (zh) 一种房源信息的展示方法和装置
CN112835575A (zh) 多图层显示控制方法和装置
EP3859566B1 (en) Systems and methods for product visualization using a single-page application
CN115454255B (zh) 物品展示的切换方法和装置、电子设备、存储介质
KR102464437B1 (ko) 기가 픽셀 미디어 객체 감상 및 거래를 제공하는 메타버스 기반 크로스 플랫폼 서비스 시스템
WO2023081453A1 (en) Systems and methods to generate an interactive environment using a 3d model and cube maps

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22958649

Country of ref document: EP

Kind code of ref document: A1