WO2023197657A1 - Method, apparatus and computer program product for processing VR scenes - Google Patents

Method, apparatus and computer program product for processing VR scenes

Info

Publication number
WO2023197657A1
Authority
WO
WIPO (PCT)
Prior art keywords
panoramic image
scene
target
current
editing instruction
Prior art date
Application number
PCT/CN2022/140021
Other languages
English (en)
French (fr)
Inventor
杨光
白杰
李成杰
申福龙
Original Assignee
如你所视(北京)科技有限公司
Application filed by 如你所视(北京)科技有限公司
Publication of WO2023197657A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present disclosure relates to a method, apparatus and computer program product for processing VR scenes.
  • VR: Virtual Reality
  • A VR panoramic scene is an interactive three-dimensional scene that integrates multi-source information on the basis of panoramic images through image processing technology, and can present a real scene more realistically and comprehensively through a 720° viewing angle. Such three-dimensional scenes have been widely used in many fields, such as furniture display, tourist attraction display, virtual exhibition halls, and digital museums, as well as VR car viewing, VR house viewing, and other fields.
  • Embodiments of the present disclosure provide a method, device and computer program product for processing VR scenes.
  • One aspect of the embodiments of the present disclosure provides a method for processing a VR scene, including: in response to receiving a user's object editing instruction for the current VR scene, determining the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene; and obtaining the target panoramic image that matches the object editing instruction.
  • The target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene; the current panoramic images constituting the current VR scene are replaced with the target panoramic images to generate an updated VR scene; and the updated VR scene is presented.
  • Another aspect of the embodiments of the present disclosure provides an apparatus for processing a VR scene, including: an instruction receiving unit configured to, in response to receiving a user's object editing instruction for the current VR scene, determine the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene;
  • an image acquisition unit configured to obtain, from a preset panoramic image library, the target panoramic image matching the object editing instruction, where the target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene;
  • a scene update unit configured to replace the current panoramic images constituting the current VR scene with the target panoramic images to generate an updated VR scene;
  • and a scene presentation unit configured to present the updated VR scene.
  • Another aspect of the embodiments of the present disclosure provides a computer program product including computer program instructions which, when executed by a processor, implement the method for processing a VR scene according to any embodiment of the present disclosure.
  • Another aspect of the embodiments of the present disclosure provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the method for processing a VR scene according to any embodiment of the present disclosure is implemented.
  • Yet another aspect of the embodiments of the present disclosure provides an electronic device, including a processor and a memory for storing instructions executable by the processor, the processor being configured to read the executable instructions from the memory and execute them to implement the method for processing a VR scene.
  • The method, upon receiving the user's object editing instruction for the current VR scene, determines the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene; obtains the target panoramic image matching the object editing instruction from a preset panoramic image library;
  • the target object contained in the target panoramic image has the target attribute; replaces the current panoramic images constituting the current VR scene with the target panoramic images to generate an updated VR scene; and presents the updated VR scene.
  • The user selects the attributes of objects in the VR scene; the current panoramic images constituting the current VR scene are replaced with the corresponding target panoramic images according to the user-selected attributes, and the user-selected attributes are then presented to the user through the updated VR scene. This lets users intuitively see how objects in a VR scene are rendered under different attributes, responds to user needs more flexibly, and expands the VR scene's ability to present different attributes, which helps improve the user's experience when browsing VR scenes.
  • Figure 1 is a flow chart of an embodiment of a method for processing VR scenes of the present disclosure
  • Figure 2 is a schematic diagram of the storage structure of the panoramic image library in one embodiment of the method for processing VR scenes of the present disclosure
  • Figure 3 is a flow chart of another embodiment of a method for processing VR scenes of the present disclosure
  • Figure 4 is a schematic diagram of an object editing instruction list in an application scenario of the method for processing VR scenes of the present disclosure
  • Figure 5 is a schematic structural diagram of an embodiment of a device for processing VR scenes according to the present disclosure
  • FIG. 6 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure.
  • "plural" may refer to two or more than two, and "at least one" may refer to one, two, or more than two.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments and/or configurations suitable for use with terminal devices, computer systems, servers and other electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.
  • Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system executable instructions (such as program modules) being executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., that perform specific tasks or implement specific abstract data types.
  • the computer system/server may be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices linked through a communications network.
  • program modules may be located on local or remote computing system storage media including storage devices.
  • Figure 1 shows a flowchart of an embodiment of a method for processing a VR scene of the present disclosure. As shown in Figure 1, the process includes steps 110 to 140. Each step is illustrated below with an example.
  • Step 110 In response to receiving the user's object editing instruction for the current VR scene, determine the target object and target attributes pointed to by the object editing instruction.
  • the target object is the object contained in the current VR scene.
  • step 110 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by an instruction receiving unit run by the processor.
  • the current VR scene may be a VR scene presented to the user at the current moment by the execution subject of the method for processing a VR scene of the present disclosure (for example, it may be a terminal device such as a smartphone or a tablet computer).
  • An object represents an item contained in the current VR scene, and an attribute characterizes a feature of an object. For example, an appearance attribute can characterize an object's appearance, and a material attribute can characterize its material.
  • Users can select the target object in the current VR scene through object editing instructions, and determine the target attributes of the target object according to needs.
  • For example, the user can browse the VR scene through a smartphone. Assuming that the current VR scene is used for furniture display, the user can change the material of the furniture through an object editing instruction.
  • When the smartphone receives an object editing instruction from the user, it can determine from the instruction that the target object is the piece of furniture whose material the user wants to change, and that the target attribute is the desired furniture material.
  • the target attributes include but are not limited to wall color, wall material, etc.
  • the current VR scene can be used for house display, and the wall in the current VR scene can be used as a target object.
  • the user can edit the wall color or wall material through the object editing command.
  • wall materials may include different types such as wall paint, ceramic tiles, wallpaper, etc.
  • walls at different locations can be used as different objects.
  • the user's customization granularity of the VR scene can be improved.
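The determination in step 110 can be pictured with a minimal sketch, assuming a hypothetical instruction payload with `object`, `attribute`, and `value` fields; the field names and the parsed structure are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectEditInstruction:
    """Hypothetical parsed form of a user's object editing instruction."""
    target_object: str     # object contained in the current VR scene, e.g. "wall"
    target_attribute: str  # edited attribute, e.g. "color"
    target_value: str      # desired attribute value, e.g. "yellow"

def parse_edit_instruction(raw):
    """Step 110 sketch: determine the target object and target attribute
    pointed to by the object editing instruction (field names assumed)."""
    return ObjectEditInstruction(
        target_object=raw["object"],
        target_attribute=raw["attribute"],
        target_value=raw["value"],
    )
```

An instruction generated by tapping a wall and choosing yellow would then parse into `ObjectEditInstruction("wall", "color", "yellow")`.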
  • Step 120 Obtain the target panoramic image that matches the object editing instruction.
  • the target object included in the target panoramic image has target attributes, and the attributes of other objects other than the target object in the target panoramic image are consistent with the attributes in the current VR scene.
  • step 120 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by an image acquisition unit run by the processor.
  • the target panoramic image may be pre-generated and stored in the panoramic image library.
  • the panoramic image library can be used to store panoramic images corresponding to the current VR scene, that is, the panoramic images in the panoramic image library and the current VR scene represent the same real scene.
  • the panoramic image library may include not only the current panoramic image that constitutes the current VR scene, but also backup panoramic images that are not used in the current VR scene.
  • the image parameters (such as wandering points, image size, etc.) of the backup panoramic image can be consistent with the current panoramic image, and the attributes of at least one object in any two panoramic images can be different.
  • a VR scene may include multiple wandering points, and at least one of the plurality of wandering points or each wandering point may correspond to multiple panoramic images.
  • multiple VR scenes can be constructed using different combinations of panoramic images. These VR scenes and the current VR scene all correspond to the same real scene. Different VR scenes can present the same real object. different properties.
  • the target panoramic image used for VR scene update in this embodiment may include multiple target panoramic images corresponding to at least one wandering point in the current VR scene.
  • For example, if the current VR scene includes 3 wandering points, each wandering point corresponds to 2 current panoramic images, the target object is a wall, and the target attribute is yellow, then there can be 6 target panoramic images used for the VR scene update, corresponding to the 3 wandering points, with each wandering point corresponding to 2 of the 6 target panoramic images.
  • In these target panoramic images, the walls are all yellow, and the attributes of objects other than the walls are consistent with their attributes in the current VR scene.
  • The target panoramic image can also be generated in the following manner: based on the target object and the target attribute, the current panoramic image is rendered to obtain the target panoramic image.
  • For example, the execution subject may be pre-loaded with a rendering tool (such as image processing software) and may respond to the object editing instruction by calling the rendering tool to render the target object in the current panoramic image so that the target object presents the target attribute, thereby obtaining the target panoramic image. In this way, the target panoramic image can be generated in real time, responding to the user's needs more flexibly.
  • Step 130 Replace the current panoramic image constituting the current VR scene with the target panoramic image to generate an updated VR scene.
  • step 130 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by a scene update unit run by the processor.
  • the updated VR scene can present target objects with target attributes.
  • the execution subject (for example, it can be a smartphone that presents the current VR scene) can be pre-loaded with a VR scene generation tool.
  • the VR scene generation tool can be, for example, OpenGL ES, Google Cardboard, etc.
  • the execution subject can input the target panoramic image extracted in step 120 into the VR scene generation tool, and after processing such as alignment and splicing, an updated VR scene can be obtained.
  • the updated VR scene can be generated through the following steps: based on the wandering point of the current panoramic image and the wandering point of the target panoramic image, determine the current panoramic image and the target panoramic image Correspondence of images; based on the correspondence, replace the current panoramic image with the corresponding target panoramic image.
  • the browsing point in the VR scene corresponds to the wandering point of the panoramic image.
  • Compared with image alignment and splicing, which require a relatively large amount of calculation, replacing panoramic images in this way keeps the amount of calculation small and builds the updated VR scene efficiently.
  • The corresponding relationship between the current panoramic image and the target panoramic image can be determined based on the wandering point: the current panoramic image and the target panoramic image corresponding to the same wandering point are identified, and the correspondence between them is recorded, so that each current panoramic image can be replaced with its target panoramic image according to the correspondence. This reduces the amount of calculation and improves the generation efficiency of the updated VR scene.
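The wandering-point-based replacement can be sketched as below; representing each scene as a dictionary from wandering point id to that point's panoramic images (two per point in the running example) is an assumption for illustration:

```python
def update_vr_scene(current_images, target_images):
    """Step 130 sketch: replace current panoramic images with target panoramic
    images keyed by wandering point, instead of re-aligning and re-splicing.

    Both arguments map a wandering point id to a list of panoramic image
    names at that point; all names are illustrative.
    """
    updated = {}
    for point, images in current_images.items():
        # Same wandering point -> swap in the target images; keep the
        # current images where no target image exists for that point.
        updated[point] = target_images.get(point, images)
    return updated
```

With 3 wandering points and 2 images each, passing 6 target images replaces all 6 current images; points without a target image keep their current images.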
  • Step 140 Present the updated VR scene.
  • step 140 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the scene presentation unit run by the processor.
  • For example, the wall color in the current VR scene is white.
  • The user can select the wall as the target object by tapping the phone screen and select yellow as the target attribute, thereby generating an object editing instruction that is sent to the smartphone.
  • The smartphone can extract the target panoramic images whose wall color is yellow from the panoramic image library according to the object editing instruction, and then replace the current panoramic images in the current VR scene with the target panoramic images to obtain an updated VR scene, which is presented to the user on the phone screen.
  • As a result, the color of the wall in the VR scene the user browses is yellow.
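The white-to-yellow wall example can be traced end to end with a hedged sketch of steps 110 through 140; the scene and library layouts and all image names are illustrative assumptions:

```python
def process_edit(current_scene, library, target_object, target_attribute):
    """End-to-end sketch of steps 110-140 for the wall example.

    current_scene: {wandering point: current panoramic image name}
    library: {object: {attribute value: {wandering point: image name}}}
    All layouts and names are assumptions, not from the disclosure.
    """
    # Step 120: fetch the matching target panoramic images from the library.
    subset = library[target_object][target_attribute]
    # Step 130: replace per wandering point (step 140 would then present
    # the updated scene to the user).
    return {point: subset.get(point, image)
            for point, image in current_scene.items()}
```

Calling `process_edit(scene, library, "wall", "yellow")` yields a scene whose panoramic images all show a yellow wall, which is what the updated VR scene presents.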
  • The method for processing VR scenes, upon receiving a user's object editing instruction for the current VR scene, determines the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene, and obtains the target panoramic image matching the object editing instruction from a preset panoramic image library.
  • The target object contained in the target panoramic image has the target attribute; the current panoramic images constituting the current VR scene are replaced with the target panoramic images to generate an updated VR scene; and the updated VR scene is presented.
  • The user selects the attributes of objects in the VR scene, the current panoramic images constituting the current VR scene are replaced with the corresponding target panoramic images according to the user-selected attributes, and the user-selected attributes are then presented to the user through the updated VR scene. This responds to user needs more flexibly, improving the user experience when browsing VR scenes.
  • Obtaining the target panoramic image that matches the object editing instruction includes: extracting the target panoramic image that matches the object editing instruction from a preset panoramic image library, wherein a panoramic image set corresponding to each of at least one object contained in the current VR scene is pre-stored in the panoramic image library, and each panoramic image set is divided into multiple panoramic image subsets according to the different attributes of the object corresponding to that set.
  • The object corresponding to a panoramic image set has a different attribute in each of the subsets into which the set is divided, while the attributes of objects other than that object are consistent with the current panoramic image across those subsets.
  • Each panoramic image subset includes panoramic images obtained by rendering the object corresponding to the subset at different wandering points.
  • a VR scene can be generated by using a VR scene construction tool to perform alignment, splicing, and other processing on the panoramic images in the panoramic image subset.
  • the objects corresponding to the panoramic image set have different attributes in the VR scenes corresponding to different panoramic image subsets, and the attributes of other objects other than the object are consistent with the attributes in the current VR scene.
  • Technicians can use image processing software to render the original panoramic image, rendering the objects in it into the visual effects corresponding to their attributes, so that different attributes can be presented through different panoramic images.
  • FIG. 2 shows a schematic storage structure diagram of the panoramic image library in one embodiment of the present disclosure for processing VR scenes.
  • Two panoramic image sets are stored in the panoramic image library 200: the object corresponding to the panoramic image set 210 is the wall, and the object corresponding to the panoramic image set 220 is the furniture.
  • The panoramic image set 210 is divided by wall color into a panoramic image subset 211, a panoramic image subset 212, and a panoramic image subset 213, where each subset can include six panoramic images, two per wandering point. In the panoramic image subset 211 the wall is yellow; in the panoramic image subset 212 the wall is white; in the panoramic image subset 213 the wall is green. In all three subsets, the material of the furniture can be consistent with its material in the current VR scene, for example, wood.
  • The panoramic image set 220 is divided by furniture material into a panoramic image subset 221 and a panoramic image subset 222.
  • Each subset can include six panoramic images, two per wandering point. In the panoramic image subset 221 the furniture is made of wood; in the panoramic image subset 222 the furniture is made of metal. In both subsets, the color of the wall can be consistent with its color in the current VR scene, for example, white.
  • the pre-generated panoramic image can be stored in the panoramic image library according to the attributes of the object, so that the target panoramic image can be extracted therefrom.
  • the efficiency of obtaining the target panoramic image can be improved, thereby improving the update efficiency of the VR scene.
  • it can reduce the performance requirements for image processing equipment and help reduce development costs.
  • The above step 120 may include the following steps: based on the target object, determine the target panoramic image set from the panoramic image library; based on the target attribute, determine the target panoramic image subset from the target panoramic image set; and extract the target panoramic image from the target panoramic image subset.
  • the panoramic image library can be stored in the local storage space of the execution subject, or can be set in a cloud server, which is not limited by this disclosure.
  • For example, the execution subject can directly retrieve, from the panoramic image library 200, the panoramic image set 210 corresponding to the wall; then, based on "yellow", retrieve the panoramic image subset 211 from the panoramic image set 210; the panoramic image extracted from the panoramic image subset 211 is the target panoramic image.
  • When the panoramic image library 200 is set on a cloud server, the execution subject (for example, a smartphone presenting the VR scene) can generate retrieval conditions based on "wall" and "yellow" and send them to the cloud server.
  • the cloud server determines the panoramic image subset 211 from the panoramic image library 200 according to the search conditions, then generates a link to the panoramic image subset 211, and sends the generated link to the execution subject.
  • the execution subject may extract the target panoramic image from the panoramic image subset 211 according to the link.
  • The target panoramic image can thus be retrieved from the panoramic image library step by step according to the target object and the target attribute, which can further improve the efficiency of obtaining the target panoramic image.
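The step-by-step retrieval, following the storage structure of Figure 2, might look like the sketch below; the nested-dictionary layout and all image names are assumptions, not part of the disclosure:

```python
# Illustrative library mirroring Figure 2: object -> attribute value (subset)
# -> panoramic images, two per wandering point across three wandering points.
PANORAMIC_IMAGE_LIBRARY = {
    "wall": {   # panoramic image set 210, divided by wall color
        "yellow": ["wp1_y1", "wp1_y2", "wp2_y1", "wp2_y2", "wp3_y1", "wp3_y2"],
        "white":  ["wp1_w1", "wp1_w2", "wp2_w1", "wp2_w2", "wp3_w1", "wp3_w2"],
        "green":  ["wp1_g1", "wp1_g2", "wp2_g1", "wp2_g2", "wp3_g1", "wp3_g2"],
    },
    "furniture": {  # panoramic image set 220, divided by furniture material
        "wood":  ["wp1_wood1", "wp1_wood2"],
        "metal": ["wp1_metal1", "wp1_metal2"],
    },
}

def retrieve_target_images(library, target_object, target_attribute):
    """Step-by-step retrieval: first select the panoramic image set by target
    object, then the panoramic image subset by target attribute."""
    image_set = library[target_object]   # e.g. set 210 for "wall"
    return image_set[target_attribute]   # e.g. subset 211 for "yellow"
```

For the running example, `retrieve_target_images(PANORAMIC_IMAGE_LIBRARY, "wall", "yellow")` returns the six yellow-wall panoramic images of subset 211 with two dictionary lookups, without scanning the whole library.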
  • Figure 3 shows a flow chart of another embodiment of the method for processing a VR scene of the present disclosure.
  • The process includes steps 310 to 370. Each step is illustrated below with an example.
  • Step 310 Present the current VR scene.
  • step 310 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the current rendering unit run by the processor.
  • Step 320 In response to receiving the list presentation instruction, present the object editing instruction list at a preset position in the current VR scene, so that the user can select the object editing instruction in the object editing instruction list.
  • step 320 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a list presentation unit run by the processor.
  • For example, the smartphone can present the object editing instruction list at a preset position in the current VR scene. If the current VR scene includes multiple objects, the object editing instructions for the multiple objects can be integrated into one object editing instruction list, or each object can have its own corresponding object editing instruction list.
  • the list presentation instruction may also be a triggering instruction.
  • the object editing instruction list may be presented at a preset position.
  • The preset angle may be 10°, 15°, or another angle; the possibilities are not exhaustively listed here.
  • FIG. 4 shows a schematic diagram of an object editing instruction list in an application scenario of the method for processing VR scenes of the present disclosure.
  • the target object may be a wall
  • the object editing instruction list 400 may be presented around the wall in the form of a graphic, and the color attribute of the wall may be represented by the icon color.
  • Presenting the object editing instruction list at a preset position in the current VR scene includes: determining the preset position based on the position, in the current VR scene, of the object corresponding to the object editing instruction list; and presenting the object editing instruction list floating at that preset position.
  • a preset position can be set in three-dimensional space close to the object.
  • The presentation position of the object editing instruction list can be determined according to the location of the object, and the list can then be presented floating in the three-dimensional space around the object, representing the correspondence between the object editing instruction list and the object and improving operational convenience.
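Determining the floating list's preset position from the object's position could be as simple as the following sketch; the coordinate convention and the offset value are illustrative assumptions, not from the disclosure:

```python
def list_position(object_position, offset=(0.0, 0.5, 0.0)):
    """Place the floating object editing instruction list at a preset offset
    from the object's position in the scene's 3D coordinates (here 0.5 units
    above the object; both the axes and the offset are assumed)."""
    x, y, z = object_position
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)
```

A list for a wall anchored at `(1.0, 2.0, 3.0)` would then float at `(1.0, 2.5, 3.0)`, directly above the wall in this assumed convention.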
  • Step 330 Receive the user's selection instruction for the object editing instruction.
  • step 330 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by an instruction determination unit run by the processor.
  • the user can click the icon in the editing instruction list to select the corresponding object editing instruction.
  • Step 340 In response to receiving the user's object editing instruction for the current VR scene, determine the target object and target attributes pointed to by the editing instruction.
  • step 340 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by an instruction receiving unit run by the processor.
  • Step 350 Obtain the target panoramic image that matches the object editing instruction from the preset panoramic image library.
  • step 350 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by an image acquisition unit run by the processor.
  • Step 360 Replace the current panoramic image constituting the current VR scene with the target panoramic image to generate an updated VR scene.
  • step 360 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by a scene update unit run by the processor.
  • Step 370 Present the updated VR scene.
  • step 370 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the scene presentation unit run by the processor.
  • steps 340 to 370 correspond to the aforementioned steps 110 to 140, and will not be described again here.
  • The embodiment shown in Figure 3 embodies the step of presenting the object editing instruction list at a preset position in the current VR scene upon receiving the user's list presentation instruction.
  • In this way, the user can issue object editing instructions according to his or her own preferences.
  • Different presentation strategies can be adopted for different users, so that the objects in the VR scene are presented with the attributes each user requires, further improving the user experience.
  • The method may further include: in response to receiving a scene jump instruction, closing the object editing instruction list and jumping the current VR scene to the new VR scene pointed to by the scene jump instruction; and/or, in response to receiving a perspective rotation instruction, keeping the position of the object editing instruction list unchanged and changing the perspective of the current VR scene according to the perspective rotation instruction.
  • The steps of closing the object editing instruction list and jumping the current VR scene to the new VR scene pointed to by the scene jump instruction may be executed by the processor calling corresponding instructions stored in the memory, or by a scene jump unit run by the processor.
  • the operation of keeping the position of the object editing instruction list unchanged and transforming the perspective of the current VR scene according to the perspective rotation instruction can be performed by the processor calling the corresponding instructions stored in the memory, It can also be performed by a perspective transformation unit executed by the processor.
  • The new VR scene and the current VR scene represent different real scenes, and the objects in the two VR scenes differ. Therefore, during the scene jump, closing the object editing instruction list avoids conflicts between the list and the new VR scene.
  • When browsing the current VR scene, the user can browse different areas of the scene by rotating the perspective. During this process, the position of the object editing instruction list can be kept unchanged so as not to interfere with the user's browsing.
  • Any method for processing a VR scene provided by the embodiments of the present disclosure can be executed by any appropriate device with data processing capabilities, including but not limited to: terminal devices and servers.
  • any of the methods for processing VR scenes provided in the embodiments of the present disclosure can be executed by the processor.
  • the processor executes any of the methods for processing VR scenes mentioned in the embodiments of the present disclosure by calling the corresponding instructions stored in the memory. No further details will be given below.
  • the aforementioned program can be stored in a computer-readable storage medium.
  • when the program is executed, it performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, and optical disks.
  • the device shown in Figure 5 can be used to implement the above method embodiments of the present disclosure.
  • the device includes: an instruction receiving unit 510, configured to determine, in response to receiving the user's object editing instruction for the current VR scene, the target object and target attribute pointed to by the editing instruction, where the target object is an object contained in the current VR scene; an image acquisition unit 520, configured to obtain a target panoramic image matching the object editing instruction, where the target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene; a scene update unit 530, configured to replace the current panoramic images constituting the current VR scene with the target panoramic images and generate an updated VR scene; and a scene presentation unit 540, configured to present the updated VR scene.
  • the image acquisition unit 520 is further configured to extract a target panoramic image that matches the object editing instruction from a preset panoramic image library, where the panoramic image library pre-stores panoramic image sets respectively corresponding to at least one object contained in the current VR scene.
  • the image acquisition unit 520 further includes: a first index module, configured to determine a target panoramic image set from the panoramic image library based on the target object; a second index module, configured to determine a target panoramic image subset from the target panoramic image set based on the target attribute; and an image extraction module, configured to extract the target panoramic image from the target panoramic image subset.
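The two-level indexing just described — first by target object, then by target attribute — can be sketched as nested mappings. This is a minimal illustration only; the library contents, file names, and function name are assumptions, not identifiers from the disclosure (the wall-color and furniture-material values echo the Figure 2 example):

```python
# Hypothetical in-memory panoramic image library: keyed first by object,
# then by attribute; each leaf is a panorama subset, one entry per
# wandering point (all names are illustrative).
PANORAMA_LIBRARY = {
    "wall": {
        "yellow": ["p1_wall_yellow.jpg", "p2_wall_yellow.jpg"],
        "white":  ["p1_wall_white.jpg",  "p2_wall_white.jpg"],
        "green":  ["p1_wall_green.jpg",  "p2_wall_green.jpg"],
    },
    "furniture": {
        "wood":  ["p1_furn_wood.jpg",  "p2_furn_wood.jpg"],
        "metal": ["p1_furn_metal.jpg", "p2_furn_metal.jpg"],
    },
}


def extract_target_panoramas(library, target_object, target_attribute):
    """Two-level lookup: object -> image set, attribute -> image subset."""
    image_set = library[target_object]          # first index module
    image_subset = image_set[target_attribute]  # second index module
    return list(image_subset)                   # image extraction module


print(extract_target_panoramas(PANORAMA_LIBRARY, "wall", "yellow"))
# → ['p1_wall_yellow.jpg', 'p2_wall_yellow.jpg']
```

Because each level is a plain dictionary lookup, retrieval cost is independent of how many attribute variants the library stores, which matches the efficiency argument the description makes for pre-generated libraries.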
  • the image acquisition unit 520 is configured to: perform rendering processing on the current panoramic image based on the target object and target attributes to obtain the target panoramic image.
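For the on-the-fly rendering alternative, the essential contract is: pixels belonging to the target object take on the target attribute, and every other pixel is left untouched. The toy sketch below illustrates that contract with a 1-D pixel list and a hypothetical object mask — real rendering tools would of course operate on full images:

```python
def render_target_panorama(current_pixels, object_mask, target_color):
    """Return a new panorama in which pixels belonging to the target object
    (per the mask) take the target attribute's color, while all other
    pixels keep their current values. Data shapes are illustrative only."""
    return [target_color if masked else pixel
            for pixel, masked in zip(current_pixels, object_mask)]


# A tiny 1-D "panorama": two wall pixels (white) surrounding one sofa pixel.
current_pixels = ["white", "brown", "white"]
wall_mask = [True, False, True]
print(render_target_panorama(current_pixels, wall_mask, "yellow"))
# → ['yellow', 'brown', 'yellow']
```

The sofa pixel survives unchanged, matching the requirement that objects other than the target keep their attributes from the current VR scene.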
  • the scene update unit 530 further includes: a matching module, configured to determine the correspondence between the current panoramic image and the target panoramic image based on the wandering point of the current panoramic image and the wandering point of the target panoramic image.
  • the replacement module is configured to replace, based on the determined correspondence, the current panoramic image with the corresponding target panoramic image.
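One minimal way to realize the matching and replacement modules is to key both image collections by wandering-point ID and substitute per point. The data shapes below (dicts mapping point IDs to panorama identifiers) are assumptions made for illustration:

```python
def replace_by_wandering_point(current_images, target_images):
    """Replace each current panorama with the target panorama that shares
    its wandering point; panoramas without a matching target are kept.

    Both arguments map wandering-point IDs to panorama identifiers
    (hypothetical shapes, not taken from the disclosure).
    """
    updated = dict(current_images)  # start from the current VR scene
    for point_id, target in target_images.items():
        if point_id in updated:     # correspondence = shared wandering point
            updated[point_id] = target
    return updated


current_by_point = {"pt1": "pt1_white.jpg", "pt2": "pt2_white.jpg", "pt3": "pt3_white.jpg"}
target_by_point = {"pt1": "pt1_yellow.jpg", "pt2": "pt2_yellow.jpg", "pt3": "pt3_yellow.jpg"}
print(replace_by_wandering_point(current_by_point, target_by_point))
```

Matching by shared wandering point is what keeps alignment and stitching cheap when the updated scene is rebuilt, since each replacement image was captured from the same point as the image it replaces.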
  • the device further includes: a current presentation unit, configured to present the current VR scene; a list presentation unit, configured to present, in response to receiving a list presentation instruction, the object editing instruction list at a preset position in the current VR scene so that the user can select an object editing instruction from the list; and an instruction determination unit, configured to receive the user's selection instruction for an object editing instruction.
  • the list presentation unit includes: a position determination module, configured to determine the preset position based on the position, in the current VR scene, of the object corresponding to the object editing instruction list; and a list presentation module, configured to present the object editing instruction list floating at the preset position.
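The placement rule — derive the preset position from the object's position, then float the list there — might be sketched as a simple offset in scene coordinates. The fixed offset value and function name are assumed choices for illustration, not anything specified by the disclosure:

```python
def preset_list_position(object_position, offset=(0.0, 0.5, 0.0)):
    """Float the object editing instruction list near its object.

    `object_position` is the object's (x, y, z) in the VR scene; the
    default offset is a hypothetical constant that lifts the list
    slightly above the object so the pairing is visually obvious.
    """
    x, y, z = object_position
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)


# A wall centered at (2, 1, -3) gets its list floating half a unit higher.
print(preset_list_position((2.0, 1.0, -3.0)))  # → (2.0, 1.5, -3.0)
```

Anchoring the list to the object's own coordinates is what conveys the list-to-object correspondence described above, and it keeps the list usable as the user turns the view.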
  • the device further includes: a scene jump unit, configured to close, in response to receiving a scene jump instruction, the object editing instruction list and jump the current VR scene to the new VR scene pointed to by the scene jump instruction; and/or a perspective transformation unit, configured to keep, in response to receiving a perspective rotation instruction, the position of the object editing instruction list unchanged and transform the perspective of the current VR scene according to the perspective rotation instruction.
  • when the target object is a wall, the target attribute may be the color of the wall.
  • the electronic device may be either or both of the first device and the second device, or a stand-alone device independent of them.
  • the stand-alone device may communicate with the first device and the second device to receive the collected input signals from them.
  • Figure 6 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
  • an electronic device includes one or more processors and memory.
  • the processor may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
  • Memory may store one or more computer program products, and the memory may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache), etc.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc.
  • one or more computer program products may be stored on the computer-readable storage medium, and the processor may execute them to implement the methods for processing VR scenes of the various embodiments of the present disclosure described above and/or other desired functionality.
  • the electronic device may further include an input device and an output device, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • the input device may also include, for example, a keyboard, a mouse, and the like.
  • the output device can output various information to the outside, including determined distance information, direction information, etc.
  • the output devices may include, for example, displays, speakers, printers, and communication networks and remote output devices to which they are connected, among others.
  • the electronic device may include any other suitable components depending on the specific application.
  • embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when executed by a processor, cause the processor to perform the steps of the methods for processing VR scenes according to various embodiments of the present disclosure described above in this specification.
  • the computer program product may include program code, written in any combination of one or more programming languages, for performing the operations of embodiments of the present disclosure; these languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • embodiments of the present disclosure may also be a computer-readable storage medium having computer program instructions stored thereon.
  • the computer program instructions, when executed by a processor, cause the processor to perform the steps of the methods for processing VR scenes according to various embodiments of the present disclosure described above in this specification.
  • the computer-readable storage medium may be any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, for example, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory, read-only memory, erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the methods and apparatus of the present disclosure may be implemented in many ways.
  • the methods and devices of the present disclosure may be implemented through software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order for the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • the present disclosure may also be implemented as programs recorded in recording media, and these programs include machine-readable instructions for implementing methods according to the present disclosure.
  • the present disclosure also covers recording media storing programs for executing methods according to the present disclosure.
  • each component or each step can be decomposed and/or recombined. These decompositions and/or recombinations should be considered equivalent versions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a method, apparatus, and computer program product for processing a VR scene. The method includes: in response to receiving a user's object editing instruction for the current VR scene, determining the target object and target attribute pointed to by the editing instruction, the target object being an object contained in the current VR scene; obtaining a target panoramic image matching the object editing instruction, where the target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene; replacing the current panoramic images constituting the current VR scene with the target panoramic images to generate an updated VR scene; and presenting the updated VR scene. Embodiments of the present disclosure can respond to user needs more flexibly, allowing users to intuitively experience how objects in a VR scene are presented under different attributes.

Description

Method, apparatus, and computer program product for processing VR scenes
The present disclosure claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 12, 2022, with application number CN202210376410.0 and invention title "Method, apparatus, and computer program product for processing VR scenes", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to a method, apparatus, and computer program product for processing VR scenes.
Background
A VR (Virtual Reality) scene, also called a panoramic scene, is an interactive, multi-source-information-fused three-dimensional scene built from panoramic images through image processing technology. It can present a stereoscopic scene more realistically and comprehensively through a 720° viewing angle, and has been widely applied in many fields, such as furniture display, tourist attraction display, virtual exhibition halls, and digital museums, as well as VR automobiles and VR house viewing.
Summary
Embodiments of the present disclosure provide a method, apparatus, and computer program product for processing VR scenes.
One aspect of the embodiments of the present disclosure provides a method for processing VR scenes, including: in response to receiving a user's object editing instruction for the current VR scene, determining the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene; obtaining a target panoramic image matching the object editing instruction, where the target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene; replacing the current panoramic images constituting the current VR scene with the target panoramic images to generate an updated VR scene; and presenting the updated VR scene.
Another aspect of the embodiments of the present disclosure provides an apparatus for processing VR scenes, including: an instruction receiving unit, configured to determine, in response to receiving a user's object editing instruction for the current VR scene, the target object and target attribute pointed to by the object editing instruction, the target object being an object contained in the current VR scene; an image acquisition unit, configured to obtain, from a preset panoramic image library, a target panoramic image matching the object editing instruction, where the target object contained in the target panoramic image has the target attribute, and the attributes of objects other than the target object in the target panoramic image are consistent with their attributes in the current VR scene; a scene update unit, configured to replace the current panoramic images constituting the current VR scene with the target panoramic images and generate an updated VR scene; and a scene presentation unit, configured to present the updated VR scene.
A further aspect of the embodiments of the present disclosure provides a computer program product, including computer program instructions that, when executed by a processor, implement the method for processing VR scenes described in any embodiment of the present disclosure.
Yet another aspect of the embodiments of the present disclosure provides a computer-readable storage medium storing computer program instructions that, when executed by a processor, implement the method for processing VR scenes described in any embodiment of the present disclosure.
Yet another aspect of the embodiments of the present disclosure provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to read the executable instructions from the memory and execute them to implement the method for processing VR scenes described in any embodiment of the present disclosure.
In the method for processing VR scenes provided by the embodiments of the present disclosure, when a user's object editing instruction for the current VR scene is received, the target object and target attribute pointed to by the object editing instruction are determined, the target object being an object contained in the current VR scene; a target panoramic image matching the object editing instruction is obtained from a preset panoramic image library, where the target object contained in the target panoramic image has the target attribute; the current panoramic images constituting the current VR scene are replaced with the target panoramic images to generate an updated VR scene; and the updated VR scene is presented. The user selects the attribute of an object in the VR scene; according to the user-selected attribute, the current panoramic images constituting the current VR scene are replaced with the corresponding target panoramic images, and the user-selected attribute is then presented to the user through the updated VR scene. This lets the user intuitively experience how objects in the VR scene are presented under different attributes, responds to user needs more flexibly, extends the VR scene's ability to present different attributes, and helps improve the user's experience when browsing VR scenes.
The technical solutions of the present disclosure are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
With reference to the accompanying drawings, the present disclosure can be understood more clearly from the following detailed description, in which:
Figure 1 is a flowchart of one embodiment of the method for processing VR scenes of the present disclosure;
Figure 2 is a schematic diagram of the storage structure of the panoramic image library in one embodiment of the method for processing VR scenes of the present disclosure;
Figure 3 is a flowchart of another embodiment of the method for processing VR scenes of the present disclosure;
Figure 4 is a schematic diagram of an object editing instruction list in one application scenario of the method for processing VR scenes of the present disclosure;
Figure 5 is a schematic structural diagram of one embodiment of the apparatus for processing VR scenes of the present disclosure;
Figure 6 is a schematic structural diagram of one application embodiment of the electronic device of the present disclosure.
Detailed Description
现在将参照附图来详细描述本公开的各种示例性实施例。应注意到:除非另外具体说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本公开的范围。
本领域技术人员可以理解,本公开实施例中的“第一”、“第二”等术语仅用于区别不同步骤、设备或模块等,既不代表任何特定技术含义,也不表示它们之间的必然逻辑顺序。
还应理解,在本公开实施例中,“多个”可以指两个或两个以上,“至少一个”可以指一个、两个或两个以上。
还应理解,对于本公开实施例中提及的任一部件、数据或结构,在没有明确限定或者在前后文给出相反启示的情况下,一般可以理解为一个或多个。
另外,本公开中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本公开中字符“/”,一般表示前后关联对象是一种“或”的关系。
还应理解,本公开对各个实施例的描述着重强调各个实施例之间的不同之处,其相同或相似之处可以相互参考,为了简洁,不再一一赘述。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本公开及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
本公开实施例可以应用于终端设备、计算机系统、服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与终端设备、计算机系统、服务器等电子设备一起使用的众所周知的终端设备、计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处 理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统﹑大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
终端设备、计算机系统、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
下面结合图1对本公开的用于处理VR场景的方法进行示例性说明。图1示出了本公开的用于处理VR场景的方法的一个实施例的流程图,如图1所示,该流程包括步骤110至步骤140,下面分别对各个步骤进行举例说明。
步骤110、响应于接收到用户针对当前VR场景的对象编辑指令,确定对象编辑指令指向的目标对象和目标属性。
其中,目标对象为当前VR场景中包含的对象。
在一个可选示例中,步骤110可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的指令接收单元执行。
在本实施例中,当前VR场景可以为本公开的用于处理VR场景的方法的执行主体(例如可以是智能手机、平板电脑等终端设备)在当前时刻向用户呈现的VR场景。对象表示当前VR场景包含的物体。属性可以表征对象的特征,例如可以通过外观属性表征对象的外观特征,通过材料属性表征对象的材料特征等。
用户可以通过对象编辑指令在当前VR场景中选取目标对象,并根据需求确定目标对象的目标属性。
作为示例,用户可以通过智能手机浏览VR场景,假设当前VR场景用于家具展示,用户可以通过对象编辑指令更换家具的材质。当智能手机接收到用户下达的对象编辑指令时,可以根据对象编辑指令确定目标对象为用户期望更换材质的家具,目标属性则是用户期望的家具材质。
在本实施例的一些可选的实现方式中,当目标对象为墙体时,目标属性包括但不限于墙体颜色、墙面材质等。
在本实施方式中,当前VR场景可以用于房屋展示,当前VR场景中的墙体可以作为目标对象,如此一来,用户可以通过对象编辑指令编辑墙体颜色或墙面材质。作为示例, 墙面材质可以包括墙面漆、瓷砖、壁纸等不同类型。
可选地,不同位置的墙体可以作为不同的对象,如此,可以提高用户对VR场景的自定义粒度。
步骤120、获取与对象编辑指令匹配的目标全景图像。
其中,目标全景图像中包含的目标对象具有目标属性,且目标对象之外的其他对象在目标全景图像中的属性与当前VR场景中的属性一致。
在一个可选示例中,步骤120可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的图像获取单元执行。
在一个可选示例中,目标全景图像可以是预先生成的,并存储在全景图像库中。全景图像库可以用于存储当前VR场景对应的全景图像,即全景图像库中的全景图像与当前VR场景表征同一个真实场景。全景图像库中不仅可以包括构成当前VR场景的当前全景图像,还可以包括未用于当前VR场景的备用全景图像。备用全景图像的图像参数(例如游走点位、图像尺寸等)均可以与当前全景图像一致,并且,任意两个全景图像中至少一个对象的属性可以存在差异。
通常,一个VR场景中可以包括多个游走点位,多个游走点位中的至少一个游走点位或者每个游走点位可以均对应多个全景图像。基于全景图像库中的全景图像,采用不同的全景图像的组合方式,可以构建多个VR场景,这些VR场景与当前VR场景均对应同一个真实场景,不同的VR场景可以呈现同一个真实物体的不同属性。
本实施例中用于VR场景更新的目标全景图像可以包括当前VR场景中的至少一个游走点位分别对应的多个目标全景图像。作为示例,当前VR场景包括3个游走点位,每个游走点位对应2个当前全景图像,目标对象为墙体,目标属性为黄色,则用于VR场景更新的目标全景图像的数量可以为6个,对应3个游走点位,每个游走点位可以对应6个目标全景图像中的2个目标全景图像。并且,在这6个目标全景图像中,墙体均为黄色,除墙体之外的其他对象的属性与该其他对象在当前VR场景中的属性一致。
在本实施例的一些可选的实施方式中,还可以采用如下方式生成目标全景图像:基于目标对象和目标属性,对当前全景图像进行渲染处理,得到目标全景图像。
在本实施方式中,执行主体中可以预先装载有渲染工具(例如图像处理软件),执行主体可以响应于对象编辑指令,调用渲染工具对当前全景图像中的目标对象进行渲染处理,使目标对象呈现出目标属性,以得到目标全景图像,这样可以实时生成目标全景图像,从而更灵活地响应用户的需求。
步骤130、将构成当前VR场景的当前全景图像替换为目标全景图像,生成更新后的VR场景。
在一个可选示例中,步骤130可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的场景更新单元执行。
在本实施例中,更新后的VR场景可以呈现具有目标属性的目标对象。
作为示例,执行主体(例如可以是呈现当前VR场景的智能手机)中可以预先装载有VR场景生成工具,VR场景生成工具例如可以是OpenGL ES、GoogleCardboard等。执行主体可以将步骤120中提取的目标全景图像输入VR场景生成工具,经过对准、拼接等处理后,可以得到更新后的VR场景。
在本实施例的一些可选的实现方式中,可以通过如下步骤生成更新后的VR场景:基于当前全景图像的游走点位与目标全景图像的游走点位,确定当前全景图像与目标全景图像的对应关系;基于对应关系,将当前全景图像替换为对应的目标全景图像。
通常,VR场景中的浏览点位与全景图像的游走点位是对应的,同一个游走点位得到的全景图像在构建VR场景时,图像对准和拼接等环节所需要的计算量较小,构建VR场景的效率也较高。
本实施方式中,可以根据游走点位确定当前全景图像与目标全景图像的对应关系,例如,可以确定出对应同一游走点位的当前全景图像和目标全景图像,并记录该当前全景图像与该目标全景图像之间的对应关系,这样,后续按照该对应关系,可以将该当前全景图像替换为该目标全景图像,以此实现全景图像的替换,可以降低运算量,提高更新后的VR场景的生成效率。
步骤140、呈现更新后的VR场景。
在一个可选示例中,步骤140可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的场景呈现单元执行。
在一个可选示例中,用户通过智能手机进行VR看房时,当前VR场景中的墙体颜色为白色,用户可以通过点击手机屏幕的方式将墙体选中为目标对象,并选中黄色为目标属性,以生成对象编辑指令并发送至智能手机。智能手机可以根据对象编辑指令从全景图像库提取墙体颜色为黄色的目标全景图像,然后将当前VR场景中的当前全景图像替换为目标全景图像,得到更新后的VR场景,并通过手机屏幕呈现给用户。此时,用户浏览的VR场景中墙体的颜色为黄色。
可以理解的是,更新后的VR场景被呈现时,即成为新的当前VR场景,相应地,上 述步骤110至步骤140同样适用于新的当前VR场景。
本公开实施例提供的用于处理VR场景的方法,接收到用户针对当前VR场景的对象编辑指令时,确定对象编辑指令指向的目标对象和目标属性,目标对象为当前VR场景中包含的对象;从预设的全景图像库中获取与对象编辑指令匹配的目标全景图像,目标全景图像中包含的目标对象具有目标属性;将构成当前VR场景的当前全景图像替换为目标全景图像,生成更新后的VR场景;呈现更新后的VR场景。由用户选择VR场景中对象的属性,并根据用户选定的属性将构成当前VR场景的当前全景图像替换为对应的目标全景图像,然后通过更新后的VR场景向用户呈现用户选定的属性,可以更灵活地响应用户需求,从而提升用户在浏览VR场景时的体验。
在本实施例的一些可选的实现方式中,获取与对象编辑指令匹配的目标全景图像,包括:从预设的全景图像库中提取与对象编辑指令匹配的目标全景图像;其中,全景图像库中预存有当前VR场景中包含的至少一个对象分别对应的全景图像集;对于任一全景图像集,该全景图像集根据该全景图像集对应的对象所具有的不同属性划分为多个全景图像子集,该全景图像集对应的对象在该全景图像集划分为的不同全景图像子集中具有不同的属性,且除该全景图像集对应的对象之外的其他对象在该全景图像集划分为的全景图像子集中的属性均与当前全景图像一致;对于任一全景图像子集,该全景图像子集包括该全景图像子集对应的对象在不同游走点位下经渲染处理得到的全景图像。
在本实施方式中,对于任一个全景图像子集,利用VR场景搭建工具对该全景图像子集中的全景图像进行对准、拼接等处理,可以生成一个VR场景。对于任一个全景图像集,该全景图像集对应的对象在不同全景图像子集对应的VR场景中,具有不同的属性,且该对象之外的其他对象的属性与当前VR场景中的属性一致。
技术人员可以利用图像处理软件对原始全景图像进行渲染处理,根据对象的属性,将原始全景图像中的对象渲染成该属性对应的视觉效果,如此一来,可以通过不同的全景图像呈现不同的属性。
进一步结合图2对本实施方式中的全景图像库进行说明,图2示出了本公开的用于处理VR场景的一个实施例中全景图像库的存储结构示意图,如图2所示,全景图像库200中存储有两个全景图像集,其中,全景图像集210对应的对象为墙体,全景图像集220对应的对象为家具。全景图像集210根据墙体的颜色划分为全景图像子集211、全景图像子集212和全景图像子集213,其中,每个全景图像子集可以包括六个全景图像,每个游走点位对应两个全景图像;在全景图像子集211中,墙体的颜色均为黄色;在全景图像子集 212中,墙体的颜色均为白色;在全景图像子集213中,墙体的颜色均为绿色;并且,在全景图像子集211、全景图像子集212、全景图像子集213中,家具的材质可以与当前VR场景中的材质一致,例如可以是木材。
全景图像集220根据家具的材质划分为全景图像子集221和全景图像子集222,每个全景图像子集可以包括六个全景图像,每个游走点位对应两个全景图像;在全景图像子集221中,家具的材质均为木材;在全景图像子集222中,家具的材质均为金属;并且,在全景图像子集221和全景图像子集222中,墙体的颜色可以与当前VR场景中的材质一致,例如可以是白色。
在本实施方式中,可以根据对象的属性将预先生成的全景图像存储在全景图像库中,以便于从中提取目标全景图像,一方面可以提高获取目标全景图像的效率,进而提高VR场景的更新效率;另一方面,与实时渲染相比,可以降低对图像处理设备的性能需求,有助于降低开发成本。
在一个可选示例中,上述步骤120可以包括以下步骤:基于目标对象,从全景图像库中确定目标全景图像集;基于目标属性,从目标全景图像集中确定目标全景图像子集;从目标全景图像子集中提取目标全景图像。
需要说明的是,全景图像库可以存储在执行主体的本地存储空间,或者可以设置在云端服务器,本公开对此不做限定。
继续结合图2进行说明,假设步骤110中确定的目标对象为墙体,目标属性为黄色,则执行主体(例如可以是呈现VR场景的智能手机)可以直接从全景图像库200中检索出墙体对应的全景图像集210;然后基于“黄色”,从全景图像集210中检索出全景图像子集211,从全景图像子集211中提取的全景图像即为目标全景图像。
再例如,当全景图像库200设置在云端服务器时,执行主体(例如可以是呈现VR场景的智能手机)可以基于“墙体”和“黄色”生成检索条件,并发送至云端服务器。由云端服务器根据检索条件从全景图像库200中确定出全景图像子集211,然后生成全景图像子集211的链接,并将生成的链接发送至执行主体。执行主体可以根据链接从全景图像子集211中提取目标全景图像。
在本实施方式中,可以根据目标对象和目标属性逐级检索全景图像库,以获取目标全景图像,可以进一步提高获取目标全景图像的效率。
接着参考图3,图3示出了本公开的用于处理VR场景的又一个实施例的流程图,如图3所示,该流程包括步骤310至步骤370,下面分别对各个步骤进行举例说明。
步骤310、呈现当前VR场景。
在一个可选示例中,步骤310可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的当前呈现单元执行。
步骤320、响应于接收到列表呈现指令,在当前VR场景中的预设位置呈现对象编辑指令列表,以使用户在对象编辑指令列表中选取对象编辑指令。
在一个可选示例中,步骤320可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的列表呈现单元执行。
作为示例,用户在使用智能手机浏览当前VR场景时,当用户单击手机屏幕时,智能手机可以在当前VR场景中的预设位置呈现对象编辑指令列表。若当前VR场景中包括多个对象,可以将多个对象的对象编辑指令集成在一个对象编辑指令列表,也可以令多个对象具有分别对应的对象编辑指令列表。
在本实施例的一个可选的示例中,列表呈现指令还可以为触发类的指令,例如,当视线与对象的夹角达到预设角度时,可以在预设位置呈现对象编辑指令列表。可选地,预设角度可以为10°、15°或者其他角度,在此不再一一列举。
参考图4,图4示出了本公开的用于处理VR场景的方法一个应用场景中对象编辑指令列表的示意图。在图4所示的VR看房场景中,目标对象可以为墙体,对象编辑指令列表400可以采用图形的形式呈现在墙体周围,并以图标颜色表征墙体的颜色属性。
在本实施例的一些可选实现方式中,在当前VR场景中的预设位置呈现对象编辑指令列表,包括:基于对象编辑指令列表对应的对象在当前VR场景中的位置,确定预设位置;在预设位置悬浮呈现对象编辑指令列表。
作为示例,可以将预设位置设置在靠近对象的三维空间中。
在本实施方式中,可以根据对象所在的位置确定对象编辑指令列表的呈现位置,然后将对象编辑指令列表悬浮呈现在对象周围的三维空间中,以此表征对象编辑指令列表与对象的对应关系,由此可以提高操作的便利性。
步骤330、接收用户针对对象编辑指令的选取指令。
在一个可选示例中,步骤330可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的指令确定单元执行。
继续参考图4,用户可以在编辑指令列表中点击图标,以选取对应的对象编辑指令。
步骤340、响应于接收到用户针对当前VR场景的对象编辑指令,确定编辑指令指向的目标对象和目标属性。
在一个可选示例中,步骤340可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的指令接收单元执行。
步骤350、从预设的全景图像库中获取与对象编辑指令匹配的目标全景图像。
在一个可选示例中,步骤350可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的图像获取单元执行。
步骤360、将构成当前VR场景的当前全景图像替换为目标全景图像,生成更新后的VR场景。
在一个可选示例中,步骤360可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的场景更新单元执行。
步骤370、呈现更新后的VR场景。
在一个可选示例中,步骤370可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的场景呈现单元执行。
需要说明的是,步骤340至步骤370与前述步骤110至步骤140相对应,此处不再赘述。
从图3可以看出,图3所示的实施例体现了:接收到用户的列表呈现指令时,在当前VR场景的预设位置呈现对象编辑指令列表的步骤,用户可以根据自身的喜好,进行对象编辑指令的选取,由此可以根据不同用户的喜好,采用不同的呈现策略,使VR场景中的对象按照用户所需的属性进行呈现,从而进一步提升用户体验。
在上述实施例的一些可选实现方式中,该方法还可以进一步包括:响应于接收到场景跳转指令,关闭对象编辑指令列表,并将当前VR场景跳转至场景跳转指令指向的新VR场景;和/或,响应于接收到视角旋转指令,保持对象编辑指令列表的位置不变,并按照视角旋转指令变换当前VR场景的视角。
在一个可选示例中,响应于接收到场景跳转指令,关闭对象编辑指令列表,并将当前VR场景跳转至场景跳转指令指向的新VR场景的步骤可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的场景跳转单元执行。
在一个可选示例中,响应于接收到视角旋转指令,保持对象编辑指令列表的位置不变,并按照视角旋转指令变换当前VR场景的视角的操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的视角变换单元执行。
在本实现方式中,新VR场景与当前VR场景分别表示不同的真实场景,两个VR场景中的对象存在差异,因此,在场景跳转的过程中,通过关闭对象编辑指令列表,可以避 免对象编辑指令列表与新VR场景的冲突。
可以理解的是,新VR场景被呈现时,即成为新的当前VR场景,则针对新的当前VR场景,上述实施例仍然适用。
在本实现方式中,用户在浏览当前VR场景时,可以通过旋转视角来浏览当前VR场景中的不同区域,在此过程中,可以保持对象编辑指令列表的位置不变,以免对用户的浏览过程造成干扰。
本公开实施例提供的任一种用于处理VR场景的方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本公开实施例提供的任一种用于处理VR场景的方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本公开实施例提及的任一种用于处理VR场景的方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:只读存储器(ROM)、随机存取存储器(RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
下面结合图5对本公开的用于处理VR场景的装置进行示例性说明,图5所示的装置可以用于实现本公开上述各方法实施例,如图5所示,该装置包括:指令接收单元510,被配置成响应于接收到用户针对当前VR场景的对象编辑指令,确定编辑指令指向的目标对象和目标属性,目标对象为当前VR场景中包含的对象;图像获取单元520,被配置成获取与对象编辑指令匹配的目标全景图像,目标全景图像中包含的目标对象具有目标属性,且目标对象之外的其他对象在目标全景图像中的属性与在当前VR场景中的属性一致;场景更新单元530,被配置成将构成当前VR场景的当前全景图像替换为目标全景图像,生成更新后的VR场景;场景呈现单元540,被配置成呈现更新后的VR场景。
在其中一个可选实施例中,图像获取单元520被进一步配置成:从预设的全景图像库中提取与对象编辑指令匹配的目标全景图像;其中,全景图像库中预存有当前VR场景中包含的至少一个对象分别对应的全景图像集;对于任一全景图像集,该全景图像集根据该全景图像集对应的对象所具有的不同属性划分为多个全景图像子集,该全景图像集对应的对象在该全景图像集划分为的不同全景图像子集中具有不同的属性,且除该全景图像集对应的对象之外的其他对象在该全景图像集划分为的全景图像子集中的属性均与当前全景图像一致;对于任一全景图像子集,该全景图像子集包括该全景图像子集对应的对象在不同游走点位下经渲染处理得到的全景图像。
在其中一个可选实施例中,图像获取单元520进一步包括:第一索引模块,被配置成基于目标对象,从全景图像库中确定目标全景图像集;第二索引模块,被配置成基于目标属性,从目标全景图像集中确定目标全景图像子集;图像提取模块,被配置成从目标全景图像子集中提取目标全景图像。
在其中一个可选实施例中,图像获取单元520被配置成:基于目标对象和目标属性,对当前全景图像进行渲染处理,得到目标全景图像。
在其中一个可选实施例中,场景更新单元530进一步包括:匹配模块,被配置成基于当前全景图像的游走点位与目标全景图像的游走点位,确定当前全景图像与目标全景图像的对应关系;替换模块,被配置成将当前全景图像替换为对应的目标全景图像。
在其中一个可选实施例中,该装置还包括:当前呈现单元,被配置成呈现当前VR场景;列表呈现单元,被配置成响应于接收到列表呈现指令,在当前VR场景中的预设位置呈现对象编辑指令列表,以使用户在对象编辑指令列表中选取对象编辑指令;以及,指令确定单元,被配置成接收用户针对对象编辑指令的选取指令。
在其中一个可选实施例中,列表呈现单元包括:位置确定模块,被配置成基于对象编辑指令列表对应的对象在当前VR场景中的位置,确定预设位置;列表呈现模块,被配置成在预设位置悬浮呈现对象编辑指令列表。
在其中一个可选实施例中,该装置还包括:场景跳转单元,被配置成响应于接收到场景跳转指令,关闭对象编辑指令列表,并将当前VR场景跳转至场景跳转指令指向的新VR场景;和/或,视角变换单元,被配置成响应于接收到视角旋转指令,保持对象编辑指令列表的位置不变,并按照视角旋转指令变换当前VR场景的视角。
在其中一个可选实施例中,当目标对象为墙体时,目标属性可以为墙体颜色。
下面,参考图6来描述根据本公开实施例的电子设备。该电子设备可以是第一设备和第二设备中的任一个或两者、或与它们独立的单机设备,该单机设备可以与第一设备和第二设备进行通信,以从它们接收所采集到的输入信号。
图6图示了根据本公开实施例的电子设备的框图。
如图6所示,电子设备包括一个或多个处理器和存储器。
处理器可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其他形式的处理单元,并且可以控制电子设备中的其他组件以执行期望的功能。
存储器可以存储一个或多个计算机程序产品,所述存储器可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包 括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序产品,处理器可以运行所述计算机程序产品,以实现上文所述的本公开的各个实施例的用于处理VR场景的方法以及/或者其他期望的功能。
在一个示例中,电子装置还可以包括:输入装置和输出装置,这些组件通过总线系统和/或其他形式的连接机构(未示出)互连。
此外,该输入装置还可以包括例如键盘、鼠标等等。
该输出装置可以向外部输出各种信息,包括确定出的距离信息、方向信息等。该输出装置可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出装置等等。
当然,为了简化,图6中仅示出了该电子设备中与本公开有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备还可以包括任何其他适当的组件。
除了上述方法和设备以外,本公开的实施例还可以是计算机程序产品,其包括计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述部分中描述的根据本公开各种实施例的用于处理VR场景的方法中的步骤。
所述计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本公开实施例操作的程序代码,所述程序设计语言包括面向对象的程序设计语言,诸如Java、C++等,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。
此外,本公开的实施例还可以是计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述部分中描述的根据本公开各种实施例的用于处理VR场景的方法中的步骤。
所述计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以包括但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器、只读存储器、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑 盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,在本公开中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本公开中涉及的器件、装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。
可能以许多方式来实现本公开的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本公开的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本公开的方法的步骤不限于以上具体描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本公开实施为记录在记录介质中的程序,这些程序包括用于实现根据本公开的方法的机器可读指令。因而,本公开还覆盖存储用于执行根据本公开的方法的程序的记录介质。
还需要指出的是,在本公开的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本公开。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本公开的范围。因此,本公开不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本公开的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。

Claims (19)

  1. 一种用于处理VR场景的方法,其特征在于,包括:
    响应于接收到用户针对当前VR场景的对象编辑指令,确定所述对象编辑指令指向的目标对象和目标属性,所述目标对象为所述当前VR场景中包含的对象;
    获取与所述对象编辑指令匹配的目标全景图像,所述目标全景图像中包含的目标对象具有所述目标属性,且所述目标对象之外的其他对象在所述目标全景图像中的属性与在所述当前VR场景中的属性一致;
    将构成所述当前VR场景的当前全景图像替换为所述目标全景图像,生成更新后的VR场景;
    呈现所述更新后的VR场景。
  2. 根据权利要求1所述的方法,其特征在于,获取与所述对象编辑指令匹配的目标全景图像,包括:从预设的全景图像库中提取与所述对象编辑指令匹配的目标全景图像,其中,所述全景图像库中预存有所述当前VR场景中包含的至少一个对象分别对应的全景图像集;
    对于任一所述全景图像集,该全景图像集根据该全景图像集对应的对象所具有的不同属性划分为多个全景图像子集,该全景图像集对应的对象在该全景图像集划分为的不同全景图像子集中具有不同的属性,且除该全景图像集对应的对象之外的其他对象在该全景图像集划分为的全景图像子集中的属性均与所述当前全景图像一致;
    对于任一所述全景图像子集,该全景图像子集包括该全景图像子集对应的对象在不同的游走点位下经渲染处理得到的全景图像。
  3. 根据权利要求2所述的方法,其特征在于,从预设的全景图像库中提取与所述对象编辑指令匹配的目标全景图像,包括:
    基于所述目标对象,从所述全景图像库中确定目标全景图像集;
    基于所述目标属性,从所述目标全景图像集中确定目标全景图像子集;
    从所述目标全景图像子集中提取所述目标全景图像。
  4. 根据权利要求1所述的方法,其特征在于,获取与所述对象编辑指令匹配的目标全景图像,包括:
    基于所述目标对象和所述目标属性,对所述当前全景图像进行渲染处理,得到所述目标全景图像。
  5. 根据权利要求1所述的方法,其特征在于,将构成所述当前VR场景的当前全景图像替换为所述目标全景图像,包括:
    基于所述当前全景图像的游走点位与所述目标全景图像的游走点位,确定所述当前全景图像与所述目标全景图像的对应关系;
    基于所述对应关系,将所述当前全景图像替换为对应的所述目标全景图像。
  6. 根据权利要求1所述的方法,其特征在于,接收到用户针对当前VR场景的对象编辑指令之前,所述方法还包括:
    呈现所述当前VR场景;
    响应于接收到列表呈现指令,在所述当前VR场景中的预设位置呈现对象编辑指令列表,以使所述用户在所述对象编辑指令列表中选取对象编辑指令;以及,
    接收所述用户针对对象编辑指令的选取指令。
  7. 根据权利要求6所述的方法,其特征在于,在所述当前VR场景中的预设位置呈现对象编辑指令列表,包括:
    基于所述对象编辑指令列表对应的对象在所述当前VR场景中的位置,确定所述预设位置;
    在所述预设位置悬浮呈现所述对象编辑指令列表。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,所述方法还包括:
    响应于接收到场景跳转指令,关闭所述对象编辑指令列表,并将所述当前VR场景跳转至所述场景跳转指令指向的新VR场景;和/或
    响应于接收到视角旋转指令,保持所述对象编辑指令列表的位置不变,并按照所述视角旋转指令变换所述当前VR场景的视角。
  9. 一种用于处理VR场景的装置,其特征在于,包括:
    指令接收单元,被配置成响应于接收到用户针对当前VR场景的对象编辑指令,确定所述对象编辑指令指向的目标对象和目标属性,所述目标对象为所述当前VR场景中包含的对象;
    图像获取单元,被配置成获取与所述对象编辑指令匹配的目标全景图像,所述目标全景图像中包含的目标对象具有所述目标属性,且所述目标对象之外的其他对象在所述目标全景图像中的属性与在所述当前VR场景中的属性一致;
    场景更新单元,被配置成将构成所述当前VR场景的当前全景图像替换为所述目标全景图像,生成更新后的VR场景;
    场景呈现单元,被配置成呈现所述更新后的VR场景。
  10. 根据权利要求9所述的装置,其特征在于,所述图像获取单元被配置成:
    从预设的全景图像库中提取与所述对象编辑指令匹配的目标全景图像,其中,所述全景图像库中预存有所述当前VR场景中包含的至少一个对象分别对应的全景图像集;
    对于任一所述全景图像集,该全景图像集根据该全景图像集对应的对象所具有的不同属性划分为多个全景图像子集,该全景图像集对应的对象在该全景图像集划分为的不同全景图像子集中具有不同的属性,且除该全景图像集对应的对象之外的其他对象在该全景图像集划分为的全景图像子集中的属性均与所述当前全景图像一致;
    对于任一所述全景图像子集,该全景图像子集包括该全景图像子集对应的对象在不同的游走点位下经渲染处理得到的全景图像。
  11. 根据权利要求10所述的装置,其特征在于,所述图像获取单元,包括:
    第一索引模块,被配置成基于所述目标对象,从所述全景图像库中确定目标全景图像集;
    第二索引模块,被配置成基于所述目标属性,从所述目标全景图像集中确定目标全景图像子集;
    图像提取模块,被配置成从所述目标全景图像子集中提取所述目标全景图像。
  12. 根据权利要求9所述的装置,其特征在于,所述图像获取单元被配置成:
    基于所述目标对象和所述目标属性,对所述当前全景图像进行渲染处理,得到所述目标全景图像。
  13. 根据权利要求9所述的装置,其特征在于,所述场景更新单元,包括:
    匹配模块,被配置成基于所述当前全景图像的游走点位与所述目标全景图像的游走点位,确定所述当前全景图像与所述目标全景图像的对应关系;
    替换模块,被配置成基于所述对应关系,将所述当前全景图像替换为对应的所述目标全景图像。
  14. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    当前呈现单元,被配置成呈现所述当前VR场景;
    列表呈现单元,被配置成响应于接收到列表呈现指令,在所述当前VR场景中的预设位置呈现对象编辑指令列表,以使所述用户在所述对象编辑指令列表中选取对象编辑指令;以及,
    指令确定单元,被配置成接收所述用户针对对象编辑指令的选取指令。
  15. 根据权利要求14所述的装置,其特征在于,所述列表呈现单元,包括:
    位置确定模块,被配置成基于所述对象编辑指令列表对应的对象在所述当前VR场景中的位置,确定所述预设位置;
    列表呈现模块,被配置成在所述预设位置悬浮呈现所述对象编辑指令列表。
  16. 根据权利要求9至15中任一项所述的装置,其特征在于,所述装置还包括:
    场景跳转单元,被配置成响应于接收到场景跳转指令,关闭所述对象编辑指令列表,并将所述当前VR场景跳转至所述场景跳转指令指向的新VR场景;和/或
    视角变换单元,被配置成响应于接收到视角旋转指令,保持所述对象编辑指令列表的位置不变,并按照所述视角旋转指令变换所述当前VR场景的视角。
  17. 一种计算机程序产品,包括计算机程序指令,其特征在于,该计算机程序指令被处理器执行时,实现上述权利要求1至8中任一项所述的用于处理VR场景的方法。
  18. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,该计算机程序指令被处理器执行时,实现上述权利要求1至8中任一项所述的用于处理VR场景的方法。
  19. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储所述处理器可执行指令的存储器;
    所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现上述权利要求1至8中任一项所述的用于处理VR场景的方法。
PCT/CN2022/140021 2022-04-12 2022-12-19 用于处理vr场景的方法、装置和计算机程序产品 WO2023197657A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210376410.0A CN114463104B (zh) 2022-04-12 2022-04-12 用于处理vr场景的方法、装置和计算机可读存储介质
CN202210376410.0 2022-04-12

Publications (1)

Publication Number Publication Date
WO2023197657A1 true WO2023197657A1 (zh) 2023-10-19

Family

ID=81417047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140021 WO2023197657A1 (zh) 2022-04-12 2022-12-19 用于处理vr场景的方法、装置和计算机程序产品

Country Status (2)

Country Link
CN (1) CN114463104B (zh)
WO (1) WO2023197657A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463104B (zh) * 2022-04-12 2022-07-26 贝壳技术有限公司 用于处理vr场景的方法、装置和计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180342043A1 (en) * 2017-05-23 2018-11-29 Nokia Technologies Oy Auto Scene Adjustments For Multi Camera Virtual Reality Streaming
CN111399655A (zh) * 2020-03-27 2020-07-10 吴京 一种基于vr同步的图像处理方法及装置
CN111951374A (zh) * 2020-07-10 2020-11-17 北京城市网邻信息技术有限公司 房屋装修数据的处理方法、装置、电子设备及存储介质
CN113554738A (zh) * 2021-07-27 2021-10-26 广东三维家信息科技有限公司 全景图像展示方法、装置、电子设备及存储介质
CN114299261A (zh) * 2021-12-28 2022-04-08 江苏华泽微福科技发展有限公司 一种基于虚拟现实技术的客户看房系统
CN114463104A (zh) * 2022-04-12 2022-05-10 贝壳技术有限公司 用于处理vr场景的方法、装置和计算机程序产品

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140122292A (ko) * 2013-03-28 2014-10-20 삼성전자주식회사 디스플레이 장치의 디스플레이 방법 및 디스플레이 장치
CN106780421A (zh) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 基于全景平台的装修效果展示方法
CN106652047A (zh) * 2016-12-29 2017-05-10 四川跳爪信息技术有限公司 一种可自由编辑的虚拟场景全景体验系统
CN106980728A (zh) * 2017-03-30 2017-07-25 理光图像技术(上海)有限公司 房屋装潢设计体验装置及系统
CN107169247B (zh) * 2017-06-30 2020-06-30 猪八戒股份有限公司 基于3d云设计的家居行业服务系统
GB2569979B (en) * 2018-01-05 2021-05-19 Sony Interactive Entertainment Inc Rendering a mixed reality scene using a combination of multiple reference viewing points
CN111985022B (zh) * 2020-06-23 2022-07-19 北京城市网邻信息技术有限公司 一种线上装修的处理方法、装置、电子设备及存储介质
CN112051956A (zh) * 2020-09-09 2020-12-08 北京五八信息技术有限公司 一种房源的交互方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180342043A1 (en) * 2017-05-23 2018-11-29 Nokia Technologies Oy Auto Scene Adjustments For Multi Camera Virtual Reality Streaming
CN111399655A (zh) * 2020-03-27 2020-07-10 吴京 一种基于vr同步的图像处理方法及装置
CN111951374A (zh) * 2020-07-10 2020-11-17 北京城市网邻信息技术有限公司 房屋装修数据的处理方法、装置、电子设备及存储介质
CN113554738A (zh) * 2021-07-27 2021-10-26 广东三维家信息科技有限公司 全景图像展示方法、装置、电子设备及存储介质
CN114299261A (zh) * 2021-12-28 2022-04-08 江苏华泽微福科技发展有限公司 一种基于虚拟现实技术的客户看房系统
CN114463104A (zh) * 2022-04-12 2022-05-10 贝壳技术有限公司 用于处理vr场景的方法、装置和计算机程序产品

Also Published As

Publication number Publication date
CN114463104B (zh) 2022-07-26
CN114463104A (zh) 2022-05-10

Similar Documents

Publication Publication Date Title
US7013435B2 (en) Three dimensional spatial user interface
US6636246B1 (en) Three dimensional spatial user interface
US11636660B2 (en) Object creation with physical manipulation
EP1679589B1 (en) System and methods for inline property editing in tree view based editors
US20100257468A1 (en) Method and system for an enhanced interactive visualization environment
US20090293008A1 (en) Information Processing Device, User Interface Method, And Information Storage Medium
CN110597773B (zh) 在计算机设备和虚拟现实设备之间共享文件的方法和装置
WO2023202349A1 (zh) 三维标签的交互呈现方法、装置、设备、介质和程序产品
US20200186869A1 (en) Method and apparatus for referencing, filtering, and combining content
WO2023197657A1 (zh) 用于处理vr场景的方法、装置和计算机程序产品
WO2019057191A1 (zh) 内容检索方法、终端、服务器、电子设备及存储介质
US11093548B1 (en) Dynamic graph for time series data
US11003467B2 (en) Visual history for content state changes
WO2023098915A1 (zh) 三维房屋模型中的内容展示方法及装置
WO2024055462A1 (zh) Vr场景的处理方法、装置、电子设备和存储介质
US10956497B1 (en) Use of scalable vector graphics format to encapsulate building floorplan and metadata
JP2006107020A (ja) コンテンツ・マネジメント・システム及びコンテンツ・マネジメント方法、並びにコンピュータ・プログラム
CN114116622A (zh) 一种显示设备及文件显示方法
US11036468B2 (en) Human-computer interface for navigating a presentation file
CN115687704A (zh) 信息显示方法、装置、电子设备及计算机可读存储介质
CN115454255B (zh) 物品展示的切换方法和装置、电子设备、存储介质
WO2018187534A1 (en) Method and apparatus for referencing, filtering, and combining content
US11880424B1 (en) Image generation from HTML data using incremental caching
US11074735B2 (en) Multistep interactive image generation utilizing knowledge store
CN115455552A (zh) 模型的编辑方法和装置、电子设备、存储介质、产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22937284

Country of ref document: EP

Kind code of ref document: A1