WO2024104248A1 - Method and apparatus for rendering a virtual panorama, device, and storage medium - Google Patents

Method and apparatus for rendering a virtual panorama, device, and storage medium

Info

Publication number
WO2024104248A1
WO2024104248A1 · PCT/CN2023/130744 · CN2023130744W
Authority
WO
WIPO (PCT)
Prior art keywords
map
virtual
panoramic
faceted
virtual scene
Prior art date
Application number
PCT/CN2023/130744
Other languages
English (en)
Chinese (zh)
Inventor
高磊
王璨
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2024104248A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the embodiments of the present disclosure relate to a method, device, equipment and storage medium for rendering a virtual panorama.
  • VR panoramic live broadcast generally uses binocular panoramic cameras for real-time shooting.
  • since the panoramic video is in equirectangular projection (ERP) format, if a traditional rendering method is used to render the virtual scene, the rendered 3D virtual scene will be severely distorted, affecting the display effect of the panoramic video.
  • the embodiments of the present disclosure provide a method, device, equipment and storage medium for rendering a virtual panorama, which can avoid distortion of the rendered virtual panorama and improve the display effect of the augmented reality panorama.
  • the present disclosure provides a method for rendering a virtual panorama, including: acquiring a virtual scene, and collecting images of the virtual scene in different directions to obtain multiple virtual scene graphs; filling the multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map, wherein the multi-faceted map is composed of multiple maps; projecting the multi-faceted target map onto a panoramic map; and rendering the projected panoramic map to obtain a virtual panoramic image.
  • the present disclosure also provides a virtual panorama rendering device, including:
  • a virtual scene graph acquisition module is used to acquire a virtual scene and collect images of the virtual scene in different directions to obtain multiple virtual scene graphs;
  • a multi-faceted target map acquisition module used to fill the multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map; wherein the multi-faceted map is composed of multiple maps;
  • a projection module, used for projecting the multi-faceted target map onto a panoramic map;
  • the rendering module is used to render the projected panoramic map to obtain a virtual panoramic image.
  • the present disclosure also provides an electronic device, the electronic device comprising:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method for rendering a virtual panorama as described in the embodiments of the present disclosure.
  • the embodiment of the present disclosure further provides a storage medium containing computer executable instructions, which, when executed by a computer processor, are used to execute the method for rendering a virtual panorama as described in the embodiment of the present disclosure.
  • FIG1 is a schematic diagram of a flow chart of a method for rendering a virtual panorama provided by an embodiment of the present disclosure
  • FIG2 is a schematic diagram of setting a virtual camera in a virtual scene provided by an embodiment of the present disclosure
  • FIG3 is an expanded view of a multi-faceted map provided by an embodiment of the present disclosure.
  • FIG4 is an example diagram of determining a projection relationship between a panoramic map and a multi-faceted target map provided by an embodiment of the present disclosure
  • FIG5 is an example diagram of projecting a multi-faceted target map onto a panoramic map provided by an embodiment of the present disclosure
  • FIG6 is an example diagram of pixel points falling within a panoramic display viewing angle provided by an embodiment of the present disclosure.
  • FIG7 is an example diagram of a 180-degree virtual panoramic image provided by an embodiment of the present disclosure.
  • FIG8 is an example diagram of a binocular virtual panoramic image provided by an embodiment of the present disclosure.
  • FIG9 is a schematic diagram of the structure of a virtual panorama rendering device provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure.
  • a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information.
  • the user can autonomously choose whether to provide personal information to software or hardware such as an electronic device, application, server, or storage medium that performs the operation of the technical solution of the present disclosure according to the prompt message.
  • the method of sending the prompt information to the user may be, for example, a pop-up window, in which the prompt information may be presented in text.
  • the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
  • Figure 1 is a flow chart of a method for rendering a virtual panorama provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is applicable to the case of rendering a virtual panorama.
  • the method can be executed by a rendering device for a virtual panorama, which can be implemented in the form of software and/or hardware, for example by an electronic device such as a mobile terminal, a PC or a server.
  • the method comprises:
  • the virtual scene may be a pre-built virtual special effects scene, which may be built according to actual needs, and the content and type of the virtual scene are not limited here. For example, it may be a virtual scene of a school of fish swimming in the deep sea, or a virtual scene of heavy snow.
  • the method of acquiring images of the virtual scene in different directions to obtain multiple virtual scene graphs may be: controlling multiple virtual cameras to acquire images of the virtual scene in different directions to obtain multiple virtual scene graphs.
  • multiple virtual cameras are set in the virtual scene and face different directions of the virtual scene.
  • six virtual cameras can be created in the virtual scene, and the six virtual cameras face the front, back, left, right, top and bottom directions respectively.
  • the image size (width and height) captured by the virtual camera and the camera field of view (FOV) can be set arbitrarily according to actual needs.
  • for example, the image size captured by each virtual camera can be set to 512×512 pixels, and the camera FOV can be set to 90 degrees.
  • FIG. 2 is a schematic diagram of setting a virtual camera in a virtual scene in this embodiment.
  • the virtual scene is an underwater virtual scene.
  • Six virtual cameras are created in the virtual scene to collect images in six directions of the underwater virtual scene. In this embodiment, collecting images of the virtual scene in multiple directions is conducive to the subsequent generation of a virtual panorama.
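  • as an illustration of such a six-camera rig, the following sketch builds six 90-degree cameras facing the front, back, left, right, top and bottom; the function names, the axis convention (+Y front, +Z up, matching the face labels used below) and the matrix layout are assumptions for illustration, not taken from the patent.

```python
import numpy as np

# Hypothetical six-camera rig: forward and up vectors per cube face
# (+Y front, +Z up, matching the face labels used in this document).
FACE_DIRECTIONS = {
    "+X": (np.array([ 1.0,  0.0,  0.0]), np.array([0.0,  0.0, 1.0])),  # right
    "-X": (np.array([-1.0,  0.0,  0.0]), np.array([0.0,  0.0, 1.0])),  # left
    "+Y": (np.array([ 0.0,  1.0,  0.0]), np.array([0.0,  0.0, 1.0])),  # front
    "-Y": (np.array([ 0.0, -1.0,  0.0]), np.array([0.0,  0.0, 1.0])),  # back
    "+Z": (np.array([ 0.0,  0.0,  1.0]), np.array([0.0, -1.0, 0.0])),  # up
    "-Z": (np.array([ 0.0,  0.0, -1.0]), np.array([0.0,  1.0, 0.0])),  # down
}

def look_at(eye, forward, up):
    """Right-handed view matrix for one virtual camera."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_deg=90.0, near=0.1, far=100.0):
    """Square 90-degree projection: each face covers a quarter turn,
    so six 512x512 renders exactly tile the view sphere."""
    t = np.tan(np.radians(fov_deg) / 2.0)
    p = np.zeros((4, 4))
    p[0, 0] = p[1, 1] = 1.0 / t
    p[2, 2] = -(far + near) / (far - near)
    p[2, 3] = -2.0 * far * near / (far - near)
    p[3, 2] = -1.0
    return p

# One view/projection pair per direction, all sharing the scene origin.
cameras = {face: (look_at(np.zeros(3), fwd, up), perspective())
           for face, (fwd, up) in FACE_DIRECTIONS.items()}
```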
  • the polyhedral map is composed of multiple maps.
  • the polyhedral map can be an expanded view of a three-dimensional solid. For example, if the polyhedral map corresponds to a triangular prism, it includes 5 faces; if the polyhedral map corresponds to a cube, it includes 6 faces, each of which is a square.
  • the polyhedral map is a map corresponding to a cube containing six map faces, and is a pre-built polyhedral map of fixed size.
  • the multi-faceted target map can be understood as a multi-faceted map filled with the virtual scene graphs.
  • each virtual scene graph corresponds to one map face in the multi-faceted map.
  • filling multiple virtual scene graphs into the multi-faceted map can be understood as: filling the pixel values of the multiple virtual scene graphs into the corresponding map faces of the multi-faceted map, thereby obtaining the multi-faceted target map.
  • the method of filling multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map can be: obtaining the correspondence between the virtual scene graph and each map in the multi-faceted map; filling the multiple virtual scene graphs into the corresponding maps according to the correspondence to obtain the multi-faceted target map.
  • the correspondence between the virtual scene graph and the map can be understood as the correspondence between the virtual camera and the map, which is pre-established, and the virtual scene graphs of different orientations correspond to one of the maps in the multi-faceted map.
  • Figure 3 is an expanded view of a multi-faceted map corresponding to a cube in this embodiment. As shown in Figure 3, the multi-faceted map includes six maps: +X, -X, +Y, -Y, +Z and -Z.
  • the correspondence between the virtual scene graph and the map surface can be: +X map corresponds to the virtual scene graph facing right, -X map corresponds to the virtual scene graph facing left, +Y map corresponds to the virtual scene graph facing forward, -Y map corresponds to the virtual scene graph facing backward, +Z map corresponds to the virtual scene graph facing upward, and -Z map corresponds to the virtual scene graph facing downward.
  • multiple virtual scene graphs are filled into the corresponding map faces according to this correspondence: the virtual scene graph collected by the right-facing virtual camera is filled into the +X face, the left-facing one into the -X face, the front-facing one into the +Y face, the back-facing one into the -Y face, the upward-facing one into the +Z face, and the downward-facing one into the -Z face (see the sketch below).
  • a set number of virtual scene graphs are filled into corresponding mapping surfaces according to the corresponding relationship, which can improve the accuracy of image filling, thereby ensuring the display effect of subsequent images on the VR device.
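  • a minimal sketch of this orientation-to-face filling, assuming the scene views and the cube-map faces are stored as dictionaries of equally sized arrays (the names are illustrative, not from the patent):

```python
# Illustrative correspondence between camera orientation and cube-map face,
# following the +X/-X/+Y/-Y/+Z/-Z convention of Figure 3.
FACE_OF_CAMERA = {
    "right": "+X", "left": "-X",
    "front": "+Y", "back": "-Y",
    "up":    "+Z", "down": "-Z",
}

def fill_cube_map(scene_views, cube_map):
    """scene_views: {orientation: HxWx3 array}; cube_map: {face: HxWx3 array}.
    Copies each virtual scene graph into its corresponding face; sizes are
    assumed equal here (see the resampling sketch below for unequal sizes)."""
    for orientation, image in scene_views.items():
        cube_map[FACE_OF_CAMERA[orientation]][...] = image
    return cube_map
```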
  • when multiple virtual scene graphs are respectively filled into corresponding maps according to the correspondence relationship, the method for obtaining the multi-faceted target map can also be: determining the size ratio between the virtual scene graph and the map; sampling pixel values from the virtual scene graph according to the size ratio; and filling the sampled pixel values into the corresponding map according to the correspondence relationship, to obtain the multi-faceted target map.
  • the size of the virtual scene graph and the size of the mapping surface may be the same or different.
  • the process of sampling pixel values from the virtual scene graph according to the size ratio may be: first determine the sampling interval according to the size ratio, and then sample pixel values from the virtual scene graph at that interval. Specifically, if the size ratio between the virtual scene graph and the mapping surface is 1:1, the sampling interval is 1; if the size ratio is n:1 (n>1), the sampling interval is n rounded to the nearest integer; if the size ratio is 1:n (n>1), the sampling interval is 1 and each pixel point of the virtual scene graph is sampled m times continuously, where m is n rounded to the nearest integer.
  • for example, if the size ratio between the virtual scene graph and the mapping surface is 2:1, the sampling interval is 2; if the size ratio is 1:3.2, the sampling interval is 1 and each pixel point of the virtual scene graph is sampled 3 times continuously.
  • in this way, the virtual scene graph can be accurately filled into the multi-faceted map; a sketch of this resampling follows.
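  • a sketch of the interval rule above, assuming square image arrays and ratios that round cleanly (illustrative, not the patent's exact procedure):

```python
import numpy as np

def resample_to_face(view, face_size):
    """Nearest-neighbour resampling of one virtual scene graph onto one
    cube-map face. If the view is about n times larger, every round(n)-th
    pixel is kept; if about n times smaller, each pixel is repeated
    round(n) times."""
    ratio = view.shape[1] / face_size
    if ratio >= 1.0:                          # view larger: skip pixels
        step = max(1, round(ratio))
        face = view[::step, ::step]
    else:                                     # view smaller: repeat pixels
        m = max(1, round(1.0 / ratio))
        face = np.repeat(np.repeat(view, m, axis=0), m, axis=1)
    return face[:face_size, :face_size]       # crop any rounding remainder
```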
  • the panoramic map can be understood as the texture map corresponding to the panoramic image.
  • the method of projecting the multi-faceted target map to the panoramic map can be: first determine the projection relationship (also referred to as the corresponding relationship) between the texture coordinates in the multi-faceted target map and the texture coordinates in the panoramic map, and then fill the pixel values in the multi-faceted target map into the panoramic map according to the projection relationship, so as to obtain the projected panoramic map.
  • the process of projecting the multi-faceted target map to the panoramic map can be: obtaining the texture coordinates of the pixel points in the panoramic map; performing a projection transformation on the texture coordinates to obtain the projection coordinates corresponding to the texture coordinates; and filling the pixel values in the multi-faceted target map into the positions corresponding to the texture coordinates in the panoramic map according to the projection coordinates.
  • the texture coordinates (also called UV coordinates) of the pixels in the panoramic map range from 0 to 1.
  • the process of performing projection transformation on the texture coordinates may be: using a spherical projection algorithm to transform the texture coordinates, thereby obtaining projection coordinates corresponding to the texture coordinates of the panoramic map.
  • the texture coordinates may be projected and transformed to obtain projection coordinates corresponding to the texture coordinates by: determining angle information corresponding to the texture coordinates; and performing sine and/or cosine operations on the angle information to obtain projection coordinates corresponding to the texture coordinates.
  • the projection coordinates are three-dimensional coordinates, expressed as (x, y, z).
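  • a sketch of such a spherical projection transform, mapping panoramic-map UV coordinates in [0, 1] to a 3D point (x, y, z) on the unit sphere via sine/cosine of the longitude and latitude angles (the axis convention is an assumption of this sketch):

```python
import numpy as np

def uv_to_direction(u, v):
    """Equirectangular UV (0..1) -> unit-sphere point (x, y, z).
    u spans the full turn of longitude, v spans pole to pole; at
    (0.5, 0.5) the direction is +Y, the 'front' face in the
    convention assumed here."""
    lon = (u - 0.5) * 2.0 * np.pi        # -pi .. pi
    lat = (v - 0.5) * np.pi              # -pi/2 .. pi/2
    x = np.cos(lat) * np.sin(lon)
    y = np.cos(lat) * np.cos(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)
```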
  • the pixel values in the multifaceted target map are filled into the panoramic map according to the projection coordinates.
  • the multifaceted target map can be accurately projected into the panoramic map, which can prevent the generated virtual panorama from being distorted when displayed on a VR device (such as VR glasses), thereby improving the display effect of the panorama on the VR device.
  • the method of filling the pixel values in the polyhedral target map to the position corresponding to the texture coordinates of the panoramic map according to the projection coordinates can be: determining a ray pointing from the center point of the polyhedral target map to a point at the projection coordinates; determining the intersection of the ray and the polyhedral target map; and filling the pixel values at the intersection to the position corresponding to the texture coordinates in the panoramic map.
  • the center point of the polyhedral target map can be understood as the center point of the solid corresponding to the polyhedral target map.
  • the point at the projection coordinate is a point on a sphere.
  • the center point of the polyhedral target map and the point at the projection coordinates define a ray, which intersects one of the mapping surfaces of the polyhedral target map at a point.
  • the pixel value at this intersection is the value in the polyhedral target map that corresponds to the texture coordinates in the panoramic map, and it is filled into the corresponding position in the panoramic map.
  • Figure 4 is an example diagram for determining the projection relationship between the panoramic map and the polyhedral target map in this embodiment.
  • point A is the center point of the polyhedral target map
  • point B is the point at the projection coordinate.
  • the line AB intersects the multi-faceted target map at point C, and the pixel value of point C is filled into the corresponding position in the panoramic map, thereby obtaining the projected panoramic map.
  • FIG5 is an example diagram of projecting a multifaceted target map onto a panoramic map in this embodiment. As shown in FIG5 , the multifaceted target map of FIG3 is projected to obtain the panoramic map of FIG5 .
  • the pixel value at the intersection of the multi-faceted target map with the ray pointing from its center point to the point at the projection coordinates is filled into the panoramic map. In this way, the virtual scene graphs in different directions can be accurately filled into the panoramic map, which prevents the generated virtual panorama from being distorted when displayed on a VR device (such as VR glasses) and improves the display effect of the panorama (see the sketch below).
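  • the ray-to-face lookup can be sketched as follows: the dominant axis of the direction from the cube centre (point A) towards the sphere point (point B) selects the face the ray hits, and the remaining two components, divided by the dominant one, give the position of the intersection (point C) on that face. Face orientation conventions are assumptions of this sketch:

```python
import numpy as np

def sample_cube_map(cube_map, direction, face_size):
    """Read the pixel where the ray from the cube centre through
    `direction` intersects the cube; `cube_map` is {face: HxWx3 array}."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:       # ray exits through +X or -X
        face, sc, tc, ma = ("+X" if x > 0 else "-X"), -z * np.sign(x), -y, ax
    elif ay >= az:                  # ray exits through +Y or -Y
        face, sc, tc, ma = ("+Y" if y > 0 else "-Y"), x * np.sign(y), -z, ay
    else:                           # ray exits through +Z or -Z
        face, sc, tc, ma = ("+Z" if z > 0 else "-Z"), x, y * np.sign(z), az
    u = (sc / ma + 1.0) / 2.0       # face-plane coordinates mapped to 0..1
    v = (tc / ma + 1.0) / 2.0
    col = min(int(u * face_size), face_size - 1)
    row = min(int(v * face_size), face_size - 1)
    return cube_map[face][row, col]
```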
  • the virtual panorama can be understood as a picture with a viewing angle exceeding 90 degrees, for example, it can be a 180-degree panorama, a 360-degree panorama, or a 720-degree panorama.
  • the virtual panorama in this embodiment can be a 360-degree virtual panorama.
  • the projected panoramic map is rendered based on the pixel values to obtain a virtual panorama.
  • when the virtual panoramic image is obtained, it can be displayed on a VR device, so that the user can watch the virtual panoramic image by wearing the VR device.
  • the projected panoramic map is rendered to obtain a virtual panoramic image by: determining a panoramic display viewing angle; and rendering pixel points in the projected panoramic map that fall within the panoramic display viewing angle to obtain a virtual panoramic image.
  • the panoramic display viewing angle can be a viewing angle determined according to people's viewing needs. In this embodiment, it is set to 180 degrees, that is, half of the viewing angle of the entire virtual panorama.
  • the pixel points that fall into the panoramic display viewing angle are determined, and finally the pixel points in the panoramic map that fall into the panoramic display viewing angle are rendered to obtain a virtual panorama.
  • Figure 6 is an example diagram of pixel points that fall into the panoramic display viewing angle. As shown in Figure 6, the part framed by the black frame is the area that falls into the panoramic display viewing angle. Only the pixel points in this area are rendered, and the pixel points outside the area are not rendered.
  • FIG7 is an example of a 180-degree virtual panoramic image in this embodiment. As shown in FIG7, this image is a 180-degree virtual panoramic image obtained by panoramic projection of the backward-facing view captured in the underwater virtual scene of FIG2. In this embodiment, only the pixel points of the projected panoramic map that fall within the panoramic display viewing angle are rendered, which saves rendering resources and improves rendering speed; a sketch of the viewing-angle test follows.
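  • the viewing-angle test can be as simple as a longitude check on the equirectangular U coordinate; a sketch, assuming the display view is centred on the middle of the panoramic map:

```python
def in_display_view(u, view_angle_deg=180.0):
    """True if a pixel's longitude falls inside the panoramic display
    viewing angle (the black frame of Figure 6). For 180 degrees this
    keeps the middle half of the equirectangular map; only such pixels
    would be rendered."""
    half_span = (view_angle_deg / 360.0) / 2.0
    return abs(u - 0.5) <= half_span
```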
  • the projected panoramic map is rendered to obtain a virtual panoramic map by: obtaining first depth information of virtual pixels in the projected panoramic map and second depth information of real pixels in the real scene map; fusing the panoramic map and the real scene map according to the first depth information and the second depth information to obtain a target panoramic map; and rendering the target panoramic map to obtain a virtual panoramic map.
  • the real scene image is an image obtained by collecting an image of the real scene using a panoramic camera.
  • the acquisition of depth information can adopt any depth detection algorithm, which is not limited here. Fusion of the panoramic map and the real scene image according to the first depth information and the second depth information can be understood as: determining whether to retain the pixel values in the panoramic map or the pixel values in the real scene image according to the first depth information and the second depth information, thereby rendering the virtual panoramic image based on the retained pixel values.
  • the panoramic map and the real scene map are fused according to the first depth information and the second depth information, and the target panoramic map is obtained in the following manner: if the first depth information is less than or equal to the second depth information, the pixel value of the panoramic map remains unchanged; if the first depth information is greater than the second depth information, the pixel value of the panoramic map is replaced with the pixel value of the real pixel point.
  • the first depth information is less than or equal to the second depth information, it indicates that the virtual pixel in the panoramic map is in front of the real pixel in the real scene map, that is, the real pixel is blocked by the virtual pixel, and the pixel value of the virtual pixel is retained at this time.
  • the first depth information is greater than the second depth information, it indicates that the virtual pixel in the panoramic map is behind the real pixel in the real scene map, that is, the virtual pixel is blocked by the real pixel, and the pixel value of the real pixel is retained at this time.
  • the fused panoramic map is then rendered to obtain the virtual panorama.
  • by fusing the panoramic map and the real scene map based on depth information, the virtual scene can be realistically added to the real scene, forming an augmented reality effect. The fused panorama is displayed on the VR device, so that the user can watch an image with the augmented reality effect through the VR device, improving the user experience (a sketch of the fusion rule follows).
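  • the depth comparison described above reduces to a per-pixel select between the virtual and the real pixel; a sketch, assuming aligned colour and depth buffers of the same resolution:

```python
import numpy as np

def fuse_by_depth(virtual_rgb, virtual_depth, real_rgb, real_depth):
    """Where the first (virtual) depth is <= the second (real) depth the
    virtual pixel occludes the real one and is kept; otherwise the real
    pixel replaces it. Inputs are HxWx3 colour and HxW depth arrays."""
    virtual_in_front = (virtual_depth <= real_depth)[..., None]
    return np.where(virtual_in_front, virtual_rgb, real_rgb)
```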
  • the virtual panoramic image is a left-eye virtual panoramic image or a right-eye virtual panoramic image, that is, the solution of the above embodiment is used to determine the virtual panoramic images of both eyes.
  • six virtual cameras are set in the virtual scene for the left eye, and six virtual cameras are also set in the virtual scene for the right eye.
  • the real panoramic image can also be a left-eye real panoramic image or a right-eye real panoramic image.
  • a binocular panoramic camera can be used to collect images of the real scene to obtain a left-eye real panoramic image and a right-eye real panoramic image.
  • Figure 8 is an example of a binocular virtual panoramic image in this embodiment. As shown in Figure 8, the left image is a left-eye virtual panoramic image, and the right image is a right-eye virtual panoramic image.
  • the following steps are further included: displaying the left-eye virtual panoramic image on a display device corresponding to the left eye, and displaying the right-eye virtual panoramic image on a display device corresponding to the right eye.
  • the left-eye virtual panorama and the right-eye virtual panorama are sent to the audience.
  • the audience watches the live broadcast by wearing a VR device: the left-eye virtual panorama is displayed on the display device corresponding to the left eye in the VR device, and the right-eye virtual panorama on the display device corresponding to the right eye. This allows the audience to watch the live broadcast through the VR device, which improves both the display effect and the user's viewing experience.
  • the technical solution of the disclosed embodiment is to obtain a virtual scene, and collect images of the virtual scene in different directions to obtain multiple virtual scene graphs; fill the multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map, wherein the multi-faceted map is composed of multiple maps; project the multi-faceted target map onto a panoramic map; and render the projected panoramic map to obtain a virtual panorama.
  • the rendering method of the virtual panorama provided by the disclosed embodiment is to first fill the virtual scene graphs of different directions into the multi-faceted map, and then project the multi-faceted target map onto the panoramic map to render the projected panoramic map to obtain a virtual panorama, which can avoid distortion of the rendered virtual panorama and improve the display effect of the augmented reality panorama.
  • FIG9 is a schematic diagram of the structure of a virtual panorama rendering device provided by an embodiment of the present disclosure. As shown in FIG9 , the device includes:
  • a virtual scene graph acquisition module 210 is used to acquire a virtual scene and collect images of the virtual scene in different directions to obtain multiple virtual scene graphs;
  • a multi-faceted target map acquisition module 220 is used to fill the multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map; wherein the multi-faceted map is composed of multiple maps;
  • a projection module 230, used for projecting the multi-faceted target map onto a panoramic map;
  • the rendering module 240 is used to render the projected panoramic map to obtain a virtual panoramic image.
  • the multi-faceted target map acquisition module 220 is further used to:
  • the plurality of virtual scene graphs are filled into corresponding maps respectively according to the corresponding relationship to obtain a multi-faceted target map.
  • the multi-faceted target map acquisition module 220 is further used to:
  • the sampled pixel values are filled into the corresponding map surface according to the corresponding relationship to obtain a multi-surface target map.
  • the projection module 230 is further used for:
  • the pixel values in the multi-faceted target map are filled into positions corresponding to the texture coordinates of the panoramic map according to the projection coordinates.
  • the projection module 230 is further used for:
  • the projection module 230 is further used for:
  • the pixel value at the intersection is filled into the position corresponding to the texture coordinate in the panoramic map.
  • the rendering module 240 is further configured to:
  • Rendering is performed on the pixel points in the projected panoramic map that fall within the panoramic display viewing angle to obtain a virtual panoramic map.
  • the rendering module 240 is further configured to:
  • the real scene map is a map obtained by capturing the real scene with a panoramic camera;
  • the target panoramic map is rendered to obtain a virtual panoramic map.
  • the rendering module 240 is further configured to:
  • the pixel value of the panoramic map remains unchanged
  • the pixel value of the panoramic map is replaced with the pixel value of the real pixel point.
  • the virtual panoramic image is a left-eye virtual panoramic image or a right-eye virtual panoramic image; and further includes: a display module, which is used to:
  • the left-eye virtual panoramic image is displayed on the display device corresponding to the left eye
  • the right-eye virtual panoramic image is displayed on the display device corresponding to the right eye.
  • the virtual scene graph acquisition module 210 is further used for:
  • the virtual panorama rendering device provided in the embodiments of the present disclosure can execute the virtual panorama rendering method provided in any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
  • FIG10 is a schematic diagram of the structure of an electronic device (such as a terminal device or server) 500 suitable for implementing an embodiment of the present disclosure.
  • the terminal device in the embodiment of the present disclosure may include but is not limited to mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • Various programs and data required for the operation of the electronic device 500 are also stored in the RAM 503.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 508 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 509.
  • the communication devices 509 may allow the electronic device 500 to communicate wirelessly or wired with other devices to exchange data.
  • although FIG. 10 shows an electronic device 500 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication device 509, or installed from a storage device 508, or installed from a ROM 502.
  • when the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the electronic device provided in the embodiment of the present disclosure and the method for rendering a virtual panorama provided in the above embodiment belong to the same inventive concept.
  • the embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • the program is executed by a processor, the method for rendering a virtual panorama provided in the above embodiments is implemented.
  • the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
  • More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in conjunction with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination thereof.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, device, or device.
  • the program code embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and server may communicate using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being installed in the electronic device.
  • the computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device: obtains a virtual scene, and collects images of the virtual scene in different directions to obtain multiple virtual scene graphs; fills the multiple virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map, wherein the multi-faceted map is composed of multiple maps; projects the multi-faceted target map onto a panoramic map; and renders the projected panoramic map to obtain a virtual panoramic image.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages, such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each square box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the square box can also occur in a sequence different from that marked in the accompanying drawings. For example, two square boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
  • each square box in the block diagram and/or flow chart, and the combination of the square boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • the name of a unit does not limit the unit itself in some cases.
  • the first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device or equipment.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or equipment, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure relate to a method and apparatus for rendering a virtual panorama, a device, and a storage medium. The rendering method comprises: acquiring a virtual scene and collecting images of the virtual scene in different directions to obtain a plurality of virtual scene graphs; filling the plurality of virtual scene graphs into a multi-faceted map to obtain a multi-faceted target map, the multi-faceted map being composed of a plurality of maps; projecting the multi-faceted target map onto a panoramic map; and rendering the projected panoramic map to obtain a virtual panorama. According to the rendering method for a virtual panorama described in the embodiments of the present disclosure, the virtual scene graphs in different directions are first filled into the multi-faceted map; the multi-faceted target map is then projected onto the panoramic map; and the projected panoramic map is rendered to obtain the virtual panorama, so that distortion of the rendered virtual panorama can be avoided, thereby improving the display effect of an augmented reality panorama.
PCT/CN2023/130744 2022-11-17 2023-11-09 Method and apparatus for rendering a virtual panorama, device, and storage medium WO2024104248A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211460764.X 2022-11-17
CN202211460764.XA CN115861514A (zh) 2022-11-17 Rendering method, apparatus, device and storage medium for a virtual panorama

Publications (1)

Publication Number Publication Date
WO2024104248A1 (fr)

Family

ID=85664616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/130744 WO2024104248A1 (fr) Method and apparatus for rendering a virtual panorama, device, and storage medium

Country Status (2)

Country Link
CN (1) CN115861514A (fr)
WO (1) WO2024104248A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861514A (zh) * 2022-11-17 2023-03-28 北京字跳网络技术有限公司 Rendering method, apparatus, device and storage medium for a virtual panorama
CN116109803B (zh) * 2023-04-13 2023-07-07 腾讯科技(深圳)有限公司 Information construction method, apparatus, device and storage medium
CN117218266A (zh) * 2023-10-26 2023-12-12 神力视界(深圳)文化科技有限公司 Texture map generation method, apparatus, device and medium for a 3D white model


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180096453A1 (en) * 2016-10-05 2018-04-05 Hidden Path Entertainment, Inc. System and method of capturing and rendering a stereoscopic panorama using a depth buffer
CN108648257A * 2018-04-09 2018-10-12 腾讯科技(深圳)有限公司 Panoramic picture acquisition method, apparatus, storage medium and electronic device
CN109934764A * 2019-01-31 2019-06-25 北京奇艺世纪科技有限公司 Panoramic video file processing method, apparatus, terminal, server and storage medium
CN114782612A * 2022-04-29 2022-07-22 北京字跳网络技术有限公司 Image rendering method, apparatus, electronic device and storage medium
CN115861514A * 2022-11-17 2023-03-28 北京字跳网络技术有限公司 Rendering method, apparatus, device and storage medium for a virtual panorama

Also Published As

Publication number Publication date
CN115861514A (zh) 2023-03-28

Similar Documents

Publication Publication Date Title
WO2024104248A1 (fr) Method and apparatus for rendering a virtual panorama, device, and storage medium
WO2022166872A1 (fr) Special effect display method and apparatus, device and medium
CN110728622B (zh) Fisheye image processing method, apparatus, electronic device and computer-readable medium
WO2023207963A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2024037556A1 (fr) Image processing apparatus and method, device and storage medium
WO2023207379A1 (fr) Image processing method and apparatus, device and storage medium
WO2022166868A1 (fr) Tour view generation method, apparatus and device, and storage medium
WO2024016923A1 (fr) Special-effect graph generation method and apparatus, device and storage medium
WO2023193639A1 (fr) Image rendering method and apparatus, readable medium and electronic device
WO2022247630A1 (fr) Image processing method and apparatus, electronic device and storage medium
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
WO2024032752A1 (fr) Method and apparatus for generating a transition special-effect image, device, and storage medium
CN111833459B (zh) Image processing method and apparatus, electronic device and storage medium
WO2023231918A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2023207354A1 (fr) Special-effect video determination method and apparatus, electronic device and storage medium
CN111862342A (zh) Augmented reality texture processing method and apparatus, electronic device and storage medium
CN117115267A (zh) Calibration-free image processing method and apparatus, electronic device and storage medium
CN115761197A (zh) Image rendering method, apparatus, device and storage medium
CN114049403A (zh) Multi-angle three-dimensional face reconstruction method, apparatus and storage medium
CN114419299A (zh) Virtual object generation method, apparatus, device and storage medium
CN114567742A (zh) Panoramic video transmission method, apparatus and storage medium
CN114419298A (zh) Virtual object generation method, apparatus, device and storage medium
CN111489428B (zh) Image generation method, apparatus, electronic device and computer-readable storage medium
KR102534449B1 (ko) Image processing method, apparatus, electronic device and computer-readable storage medium
CN112668474B (zh) Plane generation method and apparatus, storage medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23890684

Country of ref document: EP

Kind code of ref document: A1