CN112967390A - Scene switching method and device and storage medium - Google Patents

Scene switching method and device and storage medium

Info

Publication number
CN112967390A
CN112967390A (application CN202110366826.XA)
Authority
CN
China
Prior art keywords
scene
viewpoint position
virtual camera
dimensional model
switching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110366826.XA
Other languages
Chinese (zh)
Other versions
CN112967390B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Urban Network Neighbor Information Technology Co Ltd
Original Assignee
Beijing Urban Network Neighbor Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Urban Network Neighbor Information Technology Co Ltd filed Critical Beijing Urban Network Neighbor Information Technology Co Ltd
Priority to CN202110366826.XA priority Critical patent/CN112967390B/en
Publication of CN112967390A publication Critical patent/CN112967390A/en
Application granted granted Critical
Publication of CN112967390B publication Critical patent/CN112967390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Abstract

A scene switching method, a scene switching device, and a storage medium based on a three-dimensional model and a panoramic sky box. The scene switching method is suitable for a computing device and comprises the following steps: constructing a panoramic sky box and a three-dimensional model, wherein when a first scene is displayed, the panoramic sky box comprises textures corresponding to the first scene, and when a second scene is displayed, the panoramic sky box comprises textures corresponding to the second scene; in the process of switching from the first scene to the second scene, a first virtual camera in the three-dimensional model is used for rendering for display, and, in the three-dimensional model, the first virtual camera is moved from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene so as to realize the switching from the first scene to the second scene. The scene switching method can switch between different scenes with a spatial roaming effect during the switching process, giving the user an obvious sense of spatial movement and improving the user experience.

Description

Scene switching method and device and storage medium
The present application is a divisional application of an invention patent application with a filing date of November 30, 2019, application number 201911208538.0, and the title "Scene switching method, device and storage medium".
Technical Field
The embodiments of the present disclosure relate to a scene switching method, a scene switching device, and a storage medium based on a three-dimensional model and a panoramic sky box.
Background
With the development of technology, Virtual Reality (VR) technology is widely used in many fields, such as medical treatment, education, and entertainment. Common VR products include, for example, VR games, VR theaters, and VR galleries. VR technology gives users an immersive experience, producing visual, auditory, and even tactile perception comparable to the real world, which greatly improves the user experience.
Disclosure of Invention
At least one embodiment of the present disclosure provides a scene switching method based on a three-dimensional model and a panoramic sky box, which is applicable to a computing device, and the method includes: constructing the panoramic sky box and the three-dimensional model, wherein the three-dimensional model comprises a first virtual camera, and the panoramic sky box comprises a second virtual camera; rendering for display using the first virtual camera in the three-dimensional model during a switch from a first scene to a second scene; when the first scene is displayed, enabling the second virtual camera to be located at a first viewpoint position corresponding to the first scene; when the second scene is displayed, enabling the second virtual camera to be located at a second viewpoint position corresponding to the second scene; the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different; in the process of switching from the first scene to the second scene, the first virtual camera is moved from the first viewpoint position corresponding to the first scene to the second viewpoint position corresponding to the second scene in the three-dimensional model so as to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different.
For example, in a scene switching method provided in at least one embodiment of the present disclosure, the panoramic sky box further includes a second virtual camera, and the scene switching method further includes: when the first scene is displayed, enabling the second virtual camera to be located at a first viewpoint position corresponding to the first scene; when the second scene is displayed, enabling the second virtual camera to be located at a second viewpoint position corresponding to the second scene; the second virtual camera and the first virtual camera are the same virtual camera or different virtual cameras.
For example, in a scene switching method provided in at least one embodiment of the present disclosure, constructing the panoramic sky box includes: acquiring a first panorama of the first scene, wherein the first panorama comprises a plurality of first scene pictures at a first picture acquisition position; acquiring a second panorama of the second scene, wherein the second panorama comprises a plurality of second scene pictures at a second picture acquisition position; obtaining the first viewpoint position based on the coordinates of the first picture acquisition position; obtaining the second viewpoint position based on the coordinates of the second picture acquisition position; and constructing the panoramic sky box based on the first panorama of the first scene or the second panorama of the second scene, wherein the center of the panoramic sky box is located at the origin of the coordinate system, the texture of the panoramic sky box comprises the plurality of first scene pictures after rotation and displacement in the first scene, the texture of the panoramic sky box comprises the plurality of second scene pictures after rotation and displacement in the second scene, and the displacement information of the plurality of first scene pictures and the displacement information of the plurality of second scene pictures are determined according to the coordinates of the first viewpoint position and the coordinates of the second viewpoint position, respectively.
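As an illustration only (not part of the patent text), the construction steps S111 to S115 can be sketched as follows; the face names, function names, and the direct mapping from picture-acquisition coordinates to viewpoint coordinates are assumptions:

```python
# Hypothetical sketch of steps S111-S115. All names are illustrative.
FACES = ("front", "back", "left", "right", "up", "down")

def build_panorama(face_pictures):
    """Steps S111/S112: a panorama is a group of scene pictures,
    one per orientation at the picture acquisition position."""
    missing = [f for f in FACES if f not in face_pictures]
    if missing:
        raise ValueError("panorama is missing faces: %s" % missing)
    return dict(face_pictures)

def viewpoint_from_acquisition(coords):
    """Steps S113/S114: the viewpoint position is obtained from the
    coordinates of the picture acquisition position."""
    return tuple(float(c) for c in coords)

def face_displacement(viewpoint, center=(0.0, 0.0, 0.0)):
    """Step S115: the sky box is centered at the coordinate origin, so
    the scene pictures are displaced according to the viewpoint
    coordinates relative to that center."""
    return tuple(v - c for v, c in zip(viewpoint, center))
```

The same helpers would be invoked once per scene, since the first and second scene share one sky box and differ only in texture and viewpoint.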
For example, in a scene switching method provided in at least one embodiment of the present disclosure, in a process of switching from the first scene to the second scene, the scene switching method further includes: obtaining a hybrid texture based on the first and second panoramas; applying the hybrid texture to the three-dimensional model.
For example, in a scene switching method provided by at least one embodiment of the present disclosure, moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene in the three-dimensional model includes: while moving the first virtual camera from the first viewpoint position to the second viewpoint position, decreasing the transparency of the first panorama in the hybrid texture along a time axis and increasing the transparency of the second panorama in the hybrid texture along the time axis, wherein the time axis covers the time during which the first virtual camera is moved from the first viewpoint position to the second viewpoint position, and the sum of the transparency of the first panorama and the transparency of the second panorama in the hybrid texture is 1.
For example, in a scene switching method provided by at least one embodiment of the present disclosure, in a process of switching from the first scene to the second scene, when the first scene is displayed, a transparency of the first panorama is 1, and a transparency of the second panorama is 0; when the second scene is displayed, the transparency of the first panoramic image is 0, and the transparency of the second panoramic image is 1.
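The transparency schedule described above can be sketched as follows; the linear ramp is an assumption, since the embodiment only fixes the endpoint values and the constraint that the two transparencies always sum to 1:

```python
def blend_alphas(t):
    """Transparency of the first and second panorama in the hybrid
    texture at normalized time t in [0, 1] along the camera's move from
    the first viewpoint to the second. A linear ramp is assumed."""
    t = min(max(float(t), 0.0), 1.0)
    alpha_first = 1.0 - t   # decreases along the time axis
    alpha_second = t        # increases along the time axis
    return alpha_first, alpha_second  # always sums to 1
```

At t = 0 this reproduces the first-scene state (1, 0) and at t = 1 the second-scene state (0, 1), matching the endpoint values given above.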
For example, the scene switching method provided in at least one embodiment of the present disclosure further includes: and after the first virtual camera is moved from the first viewpoint position to the second viewpoint position, displaying a second scene corresponding to the second viewpoint position by using the second virtual camera.
For example, the scene switching method provided in at least one embodiment of the present disclosure further includes: and before the process of switching from the first scene to the second scene is started, rendering by using the second virtual camera to display the first scene corresponding to the first viewpoint position.
For example, in a scene switching method provided by at least one embodiment of the present disclosure, moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene includes: and obtaining a movement vector according to the first viewpoint position and the second viewpoint position, and moving the first virtual camera to the second viewpoint position along the movement vector by taking the first viewpoint position as a starting point.
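A minimal sketch of this movement-vector step follows; function names are illustrative, and the interpolation fraction t is an assumption since the patent does not prescribe a motion schedule:

```python
def movement_vector(first_viewpoint, second_viewpoint):
    """The movement vector from the first viewpoint position to the
    second viewpoint position."""
    return tuple(b - a for a, b in zip(first_viewpoint, second_viewpoint))

def camera_position(first_viewpoint, vector, t):
    """Position of the first virtual camera after traversing fraction
    t in [0, 1] of the movement vector, starting at the first
    viewpoint as required by the method."""
    return tuple(p + t * v for p, v in zip(first_viewpoint, vector))
```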
For example, in a scene switching method provided in at least one embodiment of the present disclosure, a position of the first virtual camera and a position of the second virtual camera correspond to the first picture capturing position and the second picture capturing position, respectively.
For example, in a scene switching method provided by at least one embodiment of the present disclosure, the three-dimensional model is determined based on wall data of a picture acquisition scene.
For example, in a scene switching method provided by at least one embodiment of the present disclosure, when a trigger event for switching from a first viewpoint position to a second viewpoint position is detected, a first virtual camera in the three-dimensional model is used for rendering for display.
At least one embodiment of the present disclosure further provides a scene switching method based on a three-dimensional model and a panoramic sky box, which is applicable to a computing device, and the method includes: constructing the panoramic sky box and the three-dimensional model, wherein the three-dimensional model comprises a first virtual camera, and the panoramic sky box comprises a second virtual camera; when the first scene is displayed, enabling the second virtual camera to be located at a first viewpoint position corresponding to the first scene; when the second scene is displayed, enabling the second virtual camera to be located at a second viewpoint position corresponding to the second scene; the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different; displaying the first scene by using the panoramic sky box, and executing the following method when receiving an instruction of clicking a click mark of the second scene in the first scene: using the first virtual camera in the three-dimensional model for rendering for display during the switching from the first scene to the second scene, and, in the three-dimensional model, moving the first virtual camera from the first viewpoint position corresponding to the first scene to the second viewpoint position corresponding to the second scene to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different, the first viewpoint position corresponds to the click mark of the first scene, and the second viewpoint position corresponds to the click mark of the second scene; and after the switching from the first scene to the second scene is executed, displaying the second scene by using the panoramic sky box.
At least one embodiment of the present disclosure further provides a scene switching apparatus based on a three-dimensional model and a panoramic sky box, including: a construction unit configured to construct the panoramic sky box and the three-dimensional model, the three-dimensional model including a first virtual camera, the panoramic sky box including a second virtual camera; a control unit configured to render for display using the first virtual camera in the three-dimensional model during a switch from a first scene to a second scene; when the first scene is displayed, to locate the second virtual camera at a first viewpoint position corresponding to the first scene; and when the second scene is displayed, to locate the second virtual camera at a second viewpoint position corresponding to the second scene; wherein the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different; the control unit is configured to move the first virtual camera from the first viewpoint position corresponding to the first scene to the second viewpoint position corresponding to the second scene in the three-dimensional model during the switching from the first scene to the second scene, so as to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different.
For example, in a scene switching apparatus provided in at least one embodiment of the present disclosure, the panoramic sky box further includes a second virtual camera, and the control unit is further configured to: when the first scene is displayed, locate the second virtual camera at a first viewpoint position corresponding to the first scene; and when the second scene is displayed, locate the second virtual camera at a second viewpoint position corresponding to the second scene; the second virtual camera and the first virtual camera are the same virtual camera or different virtual cameras.
At least one embodiment of the present disclosure further provides a scene switching apparatus based on a three-dimensional model and a panoramic sky box, including: a processor; a memory; one or more computer program modules stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for performing a scene switching method provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a storage medium that stores non-transitory computer-readable instructions, which when executed by a computer, can perform the scene switching method provided in any one of the embodiments of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
Fig. 1 is a flowchart of a scene switching method according to at least one embodiment of the present disclosure;
fig. 2 is a flowchart of a method for constructing a panoramic sky box according to at least one embodiment of the present disclosure;
fig. 3A is a schematic diagram of a first panorama or a second panorama according to at least one embodiment of the present disclosure;
fig. 3B is a schematic view of a hexahedron in a panoramic sky box according to at least one embodiment of the present disclosure;
fig. 3C is a schematic diagram of a panoramic sky box according to at least one embodiment of the present disclosure;
FIG. 4 is a flow chart of a method for applying a first panorama and a second panorama to a constructed three-dimensional model according to at least one embodiment of the present disclosure;
fig. 5 is a flowchart of another scene switching method according to at least one embodiment of the present disclosure;
fig. 6 is a schematic diagram of a scene switching system according to at least one embodiment of the present disclosure;
fig. 7 is a schematic block diagram of a scene switching apparatus according to at least one embodiment of the present disclosure;
fig. 8 is a schematic block diagram of another scene switching apparatus provided in at least one embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to at least one embodiment of the present disclosure; and
fig. 10 is a schematic diagram of a storage medium according to at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In order to enable the user to have an immersive experience, a virtual scene can be constructed by applying VR technology. The virtual scene is built in two ways: the first is achieved by three-dimensional modeling; the second is to use a panoramic picture instead of the three-dimensional model.
A panoramic picture, also known as a 3D real scene, is synthesized from real-scene photographs that are stitched and processed into a viewpoint image, giving the user the feeling of being inside the pictured environment. When a VR scene is built from panoramic pictures, switching scenes can only jump the user from the scene presented by one group of panoramic pictures to the scene presented by another group, so the user cannot experience a space-roaming effect during the switch; this reduces immersion and degrades the user experience. Therefore, how to achieve a space-roaming effect during scene switching to improve the user experience has become one of the technical problems that currently needs to be solved.
At least one embodiment of the present disclosure provides a scene switching method based on a three-dimensional model and a panoramic sky box, which is applicable to a computing device, and the method includes: constructing a panoramic sky box and a three-dimensional model, wherein when a first scene is displayed, the panoramic sky box comprises textures corresponding to the first scene, when a second scene is displayed, the panoramic sky box comprises textures corresponding to the second scene, and the first scene is different from the second scene; rendering for display using a first virtual camera in the three-dimensional model during a switch from the first scene to the second scene; and, in the process of switching from the first scene to the second scene, moving the first virtual camera in the three-dimensional model from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different.
Some embodiments of the present disclosure also provide a scene switching apparatus and a storage medium corresponding to the above-described scene switching method.
The scene switching method based on the three-dimensional model and the panoramic sky box provided by the embodiments of the present disclosure can switch between different scenes with a space-roaming effect during the switching process, giving an obvious sense of spatial movement and improving the user experience.
Embodiments of the present disclosure and examples thereof are described in detail below with reference to the accompanying drawings.
At least one embodiment of the present disclosure provides a scene switching method, which may be applied to, for example, virtual house viewing. Fig. 1 is a flowchart of an example of a scene switching method according to at least one embodiment of the present disclosure. For example, the scene switching method may be implemented in software, hardware, firmware, or any combination thereof, and may be loaded and executed by a processor in a device such as a mobile phone, tablet computer, notebook computer, desktop computer, or network server, so as to switch between different scenes with a spatial roaming effect during the switching process, thereby improving the user experience.
For example, the scene switching method is applicable to a computing device, which includes any electronic device with a computing function, such as a mobile phone, a notebook computer, a tablet computer, a desktop computer, or a web server, on which the scene switching method may be loaded and executed; the embodiments of the present disclosure are not limited in this respect. For example, the computing device may include a central processing unit (CPU), a graphics processing unit (GPU), or other processing units with data processing capability and/or instruction execution capability, as well as a storage unit or other forms of memory; the computing device is installed with an operating system and an application programming interface (e.g., OpenGL (Open Graphics Library) or Metal) and implements the scene switching method provided by the embodiments of the present disclosure by running code or instructions. For example, the computing device may further include a display component, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a quantum-dot light-emitting diode (QLED) display, a projection component, or a VR head-mounted display device (e.g., a VR helmet or VR glasses), which is not limited by the embodiments of the present disclosure. The display component may display a plurality of scenes (e.g., virtual scenes).
As shown in fig. 1, the scene switching method includes steps S110 to S120.
Step S110: constructing a panoramic sky box and a three-dimensional model, wherein when a first scene is displayed, the panoramic sky box comprises textures corresponding to the first scene; when a second scene is displayed, the panoramic sky box comprises textures corresponding to the second scene; and the first scene and the second scene are different.
Step S120: in the process of switching from the first scene to the second scene, using the first virtual camera in the three-dimensional model for rendering for display, and moving the first virtual camera in the three-dimensional model from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene, so as to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different.
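To make the division of labor between the two renderers concrete, the overall flow of steps S110 and S120 can be sketched as the following state machine; the class and attribute names are illustrative and not taken from the patent:

```python
class SceneSwitcher:
    """Sketch of the two-renderer flow: the panoramic sky box (second
    virtual camera) renders static scenes, while the three-dimensional
    model (first virtual camera) renders only during the switch."""

    def __init__(self):
        self.active_renderer = "sky box"     # second virtual camera
        self.skybox_texture = "first scene"  # step S110

    def begin_switch(self):
        # Step S120: rendering is handed to the first virtual camera in
        # the three-dimensional model while it moves between viewpoints.
        self.active_renderer = "model"

    def finish_switch(self):
        # After the camera reaches the second viewpoint, the sky box
        # takes over again, now textured with the second scene.
        self.skybox_texture = "second scene"
        self.active_renderer = "sky box"
```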
The following description takes as an example the application of the scene switching method based on the three-dimensional model and the panoramic sky box to a virtual house-viewing scene; of course, the method can also be applied to other application scenes such as games and education, and the embodiments of the present disclosure are not limited thereto. For example, the scene switching method described above may be performed when the user performs virtual house viewing through a computing device such as a mobile phone or a computer.
For step S110, for example, when a single scene is displayed, that is, when a scene change is not performed, a picture of each scene may be displayed by the panoramic sky box. For example, when the first scene is displayed (e.g., before a scene switch is made), the panoramic sky box includes a texture corresponding to the first scene, that is, the first scene may be displayed through the panoramic sky box; when the second scene is displayed (e.g., after the scene switch is completed), the panoramic sky box includes a texture corresponding to the second scene, i.e., the second scene may be displayed through the panoramic sky box.
In the embodiment of the present disclosure, when a scene is not being switched, the texture corresponding to the first scene or the texture corresponding to the second scene may be applied to the same panoramic sky box in the corresponding time period according to the scene to be displayed, so that the first scene and the second scene may be displayed through one panoramic sky box; that is, the first scene and the second scene may share one panoramic sky box. It should be noted that the panoramic sky box may also display more scenes and is not limited to the first scene and the second scene; the embodiments of the present disclosure do not limit this.
For example, the first scene is a virtual scene that can be observed at a first viewpoint position, and the second scene is a virtual scene that can be observed at a second viewpoint position. For example, in some examples, the first scene and the second scene are different and, accordingly, the first viewpoint location and the second viewpoint location are also different. For example, the first scene and the second scene may be completely different scenes, e.g., the first scene may be located in a living room and the second scene may be located in a bedroom, or, e.g., the first scene may be located at an entrance position of the living room and the second scene may be located at an exit position of the living room; alternatively, the first scene and the second scene have a higher similarity but have a parallax, for example, a certain degree of parallax is generated due to a different viewing angle of the user or a different focusing position of the camera, and the embodiments of the present disclosure are not limited thereto.
For example, the texture corresponding to the first scene represents the image displayed after the panorama acquired in the first scene is attached to the panoramic sky box, that is, the texture of the panoramic sky box in the first scene. Likewise, the texture corresponding to the second scene represents the image displayed after the panorama acquired in the second scene is attached to the panoramic sky box, that is, the texture of the panoramic sky box in the second scene.
For example, the first viewpoint position and the second viewpoint position are both located in the panoramic sky box and in the three-dimensional model.
Fig. 2 is a flowchart of a method for constructing a panoramic sky box according to at least one embodiment of the present disclosure. That is, fig. 2 is a flowchart of some examples of step S110 shown in fig. 1. For example, in the example shown in fig. 2, the method of constructing the panoramic sky box includes steps S111 to S115. A method for constructing a panoramic sky box according to at least one embodiment of the present disclosure is described in detail with reference to fig. 2.
Step S111: a first panorama of a first scene is acquired, wherein the first panorama comprises a plurality of first scene pictures at a first picture acquisition position.
Step S112: and acquiring a second panoramic image of the second scene, wherein the second panoramic image comprises a plurality of second scene pictures at the second picture acquisition position.
Step S113: the first viewpoint position is obtained based on the coordinates of the first picture acquisition position.
Step S114: and obtaining a second viewpoint position based on the coordinates of the second picture acquisition position.
Step S115: constructing the panoramic sky box based on the first panorama of the first scene or the second panorama of the second scene.
For step S111, for example, the first panorama can be taken at the first picture acquisition position or rendered based on the first picture acquisition position. The first panorama is, for example, a group of pictures including a plurality of first scene pictures at the first picture acquisition position. For example, the first panorama includes first scene pictures in a plurality of orientations, such as front, rear, left, right, up, and down, centered on the first picture acquisition position and captured by rotating the camera through the corresponding angles. For example, in some examples, when the first picture acquisition position is located in a room, a camera may be set at the first picture acquisition position to take first scene pictures (i.e., photographs) of the four walls in front of, behind, to the left of, and to the right of the first picture acquisition position, as well as of the ceiling above and the floor below it. For example, the plurality of first scene pictures are seamlessly stitched to obtain the first panorama. For example, the seamless stitching technique may be implemented by a GPU, which is not described in detail herein.
It should be noted that, in the embodiment of the present disclosure, the first panorama can be obtained by shooting, or can be obtained by drawing (for example, computer drawing or manual drawing) based on the first picture obtaining position, or can be generated by an image algorithm based on the first picture obtaining position, and a specific implementation manner may be determined according to an actual requirement, which is not limited in this embodiment of the present disclosure.
For example, in some examples, the first panorama may include 6 first scene pictures of different orientations, i.e., the 6 orientations of front, back, left, right, up, and down at the first picture acquisition position, so that a complete and comprehensive scene can be presented in subsequent steps. For example, in other examples, the first panorama may include 5 first scene pictures of different orientations, i.e., the 5 orientations of front, back, left, right, and top at the first picture acquisition position, no longer including the first scene picture below the first picture acquisition position. For example, when the first picture acquisition position is located in a room, since the texture of the floor is almost uniform within the room, does not change from place to place, and does not need to be observed by the user, the first scene picture below the first picture acquisition position can be omitted from the first panorama, which then includes only the first scene pictures of the 5 orientations of front, rear, left, right, and top, so that the data amount can be reduced.
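The 6-face and 5-face panorama variants described above can be sketched as a simple mapping from face names to scene pictures. This is an illustrative sketch only; the function and face names are hypothetical and do not appear in the disclosure.

```python
# Hypothetical sketch of a panorama as named sky-box faces; the identifiers
# are illustrative, not part of the disclosed method.
FULL_FACES = ("front", "back", "left", "right", "up", "down")

def make_panorama(pictures, include_down=True):
    """Map an ordered list of scene pictures (front, back, left, right,
    up[, down]) to sky-box faces. Omitting the down face saves one
    texture, e.g. when the floor texture is uniform across the room."""
    faces = FULL_FACES if include_down else FULL_FACES[:5]
    if len(pictures) != len(faces):
        raise ValueError(f"expected {len(faces)} pictures, got {len(pictures)}")
    return dict(zip(faces, pictures))

# 6-orientation panorama (complete scene) and 5-orientation panorama
# (floor omitted to reduce the data amount):
pano6 = make_panorama(["f.jpg", "b.jpg", "l.jpg", "r.jpg", "u.jpg", "d.jpg"])
pano5 = make_panorama(["f.jpg", "b.jpg", "l.jpg", "r.jpg", "u.jpg"],
                      include_down=False)
```

In the 5-face case the "down" key is simply absent, mirroring the data-volume reduction described above.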
For example, in step S112, the second panorama can be taken at the second picture acquisition position or rendered based on the second picture acquisition position. The second panorama is, for example, a group of pictures including a plurality of second scene pictures at the second picture acquisition position. For example, the second panorama includes scene pictures in a plurality of orientations, such as front, rear, left, right, up, and down, taken around the second picture acquisition position as a center by rotating the camera through the corresponding angles. Similar to the manner of acquiring the first panorama, the second panorama may be obtained by shooting at the second picture acquisition position, may be drawn (for example, by computer drawing or manual drawing) based on the second picture acquisition position, or may be generated by an image algorithm based on the second picture acquisition position, which may be determined according to actual needs; the embodiments of the present disclosure are not limited in this respect.
For example, in some examples, the second panorama can include 6 second scene pictures of different orientations, i.e., including 6 orientations of front, back, left, right, up, down, etc., at the second picture acquisition location, such that a complete and comprehensive scene can be presented in subsequent steps. For example, in other examples, the second panorama may include 5 second scene pictures of different orientations, i.e., including 5 orientations of front, back, left, right, and top at the second picture acquisition location, and no longer including the second scene picture below the second picture acquisition location. When the scene pictures below the second picture taking position are omitted from the second panorama and only the second scene pictures of 5 orientations, front, rear, left, right, and top, at the second picture taking position are included, the amount of data can be reduced.
It should be noted that, in the embodiment of the present disclosure, the first panorama may be acquired in the same manner as the second panorama, for example, both obtained by shooting or both obtained by drawing, so that the consistency of the user's visual experience can be ensured. Of course, the embodiment of the present disclosure is not limited thereto, and the first panorama may be acquired in a different manner from the second panorama to meet diversified application requirements. For example, the first panorama may be similar to the second panorama but with parallax, or the first panorama may be completely different from the second panorama.
For example, in step S113, the first picture taking position is a location in the actual scene, and the first viewpoint position may be obtained from the coordinates of the first picture taking position. For example, the first viewpoint position is a position in a virtual scene corresponding to an actual scene at the first picture taking position, and thus, the first viewpoint position corresponds to a first picture taking position in the actual scene.
For example, in some examples, a two-dimensional plane map may be generated according to an actual scene, and a coordinate of the first picture obtaining position in the actual scene is labeled to the two-dimensional plane map, where the labeling point is the first viewpoint position. For example, since the user moves in the virtual scene generally horizontally in actual applications, the coordinates of the first picture taking position may be two-dimensional coordinates (i.e., coordinates in the horizontal plane) from which the first viewpoint position is marked in the two-dimensional plan view, which may reduce the amount of data.
Of course, the embodiments of the present disclosure are not limited thereto, and in other examples, the coordinates of the first picture capturing position may also be three-dimensional coordinates (that is, both the coordinates in the horizontal plane and the altitude are included), and accordingly, a three-dimensional stereo map needs to be generated according to the actual scene, and the first viewpoint position is marked in the three-dimensional stereo map according to the three-dimensional coordinates.
For example, in step S114, the second picture taking position is a location in the actual scene, and the second viewpoint position can be obtained from the coordinates of the second picture taking position. For example, the second viewpoint position is a position in a virtual scene corresponding to the actual scene, and the second viewpoint position corresponds to a second picture taking position in the actual scene.
For example, in some examples, similar to the manner of obtaining the first viewpoint position, a two-dimensional plane graph may be generated according to an actual scene, and a coordinate of the second picture obtaining position in the actual scene is labeled to the two-dimensional plane graph, where the labeling point is the second viewpoint position. The coordinates of the acquisition position of the second picture adopt two-dimensional coordinates, and the position of the second viewpoint is obtained by marking a two-dimensional plane graph, so that the data volume can be reduced. Of course, the embodiment of the present disclosure is not limited to this, in other examples, the coordinates of the second picture obtaining position may also be three-dimensional coordinates, and accordingly, a three-dimensional stereo image needs to be generated according to an actual scene, and the second viewpoint position is obtained by marking in the three-dimensional stereo image according to the three-dimensional coordinates.
For example, the first picture acquisition position is different from the second picture acquisition position. For example, the first picture taking location and the second picture taking location may be different locations in the same room, and the first picture taking location and the second picture taking location may also be located in different rooms. For example, the first picture acquisition location may be 1 meter, 2 meters, or any other distance from the second picture acquisition location. Accordingly, the first viewpoint position and the second viewpoint position may be different positions within the same room in the virtual scene, and the first viewpoint position and the second viewpoint position may also be located within different rooms in the virtual scene.
It should be noted that the first picture obtaining position and the second picture obtaining position may also be locations outdoors, and the first viewpoint position and the second viewpoint position may also be positions in an outdoor scene, which may be determined according to actual needs, and the embodiments of the present disclosure are not limited thereto.
For example, in step S115, a hexahedron as shown in fig. 3B is constructed. When the first scene is displayed, the plurality of first scene pictures in the first panorama are attached to different surfaces of the hexahedron to construct the panoramic sky box, so that the panoramic sky box includes textures corresponding to the first scene; when the second scene is displayed, the plurality of second scene pictures in the second panorama are attached to different surfaces of the hexahedron to reconstruct the panoramic sky box, so that the panoramic sky box includes textures corresponding to the second scene. For example, the panoramic sky box may be constructed by performing a rendering operation in a processing unit, such as a graphics processing unit, by calling a graphics program interface (e.g., OpenGL) to attach the plurality of scene pictures in the first panorama, or the plurality of scene pictures in the second panorama, to different surfaces of the hexahedron.
For example, as shown in fig. 3A, the first panorama or the second panorama includes 6 scene pictures with different orientations, i.e., front, rear, left, right, top, and bottom. For example, the scene pictures in the different orientations are attached to the surfaces of the hexahedron in the corresponding orientations as shown in fig. 3B, so that a panoramic sky box carrying different textures for different scenes can be obtained.
For example, as shown in fig. 3C, the panoramic sky box is a cube, and the center of the panoramic sky box is its geometric center. For example, the geometric center is located at the origin of a three-dimensional coordinate system, i.e., the center of the panoramic sky box is located at the origin (0,0,0) of the coordinate axes shown in fig. 3C. The first viewpoint position A and the second viewpoint position B are both located inside the panoramic sky box, at different positions within it.
In order to browse a virtual scene interactively, the visible part of the virtual scene corresponding to the user's rotation angle needs to be displayed. Therefore, when the corresponding scene is presented by using the panoramic sky box, a second virtual camera needs to be arranged in the panoramic sky box, and when the corresponding scene is presented by using the three-dimensional model, a first virtual camera is arranged in the three-dimensional model. For example, the first virtual camera and the second virtual camera are used for rendering the portion of the virtual scene visible to the user; for example, the first virtual camera and the second virtual camera are implemented by corresponding program code for rendering, which may follow designs in the art and is not described herein again.
For example, the second virtual camera and the first virtual camera are the same virtual camera, or different virtual cameras. For example, the first virtual camera and the second virtual camera may be two identical cameras or two different cameras, so that switching between the panoramic sky box and the three-dimensional model can be more flexible; of course, a single virtual camera may instead serve in both the panoramic sky box and the three-dimensional model in place of the first virtual camera and the second virtual camera, which is not limited by the embodiments of the present disclosure. For example, the position of the first virtual camera and the position of the second virtual camera correspond to the actual picture acquisition positions, respectively; for example, they correspond to the first picture acquisition position and the second picture acquisition position, respectively.
For example, in this example, when the scene is presented using the panoramic sky box, the scene switching method further includes: when the first scene is displayed, placing the second virtual camera at the first viewpoint position A corresponding to the first scene in the panoramic sky box; when the second scene is displayed, placing the second virtual camera at the second viewpoint position B corresponding to the second scene in the panoramic sky box, so that the user can see a complete spatial picture of each scene from the corresponding viewpoint position. As for the method of rendering according to the user's rotation angle with the first virtual camera and the second virtual camera, reference may be made to designs in the art, and a detailed description is omitted here.
For example, in the first scene, the texture of the panoramic sky box includes a plurality of rotated and shifted first scene pictures, and in the second scene, the texture of the panoramic sky box includes a plurality of rotated and shifted second scene pictures. For example, the displacement information of the plurality of first scene pictures and the displacement information of the plurality of second scene pictures are determined according to the coordinates of the first viewpoint position and the coordinates of the second viewpoint position, respectively.
Since the second virtual camera is not located at the center of the panoramic sky box (i.e., the origin of coordinates (0,0,0)), in order to see a complete spatial picture at the corresponding viewpoint position, the plurality of scene pictures applied as the texture of the panoramic sky box are also displaced to a certain extent according to the viewpoint position of the second virtual camera, so that the user sees the complete spatial picture when the second virtual camera is located at the first viewpoint position or the second viewpoint position. For example, when the first scene is displayed, the displacement information of the plurality of first scene pictures in the first panorama corresponding to the first scene corresponds to the displacement between the first viewpoint position A and the origin of coordinates (0,0,0); when the second scene is displayed, the displacement information of the plurality of second scene pictures in the second panorama corresponding to the second scene corresponds to the displacement between the second viewpoint position B and the origin of coordinates (0,0,0).
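The displacement described above, between a viewpoint position and the box center at the origin, can be sketched as a simple vector difference. This is a minimal illustration under the assumption that the texture shift mirrors the viewpoint offset; the function name is hypothetical.

```python
def texture_displacement(viewpoint, center=(0.0, 0.0, 0.0)):
    """Hypothetical sketch: displacement information for the sky-box
    textures so that a complete spatial picture is seen from `viewpoint`
    rather than from the box center. It is simply the componentwise
    offset between the viewpoint position and the center."""
    return tuple(v - c for v, c in zip(viewpoint, center))

# Viewpoint position A inside a box centered at the origin (0,0,0):
shift_a = texture_displacement((1.0, 0.0, 0.5))  # (1.0, 0.0, 0.5)
```

A viewpoint exactly at the center yields a zero displacement, i.e. the textures need no shift in that degenerate case.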
For example, a photograph taken at a picture acquisition position may be tilted; therefore, the plurality of first scene pictures and the plurality of second scene pictures may be rotation-corrected to ensure that the scene seen through the virtual camera is consistent with the actual scene, thereby improving the user experience.
For example, in some examples, the first and second panoramic views are pictures of a scene of the same room, and thus, the first and second scenes presented by the panoramic sky box are scenes of the same room. Since the first viewpoint position is different from the second viewpoint position, the first scene and the second scene have a certain parallax although located in the same room. For example, in other examples, the first and second panoramic views are pictures of scenes of different rooms, and thus, the first and second scenes presented by the panoramic sky box are scenes of different rooms. In an embodiment of the present disclosure, the first panorama and the second panorama may be any scene pictures, may be pictures of a room (for example, indoor pictures), or may be outdoor pictures, and the embodiment of the present disclosure is not limited to this.
It should be noted that, in the embodiment of the present disclosure, the manner of generating the panoramic sky box from the first panorama or the second panorama is not limited to the manner described above; any applicable sky box generation method may be adopted, as long as the first scene corresponding to the first panorama or the second scene corresponding to the second panorama is presented in the panoramic sky box, and the embodiments of the present disclosure are not limited in this respect.
It should be noted that, in the embodiment of the present disclosure, the shape of the panoramic sky box is not limited to a cube; it may also be any suitable shape such as a rectangular parallelepiped or a hexagonal prism, which may be determined according to actual needs, and the embodiment of the present disclosure is not limited in this respect. For example, in some examples, when the panoramic sky box presents a scene corresponding to a specially shaped room, such as one with six walls instead of the typical four, the panoramic sky box may be a hexagonal prism; accordingly, the first panorama and the second panorama each include 8 scene pictures of different orientations, and the 8 scene pictures are correspondingly attached to the different surfaces of the hexagonal prism.
It should be noted that, in the embodiment of the present disclosure, when the first panorama and the second panorama each include 6 scene pictures with different orientations, each of the 6 surfaces of the generated panoramic sky box has a texture; when the first panorama and the second panorama each include 5 scene pictures with different orientations, the surfaces in the 5 orientations of front, rear, left, right, and top of the generated panoramic sky box have textures while the bottom surface has none, so that the panoramic sky box is effectively a sky dome. For example, when the bottom surface of the panoramic sky box has no texture, an image algorithm may be used to generate an image for the bottom surface, or the bottom surface may be filled with a color.
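Filling the untextured bottom surface with a color, as mentioned above, can be sketched as generating a uniform stand-in image. The function and its grid-of-tuples representation are hypothetical simplifications; a real implementation would produce a GPU texture.

```python
def solid_color_face(width, height, rgb):
    """Hypothetical sketch: generate a solid-color stand-in for the
    missing bottom face, as a row-major grid of RGB tuples."""
    return [[rgb for _ in range(width)] for _ in range(height)]

# A uniform grey "floor" to fill the bottom of a 5-face sky dome:
bottom = solid_color_face(4, 4, (128, 128, 128))
```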
A general three-dimensional model has the disadvantages of a large data volume, high development cost, large files, and a heavy rendering workload, which easily causes stuttering, system latency, and the like.
For example, in some embodiments of the present disclosure, the three-dimensional model is determined based on wall data of a picture obtaining scene (e.g., a broker actually takes a picture of a room), so that the amount of data required for running the three-dimensional model can be greatly reduced, and thus, the resource occupation space can be reduced, the power consumption of the system can be reduced, the operation is simple, the fluency of the system can be ensured, the jamming can be avoided, and the user experience can be improved. The specific construction method of the three-dimensional model can refer to the design in the field, and is not described herein again.
For example, a building unit may be provided, and the panoramic sky box and the three-dimensional model may be built by the building unit; the building unit may be implemented, for example, by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with the corresponding computer instructions. For example, the processing unit may be a general-purpose processor or a special-purpose processor, and may be a processor based on the X86 or ARM architecture, or the like.
For step S120, when the above three-dimensional model display is adopted in the scene switching process, the first panorama and the second panorama need to be applied to the above constructed three-dimensional model. For example, when a trigger event for switching from a first viewpoint position to a second viewpoint position is detected, a first virtual camera in the three-dimensional model is used for rendering for displaying, for example, showing an animation in the process of switching from a first scene to a second scene, so that a spatial roaming effect can be generated in the process of scene switching.
Fig. 4 is a flowchart of a method for applying the first panorama and the second panorama to the three-dimensional model constructed as described above according to at least one embodiment of the present disclosure. For example, as shown in fig. 4, the scene switching method further includes step S121 and step S122.
Step S121: a hybrid texture is obtained based on the first and second panoramas.
Step S122: the hybrid texture is applied to the three-dimensional model.
For step S121, for example, the first panorama and the second panorama are blended according to a certain perspective transformation to obtain a blended texture. That is, the mixed texture includes both the first panorama and the second panorama, for example, by adjusting transparency of the first panorama and transparency of the second panorama in the mixed texture, switching from a first scene corresponding to the first panorama to a second scene corresponding to the second panorama can be realized, and thus a spatial roaming effect can be realized in the three-dimensional model. The specific mixing method can refer to the design in the field and is not described herein.
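The per-pixel effect of such a mixed texture can be illustrated with a plain alpha blend of the two panoramas, weighted by their transparencies. This is a minimal sketch, not the disclosed mixing method (which may also involve a perspective transformation); the function name is hypothetical.

```python
def blend_pixel(p1, p2, alpha1):
    """Hypothetical sketch: alpha-blend one RGB pixel of the first
    panorama over the corresponding pixel of the second panorama.
    `alpha1` is the normalized weight of the first panorama; the second
    panorama's weight is 1 - alpha1, so the blend is always fully opaque."""
    alpha2 = 1.0 - alpha1
    return tuple(alpha1 * a + alpha2 * b for a, b in zip(p1, p2))

# At alpha1 = 1.0 only the first scene is visible; at alpha1 = 0.0 only
# the second; intermediate values give the cross-fade.
halfway = blend_pixel((200, 100, 0), (100, 100, 100), 0.5)  # (150.0, 100.0, 50.0)
```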
For step S122, for example, the first panorama and the second panorama included in the mixed texture are applied to the three-dimensional model. Since the three-dimensional model inherently contains depth information, applying the mixed texture to the corresponding positions in the three-dimensional model makes the displayed image, with its depth information, match the photographed image, which avoids distortion caused by position errors of specific features such as doors and windows, thereby improving the realism of the display and the user experience.
For example, in moving the first virtual camera from the first viewpoint position to the second viewpoint position, the transparency of the first panorama in the blended texture is decreased along the time axis, and the transparency of the second panorama in the blended texture is increased along the time axis. For example, the transparency of the first panorama and the transparency of the second panorama are normalized. For example, the transparency of the first panorama decreases from 1 to 0 within a certain period of time, and the transparency of the second panorama increases from 0 to 1 within the same period of time. For example, a transparency of 1 for the first panorama indicates that the first panorama is opaque, and a transparency of 0 for the second panorama indicates that the second panorama is transparent, so the first scene is displayed; for example, a transparency of 1 for the second panorama indicates that the second panorama is opaque, and a transparency of 0 for the first panorama indicates that the first panorama is transparent, which shows the second scene. Therefore, the first scene presented in the three-dimensional model gradually disappears, and the second scene presented in the three-dimensional model gradually appears, so that the first scene corresponding to the first panoramic image is switched to the second scene corresponding to the second panoramic image in the three-dimensional model.
In the embodiment of the present disclosure, by performing the above operations in the three-dimensional model, the scene switch seen by the user has a fade-in/fade-out effect together with a spatial roaming effect, thereby improving the user experience.
For example, the transparency of the first panorama and the transparency of the second panorama each vary linearly in time, and thus may have a more uniform switching effect. Of course, the embodiments of the present disclosure are not limited thereto, and the transparency of the first panorama and the transparency of the second panorama may also be nonlinearly changed in time, respectively, to achieve a personalized switching effect.
For example, the transparency of the first panorama and the transparency of the second panorama vary over the same period of time: when the transparency of the first panorama begins to change, the transparency of the second panorama also begins to change, and when the transparency of the first panorama finishes changing, the transparency of the second panorama finishes changing as well.
For example, the time axis includes a time to move the first virtual camera from the first viewpoint position to the second viewpoint position, and a sum of a transparency of the first panorama in the blended texture and a transparency of the second panorama in the blended texture is 1. That is, during the change of the transparency of the first panorama and the transparency of the second panorama, the sum of both is always kept at 1.
For example, in the process of switching from a first scene to a second scene, the transparency of the first panorama is 1 (i.e., the first panorama is opaque) and the transparency of the second panorama is 0 (i.e., the second panorama is transparent) when the first scene is displayed; when the second scene is displayed, the transparency of the first panorama is 0 (i.e., the first panorama is transparent) and the transparency of the second panorama is 1 (i.e., the second panorama is opaque). For example, during a scene cut, when the texture transparency of the first panorama is 0.8, the transparency of the second panorama is 0.2. For example, during a scene cut, when the transparency of the first panorama is 0.5, the transparency of the second panorama is 0.5. For example, after the scene switching is completed, the transparency of the first panorama is 0, and the transparency of the second panorama is 1, and at this time, the user sees the second scene corresponding to the second panorama.
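The linear transparency schedule described above, with the two transparencies always summing to 1 along the time axis, can be sketched as follows. This is an illustrative sketch; the function name and the clamping outside the switching interval are assumptions.

```python
def transparency_at(t, duration):
    """Hypothetical sketch: transparencies (alpha1, alpha2) of the first
    and second panoramas at time t of a switch lasting `duration`.
    Both vary linearly and their sum is always 1."""
    s = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return 1.0 - s, s

# Over a 2-second switch: start, midpoint, end.
start = transparency_at(0.0, 2.0)  # (1.0, 0.0): only the first scene
mid   = transparency_at(1.0, 2.0)  # (0.5, 0.5): halfway, e.g. both half-visible
end   = transparency_at(2.0, 2.0)  # (0.0, 1.0): only the second scene
```

At every instant `alpha1 + alpha2 == 1`, matching the requirement that the superimposed texture seen by the user always has full opacity.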
In this way, the transparency of the texture seen by the user (i.e., the superimposed texture of the transparency of the first panorama and the transparency of the second panorama in the mixed texture on the three-dimensional model seen by the user) can be always kept at 1, thereby ensuring the consistency of the visual effect of the user.
For example, before the process of switching from the first scene to the second scene begins, that is, when a single scene (i.e., the first scene) is displayed, the panoramic sky box is used for display; for example, the first scene corresponding to the first viewpoint position is displayed by using the second virtual camera in the panoramic sky box.
For example, after the first virtual camera has moved from the first viewpoint position to the second viewpoint position, that is, when a single scene (the second scene) is displayed, the second virtual camera in the panoramic sky box is used, for example, to display the second scene corresponding to the second viewpoint position; in other words, after the scene switching is completed, the three-dimensional model is hidden and the panoramic sky box is again used to display the second scene.
For example, at this time, the second panorama displayed in the three-dimensional model is applied to the panoramic sky box to replace the first panorama on it, so that after the scene switching is completed, the second scene corresponding to the second panorama is displayed through the panoramic sky box.
For example, as shown in fig. 3C, the first virtual camera is moved from the first viewpoint position to the second viewpoint position as follows: a movement vector AB is obtained from the first viewpoint position A and the second viewpoint position B, and the first virtual camera moves along the movement vector AB, with the first viewpoint position A as the starting point, to the second viewpoint position B. For example, the moving route of the first virtual camera is a straight line, that is, the first virtual camera moves along the straight line between the two points A and B. For example, the first virtual camera may move at a constant speed or at a variable speed, which is not limited in the embodiments of the present disclosure. The position transformation algorithm for the first virtual camera may follow conventional designs and is not described in detail here.
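For the constant-speed case, the camera's position along the straight line from A to B is simply a linear interpolation between the two viewpoint positions. This is a minimal sketch under that assumption; the function name is illustrative.

```python
def camera_position(a, b, t, duration):
    """Hypothetical sketch: position of the first virtual camera at time
    t while it moves at constant speed along the straight line from
    viewpoint position `a` to viewpoint position `b`."""
    s = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return tuple(pa + s * (pb - pa) for pa, pb in zip(a, b))

# Midpoint of a 2-second move from A = (0,0,0) to B = (2,0,2):
A, B = (0.0, 0.0, 0.0), (2.0, 0.0, 2.0)
mid = camera_position(A, B, 1.0, 2.0)  # (1.0, 0.0, 1.0)
```

Variable-speed motion would replace the linear progress `s` with any monotonic easing function of `t / duration`.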
It should be noted that, in the embodiment of the present disclosure, the operation of changing the transparency of the first panorama and the transparency of the second panorama is performed simultaneously with the operation of moving the first virtual camera from the first viewpoint position to the second viewpoint position. For example, the moving time of the first virtual camera is equal to the changing time of the transparency of the first and second panoramas, and the time when the first virtual camera starts to move is equal to the time when the transparency of the first and second panoramas starts to change. For example, the moving time of the first virtual camera (i.e., the time of the change of the transparency of the first and second panoramas) may be 1s, 5s, 10s, 30s, 1min or any other time, which is a switching time of switching the scene at the first viewpoint position to the scene at the second viewpoint position, which may be determined according to actual needs, and embodiments of the present disclosure are not limited thereto.
In a typical scene switching method, different scenes are realized with different sky boxes; for example, two sky boxes presenting different scenes are centered at the same point (e.g., both at the origin of coordinates) or at different points. Accordingly, scene switching is achieved merely by moving the virtual camera from one sky box to another. Therefore, during the switch the user cannot feel any spatial roaming, and the transition is abrupt, unlike what the user would experience in a real environment.
In contrast, in the scene switching method provided by the embodiment of the present disclosure, the scenes before and after the switch (i.e., whenever no switching is in progress) are both displayed by the same panoramic sky box rather than by different sky boxes, and during the switch the first virtual camera in the three-dimensional model is moved from the first viewpoint position A to the second viewpoint position B, so that the user is shown an animation of the transition. The user thus gets a feeling of spatial roaming similar to moving through the actual environment, which improves the user experience.
For example, the scene switching method provided by the embodiment of the disclosure can be used for house-watching software. The user browses scenes (virtual scenes) of all rooms through the electronic terminal (such as a mobile phone, a computer and the like), so that the room does not need to be seen on the spot, the room-viewing efficiency is improved, and the user experience is also improved. Of course, the embodiments of the present disclosure are not limited thereto, and the scene switching method may be used in any scene, for example, may also be used in the fields of games, education, and the like, which may be determined according to actual needs, and the embodiments of the present disclosure are not limited thereto.
For example, a control unit may be provided, and during a switch from a first scene to a second scene the control unit renders for display using a first virtual camera in the three-dimensional model. The control unit may be implemented, for example, by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit having data processing and/or instruction execution capabilities, together with corresponding computer instructions.
At least one embodiment of the disclosure further provides a scene switching method based on the three-dimensional model and the panoramic sky box, and the method is suitable for a computing device. The scene switching method comprises the following steps: constructing a panoramic sky box and a three-dimensional model, wherein when a first scene is displayed, the panoramic sky box comprises textures corresponding to the first scene, when a second scene is displayed, the panoramic sky box comprises textures corresponding to the second scene, and the first scene is different from the second scene; displaying the first scene by using the panoramic sky box, and executing the following method when an instruction for clicking a click mark of the second scene is received in the first scene: rendering for display by using a first virtual camera in the three-dimensional model, and, in the three-dimensional model, moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different, the first viewpoint position corresponds to the click mark of the first scene, and the second viewpoint position corresponds to the click mark of the second scene; and after the switching from the first scene to the second scene is executed, displaying the second scene by using a second virtual camera in the panoramic sky box.
Fig. 5 is a flowchart of another scene switching method according to at least one embodiment of the present disclosure. As shown in fig. 5, the scene switching method includes steps S210 and S240.
Step S210: construct a panoramic sky box and a three-dimensional model.
For example, when a first scene is displayed, the panoramic sky box comprises a texture corresponding to the first scene, when a second scene is displayed, the panoramic sky box comprises a texture corresponding to the second scene, and the first scene and the second scene are different. For example, the specific operation process of step S210 may refer to the specific description of step S110, which is not described herein again.
Step S220: the first scene is displayed using the full sedum empty box.
For example, when the position mark is located at the click mark of the first scene, the first scene is displayed using the second virtual camera in the panoramic sky box. For example, when the scene switching method is performed using a mobile phone or a computer and the respective scenes are displayed, the position mark may be a cursor operated by a finger on a touch screen, by a touch pad, or by a mouse.
For example, the position mark is displayed via a display screen of the computing device (e.g., the display screen of a mobile phone or a computer).
It should be noted that the position mark and the click mark are not strictly necessary; it suffices that the panoramic sky box is used for display when no scene switching is in progress and the three-dimensional model is used for display during scene switching, and the embodiments of the present disclosure are not limited in this respect.
For example, the click mark of the first scene corresponds to the first viewpoint position. When the cursor is located at the click mark of the first scene, i.e., at the first viewpoint position, a single scene is being displayed and no scene switching is taking place, so the first scene is displayed using the second virtual camera in the panoramic sky box.
Step S230: when an instruction to click a click marker of a second scene is received in a first scene, rendering for display using a first virtual camera in a three-dimensional model.
For example, when the click mark of the second scene is clicked on the display screen of the computing device by a finger touch or a mouse, the processor of the computing device receives an instruction to click the click mark of the second scene in the first scene, and accordingly calls program code stored in a memory, for example, to perform the corresponding scene switching operation.
For example, when there is an interior ground roaming point, e.g., when the click mark is a small circle on the ground, a finger or mouse click may be accepted anywhere within a certain range around the click mark, for example within 15 degrees of the center of the click mark. When there is no ground roaming point, e.g., when the click mark is an arrow, a click on the arrow itself may be required to trigger the instruction for the scene switch. The embodiments of the present disclosure are not limited in this respect.
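A hit test of this kind can be sketched as follows. This is an illustrative sketch only: the function name, the parameter layout, and the choice of an angular tolerance measured from the camera are assumptions; the 15-degree figure comes from the example above.

```python
import math

def click_hits_mark(click_pos, mark_pos, camera_pos, mark_is_circle, angle_tol_deg=15.0):
    """Decide whether a click triggers the scene switch (hypothetical helper).

    For a ground-circle click mark, accept any click whose direction from the
    camera is within angle_tol_deg of the direction towards the mark's center;
    for an arrow mark, require the click to land on the mark itself.
    Positions are (x, y, z) tuples.
    """
    if not mark_is_circle:
        return click_pos == mark_pos            # arrow: exact hit required

    def direction(frm, to):
        v = [t - f for f, t in zip(frm, to)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    d_click = direction(camera_pos, click_pos)
    d_mark = direction(camera_pos, mark_pos)
    cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d_click, d_mark))))
    return math.degrees(math.acos(cos_a)) <= angle_tol_deg
```

With the camera at the origin and the circle mark straight ahead, a click slightly off-center (a few degrees) is accepted, while a click far from the mark is rejected.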
For example, in the three-dimensional model, the first virtual camera is moved from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene to realize the switch from the first scene to the second scene. The first viewpoint position and the second viewpoint position are different; the first viewpoint position corresponds to the click mark of the first scene, and the second viewpoint position corresponds to the click mark of the second scene.
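Moving the first virtual camera between the two viewpoint positions amounts to generating a sequence of intermediate camera positions, one per rendered frame. The sketch below is a minimal illustration; the smoothstep easing curve is an assumption chosen so the camera accelerates and decelerates gently, not something mandated by the text.

```python
def interpolate_viewpoints(a, b, num_frames):
    """Yield per-frame camera positions from viewpoint a to viewpoint b.

    a and b are (x, y, z) tuples; the easing function is illustrative.
    Returns num_frames + 1 positions, including both endpoints.
    """
    frames = []
    for i in range(num_frames + 1):
        t = i / num_frames
        s = t * t * (3.0 - 2.0 * t)      # smoothstep: s(0)=0, s(1)=1, zero slope at ends
        frames.append(tuple(pa + (pb - pa) * s for pa, pb in zip(a, b)))
    return frames
```

Rendering the three-dimensional model once per generated position produces the moving-camera animation the user perceives as spatial roaming.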
For example, the specific operation process of step S230 may refer to the specific description of step S120 shown in fig. 1, and is not described herein again.
Step S240: after the switching from the first scene to the second scene is executed, display the second scene using the panoramic sky box.
For example, after the switching from the first scene to the second scene is performed, that is, when the position mark is located at the click mark of the second scene, the second scene is displayed by using the second virtual camera in the panoramic sky box. In this case the cursor is located at the second viewpoint position corresponding to the second scene, the scene switching has finished, and a single scene (the second scene) is displayed, so the second virtual camera in the panoramic sky box is used for display.
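The flow of steps S210 to S240 can be sketched as a small state machine: the sky box's second virtual camera is used while a single scene is displayed, and the three-dimensional model's first virtual camera takes over only while a switch is in progress. The Python sketch below is illustrative only; the class and method names are hypothetical and the camera animation itself is stubbed out.

```python
class SceneSwitcher:
    """Illustrative sketch of the S210-S240 flow (hypothetical names).

    mode == "skybox": a single scene is shown via the panoramic sky box's
    second virtual camera; mode == "model": a switch is in progress and the
    three-dimensional model's first virtual camera is used for rendering.
    """

    def __init__(self, viewpoints):
        self.viewpoints = viewpoints      # scene name -> (x, y, z) viewpoint position
        self.current_scene = None
        self.mode = "skybox"

    def show(self, scene):
        """S220 / S240: display a single scene with the panoramic sky box."""
        self.current_scene = scene
        self.mode = "skybox"

    def on_click(self, target_scene):
        """S230: the click mark of another scene was clicked in the current scene."""
        if target_scene == self.current_scene:
            return                        # no switch needed
        self.mode = "model"               # render with the first virtual camera
        self._move_camera(self.viewpoints[self.current_scene],
                          self.viewpoints[target_scene])
        self.show(target_scene)           # S240: back to the sky box

    def _move_camera(self, start, end):
        pass                              # frame-by-frame camera animation omitted
```

After a click on another scene's mark, the switcher ends up back in sky-box mode with the target scene current, matching the description of step S240.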
It should be noted that, in the embodiments of the present disclosure, the flow of the scene switching method provided in the above-mentioned embodiments of the present disclosure may include more or less operations, and these operations may be executed sequentially or in parallel. Although the flow of the scene switching method described above includes a plurality of operations that occur in a specific order, it should be clearly understood that the order of the plurality of operations is not limited. The scene switching method described above may be performed once or may be performed a plurality of times according to a predetermined condition.
The scene switching method based on the three-dimensional model and the panoramic sky box provided by the embodiments of the present disclosure can realize switching between different scenes with a spatial roaming effect during the switch and a clear sense of spatial movement, thereby improving the user experience.
For example, the scene switching method provided by the foregoing embodiments may be implemented by the scene switching system shown in fig. 6. As shown in fig. 6, the scene switching system 10 may include a user terminal 11, a network 12, a server 13, and a database 14.
The user terminal 11 may be, for example, a computer 11-1, a cellular phone 11-2 shown in fig. 6. It is understood that the user terminal 11 may be any other type of electronic device capable of performing data processing, which may include, but is not limited to, a desktop computer, a laptop computer, a tablet computer, a smart phone, a smart home device, a wearable device, a vehicle-mounted electronic device, a monitoring device, and the like. The user terminal may also be any equipment provided with an electronic device, such as a vehicle, a robot, etc.
The user may operate an application installed on the user terminal 11, the application transmits user behavior data to the server 13 through the network 12, and the user terminal 11 may also receive data transmitted by the server 13 through the network 12. The user terminal 11 may implement the scene switching method provided by the embodiment of the present disclosure by running a sub program or a sub thread.
For example, when the user uses the house-viewing software on the user terminal 11, the server 13 transmits the house source information browsed by the user to the user terminal 11 through the network 12, the house source information including a virtual scene of the house source as well as the three-dimensional model, the panoramic sky box, and related data required for presenting the virtual scene. The house-viewing software on the user terminal 11 displays the virtual scene of the house source, and the user can switch scenes by clicking different viewpoint positions in the virtual scene. For example, the user terminal 11 may include a touch screen, so that the user can directly click a position on the screen with a finger to switch scenes; the user terminal 11 may also include a mouse, so that the user clicks the position of the cursor on the screen with the mouse to switch scenes.
In some embodiments, the processing unit of the user terminal 11 may be utilized to execute the scene switching method provided by the embodiments of the present disclosure. In some implementations, the user terminal 11 may perform the scene switching method using an application built in the user terminal 11. In other implementations, the user terminal 11 may execute the scene switching method provided by at least one embodiment of the present disclosure by calling an application program stored outside the user terminal 11.
In other embodiments, the user terminal 11 transmits a received instruction to click on a click mark (i.e., viewpoint position) of a different scene to the server 13 via the network 12, and the scene switching method is performed by the server 13. In some implementations, the server 13 may perform the scene switching method using an application built in the server. In other implementations, the server 13 may perform the scene change method by calling an application stored outside the server 13.
The network 12 may be a single network or a combination of at least two different networks. For example, the network 12 may include, but is not limited to, one or a combination of local area networks, wide area networks, public networks, private networks, and the like.
The server 13 may be a single server or a group of servers, each connected via a wired or wireless network. A group of servers may be centralized, such as a data center, or distributed. The server 13 may be local or remote.
The database 14 may generally refer to a device having a storage function. The database 14 is mainly used to store various data utilized, generated, and output by the user terminal 11 and the server 13 in operation. For example, the database 14 stores a large amount of house source information; the server 13 reads the house source information required by the user from the database 14 and transmits it to the user terminal 11 through the network 12, and the user terminal 11 displays the virtual scene of the house source, so that the user can browse and switch scenes. The database 14 may be local or remote, and may include various memories, such as a random access memory (RAM) and a read-only memory (ROM). The storage devices mentioned above are only examples, and the storage devices usable by the system are not limited to these.
The database 14 may be interconnected or in communication with the server 13 or a portion thereof via the network 12, or directly interconnected or in communication with the server 13, or a combination thereof.
In some embodiments, the database 14 may be a stand-alone device. In other embodiments, the database 14 may also be integrated in at least one of the user terminal 11 and the server 13. For example, the database 14 may be provided on the user terminal 11 or on the server 13. As another example, the database 14 may be distributed, with one part provided in the user terminal 11 and another part provided in the server 13.
For example, in some examples, the above-described scene switching method may be applied to the synchronized presentation of a plurality of remote terminals. For example, when a plurality of users located in different regions want to view the same house, the scene switching method can be applied to the synchronized display of a plurality of different terminals, so that efficient synchronized roaming display can be realized. The plurality of remote terminals may be terminal devices, such as mobile phones or computers, used by different users located in different areas, such as Shanghai, Wuhan, and Beijing. The plurality of users may be, for example, a family located in different regions and viewing a set of houses at the same time for convenience of negotiation, or a plurality of users unrelated to each other. The plurality of remote terminals serve as controlled ends and are controlled and made to display by a terminal device serving as the main control end. For example, the main control end may be a terminal device used by a broker, for example in Shanghai. The broker may perform operations such as displaying the first scene or the second scene and switching from the first scene to the second scene on the terminal device serving as the main control end; correspondingly, when the main control end performs an operation, the computing device may send a corresponding operation instruction to the terminal devices serving as the controlled ends located in the other areas (for example, Shanghai, Wuhan, and Beijing) to control those terminal devices to simultaneously present the same display as the main control end, thereby implementing synchronized display of a plurality of remote terminals.
For example, in this example, when a plurality of terminal devices located in different regions serve as controlled ends and perform synchronized roaming display, there is no need to transmit image data with a large data size to each of them. The image data only needs to be transmitted to the terminal device serving as the main control end, which performs the corresponding scene display and switching; each controlled-end terminal device merely performs the same display as the main control end under the control of the operation instructions sent by the main control end. Synchronized roaming display of a plurality of remote terminals can thus be realized, which effectively reduces the amount of data transmitted, lowers transmission power consumption, keeps the operation simple and the response timely, and further improves the user experience.
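The key point above is that the controlled ends receive a compact operation instruction rather than rendered image data. The following sketch illustrates one possible encoding of such instructions; the message format, field names, and operation names are assumptions for illustration, not part of the disclosed method.

```python
import json

def make_op_message(op, **params):
    """Encode a main-control-end operation as a compact instruction string.

    Only the operation name and its parameters (e.g. the target scene of a
    switch) are transmitted, not image data; the schema is hypothetical.
    """
    return json.dumps({"op": op, "params": params}, sort_keys=True)

def apply_op_message(message, state):
    """Replay a main-control-end operation on a controlled end's local state."""
    msg = json.loads(message)
    if msg["op"] == "switch_scene":
        state["current_scene"] = msg["params"]["target"]
    elif msg["op"] == "show_scene":
        state["current_scene"] = msg["params"]["scene"]
    return state
```

A scene-switch instruction encoded this way is a few dozen bytes, orders of magnitude smaller than the per-frame image data it replaces.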
Fig. 7 is a schematic block diagram of a scene switching apparatus according to at least one embodiment of the present disclosure. For example, in the example shown in fig. 7, the scene switching apparatus 100 includes a construction unit 110 and a control unit 120. For example, these units may be implemented by hardware (e.g., circuit) modules or software modules; the same applies in the following embodiments and is not repeated. These units may be implemented, for example, by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit having data processing and/or instruction execution capabilities, together with corresponding computer instructions.
The constructing unit 110 is configured to construct a panoramic sky box and a three-dimensional model, where when a first scene is displayed, the panoramic sky box includes a texture corresponding to the first scene, and when a second scene is displayed, the panoramic sky box includes a texture corresponding to the second scene, and the first scene is different from the second scene. For example, the constructing unit 110 may implement step S110, and its specific implementation may refer to the related description of step S110, which is not repeated here.
The control unit 120 is configured to render for display using a first virtual camera in the three-dimensional model during a switch from a first scene to a second scene, the control unit being configured to move the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene in the three-dimensional model to effect the switch from the first scene to the second scene, the first viewpoint position and the second viewpoint position being different. For example, the control unit 120 may implement step S120, and the specific implementation method may refer to the related description of step S120, which is not described herein again.
It should be noted that, in the embodiment of the present disclosure, the scene switching apparatus 100 may include more or less circuits or units, and the connection relationship between the respective circuits or units is not limited and may be determined according to actual requirements. The specific configuration of each circuit is not limited, and may be configured by an analog device, a digital chip, or other suitable configurations according to the circuit principle.
Fig. 8 is a schematic block diagram of another scene switching apparatus according to at least one embodiment of the present disclosure. For example, as shown in fig. 8, the scene switching apparatus 200 includes a processor 210, a memory 220, and one or more computer program modules 221.
For example, the processor 210 and the memory 220 are connected by a bus system 230. For example, one or more computer program modules 221 are stored in memory 220. For example, one or more computer program modules 221 include instructions for performing a scene switching method provided by any embodiment of the present disclosure. For example, instructions in one or more computer program modules 221 may be executed by processor 210. For example, the bus system 230 may be a conventional serial, parallel communication bus, etc., and embodiments of the present disclosure are not limited in this respect.
For example, the processor 210 may be a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), or another form of processing unit having data processing capability and/or instruction execution capability; it may be a general-purpose processor or a special-purpose processor, and may control other components in the scene switching apparatus 200 to perform desired functions.
Memory 220 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on a computer-readable storage medium and executed by processor 210 to implement the functions of the disclosed embodiments (implemented by processor 210) and/or other desired functions, such as a scene switching method, etc. Various applications and various data, such as the first viewpoint position, the second viewpoint position, the movement displacement, and various data used and/or generated by the applications, etc., may also be stored in the computer-readable storage medium.
It should be noted that, for clarity and conciseness, not all the constituent elements of the scene switching apparatus 200 are given in the embodiments of the present disclosure. To realize the necessary functions of the scene switching device 200, those skilled in the art may provide and arrange other components not shown according to specific needs, and the embodiment of the present disclosure is not limited thereto.
For technical effects of the scene switching device 100 and the scene switching device 200 in different embodiments, reference may be made to technical effects of the scene switching method provided in the embodiments of the present disclosure, and details are not repeated here.
The scene switching apparatus 100 and the scene switching apparatus 200 may be used for various appropriate electronic devices (e.g., a terminal device or a server in fig. 6). Fig. 9 is a schematic structural diagram of an electronic device according to at least one embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
For example, as shown in fig. 9, in some examples, an electronic device 300 includes a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the computer system. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
For example, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 309 including a network interface card such as a LAN card or a modem. The communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data, performing communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 310 as necessary, so that a computer program read out therefrom is installed into the storage device 308 as necessary. While fig. 9 illustrates an electronic device 300 that includes various means, it is to be understood that not all illustrated means are required to be implemented or included; more or fewer devices may alternatively be implemented or included.
For example, the electronic device 300 may further include a peripheral interface (not shown in the figure) and the like. The peripheral interface may be various types of interfaces, such as a USB interface, a lightning (lighting) interface, and the like. The communication device 309 may communicate with networks such as the internet, intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs) and other devices via wireless communication. The wireless communication may use any of a number of communication standards, protocols, and technologies, including, but not limited to, global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), bluetooth, Wi-Fi (e.g., based on IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n standards), voice over internet protocol (VoIP), Wi-MAX, protocols for email, instant messaging, and/or Short Message Service (SMS), or any other suitable communication protocol.
For example, the electronic device may be any device such as a mobile phone, a tablet computer, a notebook computer, an electronic book, a game machine, a television, a digital photo frame, and a navigator, and may also be any combination of electronic devices and hardware, which is not limited in this respect in the embodiments of the disclosure.
For example, the processes described above with reference to the flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. When executed by the processing device 301, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; the acquired internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In various embodiments of the disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
At least one embodiment of the present disclosure also provides a storage medium. Fig. 10 is a schematic diagram of a storage medium according to at least one embodiment of the present disclosure. For example, as shown in Fig. 10, the storage medium 400 stores non-transitory computer-readable instructions 401 which, when executed by a computer (including a processor), can perform the scene switching method provided by any embodiment of the present disclosure.
For example, the storage medium can be any combination of one or more computer-readable storage media, for example, one containing computer-readable program code that constructs the panoramic sky box and the three-dimensional model, and another containing computer-readable program code that moves the first virtual camera from a first viewpoint position corresponding to a first scene to a second viewpoint position corresponding to a second scene to effect a switch from the first scene to the second scene. For example, when the program code is read by a computer, the computer may execute the program code stored in the computer storage medium to perform a scene switching method such as that provided by any of the embodiments of the present disclosure.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a flash memory, or any combination of the above, as well as other suitable storage media.
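As a concrete illustration of the kind of program code described above, the core of the scene switch — moving the first virtual camera from the first viewpoint position to the second while cross-fading the two panoramas so that their transparencies always sum to 1 — can be sketched as follows. This is a minimal sketch under stated assumptions, not the patented implementation; all names (`Vec3`, `switch_scene`, etc.) are hypothetical.

```python
# Minimal sketch of the scene-switching core: the first virtual camera is
# moved linearly from the first viewpoint position to the second while the
# blended texture cross-fades the two panoramas (alphas summing to 1).
# All names here are illustrative, not taken from the disclosure.
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def lerp(self, other: "Vec3", t: float) -> "Vec3":
        """Linear interpolation between self and other, t in [0, 1]."""
        return Vec3(self.x + (other.x - self.x) * t,
                    self.y + (other.y - self.y) * t,
                    self.z + (other.z - self.z) * t)


def switch_scene(first_vp: Vec3, second_vp: Vec3, steps: int):
    """Yield (camera_position, alpha_first, alpha_second) for each frame.

    The time axis is normalized to [0, 1]; the first panorama's
    transparency decreases while the second's increases, their sum
    staying 1 throughout the switch.
    """
    for i in range(steps + 1):
        t = i / steps
        yield first_vp.lerp(second_vp, t), 1.0 - t, t


# Example: a 5-frame switch from viewpoint (0,0,0) to viewpoint (4,0,2).
frames = list(switch_scene(Vec3(0, 0, 0), Vec3(4, 0, 2), steps=4))
```

At the first frame the camera sits at the first viewpoint with the first panorama fully shown; at the last frame it has arrived at the second viewpoint with the second panorama fully shown, matching the transparency rule of claims 4 and 5.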
The following points should be noted:
(1) The drawings of the embodiments of the present disclosure relate only to the structures involved in these embodiments; for other structures, reference may be made to common designs.
(2) In the absence of conflict, the embodiments of the present disclosure and the features of those embodiments may be combined with each other to arrive at new embodiments.
The above description is intended to be exemplary of the present disclosure, and not to limit the scope of the present disclosure, which is defined by the claims appended hereto.

Claims (16)

1. A scene switching method based on a three-dimensional model and a panoramic sky box, applicable to a computing device, the method comprising:
constructing the panoramic sky box and the three-dimensional model, wherein the three-dimensional model comprises a first virtual camera and the panoramic sky box comprises a second virtual camera;
rendering for display using a first virtual camera in the three-dimensional model during a switch from a first scene to a second scene;
when the first scene is displayed, causing the second virtual camera to be located at a first viewpoint position corresponding to the first scene;
when the second scene is displayed, causing the second virtual camera to be located at a second viewpoint position corresponding to the second scene;
the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different;
in the process of switching from the first scene to the second scene, the first virtual camera is moved from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene in the three-dimensional model so as to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different.
2. The scene switching method of claim 1, wherein constructing the panoramic sky box comprises:
acquiring a first panoramic image of the first scene, wherein the first panoramic image comprises a plurality of first scene pictures at a first picture acquisition position;
acquiring a second panoramic image of the second scene, wherein the second panoramic image comprises a plurality of second scene pictures at second picture acquisition positions;
obtaining the first viewpoint position based on the coordinates of the first picture acquisition position;
obtaining the second viewpoint position based on the coordinates of the second picture acquisition position;
constructing the panoramic sky box based on the first panorama of the first scene or the second panorama of the second scene,
wherein the center of the panoramic sky box is located at the origin of the coordinate system,
in the first scene, the texture of the panoramic sky box comprises the plurality of first scene pictures after rotation and displacement,
in the second scene, the texture of the panoramic sky box comprises the plurality of second scene pictures after rotation and displacement,
and the displacement information of the plurality of first scene pictures and the displacement information of the plurality of second scene pictures are determined according to the coordinates of the first viewpoint position and the coordinates of the second viewpoint position, respectively.
3. The scene switching method according to claim 2, wherein, in switching from the first scene to the second scene, the scene switching method further comprises:
obtaining a blended texture based on the first panorama and the second panorama;
applying the blended texture to the three-dimensional model.
4. The scene switching method of claim 3, wherein moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene in the three-dimensional model comprises:
in moving the first virtual camera from the first viewpoint position to the second viewpoint position, decreasing transparency of a first panorama in the blended texture along a time axis and increasing transparency of a second panorama in the blended texture along the time axis,
wherein the timeline includes a time to move the first virtual camera from the first viewpoint position to the second viewpoint position, and a sum of a transparency of a first panorama in the blended texture and a transparency of a second panorama in the blended texture is 1.
5. The scene switching method according to claim 4, wherein, in switching from the first scene to the second scene,
when the first scene is displayed, the transparency of the first panoramic image is 1, and the transparency of the second panoramic image is 0;
when the second scene is displayed, the transparency of the first panoramic image is 0, and the transparency of the second panoramic image is 1.
6. The scene switching method according to claim 1, further comprising:
after moving the first virtual camera from the first viewpoint position to the second viewpoint position, displaying the second scene corresponding to the second viewpoint position by using the second virtual camera.
7. The scene switching method according to claim 1 or 6, further comprising:
before starting the switch from the first scene to the second scene, rendering by using the second virtual camera to display the first scene corresponding to the first viewpoint position.
8. The method of claim 1 or 6, wherein moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene comprises:
obtaining a movement vector according to the first viewpoint position and the second viewpoint position, and moving the first virtual camera along the movement vector, with the first viewpoint position as a starting point, to the second viewpoint position.
9. The scene switching method according to claim 2, wherein the position of the first virtual camera and the position of the second virtual camera correspond to the first picture acquisition position and the second picture acquisition position, respectively.
10. The scene switching method according to claim 1 or 6, wherein the three-dimensional model is determined based on wall data of the picture acquisition scene.
11. The scene switching method according to any one of claims 1-2 and 7, wherein when a triggering event for switching from a first viewpoint position to a second viewpoint position is detected, rendering is performed for display using a first virtual camera in the three-dimensional model.
12. A scene switching method based on a three-dimensional model and a panoramic sky box, applicable to a computing device, the method comprising:
constructing the panoramic sky box and the three-dimensional model, wherein the three-dimensional model comprises a first virtual camera and the panoramic sky box comprises a second virtual camera;
when the first scene is displayed, causing the second virtual camera to be located at a first viewpoint position corresponding to the first scene;
when the second scene is displayed, causing the second virtual camera to be located at a second viewpoint position corresponding to the second scene;
the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different;
displaying the first scene by using the panoramic sky box, and, upon receiving an instruction generated by clicking a click mark of the second scene within the first scene, executing the following:
rendering for display using a first virtual camera in the three-dimensional model,
during the process of switching from the first scene to the second scene, in the three-dimensional model, moving the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene to realize the switching from the first scene to the second scene, wherein the first viewpoint position and the second viewpoint position are different, the first viewpoint position corresponds to a click mark of the first scene, and the second viewpoint position corresponds to a click mark of the second scene;
and after the switch from the first scene to the second scene has been performed, displaying the second scene by using the panoramic sky box.
13. A scene switching apparatus based on a three-dimensional model and a panoramic sky box, comprising:
a construction unit configured to construct the panoramic sky box and the three-dimensional model, wherein the three-dimensional model includes a first virtual camera and the panoramic sky box includes a second virtual camera;
a control unit configured to render for display using the first virtual camera in the three-dimensional model during a switch from a first scene to a second scene; to cause the second virtual camera to be located at a first viewpoint position corresponding to the first scene when the first scene is displayed; and to cause the second virtual camera to be located at a second viewpoint position corresponding to the second scene when the second scene is displayed; wherein the first scene is a partial scene in the whole virtual environment corresponding to the three-dimensional model, the second scene is a partial scene in the whole virtual environment, and the first scene and the second scene are different;
wherein the control unit is configured to move the first virtual camera from a first viewpoint position corresponding to the first scene to a second viewpoint position corresponding to the second scene in the three-dimensional model during switching from the first scene to the second scene, and the first viewpoint position and the second viewpoint position are different.
14. The scene switching apparatus of claim 13, wherein the panoramic sky box further comprises a second virtual camera, the control unit further configured to:
when the first scene is displayed, cause the second virtual camera to be located at a first viewpoint position corresponding to the first scene;
when the second scene is displayed, cause the second virtual camera to be located at a second viewpoint position corresponding to the second scene;
wherein the second virtual camera and the first virtual camera are the same virtual camera or different virtual cameras.
15. A scene switching apparatus based on a three-dimensional model and a panoramic sky box, comprising:
a processor;
a memory;
one or more computer program modules stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the scene switching method of any one of claims 1-12.
16. A storage medium storing non-transitory computer-readable instructions which, when executed by a computer, perform the scene switching method of any one of claims 1-12.
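The movement-vector step recited in claim 8 above reduces to simple vector arithmetic: subtract the first viewpoint position from the second to obtain the vector, then advance the camera along it from the first viewpoint as the starting point. A hedged sketch (the function names are illustrative, not from the disclosure):

```python
# Sketch of the claim-8 movement-vector step: compute the vector from the
# first viewpoint position to the second, then advance the camera along it.
# Names are illustrative only.
def movement_vector(first_vp, second_vp):
    """Component-wise difference: the vector pointing from first_vp to second_vp."""
    return tuple(b - a for a, b in zip(first_vp, second_vp))


def position_along(first_vp, vec, fraction):
    """Camera position after travelling `fraction` (0..1) of the movement vector."""
    return tuple(p + v * fraction for p, v in zip(first_vp, vec))


# Example viewpoints: halfway through, the camera is at the midpoint;
# at fraction 1.0 it has arrived at the second viewpoint.
vec = movement_vector((1.0, 0.0, 2.0), (3.0, 0.0, 6.0))
midpoint = position_along((1.0, 0.0, 2.0), vec, 0.5)
endpoint = position_along((1.0, 0.0, 2.0), vec, 1.0)
```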
CN202110366826.XA 2019-11-30 2019-11-30 Scene switching method and device and storage medium Active CN112967390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110366826.XA CN112967390B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110366826.XA CN112967390B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium
CN201911208538.0A CN111028336B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911208538.0A Division CN111028336B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium

Publications (2)

Publication Number Publication Date
CN112967390A true CN112967390A (en) 2021-06-15
CN112967390B CN112967390B (en) 2022-02-25

Family

ID=70207378

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201911208538.0A Active CN111028336B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium
CN202110366809.6A Active CN112967389B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium
CN202110366826.XA Active CN112967390B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201911208538.0A Active CN111028336B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium
CN202110366809.6A Active CN112967389B (en) 2019-11-30 2019-11-30 Scene switching method and device and storage medium

Country Status (1)

Country Link
CN (3) CN111028336B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693895A (en) * 2022-03-24 2022-07-01 北京城市网邻信息技术有限公司 Map switching method and device, electronic equipment and storage medium

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN111651053A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Urban sand table display method and device, electronic equipment and storage medium
CN111968246B (en) * 2020-07-07 2021-12-03 北京城市网邻信息技术有限公司 Scene switching method and device, electronic equipment and storage medium
CN112135325A (en) * 2020-09-30 2020-12-25 Oppo广东移动通信有限公司 Network switching method, device, storage medium and terminal
CN112184907A (en) * 2020-10-27 2021-01-05 中图云创智能科技(北京)有限公司 Space moving method of three-dimensional scene
CN113034654A (en) * 2021-03-10 2021-06-25 北京房江湖科技有限公司 Scene switching method and scene switching system
CN113593052B (en) * 2021-08-06 2022-04-29 贝壳找房(北京)科技有限公司 Scene orientation determining method and marking method
CN113724331B (en) * 2021-09-02 2022-07-19 北京城市网邻信息技术有限公司 Video processing method, video processing apparatus, and non-transitory storage medium
CN115840546A (en) * 2021-09-18 2023-03-24 华为技术有限公司 Method, electronic equipment and device for displaying image on display screen
CN114900679B (en) * 2022-05-25 2023-11-21 安天科技集团股份有限公司 Three-dimensional model display method and device, electronic equipment and readable storage medium
CN115168925B (en) * 2022-07-14 2024-04-09 苏州浩辰软件股份有限公司 View navigation method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103426202A (en) * 2013-07-24 2013-12-04 江苏物联网络科技发展有限公司 Display system and display method for three-dimensional panoramic interactive mobile terminal
CN106780759A (en) * 2016-12-09 2017-05-31 深圳创维-Rgb电子有限公司 Method, device and the VR systems of scene stereoscopic full views figure are built based on picture
CN108564654A (en) * 2018-04-03 2018-09-21 中德(珠海)人工智能研究院有限公司 The picture mode of entrance of three-dimensional large scene
US20190197765A1 (en) * 2017-12-22 2019-06-27 Magic Leap, Inc. Method of occlusion rendering using raycast and live depth

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN101118622A (en) * 2007-05-25 2008-02-06 清华大学 Minisize rudders three-dimensional track emulation method under city environment
US10085005B2 (en) * 2015-04-15 2018-09-25 Lytro, Inc. Capturing light-field volume image and video data using tiled light-field cameras
US20170132845A1 (en) * 2015-11-10 2017-05-11 Dirty Sky Games, LLC System and Method for Reducing Virtual Reality Simulation Sickness
CN106485772B (en) * 2016-09-30 2019-10-15 北京百度网讯科技有限公司 Panorama switching method and system
CN108492354A (en) * 2018-03-13 2018-09-04 北京农业智能装备技术研究中心 A kind of methods of exhibiting and system of Agricultural Park scene
CN108629828B (en) * 2018-04-03 2019-08-13 中德(珠海)人工智能研究院有限公司 Scene rendering transition method in the moving process of three-dimensional large scene
CN108648257B (en) * 2018-04-09 2020-12-29 腾讯科技(深圳)有限公司 Panoramic picture acquisition method and device, storage medium and electronic device
US11164377B2 (en) * 2018-05-17 2021-11-02 International Business Machines Corporation Motion-controlled portals in virtual reality
CN109360262B (en) * 2018-10-23 2023-02-24 东北大学 Indoor positioning system and method for generating three-dimensional model based on CAD (computer-aided design) drawing
CN110018742B (en) * 2019-04-03 2023-11-21 北京八亿时空信息工程有限公司 Construction method of network virtual travel system

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN103426202A (en) * 2013-07-24 2013-12-04 江苏物联网络科技发展有限公司 Display system and display method for three-dimensional panoramic interactive mobile terminal
CN106780759A (en) * 2016-12-09 2017-05-31 深圳创维-Rgb电子有限公司 Method, device and the VR systems of scene stereoscopic full views figure are built based on picture
US20190197765A1 (en) * 2017-12-22 2019-06-27 Magic Leap, Inc. Method of occlusion rendering using raycast and live depth
CN108564654A (en) * 2018-04-03 2018-09-21 中德(珠海)人工智能研究院有限公司 The picture mode of entrance of three-dimensional large scene

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114693895A (en) * 2022-03-24 2022-07-01 北京城市网邻信息技术有限公司 Map switching method and device, electronic equipment and storage medium
CN114693895B (en) * 2022-03-24 2023-03-03 北京城市网邻信息技术有限公司 Map switching method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112967389A (en) 2021-06-15
CN111028336B (en) 2021-04-23
CN112967389B (en) 2021-10-15
CN112967390B (en) 2022-02-25
CN111028336A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN112967390B (en) Scene switching method and device and storage medium
US20230245395A1 (en) Re-creation of virtual environment through a video call
US11087553B2 (en) Interactive mixed reality platform utilizing geotagged social media
US20180033208A1 (en) Telelocation: location sharing for users in augmented and virtual reality environments
CN112312111A (en) Virtual image display method and device, electronic equipment and storage medium
CN111277890B (en) Virtual gift acquisition method and three-dimensional panoramic living broadcast room generation method
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN114461064B (en) Virtual reality interaction method, device, equipment and storage medium
CN112070907A (en) Augmented reality system and augmented reality data generation method and device
CN110891167A (en) Information interaction method, first terminal and computer readable storage medium
EP4290464A1 (en) Image rendering method and apparatus, and electronic device and storage medium
CN109801354B (en) Panorama processing method and device
CN110889384A (en) Scene switching method and device, electronic equipment and storage medium
US20240037856A1 (en) Walkthrough view generation method, apparatus and device, and storage medium
CN114549781A (en) Data processing method and device, electronic equipment and storage medium
CN114419267A (en) Three-dimensional model construction method and device and storage medium
CN113704527B (en) Three-dimensional display method, three-dimensional display device and storage medium
WO2022244157A1 (en) Information processing device, information processing program, and information processing system
KR102464437B1 (en) Metaverse based cross platfrorm service system providing appreciation and trade gigapixel media object
US20230316670A1 (en) Volumetric immersion system & method
KR20130096785A (en) System, user terminal unit and method for guiding display information using mobile device
US20240143126A1 (en) Display method, apparatus, and electronic device
US20240020910A1 (en) Video playing method and apparatus, electronic device, medium, and program product
Jung et al. Immersive Virtual Reality Content Supporting a Wide and Free Viewpoint made with a Single 360° Camera
JP2023134089A (en) Communication device, communication system, display method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40054583

Country of ref document: HK