CN114581611A - Virtual scene construction method and device

Info

Publication number: CN114581611A (application CN202210457590.5A)
Authority: CN (China)
Prior art keywords: target, image, panoramic, scene, acquisition
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114581611B (English)
Inventors: Sheng Zhe (盛哲), Ding Shuai (丁帅), Xu Chao (徐超), Dong Zilong (董子龙), Tan Ping (谭平)
Current Assignee: Alibaba China Co Ltd
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd; priority to CN202210457590.5A
Publication of CN114581611A; application granted and published as CN114581611B

Classifications

    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/02: 3D image rendering; non-photorealistic rendering
    • G06T 15/04: 3D image rendering; texture mapping
    • G06T 19/20: Manipulating 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiments of the specification provide a virtual scene construction method and device. The virtual scene construction method comprises the following steps: acquiring images at at least one acquisition point in a target scene through an image acquisition device, and acquiring a video at a target acquisition point among the at least one acquisition point through a video acquisition device; generating a panoramic image corresponding to each acquisition point according to the image acquisition results, and generating a panoramic video corresponding to the target acquisition point according to the video acquisition results; and processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene, wherein the editing platform is used for editing the virtual scene browsed through a terminal. The method is applied in the field of virtual reality, and the generated target virtual scene can be rendered in a Web-side renderer to provide roaming interaction for users.

Description

Virtual scene construction method and device
Technical Field
The embodiments of the specification relate to the field of computer technology, and in particular to a virtual scene construction method and device.
Background
With the development of computer technology, three-dimensional reconstruction is applied in more and more settings, including display projects in real estate, tourism, exhibitions, and the like. Three-dimensional reconstruction of different objects can give users an immersive roaming experience, and according to what is reconstructed it can be divided into object reconstruction, scene reconstruction, human-body reconstruction, and so on. In the prior art, three-dimensional reconstruction of indoor and outdoor scenes has steadily matured, giving rise to applications and projects such as VR house viewing and virtual shopping. Such projects are mainly deployed on the Web, where the three-dimensional scene is chiefly rendered with WebGL; the data required for rendering includes a textured model of the specified scene and 360-degree panoramas of a number of browsing points. However, although this way of constructing a scene can satisfy users' browsing needs, its realism is low and it cannot interact with users effectively, so an effective solution to these problems is urgently needed.
Disclosure of Invention
In view of this, embodiments of the present specification provide a virtual scene construction method. One or more embodiments of the present specification also relate to a virtual scene construction apparatus, a computing device, a computer-readable storage medium, and a computer program, so as to remedy the technical deficiencies in the prior art.
According to a first aspect of embodiments of the present specification, there is provided a first virtual scene construction method, including:
acquiring images at at least one acquisition point in a target scene through an image acquisition device, and acquiring a video at a target acquisition point among the at least one acquisition point through a video acquisition device;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene, wherein the editing platform is used for editing the virtual scene browsed through the terminal.
According to a second aspect of embodiments of the present specification, there is provided a second virtual scene construction method, including:
receiving a virtual scene construction instruction submitted by a user aiming at a target scene;
acquiring images at at least one acquisition point in the target scene through an image acquisition device, and acquiring a video at a target acquisition point among the at least one acquisition point through a video acquisition device;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene;
creating a scene access link for the target virtual scene in response to the virtual scene construction instruction.
According to a third aspect of the embodiments of the present specification, there is provided a third virtual scene construction method, including:
displaying a virtual scene construction interface based on a starting instruction submitted by a user aiming at a scene construction application, and receiving a virtual scene construction instruction submitted by the user through the virtual scene construction interface;
acquiring images at at least one acquisition point in a target scene through an image acquisition device, and acquiring a video at a target acquisition point among the at least one acquisition point through a video acquisition device;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene and display the target virtual scene to the user.
According to a fourth aspect of the embodiments of the present specification, there is provided a virtual scene constructing apparatus including:
an acquisition module configured to perform image acquisition at at least one acquisition point in a target scene by an image acquisition device and perform video acquisition at a target acquisition point among the at least one acquisition point by a video acquisition device;
the generating module is configured to generate a panoramic image corresponding to each acquisition point according to an image acquisition result and generate a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and the processing module is configured to process the panoramic image and the panoramic video through an editing platform and generate a target virtual scene corresponding to the target scene, wherein the editing platform is used for editing the virtual scene browsed through a terminal.
According to a fifth aspect of embodiments of the present specification, there is provided a second virtual scene constructing apparatus, including:
the receiving instruction module is configured to receive a virtual scene construction instruction submitted by a user aiming at a target scene;
a capture video module configured to capture images at at least one capture point in the target scene by an image capture device and to capture a video at a target capture point among the at least one capture point by a video capture device;
the video generation module is configured to generate a panoramic image corresponding to each acquisition point according to an image acquisition result and generate a panoramic video corresponding to the target acquisition point according to a video acquisition result;
the scene generation module is configured to process the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene;
a create link module configured to create a scene access link for the target virtual scene in response to the virtual scene build instruction.
According to a sixth aspect of embodiments of the present specification, there is provided a third virtual scene constructing apparatus including:
the display interface module is configured to display a virtual scene construction interface based on a starting instruction submitted by a user aiming at a scene construction application, and receive a virtual scene construction instruction submitted by the user through the virtual scene construction interface;
a capture video module configured to capture images at at least one capture point in a target scene by an image capture device and to capture a video at a target capture point among the at least one capture point by a video capture device;
the video generation module is configured to generate a panoramic image corresponding to each acquisition point according to an image acquisition result and generate a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and the scene display module is configured to process the panoramic image and the panoramic video through an editing platform, generate a target virtual scene corresponding to the target scene and display the target virtual scene to the user.
According to a seventh aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to implement the steps of the virtual scene construction method described above when executing the computer-executable instructions.
According to an eighth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the above virtual scene construction method.
According to a ninth aspect of embodiments herein, there is provided a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the above virtual scene construction method.
In order to improve users' browsing experience, the virtual scene construction method provided in the present specification captures images at at least one acquisition point in a target scene with an image acquisition device and captures video at the target acquisition point with a video acquisition device, obtains a panoramic image and a panoramic video from the acquisition results, and then fuses the panoramic video into the panoramic image on an editing platform to obtain the target virtual scene corresponding to the target scene from the fusion result. Blending the panoramic video into the scene imagery gives users an immersive experience, makes the target virtual scene more realistic, and improves the user experience.
Drawings
Fig. 1 is a schematic diagram of a virtual scene construction method provided in an embodiment of the present specification;
fig. 2 is a flowchart of a first virtual scene construction method provided in an embodiment of the present specification;
fig. 3 is a schematic diagram of acquisition points in a first virtual scene construction method provided in an embodiment of the present specification;
fig. 4 is a block diagram of a first virtual scene construction method according to an embodiment of the present specification;
fig. 5 is a schematic diagram of an editing platform in a first virtual scene construction method provided in an embodiment of the present specification;
fig. 6 is a flowchart of a second virtual scene construction method provided in an embodiment of the present specification;
fig. 7 is a block diagram of a second virtual scene construction method according to an embodiment of the present specification;
fig. 8 is a flowchart of a third virtual scene construction method provided in an embodiment of the present specification;
fig. 9 is a flowchart of a third virtual scene construction method provided in an embodiment of the present specification;
fig. 10 is a flowchart of a fourth virtual scene construction method provided in an embodiment of the present specification;
fig. 11 is a flowchart of a fifth virtual scene construction method according to an embodiment of the present specification;
fig. 12 is a schematic structural diagram of a first virtual scene constructing apparatus according to an embodiment of the present specification;
fig. 13 is a schematic structural diagram of a second virtual scene constructing apparatus according to an embodiment of the present specification;
fig. 14 is a schematic structural diagram of a third virtual scene constructing apparatus according to an embodiment of the present specification;
fig. 15 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present specification. The present specification, however, can be embodied in many forms different from those described here, and should not be construed as limited to the embodiments set forth herein; those skilled in the art can make similar extensions without departing from its spirit and scope.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, the information should not be limited by these terms; the terms merely distinguish one kind of information from another. For example, without departing from the scope of one or more embodiments of the present specification, a first may also be referred to as a second and, similarly, a second may be referred to as a first. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the terms used in one or more embodiments of the present specification are explained.
Three-dimensional reconstruction: establishing, for three-dimensional objects, mathematical models suited to computer representation and processing. It is the basis for processing and operating on three-dimensional objects and analyzing their properties in a computer environment, and a key technology for building, inside a computer, virtual reality that expresses the objective world.
Panoramic camera: an acquisition device that captures 360° × 180° panoramic images of the surrounding scene. It generally captures images at several angles simultaneously and stitches them into a panorama on the device itself; panoramic cameras can usually also shoot 360° × 180° panoramic video.
VR (Virtual Reality) roaming: virtual roaming is an important branch of virtual reality technology, with many applications in industries such as architecture, tourism, games, aerospace, and medicine, and with the 3D characteristics of immersion, interactivity, and imagination. VR house viewing is currently a common application in the industry: by collecting a model of the scene and panoramas of several points, and pairing them with a suitable renderer and interaction means, the effect of roaming through the space is achieved.
In the present specification, a virtual scene construction method is provided, and the present specification relates to a virtual scene construction apparatus, a computing device, a computer-readable storage medium, and a computer program, which are described in detail one by one in the following embodiments.
In practical applications, the three-dimensional reconstruction of the objects involved in a project scene is shaped by the acquisition equipment, and different equipment is combined with different schemes to complete the reconstruction. One scheme is the depth camera scheme. The device carries three depth cameras facing level, downward, and upward to collect depth maps and color maps respectively, and a rotating motor at its base lets it rotate horizontally to photograph the scene. At each acquisition point the device rotates through 6 angles at 60-degree intervals, finally yielding 18 color images and 18 depth images. This operation is repeated at several (n) acquisition points in the same indoor scene. All the collected data must then be processed by an offline three-dimensional reconstruction algorithm to obtain 1 scene model and n panoramas. On the one hand, the 18 color images of each acquisition point are turned into 1 panorama through steps such as re-projection and edge fusion; on the other hand, after registration and splicing, mesh reconstruction, and texture-map reconstruction of the 18 × n color maps and 18 × n depth maps, 1 textured model is obtained. The depth camera scheme uses depth cameras to capture images and depth information of the surroundings, generally measuring the actual distance of the object behind each pixel by structured light, time of flight, binocular stereo, or similar principles. When reconstructing with this scheme, however, the depth range is short and the depth error is large, so the reconstructed three-dimensional scene deviates considerably from the real scene.
The second scheme is the lidar scheme. It measures the depth of the surrounding environment with a lidar, which generally obtains the actual distance of the object hit by a single laser point by triangulation, time of flight, or similar principles; products cover a variety of measuring ranges and the precision is generally high. In the prior art, a triangulation 2D lidar can range beyond 30 meters with a relative error of roughly ±1%-2% of the measured distance. A single acquisition point therefore covers a wider area, and the number of acquisition points can be reduced for open scenes. But the scheme is more demanding on equipment and requires extra hardware cost.
The third scheme completes the reconstruction with lightweight equipment; the lightweight acquisition schemes include a phone-plus-gimbal scheme and a phone-plus-panoramic-camera scheme. In the acquisition process, the phone generally connects to the panoramic camera or the gimbal via Wi-Fi or Bluetooth, and an app on the phone then controls the equipment to acquire at each acquisition point. In the gimbal scheme, the gimbal rotates through 18 angles while the phone shoots 18 color photos, which are then stitched into one panoramic color image in the same way as in the first scheme; the panoramic camera scheme directly captures a panoramic color image. After the panoramic color image is obtained, a corresponding panoramic depth map is produced with panoramic depth estimation. Finally, after registration and splicing, mesh reconstruction, and texture-map reconstruction of the n collected panoramic color images and the corresponding n panoramic depth maps, 1 textured model is obtained. This scheme has a low acquisition cost, but the limited, non-professional hardware of lightweight equipment yields a less accurate model.
In view of the above, referring to the schematic diagram shown in fig. 1, in order to improve users' browsing experience, the virtual scene construction method provided in the present specification captures images at at least one acquisition point in the target scene with an image acquisition device and captures video at the target acquisition point with a video acquisition device, obtains a panoramic image and a panoramic video from the acquisition results, and then fuses them on the editing platform to obtain the target virtual scene corresponding to the target scene from the fusion result. Blending the panoramic video into the scene imagery gives users an immersive experience, makes the target virtual scene more realistic, and improves the user experience.
Fig. 2 is a flowchart illustrating a first virtual scene construction method provided in an embodiment of the present specification, which specifically includes the following steps.
Step S202: the method comprises the steps of carrying out image acquisition at least one acquisition point in a target scene through an image acquisition device, and carrying out video acquisition at a target acquisition point in the at least one acquisition point through a video acquisition device.
Specifically, the image acquisition device is a device capable of capturing scene images of the target scene; it may be a professional VR acquisition device dedicated to virtual scene reconstruction or a consumer-grade panoramic camera, and this embodiment places no limitation here. Correspondingly, the target scene is the scene requiring three-dimensional reconstruction: a room to be reconstructed in a real estate project, an ancient building in a tourism project, an article to be displayed in an exhibition project, and so on. The video acquisition device is a device capable of recording video of the target scene, such as a consumer-grade panoramic camera or a mobile terminal, again without limitation here. The acquisition points are acquisition positions set in the target scene according to the acquisition requirements; capturing images at these points makes it possible to later splice the panoramas corresponding to multiple acquisition points into the three-dimensionally reconstructed virtual scene of the target scene. The target acquisition point is the acquisition point, among the at least one acquisition point, at which video must also be captured.
In this embodiment, the virtual scene construction method is described taking as an example a target scene that is a room to be reconstructed in a real estate project; that is, with the method provided in this embodiment, a three-dimensional model of the room can be reconstructed so that users can browse the room online. Schemes requiring three-dimensional reconstruction in other project settings can refer to the same or corresponding descriptions in this embodiment, which are not detailed again here.
Furthermore, after the image acquisition device has captured images of the target scene at each acquisition point, a target acquisition point must be determined among the at least one acquisition point so that video can be captured there. In this process, to ensure that the captured video matches the captured images more closely, the video must be captured with the same acquisition parameters. In this embodiment, the specific implementation is as follows:
receiving an acquisition point selection instruction submitted by a user; determining the target acquisition point among the at least one acquisition point according to the acquisition point selection instruction; and carrying out video acquisition at the target acquisition point through the video acquisition device according to the acquisition parameters corresponding to the target acquisition point.
Specifically, the acquisition point selection instruction is an instruction uploaded by the user through a terminal that designates, among the at least one acquisition point, the target acquisition point at which video is required. Correspondingly, the acquisition parameters are the parameters to be followed when capturing the video, including but not limited to the height, pitch angle, and focal length of the video acquisition device.
Based on this, after the image acquisition device has captured images of the target scene at each acquisition point, and so that video can be bound later, the target acquisition point is selected, once the acquisition point selection instruction is received, from the acquisition points shown in the top view mapped on the terminal, and the acquisition parameters corresponding to the selected target acquisition point are fed back to the user. The user can then conveniently control the video acquisition device to capture video at the target acquisition point according to those parameters, ready for subsequent scene reconstruction.
For example, when the room shown in fig. 3 requires three-dimensional reconstruction, 9 acquisition points are first determined in the room: acquisition point 1 = P1, acquisition point 2 = P2, …, acquisition point 9 = P9. Images are then captured at each acquisition point with professional VR equipment at a rotation interval of 60 degrees, yielding 6N images per acquisition point, where N is the number of cameras on the professional VR equipment. Meanwhile, according to the user's acquisition point selection instruction, acquisition points 3 and 9 are selected as video acquisition points among the 9 acquisition points contained in the mapped floor plan, i.e. the acquisition points at which panoramic video must be shot are Pk3 and Pk9. Video is then captured at acquisition points 3 and 9 with a consumer-grade panoramic camera at the same height and focal length the professional VR equipment used for the images, so that video and images can conveniently be integrated later into the three-dimensional model of the room.
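To make the flow concrete, the sketch below models acquisition points and the reuse of capture parameters for video. It is a minimal illustration under assumptions: the names (CapturePoint, select_video_points) and the parameter fields are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CapturePoint:
    point_id: int
    position: tuple           # (x, y) in the floor-plan top view
    height_m: float = 1.5     # camera height used for image capture
    pitch_deg: float = 0.0    # pitch angle used for image capture
    focal_mm: float = 3.0     # focal length used for image capture
    is_video_point: bool = False

def select_video_points(points, selected_ids):
    """Mark the points named in the user's selection instruction as video
    acquisition points; video capture then reuses each point's height,
    pitch, and focal length so the footage matches the panorama."""
    for p in points:
        p.is_video_point = p.point_id in selected_ids
    return [p for p in points if p.is_video_point]

# Nine points P1..P9; the user picks points 3 and 9 for panoramic video.
points = [CapturePoint(i, ((i - 1) % 3, (i - 1) // 3)) for i in range(1, 10)]
video_points = select_video_points(points, {3, 9})
```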
In summary, designating the target acquisition points for video capture among the acquisition points and capturing video with the same acquisition parameters ensures that the captured video and images have closely matching attributes, which saves adjustment operations when the virtual scene is constructed later and improves the efficiency of the three-dimensional reconstruction.
Step S204: generating a panoramic image corresponding to each acquisition point according to the image acquisition results, and generating a panoramic video corresponding to the target acquisition point according to the video acquisition results.
Specifically, after the image acquisition is performed at each acquisition point and the video acquisition is performed at the target acquisition point, further, a panoramic image corresponding to each acquisition point can be generated according to the image acquisition result, and a panoramic video corresponding to the target acquisition point can be generated according to the video acquisition result.
Further, generating the panoramic image of each acquisition point from the image acquisition results in fact means splicing the multiple images of the same acquisition point and obtaining that point's panorama from the splicing result. For example, if 6 images were captured at one acquisition point by the image acquisition device, the 6 images can be spliced end to end into the panorama. The panoramic video is produced the same way: the multiple videos corresponding to the target acquisition point are spliced frame by frame into panoramic video frames, and the frames are finally assembled into the panoramic video corresponding to the target acquisition point.
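As a minimal sketch of both splicing paths, the snippet below uses OpenCV's high-level stitcher; the patent does not name a library, so cv2.Stitcher is an assumption, and the video path simply applies the same stitcher frame by frame (in practice each frame's panorama must be warped to a fixed output size, which the sketch approximates with a resize).

```python
import cv2

def stitch_panorama(images):
    """Splice the overlapping color images of one acquisition point
    into a single panoramic image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed, status={status}")
    return pano

def stitch_panoramic_video(video_paths, out_path, fps=30.0):
    """Splice the synchronized videos of the target acquisition point
    frame by frame and assemble the panoramic video."""
    readers = [cv2.VideoCapture(p) for p in video_paths]
    writer, size = None, None
    while True:
        grabs = [r.read() for r in readers]
        if not all(ok for ok, _ in grabs):
            break  # one stream ended
        pano = stitch_panorama([frame for _, frame in grabs])
        if writer is None:
            size = (pano.shape[1], pano.shape[0])
            writer = cv2.VideoWriter(out_path,
                                     cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        writer.write(cv2.resize(pano, size))  # keep a fixed frame size
    for r in readers:
        r.release()
    if writer is not None:
        writer.release()
```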
Further, considering that a target virtual scene must be constructed after the image acquisition device finishes capturing, the image set corresponding to each acquisition point contains color images and depth images for building the textured model. In this embodiment, the specific implementation is as follows:
generating an image set corresponding to each acquisition point according to an image acquisition result, and extracting a plurality of color images corresponding to each acquisition point in the image set; and splicing the plurality of color images to obtain a panoramic image corresponding to each acquisition point.
Specifically, the image set is the set of depth images and color images produced when the image acquisition device captures at one acquisition point. Correspondingly, the color images are the RGB images of the acquisition point; the depth images represent the depth of objects at the acquisition point and assist in building the textured model of the virtual scene.
Based on this, an image set is obtained for each acquisition point from the image acquisition results. Since the panorama is the image that assists the user in browsing the virtual scene, the color images corresponding to each acquisition point are extracted from its image set and spliced into that point's panorama; once the color images of every acquisition point have been spliced, the panoramas of all acquisition points are available for constructing the virtual scene.
Furthermore, when a user browses the target scene online, a position is selected in the three-dimensional reconstruction model, and the panorama and panoramic video corresponding to that position are then loaded to assist the user in browsing; a three-dimensional model must therefore be constructed to support browsing the target scene. In this embodiment, the specific implementation is as follows:
extracting a plurality of depth images corresponding to each acquisition point from the image set; constructing, according to the plurality of depth images and the plurality of color images corresponding to each acquisition point, a scene texture model corresponding to the target scene and uploading it to the editing platform; and carrying out visualization processing on the scene texture model through the editing platform to generate the virtual scene texture model corresponding to the target virtual scene.
Specifically, the virtual scene texture model is a textured model constructed from the depth images and color images; it can be understood as the target scene scaled down at a set ratio, for browsing through the terminal.
Based on this, to support more realistic browsing, the depth images corresponding to each acquisition point are extracted from the image set, a scene texture model of the target scene is constructed from the depth images and color images of the acquisition points, and the model is uploaded to the editing platform. In response to the user's editing instructions the scene texture model can be visualized, and the virtual scene texture model corresponding to the target virtual scene is obtained from the processing result for the user to browse.
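The patent leaves registration, mesh reconstruction, and texture mapping to the cloud reconstruction engine; as one possible realization (an assumption, not the patented pipeline), the sketch below fuses per-view depth and color images into a textured mesh with Open3D's TSDF integration, assuming camera intrinsics and per-view poses are already known from the registration step.

```python
import numpy as np
import open3d as o3d

def build_scene_mesh(rgbd_views, intrinsic):
    """rgbd_views: iterable of (color, depth, pose) per acquisition view,
    where color/depth are o3d.geometry.Image and pose is the 4x4
    camera-to-world matrix from registration."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.02,   # 2 cm voxels, an assumed indoor scale
        sdf_trunc=0.08,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for color, depth, pose in rgbd_views:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, convert_rgb_to_intensity=False)
        # integrate() expects the world-to-camera extrinsic
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
    mesh = volume.extract_triangle_mesh()
    mesh.compute_vertex_normals()
    return mesh
```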
In practical applications, after the virtual scene texture model is constructed, the panoramas and panoramic videos must be bound to it, and the binding is based on the acquisition points. That is to say, browse positions corresponding to the acquisition points are mapped into the virtual scene texture model, each browse position is associated with the panorama and/or panoramic video of its acquisition point, and when the user browses the model through a terminal, the panorama and/or panoramic video associated with the current position is shown at each browse position, simulating the user walking through the target scene and improving the browsing experience.
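One possible shape for that per-point binding record is sketched below; the field names and URLs are illustrative only, not the platform's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrowsePosition:
    point_id: int
    position: tuple                            # browse position mapped into the model
    panorama_url: str                          # panorama bound to this point
    panorama_video_url: Optional[str] = None   # present only for target points

# Acquisition points 3 and 9 additionally carry panoramic video.
bindings = {
    i: BrowsePosition(i, (float(i), 0.0), f"pano/p{i}.jpg",
                      f"video/p{i}.mp4" if i in (3, 9) else None)
    for i in range(1, 10)
}
```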
Following the example above, after the images are captured with professional VR equipment and the videos with a consumer-grade panoramic camera, the image sets corresponding to the acquisition points and the panoramic videos corresponding to the target acquisition points Pk3 and Pk9 are obtained from the acquisition results; the 6N color images of each acquisition point are then spliced into that point's panorama, while the depth images of the acquisition points are used to build the texture model.
Further, after obtaining the panoramas of acquisition points 1-9 in the room, the panoramic videos of acquisition points 3 and 9, and the texture model, these are uploaded to the editing platform together with the relationships between the acquisition points and the panoramas and panoramic videos. The editing platform edits them and outputs a three-dimensional model of the room that meets users' browsing needs, which is finally rendered in a Web-side renderer to provide roaming interaction for users.
That is to say, once rendering on the Web side is complete, a user who wants to browse can open the page displaying the three-dimensional model through a terminal; according to the viewpoint position in the page, the panorama corresponding to the current position is selected among the 9 panoramas and shown to the user, and where a panoramic video exists at that position, it can be played under the user's control or automatically, for convenient browsing.
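At browse time the renderer then only has to pick the binding nearest the current viewpoint and play its video if one exists. The lookup below is a Python mirror of that selection logic, reusing the hypothetical bindings mapping from the earlier sketch (the real project renders on the Web side with WebGL; this is illustration only):

```python
import math

def binding_for_viewpoint(view_pos, bindings):
    """Return the browse position closest to the current viewing position."""
    return min(bindings.values(),
               key=lambda b: math.dist(view_pos, b.position))

current = binding_for_viewpoint((2.8, 0.4), bindings)
# Show the panorama; play the panoramic video where this point has one.
if current.panorama_video_url is not None:
    print("play", current.panorama_video_url)
```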
It should be noted that the panoramic videos recorded in different settings have different content: in a housing project, the recorded panoramic video may teach the use of the room's appliances or show its lighting; in a tourism project it may be a simulated animation; in an exhibition project it may be a display video of an exhibit. In any project setting, users can thus conveniently view the displayed content from multiple angles, improving the viewing experience.
In conclusion, uploading the virtual scene texture model to the editing platform together with the panoramas and videos assists users in browsing the three-dimensionally reconstructed virtual scene, makes the scene layout easy to grasp, and improves the browsing experience.
Step S206, processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene, wherein the editing platform is used for editing the virtual scene browsed through the terminal.
Specifically, after the panoramic image and panoramic video have been generated, it must further be considered that they may have been disturbed by extraneous interference during acquisition, leaving them insufficiently realistic; at the same time, to give users a more realistic browsing experience, the panoramic image and panoramic video can be uploaded to the editing platform, where the user edits the virtual scene to obtain a target virtual scene that meets subsequent browsing needs and has a mapping relationship with the target scene. The editing platform is used for editing the virtual scene browsed through the terminal; the target virtual scene is the three-dimensional model that fuses the panoramic video and panoramic image and corresponds to the target scene.
Based on this, processing the panorama and panoramic video through the editing platform means fusing the two, in order to construct a more realistic target virtual scene.
Referring to the block diagram shown in fig. 4: after the images corresponding to the target scene are captured by the professional VR acquisition device, the panorama of each acquisition point and the texture model are obtained by the cloud three-dimensional reconstruction engine; meanwhile, the panoramic video corresponding to the target acquisition point is obtained by the consumer-grade panoramic camera and sent to the editing platform. Through the editing platform, operated on the Web side, the video binding points are determined and the viewing angles of the panoramic video and panorama are adjusted, yielding the panorama, panoramic video, and texture model that the Web-side renderer can render; their integration produces the target virtual scene for the user to browse.
Further, in the process of generating the target virtual scene, since users have different requirements for browsing the virtual scene in different settings, the scene editing must be combined with the characteristics of the scene. In this embodiment, the specific implementation is as follows:
receiving a scene editing instruction submitted by a user through the editing platform; responding to the scene editing instruction, configuring scene elements for the panoramic image, and adjusting the panoramic video to obtain a target panoramic image and a target panoramic video; and fusing the target panoramic image and the target panoramic video, and generating the target virtual scene corresponding to the target scene according to a fusion result.
Specifically, the scene editing instruction triggers the editing of the panoramic image and panoramic video, so that scene elements can be configured in the panorama and its attributes adjusted. Correspondingly, the target panorama is the adjusted panorama, and the target panoramic video is the adjusted panoramic video.
Based on this, after the panorama and panoramic video are uploaded to the editing platform, the user can access the platform from the Web side to edit them. When a scene editing instruction submitted by the user is received, a default entry strategy can be configured in the texture model (the virtual scene texture model) in response; when a user accesses the model, entry follows the default entry strategy, and the panorama corresponding to the entry is shown to the user first. Scene elements can also be added to each panorama to assist the user in browsing the objects in the target virtual scene; the added elements may be product identifiers, introductions, popular-science information, and the like, set according to the actual application, without limitation here. The panoramic video is likewise adjusted in response to the instruction to produce a target panoramic video that meets the playback requirements. When the editing of the panorama and panoramic video is finished, the two can be fused to obtain the target virtual scene corresponding to the target scene, for the user's convenient browsing.
Further, when editing the panoramic image and the panoramic video, in order to improve the browsing experience of the user, the reality of the target virtual scene may be improved through a finer-grained operation, in this embodiment, the specific implementation manner is as follows:
determining an entrance panorama corresponding to an entrance acquisition point in the at least one acquisition point and a scene panorama corresponding to a scene acquisition point; configuring entry information for the entry panorama in response to the scene editing instructions and adding object identification information in the scene panorama; generating the target panoramic image according to the entry information configuration result and the object identification information adding result; and responding to the scene editing instruction to perform visual angle adjustment, playing adjustment and/or video attribute adjustment on the panoramic video, and obtaining the target panoramic video.
Specifically, the entry acquisition point is the default entry position when the user enters the virtual scene; correspondingly, the entry panorama is the panorama corresponding to the entry acquisition point, and the scene panoramas are the panoramas corresponding to the remaining acquisition points among the at least one acquisition point. The entry information sets the display priority of the entry panorama: if the user browses the virtual scene for the first time, the entry panorama is shown first. The object identification information is identification added for a target object in a panorama to describe it, such as a product label, price, or introduction. View-angle adjustment means adjusting the viewing angle at which the panoramic video plays, so that when the panoramic video plays at the target acquisition point its position is not misaligned with the panorama being displayed there. Playback adjustment means adjusting the relevant playback parameters, such as resolution; video-attribute adjustment means adjusting the attribute parameters of the panoramic video, such as its bitrate.
Based on this, referring to the schematic diagram shown in fig. 5, when editing the panorama and panoramic video the user can open different editing pages, such as an entry editing interface, an AI shopping-guide editing interface, a floor-plan editing interface, and a panorama editing interface, by selecting different controls, and complete the editing of the virtual scene in each. Hotspot tags, product tags, navigation tags, and the like can also be added to the panorama to assist the user in browsing.
That is to say, when the user configures the entry for the texture model and panoramas, an entry acquisition point can be determined among the at least one acquisition point together with its entry panorama, and the remaining acquisition points serve as scene acquisition points with their scene panoramas; entry information is then configured for the entry panorama in response to the user's instruction, and object identification information is added to the scene panoramas, accomplishing entry configuration and object marking. Based on the same instruction, the video can also be given view-angle, playback, and/or attribute adjustments, such as trimming, adding subtitles, changing resolution, or changing bitrate. The target panorama and target panoramic video obtained from these adjustments make the target virtual scene generated on this basis fit the real scene more closely and better meet users' browsing needs.
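The edits above reduce to a small amount of per-panorama and per-video configuration. The structure below is a hypothetical serialization of such an edit, not the editing platform's actual format:

```python
# Hypothetical scene-edit configuration: entry strategy, panorama tags,
# and the view-angle / playback / attribute adjustments for the video.
scene_edit = {
    "entry": {"point_id": 1, "policy": "default_entry"},
    "panoramas": {
        3: {"tags": [{"type": "product", "label": "sofa",
                      "yaw_deg": 40, "pitch_deg": -5}]},
        9: {"tags": [{"type": "navigation", "label": "to kitchen",
                      "target_point": 5}]},
    },
    "videos": {
        3: {"view_yaw_deg": 180,        # view-angle adjustment
            "resolution": "3840x1920",  # playback adjustment
            "bitrate_kbps": 8000},      # video-attribute adjustment
    },
}
```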
In addition, since the panoramic video added in some project settings may be an explanation video, and to ensure the explanation video faces the user, the view direction of the panoramic video can be adjusted through the editing platform; after jumping to the target acquisition point, the renderer automatically turns the view to the preset direction, so that the instructor in the panoramic video faces the user.
In summary, editing the panorama and the panoramic video separately allows the required functions to be configured for the virtual scene individually, so that users browsing the target virtual scene can use the different functions and experience a more realistic virtual scene.
On this basis, since image acquisition and video acquisition may be performed by different devices, it is difficult to guarantee that the two devices face the same direction even when shooting from the same position. To ensure that the orientations are consistent, the user must additionally be supported in adjusting the rendering orientation of the panoramic video so that the panorama and panoramic video coincide. In this embodiment, the specific implementation is as follows:
and under the condition that the image acquisition equipment is detected to be different from the video acquisition equipment, aligning the target panoramic image and the target panoramic video, fusing the aligned target panoramic image and the target panoramic video, and generating the target virtual scene corresponding to the target scene according to the fusion result.
Specifically, alignment is the operation of making the shared display content of the target panoramic video and target panorama coincide: by adjusting the rotation angle of the video or the image, their overlap is brought up to a threshold that satisfies the playback requirements, and the target virtual scene is then generated.
Wherein, the aligning process of the target panoramic image and the target panoramic video comprises the following steps:
responding to an alignment instruction submitted by the user through the editing platform, and aligning the target panoramic image and the target panoramic video; or determining an associated panoramic image and an associated panoramic video of the target acquisition point, extracting an associated panoramic video frame matched with the associated panoramic image from the associated panoramic video, determining a panoramic image characteristic corresponding to the associated panoramic image and a panoramic video frame characteristic corresponding to the associated panoramic video frame, calculating an alignment parameter according to the panoramic image characteristic and the panoramic video frame characteristic, and aligning the target panoramic image and the target panoramic video according to the alignment parameter.
Specifically, the alignment instruction is the instruction submitted when the user manually adjusts the target panoramic video and target panorama. Correspondingly, the associated panorama and associated panoramic video are the panorama and panoramic video corresponding to the target acquisition point; the associated panoramic video frame is the frame in the associated panoramic video's frame sequence whose display content matches the associated panorama. The panorama features are features that represent the positional information of the panorama, and the panoramic video frame features likewise represent the positional information of the associated panoramic video frame; the alignment parameters are the parameters by which the position of the panoramic video or panorama is adjusted.
Based on this, on one hand, when the user needs to manually adjust the alignment relationship, the target panoramic image and the target panoramic video may be aligned in response to the alignment instruction submitted by the user through the editing platform.
On the other hand, since manual adjustment may introduce errors, an alignment processing rule can also be triggered automatically: first, the associated panorama and associated panoramic video of the target acquisition point are determined; the associated panoramic video is then split into frames to obtain the associated panoramic video frame sequence, from which the associated panoramic video frame matching the associated panorama is extracted. At this point the positional relationship between the associated panorama and the associated panoramic video frame can be determined by comparing them; to ensure that the adjusted positions coincide more closely, the panorama features of the associated panorama and the frame features of the associated panoramic video frame are determined, the alignment parameters are computed by combining the two, giving the angle by which the panoramic video and panorama must be adjusted, and finally the target panorama and target panoramic video are aligned according to the alignment parameters.
That is to say, the alignment can be implemented by solving a homography transformation matrix (Homography): first, feature points such as SIFT/SURF/FAST/ORB are extracted from the associated panorama and the associated panoramic video frame; second, a descriptor (vector representation) is constructed for each feature point; then, by matching the feature point descriptors, the matching feature point pairs between the associated panorama and the associated panoramic video frame are found, and the mismatched pairs are rejected with the RANSAC algorithm; finally, solving yields the homography transformation matrix (the alignment parameters), according to which the panorama and panoramic video are aligned.
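A hedged OpenCV sketch of that pipeline follows: SIFT features, ratio-test descriptor matching, and RANSAC-filtered homography estimation. The thresholds are common defaults, not values from the patent.

```python
import cv2
import numpy as np

def estimate_alignment_homography(video_frame, pano):
    """Estimate H mapping points in the associated panoramic video frame
    to points in the associated panorama."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(
        cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(
        cv2.cvtColor(pano, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched pairs while solving for the homography
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```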
Concretely, for any point (x_l, y_l) in the panoramic video frame, the coordinates of the point with the same scene content in the panorama are (x_r, y_r), and the relationship between the two is expressed through the homography transformation matrix H as formula (1):

$$\begin{pmatrix} x_r \\ y_r \\ 1 \end{pmatrix} \sim H \begin{pmatrix} x_l \\ y_l \\ 1 \end{pmatrix} \qquad (1)$$
When the shooting positions of the panoramic video and the panorama coincide, formula (2) is satisfied:

$$H = R + \frac{T N^{\mathsf{T}}}{d} \qquad (2)$$
where T represents the displacement (translation) from the panoramic video frame to the panorama, N represents the normal vector of the image plane in the panorama, and d represents the distance from the origin of the panorama's coordinate system to that image plane. When T = 0, the scene depth d drops out, i.e. H = R, and the homography transformation matrix degenerates to the rotation matrix R (rotation). R is exactly the relative rotation used to align the panorama and the panoramic video: on this basis, rotating the panorama or the panoramic video according to the rotation matrix R brings the two into alignment for the user's convenient browsing.
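Since the coincident-position case reduces alignment to a pure rotation, applying R amounts to resampling one equirectangular image. Below is a sketch of that resampling, assuming equirectangular panoramas and a column-vector convention for R (if the estimated rotation goes the other way, substitute R.T):

```python
import cv2
import numpy as np

def rotate_equirectangular(pano, R):
    """Resample an equirectangular panorama rotated by the 3x3 matrix R."""
    h, w = pano.shape[:2]
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi    # longitude per column
    lat = (0.5 - np.arange(h) / h) * np.pi          # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    # Viewing direction of every output pixel on the unit sphere
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    src = dirs @ R                                  # rotate into the source frame
    src_lon = np.arctan2(src[..., 0], src[..., 2])
    src_lat = np.arcsin(np.clip(src[..., 1], -1.0, 1.0))
    map_x = (((src_lon / (2.0 * np.pi) + 0.5) % 1.0) * w).astype(np.float32)
    map_y = ((0.5 - src_lat / np.pi) * h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```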
In conclusion, aligning the panorama and panoramic video by computing the alignment parameters, and fusing them on that basis, yields a target virtual scene with better realism and display quality, giving users a better browsing experience.
In order to improve the user's browsing experience, the virtual scene construction method provided in this specification may perform image acquisition at at least one acquisition point in the target scene through an image acquisition device and video acquisition at the target acquisition point through a video acquisition device, obtain a panoramic image and a panoramic video from the acquisition results, and then fuse the panoramic video with the panoramic image on the editing platform to obtain the target virtual scene corresponding to the target scene from the fusion result. Blending the panoramic video into the target scene image in this way brings the user an immersive experience, makes the target virtual scene more realistic, and improves the user experience.
Fig. 6 is a flowchart illustrating a second virtual scene construction method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step S602: performing image acquisition at at least one acquisition point in a target scene through an image acquisition device, and performing video acquisition at a target acquisition point among the at least one acquisition point through a video acquisition device.
Step S604, generating a panoramic image corresponding to each acquisition point according to the image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to the video acquisition result.
Step S606, receiving a scene editing instruction submitted by a user through an editing platform.
Step S608, determining an entry panorama corresponding to an entry acquisition point in the at least one acquisition point, and a scene panorama corresponding to a scene acquisition point.
Step S610, configuring entry information for the entry panorama in response to the scene editing instruction, and adding object identification information in the scene panorama.
Step S612, generating a target panoramic image according to the entry information configuration result and the object identification information adding result.
Step S614, responding to the scene editing instruction, and performing visual angle adjustment, playing adjustment and/or video attribute adjustment on the panoramic video to obtain the target panoramic video.
Step S616, fusing the target panoramic image and the target panoramic video under the condition that the image acquisition device is the same as the video acquisition device, and generating a target virtual scene corresponding to the target scene according to the fusion result.
The second virtual scene construction method provided in this embodiment and the same or corresponding descriptions in the above embodiments may be referred to each other, and this embodiment is not described in detail herein.
Referring to the structural block diagram shown in fig. 7, when the image capturing device is the same as the video capturing device, the device captures the panoramic video again after the panoramic image has been captured, and its position is not adjusted in the process, so no alignment operation is required and the target virtual scene corresponding to the target scene is generated directly by fusion. That is to say, after a consumer-level panoramic camera finishes collecting the panoramic image and the panoramic video, the cloud three-dimensional reconstruction engine can generate the virtual scene corresponding to the target scene, which is finally rendered in the Web-end renderer to enable roaming interaction for the user. In other words, the Web-end renderer fuses the panoramic image, the panoramic video and the texture model to generate the target virtual scene.
In summary, image acquisition is performed at at least one acquisition point in the target scene through the image acquisition device, and video acquisition is performed at the target acquisition point through the video acquisition device, so that a panoramic image and a panoramic video are obtained from the acquisition results; the panoramic image and the panoramic video are then fused on the editing platform, and the target virtual scene corresponding to the target scene is obtained from the fusion result. Blending the panoramic video into the target scene image brings the user an immersive experience, makes the target virtual scene more realistic, and improves the user experience. In this process, because the image acquisition device is the same as the video acquisition device, the target virtual scene can be constructed without additional processing operations, which effectively improves scene construction efficiency.
Fig. 8 is a flowchart illustrating a third virtual scene construction method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step S802: the method comprises the steps of carrying out image acquisition at least one acquisition point in a target scene through an image acquisition device, and carrying out video acquisition at a target acquisition point in the at least one acquisition point through a video acquisition device.
Step S804, generating a panoramic image corresponding to each acquisition point according to the image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to the video acquisition result.
Step S806, processing the panoramic image and the panoramic video through an editing platform, and generating a target virtual scene corresponding to the target scene, where the editing platform is configured to edit a virtual scene browsed through a terminal.
The third virtual scene construction method provided in this embodiment and the same or corresponding description contents in the above embodiments may be referred to each other, and this embodiment is not described in detail herein.
Further, in order to provide the user with a more realistic video after acquisition by the video acquisition device, that is, to give the panoramic video and the panoramic image closer attributes and to allow an object of the target scene in the video to be added to the virtual scene, the acquired video is first segmented and then merged with a preset background. In this embodiment, the specific implementation is as follows:
generating an initial panoramic video corresponding to the target acquisition point according to a video acquisition result, and determining an initial panoramic video frame sequence corresponding to the initial panoramic video; performing object segmentation processing on initial panoramic video frames contained in the initial panoramic video frame sequence to obtain an object panoramic video frame sequence consisting of object panoramic video frames containing target objects; selecting a background panoramic video frame sequence related to the object panoramic video frame sequence, fusing the object panoramic video frame sequence and the background panoramic video frame sequence, and generating the panoramic video according to a fusion result.
Specifically, the initial panoramic video specifically refers to an unprocessed panoramic video; correspondingly, the initial panoramic video frame sequence specifically refers to a sequence formed by video frames obtained by framing the initial panoramic video; correspondingly, the object segmentation processing specifically refers to an operation of performing matting processing on a target object included in an initial panoramic video frame. Correspondingly, the object panoramic video frame sequence is a sequence formed by the panoramic video frames after the matting processing; correspondingly, the background panoramic video frame sequence specifically refers to a preset video frame sequence fused with the object panoramic video frame sequence, and the background panoramic video frame sequence and the object panoramic video frame sequence have the same frame number, and have a one-to-one correspondence relationship between the video frames.
Based on this, after the initial panoramic video is acquired by the video acquisition device, in order to smoothly fuse the panoramic video into the virtual scene, the initial panoramic video may be selectively subjected to framing processing to obtain an initial panoramic video frame sequence, and then object segmentation processing is respectively performed on the initial panoramic video frames included in the initial panoramic video frame sequence, that is, video frames corresponding to the target object are extracted from each initial panoramic video frame to obtain an object panoramic video frame sequence composed of object panoramic video frames including the target object. And then selecting a background panoramic video frame sequence associated with the object panoramic video frame sequence, and fusing the background panoramic video frame sequence and the object panoramic video frame sequence to obtain a panoramic video according to a fusion result for subsequent virtual scene construction.
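As an illustration of this per-frame fusion, the following Python sketch composites an object frame sequence over a background frame sequence using precomputed alpha mattes; the matte source (any person or object segmentation model) and all names are assumptions, not part of this specification.

```python
import numpy as np

def compose_panoramic_video(object_frames, alpha_mattes, background_frames):
    """Fuse matted object frames with an equal-length background frame sequence.

    object_frames / background_frames: lists of HxWx3 uint8 frames (one-to-one pairs).
    alpha_mattes: per-frame HxW float arrays in [0, 1], where 1 marks the target object.
    """
    fused = []
    for fg, alpha, bg in zip(object_frames, alpha_mattes, background_frames):
        a = alpha.astype(np.float32)[..., None]  # broadcast the matte over RGB channels
        out = a * fg.astype(np.float32) + (1.0 - a) * bg.astype(np.float32)
        fused.append(out.astype(np.uint8))
    return fused  # frame sequence of the fused panoramic video
```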
In conclusion, the panoramic video is generated in a reconstruction mode, so that the panoramic video is closer to a real scene, and the display effect of the panoramic image and the panoramic video can be ensured to be closer to each other when a virtual scene is constructed subsequently, so that the browsing experience of a user is improved.
In this process, in order to ensure that the display effect is more real, the video authenticity can be improved by recalculating the illumination information, and in this embodiment, the specific implementation manner is as follows:
generating an object color image, an object depth image and a light information image containing the target object according to a video acquisition result; obtaining a target object depth image by filtering the object depth image; performing, according to the object color image and the target object depth image, object segmentation processing on the initial panoramic video frames contained in the initial panoramic video frame sequence to obtain the object panoramic video frame sequence; calculating the image irradiance according to the light information image and a background panoramic video frame in the background panoramic video frame sequence; calculating illumination information according to the image irradiance and the object normal corresponding to the target object; fusing the object panoramic video frame sequence and the background panoramic video frame sequence to obtain a fused panoramic video; adjusting the fused panoramic video by using the illumination information, and generating the panoramic video according to the adjustment result; processing the panoramic image and the panoramic video through the editing platform to generate an initial virtual scene corresponding to the target scene; and performing pose adjustment on the panoramic video associated with the initial virtual scene to generate the target virtual scene corresponding to the target scene.
Specifically, the object color image refers to an RGB image containing the target object; correspondingly, the object depth image refers to a depth image containing the target object; the light information image refers to an ambient light field image of the environment at the target acquisition point; the target object depth image refers to the depth image obtained by noise-filtering the object depth image; the image irradiance refers to the radiant flux per unit area of the irradiated surface of the image, i.e., the radiant flux density, representing the amount of radiant energy received per unit area of the irradiated surface; and the illumination information refers to the information used to adjust the illumination of the panoramic video.
Based on this, in order to improve the realism of the panoramic video so that the target object can exist in the virtual scene and fit it more naturally, an object color image, an object depth image and a light information image containing the target object are first generated from the video acquisition result; the object depth image is then filtered to obtain the target object depth image. On this basis, so that the target object can appear in the virtual scene on its own, the object panoramic video frame sequence is obtained by performing object segmentation processing on the initial panoramic video frames in the initial panoramic video frame sequence according to the object color image and the target object depth image.
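One possible reading of this depth filtering and depth-guided segmentation step is sketched in Python below; the choice of bilateral filter, the depth range, and all names are assumptions for the example, and a learned matting model could replace the simple threshold.

```python
import cv2
import numpy as np

def filter_depth(depth: np.ndarray) -> np.ndarray:
    """Denoise the raw object depth image with an edge-preserving bilateral filter."""
    return cv2.bilateralFilter(depth.astype(np.float32), d=9,
                               sigmaColor=0.1, sigmaSpace=7)

def depth_guided_mask(color: np.ndarray, depth: np.ndarray,
                      near: float = 0.3, far: float = 2.5) -> np.ndarray:
    """Coarse foreground mask: keep pixels whose (filtered) depth falls in the
    assumed object range, then clean it up morphologically. The color image is
    what a real system would use to refine the matte edges (elided here)."""
    mask = ((depth > near) & (depth < far)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```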
Further, the image irradiance is calculated according to the light information image and a background panoramic video frame in the background panoramic video frame sequence, and illumination information is calculated according to the image irradiance and the object normals corresponding to the target object. The object panoramic video frame sequence and the background panoramic video frame sequence are then fused to obtain a fused panoramic video, which is adjusted using the illumination information to generate the panoramic video from the adjustment result. The result is then uploaded to the editing platform, where the panoramic image and the panoramic video are processed to generate an initial virtual scene corresponding to the target scene; finally, pose adjustment is performed on the panoramic video associated with the initial virtual scene to generate the target virtual scene corresponding to the target scene.
That is, referring to the structural block diagram shown in fig. 9, after acquiring an RGB image (the object color image), a rough depth image (the object depth image), and an ambient light field panorama (the light information image), the professional-level acquisition apparatus may optimize the rough depth image, perform image segmentation based on the optimization result and the RGB image, and then calculate the normals of the target object from the segmentation result. Meanwhile, irradiance is calculated by combining the ambient light field panorama with a preset target ambient light field panorama, relighting is completed from the calculation result and the normals, and finally a matting video is obtained through integration for subsequent virtual scene construction.
In this process, a professional-level acquisition device (e.g., an RGBD camera) acquires an RGB image and a depth image at the corresponding viewing angle, and simultaneously acquires an ambient light field image of the current point environment. The depth image is then optimized, that is, filtered and denoised, to obtain an optimized depth image. Image segmentation is performed based on the RGB image and the optimized depth image to separate the foreground from the background of the video image, that is, to separate the person from the background in the video, yielding a video containing the person, namely a matte of the central person. Finally, the matted image is relit and rendered according to the irradiance estimation and normal estimation results, and a matting video containing the person is output according to the merging result; this video contains an alpha channel, i.e., the non-person region is transparent.
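The irradiance and relighting computation could look like the following numeric sketch under a Lambertian (diffuse-only) assumption; the equirectangular sampling, the single-normal interface, and all names are assumptions for the example rather than the patent's exact algorithm.

```python
import numpy as np

def diffuse_irradiance(env: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Integrate E(n) = sum over pixels of L(w) * max(0, n.w) * dOmega
    for an equirectangular environment map env (HxWx3) and a unit normal."""
    h, w = env.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    lon = (u + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / h * np.pi
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # The solid angle of an equirectangular pixel shrinks with cos(latitude).
    d_omega = (2.0 * np.pi / w) * (np.pi / h) * np.cos(lat)
    cos_term = np.clip(dirs @ normal, 0.0, None)
    return (env * (cos_term * d_omega)[..., None]).sum(axis=(0, 1))  # per-channel E

def relight(rgb: np.ndarray, normal: np.ndarray,
            env_src: np.ndarray, env_dst: np.ndarray) -> np.ndarray:
    """Rescale colors by the ratio of target to source irradiance along 'normal'
    (applied per region here; per-pixel normals would vectorize this call)."""
    scale = diffuse_irradiance(env_dst, normal) / (diffuse_irradiance(env_src, normal) + 1e-6)
    return np.clip(rgb.astype(np.float32) * scale, 0.0, 255.0).astype(np.uint8)
```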
After the matting video is sent to the editing platform, it can be bound to the point position of the corresponding panoramic image. That is, after the person video is bound to the corresponding point position, the angle and position at which the video is placed are specified through the editing platform, and the target virtual scene containing the panoramic image and the matting video is obtained for the user to browse.
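One plausible shape for such a binding record is sketched below; every field name is hypothetical, since the specification does not define the editing platform's data model.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VideoBinding:
    """Binding of a matting video to an acquisition point in the target virtual scene."""
    acquisition_point_id: str             # point position the video is bound to
    video_uri: str                        # location of the alpha-channel matting video
    yaw_deg: float                        # placement angle inside the panorama
    pitch_deg: float
    position: Tuple[float, float, float]  # (x, y, z) placement in scene coordinates
    scale: float = 1.0
```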
In conclusion, the panoramic image and the panoramic video are fused in an image matting mode, so that the target virtual scene containing the panoramic image and the panoramic video is more real, the browsing requirements of users are met, and the browsing experience is improved.
In summary, image acquisition is performed at at least one acquisition point in the target scene through the image acquisition device, and video acquisition is performed at the target acquisition point through the video acquisition device, so that a panoramic image and a panoramic video are obtained from the acquisition results; the panoramic image and the panoramic video are then fused on the editing platform, and the target virtual scene corresponding to the target scene is obtained from the fusion result. Blending the panoramic video into the target scene image brings the user an immersive experience, makes the target virtual scene more realistic, and improves the user experience.
Fig. 10 is a flowchart illustrating a fourth virtual scene construction method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step S1002, receiving a virtual scene construction instruction submitted by a user aiming at a target scene.
Step S1004, performing image acquisition at at least one acquisition point in the target scene by an image acquisition device, and performing video acquisition at a target acquisition point among the at least one acquisition point by a video acquisition device.
Step S1006, generating a panorama corresponding to each acquisition point according to the image acquisition result, and generating a matting video corresponding to the target acquisition point according to the video acquisition result.
The matting video is a panoramic video corresponding to the target acquisition point, generated according to the video acquisition result.
Step S1008, processing the panoramic image and the matting video through an editing platform to generate a target virtual scene corresponding to the target scene.
Step S1010, creating a scene access link for the target virtual scene in response to the virtual scene construction instruction.
It should be noted that, where the description contents of the fourth virtual scene construction method provided in this embodiment are the same as or similar to those of the virtual scene construction method provided in the foregoing embodiment, reference may be made to the foregoing embodiment, and this embodiment is not described in detail herein.
Fig. 11 is a flowchart illustrating a fifth virtual scene construction method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step S1102 is to display a virtual scene construction interface based on a start instruction submitted by a user for a scene construction application, and receive a virtual scene construction instruction submitted by the user through the virtual scene construction interface.
Step S1104, performing image acquisition at at least one acquisition point in the target scene by the image acquisition device, and performing video acquisition at a target acquisition point of the at least one acquisition point by the video acquisition device.
Step S1106, generating a panoramic image corresponding to each acquisition point according to the image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to the video acquisition result.
Step S1108, processing the panoramic image and the panoramic video through an editing platform, generating a target virtual scene corresponding to the target scene, and displaying the target virtual scene to the user.
It should be noted that, where the description of the fifth virtual scene construction method provided in this embodiment is the same as or similar to that of the virtual scene construction methods provided in the foregoing embodiments, reference may be made to those embodiments; details are not repeated here.
Corresponding to the above method embodiment, the present specification further provides a first virtual scene constructing apparatus embodiment, and fig. 12 shows a schematic structural diagram of the first virtual scene constructing apparatus provided in the present specification. As shown in fig. 12, the apparatus includes:
an acquisition module 1202 configured to perform image acquisition at at least one acquisition point in a target scene by an image acquisition device and perform video acquisition at a target acquisition point of the at least one acquisition point by a video acquisition device;
a generating module 1204, configured to generate a panoramic image corresponding to each acquisition point according to an image acquisition result, and generate a panoramic video corresponding to the target acquisition point according to a video acquisition result;
the processing module 1206 is configured to process the panoramic image and the panoramic video through an editing platform, and generate a target virtual scene corresponding to the target scene, where the editing platform is used to edit a virtual scene browsed through a terminal.
In an optional embodiment, the acquisition module 1202 is further configured to:
receiving an acquisition point selection instruction submitted by a user; determining the target acquisition point among the at least one acquisition point according to the acquisition point selection instruction; and performing video acquisition at the target acquisition point through the video acquisition device according to the acquisition parameters corresponding to the target acquisition point.
In an optional embodiment, the generating module 1204 is further configured to:
generating an image set corresponding to each acquisition point according to an image acquisition result, and extracting a plurality of color images corresponding to each acquisition point in the image set; obtaining a panoramic image corresponding to each acquisition point by stitching the plurality of color images;
correspondingly, the device further comprises:
a model generation module configured to extract a plurality of depth images corresponding to each acquisition point in the image set; construct a scene texture model corresponding to the target scene according to the plurality of depth images and the plurality of color images corresponding to each acquisition point, and upload the scene texture model to the editing platform; and perform visualization processing on the scene texture model through the editing platform to generate a virtual scene texture model corresponding to the target virtual scene.
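For illustration, the stitching step just described could be realized with OpenCV's high-level stitcher as below; this is one plausible implementation under that assumption, not necessarily the engine the platform uses.

```python
import cv2

def build_panorama(color_images):
    """Stitch the color images captured at one acquisition point into a panorama."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(color_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return pano
```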
In an optional embodiment, the processing module 1206 is further configured to:
receiving a scene editing instruction submitted by a user through the editing platform; responding to the scene editing instruction, configuring scene elements for the panoramic image, and adjusting the panoramic video to obtain a target panoramic image and a target panoramic video; and fusing the target panoramic image and the target panoramic video, and generating the target virtual scene corresponding to the target scene according to a fusion result.
In an optional embodiment, the processing module 1206 is further configured to:
determining an entrance panorama corresponding to an entrance acquisition point in the at least one acquisition point and a scene panorama corresponding to a scene acquisition point; configuring entry information for the entry panorama in response to the scene editing instructions and adding object identification information in the scene panorama; generating the target panorama according to an entry information configuration result and an object identification information adding result;
correspondingly, the adjusting the panoramic video in response to the scene editing instruction to obtain the target panoramic video includes:
and responding to the scene editing instruction to perform visual angle adjustment, playing adjustment and/or video attribute adjustment on the panoramic video, and obtaining the target panoramic video.
In an optional embodiment, the apparatus further comprises:
a detection module configured to detect whether the image capture device and the video capture device are the same;
if yes, fusing the target panoramic image and the target panoramic video, and generating a target virtual scene corresponding to the target scene according to a fusion result;
if not, aligning the target panoramic image and the target panoramic video, fusing the aligned target panoramic image and the aligned target panoramic video, and generating the target virtual scene corresponding to the target scene according to the fusion result.
In an optional embodiment, the aligning the target panorama and the target panoramic video includes:
responding to an alignment instruction submitted by the user through the editing platform, and aligning the target panoramic image and the target panoramic video; or determining an associated panoramic image and an associated panoramic video of the target acquisition point, extracting an associated panoramic video frame matched with the associated panoramic image from the associated panoramic video, determining a panoramic image characteristic corresponding to the associated panoramic image and a panoramic video frame characteristic corresponding to the associated panoramic video frame, calculating an alignment parameter according to the panoramic image characteristic and the panoramic video frame characteristic, and aligning the target panoramic image and the target panoramic video according to the alignment parameter.
In an optional embodiment, the generating module 1204 is further configured to:
generating an initial panoramic video corresponding to the target acquisition point according to a video acquisition result, and determining an initial panoramic video frame sequence corresponding to the initial panoramic video; performing object segmentation processing on initial panoramic video frames contained in the initial panoramic video frame sequence to obtain an object panoramic video frame sequence consisting of object panoramic video frames containing target objects; selecting a background panoramic video frame sequence related to the object panoramic video frame sequence, fusing the object panoramic video frame sequence and the background panoramic video frame sequence, and generating the panoramic video according to a fusion result.
In an optional embodiment, the apparatus further comprises:
a filtering module configured to generate an object color image, an object depth image and a light information image containing the target object according to a video acquisition result; obtaining a target object depth image by filtering the object depth image;
correspondingly, the performing object segmentation processing on the initial panoramic video frame included in the initial panoramic video frame sequence to obtain an object panoramic video frame sequence composed of object panoramic video frames including a target object includes:
and according to the object color image and the target object depth image, carrying out object segmentation processing on an initial panoramic video frame contained in the initial panoramic video frame sequence to obtain the object panoramic video frame sequence.
In an optional embodiment, the apparatus further comprises:
a calculation module configured to calculate an image irradiance from the light information image and a background panoramic video frame of the sequence of background panoramic video frames; calculating illumination information according to the image irradiance and an object normal corresponding to the target object;
correspondingly, the fusing the object panoramic video frame sequence and the background panoramic video frame sequence and generating the panoramic video according to a fusion result includes:
fusing the object panoramic video frame sequence and the background panoramic video frame sequence to obtain a fused panoramic video; adjusting the fused panoramic video by utilizing the illumination information, and generating the panoramic video according to an adjustment result;
correspondingly, the processing the panoramic image and the panoramic video through the editing platform to generate a target virtual scene corresponding to the target scene includes:
processing the panoramic image and the panoramic video through the editing platform to generate an initial virtual scene corresponding to the target scene; and performing pose adjustment on the panoramic video associated with the initial virtual scene to generate the target virtual scene corresponding to the target scene.
The foregoing is a schematic scheme of the first virtual scene constructing apparatus in this embodiment. It should be noted that the technical solution of the virtual scene constructing apparatus and the technical solution of the virtual scene constructing method belong to the same concept, and details that are not described in detail in the technical solution of the virtual scene constructing apparatus can be referred to the description of the technical solution of the virtual scene constructing method.
Corresponding to the above method embodiment, the present specification further provides a second virtual scene constructing apparatus embodiment, and fig. 13 shows a schematic structural diagram of the second virtual scene constructing apparatus provided in the present specification embodiment. As shown in fig. 13, the apparatus includes:
a receive instruction module 1302 configured to receive a virtual scene construction instruction submitted by a user for a target scene;
an acquisition video module 1304 configured to perform image acquisition at at least one acquisition point in the target scene by an image acquisition device and perform video acquisition at a target acquisition point among the at least one acquisition point by a video acquisition device;
a video generation module 1306, configured to generate a panoramic image corresponding to each acquisition point according to an image acquisition result, and generate a panoramic video corresponding to the target acquisition point according to a video acquisition result;
a scene generation module 1308, configured to process the panoramic image and the panoramic video through an editing platform, and generate a target virtual scene corresponding to the target scene;
a create link module 1310 configured to create a scene access link for the target virtual scene in response to the virtual scene build instruction.
The foregoing is a schematic scheme of the second virtual scene constructing apparatus in this embodiment. It should be noted that the technical solution of the virtual scene constructing apparatus and the technical solution of the virtual scene constructing method belong to the same concept, and details that are not described in detail in the technical solution of the virtual scene constructing apparatus can be referred to the description of the technical solution of the virtual scene constructing method.
Corresponding to the above method embodiment, the present specification further provides a third virtual scene constructing apparatus embodiment, and fig. 14 shows a schematic structural diagram of the third virtual scene constructing apparatus provided in the present specification embodiment. As shown in fig. 14, the apparatus includes:
a presentation interface module 1402 configured to present a virtual scene construction interface based on a start instruction submitted by a user for a scene construction application, and receive a virtual scene construction instruction submitted by the user through the virtual scene construction interface;
an acquire video module 1404 configured to perform image acquisition at at least one acquisition point in a target scene by an image acquisition device and perform video acquisition at a target acquisition point of the at least one acquisition point by a video acquisition device;
a video generation module 1406 configured to generate a panoramic image corresponding to each acquisition point according to the image acquisition result, and generate a panoramic video corresponding to the target acquisition point according to the video acquisition result;
a scene display module 1408, configured to process the panoramic image and the panoramic video through an editing platform, generate a target virtual scene corresponding to the target scene, and display the target virtual scene to the user.
The above is a schematic scheme of the third virtual scene constructing apparatus of this embodiment. It should be noted that the technical solution of the virtual scene constructing apparatus and the technical solution of the virtual scene constructing method belong to the same concept, and details that are not described in detail in the technical solution of the virtual scene constructing apparatus can be referred to the description of the technical solution of the virtual scene constructing method.
FIG. 15 illustrates a block diagram of a computing device 1500 provided in accordance with one embodiment of the present description. The components of the computing device 1500 include, but are not limited to, a memory 1510 and a processor 1520. The processor 1520 is coupled to the memory 1510 via a bus 1530 and a database 1550 is used to store data.
The computing device 1500 also includes an access device 1540 that enables the computing device 1500 to communicate via one or more networks 1560. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 1540 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 1500, as well as other components not shown in FIG. 15, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device structure shown in FIG. 15 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 1500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 1500 may also be a mobile or stationary server.
The processor 1520 is configured to execute computer-executable instructions, which when executed by the processor, implement the steps of the virtual scene construction method described above.
The foregoing is a schematic diagram of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the virtual scene construction method described above belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the virtual scene construction method described above.
An embodiment of the present specification further provides a computer-readable storage medium, which stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the steps of the virtual scene construction method are implemented.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the virtual scene construction method described above belong to the same concept, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the virtual scene construction method described above.
An embodiment of the present specification further provides a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the virtual scene construction method.
The above is an illustrative scheme of a computer program of the present embodiment. It should be noted that the technical solution of the computer program and the technical solution of the virtual scene construction method described above belong to the same concept, and details that are not described in detail in the technical solution of the computer program can be referred to the description of the technical solution of the virtual scene construction method described above.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (14)

1. A virtual scene construction method comprises the following steps:
acquiring an image at at least one acquisition point in a target scene through image acquisition equipment, and acquiring a video at a target acquisition point among the at least one acquisition point through video acquisition equipment;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene, wherein the editing platform is used for editing the virtual scene browsed through the terminal.
2. The method of claim 1, wherein the acquiring a video at a target acquisition point among the at least one acquisition point through video acquisition equipment comprises:
receiving an acquisition point selection instruction submitted by a user;
determining the target acquisition point in the at least one acquisition point according to the acquisition point selection instruction;
and performing video acquisition at the target acquisition point through the video acquisition equipment according to the acquisition parameters corresponding to the target acquisition point.
3. The method of claim 1, wherein generating a panorama for each acquisition point according to the image acquisition result comprises:
generating an image set corresponding to each acquisition point according to an image acquisition result, and extracting a plurality of color images corresponding to each acquisition point in the image set;
obtaining a panoramic image corresponding to each acquisition point by stitching the plurality of color images;
correspondingly, the method further comprises the following steps:
extracting a plurality of depth images corresponding to each acquisition point in the image set;
according to the multiple depth images corresponding to each acquisition point and the multiple color images corresponding to each acquisition point, constructing a scene texture model corresponding to the target scene and uploading the scene texture model to the editing platform;
and performing visualization processing on the scene texture model through the editing platform to generate a virtual scene texture model corresponding to the target virtual scene.
4. The method of claim 1, wherein the processing the panoramic image and the panoramic video through the editing platform to generate a target virtual scene corresponding to the target scene comprises:
receiving a scene editing instruction submitted by a user through the editing platform;
responding to the scene editing instruction, configuring scene elements for the panoramic image, and adjusting the panoramic video to obtain a target panoramic image and a target panoramic video;
and fusing the target panoramic image and the target panoramic video, and generating the target virtual scene corresponding to the target scene according to a fusion result.
5. The method of claim 4, wherein the configuring scene elements for the panoramic image in response to the scene editing instruction to obtain a target panoramic image comprises:
determining an entrance panorama corresponding to an entrance acquisition point in the at least one acquisition point and a scene panorama corresponding to a scene acquisition point;
configuring entry information for the entry panorama in response to the scene editing instructions and adding object identification information in the scene panorama;
generating the target panoramic image according to the entry information configuration result and the object identification information adding result;
correspondingly, the adjusting the panoramic video in response to the scene editing instruction to obtain the target panoramic video includes:
and responding to the scene editing instruction to perform visual angle adjustment, playing adjustment and/or video attribute adjustment on the panoramic video, and obtaining the target panoramic video.
6. The method according to claim 4 or 5, wherein before the step of fusing the target panoramic image and the target panoramic video and generating the target virtual scene corresponding to the target scene according to the fusion result is executed, the method further comprises:
detecting whether the image acquisition equipment is the same as the video acquisition equipment;
if yes, fusing the target panoramic image and the target panoramic video, and generating a target virtual scene corresponding to the target scene according to a fusion result;
if not, aligning the target panoramic image and the target panoramic video, fusing the aligned target panoramic image and the aligned target panoramic video, and generating the target virtual scene corresponding to the target scene according to the fusion result.
7. The method of claim 6, the aligning the target panorama and the target panoramic video, comprising:
responding to an alignment instruction submitted by the user through the editing platform, and aligning the target panoramic image and the target panoramic video; or,
determining an associated panoramic image and an associated panoramic video of the target acquisition point, extracting an associated panoramic video frame matched with the associated panoramic image from the associated panoramic video, determining a panoramic image characteristic corresponding to the associated panoramic image and a panoramic video frame characteristic corresponding to the associated panoramic video frame, calculating an alignment parameter according to the panoramic image characteristic and the panoramic video frame characteristic, and aligning the target panoramic image and the target panoramic video according to the alignment parameter.
8. The method according to claim 1, wherein the generating of the panoramic video corresponding to the target acquisition point according to the video acquisition result comprises:
generating an initial panoramic video corresponding to the target acquisition point according to a video acquisition result, and determining an initial panoramic video frame sequence corresponding to the initial panoramic video;
performing object segmentation processing on initial panoramic video frames contained in the initial panoramic video frame sequence to obtain an object panoramic video frame sequence consisting of object panoramic video frames containing target objects;
selecting a background panoramic video frame sequence related to the object panoramic video frame sequence, fusing the object panoramic video frame sequence and the background panoramic video frame sequence, and generating the panoramic video according to a fusion result.
9. The method of claim 8, further comprising:
generating an object color image, an object depth image and a light information image containing the target object according to a video acquisition result;
obtaining a target object depth image by filtering the object depth image;
correspondingly, the performing object segmentation processing on the initial panoramic video frame included in the initial panoramic video frame sequence to obtain an object panoramic video frame sequence composed of object panoramic video frames including a target object includes:
and according to the object color image and the target object depth image, carrying out object segmentation processing on an initial panoramic video frame contained in the initial panoramic video frame sequence to obtain the object panoramic video frame sequence.
10. The method of claim 9, further comprising:
calculating the irradiance of the image according to the light information image and a background panoramic video frame in the background panoramic video frame sequence;
calculating illumination information according to the image irradiance and an object normal corresponding to the target object;
correspondingly, the fusing the object panoramic video frame sequence and the background panoramic video frame sequence and generating the panoramic video according to a fusion result includes:
fusing the object panoramic video frame sequence and the background panoramic video frame sequence to obtain a fused panoramic video;
adjusting the fused panoramic video by utilizing the illumination information, and generating the panoramic video according to an adjustment result;
correspondingly, the processing the panoramic image and the panoramic video through the editing platform to generate a target virtual scene corresponding to the target scene includes:
processing the panoramic image and the panoramic video through the editing platform to generate an initial virtual scene corresponding to the target scene;
and performing pose adjustment on the panoramic video associated with the initial virtual scene to generate the target virtual scene corresponding to the target scene.
11. A virtual scene construction method comprises the following steps:
receiving a virtual scene construction instruction submitted by a user aiming at a target scene;
acquiring an image at at least one acquisition point in the target scene through an image acquisition device, and acquiring a video at a target acquisition point among the at least one acquisition point through a video acquisition device;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene;
creating a scene access link for the target virtual scene in response to the virtual scene construction instruction.
12. A virtual scene construction method comprises the following steps:
displaying a virtual scene construction interface based on a starting instruction submitted by a user aiming at a scene construction application, and receiving a virtual scene construction instruction submitted by the user through the virtual scene construction interface;
acquiring an image at at least one acquisition point in a target scene through image acquisition equipment, and acquiring a video at a target acquisition point among the at least one acquisition point through video acquisition equipment;
generating a panoramic image corresponding to each acquisition point according to an image acquisition result, and generating a panoramic video corresponding to the target acquisition point according to a video acquisition result;
and processing the panoramic image and the panoramic video through an editing platform to generate a target virtual scene corresponding to the target scene and display the target virtual scene to the user.
13. A computing device, comprising:
a memory and a processor;
the memory is for storing computer-executable instructions, and the processor is for executing the computer-executable instructions, which when executed by the processor, implement the steps of the method of any one of claims 1 to 12.
14. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 12.
CN202210457590.5A 2022-04-28 2022-04-28 Virtual scene construction method and device Active CN114581611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210457590.5A CN114581611B (en) 2022-04-28 2022-04-28 Virtual scene construction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210457590.5A CN114581611B (en) 2022-04-28 2022-04-28 Virtual scene construction method and device

Publications (2)

Publication Number Publication Date
CN114581611A true CN114581611A (en) 2022-06-03
CN114581611B CN114581611B (en) 2022-09-20

Family

ID=81785363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210457590.5A Active CN114581611B (en) 2022-04-28 2022-04-28 Virtual scene construction method and device

Country Status (1)

Country Link
CN (1) CN114581611B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115904188A (en) * 2022-11-21 2023-04-04 北京城市网邻信息技术有限公司 Method and device for editing house-type graph, electronic equipment and storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040021668A1 (en) * 2000-11-29 2004-02-05 Louis Chevallier Method for displaying an object in a panorama window
CN103400005A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Quantifying method for intense light source to interfere imaging features of glimmer system
US20140199050A1 (en) * 2013-01-17 2014-07-17 Spherical, Inc. Systems and methods for compiling and storing video with static panoramic background
CN105376500A (en) * 2014-08-18 2016-03-02 三星电子株式会社 Video processing apparatus for generating paranomic video and method thereof
CN108322625A (en) * 2017-12-28 2018-07-24 杭州蜜迩科技有限公司 A kind of panoramic video production method based on panorama sketch
CN108616731A (en) * 2016-12-30 2018-10-02 艾迪普(北京)文化科技股份有限公司 360 degree of VR panoramic images images of one kind and video Real-time Generation
WO2019041351A1 (en) * 2017-09-04 2019-03-07 艾迪普(北京)文化科技股份有限公司 Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
CN109934764A (en) * 2019-01-31 2019-06-25 北京奇艺世纪科技有限公司 Processing method, device, terminal, server and the storage medium of panoramic video file
KR20200035696A (en) * 2018-09-27 2020-04-06 주식회사 더픽트 System and method for editing virtual reality image and video
CN111866523A (en) * 2020-07-24 2020-10-30 北京爱笔科技有限公司 Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN112085659A (en) * 2020-09-11 2020-12-15 中德(珠海)人工智能研究院有限公司 Panorama splicing and fusing method and system based on dome camera and storage medium
CN112822479A (en) * 2020-12-30 2021-05-18 北京华录新媒信息技术有限公司 Depth map generation method and device for 2D-3D video conversion
CN113192183A (en) * 2021-04-29 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Real scene three-dimensional reconstruction method and system based on oblique photography and panoramic video fusion
CN113446956A (en) * 2020-03-24 2021-09-28 阿里巴巴集团控股有限公司 Data acquisition equipment, data correction method and device and electronic equipment
CN113840049A (en) * 2021-09-17 2021-12-24 阿里巴巴(中国)有限公司 Image processing method, video flow scene switching method, device, equipment and medium
CN113920036A (en) * 2021-12-14 2022-01-11 武汉大学 Interactive relighting editing method based on RGB-D image
CN113963100A (en) * 2021-10-25 2022-01-21 广东工业大学 Three-dimensional model rendering method and system for digital twin simulation scene
CN114401451A (en) * 2021-12-28 2022-04-26 有半岛(北京)信息科技有限公司 Video editing method and device, electronic equipment and readable storage medium

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040021668A1 (en) * 2000-11-29 2004-02-05 Louis Chevallier Method for displaying an object in a panorama window
US20140199050A1 (en) * 2013-01-17 2014-07-17 Spherical, Inc. Systems and methods for compiling and storing video with static panoramic background
CN103400005A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Quantifying method for intense light source to interfere imaging features of glimmer system
CN105376500A (en) * 2014-08-18 2016-03-02 三星电子株式会社 Video processing apparatus for generating paranomic video and method thereof
CN108616731A (en) * 2016-12-30 2018-10-02 艾迪普(北京)文化科技股份有限公司 360 degree of VR panoramic images images of one kind and video Real-time Generation
WO2019041351A1 (en) * 2017-09-04 2019-03-07 艾迪普(北京)文化科技股份有限公司 Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
CN108322625A (en) * 2017-12-28 2018-07-24 杭州蜜迩科技有限公司 A kind of panoramic video production method based on panorama sketch
KR20200035696A (en) * 2018-09-27 2020-04-06 주식회사 더픽트 System and method for editing virtual reality image and video
CN109934764A (en) * 2019-01-31 2019-06-25 北京奇艺世纪科技有限公司 Panoramic video file processing method, apparatus, terminal, server and storage medium
CN113446956A (en) * 2020-03-24 2021-09-28 阿里巴巴集团控股有限公司 Data acquisition equipment, data correction method and apparatus, and electronic device
CN111866523A (en) * 2020-07-24 2020-10-30 北京爱笔科技有限公司 Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN112085659A (en) * 2020-09-11 2020-12-15 中德(珠海)人工智能研究院有限公司 Panorama stitching and fusion method and system based on a dome camera, and storage medium
CN112822479A (en) * 2020-12-30 2021-05-18 北京华录新媒信息技术有限公司 Depth map generation method and device for 2D-3D video conversion
CN113192183A (en) * 2021-04-29 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Real scene three-dimensional reconstruction method and system based on oblique photography and panoramic video fusion
CN113840049A (en) * 2021-09-17 2021-12-24 阿里巴巴(中国)有限公司 Image processing method, video stream scene switching method, apparatus, device and medium
CN113963100A (en) * 2021-10-25 2022-01-21 广东工业大学 Three-dimensional model rendering method and system for digital twin simulation scene
CN113920036A (en) * 2021-12-14 2022-01-11 武汉大学 Interactive relighting editing method based on RGB-D image
CN114401451A (en) * 2021-12-28 2022-04-26 有半岛(北京)信息科技有限公司 Video editing method and device, electronic equipment and readable storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Aleksey I. Efimov et al.: "Algorithm of geometrical transformation and merging of radar and video images for technical vision systems", 2018 7th Mediterranean Conference on Embedded Computing (MECO) *
Feng Jianping et al.: "Construction of a 3D panoramic roaming system based on panoramic images", Computer & Digital Engineering *
Zhang Zhongmin et al.: "A moving target detection method based on panoramic vision", Electronic Science and Technology *
Li Xiaoyu et al.: "Cylindrical panoramic video stitching algorithm", Electronic Technology & Software Engineering *
Wang Chenhao: "Illumination simulation method for remote sensing imaging based on geometric mapping", Journal of System Simulation *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115904188A (en) * 2022-11-21 2023-04-04 北京城市网邻信息技术有限公司 Method and device for editing a floor plan, electronic device and storage medium

Also Published As

Publication number Publication date
CN114581611B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN109887003B (en) Method and device for three-dimensional tracking initialization
JP4642757B2 (en) Image processing apparatus and image processing method
US20200358996A1 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
EP3533218B1 (en) Simulating depth of field
JP2006053694A (en) Space simulator, space simulation method, space simulation program and recording medium
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
US10621777B2 (en) Synthesis of composite images having virtual backgrounds
WO2023280038A1 (en) Method for constructing three-dimensional real-scene model, and related apparatus
US11900552B2 (en) System and method for generating virtual pseudo 3D outputs from images
CN112954292B (en) Digital museum navigation system and method based on augmented reality
Ebner et al. Multi‐view reconstruction of dynamic real‐world objects and their integration in augmented and virtual reality applications
CN112598780A (en) Instance object model construction method and device, readable medium and electronic equipment
CN114581611B (en) Virtual scene construction method and device
Langlotz et al. AR record&replay: situated compositing of video content in mobile augmented reality
CN109788270B (en) 3D-360-degree panoramic image generation method and device
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
DuVall et al. Compositing light field video using multiplane images
Gomes Jr et al. Semi-automatic methodology for augmented panorama development in industrial outdoor environments
CN114089836B (en) Labeling method, terminal, server and storage medium
CN113947671A (en) Panoramic 360-degree image segmentation and synthesis method, system and medium
Feng et al. Foreground-aware dense depth estimation for 360 images
Wu et al. Construction and implementation of the three-dimensional virtual panoramic roaming system of Hainan ecotourism
CN109348132B (en) Panoramic shooting method and device
CN111489407A (en) Light field image editing method, device, equipment and storage medium
Huang et al. Generation of Animated Stereo Panoramic Images for Image-Based Virtual Reality Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40073987
Country of ref document: HK