CN117527994A - Visual presentation method and system for space simulation shooting

Info

Publication number: CN117527994A
Application number: CN202311465914.0A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: data, scene, virtual, shooting, information
Legal status: Pending
Inventors: 马平 (Ma Ping), 孙靖 (Sun Jing), 姜文 (Jiang Wen), 安娜 (An Na)
Applicant and current assignee: China Film Digital Production Base Co., Ltd.
Priority and filing date: 2023-11-06
Publication date: 2024-02-06


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/2224: Studio circuitry, devices or equipment related to virtual studio applications
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention disclose a visual presentation method and system for space simulation shooting, together with a storage medium and an electronic device. The method comprises shooting data acquisition, shooting data processing, data debugging, virtual scene display and simulated shooting: information is collected from the real-world shooting environment to generate point cloud data; the point cloud data are transmitted to a computer; the computer processes and reconstructs the point cloud data to create a virtual model of the real world; and the computer adjusts the relevant debugging parameters in the virtual shooting scene to form a virtual studio. By shooting, scanning and modeling objects from the real scene in the computer, the invention constructs a virtual shooting scene in which the user can intuitively experience the shooting process and its operation; rebuilding the virtual shooting scene satisfies the user's different shooting requirements for different scenes, and the virtual shooting space can be modified according to the user's wishes.

Description

Visual presentation method and system for space simulation shooting
Technical Field
The invention relates to the technical field of image processing, and in particular to a visual presentation method and system for space simulation shooting, a storage medium and an electronic device.
Background
In the shooting process, people usually choose to shoot a real scene in order to obtain the expected background picture, but live-action shooting is constrained by the limitations of the real scene and requires a great deal of manpower, material resources, money and time. To achieve a realistic live-action effect without being limited by the real scene, people choose space virtual shooting, a technique that shoots and creates within a computer-generated virtual environment.
Current space virtual shooting technology can reproduce the realistic effect of a live-action shoot, but it cannot simulate and present the actual shooting process before shooting takes place, so users have no intuitive feeling for, or experience of, shooting when using space simulation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a visual presentation method and a visual presentation system for space simulation shooting.
To solve the above technical problems, the invention provides a visual presentation method for space simulation shooting, comprising the following steps:
Step one: acquiring image data of a shooting environment in a real scene, collecting information from the image data, and generating environment information of the real scene based on the collected information, the environment information comprising prop information and construction information of a scene; then extracting a plurality of key points from the environment information, generating point cloud information based on the extracted key points, and storing it. Both the environment information and the prop information are data obtained after image scanning and image conversion of objects in the real scene by a laser scanner or a camera;
Step two: processing the point cloud information, acquiring a plurality of key points of the point cloud information, and constructing stereoscopic graphics in the virtual shooting scene according to those key points; determining image data of a planar scene based on the environment information of the real scene, the planar scene being the planar image obtained after photographing a scene in the real scene;
Step three: adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: the prop placement position and light irradiation angle data of the real scene are obtained, and the computer adjusts the corresponding data in the virtual scene until they match those of the real scene, so that a virtual studio is constructed in the virtual scene; the stereoscopic graphics of the stereoscopic scene corresponding to each planar image are determined from the image data of the planar scene, each independent prop in a planar scene having corresponding stereoscopic graphics in the stereoscopic scene. The virtual studio comprises: a plurality of virtual cameras for image acquisition, a tracking module for position tracking, and a computer graphics rendering module for graphics rendering processing;
Step four: acquiring shooting scene data of the real scene, simulating the shooting scene data through the virtual scene creation module, adjusting the virtual shooting scene in real time according to the position data, angle data and ray data of the virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene; the prop placement positions and light irradiation angles in the virtual shooting scene are adjusted in real time according to the position, angle and ray data of the virtual camera so as to obtain the adjusted virtual shooting scene. The virtual scene creation module comprises: a modeling module for creating models, a rendering module for performing graphics rendering, an animation module for generating animation data, and a special-effects module for generating image special effects;
Step five: obtaining user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching the corresponding fill-light data and focusing data through the user operation data, and transmitting the image shot by the virtual camera to the user terminal for presentation, the user operation data comprising: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
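By way of illustration only, the five steps above can be condensed into the following self-contained Python sketch. Every data structure and function body below is a simplified stand-in invented for clarity (random points instead of scanner output, a rigid translation instead of full scene reconstruction, a pinhole projection instead of a renderer); none of it is taken from the disclosed embodiments.

```python
import numpy as np

def collect_environment_info(n_samples: int = 1000) -> np.ndarray:
    """Step one stand-in: a laser scanner or camera would return scanned
    surface samples; here we fabricate an (N, 3) array of points."""
    return np.random.rand(n_samples, 3)

def extract_keypoints(env: np.ndarray, k: int = 100) -> np.ndarray:
    """Pick a subset of the scanned points as key points (a uniform stride
    here; a real system would use geometric feature detection)."""
    stride = max(1, len(env) // k)
    return env[::stride]

def build_point_cloud(keypoints: np.ndarray) -> np.ndarray:
    """Step one output: generate the point cloud information and store it."""
    np.save("point_cloud.npy", keypoints)   # hypothetical storage location
    return keypoints

def adjust_scene(point_cloud: np.ndarray, cam_pos: np.ndarray) -> np.ndarray:
    """Steps two-four stand-in: pose the reconstructed geometry relative to
    the virtual camera (a rigid translation, for brevity)."""
    return point_cloud - cam_pos

def simulate_shot(view: np.ndarray, focal: float) -> np.ndarray:
    """Step five stand-in: a pinhole projection of the adjusted scene,
    parameterized by the user-supplied focal length data."""
    z = np.clip(view[:, 2], 1e-3, None)
    return focal * view[:, :2] / z[:, None]

if __name__ == "__main__":
    cloud = build_point_cloud(extract_keypoints(collect_environment_info()))
    view = adjust_scene(cloud, cam_pos=np.array([0.5, 0.5, -2.0]))
    frame = simulate_shot(view, focal=35.0)
    print(frame.shape)   # 2-D coordinates that would be presented at the user terminal
```

Running the sketch prints the shape of the projected two-dimensional coordinates that would be transmitted to the user terminal in step five.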
Preferably, the environmental information is data obtained after image scanning and image conversion of an object in a real scene by a laser scanner or a camera.
Preferably, the virtual studio includes: a plurality of virtual cameras for image acquisition, a tracking module for position tracking, and a computer graphics rendering module for graphics rendering processing.
Preferably, the virtual scene creation module comprises: a modeling module for creating models, a rendering module for performing graphics rendering, an animation module for generating animation data, and a special-effects module for generating image special effects.
Preferably, the prop information is data obtained after image scanning and image conversion of objects in a real scene by a laser scanner or a camera.
According to another aspect of the present invention, there is provided a visual presentation system for space simulation shooting, comprising:
an acquisition module, used for acquiring image data of a shooting environment in a real scene, collecting information from the image data, and generating environment information of the real scene based on the collected information, the environment information comprising prop information and construction information of a scene, and then extracting a plurality of key points from the environment information, so that point cloud information is generated and stored based on the extracted key points;
a processing module, used for processing the point cloud information, acquiring a plurality of key points of the point cloud information, and constructing stereoscopic graphics in the virtual shooting scene according to those key points, and for determining image data of a planar scene based on the environment information of the real scene, the planar scene being the planar image obtained after photographing a scene in the real scene;
a construction module, used for adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: the prop placement position and light irradiation angle data of the real scene are obtained, the computer adjusts the corresponding data in the virtual scene until they match those of the real scene, and a virtual studio is constructed in the virtual scene; the stereoscopic graphics of the stereoscopic scene corresponding to each planar image are determined from the image data of the planar scene, each independent prop in a planar scene having corresponding stereoscopic graphics in the stereoscopic scene;
an adjustment module, used for acquiring shooting scene data of the real scene, simulating the shooting scene data through the virtual scene creation module, adjusting the virtual shooting scene in real time according to the position data, angle data and light data of the virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene, the prop placement positions and light irradiation angles in the virtual shooting scene being adjusted in real time according to the position, angle and light data of the virtual camera so as to obtain the adjusted virtual shooting scene;
a presentation module, used for acquiring user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching the corresponding fill-light data and focusing data through the user operation data, and then transmitting the image shot by the virtual camera to the user terminal for presentation, the user operation data comprising: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
With the above technical scheme adopted, the invention has the following beneficial effects compared with the prior art (of course, any product implementing the invention need not achieve all of the following advantages at the same time):
the virtual shooting scene is constructed after the objects in the real scene are shot, scanned and re-molded on the computer, so that a user can intuitively experience the shooting process and operation, different shooting requirements of the user on different scenes can be met through the reconstruction of the virtual shooting scene, and the virtual shooting space can be modified according to the user wish.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of its embodiments with reference to the attached drawings. The drawings provide a further understanding of the embodiments, are incorporated in and constitute a part of this specification, and illustrate the invention together with the embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flowchart of a visual presentation method for space simulation shooting according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a visual presentation system for space-simulation shooting according to an embodiment of the present invention.
It should be noted that these drawings and the written description are not intended to limit the scope of the inventive concept in any way, but to illustrate the inventive concept to those skilled in the art by referring to the specific embodiments.
Detailed Description
Hereinafter, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present invention are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present invention, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in an embodiment of the invention may be generally understood as one or more without explicit limitation or the contrary in the context.
The invention will now be described in further detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a visual presentation method for space simulation shooting according to an embodiment of the present invention. As shown in fig. 1, the method includes:
Step one: acquiring image data of a shooting environment in a real scene, collecting information from the image data, and generating environment information of the real scene based on the collected information, the environment information comprising prop information and construction information of a scene; then extracting a plurality of key points from the environment information, generating point cloud information based on the extracted key points, and storing it. Both the environment information and the prop information are data obtained after image scanning and image conversion of objects in the real scene by a laser scanner or a camera.
In another embodiment, data of the shooting environment in the real world are acquired: information is collected from the real-world shooting environment, environment data are generated from the information, and point cloud data are then generated from the environment data and stored. The environment data are obtained by scanning and converting objects in the real world with a laser scanner or a camera, and comprise: prop data and construction data of a scene.
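A narrower sketch of this embodiment's scan-to-point-cloud path follows, assuming the scanner export can be loaded as an N x 3 text file and that the Open3D library is available; the file names are hypothetical.

```python
import numpy as np
import open3d as o3d

# Hypothetical scanner export: one "x y z" sample per line.
scan_points = np.loadtxt("laser_scan.xyz")

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(scan_points)

# Voxel down-sampling approximates "extracting a plurality of key points"
# from the environment data before the point cloud is stored.
keypoints = pcd.voxel_down_sample(voxel_size=0.05)
o3d.io.write_point_cloud("scene_point_cloud.ply", keypoints)
```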
Step two: processing the point cloud information, acquiring a plurality of key points of the point cloud information, and constructing stereoscopic graphics in the virtual shooting scene according to those key points; determining image data of a planar scene based on the environment information of the real scene, the planar scene being the planar image obtained after photographing a scene in the real scene. The planar scene image data are obtained, the geometric data of the planar scene image are extracted to obtain the geometric data of each two-dimensional element in the image, and the image within each two-dimensional element is preprocessed.
In another embodiment, the point cloud data are processed and a virtual shooting scene is constructed from them; planar scene image data are acquired, the geometric data of the planar scene image are extracted to obtain the geometric data of each two-dimensional element in the image, and the image within each two-dimensional element is preprocessed.
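The extraction of geometric data for each two-dimensional element can be sketched with OpenCV, treating each detected contour as one element; the image file and the preprocessing choices (Gaussian blur, Otsu threshold) are assumptions made for illustration.

```python
import cv2

img = cv2.imread("plane_scene.png")                  # hypothetical plane-scene image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)             # image preprocessing
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
elements = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)                 # geometric data of the element
    m = cv2.moments(c)
    if m["m00"] > 0:
        elements.append({"bbox": (x, y, w, h),
                         "centroid": (m["m10"] / m["m00"], m["m01"] / m["m00"])})
```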
Step three: adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: the prop placement position and light irradiation angle data of the real scene are obtained, and the computer adjusts the corresponding data in the virtual scene until they match those of the real scene, so that a virtual studio is constructed in the virtual scene; the stereoscopic graphics of the stereoscopic scene corresponding to each planar image are determined from the image data of the planar scene, each independent prop in a planar scene having corresponding stereoscopic graphics in the stereoscopic scene. The virtual studio comprises a plurality of virtual cameras for image acquisition, a tracking module for position tracking, and a computer graphics rendering module, which draws the three-dimensional model corresponding to each two-dimensional element at the coordinates given by that element's geometry and generates the three-dimensional virtual scene of the virtual shooting scene.
In another embodiment, the virtual shooting scene is adjusted according to the planar scene image data, a virtual studio is built in the virtual shooting scene, and the stereoscopic data corresponding to the planar data are determined from the planar scene image data; the three-dimensional model corresponding to each two-dimensional element is drawn at the coordinates given by its geometry, generating the three-dimensional virtual scene of the virtual shooting scene. The virtual studio comprises: a plurality of virtual cameras, a tracking module and a computer graphics rendering module.
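Continuing the OpenCV sketch above, each two-dimensional element can then be lifted to a prop position in the virtual scene and paired with the measured light angle. The pixel-to-scene scale, the fixed ground plane and the Prop type are illustrative assumptions, not the disclosed matching procedure.

```python
from dataclasses import dataclass

@dataclass
class Prop:
    name: str
    position: tuple        # (x, y, z) in virtual-scene units
    light_angle: float     # degrees, matched to the real-scene light angle

def place_props(elements, real_light_angle=45.0, px_per_unit=100.0):
    props = []
    for i, e in enumerate(elements):
        cx, cy = e["centroid"]
        # Map image pixels onto the floor plane; in a full system the height
        # would come from the reconstructed stereoscopic graphics.
        props.append(Prop(name=f"prop_{i}",
                          position=(cx / px_per_unit, 0.0, cy / px_per_unit),
                          light_angle=real_light_angle))
    return props

props = place_props([{"centroid": (320.0, 240.0)}])   # demo element
```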
Step four: acquiring shooting scene data of the real scene, simulating the shooting scene data through the virtual scene creation module, adjusting the virtual shooting scene in real time according to the position data, angle data and ray data of the virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene; the prop placement positions and light irradiation angles in the virtual shooting scene are adjusted in real time according to the position, angle and ray data of the virtual camera so as to obtain the adjusted virtual shooting scene. The virtual scene creation module comprises: a modeling module for creating models, a rendering module for performing graphics rendering, an animation module for generating animation data, and a special-effects module for generating image special effects.
In another embodiment, shooting scene data of the real world are obtained and simulated through the virtual scene creation module, and the virtual shooting scene is displayed; the virtual shooting scene is adjusted in real time according to the position data, angle data and ray data of the virtual camera. The virtual scene creation module comprises the modeling, rendering, animation and special-effects modules, and can be used together with other space virtual shooting equipment and systems to achieve more complex and refined space virtual shooting.
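The real-time adjustment from the virtual camera's position and angle data reduces to re-posing the scene each frame. A standard look-at construction (gluLookAt-style) is sketched below; the use of NumPy and the fixed up vector are assumptions.

```python
import numpy as np

def look_at(cam_pos, target, up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation built from the camera position and aim point."""
    f = target - cam_pos
    f = f / np.linalg.norm(f)                      # forward direction
    s = np.cross(f, up); s /= np.linalg.norm(s)    # side (right) direction
    u = np.cross(s, f)                             # true up direction
    return np.stack([s, u, -f])                    # 3x3 rotation matrix

def adjust_scene_for_frame(points, cam_pos, cam_target):
    """Transform scene points into the camera frame for this frame's render."""
    R = look_at(cam_pos, cam_target)
    return (points - cam_pos) @ R.T

view = adjust_scene_for_frame(np.random.rand(10, 3),
                              cam_pos=np.array([0.0, 1.5, -5.0]),
                              cam_target=np.zeros(3))
```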
Step five: obtaining user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching the corresponding fill-light data and focusing data through the user operation data, and transmitting the image shot by the virtual camera to the user terminal for presentation. The response to a user operation is driven by the user operation data, which comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
In another embodiment, user operation data are obtained, the virtual space is shot in simulation through the user operation data, the position of the virtual camera is adjusted, and the corresponding fill-light data and focusing data are matched through the user operation data; the response to a user operation is driven by the user operation data, which comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
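The user operation data enumerated here map naturally onto a small record that the virtual camera applies before each simulated shot. The field names, the depth-of-field heuristic and the fill-light rule below are assumptions for illustration only, not the disclosed matching procedure.

```python
from dataclasses import dataclass

@dataclass
class UserOperation:
    preset: str            # virtual camera preset data
    focal_mm: float        # camera focal length data
    aperture_f: float      # camera aperture data (f-number)
    focus_m: float         # camera focusing data (focus distance)

def apply_user_operation(op: UserOperation) -> dict:
    # Rough heuristic: depth of field grows with f-number and focus distance
    # and shrinks with focal length (an assumption, not the disclosed rule).
    dof_hint = op.aperture_f * op.focus_m / max(op.focal_mm, 1e-3)
    fill_light = op.aperture_f / 2.8   # more fill light as the aperture closes
    return {"preset": op.preset, "dof_hint": dof_hint, "fill_light": fill_light}

shot_settings = apply_user_operation(UserOperation("studio_wide", 35.0, 2.8, 3.0))
```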
In an alternative embodiment, a visual presentation method of space simulation shooting is provided, the method comprising:
Step one: acquiring data of the shooting environment in the real world, collecting information from the shooting environment, generating environment data from the information, then generating point cloud data from the environment data and storing them; the environment data are obtained by scanning and converting objects in the real world with a laser scanner or a camera, and comprise: prop data and construction data of a scene.
Step two: processing the point cloud data, constructing a virtual shooting scene from the point cloud data, acquiring planar scene image data, extracting the geometric data of the planar scene image to obtain the geometric data of each two-dimensional element in the image, and preprocessing the image within each two-dimensional element.
Step three: adjusting the virtual shooting scene according to the planar scene image data, constructing a virtual studio in the virtual shooting scene, and determining the three-dimensional data corresponding to the planar data from the planar scene image data; drawing the three-dimensional model corresponding to each two-dimensional element at the coordinates given by its geometry and generating the three-dimensional virtual scene of the virtual shooting scene, the virtual studio comprising: a plurality of virtual cameras, a tracking module and a computer graphics rendering module.
Step four: acquiring shooting scene data of the real world, simulating them through the virtual scene creation module, and displaying the virtual shooting scene, which is adjusted in real time according to the position data, angle data and ray data of the virtual camera; the virtual scene creation module comprises the modeling, rendering, animation and special-effects modules, and can be used together with other space virtual shooting equipment and systems to achieve more complex and refined space virtual shooting.
Step five: acquiring user operation data, shooting the virtual space in simulation through the user operation data, adjusting the position of the virtual camera, and matching the corresponding fill-light data and focusing data through the user operation data; the response to a user operation is driven by the user operation data, which comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
Fig. 2 is a schematic structural diagram of a visual presentation system for space-simulation shooting according to an embodiment of the present invention. The system comprises: an acquisition module 201, a processing module 202, a construction module 203, an adjustment module 204, and a presentation module 205.
The acquisition module 201 is used for acquiring image data of a shooting environment in a real scene, collecting information from the image data, and generating environment information of the real scene based on the collected information, the environment information comprising prop information and construction information of a scene; it then extracts a plurality of key points from the environment information, generates point cloud information based on the extracted key points, and stores it. Both the environment information and the prop information are data obtained after image scanning and image conversion of objects in the real scene by a laser scanner or a camera.
In another embodiment, data of the shooting environment in the real world are acquired: information is collected from the real-world shooting environment, environment data are generated from the information, and point cloud data are then generated from the environment data and stored. The environment data are obtained by scanning and converting objects in the real world with a laser scanner or a camera, and comprise: prop data and construction data of a scene.
The processing module 202 is used for processing the point cloud information, acquiring a plurality of key points of the point cloud information, and constructing stereoscopic graphics in the virtual shooting scene according to those key points; it determines image data of a planar scene based on the environment information of the real scene, the planar scene being the planar image obtained after photographing a scene in the real scene. The planar scene image data are obtained, the geometric data of the planar scene image are extracted to obtain the geometric data of each two-dimensional element in the image, and the image within each two-dimensional element is preprocessed.
In another embodiment, the point cloud data are processed and a virtual shooting scene is constructed from them; planar scene image data are acquired, the geometric data of the planar scene image are extracted to obtain the geometric data of each two-dimensional element in the image, and the image within each two-dimensional element is preprocessed.
The construction module 203 is used for adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: the prop placement position and light irradiation angle data of the real scene are obtained, and the module adjusts the corresponding data in the virtual scene until they match those of the real scene, so that a virtual studio is constructed in the virtual scene; the stereoscopic graphics of the stereoscopic scene corresponding to each planar image are determined from the image data of the planar scene, each independent prop in a planar scene having corresponding stereoscopic graphics in the stereoscopic scene. The virtual studio comprises a plurality of virtual cameras for image acquisition, a tracking module for position tracking, and a computer graphics rendering module, which draws the three-dimensional model corresponding to each two-dimensional element at the coordinates given by that element's geometry and generates the three-dimensional virtual scene of the virtual shooting scene.
In another embodiment, the virtual shooting scene is adjusted according to the planar scene image data, a virtual studio is built in the virtual shooting scene, and the stereoscopic data corresponding to the planar data are determined from the planar scene image data; the three-dimensional model corresponding to each two-dimensional element is drawn at the coordinates given by its geometry, generating the three-dimensional virtual scene of the virtual shooting scene. The virtual studio comprises: a plurality of virtual cameras, a tracking module and a computer graphics rendering module.
The adjustment module 204 is used for acquiring shooting scene data of the real scene, simulating the shooting scene data through the virtual scene creation module, adjusting the virtual shooting scene in real time according to the position data, angle data and light data of the virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene; the prop placement positions and light irradiation angles in the virtual shooting scene are adjusted in real time according to the position, angle and light data of the virtual camera so as to obtain the adjusted virtual shooting scene. The virtual scene creation module comprises: a modeling module for creating models, a rendering module for performing graphics rendering, an animation module for generating animation data, and a special-effects module for generating image special effects.
In another embodiment, shooting scene data of the real world are obtained and simulated through the virtual scene creation module, and the virtual shooting scene is displayed; the virtual shooting scene is adjusted in real time according to the position data, angle data and ray data of the virtual camera. The virtual scene creation module comprises the modeling, rendering, animation and special-effects modules, and can be used together with other space virtual shooting equipment and systems to achieve more complex and refined space virtual shooting.
The presentation module 205 is used for obtaining user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching the corresponding fill-light data and focusing data through the user operation data, and then transmitting the image shot by the virtual camera to the user terminal for presentation; the response to a user operation is driven by the user operation data, which comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
In another embodiment, user operation data are obtained, the virtual space is shot in simulation through the user operation data, the position of the virtual camera is adjusted, and the corresponding fill-light data and focusing data are matched through the user operation data; the response to a user operation is driven by the user operation data, which comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
In an alternative embodiment, the system includes: the system comprises an information acquisition module, a data transmission module, a data processing module and a shooting simulation module.
The information acquisition module is used for acquiring shooting environment information in the real world; it comprises a plurality of camera information transmission modules and laser scanner information transmission modules, the camera photographing the real environment and the laser scanner scanning the real environment to acquire information.
The data transmission module is used for transmitting the acquired data to the computer and generating the point cloud data: the environment data collected from the real camera and laser scanner are gathered and transmitted to a database in the computer, where the point cloud data are generated.
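As a sketch of this transmission path, the environment samples gathered from the camera and laser scanner could be written into a database on the computer from which the point cloud data are generated; the SQLite schema and file names below are assumptions for illustration.

```python
import json
import sqlite3
import numpy as np

def transmit(env_samples: np.ndarray, db_path: str = "studio.db") -> None:
    """Store scanner/camera samples in the computer's database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS point_cloud "
                 "(id INTEGER PRIMARY KEY, xyz TEXT)")
    for p in env_samples:
        conn.execute("INSERT INTO point_cloud (xyz) VALUES (?)",
                     (json.dumps(p.tolist()),))
    conn.commit()
    conn.close()

transmit(np.random.rand(100, 3))   # stand-in for real scanner/camera samples
```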
The data processing module is used for processing the point cloud data and debugging parameters to form the virtual studio: it processes the point cloud data of the real environment and reproduces and constructs the virtual studio in the virtual scene creation module; through the computer's virtual scene creation module, the point cloud data are modeled, rendered, animated and given special effects. The data processing module and the shooting simulation module cooperate with and share data with each other.
The shooting simulation module is used for acquiring user operation data, shooting the virtual space in simulation through the user operation data, adjusting the position of the virtual camera, and matching the corresponding fill-light data and focusing data through the user operation data. The computer displays the virtual shooting scene from the real client's viewing angle so as to simulate real-world shooting; according to the user's operations on the real computer, the virtual scene creation module is controlled to adjust and move the virtual shooting scene; different shooting angles are achieved by controlling the virtual camera within the virtual scene creation module, and the corresponding adjustments are shown on the real display screen through the virtual scene creation module.
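The simulation module's camera control can be sketched as a loop that maps user operations to camera moves and would re-render after each one; the operation names and step sizes are invented for illustration.

```python
import numpy as np

cam_pos = np.array([0.0, 1.5, -5.0])
MOVES = {"dolly_in":    np.array([0.0, 0.0, 0.5]),
         "dolly_out":   np.array([0.0, 0.0, -0.5]),
         "truck_left":  np.array([-0.5, 0.0, 0.0]),
         "truck_right": np.array([0.5, 0.0, 0.0])}

def handle_user_op(op: str) -> np.ndarray:
    """Move the virtual camera; a full system would re-render the virtual
    shooting scene from the new position and push it to the display."""
    global cam_pos
    cam_pos = cam_pos + MOVES.get(op, 0.0)
    return cam_pos

for op in ["dolly_in", "truck_left"]:      # simulated user input stream
    print(op, handle_user_op(op))
```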
In an alternative embodiment, a visual presentation system for space simulation shooting comprises:
the information acquisition module is used for acquiring shooting environment information in the real world;
the data transmission module is used for transmitting the acquired point cloud data to a computer;
the data processing module is used for processing and parameter debugging on the point cloud data to form a virtual studio;
and the shooting simulation module is used for operating the virtual camera to perform simulation shooting in the virtual studio.
The information acquisition module comprises a plurality of camera information transmission modules and a laser scanner information transmission module; the camera photographs the real environment, the laser scanner scans the real environment to acquire information, and the acquired information is then converted into environment data.
The data transmission module is used for transmitting the environment data to the computer and generating point cloud data.
The data processing module is used for processing the point cloud data of the real environment, and reproducing and constructing the virtual studio in the virtual scene creation module.
The shooting simulation module is used for acquiring user operation data, simulating shooting of the virtual space through the user operation data, adjusting the position of the virtual camera, and matching corresponding light supplementing data and focusing data through the user operation data.
The basic principles of the present disclosure have been described above in connection with specific embodiments. However, the advantages, benefits and effects mentioned in the present disclosure are merely examples, not limitations, and should not be considered necessarily possessed by every embodiment of the disclosure. The specific details disclosed above serve only for illustration and ease of understanding; the disclosure is not limited to practice with these specific details.
In this specification, the embodiments are described in a progressive manner, each emphasizing its differences from the others, so that the same or similar parts of the embodiments may be cross-referenced. Since the system embodiments essentially correspond to the method embodiments, their description is relatively brief; for the relevant points, refer to the description of the method embodiments.
The block diagrams of devices, apparatuses and systems referred to in this disclosure are merely illustrative examples and do not require or imply that connections, arrangements or configurations must be made as shown; as those skilled in the art will appreciate, these devices, apparatuses and systems may be connected, arranged or configured in any manner. Words such as "including", "comprising" and "having" are open-ended and mean "including but not limited to", with which they are used interchangeably. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatus, devices and methods of the present disclosure, components or steps may be decomposed and/or recombined; such decomposition and/or recombination should be regarded as equivalent to the present disclosure. The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A visual presentation method for space simulation shooting, which is characterized by comprising the following steps:
acquiring image data of a shooting environment in a real scene, collecting information from the image data of the shooting environment, and generating environment information of the real scene based on the collected information, wherein the environment information comprises: prop information and construction information of a scene; and then extracting a plurality of key points from the environment information, so that point cloud information is generated and stored based on the extracted key points;
processing the point cloud information, obtaining a plurality of key points of the point cloud information, and constructing stereoscopic graphics in a virtual shooting scene according to the plurality of key points; determining image data of a planar scene based on the environment information of the real scene, wherein the planar scene is a planar image obtained after photographing a scene in the real scene;
adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: obtaining prop placement position and light irradiation angle data of the real scene, the computer adjusting the prop placement position and light irradiation angle data in the virtual scene based on those of the real scene until they match, and constructing a virtual studio in the virtual scene; and determining, according to the image data of the planar scene, the stereoscopic graphics of the stereoscopic scene corresponding to each planar image, wherein each independent prop in a planar scene has corresponding stereoscopic graphics in the stereoscopic scene;
acquiring shooting scene data of the real scene, simulating the shooting scene data through a virtual scene creation module, adjusting the virtual shooting scene in real time according to position data, angle data and ray data of a virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene, wherein the prop placement positions and light irradiation angles in the virtual shooting scene are adjusted in real time according to the position data, angle data and ray data of the virtual camera so as to obtain an adjusted virtual shooting scene;
obtaining user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching corresponding fill-light data and focusing data through the user operation data, and transmitting the image shot by the virtual camera to a user terminal for presentation, wherein the user operation data comprise: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
2. The visual presentation method according to claim 1, wherein the environmental information is data obtained by image scanning and image conversion of an object in a real scene by a laser scanner or a camera.
3. The visual presentation method of claim 1, wherein the virtual studio comprises: a plurality of virtual cameras for image acquisition, a tracking module for position tracking, and a computer graphics rendering module for graphics rendering processing.
4. The visual presentation method of claim 1, wherein the virtual scene creation module comprises: a modeling module for creating models, a rendering module for performing graphics rendering, an animation module for generating animation data, and a special-effects module for generating image special effects.
5. The visual presentation method according to claim 1, wherein the prop information is data obtained after image scanning and image conversion of an object in a real scene by a laser scanner or a camera.
6. A visual presentation system for space simulation shooting, comprising:
an acquisition module, used for acquiring image data of a shooting environment in a real scene, collecting information from the image data, and generating environment information of the real scene based on the collected information, the environment information comprising prop information and construction information of a scene, and then extracting a plurality of key points from the environment information, so that point cloud information is generated and stored based on the extracted key points;
a processing module, used for processing the point cloud information, acquiring a plurality of key points of the point cloud information, and constructing stereoscopic graphics in the virtual shooting scene according to those key points, and for determining image data of a planar scene based on the environment information of the real scene, the planar scene being the planar image obtained after photographing a scene in the real scene;
a construction module, used for adjusting the prop placement positions and light irradiation angles in the virtual scene according to the image data of the planar scene of the real scene: the prop placement position and light irradiation angle data of the real scene are obtained, the computer adjusts the corresponding data in the virtual scene until they match those of the real scene, and a virtual studio is constructed in the virtual scene; the stereoscopic graphics of the stereoscopic scene corresponding to each planar image are determined from the image data of the planar scene, each independent prop in a planar scene having corresponding stereoscopic graphics in the stereoscopic scene;
an adjustment module, used for acquiring shooting scene data of the real scene, simulating the shooting scene data through the virtual scene creation module, adjusting the virtual shooting scene in real time according to the position data, angle data and light data of the virtual camera, constructing a virtual scene for virtual shooting according to the processed point cloud information, and displaying the virtual shooting scene, the prop placement positions and light irradiation angles in the virtual shooting scene being adjusted in real time according to the position, angle and light data of the virtual camera so as to obtain the adjusted virtual shooting scene;
a presentation module, used for acquiring user operation data, performing simulated shooting, through the user operation data, in a virtual space taking the virtual shooting scene as background, dynamically adjusting the position of the virtual camera based on the user operation data, matching the corresponding fill-light data and focusing data through the user operation data, and then transmitting the image shot by the virtual camera to the user terminal for presentation, the user operation data comprising: virtual camera preset data, camera focal length data, camera aperture data and camera focusing data.
7. The visual presentation system of claim 6, wherein the information acquisition module comprises a plurality of camera information transmission modules and a laser scanner information transmission module, the camera capturing a real environment, the laser scanner scanning the real environment to acquire information, and subsequently converting the acquired information into environmental data.
8. The visual presentation system of claim 6, wherein the data transmission module is configured to transmit the environmental data to a computer and generate the point cloud data.
9. An electronic device, the electronic device comprising: a memory and a processor, the memory and the processor coupled; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the visual presentation method of any one of claims 1 to 5.
10. A computer readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform the visual presentation method of any one of claims 1 to 5.
CN202311465914.0A 2023-11-06 2023-11-06 Visual presentation method and system for space simulation shooting Pending CN117527994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311465914.0A CN117527994A (en) 2023-11-06 2023-11-06 Visual presentation method and system for space simulation shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311465914.0A CN117527994A (en) 2023-11-06 2023-11-06 Visual presentation method and system for space simulation shooting

Publications (1)

Publication Number Publication Date
CN117527994A true CN117527994A (en) 2024-02-06

Family

ID=89744923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311465914.0A Pending CN117527994A (en) 2023-11-06 2023-11-06 Visual presentation method and system for space simulation shooting

Country Status (1)

Country Link
CN (1) CN117527994A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130014987A * 2011-08-01 2013-02-12 한밭대학교 산학협력단 Three dimension composition apparatus for realizing camera oriented effect of object in three dimension imagination space and three dimension composition method using thereof
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
CN113747138A (en) * 2021-07-30 2021-12-03 杭州群核信息技术有限公司 Video generation method and device for virtual scene, storage medium and electronic equipment
CN115661236A (en) * 2022-10-26 2023-01-31 长沙神漫文化科技有限公司 Real space and virtual space camera real-time positioning method and related equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Gesan; ZHAO Jianjun; GUO Yunhui: "Research, Design and Implementation of a Real-time Interactive Previsualization Platform for Motion Control Systems in Virtual Film Production" (电影虚拟化制作中运动控制系统实时交互预演平台的研究与设计实现), Modern Film Technology (现代电影技术), no. 4, 11 April 2017 (2017-04-11) *
SHAO Dan; DENG Yu; SONG Zhen: "Application of Real-time Virtual Previsualization in Film Art Creation" (实时虚拟预演在电影艺术创作中的应用), Film Art (电影艺术), no. 3, 5 May 2017 (2017-05-05), pages 1-6 *

Similar Documents

Publication Publication Date Title
US10154246B2 (en) Systems and methods for 3D capturing of objects and motion sequences using multiple range and RGB cameras
US11425283B1 (en) Blending real and virtual focus in a virtual display environment
US11354774B2 (en) Facial model mapping with a neural network trained on varying levels of detail of facial scans
US11055900B1 (en) Computer-generated image processing including volumetric scene reconstruction to replace a designated region
EP4176412A1 (en) Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
US11689815B2 (en) Image modification of motion captured scene for reconstruction of obscured views using uncoordinated cameras
CN109788270B (en) 3D-360-degree panoramic image generation method and device
CN113781660A (en) Method and device for rendering and processing virtual scene on line in live broadcast room
CN111223190A (en) Processing method for collecting VR image in real scene
CN117527993A (en) Device and method for performing virtual shooting in controllable space
CN117527994A (en) Visual presentation method and system for space simulation shooting
KR102688669B1 (en) Method for gaining 3d mesh model sequence from multi-view video sequence based on mixed reality
US11153480B2 (en) Plate reconstruction of obscured views of captured imagery using arbitrary captured inputs
Gao et al. Aesthetics Driven Autonomous Time-Lapse Photography Generation by Virtual and Real Robots
CN118587362A (en) Digital human image generation method and system based on virtual concert hall
CN115866354A (en) Interactive virtual reality-based non-material heritage iconic deduction method and device
CN117173291A (en) Animation method, device, apparatus, storage medium, and computer program product
WO2022071810A1 (en) Method for operating a character rig in an image-generation system using constraints on reference nodes
CN117278800A (en) Video content replacement method and device, electronic equipment and storage medium
CN114723873A (en) End-to-end 3D scene reconstruction and image projection
CN117830545A (en) Fully mechanized mining face simulation video generation method and device and electronic equipment
CN114007017A (en) Video generation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination