CN115661410A - Scene simulation method and device, storage medium and electronic equipment - Google Patents

Scene simulation method and device, storage medium and electronic equipment

Info

Publication number
CN115661410A
Authority
CN
China
Prior art keywords
virtual
venue
model
virtual camera
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211328099.9A
Other languages
Chinese (zh)
Inventor
王毅
许若薇
赵冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202211328099.9A priority Critical patent/CN115661410A/en
Publication of CN115661410A publication Critical patent/CN115661410A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a scene simulation method, an apparatus, a storage medium, and an electronic device, relating to the field of computer technology. The scene simulation method includes: creating a virtual venue simple model; placing a virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera; presenting a scene picture shot by the virtual camera in response to a viewing-angle viewing operation for the virtual camera; and adjusting the position of the virtual camera in the virtual venue simple model in response to an adjustment operation, based on the scene picture, of the position of the virtual camera in the virtual venue simple model. By determining camera positions through simulated shooting, the present disclosure can reduce, to a certain extent, the deviation between the design and the implemented effect, thereby lowering the difficulty of on-site debugging and saving time and labor costs.

Description

Scene simulation method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a scene simulation method, a scene simulation apparatus, a computer-readable storage medium, and an electronic device.
Background
The technique of combining real scenes with AR (Augmented Reality) visual effects is widely applied in large-scale broadcast programs, concerts, various offline events, and other projects. Such projects require accurate measurement data to support every step, from the early design of the AR visual effect to the later on-site implementation of the stage set, so that the AR visual effect fits the real scene well.
In the related art, to make the AR visual effect fit the real scene better, the size and position of stage-set consumables are roughly estimated by referring to a CAD (Computer Aided Design) plan, and the AR visual effect cannot be previewed. To guard against camera position mismatches, extra spare materials need to be manufactured for adjusting deviations of the effect during on-site implementation, so the difficulty, time, and cost of on-site construction are all high.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a scene simulation method, a scene simulation apparatus, a computer-readable storage medium, and an electronic device, thereby solving, at least to a certain extent, the problems of high field debugging difficulty and high cost in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a scene simulation method, wherein the method includes: creating a virtual venue simple model; in response to a first configuration operation for a virtual camera, placing the virtual camera within the virtual venue simple model; in response to a viewing-angle viewing operation for the virtual camera, presenting a scene picture shot by the virtual camera; and in response to an adjustment operation, based on the scene picture, of the position of the virtual camera in the virtual venue simple model, adjusting the position of the virtual camera in the virtual venue simple model.
In an exemplary embodiment of the present disclosure, the creating of the virtual venue simple model includes: creating an initial virtual venue simple model; and in response to a size adjustment operation for the initial virtual venue simple model, adjusting the size of the initial virtual venue simple model to obtain the virtual venue simple model.
In an exemplary embodiment of the present disclosure, a virtual stage simple model is included within the initial virtual venue simple model, and the method further comprises: in response to an adjustment operation for the position and/or size of the virtual stage simple model, adjusting the position and/or size of the virtual stage simple model in the virtual venue simple model.
In an exemplary embodiment of the present disclosure, the method further comprises: in response to a second configuration operation for one or more virtual placeholders, placing the virtual placeholders within the virtual venue simple model.
In an exemplary embodiment of the present disclosure, the placing the virtual placeholders within the virtual venue simple model in response to a second configuration operation for one or more virtual placeholders comprises: in response to a drag operation on a virtual placeholder in a virtual parts bar, moving the virtual placeholder into the virtual venue simple model; and in response to an adjustment operation for the position and/or size of the virtual placeholder, adjusting the position and/or size of the virtual placeholder in the virtual venue simple model.
In an exemplary embodiment of the present disclosure, the placing the virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera comprises: in response to a drag operation for a first virtual camera and/or a second virtual camera in a virtual parts bar, adding the first virtual camera and/or the second virtual camera into the virtual venue simple model, the first virtual camera being used for fixed-position shooting and the second virtual camera for tracking shooting; and in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera, adjusting the shooting parameters of the first virtual camera and/or the second virtual camera.
In an exemplary embodiment of the present disclosure, the method further comprises: in response to a drag operation for a virtual tracking point in a virtual parts bar, adding the virtual tracking point into the virtual venue simple model, the virtual tracking point being movable within the virtual venue simple model, and the second virtual camera performing a tracking shot of the virtual tracking point.
In an exemplary embodiment of the present disclosure, a first virtual camera and a second virtual camera are placed in the virtual venue simple model, the first virtual camera being used for fixed-position shooting and the second virtual camera for tracking shooting, and the presenting, in response to a viewing-angle viewing operation for the virtual cameras, of a scene picture shot by the virtual cameras includes: presenting a scene picture under a target switched viewing angle in response to a switching operation among the viewing angle of the first virtual camera, the viewing angle of the second virtual camera, and a free viewing angle.
In an exemplary embodiment of the present disclosure, the adjusting the position of the virtual camera in the virtual venue simple model in response to the adjusting operation of the position of the virtual camera in the virtual venue simple model based on the scene picture includes: adjusting a position of the first virtual camera in the virtual venue simple model in response to an adjustment operation of the position of the first virtual camera in the virtual venue simple model based on the scene picture; and/or adjusting the position of the second virtual camera in the virtual venue simple model in response to an adjustment operation of the position of the second virtual camera in the virtual venue simple model based on the scene picture.
In an exemplary embodiment of the present disclosure, the method further comprises: in response to an import operation for a virtual design model, acquiring a target virtual design model; and in response to a rendering effect derivation operation for the target virtual design model, rendering the virtual venue simple model with the target virtual design model to obtain a virtual venue rendering model, so that the virtual camera shoots the virtual venue rendering model.
According to a second aspect of the present disclosure, there is provided a scene simulation apparatus, wherein the apparatus comprises: a simple model creation module, configured to create a virtual venue simple model; a camera configuration module, configured to place a virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera; a shot picture presentation module, configured to present a scene picture shot by the virtual camera in response to a viewing-angle viewing operation for the virtual camera; and a camera position adjustment module, configured to adjust the position of the virtual camera in the virtual venue simple model in response to an adjustment operation, based on the scene picture, of the position of the virtual camera in the virtual venue simple model.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described scene simulation method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described scene simulation method via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
in the scene simulation process, a virtual venue simple model is created; a virtual camera is placed within the virtual venue simple model in response to a first configuration operation for the virtual camera; a scene picture under the viewing angle of the virtual camera is presented in response to a viewing-angle viewing operation for the virtual camera; and the position of the virtual camera in the virtual venue simple model is adjusted in response to an adjustment operation, based on the scene picture, of the position of the virtual camera in the virtual venue simple model. In this process, the position of the virtual camera is adjusted through simulated shooting, which provides an intuitive, three-dimensional simulation effect and camera position data for reference, and supplies an accurate expected camera position for the AR effect design; the spare materials that would otherwise be manufactured because camera positions were not confirmed in advance can be saved, and the problem that part of the stage set can only be built after camera positions are confirmed on site is solved; the intended design can be restored, the difficulty of on-site construction is reduced, and the time and cost of on-site setup and debugging are saved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a flow chart of a method of scene simulation in the present exemplary embodiment;
fig. 2A is a diagram showing a configuration example of a virtual venue simple model in the present exemplary embodiment;
fig. 2B is a diagram showing a configuration example of a virtual stage simple model in the present exemplary embodiment;
FIG. 2C is a diagram illustrating an example of calling out a virtual parts bar in the exemplary embodiment;
FIG. 2D is a diagram showing an example of presentation of a virtual parts bar in the present exemplary embodiment;
FIG. 2E is a diagram illustrating an example of a position adjustment for a virtual placeholder in the exemplary embodiment;
FIG. 2F is a diagram illustrating an example of parameter configuration adjustment for a virtual placeholder in the present exemplary embodiment;
fig. 2G is a diagram showing a configuration example of a fixed camera position in the present exemplary embodiment;
FIG. 2H is a diagram illustrating a configuration example of a tracking camera position in the present exemplary embodiment;
FIG. 2I is a diagram illustrating an example of position adjustment of a virtual tracking point in the exemplary embodiment;
fig. 3 is a view showing an example of switching of the angle of view in the present exemplary embodiment;
fig. 4A is a diagram showing an example of position adjustment of a fixed rack in the present exemplary embodiment;
FIG. 4B illustrates an example of a position adjustment of a rocker arm in the present exemplary embodiment;
FIG. 5A is a diagram illustrating a rendering example of a virtual design model in the present exemplary embodiment;
FIG. 5B is a diagram showing a rendering example of another virtual design model in the present exemplary embodiment;
FIG. 6A is a diagram showing an example of calling out a model lead-in area in the present exemplary embodiment;
FIG. 6B illustrates an example diagram of a virtual design model import in the present exemplary embodiment;
FIG. 7 illustrates a flow diagram of a multi-pass scene simulation in the present exemplary embodiment;
fig. 8 is a block diagram showing the configuration of a scene simulation apparatus in the present exemplary embodiment;
fig. 9 illustrates an electronic device for implementing the above-described scene simulation method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a", "an", "the" and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second," etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, because the AR visual effect cannot be previewed, spare stage-set consumables are manufactured for adjusting deviations of the effect during on-site implementation, which consumes more material, increases the difficulty of on-site debugging, and makes time and labor costs hard to control.
In view of one or more of the above problems, exemplary embodiments of the present disclosure provide a scene simulation method, based on which the construction process of a scene including an AR effect and the multi-party collaboration process of an on-site scene project can be optimized.
An exemplary embodiment of the present disclosure provides a scene simulation method, as shown in fig. 1, which may specifically include the following steps S110 to S140:
step S110, creating a simple virtual venue model;
step S120, responding to a first configuration operation aiming at the virtual camera, and placing the virtual camera in the simple model of the virtual venue;
step S130, responding to the view angle viewing operation aiming at the virtual camera, and presenting scene pictures shot by the virtual camera;
and step S140, responding to the operation of adjusting the position of the virtual camera in the simple virtual venue model based on the scene picture, and adjusting the position of the virtual camera in the simple virtual venue model.
In the scene simulation process, the position of the virtual camera is adjusted through simulated shooting, which provides an intuitive, three-dimensional simulation effect and camera position data for reference, and supplies an accurate expected camera position for the AR effect design; the spare materials that would otherwise be manufactured because camera positions were not confirmed in advance can be saved, and the problem that part of the stage set can only be built after camera positions are confirmed on site is solved; the intended design can be restored, the difficulty of on-site construction is reduced, and the time and cost of on-site setup and debugging are saved.
Each step in fig. 1 will be specifically described below.
Step S110, creating a virtual venue simple model.
The virtual venue simple model is a simplified version of a virtual venue model and can be used to simulate a real-world site, such as a venue in which a stage is to be built. Because the virtual venue simple model has a simplified structure, it is relatively easy to produce, which can improve the efficiency of scene simulation to a certain extent.
It should be noted that the size of the virtual venue simple model is not fixed and can be adjusted. Because the model structure of the virtual venue simple model is simple, adjusting its size is straightforward and easy to implement.
In an optional embodiment, the creating of the virtual venue simple model may be implemented by the following steps: creating an initial virtual venue simple model; and in response to a size adjustment operation for the initial virtual venue simple model, adjusting the size of the initial virtual venue simple model to obtain the virtual venue simple model.
When creating the virtual venue simple model, an initial virtual venue simple model may be created in advance, and the size of the initial virtual venue simple model may be a default size.
In response to the size adjustment operation for the initial virtual venue simple model, the size of the initial virtual venue simple model is adjusted to obtain the virtual venue simple model. Specifically, a target size of the virtual venue simple model may be determined according to the user's size adjustment operation on the initial virtual venue simple model, and the size of the initial virtual venue simple model is adjusted to the target size, thereby obtaining a virtual venue simple model of the target size.
When adjusting the size of the virtual venue simple model or the initial virtual venue simple model, the adjustment may be performed with reference to a plan drawing of the real venue.
Through the above steps, the size of the virtual venue simple model can be adjusted flexibly, so that the virtual venue simple model can better approximate the real scene.
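The size-adjustment flow just described can be illustrated with a small Python sketch: an initial model is created with default dimensions, and the user's adjustment operation overrides only the dimensions taken from the real venue's plan drawing. The default values and function names are assumptions for illustration only.

```python
# Sketch of adjusting the initial virtual venue simple model to a target size.
# Default dimensions (in meters) are illustrative assumptions.
DEFAULT_SIZE = {"width": 40.0, "length": 60.0, "height": 15.0}

def create_initial_model() -> dict:
    """Create an initial venue simple model with default dimensions."""
    return dict(DEFAULT_SIZE)

def resize_model(model: dict, **target) -> dict:
    """Apply a size adjustment operation; unspecified dimensions keep defaults."""
    for key, value in target.items():
        if key not in model:
            raise KeyError(f"unknown dimension: {key}")
        if value <= 0:
            raise ValueError(f"{key} must be positive")
        model[key] = float(value)
    return model

# Dimensions measured from the real venue's plan drawing (illustrative values):
venue = resize_model(create_initial_model(), width=52.0, length=78.5)
```

Only the width and length are overridden here, so the height keeps its default, mirroring the idea that any subset of venue parameters can be adjusted.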
In an alternative embodiment, when a virtual stage simple model is included in the initial virtual venue simple model, the position and/or size of the virtual stage simple model in the virtual venue simple model can be adjusted in response to an adjustment operation for the position and/or size of the virtual stage simple model.
Illustratively, as shown by the virtual venue simple model 201 in fig. 2A, a venue data configuration floating window 202 may be popped up in response to a right-click operation by the user on the virtual venue simple model 201, so that the user may modify parameter information in the venue data configuration floating window 202, such as the venue width, venue length, venue height, and the clearance between the stage and the back of the venue; in response to the user right-clicking the virtual venue simple model 201 again, the venue data configuration floating window 202 may be closed and the user-modified parameter information saved, with the size of the virtual venue simple model 201 and the position of the virtual stage simple model 203 adjusted adaptively.
Illustratively, as shown by the virtual stage simple model 203 in fig. 2B, in response to a right-click operation by the user on the virtual stage simple model 203, a stage area data configuration floating window 204 may be popped up, so that the user may modify parameter information in the stage area data configuration floating window 204, such as the stage width, stage length, and stage height; in response to the user right-clicking the virtual stage simple model 203 again, the stage area data configuration floating window 204 may be closed and the user-modified parameter information saved, with the size of the virtual stage simple model 203 adjusted adaptively.
It should be noted that the position of the virtual stage simple model in the virtual venue simple model can be adjusted either through the stage-to-venue-back clearance parameter or by moving the virtual stage simple model within the virtual venue simple model.
Through the above steps, the size and position of the virtual stage simple model can be adjusted flexibly, so that the building of the stage can be simulated.
It should be noted that the virtual stage simple model needs to be located within the virtual venue simple model. The order in which the virtual stage simple model and the virtual venue simple model are adjusted can be determined by the user's operation object and is not specifically limited here.
In addition, to improve the realism of the scene simulation, a virtual placeholder may be placed in the virtual venue simple model to simulate a functional area in a real scene.
In an optional implementation, the virtual placeholders may be placed within the virtual venue simple model in response to a second configuration operation for one or more virtual placeholders.
The second configuration operation refers to a configuration operation for a virtual placeholder. A virtual placeholder is a part that occupies space in the virtual venue simple model and can be used to simulate a functional area in a real scene, further improving the realism of the simulation, for example, functional rooms existing in the venue that may affect the placement of rocker arms and fixed camera positions, such as a QC (Quality Control) room, an AV (Audio and Video) console area, and a player match room.
In an optional implementation, the placing of the virtual placeholders in the virtual venue simple model in response to the second configuration operation for the one or more virtual placeholders may be implemented as follows: in response to a drag operation on a virtual placeholder in the virtual parts bar, moving the virtual placeholder into the virtual venue simple model; and in response to an adjustment operation for the position and/or size of the virtual placeholder, adjusting the position and/or size of the virtual placeholder in the virtual venue simple model.
For example, as shown in fig. 2C, the virtual parts bar 205 in fig. 2D may be called out in response to the user clicking the function icon on the left of the virtual venue simple model; the virtual placeholder 206 may then be moved into the virtual venue simple model 201 in response to the user dragging the virtual placeholder 206 from the virtual parts bar 205.
When adjusting the position of the virtual placeholder 206, in response to a left-click operation by the user on the virtual placeholder 206 in the virtual venue simple model 201, coordinate axes are displayed on the virtual placeholder 206, as shown in fig. 2E, to prompt the user that the virtual placeholder 206 can now be moved; after the coordinate axes of the virtual placeholder 206 appear, the virtual placeholder 206 may be moved within the virtual venue simple model 201 in response to the user's movement operation on the virtual placeholder 206.
When adjusting the size of the virtual placeholder 206, in response to a right-click operation by the user on the virtual placeholder 206 in the virtual venue simple model 201, a function room parameter setting floating window 207 is popped up, as shown in fig. 2F, so that the user can modify parameter information in the function room parameter setting floating window 207, such as the placeholder width, length, and height; in response to the user right-clicking the virtual placeholder 206 in the virtual venue simple model 201 again, the function room parameter setting floating window 207 is closed, the user-modified parameter information is saved, and the size of the virtual placeholder 206 in the virtual venue simple model 201 is adjusted adaptively.
Through the above steps, the size and position of the virtual placeholder can be adjusted flexibly, so that function rooms in the real scene can be simulated and the realism of the simulation improved.
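One practical detail the disclosure leaves open is keeping a dragged placeholder inside the venue bounds. A common approach is to clamp the placeholder's position against the venue dimensions, as in the following Python sketch; all names and the clamping policy itself are assumptions for illustration.

```python
# Sketch: clamp a virtual placeholder's position so it stays inside the venue.
def clamp_placeholder(venue_size, placeholder_size, position):
    """Clamp the placeholder's min-corner position so the whole box stays
    within the venue bounds (origin at the venue's min corner).
    All arguments are (x, y, z) tuples in meters."""
    clamped = []
    for v, p, pos in zip(venue_size, placeholder_size, position):
        # Position may range from 0 up to venue extent minus placeholder extent.
        clamped.append(min(max(pos, 0.0), max(v - p, 0.0)))
    return tuple(clamped)

# Dragging a 4x3x3 m placeholder past the edge of a 40x60x15 m venue:
pos = clamp_placeholder((40.0, 60.0, 15.0), (4.0, 3.0, 3.0), (39.0, -2.0, 0.0))
# pos == (36.0, 0.0, 0.0)
```

The same check could apply equally to the virtual stage simple model, which the description also requires to lie within the venue.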
Step S120, in response to a first configuration operation for the virtual camera, placing the virtual camera within the virtual venue simple model.
The first configuration operation refers to a configuration operation for the virtual camera, which enables configuration of the virtual camera's position and shooting parameters. The virtual camera may be a virtual camera mounted on a fixed rack or a virtual camera mounted on a rocker arm.
In an alternative embodiment, the placing the virtual camera within the virtual venue simple model in response to the first configuration operation for the virtual camera includes: in response to a drag operation for a first virtual camera and/or a second virtual camera in the virtual parts bar, adding the first virtual camera and/or the second virtual camera into the virtual venue simple model, wherein the first virtual camera is used for fixed-position shooting and the second virtual camera is used for tracking shooting; and in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera, adjusting the shooting parameters of the first virtual camera and/or the second virtual camera.
The first virtual camera is used for shooting from a fixed camera position and may be a virtual camera mounted on a fixed rack. The second virtual camera is used for tracking shooting and may be a virtual camera mounted on a rocker arm.
The following provides an exemplary explanation of the process of configuring the positions and the shooting parameters of the first virtual camera and the second virtual camera.
Illustratively, as shown by the fixed rack 208 (on which a virtual camera is mounted) in the virtual parts bar 205 in fig. 2D, the fixed rack 208 may be moved into the virtual venue simple model 201 in response to a drag operation by the user on the fixed rack 208.
When adjusting the camera position and shooting parameters of the virtual camera mounted on the fixed rack 208, first, in response to a right-click operation by the user on the fixed rack 208 in the virtual venue simple model 201, a fixed-position parameter setting floating window 209 may be popped up, as shown in fig. 2G, so that the user may modify parameter information in the fixed-position parameter setting floating window 209, for example: the height above ground, lens focal length, sensor width, sensor height, minimum focal length, and maximum focal length; then, in response to the user right-clicking the fixed rack 208 in the virtual venue simple model 201 again, the fixed-position parameter setting floating window 209 may be closed and the user-modified parameter information saved.
Illustratively, as shown by the rocker arm 210 (on which a virtual camera is mounted) in the virtual parts bar 205 in fig. 2D, the rocker arm 210 may be moved into the virtual venue simple model 201 in response to a drag operation by the user on the rocker arm 210.
When adjusting the camera position and shooting parameters of the virtual camera on the rocker arm 210, a rocker arm parameter setting floating window 211 may be popped up in response to a right-click operation by the user on the rocker arm 210 in the virtual venue simple model 201, as shown in fig. 2H, so that the user may modify parameter information in the rocker arm parameter setting floating window 211, for example: the arm length of the rocker arm, lens focal length, sensor width, sensor height, minimum focal length, and maximum focal length; the rocker arm parameter setting floating window 211 may be closed and the user-modified parameter information saved in response to the user right-clicking the rocker arm 210 in the virtual venue simple model 201 again.
In the above steps, flexible adjustment of the configuration parameters of the virtual camera is realized, so that real-scene shooting is simulated more realistically.
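As an illustration of the configuration parameters listed above, the following Python sketch models a virtual camera configuration; the class name, field names, and the pinhole field-of-view relation are assumptions added for illustration and are not part of the original disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualCameraParams:
    ground_clearance_m: float    # ground clearance of the camera on the rack
    focal_length_mm: float       # current lens focal length
    sensor_width_mm: float       # photoreceptor width
    sensor_height_mm: float      # photoreceptor height
    min_focal_length_mm: float   # minimum focal length of the zoom range
    max_focal_length_mm: float   # maximum focal length of the zoom range

    def set_focal_length(self, value_mm: float) -> None:
        # Clamp the requested focal length to the configured zoom range.
        self.focal_length_mm = min(self.max_focal_length_mm,
                                   max(self.min_focal_length_mm, value_mm))

    def horizontal_fov_deg(self) -> float:
        # Pinhole relation: fov = 2 * atan(sensor_width / (2 * focal_length)).
        return math.degrees(
            2 * math.atan(self.sensor_width_mm / (2 * self.focal_length_mm)))
```

Clamping to the minimum and maximum focal length mirrors the zoom range a real lens would enforce, so a simulated zoom never leaves the physically possible interval.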
The second virtual camera can be controlled to move by swinging the rocker arm up, down, left and right, so that the second virtual camera can perform tracking shooting; therefore, a virtual tracking point can be set in the virtual venue simple model as the object to be tracked and shot by the second virtual camera.
In an alternative embodiment, in response to a drag operation for a virtual tracking point in the virtual part bar, the virtual tracking point is added to the virtual venue simple model, the virtual tracking point moves within the virtual venue simple model, and the second virtual camera performs a tracking shot of the virtual tracking point.
For example, as shown by the virtual tracking point 212 in the virtual part bar 205 in fig. 2D, the virtual tracking point 212 may be moved into the virtual venue simple model 201 in response to a drag operation on the virtual tracking point 212 by the user, and may then be moved within the virtual venue simple model 201. In response to a left-click operation by the user on the virtual tracking point 212 in the virtual venue simple model 201, a coordinate axis is displayed at the virtual tracking point 212, as shown in fig. 2I, so as to prompt the user that the virtual tracking point 212 can be moved at this time; after the coordinate axis appears at the virtual tracking point 212, the virtual tracking point 212 may be moved in the virtual venue simple model 201 in response to a movement operation by the user on the virtual tracking point 212.
A virtual tracking point is thus set in the virtual venue simple model so that the second virtual camera can perform tracking shooting.
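The tracking shot described above amounts to continuously aiming the rocker-arm camera at the virtual tracking point. A minimal sketch, assuming a y-up coordinate frame and a pan/tilt decomposition of the swing (the function name and frame are illustrative assumptions, not from the disclosure):

```python
import math

def look_at_angles(cam_pos, target_pos):
    """Pan/tilt angles (degrees) that aim a camera at a tracking point."""
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]          # y is the vertical axis
    dz = target_pos[2] - cam_pos[2]
    pan = math.degrees(math.atan2(dx, dz))   # left/right swing of the arm
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down swing
    return pan, tilt
```

Re-evaluating these angles every frame as the tracking point moves yields the rocker-arm motion needed for the tracking shot.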
After the virtual camera is placed in the virtual venue simple model, step S130 may be performed to present the scene picture captured by the virtual camera.
Step S130, in response to the angle-of-view viewing operation for the virtual camera, presents a scene picture taken by the virtual camera.
Specifically, a viewing angle viewing button may be provided in the user interface for the user to perform a viewing angle viewing operation. When the user clicks the view angle viewing button, the scene picture of the virtual camera under the current view angle can be presented in response to the clicking operation of the user on the view angle viewing button.
In an alternative embodiment, if a first virtual camera and a second virtual camera are placed in the virtual venue simple model, the first virtual camera being used for fixed-position shooting and the second virtual camera being used for tracking shooting, step S130 of presenting, in response to a viewing angle viewing operation for the virtual cameras, a scene picture shot by the virtual cameras may be implemented by the following step: presenting the scene picture under the target switching visual angle in response to a switching operation among the visual angle of the first virtual camera, the visual angle of the second virtual camera and the free visual angle.
The above steps can provide the user with scene pictures under three different visual angles, including: the scene picture under the fixed machine position visual angle, the scene picture under the tracking machine position visual angle, and the scene picture under the free visual angle, so that the user can adjust the machine position of the first virtual camera or the second virtual camera according to the scene pictures under the different visual angles, thereby providing an accurate expected machine position for AR effect design and saving real-scene construction cost.
Illustratively, in an actual application process, view angle viewing function controls under the three view angles can be provided for the user, so that the user can perform the view angle switching operation. Illustratively, fig. 3 shows a fixed machine position view angle button 301, a tracking machine position view angle button 302, and a free view angle button 303.
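The three-way view angle switching described above can be sketched as a small state machine; the default view and the prompt strings are assumptions for illustration only:

```python
from enum import Enum

class ViewMode(Enum):
    FIXED = "fixed machine position view angle"
    TRACKING = "tracking machine position view angle"
    FREE = "free view angle"

class ViewSwitcher:
    """Presents the scene picture under whichever view angle the user clicked."""
    def __init__(self) -> None:
        self.mode = ViewMode.FREE  # assumed default view angle

    def switch(self, mode: ViewMode) -> str:
        # Each button click sets the target switching visual angle.
        self.mode = mode
        return f"presenting scene picture under the {mode.value}"
```

Each of the three buttons would simply call `switch` with its corresponding mode, and the renderer would read `mode` to choose the camera whose picture is presented.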
And step S140, responding to the operation of adjusting the position of the virtual camera in the simple virtual venue model based on the scene picture, and adjusting the position of the virtual camera in the simple virtual venue model.
The user can perform the set position adjustment operation with reference to the displayed scene picture. Specifically, the position of the virtual camera in the virtual venue simplified model may be adjusted in response to a user's position adjustment operation on the virtual camera.
When the virtual venue simple model includes the first virtual camera and the second virtual camera, in an alternative embodiment, the adjusting the position of the virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture may be implemented by: responding to the operation of adjusting the position of the first virtual camera in the virtual venue simple model based on the scene picture, and adjusting the position of the first virtual camera in the virtual venue simple model; and/or adjusting the position of the second virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the second virtual camera in the virtual venue simple model based on the scene picture.
It should be noted that, when the position of the first virtual camera is adjusted, the configurations such as the distance from the fixed frame to the front edge of the stage, the distance from the fixed frame to the center axis of the stage, the ground clearance of the virtual camera on the fixed frame, and the current focal length of the virtual camera on the fixed frame may be adjusted. When the position of the second virtual camera is adjusted, the distance between the rocker arm and the front edge of the stage, the distance between the rocker arm and the middle shaft of the stage, the included angle between the virtual camera on the rocker arm and the center of the stage, the swing arm panning amplitude, the swing arm pitching amplitude, the current focal length of the virtual camera on the rocker arm and the like can be adjusted.
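The stage-relative machine position parameters listed above can be converted into world coordinates and an included angle with the stage center under an assumed coordinate frame (stage front-edge midpoint at the origin, x toward the audience area, y up, z along the front edge); the frame and function names are illustrative assumptions:

```python
import math

def fixed_rack_world_position(dist_to_front_edge, dist_to_center_axis,
                              ground_clearance):
    # (x, y, z): distance from the stage front edge, ground clearance of the
    # camera on the rack, and distance from the stage center axis.
    return (dist_to_front_edge, ground_clearance, dist_to_center_axis)

def included_angle_with_stage_center(cam_pos, stage_center=(0.0, 0.0, 0.0)):
    # Horizontal included angle between the camera-to-center direction and
    # the stage center axis (the x axis of this assumed frame), in degrees.
    dx = abs(stage_center[0] - cam_pos[0])
    dz = abs(stage_center[2] - cam_pos[2])
    return math.degrees(math.atan2(dz, dx))
```

A camera on the center axis (`dist_to_center_axis == 0`) gives an included angle of zero, which matches the intuition that it faces the stage head-on.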
For example, as shown in fig. 4A, in response to a left-click operation by the user on the fixed rack 208 in the virtual venue simple model 201, a coordinate axis may be displayed at the fixed rack 208 to prompt the user that the position of the fixed rack 208 can be moved at this time.
For example, as shown in fig. 4B, in response to a click operation by the user on the swing arm 210 in the virtual venue simple model 201, a coordinate axis is displayed at the swing arm 210 to prompt the user that the position of the swing arm 210 can be moved at this time.
In the above steps, the machine positions of the first virtual camera and the second virtual camera in the virtual venue simple model are adjusted respectively, so that accurate expected machine positions can be obtained to assist real-scene construction. For example, during live-action construction, the method can assist constructors in grooving, digging and routing the ground platform based on the obtained expected machine positions.
After the position of the virtual camera in the virtual venue simple model is adjusted, the current virtual venue simple model can be saved.
In addition, in order to more intuitively embody the layout visual effect of the virtual camera in the current machine position, in an optional implementation manner, a target virtual design model can be obtained in response to an import operation for the virtual design model; and in response to the rendering effect derivation operation aiming at the target virtual design model, rendering the simple virtual venue model by adopting the target virtual design model to obtain the virtual venue rendering model, so that the virtual camera shoots the virtual venue rendering model.
The virtual design model can comprise design models such as a stage and a virtual role. As shown in fig. 5A and 5B, fig. 5A and 5B are rendering effect diagrams captured by a virtual camera after a virtual venue simple model is rendered by using different virtual design models.
It should be noted that, during rendering, the stage, the virtual character, and the like included in the virtual design model may be rendered to the corresponding position of the virtual venue simple model, so as to obtain the virtual venue rendering model. For example, the stage may be rendered according to the position of the virtual stage simple model in the virtual venue simple model, the virtual character may be rendered according to the position of the virtual tracking point, and the like.
For example, data corresponding to the virtual design model may be stored in a file in the format of "fbx", and the corresponding virtual design model may be obtained by importing the "fbx" file from an external system.
When the virtual design model is obtained, as shown in fig. 6A, in response to a click operation of a user on a right function icon of the virtual venue simple model, a model import area 601 as shown in fig. 6B may be called, file path information input by the user in a path bar of the model import area 601 may be obtained, and a name list of files included in a path matching the file path information is displayed in a to-be-imported file display area 602 below the path bar; then, in response to the user's clicking operation on these file names, the clicked target file name may be determined, and the clicked target file name may be displayed in the imported file display area 603; then, in response to a click operation of the user on the project export control 604, that is, a rendering effect export operation, the virtual design model corresponding to the target file name is used to render the virtual venue simple model, so as to obtain a virtual venue rendering model.
It should be noted that the image displayed in the image display area 605 in the model import area 601 is a top view of the virtual venue simple model, so as to visually indicate the spatial position relationship of the virtual camera positions; the model import area 601 also includes a fixed machine position parameter display area 606 and a rocker arm machine position parameter display area 607.
In addition, in order to facilitate the user to better distinguish the imported target files, the display state of the name of the target file selected in the file display area to be imported 602 may also be changed; in order to prevent the user from misoperation, a deletion control can be further arranged in the imported file display area 603 to provide the user with adjustment on the imported target file; in order to improve the operation experience of the user, the display effects of the file names in the to-be-imported file display area 602 and the imported file display area 603 may also be presented according to the current state, and the display effect of the control provided for the user to operate may also be presented according to the current state, for example, the current state may include: one of a normal state, a mouse passing state and a mouse clicking state; in order to facilitate a user to quickly obtain or share current machine position parameters, a parameter screenshot control 608 may also be set in the model import area 601.
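The import flow of the model import area 601 described above can be sketched as follows, assuming the "fbx" files are listed from the user-entered path and only clicked file names that actually exist under the path are resolved for import; the function names are illustrative assumptions:

```python
from pathlib import Path

def list_fbx_files(path_text):
    """Names of the ".fbx" files under the user-entered path, sorted,
    for display in the to-be-imported file display area."""
    folder = Path(path_text)
    if not folder.is_dir():
        return []
    return sorted(p.name for p in folder.glob("*.fbx"))

def resolve_selected(path_text, clicked_names):
    """Full paths of the clicked target file names that actually exist,
    for display in the imported file display area."""
    available = set(list_fbx_files(path_text))
    return [Path(path_text) / name for name in clicked_names
            if name in available]
```

Filtering the clicked names against the listed files guards against stale selections if the directory contents change between listing and import.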
In the above steps, the virtual design model is adopted to render the simple virtual venue model, so that the picture shot by the virtual camera can more visually reflect the shooting effect under the real scene, and the real scene layout efficiency and the shooting effect are improved when the real scene layout is carried out according to the adjusted machine position of the virtual camera.
To facilitate resetting, loading and saving of the virtual venue simple model by the user, as shown in fig. 3, a project reset control 304, a project load control 305, and a project save control 306 may be provided. The project reset control 304 may be configured to reset the relevant parameter data of the current virtual venue simple model so as to initialize the current virtual venue simple model; the project load control 305 may be configured to load the relevant parameter data of a previously saved virtual venue simple model and then present that model; the project save control 306 may be used to save the relevant parameter data of the current virtual venue simple model. In addition, after the reset, load or save processing is completed, corresponding operation prompt information of reset completion, load completion or save completion can be generated to prompt the user to perform the next operation.
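The project reset, load, and save handling described above can be sketched as follows; the JSON layout, default parameter values, and prompt strings are assumptions for illustration:

```python
import json
from pathlib import Path

# Assumed default project parameter data (illustrative values only).
DEFAULT_PROJECT = {"venue_size_m": [40.0, 25.0, 10.0], "cameras": []}

def reset_project():
    # Reinitialize: return a deep copy of the defaults via a JSON round trip,
    # so later edits cannot mutate DEFAULT_PROJECT itself.
    return json.loads(json.dumps(DEFAULT_PROJECT))

def save_project(project, path):
    path.write_text(json.dumps(project), encoding="utf-8")
    return "save complete"   # operation prompt shown to the user

def load_project(path):
    project = json.loads(path.read_text(encoding="utf-8"))
    return project, "load complete"
```

Returning the prompt string from each handler mirrors the operation prompt information ("reset complete", "load complete", "save complete") generated after each processing step.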
As shown in fig. 7, a flow diagram for scene simulation through multiple passes is provided.
One scene simulation approach is as follows: an initial virtual venue simple model is created through project reset processing, and the initial virtual venue simple model and the virtual stage simple model therein are configured based on a venue data configuration floating window 701 and a stage area data configuration floating window 702; a virtual placeholder component is added into the virtual venue simple model through drag and move operations by the user, and the newly added virtual placeholder component is configured based on the corresponding parameter setting floating window 703; a fixed rack carrying a virtual camera and a rocker arm are added into the virtual venue simple model through drag and move operations by the user, and the machine positions are configured based on a positioning parameter setting floating window 704 and a swing arm parameter setting floating window 705; a virtual tracking point is added to the virtual venue simple model through drag and move operations by the user and configured; a virtual design model is imported, and the virtual venue simple model is rendered to obtain a virtual venue rendering model; the virtual venue rendering model can be previewed through multi-view switching among the fixed machine position view angle, the tracking machine position view angle and the free view angle, so as to adjust the machine position parameters of the virtual camera; the adjusted machine position parameters are output; and the current virtual venue simple model is saved through project save processing.
Another scene simulation approach is as follows: a previously saved virtual venue simple model can be obtained directly through project load processing; a virtual design model is imported, and the virtual venue simple model is rendered to obtain a virtual venue rendering model; the virtual venue rendering model can be previewed through multi-view switching among the fixed machine position view angle, the tracking machine position view angle and the free view angle, so as to adjust the machine position parameters of the virtual camera; the adjusted machine position parameters are output; and the current virtual venue simple model is saved through project save processing.
Exemplary embodiments of the present disclosure also provide a scene simulation apparatus, as shown in fig. 8, the scene simulation apparatus 800 may include:
a simple model creation module 810, configured to create a virtual venue simple model;
a camera configuration module 820, configured to place the virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera;
a shot picture presenting module 830 configured to present a scene picture shot by the virtual camera in response to a viewing angle viewing operation for the virtual camera;
the position adjusting module 840 is configured to adjust the position of the virtual camera in the virtual venue simple model in response to an operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture.
In an alternative embodiment, the simple model creation module 810 may be configured to: create an initial virtual venue simple model; and adjust the size of the initial virtual venue simple model in response to a size adjustment operation for the initial virtual venue simple model, so as to obtain the virtual venue simple model.
In an optional implementation manner, the initial virtual venue simplified model includes a virtual stage simplified model, and the scene simulation apparatus 800 further includes: a stage configuration adjustment module for adjusting the position and/or size of the virtual stage simple model in the virtual venue simple model in response to an adjustment operation for the position and/or size of the virtual stage simple model.
In an alternative embodiment, the scene simulation apparatus 800 further includes: a virtual placeholder configuration module, configured to place the virtual placeholder within the virtual venue simple model in response to a second configuration operation for one or more virtual placeholders.
In an alternative embodiment, the virtual placeholder configuration module may be configured to: move the virtual placeholder into the virtual venue simple model in response to a drag operation for the virtual placeholder in the virtual part bar; and adjust the position and/or size of the virtual placeholder in the virtual venue simple model in response to an adjustment operation for the position and/or size of the virtual placeholder.
In an alternative embodiment, the camera configuration module 820 may be configured to: in response to a drag operation for a first virtual camera and/or a second virtual camera in the virtual part bar, adding the first virtual camera and/or the second virtual camera into the virtual venue simple model, wherein the first virtual camera is used for fixed-position shooting, and the second virtual camera is used for tracking shooting; in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera, the shooting parameters of the first virtual camera and/or the second virtual camera are adjusted.
In an optional implementation, the scene simulation apparatus 800 may further include: and the tracking point configuration module is used for responding to the dragging operation of the virtual tracking point in the virtual component bar, adding the virtual tracking point into the virtual venue simple model, moving the virtual tracking point in the virtual venue simple model, and tracking and shooting the virtual tracking point by the second virtual camera.
In an optional embodiment, a first virtual camera and a second virtual camera are placed in the virtual venue simple model, the first virtual camera is used for fixed-position shooting, the second virtual camera is used for tracking shooting, and the shot picture presenting module 830 may include: a visual angle switching module, configured to present a scene picture under the target switching visual angle in response to a switching operation among the visual angle of the first virtual camera, the visual angle of the second virtual camera and the free visual angle.
In an alternative embodiment, the position adjusting module 840 adjusts the position of the first virtual camera in the virtual venue simple model in response to a scene picture-based adjustment operation of the position of the first virtual camera in the virtual venue simple model; and/or adjusting the position of the second virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the second virtual camera in the virtual venue simple model based on the scene picture.
In an optional implementation, the scene simulation apparatus 800 may further include a rendering module, which may be configured to: acquire a target virtual design model in response to an import operation for the virtual design model; and render the virtual venue simple model by adopting the target virtual design model in response to a rendering effect export operation for the target virtual design model, so as to obtain a virtual venue rendering model for the virtual camera to shoot.
The specific details of each part in the scene simulation apparatus 800 are already described in detail in the method part embodiment, and details that are not disclosed may refer to the method part embodiment, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described scene simulation method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing an electronic device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the electronic device.
In particular, the program product stored on the computer-readable storage medium may cause the electronic device to perform the steps of:
creating a simple model of a virtual venue;
placing the virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera;
presenting a scene picture shot by the virtual camera in response to a viewing angle viewing operation for the virtual camera;
and responding to the operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture, and adjusting the position of the virtual camera in the virtual venue simple model.
In an alternative embodiment, the creating of the virtual venue simple model may be implemented by: creating an initial virtual venue simple model; and responding to the size adjustment operation aiming at the initial virtual venue simple model, and adjusting the size of the initial virtual venue simple model to obtain the virtual venue simple model.
In an optional implementation manner, the initial virtual venue simplified model includes a virtual stage simplified model, and the following steps may be further performed: in response to an adjustment operation for the position and/or size of the virtual stage simple model, the position and/or size of the virtual stage simple model in the virtual venue simple model is adjusted.
In an alternative embodiment, the following steps can also be performed: in response to a second configuration operation for one or more virtual placeholders, placing the virtual placeholder within the virtual venue simple model.
In an alternative embodiment, the placing of the virtual placeholder within the virtual venue simple model in response to the second configuration operation for the one or more virtual placeholders described above may be achieved by: responding to the dragging operation of the virtual placeholder in the virtual part bar, and moving the virtual placeholder into the virtual venue simple model; in response to the adjustment operation for the position and/or size of the virtual placeholder, adjusting the position and/or size of the virtual placeholder in the virtual venue simple model.
In an alternative embodiment, the placing of the virtual camera within the virtual venue simple model in response to the first configuration operation for the virtual camera may be implemented by: in response to a drag operation for a first virtual camera and/or a second virtual camera in the virtual part bar, adding the first virtual camera and/or the second virtual camera into the virtual venue simple model, wherein the first virtual camera is used for fixed-position shooting, and the second virtual camera is used for tracking shooting; in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera, adjusting the shooting parameters of the first virtual camera and/or the second virtual camera.
In an alternative embodiment, the following steps may also be performed: in response to a drag operation for a virtual tracking point in the virtual part bar, the virtual tracking point is added to the virtual venue simple model, the virtual tracking point moves in the virtual venue simple model, and the second virtual camera performs tracking shooting on the virtual tracking point.
In an optional embodiment, a first virtual camera and a second virtual camera are placed in the virtual venue simple model, the first virtual camera is used for fixed-position shooting, the second virtual camera is used for tracking shooting, and the presenting of the scene pictures shot by the virtual cameras in response to the viewing angle viewing operation for the virtual cameras is implemented by: presenting a scene picture under the target switching visual angle in response to a switching operation among the visual angle of the first virtual camera, the visual angle of the second virtual camera and the free visual angle.
In an alternative embodiment, the adjusting the position of the virtual camera in the virtual venue simple model in response to the adjusting operation of the position of the virtual camera in the virtual venue simple model based on the scene picture may be implemented by: responding to the operation of adjusting the position of the first virtual camera in the simple virtual venue model based on the scene picture, and adjusting the position of the first virtual camera in the simple virtual venue model; and/or adjusting the position of the second virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the second virtual camera in the virtual venue simple model based on the scene picture.
In an alternative embodiment, the following steps can also be performed: responding to the import operation aiming at the virtual design model, and acquiring a target virtual design model; and in response to the rendering effect derivation operation aiming at the target virtual design model, rendering the virtual venue simple model by adopting the target virtual design model to obtain a virtual venue rendering model so that the virtual camera shoots the virtual venue rendering model.
In the scene simulation process, the machine position of the virtual camera is adjusted through simulated shooting, so that a visual and three-dimensional simulation effect and machine position data reference can be provided, and an accurate expected machine position is provided for AR effect design; spare materials additionally manufactured because the machine positions were not aligned in advance can be saved, and the problem that, when part of the stage art is built, construction can be performed only after the machine positions are confirmed on site is solved; the expected design can be restored, the difficulty of on-site building is reduced, and the time and cost of on-site building and debugging are saved.
The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device capable of implementing the above-described scene simulation method. An electronic device 900 according to this exemplary embodiment of the present disclosure is described below with reference to fig. 9. The electronic device 900 shown in fig. 9 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present disclosure.
As shown in fig. 9, electronic device 900 may take the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one memory unit 920, a bus 930 that connects the various system components (including the memory unit 920 and the processing unit 910), and a display unit 940.
The storage unit 920 stores program code, which may be executed by the processing unit 910, so that the processing unit 910 performs the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification.
Specifically, the processing unit 910 may perform the following steps:
creating a simple virtual venue model;
placing the virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera;
presenting a scene picture shot by the virtual camera in response to a viewing angle viewing operation for the virtual camera;
and responding to the operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture, and adjusting the position of the virtual camera in the virtual venue simple model.
In an alternative embodiment, the creating of the virtual venue simple model may be implemented by: creating an initial virtual venue simple model; and responding to the size adjustment operation aiming at the initial virtual venue simple model, and adjusting the size of the initial virtual venue simple model to obtain the virtual venue simple model.
In an alternative embodiment, the initial virtual venue simple model includes a virtual stage simple model, and the following step may be further performed: in response to an adjustment operation for the position and/or size of the virtual stage simple model, adjusting the position and/or size of the virtual stage simple model in the virtual venue simple model.
In an alternative embodiment, the following steps may also be performed: in response to the second configuration operation for the one or more virtual placeholders, the virtual placeholder is placed within the virtual venue compact model.
In an alternative embodiment, the above placing of the virtual placeholder within the virtual venue simple model in response to the second configuration operation for the one or more virtual placeholders may be implemented by the following steps: in response to a drag operation for a virtual placeholder in a virtual part bar, moving the virtual placeholder into the virtual venue simple model; and in response to an adjustment operation for the position and/or size of the virtual placeholder, adjusting the position and/or size of the virtual placeholder in the virtual venue simple model.
In an alternative embodiment, the placing of the virtual camera within the virtual venue simple model in response to the first configuration operation for the virtual camera may be implemented by: in response to a drag operation for a first virtual camera and/or a second virtual camera in the virtual part bar, adding the first virtual camera and/or the second virtual camera into the virtual venue simple model, wherein the first virtual camera is used for fixed-position shooting, and the second virtual camera is used for tracking shooting; in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera, the shooting parameters of the first virtual camera and/or the second virtual camera are adjusted.
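A minimal sketch of the two camera types and the shooting-parameter adjustment described above; the names (`FixedCamera`, `TrackingCamera`, `adjust_shooting_parameters`) and the chosen parameters are illustrative assumptions, not defined by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class FixedCamera:
    """The "first virtual camera": fixed-position shooting."""
    position: tuple        # (x, y, z) in venue coordinates
    orientation: tuple     # fixed (yaw, pitch) in degrees
    fov_deg: float = 45.0

@dataclass
class TrackingCamera:
    """The "second virtual camera": tracking shooting of a moving point."""
    position: tuple
    fov_deg: float = 45.0
    target: tuple = None   # updated every frame to the tracking point

def adjust_shooting_parameters(camera, fov_deg=None, position=None):
    """Apply an adjustment operation to a camera's shooting parameters."""
    if fov_deg is not None:
        camera.fov_deg = fov_deg
    if position is not None:
        camera.position = position
    return camera

# Drag operations add cameras; a later adjustment narrows the fixed camera's FOV.
fixed = FixedCamera(position=(0.0, -20.0, 3.0), orientation=(0.0, 0.0))
tracker = TrackingCamera(position=(10.0, -15.0, 5.0))
fixed = adjust_shooting_parameters(fixed, fov_deg=35.0)
```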
In an alternative embodiment, the following steps can also be performed: in response to a drag operation for a virtual tracking point in the virtual part bar, the virtual tracking point is added to the virtual venue simple model, the virtual tracking point moves in the virtual venue simple model, and the second virtual camera performs tracking shooting on the virtual tracking point.
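Tracking shooting of a moving virtual tracking point amounts to re-aiming the second virtual camera each frame. A hedged sketch of the look-at computation follows; the coordinate convention (y pointing toward the stage, z up) is an assumption for illustration:

```python
import math

def look_at_angles(camera_pos, target_pos):
    """Yaw/pitch (degrees) that aim a camera at a tracking point."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dz = target_pos[2] - camera_pos[2]
    yaw = math.degrees(math.atan2(dx, dy))        # 0 deg = straight toward +y
    horizontal = math.hypot(dx, dy)               # distance in the ground plane
    pitch = math.degrees(math.atan2(dz, horizontal))
    return yaw, pitch

# The tracking point moves along the stage front; the second virtual camera
# recomputes its aim for each sample of the path.
camera_pos = (0.0, -20.0, 3.0)
path = [(-5.0, 0.0, 1.5), (0.0, 0.0, 1.5), (5.0, 0.0, 1.5)]
angles = [look_at_angles(camera_pos, p) for p in path]
```

When the point is straight ahead of the camera the yaw is zero, and the pitch is slightly negative because the camera sits above the tracking point.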
In an alternative embodiment, a first virtual camera and a second virtual camera are placed in the virtual venue simple model, the first virtual camera is used for fixed-position shooting, and the second virtual camera is used for tracking shooting, and the presenting of the scene picture shot by the virtual camera in response to the viewing-angle viewing operation for the virtual camera is implemented by the following step: presenting the scene picture under a target switched viewing angle in response to a switching operation among the viewing angle of the first virtual camera, the viewing angle of the second virtual camera, and a free viewing angle.
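The switching among the first camera's viewing angle, the second camera's viewing angle, and the free viewing angle can be modeled as a small state holder; this sketch and its names are illustrative only:

```python
# The three viewing angles the operator can switch among.
FIRST_CAMERA, SECOND_CAMERA, FREE = "first", "second", "free"

class ViewSwitcher:
    """Tracks which viewing angle the presented scene picture uses."""
    VALID = (FIRST_CAMERA, SECOND_CAMERA, FREE)

    def __init__(self):
        self.active = FREE  # start in the free viewing angle

    def switch_to(self, view: str) -> str:
        # A switching operation selects the target viewing angle;
        # the renderer would then present the scene picture under it.
        if view not in self.VALID:
            raise ValueError(f"unknown viewing angle: {view}")
        self.active = view
        return self.active

switcher = ViewSwitcher()
switcher.switch_to(FIRST_CAMERA)
```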
In an alternative embodiment, the adjusting the position of the virtual camera in the virtual venue simple model in response to the adjusting operation of the position of the virtual camera in the virtual venue simple model based on the scene picture may be implemented by: adjusting the position of the first virtual camera in the virtual venue simple model in response to an operation of adjusting the position of the first virtual camera in the virtual venue simple model based on the scene picture; and/or adjusting the position of the second virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the second virtual camera in the virtual venue simple model based on the scene picture.
In an alternative embodiment, the following steps may also be performed: in response to an import operation for a virtual design model, acquiring a target virtual design model; and in response to a rendering-effect export operation for the target virtual design model, rendering the virtual venue simple model with the target virtual design model to obtain a virtual venue rendering model, so that the virtual camera shoots the virtual venue rendering model.
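One possible (hypothetical) shape for the import-and-render flow above: the simple model stays the geometric base, and the imported design model is applied to it to produce the rendering model that the virtual cameras then shoot. Class and field names are assumptions:

```python
class VenueScene:
    """Holds the geometry the virtual cameras shoot; starts as the simple model."""
    def __init__(self, simple_model):
        self.simple_model = simple_model
        self.design_model = None
        self.render_model = None

    def import_design_model(self, design_model):
        # Import operation: acquire the target virtual design model.
        self.design_model = design_model
        return self.design_model

    def export_render(self):
        # Rendering-effect export: render the simple model with the design
        # model to obtain the virtual venue rendering model.
        if self.design_model is None:
            raise ValueError("import a design model first")
        self.render_model = {"base": self.simple_model, "skin": self.design_model}
        return self.render_model

scene = VenueScene(simple_model="venue_box_40x60x15")
scene.import_design_model("stage_design_v2")
render = scene.export_render()
```

After the export, the cameras shoot `render_model` instead of the bare simple model, so the preview reflects the final stage design.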
In the scene simulation process, the position of the virtual camera is adjusted through simulated shooting, which provides an intuitive, three-dimensional preview of the simulation effect together with reference camera-position data, and thus an accurate expected camera position for AR effect design. This can save the spare materials otherwise manufactured because camera positions were not aligned in advance, and avoids the situation where part of the stage scenery can only be built after the camera positions are confirmed on site. The expected design can be restored, the difficulty of on-site construction is reduced, and on-site setup and debugging time and cost are saved.
The storage unit 920 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 921 and/or a cache memory unit 922, and may further include a read only memory unit (ROM) 923.
Storage unit 920 may also include a program/utility 924 having a set (at least one) of program modules 925, such program modules 925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 900 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 960. As shown in FIG. 9, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown in FIG. 9, other hardware and/or software modules may be used in conjunction with electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit", "module" or "system". Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (13)

1. A method for scene simulation, the method comprising:
creating a virtual venue simple model;
in response to a first configuration operation for a virtual camera, placing the virtual camera within the virtual venue simple model;
presenting a scene picture shot by the virtual camera in response to a viewing-angle viewing operation for the virtual camera;
adjusting the position of the virtual camera in the virtual venue simple model in response to an operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture.
2. The method of claim 1, wherein the creating of the virtual venue simple model comprises:
creating an initial virtual venue simple model;
in response to a size adjustment operation for the initial virtual venue simple model, adjusting the size of the initial virtual venue simple model to obtain the virtual venue simple model.
3. The method of claim 2, wherein the initial virtual venue simple model includes a virtual stage simple model, the method further comprising:
adjusting the position and/or size of the virtual stage simple model in the virtual venue simple model in response to an adjustment operation for the position and/or size of the virtual stage simple model.
4. The method of claim 1, further comprising:
in response to a second configuration operation for one or more virtual placeholders, placing the one or more virtual placeholders within the virtual venue simple model.
5. The method of claim 4, wherein the placing the one or more virtual placeholders within the virtual venue simple model in response to the second configuration operation for one or more virtual placeholders comprises:
in response to a drag operation for a virtual placeholder in a virtual part bar, moving the virtual placeholder into the virtual venue simple model;
in response to an adjustment operation for the position and/or size of the virtual placeholder, adjusting the position and/or size of the virtual placeholder in the virtual venue simple model.
6. The method of claim 1, wherein the placing the virtual camera within the virtual venue simple model in response to the first configuration operation for the virtual camera comprises:
adding a first virtual camera and/or a second virtual camera into the virtual venue simple model in response to a drag operation for the first virtual camera and/or the second virtual camera in a virtual part bar, the first virtual camera being used for fixed-position shooting and the second virtual camera being used for tracking shooting;
adjusting the shooting parameters of the first virtual camera and/or the second virtual camera in response to an adjustment operation for the shooting parameters of the first virtual camera and/or the second virtual camera.
7. The method of claim 6, further comprising:
in response to a drag operation for a virtual tracking point in a virtual part bar, adding the virtual tracking point into the virtual venue simple model, the virtual tracking point moving within the virtual venue simple model, the second virtual camera performing a tracking shot of the virtual tracking point.
8. The method of claim 1, wherein a first virtual camera and a second virtual camera are placed within the virtual venue simple model, the first virtual camera being used for fixed-position shooting and the second virtual camera being used for tracking shooting, and the presenting the scene picture shot by the virtual camera in response to the viewing-angle viewing operation for the virtual camera comprises:
presenting a scene picture under a target switched viewing angle in response to a switching operation among the viewing angle of the first virtual camera, the viewing angle of the second virtual camera, and a free viewing angle.
9. The method of claim 8, wherein adjusting the position of the virtual camera in the virtual venue simple model in response to the operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture comprises:
adjusting the position of the first virtual camera in the virtual venue simple model in response to an adjustment operation of the position of the first virtual camera in the virtual venue simple model based on the scene picture; and/or
adjusting the position of the second virtual camera in the virtual venue simple model in response to an operation of adjusting the position of the second virtual camera in the virtual venue simple model based on the scene picture.
10. The method of claim 1, further comprising:
in response to an import operation for a virtual design model, acquiring a target virtual design model; and
in response to a rendering-effect export operation for the target virtual design model, rendering the virtual venue simple model with the target virtual design model to obtain a virtual venue rendering model, so that the virtual camera shoots the virtual venue rendering model.
11. A scene simulation apparatus, characterized in that the apparatus comprises:
a simple model creating module, configured to create a virtual venue simple model;
a camera configuration module, configured to place a virtual camera within the virtual venue simple model in response to a first configuration operation for the virtual camera;
a shot picture presenting module, configured to present a scene picture shot by the virtual camera in response to a viewing-angle viewing operation for the virtual camera;
and a position adjusting module, configured to adjust the position of the virtual camera in the virtual venue simple model in response to an operation of adjusting the position of the virtual camera in the virtual venue simple model based on the scene picture.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 10 via execution of the executable instructions.
CN202211328099.9A 2022-10-27 2022-10-27 Scene simulation method and device, storage medium and electronic equipment Pending CN115661410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211328099.9A CN115661410A (en) 2022-10-27 2022-10-27 Scene simulation method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115661410A true CN115661410A (en) 2023-01-31

Family

ID=84993187




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination