CN110544314B - Fusion method, system, medium and equipment of virtual reality and simulation model - Google Patents
- Publication number
- CN110544314B (application number CN201910836813.7A)
- Authority
- CN
- China
- Prior art keywords
- model
- panoramic
- simulation
- rendered
- simulation model
- Prior art date
- Legal status (assumed; not a legal conclusion)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2008—Assembling, disassembling
Abstract
The invention discloses a method, system, medium, and electronic device for fusing virtual reality with a simulation model. The fusion method comprises the following steps: acquiring panoramic material corresponding to a target area, wherein the panoramic material comprises panoramic video or panoramic pictures; performing simulation modeling of a real object to obtain a simulation model; extracting a reference background from the panoramic material and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises the scene in which the simulation model is to be placed; rendering the model to be rendered and exporting the rendered model in a transparent background format to obtain a model to be fused; and fusing the model to be fused with the panoramic material. With this technical scheme, the degree of fusion between the simulation model and the virtual scene is high, and the virtual roaming experience is effectively improved.
Description
Technical Field
The present invention relates to the field of computer simulation technologies, and in particular, to a method, a system, a medium, and an electronic device for fusing a virtual reality and a simulation model.
Background
Virtual reality (VR) technology is a comprehensive integration technology combining computer graphics, human-computer interaction, sensor technology, human-machine interfaces, artificial intelligence, and the like. It can generate realistic three-dimensional visual, auditory, and olfactory sensations through a computer and blend a virtual scene with a real scene, so that virtual information can be perceived by the human senses, achieving a roaming experience that surpasses reality.
With the development of the technology, panoramic virtual reality, an extension of traditional virtual reality, can provide 360-degree omnidirectional images with a three-dimensional stereoscopic feel. Characterized by a strong sense of realism and immersive 360-degree panoramas, it is receiving increasingly wide attention. In the prior art, panoramic virtual reality is increasingly used to simulate real life, so as to present in advance the scene of a target object in an ideal state.
However, in the prior art, virtual roaming design with panoramic virtual reality usually requires panoramic images or videos of an area to be collected in advance. Owing to factors such as equipment maintenance and replacement, the corresponding devices or objects may be temporarily absent from some display positions when the panoramic images or videos are collected; that is, the real material cannot be completely captured. This leaves vacancies at some display positions in the finished virtual roaming design, so the virtual reality cannot fully restore the real scene, which degrades the virtual roaming experience.
Disclosure of Invention
The invention aims to overcome the defect in the prior art that virtual reality cannot fully reproduce a real scene when real material cannot be completely collected, and provides a method, system, medium, and electronic device for fusing virtual reality with a simulation model.
The invention solves the technical problems by the following technical scheme:
a fusion method of virtual reality and simulation models, the fusion method comprising:
acquiring panoramic materials corresponding to a target area, wherein the panoramic materials comprise panoramic videos or panoramic pictures;
performing simulation modeling on the real object to obtain a simulation model;
extracting a reference background from the panoramic material, and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises a scene in which the simulation model needs to be placed;
rendering the model to be rendered, and exporting the rendered model in a transparent background format to obtain a model to be fused;
and fusing the model to be fused with the panoramic material.
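For illustration only, the five steps above can be sketched as the following pipeline. This is a minimal sketch with hypothetical function and field names, not the patent's actual implementation; each function returns a plain dict standing in for the real asset it would produce.

```python
def acquire_panoramic_material(target_area):
    # Step 1: panoramic video or pictures of the target area
    return {"kind": "panorama", "area": target_area}

def build_simulation_model(real_object):
    # Step 2: 3D simulation model of the real object
    return {"kind": "model", "object": real_object}

def combine_with_reference_background(model, panorama):
    # Step 3: place the model against a background frame taken from the panorama
    return {"kind": "model_to_render", "model": model, "background": panorama}

def render_with_transparent_background(model_to_render):
    # Step 4: render and export with a transparent background (e.g. PNG/TGA frames)
    return {"kind": "model_to_fuse", "source": model_to_render, "transparent": True}

def fuse(model_to_fuse, panorama):
    # Step 5: composite the rendered frames over the panoramic material
    return {"kind": "fused_panorama", "layers": [panorama, model_to_fuse]}

def fusion_pipeline(target_area, real_object):
    panorama = acquire_panoramic_material(target_area)
    model = build_simulation_model(real_object)
    to_render = combine_with_reference_background(model, panorama)
    to_fuse = render_with_transparent_background(to_render)
    return fuse(to_fuse, panorama)

result = fusion_pipeline("factory floor", "AGV")
```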
Preferably, the reference background includes a reference object, and the step of combining the simulation model with the reference background includes:
calculating the scaling ratio between the actual size of the reference object and its size in the panoramic material;
adjusting the size of the simulation model in the reference background according to the scaling;
and/or,
the reference background comprises a horizontal reference line, and the step of combining the simulation model with the reference background comprises the following steps:
the simulation model is adjusted such that the angle of the simulation model with respect to the horizontal reference line coincides with the angle of the reference object with respect to the horizontal reference line.
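As an illustration of the size and angle adjustment described above, the following minimal sketch computes the scaling ratio from a reference object and the rotation needed to align the model with the horizontal reference line. The helper names and the example values (a 2.0 m reference object spanning 400 px) are assumptions for illustration only.

```python
def scaling_ratio(actual_size, size_in_panorama):
    # Ratio mapping real-world units to panorama pixels,
    # derived from a reference object of known actual size.
    return size_in_panorama / actual_size

def model_size_in_panorama(model_actual_size, ratio):
    # Apply the ratio so the model's on-screen size matches the scene.
    return model_actual_size * ratio

def angle_offset(model_angle_deg, reference_angle_deg):
    # Rotation needed so the model's angle to the horizontal reference
    # line matches the reference object's angle.
    return reference_angle_deg - model_angle_deg

ratio = scaling_ratio(2.0, 400.0)          # 200 px per metre
size = model_size_in_panorama(1.5, ratio)  # a 1.5 m model spans 300 px
delta = angle_offset(0.0, 3.5)             # tilt the model by 3.5 degrees
```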
Preferably, before the step of rendering the model to be rendered, the method further includes: setting action attributes of the model to be rendered, wherein the action attributes comprise one or more of rotation, movement and jump;
and/or,
the step of rendering the model to be rendered comprises the following steps: and adding material properties to the model to be rendered, wherein the material properties are consistent with the material of the real object.
Preferably, the transparent background format comprises a sequence-frame format;
the step of fusing the model to be fused with the panoramic material comprises the following steps:
importing the panoramic material and the model to be fused in the sequence frame format into editing software;
and processing the panoramic material and the model to be fused with the editing software, wherein the processing comprises one or more of clipping, color grading, and dubbing.
Preferably, the real object includes a plurality of parts, and the step of performing simulation modeling on the real object includes: respectively carrying out simulation modeling on a plurality of parts to obtain a plurality of simulation sub-models; assembling and combining a plurality of simulation sub-models to obtain a simulation model;
and/or,
the step of obtaining the panoramic material corresponding to the target area comprises the following steps:
acquiring material of the target area in a plurality of framing directions, wherein the framing directions comprise the front, rear, left, right, up, and down directions relative to the framing device;
and splicing the material from the plurality of framing directions to obtain the panoramic material.
A fusion system of virtual reality and simulation models, the fusion system comprising:
the material acquisition module is used for acquiring panoramic materials corresponding to the target area, wherein the panoramic materials comprise panoramic videos or panoramic pictures;
the modeling execution module is used for carrying out simulation modeling on the real object to obtain a simulation model;
the combination module is used for extracting a reference background from the panoramic material, and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises a scene where the simulation model needs to be placed;
the rendering module is used for rendering the model to be rendered and exporting the rendered model in a transparent background format to obtain a model to be fused;
and the fusion module is used for fusing the model to be fused with the panoramic material.
Preferably, the reference background comprises a reference object, and the combination module is used for calculating the scaling of the actual size of the reference object relative to the size of the reference object in the panoramic material; the combination module is further used for adjusting the size of the simulation model in the reference background according to the scaling;
and/or,
the reference background comprises a horizontal reference line, and the combination module is further used for adjusting the simulation model to enable the angle of the simulation model relative to the horizontal reference line to be consistent with the angle of the reference object relative to the horizontal reference line.
Preferably, the fusion system further comprises a model action setting module, wherein the model action setting module is used for setting action attributes of the model to be rendered, and the action attributes comprise one or more of rotation, movement and jump;
and/or,
the rendering module is further configured to add material properties to the model to be rendered, where the material properties are consistent with the material of the real object.
Preferably, the transparent background format comprises a sequence-frame format;
the fusion module is used for importing the panoramic material and the model to be fused in the sequence frame format into editing software; the fusion module is also used for processing the panoramic material and the model to be fused by utilizing the editing software, and the processing comprises one or more of editing, color mixing and dubbing.
Preferably, the real object includes a plurality of parts, and the modeling execution module is configured to perform simulation modeling on the plurality of parts, so as to obtain a plurality of simulation sub-models; the modeling execution module is further used for assembling and combining a plurality of simulation sub-models to obtain the simulation model;
and/or,
the material acquisition module is used for acquiring materials of the target area in a plurality of view finding directions, wherein the view finding directions comprise front, rear, left, right, upper and lower directions taking view finding equipment as a reference; and the material acquisition module is also used for splicing a plurality of materials in the view finding direction so as to obtain the panoramic material.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the aforementioned method of fusing virtual reality and simulation models when the computer program is executed.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the aforementioned method of fusing virtual reality with a simulation model.
The positive effect of the invention is that, with this technical scheme, the simulation model is added into the panoramic video or panoramic image with a high degree of fusion between the simulation model and the virtual scene; positions left blank or missing when the panoramic material was collected are filled in a virtual manner, effectively improving the virtual roaming experience.
In addition, for a completed virtual roaming design, if a new virtual article that has never appeared in reality needs to be added at a later stage, the virtual article can be seamlessly spliced and fused with the designed scene, effectively improving the publicity effect.
Drawings
Fig. 1 is a flowchart of a fusion method of virtual reality and simulation model in embodiment 1 of the present invention.
Fig. 2 is a flowchart of a fusion method of virtual reality and simulation model in embodiment 2 of the present invention.
Fig. 3 is a flowchart of a fusion method of virtual reality and simulation model in embodiment 3 of the present invention.
Fig. 4 is a block diagram of a fusion system of virtual reality and simulation model in embodiment 4 of the present invention.
Fig. 5 is a block diagram of a fusion system of virtual reality and simulation model in embodiment 6 of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device implementing a fusion method of virtual reality and simulation model in embodiment 7 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
This embodiment provides a method for fusing virtual reality with a simulation model, comprising the following steps:
step S1: and acquiring panoramic materials corresponding to the target area, wherein the panoramic materials comprise panoramic videos or panoramic pictures.
The target area may be an area to be created as a virtual scene selected according to specific user requirements.
Step S2: and carrying out simulation modeling on the real object to obtain a simulation model.
The real object refers to an object that needs to be fused with panoramic materials at a later stage, for example: mechanical equipment, counter displays, and the like.
In this embodiment, AutoCAD or SolidWorks (three-dimensional modeling software) may be used to model the real object. During modeling, to improve the display effect and increase realism, the model may be built at a one-to-one size ratio with the real object, preserving details such as chamfers and fillets without any optimization or simplification.
Step S3: and extracting a reference background from the panoramic material, and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises a scene in which the simulation model needs to be placed.
Specifically, a picture serving as the reference background may first be imported into a rendering tool, and the simulation model is then imported into the rendering tool in STEP or STL format, so as to combine the two.
Step S4: rendering the model to be rendered, and exporting the rendered model in a transparent background format to obtain a model to be fused;
step S5: and fusing the model to be fused with the panoramic material.
In this embodiment, when the panoramic material is video shot while moving, the background picture should be taken from the first frame at the start of the movement; a camera can then be added in the rendering tool along the path of the moving panoramic shot to replicate the movement, so as to obtain the same viewing angle as the real scene.
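Replicating the moving camera path amounts to resampling the recorded movement into one camera position per rendered frame. The sketch below uses simple linear interpolation between (x, y, z) waypoints; it is a hypothetical helper, not the rendering tool's actual API.

```python
def interpolate_camera_path(waypoints, n_frames):
    """Linearly resample recorded (x, y, z) waypoints into one camera
    position per frame, replicating the original moving shot."""
    if n_frames < 2 or len(waypoints) < 2:
        return list(waypoints)
    positions = []
    for f in range(n_frames):
        # Parameter t runs over the waypoint segments as f runs over frames.
        t = f / (n_frames - 1) * (len(waypoints) - 1)
        i = min(int(t), len(waypoints) - 2)
        frac = t - i
        a, b = waypoints[i], waypoints[i + 1]
        positions.append(tuple(a[k] + frac * (b[k] - a[k]) for k in range(3)))
    return positions

# A straight 10 m dolly move at 1.6 m camera height, resampled to 5 frames.
path = interpolate_camera_path([(0, 0, 1.6), (10, 0, 1.6)], 5)
```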
With the method for fusing virtual reality and a simulation model provided by this embodiment, the simulation model can be added into the panoramic video or panoramic image with a high degree of fusion between the simulation model and the virtual scene; positions left blank or missing when the panoramic material was collected are filled in a virtual manner, effectively improving the virtual roaming experience. In addition, for a completed virtual roaming design, if a new virtual article that has never appeared in reality needs to be added at a later stage, the virtual article can be seamlessly spliced and fused with the designed scene, effectively improving the publicity effect.
Example 2
The present embodiment provides a method for fusing virtual reality and simulation models, please refer to fig. 2, which is a further improvement on the basis of embodiment 1.
Specifically, the step S1 may further include the steps of:
step S11: acquiring materials of a target area in a plurality of view finding directions, wherein the view finding directions comprise front, rear, left, right, upper and lower directions taking view finding equipment as a reference;
step S12: and splicing the materials in the view finding directions to obtain the panoramic material.
Specifically, a group of photos can be shot automatically with a panoramic camera, or six photos can be shot with a single-lens reflex camera, recording the six directions (front, rear, left, right, up, and down). The six photos are then stitched into a 360-degree panoramic photo with panoramic stitching software (for example, Kolor Autopano Giga or Microsoft Image Composite Editor). Video can be shot in the same way to obtain a 360-degree panoramic video. The video or photos obtained in this way form the main body of the panoramic virtual roaming.
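The six-direction stitching described above relies on the standard mapping between a 2:1 equirectangular panorama and the six cube faces. The sketch below shows the core of that mapping; the axis convention (+x right, +y up, +z front) is an illustrative assumption, not taken from the patent or from any particular stitching software.

```python
import math

def equirect_pixel_to_direction(px, py, width, height):
    """Convert an equirectangular pixel to a unit viewing direction
    (the 2:1 panorama maps longitude to x and latitude to y)."""
    lon = (px / width) * 2 * math.pi - math.pi    # -pi .. +pi
    lat = math.pi / 2 - (py / height) * math.pi   # +pi/2 (top) .. -pi/2
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

def direction_to_cube_face(x, y, z):
    """Pick which of the six shots (front/rear/left/right/up/down)
    a viewing direction falls on: the dominant axis decides the face."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        return "front" if z > 0 else "rear"
    if ax >= ay:
        return "right" if x > 0 else "left"
    return "up" if y > 0 else "down"

# The centre pixel of a 1000x500 panorama looks straight ahead.
d = equirect_pixel_to_direction(500, 250, 1000, 500)
face = direction_to_cube_face(*d)
```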
In addition, when the real object comprises a plurality of parts, the parts may be modeled separately and then assembled into an assembly, which makes it convenient to give different parts 6 degrees of freedom in the final assembly. The 6 degrees of freedom comprise translation along the X, Y, and Z axes and rotation about the X, Y, and Z axes.
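A part's degrees of freedom can be illustrated with a simplified rigid transform. The sketch below applies only yaw (rotation about Z) plus translation; roll and pitch would be analogous. It is an illustrative reduction, not the modeling software's transform API.

```python
import math

def apply_6dof(point, tx=0.0, ty=0.0, tz=0.0, yaw_deg=0.0):
    """Apply a simplified rigid transform to a part's point: rotation
    about the Z axis (yaw) followed by translation along X, Y, Z. A full
    6-DOF transform would also rotate about X (roll) and Y (pitch)."""
    x, y, z = point
    a = math.radians(yaw_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + tx, ry + ty, z + tz)

# Rotate a point on a part 90 degrees about Z, then slide it 2 units along X.
p = apply_6dof((1.0, 0.0, 0.0), tx=2.0, yaw_deg=90.0)
```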
Based on this, for a scene in which the real object includes a plurality of parts, the step S2 may specifically include the steps of:
step S21: respectively carrying out simulation modeling on a plurality of parts to obtain a plurality of simulation sub-models;
step S22: assembling and combining a plurality of simulation sub-models to obtain a simulation model;
the fusion method of the virtual reality and the simulation model provided by the embodiment can realize that different degrees of freedom display modes are independently set for each part by respectively modeling the parts of the assembly body, and effectively improves visual perception and virtual roaming quality. In addition, the materials of the target area in a plurality of view finding directions are obtained, and then the materials of the view finding directions are spliced to obtain the panoramic material, so that the panoramic material can be enabled to reflect a real scene in a realistic manner, and the visual experience of virtual reality is further improved.
Example 3
The present embodiment provides a method for fusing virtual reality and simulation models, please refer to fig. 3, which is a further improvement on the basis of embodiment 1.
Specifically, the reference background may include a reference object, and the step S3 may specifically include:
step S31: calculating the scaling of the actual size of the reference object relative to the size of the reference object in the panoramic material;
step S32: adjusting the size of the simulation model in the reference background according to the scaling;
further, the reference background may further include a horizontal reference line, and the step S3 may further include:
step S33: the simulation model is adjusted such that the angle of the simulation model with respect to the horizontal reference line coincides with the angle of the reference object with respect to the horizontal reference line.
Further, before the step S4, the method may further comprise a step S6: setting action attributes of the model to be rendered, wherein the action attributes comprise one or more of rotation, movement, and jumping;
the rendering operation may be to add material properties to the model to be rendered, the material properties being consistent with the material of the real object. The material properties may include metal, wood, ceramic, woven, etc.
Further, the transparent background format may comprise a sequence-frame format such as TGA or PNG. On this basis, the step S5 may specifically comprise the following steps:
step S51: importing the panoramic material and the model to be fused in the sequence frame format into editing software;
step S52: and processing the panoramic material and the model to be fused by utilizing the editing software, wherein the processing comprises one or more of editing, toning and dubbing.
The application of the present embodiment will be described below in a specific application scenario, for example, a scenario in which panoramic roaming is performed on a factory.
The factory for which virtual reality content is to be produced contains a steam turbine assembly line and a generator assembly line. Panoramic shooting can be carried out along a planned tour route, images are captured at every tour point, and special effects and voiceover are added later, so that a real guided tour route can be simulated. Equipped with a VR all-in-one headset, this provides an immersive roaming experience.
Along the roaming path, owing to factors such as equipment maintenance and replacement, display equipment may be missing at some stations. Here the technical scheme of the invention is adopted: three-dimensional simulation animation material is overlaid onto the equipment vacancy to produce a virtual-real fusion effect.
In implementation, images of the factory's overall outline area and the working state of internal core components are first captured with the panoramic camera. A tripod can support the camera for fixed-point shots, and the camera can be placed on a workshop balance vehicle for moving shots. After acquisition, the material is stitched with dedicated image-stitching software, splicing the six views (up, down, left, right, front, and rear) into a 360-degree panoramic video.
Next, according to user requirements, a three-dimensional model is built for the area where equipment is missing, and an AGV (automated guided vehicle, a transport vehicle equipped with electromagnetic or optical automatic guidance that can travel along a prescribed route and has safety protection and transfer functions) is added to the aisle and modeled in three dimensions. After modeling, the model is imported into a rendering tool. So that the model joins the background seamlessly, a frame of the panoramic video is used as the background, the model is moved to the proper relative position, and the animation is produced. Finally, the animation is exported with a transparent background in PNG or TGA form. Because these are transparent background formats, the exported model contains no background color and can be fused directly into the background of the panoramic video, with no need for operations such as green-screen keying. After these two preparations, the panoramic video and the AGV motion animation are imported into editing software, and parameters such as hue and saturation of the two video segments are adjusted so that they share the same tone and fuse perfectly.
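The tone matching performed in the editing software can be approximated by equalizing per-channel means between the two video segments. This is a crude illustrative stand-in for the software's hue/saturation controls, not its actual algorithm.

```python
def match_channel_means(src_pixels, ref_pixels):
    """Scale each RGB channel of src so its mean matches ref's mean,
    giving the two segments roughly the same tone."""
    n = len(src_pixels)
    gains = []
    for c in range(3):
        src_mean = sum(p[c] for p in src_pixels) / n
        ref_mean = sum(p[c] for p in ref_pixels) / len(ref_pixels)
        gains.append(ref_mean / src_mean if src_mean else 1.0)
    # Apply the per-channel gains, clamping to the 0-255 range.
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in src_pixels]

# A dim grey clip adjusted to match a brighter reference clip.
adjusted = match_channel_means([(100, 100, 100)], [(120, 120, 120)])
```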
Finally, the video only needs to be exported in H.264, AVI, or MPEG format at a 2:1 aspect ratio, and the output video file is played in a VR headset to achieve panoramic roaming of the factory.
The method for fusing virtual reality and a simulation model provided by this embodiment can adjust the size and angle of the simulation model before fusing it with the panoramic material, so that the simulation model is better coordinated with the other objects in the panoramic material in size and relative position. In later editing, adjusting the hue and saturation of the two video segments gives them the same tone, so that the degree of fusion is higher and the visual impact of panoramic roaming is effectively improved.
Example 4
The present embodiment provides a fusion system of virtual reality and simulation model, please refer to fig. 4, the fusion system 1 includes:
the material acquisition module 10 is configured to acquire panoramic materials corresponding to a target area, where the panoramic materials include panoramic videos or panoramic pictures;
the modeling execution module 11 is used for performing simulation modeling on the real object to obtain a simulation model;
a combination module 12, configured to extract a reference background from the panoramic material, and combine the simulation model with the reference background to obtain a model to be rendered, where the reference background includes a scene where the simulation model needs to be placed;
the rendering module 13 is used for rendering the model to be rendered, and exporting the rendered model in a transparent background format to obtain a model to be fused;
and the fusion module 14 is used for fusing the model to be fused with the panoramic material.
The target area may be an area to be created as a virtual scene selected according to specific requirements.
The real object refers to an object that needs to be fused with panoramic materials at a later stage, for example: mechanical equipment, counter displays, and the like.
In this embodiment, AutoCAD or SolidWorks (three-dimensional modeling software) can be used to model the real object. During modeling, to improve the display effect and increase realism, the real object can be modeled at a one-to-one size ratio, preserving details such as chamfers and fillets without any optimization or simplification.
In this embodiment, the picture serving as the reference background may first be imported into the rendering tool, and the simulation model is then imported into the rendering tool in STEP or STL format, so as to combine the two.
When the panoramic material is video shot while moving, the background picture should be taken from the first frame at the start of the movement; a camera can then be added in the rendering tool along the path of the moving panoramic shot to replicate the movement, so as to obtain the same viewing angle as the real scene.
When the system for fusing virtual reality and a simulation model provided by this embodiment operates, the simulation model can be added into the panoramic video or panoramic image with a high degree of fusion between the simulation model and the virtual scene; positions left blank or missing when the panoramic material was collected are filled in a virtual manner, effectively improving the virtual roaming experience. In addition, for a completed virtual roaming design, if a new virtual article that has never appeared in reality needs to be added at a later stage, the virtual article can be seamlessly spliced and fused with the designed scene, effectively improving the publicity effect.
Example 5
The present embodiment provides a fusion system of virtual reality and simulation models, which is further improved on the basis of embodiment 4.
Specifically, the material acquisition module 10 is configured to acquire material of the target area in a plurality of framing directions, wherein the framing directions comprise the front, rear, left, right, up, and down directions relative to the framing device; the material acquisition module 10 is further configured to splice the material from the plurality of framing directions to obtain the panoramic material.
Specifically, a group of photos can be shot automatically with a panoramic camera, or six photos can be shot with a single-lens reflex camera, recording the six directions (front, rear, left, right, up, and down). The six photos are then stitched into a 360-degree panoramic photo with panoramic stitching software (for example, Kolor Autopano Giga or Microsoft Image Composite Editor). Video can be shot in the same way to obtain a 360-degree panoramic video. The video or photos obtained in this way form the main body of the panoramic virtual roaming.
In addition, when the real object comprises a plurality of parts, the parts may be modeled separately and then assembled into an assembly, which makes it convenient to give different parts 6 degrees of freedom in the final assembly. The 6 degrees of freedom comprise translation along the X, Y, and Z axes and rotation about the X, Y, and Z axes.
Based on this, for a scenario in which the real object includes a plurality of parts, the modeling execution module 11 is configured to perform simulation modeling on the plurality of parts, so as to obtain a plurality of simulation sub-models; the modeling execution module 11 is further configured to assemble and combine a plurality of the simulation sub-models to obtain the simulation model.
When the system for fusing virtual reality and a simulation model provided by this embodiment operates, by modeling the parts of the assembly separately, a display mode with its own degrees of freedom can be set independently for each part, effectively improving the visual perception and the quality of virtual roaming. In addition, by acquiring material of the target area in a plurality of framing directions and then splicing that material to obtain the panoramic material, the panoramic material can reflect the real scene realistically, further improving the visual experience of virtual reality.
Example 6
The present embodiment provides a fusion system of virtual reality and simulation models, please refer to fig. 5, which is further improved on the basis of embodiment 4.
Specifically, the reference background may include a reference object, and the combination module 12 is configured to calculate a scaling ratio of an actual size of the reference object relative to a size of the reference object in the panoramic material; the combination module 12 is further configured to adjust the size of the simulation model in the reference background according to the scaling.
The reference background may further include a horizontal reference line, and the combining module 12 is further configured to adjust the simulation model such that an angle of the simulation model with respect to the horizontal reference line coincides with an angle of the reference object with respect to the horizontal reference line.
In addition, the fusion system 1 further comprises a model action setting module 15, where the model action setting module 15 is configured to set action attributes of the model to be rendered, the action attributes including one or more of rotation, movement, and jumping.
The rendering module 13 is further configured to add material properties to the model to be rendered, the material properties being consistent with the material of the real object. The material properties may include metal, wood, ceramic, woven fabric, and the like.
Further, the transparent background format may include a sequence frame format in TGA or PNG form.
The fusion module 14 is configured to import the panoramic material and the model to be fused in the sequence frame format into editing software; the fusion module 14 is further configured to process the panoramic material and the model to be fused by using the editing software, where the processing includes one or more of editing, toning, and dubbing.
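Fusing a transparent-background sequence frame onto a panorama frame is standard "over" alpha compositing; a minimal pure-Python sketch operating on flat pixel lists (in practice the editing software or an image library performs this per frame):

```python
def composite_pixel(bg_rgb: tuple, fg_rgba: tuple) -> tuple:
    """'Over' compositing of one RGBA foreground pixel onto an opaque
    RGB background pixel; channels and alpha are in 0..255."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0
    return tuple(round(alpha * f + (1.0 - alpha) * bg)
                 for f, bg in zip((r, g, b), bg_rgb))

def fuse_frame(panorama: list, overlay: list) -> list:
    """panorama: list of RGB tuples; overlay: list of RGBA tuples of the
    same length (one sequence frame with transparent background).
    Returns the fused RGB frame."""
    return [composite_pixel(bg, fg) for bg, fg in zip(panorama, overlay)]
```

Where the exported model is transparent (alpha 0), the panorama shows through untouched; where it is opaque, the model replaces the background, which is why no green-screen matting is needed.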
The application of the present embodiment will be described below in a specific application scenario, for example, a scenario in which panoramic roaming is performed on a factory.
The factory for which the virtual reality content is to be produced contains a steam turbine assembly line and a generator assembly line. Panoramic shooting can be carried out along the visitors' sightseeing route map, with images acquired at every sightseeing point; special effects and dubbing are added in post-production to simulate a real guided tour path. Equipped with a VR all-in-one headset, visitors can then enjoy an immersive roaming experience.
Along the roaming path, owing to factors such as equipment maintenance and replacement, no equipment is available for display at some stations. In such cases the technical solution of the invention is adopted: three-dimensional animation simulation material is superimposed on the vacant equipment positions to produce a virtual-real fusion effect.
In implementation, images of the overall outline area of the factory and of the working state of the internal core components are first acquired with a panoramic camera. For fixed-point shots, the panoramic camera can be supported on a tripod; for moving shots, it can be placed on a self-balancing vehicle dedicated to the workshop. After acquisition, the material is stitched with dedicated image-stitching software: the images of the six directions (up, down, left, right, front, and back) are stitched into a 360-degree panoramic video.
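Stitching the six captured directions into an equirectangular panorama means, for each output pixel, converting its yaw/pitch into a view ray and sampling the captured view that ray hits. A sketch of the face-selection step only (real stitching software additionally blends overlaps and corrects lens distortion):

```python
import math

def panorama_direction(yaw_deg: float, pitch_deg: float) -> tuple:
    """Unit view ray for an equirectangular pixel: yaw is longitude,
    pitch is latitude (both in degrees)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return x, y, z

def cube_face(x: float, y: float, z: float) -> str:
    """Which of the six captured views (front/back/left/right/up/down)
    supplies the pixel for this view ray: the axis with the largest
    absolute component wins."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "up" if y > 0 else "down"
    return "front" if z > 0 else "back"
```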
Next, according to the user's requirements, a three-dimensional model is built for the area where the equipment is absent, and an AGV (automated guided vehicle: a transport vehicle equipped with electromagnetic, optical, or other automatic guidance devices, able to travel along a prescribed guidance route, with safety protection and various transfer functions) is added to the aisle. After modeling is completed, the model is imported into a rendering tool. So that the model can connect seamlessly with the background, a frame of the panoramic video is taken as the background; the model is then moved to the correct relative position and animated frame by frame. Finally, the animation is exported with a transparent background in PNG or TGA form. Because these are transparent-background formats, the exported model contains no background color and can be fused directly into the background of the panoramic video, with no need for operations such as green-screen matting. Once both parts are ready, the panoramic video of the factory and the AGV motion animation are imported into editing software, and parameters such as the hue and saturation of the two video segments are adjusted so that they share the same tone and fuse seamlessly.
Finally, the result only needs to be exported in H.264, AVI, or MPEG format at a 2:1 aspect ratio; the output video file is then loaded into a VR headset to realize panoramic roaming of the factory.
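The 2:1 aspect ratio follows from equirectangular projection covering 360 degrees of longitude but only 180 degrees of latitude. A hedged sketch of the export step; the helper names are assumptions, and the ffmpeg flags shown are standard options but should be verified against the installed build:

```python
def export_resolution(width: int) -> tuple:
    """Output frame size for an equirectangular panoramic video, which
    must have a 2:1 aspect ratio (360 deg longitude vs 180 deg latitude)."""
    if width % 2:
        raise ValueError("width must be even for a 2:1 frame")
    return width, width // 2

def ffmpeg_export_args(src: str, dst: str, width: int) -> list:
    """Hypothetical H.264 export command line (run via subprocess);
    -i / -c:v libx264 / -vf scale are standard ffmpeg options."""
    w, h = export_resolution(width)
    return ["ffmpeg", "-i", src, "-c:v", "libx264",
            "-vf", f"scale={w}:{h}", dst]
```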
When the fusion system of virtual reality and the simulation model provided by this embodiment operates, the size and angle of the simulation model can be adjusted in advance, before the simulation model is fused with the panoramic material, so that the simulation model is better coordinated with the other objects in the panoramic material in both size and relative position. In the later editing process, adjusting the hue and saturation of the two video segments gives them the same tone, so that the degree of fusion is higher and the visual impact of panoramic roaming is effectively improved.
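Matching the tone of the two video segments can be approximated per pixel in HSV space; the following is a toy stand-in for the editor's hue/saturation adjustment (function name assumed), using the standard-library `colorsys` module:

```python
import colorsys

def match_saturation(rgb: tuple, target_saturation: float) -> tuple:
    """Set one pixel's saturation to a target value while keeping its
    hue and value, so both clips can be pushed toward the same tone."""
    h, s, v = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    r, g, b = colorsys.hsv_to_rgb(h, target_saturation, v)
    return tuple(round(c * 255) for c in (r, g, b))
```

Editing software applies the equivalent adjustment over whole frames (and usually with smoother curves), but the per-pixel principle is the same.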
Example 7
The present invention also provides an electronic device. As shown in fig. 6, the electronic device may include a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the steps of the fusion method of virtual reality and the simulation model of any of the foregoing embodiments 1-3.
It should be understood that the electronic device shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 6, the electronic device 2 may be embodied in the form of a general-purpose computing device, for example a server device. The components of the electronic device 2 may include, but are not limited to: at least one processor 3, at least one memory 4, and a bus 5 connecting the different system components (including the memory 4 and the processor 3).
The bus 5 may include a data bus, an address bus, and a control bus.
The memory 4 may comprise volatile memory, such as Random Access Memory (RAM) 41 and/or cache memory 42, and may further comprise Read Only Memory (ROM) 43.
The memory 4 may also include a program tool 45 (or utility) having a set (at least one) of program modules 44, such program modules 44 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 3 executes various functional applications and data processing by running the computer program stored in the memory 4, for example the steps of the fusion method of virtual reality and the simulation model of any of embodiments 1-3 of the present invention.
The electronic device 2 may also communicate with one or more external devices 6 (such as a keyboard or a pointing device). Such communication may take place through an input/output (I/O) interface 7. The electronic device 2 may likewise communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network) via the network adapter 8.
As shown in fig. 6, the network adapter 8 may communicate with the other modules of the electronic device 2 via the bus 5. Those skilled in the art will appreciate that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 2, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the detailed description above, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Example 8
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the fusion method of virtual reality and simulation models of any of embodiments 1-3.
More specific examples of the computer-readable storage medium include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present invention may also be realized in the form of a program product comprising program code; when the program product is run on a terminal device, the program code causes the terminal device to carry out the steps of the fusion method of virtual reality and the simulation model of any of embodiments 1-3.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.
Claims (10)
1. The fusion method of the virtual reality and the simulation model is characterized by comprising the following steps of:
acquiring panoramic materials corresponding to a target area, wherein the panoramic materials comprise panoramic videos or panoramic pictures;
performing simulation modeling on the real object to obtain a simulation model;
extracting a reference background from the panoramic material, and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises a scene in which the simulation model needs to be placed;
rendering the model to be rendered, and exporting the rendered model in a transparent background format to obtain a model to be fused;
fusing the model to be fused with the panoramic material;
the reference background comprises a reference object, and the step of combining the simulation model with the reference background comprises the following steps:
calculating the scaling of the actual size of the reference object relative to the size of the reference object in the panoramic material;
adjusting the size of the simulation model in the reference background according to the scaling;
and/or,
the reference background comprises a horizontal reference line, and the step of combining the simulation model with the reference background comprises the following steps:
the simulation model is adjusted such that the angle of the simulation model with respect to the horizontal reference line coincides with the angle of the reference object with respect to the horizontal reference line.
2. The fusion method of virtual reality and the simulation model according to claim 1, wherein:
before the step of rendering the model to be rendered, the method further comprises the following steps: setting action attributes of the model to be rendered, wherein the action attributes comprise one or more of rotation, movement and jump;
and/or,
the step of rendering the model to be rendered comprises the following steps: and adding material properties to the model to be rendered, wherein the material properties are consistent with the material of the real object.
3. The fusion method of virtual reality and the simulation model according to any one of claims 1-2, wherein
the format of the transparent background comprises a sequence frame format;
the step of fusing the model to be fused with the panoramic material comprises the following steps:
importing the panoramic material and the model to be fused in the sequence frame format into editing software;
and processing the panoramic material and the model to be fused by utilizing the editing software, wherein the processing comprises one or more of editing, toning and dubbing.
4. The fusion method of virtual reality and the simulation model according to any one of claims 1-2, wherein
the real object comprises a plurality of parts, and the step of performing simulation modeling on the real object comprises the following steps: respectively carrying out simulation modeling on a plurality of parts to obtain a plurality of simulation sub-models; assembling and combining a plurality of simulation sub-models to obtain a simulation model;
and/or,
the step of obtaining the panoramic material corresponding to the target area comprises the following steps:
acquiring materials of a target area in a plurality of view finding directions, wherein the view finding directions comprise front, rear, left, right, upper and lower directions taking view finding equipment as a reference;
and splicing the materials in the view finding directions to obtain the panoramic material.
5. A fusion system of virtual reality and simulation models, the fusion system comprising:
the material acquisition module is used for acquiring panoramic materials corresponding to the target area, wherein the panoramic materials comprise panoramic videos or panoramic pictures;
the modeling execution module is used for carrying out simulation modeling on the real object to obtain a simulation model;
the combination module is used for extracting a reference background from the panoramic material, and combining the simulation model with the reference background to obtain a model to be rendered, wherein the reference background comprises a scene where the simulation model needs to be placed;
the rendering module is used for rendering the model to be rendered and exporting the rendered model in a transparent background format to obtain a model to be fused;
the fusion module is used for fusing the model to be fused with the panoramic material;
the reference background comprises a reference object, and the combination module is used for calculating the scaling of the actual size of the reference object relative to the size of the reference object in the panoramic material; the combination module is further used for adjusting the size of the simulation model in the reference background according to the scaling;
and/or,
the reference background comprises a horizontal reference line, and the combination module is further used for adjusting the simulation model to enable the angle of the simulation model relative to the horizontal reference line to be consistent with the angle of the reference object relative to the horizontal reference line.
6. The fusion system of virtual reality and simulation models of claim 5,
the fusion system further comprises a model action setting module, wherein the model action setting module is used for setting action attributes of the model to be rendered, and the action attributes comprise one or more of rotation, movement and jump;
and/or,
the rendering module is further configured to add material properties to the model to be rendered, where the material properties are consistent with the material of the real object.
7. The fusion system of virtual reality and simulation models of any of claims 5-6,
the format of the transparent background comprises a sequence frame format;
the fusion module is used for importing the panoramic material and the model to be fused in the sequence frame format into editing software; the fusion module is also used for processing the panoramic material and the model to be fused by utilizing the editing software, and the processing comprises one or more of editing, color mixing and dubbing.
8. The fusion system of virtual reality and simulation models of any of claims 5-6,
the real object comprises a plurality of parts, and the modeling execution module is used for respectively performing simulation modeling on the plurality of parts so as to obtain a plurality of simulation sub-models; the modeling execution module is further used for assembling and combining the plurality of simulation sub-models to obtain the simulation model;
and/or,
the material acquisition module is used for acquiring materials of the target area in a plurality of view finding directions, wherein the view finding directions comprise front, rear, left, right, upper and lower directions taking view finding equipment as a reference; and the material acquisition module is also used for splicing a plurality of materials in the view finding direction so as to obtain the panoramic material.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the fusion method of virtual reality and simulation models of any of claims 1-4 when the computer program is executed by the processor.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the steps of the fusion method of virtual reality and simulation models according to any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910836813.7A CN110544314B (en) | 2019-09-05 | 2019-09-05 | Fusion method, system, medium and equipment of virtual reality and simulation model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910836813.7A CN110544314B (en) | 2019-09-05 | 2019-09-05 | Fusion method, system, medium and equipment of virtual reality and simulation model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110544314A CN110544314A (en) | 2019-12-06 |
CN110544314B true CN110544314B (en) | 2023-06-02 |
Family
ID=68712583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910836813.7A Active CN110544314B (en) | 2019-09-05 | 2019-09-05 | Fusion method, system, medium and equipment of virtual reality and simulation model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110544314B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111045777A (en) * | 2019-12-12 | 2020-04-21 | 米哈游科技(上海)有限公司 | Rendering method, rendering device, storage medium and electronic equipment |
CN111292406A (en) * | 2020-03-12 | 2020-06-16 | 北京字节跳动网络技术有限公司 | Model rendering method and device, electronic equipment and medium |
CN112423014A (en) * | 2020-11-19 | 2021-02-26 | 上海电气集团股份有限公司 | Remote review method and device |
CN114390268B (en) * | 2021-12-31 | 2023-08-11 | 中南建筑设计院股份有限公司 | Virtual reality panoramic video manufacturing method based on Rhino and Enscape |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011086199A1 (en) * | 2010-01-18 | 2011-07-21 | Fittingbox | Augmented reality method applied to the integration of a pair of spectacles into an image of a face |
CN108766579A (en) * | 2018-05-28 | 2018-11-06 | 北京交通大学长三角研究院 | A kind of virtual cerebral surgery operation emulation mode based on high degrees of fusion augmented reality |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105916022A (en) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Video image processing method and apparatus based on virtual reality technology |
CN109783914B (en) * | 2018-12-29 | 2023-08-22 | 河北德冠隆电子科技有限公司 | Preprocessing dynamic modeling method and device based on virtual reality simulation |
- 2019-09-05: application CN201910836813.7A filed in China (CN); granted as CN110544314B, status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011086199A1 (en) * | 2010-01-18 | 2011-07-21 | Fittingbox | Augmented reality method applied to the integration of a pair of spectacles into an image of a face |
CN108766579A (en) * | 2018-05-28 | 2018-11-06 | 北京交通大学长三角研究院 | A kind of virtual cerebral surgery operation emulation mode based on high degrees of fusion augmented reality |
Also Published As
Publication number | Publication date |
---|---|
CN110544314A (en) | 2019-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110544314B (en) | Fusion method, system, medium and equipment of virtual reality and simulation model | |
CN106157359B (en) | Design method of virtual scene experience system | |
US9041899B2 (en) | Digital, virtual director apparatus and method | |
US20110273451A1 (en) | Computer simulation of visual images using 2d spherical images extracted from 3d data | |
CN104268939A (en) | Transformer substation virtual-reality management system based on three-dimensional panoramic view and implementation method of transformer substation virtual-reality management system based on three-dimensional panoramic view | |
CN106598229A (en) | Virtual reality scene generation method and equipment, and virtual reality system | |
US6388666B1 (en) | System and method for generating stereoscopic image data | |
CN103258338A (en) | Method and system for driving simulated virtual environments with real data | |
US20090102834A1 (en) | Image processing apparatus and image processing method | |
CN109961495B (en) | VR editor and implementation method thereof | |
CN108280873A (en) | Model space position capture and hot spot automatically generate processing system | |
WO2009121904A1 (en) | Sequential image generation | |
CN105023294A (en) | Fixed point movement augmented reality method combining sensors and Unity3D | |
CN111970453A (en) | Virtual shooting system and method for camera robot | |
CN113253842A (en) | Scene editing method and related device and equipment | |
CN111598983A (en) | Animation system, animation method, storage medium, and program product | |
CN114581611B (en) | Virtual scene construction method and device | |
CN103530869B (en) | For mating the system and method that moving mass controls | |
CN108346183A (en) | A kind of method and system for AR origin reference locations | |
US20210241486A1 (en) | Analyzing screen coverage | |
JPH1188910A (en) | Three-dimension model generating device, three-dimension model generating method, medium recording three-dimension model generating program three-dimension model reproduction device, three-dimension model reproduction method and medium recording three-dimension model reproduction program | |
CN109872400A (en) | A kind of generation method of panoramic virtual reality scene | |
CN110133958A (en) | A kind of tracking system and method for three-dimensional film | |
US11682175B2 (en) | Previsualization devices and systems for the film industry | |
CA2252063C (en) | System and method for generating stereoscopic image data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||