CN112346572A - Method, system and electronic device for realizing virtual-real fusion - Google Patents
Method, system and electronic device for realizing virtual-real fusion
- Publication number
- CN112346572A (application CN202011252695.4A)
- Authority
- CN
- China
- Prior art keywords
- real
- virtual
- scene
- image
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Abstract
The invention discloses a method, a system and an electronic device for realizing virtual-real fusion. In the method, a virtual three-dimensional scene is constructed as an accurate copy of a real scene through digital twin technology; the position and motion of a human body or object in the real scene are located, tracked and captured in real time based on motion capture technology; and an acquired real image is fused into the virtual scene, so that a virtual-real fused image is obtained and presented to the user. With the virtual-real fusion implementation method provided by the invention, the user experiences a genuine sense of interactive operation while enjoying strong visual immersion.
Description
Technical Field
The invention relates to the field of virtual reality/augmented reality, and in particular to a method, a system and an electronic device for realizing virtual-real fusion.
Background
Currently, virtual reality/augmented reality, as an advanced digital simulation and three-dimensional display technology, is widely applied in education and training, especially in medical education and training. After formally taking up their posts, clinicians, nurses and other medical staff need to undertake professional skill practice and hands-on training. Such training is an important step in the transition from medical student to practicing doctor or caregiver, and the hands-on portion in particular is the key to accumulating and improving clinical experience.
At present, practical training for clinical medical staff takes two forms. In the first, the trainee conducts consultation dialogues with a simulated patient or specimen, such as a rubber mannequin, a cadaver preserved in formalin, or a specially trained person acting as a standardized patient; this trains the consultation process for various diseases and lesions, allows errors to be corrected after the session, and helps the trainee accumulate clinical experience and improve. In the second, a three-dimensional (3D) virtual medical training system is used, and training is carried out by operating on a virtual digital patient or specimen in a purely virtual scene according to the training plan and requirements.
However, in a purely virtual scene, for example treating a virtual casualty on a virtual battlefield (the operation target being a virtual wounded soldier), the medical staff may gain a good sense of immersion but cannot obtain a realistic sense of operation. Conversely, in a real scene such as a hospital or a hospital training center, where the operation target is a rubber mannequin or a formalin-preserved cadaver, a realistic operating feel is available but the immersion of a virtual scene is not, so a good training effect still cannot be achieved. How to obtain both good immersion and a realistic sense of interactive operation within virtual simulation is a strong need in the field of education and training, and remains an open problem.
Disclosure of Invention
The object of the invention is to provide a method and a system for realizing virtual-real fusion, in which a virtual scene is constructed as an accurate copy of a real scene through digital-twin three-dimensional modeling, the position and motion of a human body or object in the real scene are located and tracked in real time based on motion capture technology, a real image of the actual operation in a target area is acquired, and that real image is then fused into the three-dimensional virtual scene. In this way the user experiences a realistic sense of operation while enjoying good immersion in the three-dimensional virtual scene.
In a first aspect, the invention provides a method for realizing virtual-real fusion, comprising the following steps:
locating and tracking a real object and a display device in a real scene, and acquiring real-object positioning information and device positioning information in real time;
determining a target area in a virtual scene based on the real-object positioning information, wherein the target area is used for presenting a virtual-real fused image;
acquiring a target real image corresponding to the target area in the real scene;
and fusing the target real image with the virtual scene to obtain a virtual-real fused image, and presenting the virtual-real fused image to a user through the display device.
Optionally, the method further comprises: outputting the real-object positioning information and the device positioning information to a virtual-real fusion processing device, so that the virtual-real fusion processing device presents, in the virtual scene, a virtual object corresponding to the real object according to the real-object positioning information. The virtual-real fusion processing device can also derive a first view-angle range in the virtual scene (equivalent to the user's eyes in the virtual scene) from the device positioning information. Because the device positioning information includes the position and angle of the display device, the first view-angle range in the virtual scene can be kept consistent with the user's view-angle range in the real scene.
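As an illustration of this alignment, the sketch below is a minimal Python example assuming the device positioning information arrives as a position vector plus yaw/pitch/roll angles; the function names and angle convention are illustrative and not part of the patent:

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Rotation matrix from yaw (Z), pitch (Y), roll (X), all in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def sync_virtual_camera(device_position, device_angles):
    """4x4 view matrix that makes the virtual camera follow the tracked display device."""
    R = rotation_from_euler(*device_angles)        # device orientation in the shared world frame
    t = np.asarray(device_position, dtype=float)   # device position in the shared world frame
    view = np.eye(4)
    view[:3, :3] = R.T                             # inverse rotation
    view[:3, 3] = -R.T @ t                         # inverse translation
    return view
```

With the real and virtual scenes registered 1:1, applying this view matrix each frame keeps the first view-angle range consistent with what the user actually sees.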
Optionally, the implementation method further includes: acquiring the virtual scene before determining the target area.
The virtual scene can be acquired in any of the following ways:
In a first implementation, digital twin technology is applied to the real scene to accurately reproduce it as a three-dimensional virtual scene;
In a second implementation, digital twin technology is first applied to the real scene to generate an original three-dimensional virtual scene, and a number of virtual elements are then added to the generated scene according to the customer's requirements, yielding a three-dimensional virtual scene that meets those requirements;
In a third implementation, a pre-stored three-dimensional virtual scene that meets the customer's requirements is selected from a database with reference to the three-dimensional space of the real scene.
Optionally, there may be one or more real objects and, correspondingly, one or more target areas. A real object may move; accordingly, its target area also moves (changes). The motion capture system acquires real-time positioning information for each real object and determines the corresponding target area in real time.
In one possible implementation, the display device is a VR (virtual reality) head display;
acquiring the target real image in the real scene then comprises: capturing a real image of the real scene through a camera of the display device (the VR head display), and determining the target real image corresponding to the target area from that real image.
In a first implementation, determining the target real image corresponding to the target area from the real image includes:
creating a virtual projector in the software project;
creating a digital model matching the target area;
delivering the captured real image to the virtual projector, which projects it onto the virtual scene;
and having the digital model receive and display the target real image corresponding to the target area.
In a second implementation, determining the target real image corresponding to the target area from the real image includes:
creating a mask shader for the target area, which dynamically generates a black-and-white mask image corresponding to the target area;
creating a plane model in the software project, the plane model always being placed in front of the other 3D objects in the virtual scene;
and assigning the mask shader to the plane model, so that the plane model retains only the required target real image from the real scene.
In another possible implementation, the display device is AR (augmented reality) glasses, and acquiring the target real image in the real scene includes:
determining a matting region in the virtual scene based on the target area, wherein the virtual scene completely covers the real scene in the AR glasses;
and removing the virtual image of the matting region, so that the target real image is seen through the matting region.
Since the virtual scene completely covers the real scene in the AR glasses, once the matting region in the virtual scene is determined and its virtual image is removed (matted out), the user can look through the matting region and directly see the target real image in the real scene. The user therefore sees a virtual-real fused image in which the virtual scene is fused with the target real image.
The matting region is determined based on the target area and may be the target area itself.
A second aspect of the present invention provides a system for realizing virtual-real fusion, comprising a virtual-real fusion processing device, a display device and a motion capture system, which are connected to and communicate with one another through a wired/wireless network or cables.
The motion capture system is used for capturing and locating the real object and the display device in the real scene, acquiring real-object positioning information of the real object and device positioning information of the display device in real time, and transmitting them to the virtual-real fusion processing device. The motion capture system is also used for capturing, analyzing and processing the user's operations.
The virtual-real fusion processing device is used for processing the real-object positioning information and the device positioning information, determining the target area in the virtual scene according to the real-object positioning information, acquiring the target real image corresponding to the target area in the real scene, fusing the virtual scene with the target real image, rendering the resulting virtual-real fused image and outputting it to the display device, which presents it to the user.
A third aspect of the invention provides an electronic device for carrying out the virtual-real fusion implementation method, comprising a processor, a memory, an I/O interface and a network communication interface, which are connected through a communication bus.
A computer program stored in the memory comprises instructions for:
acquiring real-object positioning information corresponding to the real object and device positioning information corresponding to the display device;
determining a target area in the virtual scene based on the real-object positioning information;
acquiring a target real image corresponding to the target area in the real scene;
and fusing the target real image with the virtual scene to obtain a virtual-real fused image, and presenting it to the user through the display device.
The processor runs the computer program stored in the memory to execute the virtual-real fusion implementation method provided by the invention.
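Read together, the four instruction groups above amount to one fusion pass per display frame. The skeleton below (Python) strings them together purely for orientation; every object and method name in it (mocap, camera.capture_target, scene.fuse and so on) is a hypothetical placeholder standing in for the corresponding step of the method, not an actual API:

```python
def fusion_frame(mocap, camera, scene, display):
    """One virtual-real fusion pass, mirroring the four stored instructions."""
    object_pose = mocap.get_object_pose()        # real-object positioning information
    device_pose = mocap.get_device_pose()        # display-device positioning information

    target_area = scene.determine_target_area(object_pose)             # determine target area
    target_image = camera.capture_target(target_area, device_pose)     # acquire target real image

    fused_image = scene.fuse(target_image, target_area, device_pose)   # fuse real and virtual
    display.present(fused_image)                                       # present to the user
```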
A fourth aspect of the present invention provides an electronic device, which includes a plurality of functional modules for implementing the virtual-real fusion implementation method.
A fifth aspect of the present invention provides a storage medium for storing a computer program, where the computer program includes instructions for executing the virtual-real fusion implementation method provided by the present invention.
A sixth aspect of the present invention provides computer software for implementing the virtual-real fusion implementation method provided by the present invention.
Drawings
Fig. 1 is a schematic flow chart of a virtual-real fusion implementation method provided in an embodiment of the present invention;
fig. 2 is a flowchart of an exemplary method for implementing virtual-real fusion according to an embodiment of the present invention;
fig. 3 is a flowchart of another exemplary method for implementing virtual-real fusion according to an embodiment of the present invention;
fig. 4 is a flowchart of another exemplary method for implementing virtual-real fusion according to an embodiment of the present invention;
fig. 5 is a schematic composition diagram of a virtual-real fusion implementation system according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating an implementation of acquiring a real image in a real scene according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a corresponding relationship between a virtual camera and a virtual projector according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an embodiment of obtaining a target real-world image based on a digital model;
fig. 11 is a schematic diagram illustrating an implementation of outputting a virtual-real fused image according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a mask image generated for a target area according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating an embodiment of obtaining a target real-world image based on a mask image;
fig. 14 is a schematic diagram illustrating another embodiment of outputting a virtual-real fused image.
Detailed Description
In order that the invention may be more clearly understood, the technical solutions of the present invention are described in detail below with reference to the accompanying drawings. The embodiments disclosed herein do not necessarily cover all aspects of the invention. It should be understood that the various concepts and embodiments disclosed herein may be combined in any of a variety of ways, and the invention is not limited to any particular embodiment. In addition, some aspects of the present disclosure may be used alone or in any suitable combination with other aspects of the present disclosure.
With reference to fig. 1, an embodiment of the present invention provides a method for implementing virtual-real fusion, including the following steps:
S101, tracking and locating a real object and a display device in a real scene, and acquiring real-object positioning information and device positioning information in real time;
The display device may be a head-mounted display device ("head display" for short) or an immersive display device such as a ring-screen or dome-screen display. The head display may be a VR head display (e.g., VR glasses or a VR helmet) or an AR head display (e.g., AR glasses). The VR head display has an image capture function: it may be a new VR head display with a built-in camera serving the purpose of the invention, or an existing general-purpose VR head display with a camera added as required.
It is understood that the display device is not only used for display but also for user interaction, for example being provided with interaction means such as a remote control or a touch screen.
The real-object positioning information and the device positioning information can be obtained through a motion capture system, or through a motion capture system combined with GPS (Global Positioning System).
According to the real-object positioning information, a virtual object corresponding to the real object can be presented in the virtual scene, with the position of the virtual object in the virtual scene exactly matching the position of the real object in the real scene. For example, a virtual casualty (virtual object) is accurately copied 1:1 by Digital Twin technology according to the size, build and posture of a rubber mannequin (real object), and is then placed in a virtual three-dimensional fire scene at the position corresponding to the real object's position in the real scene.
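Because the digital twin is built at a 1:1 scale, mapping the tracked real position onto the virtual counterpart reduces to a single rigid transform between the motion-capture coordinate frame and the virtual scene's frame. A minimal sketch, assuming such a calibration (rotation R and translation t) has been obtained beforehand:

```python
import numpy as np

def place_virtual_object(real_position, R_mocap_to_scene, t_mocap_to_scene):
    """Map a tracked real-world position into the 1:1 virtual scene frame."""
    p = np.asarray(real_position, dtype=float)
    return R_mocap_to_scene @ p + t_mocap_to_scene

# Example: identical frames except for a 2 m offset along x.
R = np.eye(3)
t = np.array([2.0, 0.0, 0.0])
print(place_virtual_object([0.5, 1.2, 0.0], R, t))   # -> [2.5 1.2 0. ]
```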
According to the device positioning information, a first view-angle range in the virtual scene can be obtained; this range can be understood as the user's eyes in the virtual scene. Because the device positioning information includes the position and angle of the display device, the first view-angle range in the virtual scene can be kept consistent with the user's view-angle range in the real scene.
In the embodiments of the invention, there may be one or more real objects; a real object may be stationary or mobile, and may be chosen according to user requirements. For example, the real object may be a rubber mannequin, used to train the medical staff wearing the head display device to perform various operations on it, such as high-voltage electric shock treatment, blood pressure measurement and wound dressing.
S102, determining a target area in the virtual scene based on the real-object positioning information;
The target area can be understood as a region of interest (ROI) in the sense of digital image processing; it is the region used to fuse the real image with the virtual scene to obtain the virtual-real fused image.
The target area covers the virtual object. It may be exactly the area occupied by the virtual object, or it may be set slightly larger or smaller than that area depending on the specific situation in which the technical solution of the invention is implemented.
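One simple way to realize such a target area — a sketch under the assumption that the real object carries several tracked marker points and that a padded axis-aligned bounding box is acceptable — is:

```python
import numpy as np

def target_area_from_markers(marker_positions, margin=0.05):
    """Axis-aligned bounding box (min corner, max corner) around the tracked markers,
    enlarged by `margin` metres on every side."""
    pts = np.asarray(marker_positions, dtype=float)
    return pts.min(axis=0) - margin, pts.max(axis=0) + margin

lo, hi = target_area_from_markers([[0.0, 0.0, 0.8], [0.4, 0.2, 1.0], [0.2, 0.1, 0.9]])
print(lo, hi)   # a box covering the virtual object plus a 5 cm margin
```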
The virtual scene is preferably acquired before the target area is determined. In a specific implementation, the virtual scene may be acquired before, after or in parallel with step S101. The virtual scene can be obtained by copying the real scene with digital twin technology, effectively simulating the real scene; the following implementations may be used:
In some implementations, digital twin technology is applied to the real scene to generate a three-dimensional virtual scene by replication, so that the generated virtual scene exactly matches the three-dimensional space of the real scene. For example, to train medical staff for an operation, a three-dimensional virtual rescue scene can be generated as a 1:1 digital-twin copy of a real rescue room or operating room; tools, medical instruments, personnel and so on in the real scene can be copied into the virtual rescue scene completely or selectively as required.
In other implementations, digital twin technology is first applied to the real scene to generate an original virtual scene by replication, and virtual elements are then added to it according to the customer's needs to obtain the virtual scene the customer requires. For example, to train medical staff to respond to an explosion, a three-dimensional virtual plant scene can be generated by copying a real chemical plant with digital twin technology, and virtual explosion effects are then added at one or more locations to form a virtual plant-explosion scene that meets the customer's requirements.
In still other implementations, a three-dimensional virtual scene that meets the customer's needs may be selected from a database based on the three-dimensional space of the real scene. For example, to train medical personnel in treating gunshot wounds, a suitable three-dimensional virtual war scene can be selected from the database according to the three-dimensional space of the training room or operating room in which the personnel are located.
It will be appreciated that if the real object moves, the target area also moves (changes), so the target area must be obtained in real time. The real-object positioning information can be acquired in real time through the motion capture system, so that the target area can be determined in real time.
There may be one or more real objects, and therefore one or more target areas. Capture points can be set on each real object to obtain its corresponding target area.
In the embodiments of the invention, a single real object is used as an example to illustrate the technical solution. Where multiple real objects exist, each is processed in a similar manner, which is not repeated here.
S103, acquiring a target real image corresponding to the target area in the real scene;
The motion capture system captures, analyzes and processes the user's operations to obtain the position information of those operations. When the user operates the real object, the real object is within the user's view-angle range, and the real image of the real scene then needs to be acquired in real time. In one possible implementation, the display device is a VR head display, and the real image of the real scene is acquired by the VR head display's internal or external camera; this image can be regarded as lying within the first view-angle range.
The motion capture system locates user operations and tracks the user's motion and view-angle range in real time by capturing, analyzing and processing tracking points placed on the user's body (e.g., arms, legs, the head-mounted display device). It may be implemented with optical or inertial sensors, among others. In an optical implementation, the tracking points can be marker points (e.g., reflective or light-emitting markers) and the tracking/capturing devices can be cameras that track the markers and acquire their positions. Markers can be attached to moving parts of the user's body, such as the fingers, elbows, wrists, head, feet and knees, so that user operations can be tracked and located and their position information obtained.
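For the optical case, a marker's three-dimensional position can be recovered from two or more calibrated cameras by triangulation. The sketch below is a standard linear (DLT) two-view triangulation, assuming 3x4 camera projection matrices and pixel observations are available; it is offered as background, not as the patent's specific algorithm:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear two-view triangulation of one marker (DLT, least squares)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # homogeneous -> Euclidean marker position
```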
Likewise, the motion capture system locates the real object (obtaining the real-object positioning information) and tracks its motion in real time by capturing, analyzing and processing the marker points set on the real object.
From the position information of the user's operation and the real-object positioning information, it can be determined whether the user is operating the real object, i.e., whether the target area lies within the user's first view-angle range. When the user operates the real object and the real object is within the user's view-angle range, the real image of the real scene needs to be acquired in real time.
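Whether the target area falls inside the first view-angle range can be approximated by comparing the angle between the device's forward direction and the direction toward the target area with half of the field of view. A simplified sketch (ignoring near/far clipping; the names and the 90-degree default are illustrative):

```python
import numpy as np

def target_in_view(device_position, device_forward, target_center, fov_deg=90.0):
    """True if the target area centre lies inside a symmetric cone of `fov_deg` degrees."""
    to_target = np.asarray(target_center, float) - np.asarray(device_position, float)
    to_target /= np.linalg.norm(to_target)
    forward = np.asarray(device_forward, float)
    forward /= np.linalg.norm(forward)
    angle = np.degrees(np.arccos(np.clip(forward @ to_target, -1.0, 1.0)))
    return angle <= fov_deg / 2.0

print(target_in_view([0, 0, 0], [0, 0, 1], [0.3, 0.0, 2.0]))   # True: roughly straight ahead
```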
When the display device is a VR head display, the real image can be captured by the head display's camera. The camera is integrated with the VR head display: it may be built in, or attached externally. In other words, the display device in the embodiments of the invention may be a new VR head display with a built-in high-definition camera, or a currently common VR head display with a high-definition camera added externally.
It should be noted that the real image captured by the VR head display may cover the entire target area or only partially overlap it. The target real image corresponding to the target area therefore needs to be determined according to the first view-angle range and the matching correspondence between the real scene and the virtual scene, and is then fused with the virtual scene.
When the display device is an AR head display, the user (operator) looks through the matting region of the virtual scene and captures the real image directly through the AR glasses. The matting region can be understood as the region from which the three-dimensional virtual image is removed after the virtual scene has completely covered the real image; it may also be called the non-occluded region (i.e., the region not occluded by the virtual scene). The matting region is determined based on the target area; in one possible implementation it is the target area itself.
S104, fusing the target real image with the virtual scene to obtain a virtual-real fused image, and presenting the virtual-real fused image to the user through the display device;
Specifically, the target real image is fused into the target area of the virtual scene to obtain the virtual-real fused image, which is then presented to the user through the display device, such as a VR or AR head display.
The fusion of the real image and the virtual scene may be implemented as follows:
In one possible implementation, the real image captured by the camera of the VR glasses may cover the entire target area or only partially overlap it. The target real image corresponding to the target area is therefore determined within the real image according to the first view-angle range, and is then fused into the target area of the virtual scene to obtain a virtual-real fused image in which the real image is embedded in the virtual scene. The fused image is then rendered and output to the VR glasses, which present it to the user.
In another possible implementation, the virtual scene with the matting region removed is rendered and output to the AR glasses, so that the user is presented with the virtual scene minus the non-occluded region together with the real image seen through that region, i.e., a virtual-real fused image.
The virtual-real fusion implementation method of the invention, combined with a motion capture system for real-time multi-target three-dimensional positioning and motion tracking, lets the user directly observe the operated real object within the virtual scene, so that a good sense of immersion and a more realistic sense of operation are obtained at the same time, giving a better interactive experience.
Taking a medical training system as an example: if the training is run in a purely virtual scene, for example treating a virtual casualty on a virtual battlefield, the medical staff member is the operator and the operation target is a virtual casualty; despite the good immersion of the virtual scene, no realistic sense of operation is obtained. Conversely, in a real scene such as a hospital or training center, where the operation target is a rubber mannequin or a formalin-preserved cadaver, a realistic operating feel is obtained but not the immersion of a virtual scene. With the implementation method of the embodiments of the invention, the picture of the medical staff operating on the rubber mannequin in the real scene is fused into the virtual battlefield, so that the staff experience the realistic feel of operating on the mannequin while immersed in the virtual battlefield, achieving better training quality and effect.
On the basis of the implementation method shown in fig. 1 and referring to fig. 2, an exemplary virtual-real fusion implementation method provided by an embodiment of the invention is described, taking VR glasses as the display device. It includes the following steps:
S201, tracking and locating the real object and the VR glasses in the real scene, obtaining real-object positioning information and glasses positioning information, and transmitting them to the virtual-real fusion processing device;
The real object and the VR glasses are captured and located by the motion capture system, and the resulting real-object positioning information and glasses positioning information are transmitted to the virtual-real fusion processing device.
Based on the real-object positioning information, the virtual-real fusion processing device ensures that the position of the real object in the real scene and the position of the virtual object in the virtual scene correspond consistently.
Based on the glasses positioning information, the virtual-real fusion processing device ensures that the position of the VR glasses in the real scene is consistent with that of the virtual camera in the virtual scene, so that the viewing angle of the VR glasses matches that of the virtual camera.
Based on both the real-object positioning information and the glasses positioning information, the virtual-real fusion processing device ensures that a first relative position, between the virtual object and the virtual camera in the virtual scene, is consistent with a second relative position, between the real object and the VR glasses in the real scene.
The motion capture system both tracks and locates the real object and the VR glasses and captures, analyzes and processes the user's operations, thereby realizing multi-target real-time positioning and motion tracking.
S202, the virtual-real fusion processing device determines a target area in the virtual scene based on the real-object positioning information;
The target area is used for presenting the virtual-real fused image. It therefore covers the virtual object and may be the area occupied by the virtual object, or slightly larger or smaller than that area.
The virtual scene is preferably acquired before the target area is determined; in a specific implementation it may be acquired before, after or in parallel with step S201.
S203, acquiring a real image of the real scene;
The real image of the real scene can be obtained through the VR glasses.
Taking a medical training system as an example, and with reference to the exemplary implementations of figs. 8 to 11, the following describes how virtual-real fusion is carried out to obtain a virtual-real fused image.
First, a real image of the real scene is captured by a camera built into or attached to the VR glasses, as illustrated in fig. 8.
The camera of the VR glasses has high resolution, and its field of view (FOV) can approach or match that of the human eye.
After it starts working, the motion capture system also performs motion-acquisition preprocessing. Specifically, it can automatically correct confused or occluded target points using multi-frame analysis based on the tracking-point information and the predicted motion trail from preceding and following frames, and it can automatically smooth jumps in the tracking points with a signal-smoothing algorithm, eliminating jitter and ensuring acquisition quality. The motion capture system also performs spatial three-dimensional coordinate calculation on the collected tracking points and the real object, obtaining the user's operating position in the digital space in real time so as to determine whether it lies within the target area.
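The smoothing and correction described here can be as simple as gating implausible jumps against the previous estimate and low-pass filtering the rest. A small sketch of that idea (exponential smoothing with a jump gate; the threshold and smoothing factor are illustrative assumptions):

```python
import numpy as np

def smooth_track(samples, alpha=0.3, max_jump=0.15):
    """Exponentially smooth a marker trajectory, replacing jumps larger than
    `max_jump` metres with the previous estimate (simple occlusion handling)."""
    samples = np.asarray(samples, dtype=float)
    out = [samples[0]]
    for p in samples[1:]:
        prev = out[-1]
        if np.linalg.norm(p - prev) > max_jump:   # confused/occluded point: keep prediction
            p = prev
        out.append(alpha * p + (1 - alpha) * prev)
    return np.array(out)
```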
When the user operates the real object, the real object is within the user's view-angle range and the user's operating position in the digital space (in the virtual scene) is within the target area. At this point the VR glasses need to acquire the real image of the real scene in real time.
S204, fusing the target real image into the target area of the virtual scene to obtain a virtual-real fused scene;
After the VR glasses capture a real image, the target real image corresponding to the target area (which can also be understood as corresponding to the real object) is first extracted from the real image; the target real image is then fused into the target area of the virtual scene to obtain the virtual-real fused image. A specific implementation may include the following steps S2041 to S2044.
S2041, creating a virtual projector in the software project;
The projection picture of the virtual projector is at least as large as the VR screen. In addition, the position and angle of the virtual projector are kept consistent with those of the virtual camera (see fig. 9), so that the user always sees their own first-person perspective and the viewing direction is always perpendicular to the projection picture. This avoids dragging and distortion of the picture projected onto the three-dimensional virtual scene when the user turns the VR glasses in the field.
S2042, creating a digital model matching the target area;
The digital model carries a dedicated shader that receives and processes the image projected by the virtual projector; through this shader, the target real image matching the target area can be determined.
S2043, delivering the real image captured by the VR glasses to the virtual projector, which projects it onto the virtual scene;
S2044, having the digital model receive and display the target real image corresponding to the target area, obtaining a virtual-real fused scene containing the target real image;
The virtual projector projects the real image onto the virtual scene, and the shader of the digital model extracts the target real image corresponding to the target area, as shown in fig. 10, thereby yielding a virtual-real fused scene into which the real image is merged.
S205, rendering the virtual-real fused scene and outputting it to the VR glasses, presenting the virtual-real fused image to the user.
Referring to fig. 11, the user obtains, through the VR glasses, the virtual-real fused image within the first view-angle range.
In this embodiment, what the user sees through the VR glasses is a virtual-real fused image obtained by superimposing the target real image onto the three-dimensional virtual scene. The purely virtual scene is thus turned into a virtual-real fused scene containing real imagery, giving the user both a sense of operation and a sense of visual immersion.
Based on the implementation method shown in fig. 1 and further referring to fig. 3, another exemplary virtual-real fusion implementation method provided by an embodiment of the invention is described, again taking VR glasses as the display device. It includes the following steps:
S301, tracking and locating the real object and the VR glasses in the real scene, obtaining real-object positioning information and glasses positioning information, and transmitting them to the virtual-real fusion processing device;
The real object and the VR glasses are located and tracked by the motion capture system, and after the real-object positioning information and the glasses positioning information have been obtained in real time, they are transmitted to the virtual-real fusion processing device.
Based on the real-object positioning information, the virtual-real fusion processing device ensures that the positions of the real object in the real scene and the virtual object in the virtual scene remain consistent.
Based on the glasses positioning information, the virtual-real fusion processing device ensures that the position of the VR glasses in the real scene is consistent with that of the virtual camera in the virtual scene, so that the viewing angle of the VR glasses matches that of the virtual camera.
Based on both the real-object positioning information and the glasses positioning information, the virtual-real fusion processing device also ensures that the first relative position between the virtual object and the virtual camera in the virtual scene matches the second relative position between the real object and the VR glasses in the real scene.
The motion capture system locates and tracks the real object and the VR glasses in real time from the tracking points arranged on them, and captures, analyzes and processes the user's operations from the tracking points arranged on the user's body.
S302, the virtual-real fusion processing device determines a target area in the virtual scene based on the real-object positioning information;
The target area is used for presenting the virtual-real fused image. It therefore covers the virtual object and may be the area occupied by the virtual object, or slightly larger or smaller than that area.
The virtual scene is preferably acquired before the target area is determined; in a specific implementation it may be acquired before, after or in parallel with step S301.
S303, acquiring a real image of the real scene;
Specifically, the real image of the real scene may be acquired through the VR glasses.
This embodiment again takes a medical training system as an example and, with reference to the exemplary implementations of fig. 8 and figs. 12 to 14, describes how virtual-real fusion is carried out to obtain a virtual-real fused image.
First, a real image of the real scene is captured by a camera built into or attached to the VR glasses, as illustrated in fig. 8.
Again, the camera of the VR glasses is required to have high resolution, with a field of view (FOV) that can approach or match that of the human eye.
S304, determining the target real image corresponding to the target area from the real image;
After the VR glasses capture the real image, the target real image corresponding to the target area is extracted from it; a specific implementation may include the following steps S3041 to S3043.
S3041, creating a mask shader for the target area;
Using the matching correspondence between the real object in the real scene and the virtual object in the virtual scene, the mask shader dynamically generates a black-and-white mask image (mask) from the target area of the virtual scene, as shown in fig. 12. The required target real image (for example, the image of the real object) is then determined within the real image captured by the VR glasses' camera on the basis of the black-and-white mask, and all other unnecessary parts are removed, as shown in fig. 13.
S3042, creating a plane model in the software project;
The position and distance of the plane model are adjusted according to the position and distance of the virtual camera, so that the plane model fills the full screen within the virtual camera's view-angle range.
The plane model is oriented perpendicular to the virtual camera.
The plane model is always placed in front of the other 3D objects in the virtual scene.
S3043, assigning the mask shader to the plane model, so that the plane model retains only the required target real image from the real image;
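In effect, the mask shader composites the camera frame over the virtual render only where the dynamically generated black-and-white mask is white. An offline sketch of that composite with plain arrays (the image shapes and the [0, 1] value range are assumptions made for illustration):

```python
import numpy as np

def composite_with_mask(virtual_rgb, camera_rgb, mask):
    """Keep the camera image where mask == 1 (target area), the virtual scene elsewhere.
    Inputs are HxWx3 colour arrays and an HxW mask, all floats in [0, 1]."""
    m = mask[..., None]                       # HxW -> HxWx1 for broadcasting
    return m * camera_rgb + (1.0 - m) * virtual_rgb

h, w = 4, 4
virtual = np.zeros((h, w, 3))
camera = np.ones((h, w, 3))
mask = np.zeros((h, w)); mask[1:3, 1:3] = 1.0    # target area in the centre
print(composite_with_mask(virtual, camera, mask)[1, 1])   # -> [1. 1. 1.]
```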
S305, rendering the plane model together with the virtual scene and outputting them to the VR glasses, presenting the virtual-real fused image to the user.
Once the target real image has been obtained, it is fused into the target area of the virtual scene to yield the virtual-real fused image. Specifically, referring to fig. 14, the plane model is perpendicular to the virtual camera and retains only the target real image; since it is always in front of the other 3D objects in the virtual scene, the plane model and the virtual scene are rendered together and output to the VR glasses, presenting the virtual-real fused image to the user.
In this embodiment, the virtual-real fused image the user sees through the VR glasses is obtained by superimposing the target real image onto the target area of the three-dimensional virtual scene. The purely virtual scene is thus turned into a virtual-real fused scene containing real imagery, giving the user both a sense of operation and a sense of visual immersion.
On the basis of the implementation method shown in fig. 1 and referring to fig. 4, another exemplary virtual-real fusion implementation method provided by an embodiment of the invention is described, taking AR glasses as the display device. It includes the following steps:
S401, acquiring a virtual scene;
Specifically, the virtual scene may be obtained using the implementations described for step S102. Step S401 may also be performed after step S402 or in parallel with it.
The acquired virtual scene is used to completely cover the real scene.
S402, tracking and locating the real object and the AR glasses in the real scene, obtaining real-object positioning information and glasses positioning information, and transmitting them to the virtual-real fusion processing device;
GPS, a motion capture system, or a combination of both may be used to locate the real object and the AR glasses.
Based on the real-object positioning information, the virtual-real fusion processing device ensures that the positions of the real object in the real scene and the virtual object in the virtual scene are consistent.
Based on the glasses positioning information, the virtual-real fusion processing device ensures that the position of the AR glasses in the real scene is consistent with that of the virtual camera in the virtual scene, and that the viewing angle of the AR glasses matches that of the virtual camera.
Based on both the real-object positioning information and the glasses positioning information, the virtual-real fusion processing device also ensures that the first relative position between the virtual object and the virtual camera in the virtual scene is consistent with the second relative position between the real object and the AR glasses in the real scene.
The user's actions may be captured, analyzed and processed by the motion capture system through tracking points (e.g., reflective marker points) arranged on the user's body.
S403, the virtual-real fusion processing device determines a target area in the virtual scene based on the real-object positioning information;
The target area is used for presenting the virtual-real fused image in which the virtual scene and the real image are fused.
The target area covers the virtual object and may be the area occupied by the virtual object, or slightly larger or smaller than that area.
S404, acquiring the target real image in the real scene, and presenting to the user the virtual-real fused image in which the virtual scene and the target real image are fused;
The motion capture system performs spatial three-dimensional coordinate calculation on the acquired tracking-point positions and the real object's position, obtaining the user's operating position in the digital space (in the virtual scene) to determine whether it lies within the target area.
When the user operates the real object, the real object is within the user's view-angle range; that is, the user's operating position in the digital space (in the virtual scene) is within the target area and within the first view-angle range. The virtual-real fusion processing device can determine the matting region in the virtual scene from the position information provided by the motion capture system, including the real-object positioning information, the glasses positioning information and the positions of other tracking points (e.g., reflective markers on the fingers, elbows, legs, etc.). The virtual image completely covers the real scene; the virtual image of the matting region is then removed (matted out) so that the user can directly see the real image of the real scene through the matting region. In other words, the matting region is not occluded by the virtual scene, so the user's eyes can directly see the target real image within a certain range through it. The user therefore sees a virtual-real fused image in which the virtual scene is fused with the real image.
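For optical see-through AR glasses, "removing" the virtual image in the matting region amounts to rendering those pixels fully transparent (or black on an additive display) so that the real scene shows through. A sketch with an alpha channel, where the rectangular matting region is an assumption made purely for illustration:

```python
import numpy as np

def cut_matting_region(virtual_rgba, region):
    """Zero the alpha of the virtual render inside `region` = (x0, y0, x1, y1)
    so the AR glasses pass the real scene through that area unoccluded."""
    x0, y0, x1, y1 = region
    out = virtual_rgba.copy()
    out[y0:y1, x0:x1, 3] = 0.0          # transparent: the target real image is visible here
    return out

frame = np.ones((480, 640, 4))          # fully opaque virtual scene
see_through = cut_matting_region(frame, (200, 150, 440, 330))
print(see_through[:, :, 3].mean())      # < 1.0: part of the view is now the real scene
```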
The matting region is determined based on the target area and may be the target area itself.
In this embodiment, the user sees a virtual-real fused image through the AR glasses; it is obtained by removing the matting region from the three-dimensional virtual scene so that the remaining three-dimensional virtual image is fused with the real image. The purely virtual scene is thus turned into a virtual-real fused scene containing real imagery, giving the user both a sense of operation and a sense of visual immersion.
The technical solution of the invention can be applied to the training and assessment of medical staff, and also to scenarios such as machine-tool operation, chemical experiments and drug experiments, providing users with both immersion and a sense of operation through virtual-real fusion. The embodiments of this application mainly take medical training/assessment as an example for purposes of description; they do not limit the application scenarios of the technical solution, which can be applied to any scenario requiring virtual-real fusion.
Referring to fig. 5, an embodiment of the present invention provides a system for implementing virtual-real fusion, where the system 500 for implementing virtual-real fusion includes: a virtual-real fusion processing device 501, a display device 502 and a motion capture system 503. The virtual-real fusion processing device 501, the display device 502 and the motion capture system 503 can be connected and communicated with each other through a network (wired, wireless or a combination of both) and/or a cable, so as to implement the virtual-real fusion implementation method provided by the present invention.
The virtual-real fusion processing device 501 may be a personal computer (e.g., desktop computer, notebook computer, etc.), a server, or a mobile terminal (e.g., mobile smartphone, tablet PAD, etc.).
The display device 502 may be a head-on display device or an immersive display device such as a ring screen display, a dome screen display, or the like. The head display equipment can be VR glasses or AR glasses. The display device 502 may include a video input/output interface, a voice input/output interface, and the like.
In one possible implementation, the display device 502 is VR glasses, which include a camera, and may be a novel VR glasses with a camera function that can achieve the purpose of the present invention, or may be an existing VR glasses with a camera attached to the existing VR glasses as required.
In another possible implementation, the display device 502 is AR glasses.
The motion capture system 503 may include an acquisition unit 5031, a digital computation processing unit 5032, a database 5033, and the like. The acquisition unit 5031 includes a tracking point, a tracking capture device, and the like.
The motion capture system may be implemented using optical, inertial sensors, and the like.
In one possible implementation, the motion capture system is implemented optically, and the tracking points may be marker points, such as reflective marker points or luminescent marker points; the tracking and capturing device is a camera and is used for tracking the mark points and collecting the positions of the mark points. The acquisition unit 5031 comprises a plurality of cameras and a plurality of mark points.
The database 5033 may store therein a three-dimensional virtual scene, virtual objects, and the like. The three-dimensional virtual scene can be collected by three-dimensional simulation in advance according to a real scene, and can also be purely fictional; such as a virtual battlefield, a virtual fire, a virtual operating room, a virtual factory floor, a virtual laboratory, etc. The virtual object may be obtained by three-dimensional simulation as required, or may be purely fictitious, for example, a virtual medical device, a virtual medicament, a virtual warrior, a virtual pharmacist, and the like.
The database 5033 may also be used to store the target real images obtained from the real scene. The database 5033 can be local or cloud-based.
The motion capture system 503 is configured to capture and position the real object in the real scene and the display device 502, obtain real object positioning information of the real object and device positioning information of the display device 502, and transmit the real object positioning information and the device positioning information to the virtual-real fusion processing device 501 in real time. The motion capture system 503 is also used to track, capture, analyze and process the user's actions.
The virtual-real fusion processing device 501 is configured to process the real object positioning information and the device positioning information so as to accurately match the virtual scene with the real scene. Specifically, the virtual-real fusion processing device 501 may adopt digital twin technology to replicate (simulate) the real scene as a virtual scene at an accurate 1:1 ratio.
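Purely as an illustrative sketch (not a prescribed implementation), the 1:1 correspondence between the motion-capture (real) frame and the digital-twin (virtual) frame can be expressed as a rigid transform with no scale factor; the calibration values below are hypothetical.

```python
import numpy as np

def make_registration(R, t):
    """Build a 4x4 rigid transform from the capture (real) frame to the twin (virtual) frame.
    Because the twin is a 1:1 replica, no scale factor is applied."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_virtual(T_real_to_virtual, p_real):
    """Map a real-world position (e.g. a tracked prop or the headset) into virtual coordinates."""
    p = np.append(p_real, 1.0)
    return (T_real_to_virtual @ p)[:3]

# Hypothetical calibration: the virtual origin sits 1.5 m in front of the capture origin,
# with the same axis orientation.
T = make_registration(np.eye(3), np.array([0.0, 0.0, -1.5]))
print(to_virtual(T, np.array([0.3, 1.2, 2.0])))   # tracked object position in the twin frame
```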
The virtual-real fusion processing device 501 is further configured to determine a target area in the virtual scene according to the real object positioning information, obtain a target real image corresponding to the target area from the real scene, fuse the virtual scene with the target real image, render the resulting virtual-real fused image, and output it to the display device 502 so that the virtual-real fusion scene is presented to the user.
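For illustration only, one possible way to derive such a target area from the positioning information is to project the tracked real object through the display device's pose into the wearer's view and pad the projection into a rectangular region; the intrinsics and poses in the Python sketch below are hypothetical, and the camera is assumed to look along its +z axis.

```python
import numpy as np

def target_area(obj_pos, head_pos, head_rot, K, half_size_px=120, image_wh=(1920, 1080)):
    """Project the tracked real object into the wearer's view and pad it into a
    rectangular target area (pixel bounds clamped to the image).

    obj_pos, head_pos : 3-vectors in the shared (twin) coordinate frame.
    head_rot          : 3x3 device-to-world rotation of the display device.
    K                 : 3x3 pinhole intrinsics of the rendering camera.
    """
    # Object position expressed in the device (camera) frame.
    p_cam = head_rot.T @ (obj_pos - head_pos)
    if p_cam[2] <= 0:
        return None  # object is behind the wearer; no target area this frame
    uvw = K @ p_cam
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    w, h = image_wh
    x0, y0 = max(0, int(u - half_size_px)), max(0, int(v - half_size_px))
    x1, y1 = min(w, int(u + half_size_px)), min(h, int(v + half_size_px))
    return (x0, y0, x1, y1)

# Hypothetical values for illustration.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
print(target_area(np.array([0.1, -0.2, 1.0]), np.zeros(3), np.eye(3), K))  # (940, 220, 1180, 460)
```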
The motion capture system 503 may be an existing motion capture system that meets the requirements of the present invention, or may be a dedicated motion capture system customized according to the requirements of the present invention.
The virtual-real fusion implementation system 500 may also include other parts not shown in the figures, including but not limited to presentation devices (planar/3D, holographic, or VR/AR presentation), GPS devices, and the like. A presentation device may be a stand-alone display apparatus, such as a flat-panel display, a 3D display, or a holographic projection device, or it may be part of the virtual-real fusion processing device 501 or the display device 502, such as the screen of a handheld smartphone or of a head-mounted display (VR/AR device).
The virtual-real fusion processing device 501, the display device 502, and the motion capture system 503 cooperate with each other to execute the virtual-real fusion implementation method shown in fig. 1 to 4 according to an embodiment of the present invention, and the specific process thereof is referred to above and is not described herein again.
The virtual-real fusion implementation system provided by the embodiment of the invention realizes the fusion of the virtual scene and the real image through the cooperation of the virtual-real fusion processing equipment, the display equipment and the motion capture system, changes the pure virtual scene into a virtual-real fusion scene fusing the real image, and provides the user with the experience of interactive operation sense and picture immersion sense.
Fig. 6 is a schematic structural diagram of an electronic device for deploying and/or executing the virtual-real fusion implementation method/system of the present invention. The electronic device 600 typically has one or more processors 601, a memory 602, an input/output (I/O) interface 603, and a network communication interface 604, as well as other components not shown in the figure, including but not limited to a camera, a power system, a GPS positioning module, adjustment keys, and the like.
These components are interconnected by a communication bus, as shown in fig. 6. The communication bus includes circuitry (a chipset) that interconnects and controls communications between the system components.
The I/O interface 603 may include a display screen, such as a touch-sensitive display screen. The I/O interface 603 may also include one or more of audio/video input/output interfaces, keyboard/mouse input devices, touch pads, and sensors (e.g., optical sensors, acceleration sensors, gyroscopes, touch-sensitive sensors, etc.).
The memory 602 includes random access memory such as DRAM, SRAM, DDR RAM, and the like; it may also include flash memory, magnetic disks, or other non-volatile storage media.
The memory 602 is also used to store computer programs, modules, instructions, data structures, data, or subsets thereof that are executed by the electronic device 600. Fig. 6 optionally shows, among other things, an operating system, a communication module, a graphics module, a voice module, a video module, a haptic feedback module, a system state, and one or more application programs (APPs). An application program (APP) may be, for example, a chemical experiment simulation program, a Chinese medicine inspection/diagnosis training program, or a workshop operation training program provided to the user.
The network communication interface 604 is used by the electronic device 600 to communicate with a network, sending and receiving data over it. The network communication interface 604 may be a wired network communication interface, a wireless network communication interface, or a combination of both.
The processor 601 runs the computer program stored in the memory 602 to implement the operations and steps related to the virtual-real fusion implementation method provided in the embodiment of the present invention, and specific details can be referred to as those shown in fig. 1 to fig. 4, which are not described herein again.
Fig. 7 is a schematic structural diagram of another electronic device for deploying and/or executing the virtual-real fusion implementation method/system of the present invention. The electronic device 700 includes a positioning information obtaining module 701, a target area determining module 702, a real image obtaining module 703 and a fusion processing module 704; wherein,
the positioning information obtaining module 701 is configured to position a real object and a display device in a real scene, and obtain real object positioning information and device positioning information;
the target area determining module 702 is configured to determine a target area in a virtual scene based on the real object positioning information, where the target area is used to present a virtual-real fused image;
the real image acquiring module 703 is configured to acquire a target real image in the real scene, where the target real image matches the target area;
the fusion processing module 704 is configured to fuse the target real image and the virtual scene to obtain a virtual-real fusion image, and present the virtual-real fusion image to a user through the display device.
The positioning information acquisition module 701 may include a motion capture function and may further include a GPS function.
The electronic device 700 may further comprise a virtual scene acquisition module 705 for acquiring the virtual scene based on a digital twinning technique before the target region is determined by the target region determination module 702.
In one possible implementation, the display device is VR glasses, and the real image obtaining module 703 is specifically configured to capture a real image through a camera device (e.g., a camera) of the display device and to determine, within that real image, the target real image corresponding to the target area; the fusion processing module 704 is specifically configured to fuse the target real image into the target area of the virtual scene to obtain the virtual-real fused image.
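As a minimal, non-limiting sketch of this VR case (assuming the camera frame and the rendered virtual frame share the same resolution and viewpoint), the fusion can be as simple as copying the target crop of the camera frame into the virtual render:

```python
import numpy as np

def fuse_vr(virtual_frame, camera_frame, area):
    """Copy the target real image (a crop of the headset camera frame) into the
    rendered virtual frame at the target area. Both frames are assumed to be
    HxWx3 arrays of the same resolution, already aligned to the wearer's view."""
    x0, y0, x1, y1 = area
    fused = virtual_frame.copy()
    fused[y0:y1, x0:x1] = camera_frame[y0:y1, x0:x1]
    return fused

# Hypothetical 1080p RGB frames: a grey virtual render and a black camera image.
virtual_frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)
camera_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(fuse_vr(virtual_frame, camera_frame, (940, 220, 1180, 460)).shape)  # (1080, 1920, 3)
```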
In another possible implementation, the display device is AR glasses, and the virtual scene obtaining module 705 is further configured to completely cover the real scene with the obtained virtual scene; the real image obtaining module 703 is specifically configured to determine a matting region of the virtual scene according to the target area, and to remove the virtual image in the matting region, so as to obtain the target real image.
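A correspondingly minimal sketch of the AR case (again illustrative only, assuming an RGBA virtual render whose alpha channel controls how much of the real scene the glasses let through) zeroes the alpha inside the matting region so that the real scene becomes visible there:

```python
import numpy as np

def cut_matting_region(virtual_rgba, area):
    """Remove the virtual imagery inside the matting region by zeroing its alpha,
    so the AR glasses show the real scene through that region while the rest of
    the view stays fully covered by the virtual scene."""
    x0, y0, x1, y1 = area
    out = virtual_rgba.copy()
    out[y0:y1, x0:x1, 3] = 0        # fully transparent -> real world visible
    return out

# Hypothetical fully opaque virtual render (RGBA, 1080p).
virtual_rgba = np.dstack([
    np.full((1080, 1920, 3), 200, dtype=np.uint8),   # RGB
    np.full((1080, 1920), 255, dtype=np.uint8),      # alpha
])
print(cut_matting_region(virtual_rgba, (940, 220, 1180, 460))[:, :, 3].min())  # 0
```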
All of the module functions of the electronic device may be implemented by a single stand-alone device, or jointly by a plurality of devices.
The electronic device may further include other functional modules not shown in the drawings, corresponding to the different possible implementations of the virtual-real fusion implementation method shown in figs. 1 to 4, which are not described herein again.
In view of the above, embodiments of the invention also provide a non-transitory computer readable storage medium comprising one or more programs for execution by one or more processors of an electronic device, the one or more programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform a method according to any of the preceding embodiments.
In view of the above, an embodiment of the present invention further provides an electronic device, which includes a processing unit configured to execute any one of the virtual-real fusion implementation methods and processes described herein.
In view of the above, embodiments of the present invention also provide an electronic device including one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the virtual-real fusion implementation methods and processes described herein.
It should be noted that the above embodiments are intended to enable a person skilled in the art to understand the invention more fully, and do not restrict it in any way. Therefore, although the present invention has been described in detail with reference to the drawings and examples, it will be understood by those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention. The scope of the invention is to be determined by the claims.
Claims (10)
1. A method for realizing virtual-real fusion is characterized by comprising the following steps:
tracking and positioning a real object and a display device in a real scene, and acquiring real object positioning information and device positioning information in real time;
determining a target area in a virtual scene based on the real object positioning information;
acquiring a target real image in the real scene, wherein the target real image corresponds to the target area;
and fusing the target real image and the virtual scene to obtain a virtual-real fused image, and presenting the virtual-real fused image to a user through the display device.
2. The implementation method of claim 1, wherein the acquiring the target real image in the real scene comprises:
obtaining a first view angle range in the virtual scene according to the device positioning information, wherein the device positioning information comprises the position and the angle of the display device;
and determining the target real image corresponding to the target area according to the first view angle range.
3. The method of claim 1, further comprising:
acquiring the virtual scene before determining the target area.
4. The implementation method of claim 3, wherein the acquiring the virtual scene comprises:
applying digital twin technology to the real scene to accurately replicate it and generate the virtual scene;
or, applying digital twin technology to the real scene to replicate and generate an original virtual scene, and adding virtual imagery to the original virtual scene according to customer requirements to obtain the virtual scene;
or, selecting from a database, based on the three-dimensional space of the real scene, a three-dimensional scene that meets the customer requirements as the virtual scene.
5. The implementation method of any one of claims 1 to 4, wherein the display device is a Virtual Reality (VR) head-mounted display;
the acquiring of the target real image in the real scene includes:
capturing a real image of the real scene through a camera device of the display device;
and determining the target real image in the real image based on the target area.
6. The implementation method of any one of claims 1 to 4, wherein the display device is Augmented Reality (AR) glasses;
the acquiring of the target real image in the real scene includes:
determining a matting region of the virtual scene based on the target area, wherein the virtual scene completely covers the real scene in the AR glasses;
and removing the virtual image in the matting region, so as to obtain the target real image through the matting region.
7. An electronic device, comprising:
the positioning information acquisition module is used for positioning the real object and the display device in the real scene to acquire real object positioning information and device positioning information;
a target area determining module, configured to determine a target area corresponding to a virtual scene based on the real object positioning information, where the target area is used to present a virtual-real fused image;
a real image acquisition module, configured to acquire a target real image in the real scene, where the target real image corresponds to the target area;
and the fusion processing module is used for fusing the target real image and the virtual scene to obtain a virtual-real fused image and presenting the virtual-real fused image to a user through the display device.
8. The electronic device of claim 7, further comprising: and the virtual scene acquisition module is used for acquiring the virtual scene before the target area is determined.
9. An electronic device comprising a processor, a memory, an input/output (I/O) interface, and a network communication interface;
the processor, the memory, the I/O interface, and the network communication interface are interconnected;
the processor executes a computer program stored in the memory to implement the virtual-real fusion implementation method of any one of claims 1 to 6.
10. A virtual-real fusion implementation system is characterized by comprising a virtual-real fusion processing device, a display device and a motion capture system;
the virtual-real fusion processing apparatus includes the electronic apparatus of any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011252695.4A CN112346572A (en) | 2020-11-11 | 2020-11-11 | Method, system and electronic device for realizing virtual-real fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112346572A (en) | 2021-02-09
Family
ID=74363192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011252695.4A Pending CN112346572A (en) | 2020-11-11 | 2020-11-11 | Method, system and electronic device for realizing virtual-real fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112346572A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130222647A1 (en) * | 2011-06-27 | 2013-08-29 | Konami Digital Entertainment Co., Ltd. | Image processing device, control method for an image processing device, program, and information storage medium |
CN106055113A (en) * | 2016-07-06 | 2016-10-26 | 北京华如科技股份有限公司 | Reality-mixed helmet display system and control method |
CN106791784A (en) * | 2016-12-26 | 2017-05-31 | 深圳增强现实技术有限公司 | Augmented reality display methods and device that a kind of actual situation overlaps |
CN110505464A (en) * | 2019-08-21 | 2019-11-26 | 佳都新太科技股份有限公司 | A kind of number twinned system, method and computer equipment |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113066192A (en) * | 2021-04-25 | 2021-07-02 | 中国人民解放军陆军军医大学第一附属医院 | Real-time masking method in full-virtual environment based on AR imaging |
CN113160648A (en) * | 2021-04-25 | 2021-07-23 | 中国人民解放军陆军军医大学第一附属医院 | Disaster emergency training method based on motion capture positioning and scene simulation |
CN114023126A (en) * | 2021-10-13 | 2022-02-08 | 徐州工程学院 | Simulation teaching factory for aniline production |
CN114153214A (en) * | 2021-12-02 | 2022-03-08 | 浙江科顿科技有限公司 | MR/AR/VR message leaving and creating scene control method, mobile terminal and readable storage medium |
CN114153315A (en) * | 2021-12-02 | 2022-03-08 | 浙江科顿科技有限公司 | Augmented reality distributed server intelligent glasses system and control method |
CN114185433A (en) * | 2021-12-02 | 2022-03-15 | 浙江科顿科技有限公司 | Intelligent glasses system based on augmented reality and control method |
CN114419293A (en) * | 2022-01-26 | 2022-04-29 | 广州鼎飞航空科技有限公司 | Augmented reality data processing method, device and equipment |
CN114785909A (en) * | 2022-04-25 | 2022-07-22 | 歌尔股份有限公司 | Shooting calibration method, device, equipment and storage medium |
CN115346413A (en) * | 2022-08-19 | 2022-11-15 | 南京邮电大学 | Assembly guidance method and system based on virtual-real fusion |
CN115346413B (en) * | 2022-08-19 | 2024-09-13 | 南京邮电大学 | Assembly guidance method and system based on virtual-real fusion |
CN115937626A (en) * | 2022-11-17 | 2023-04-07 | 郑州轻工业大学 | Automatic generation method of semi-virtual data set based on instance segmentation |
CN115937626B (en) * | 2022-11-17 | 2023-08-08 | 郑州轻工业大学 | Automatic generation method of paravirtual data set based on instance segmentation |
CN116030684A (en) * | 2023-03-03 | 2023-04-28 | 广州禧闻信息技术有限公司 | Virtual reality technology-based interactive training system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112346572A (en) | Method, system and electronic device for realizing virtual-real fusion | |
US11730545B2 (en) | System and method for multi-client deployment of augmented reality instrument tracking | |
US10674142B2 (en) | Optimized object scanning using sensor fusion | |
US9654734B1 (en) | Virtual conference room | |
CN110954083B (en) | Positioning of mobile devices | |
CN103180893B (en) | For providing the method and system of three-dimensional user interface | |
JP6364022B2 (en) | System and method for role switching in a multiple reality environment | |
JP6515813B2 (en) | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM | |
Pfeiffer | Measuring and visualizing attention in space with 3D attention volumes | |
KR101930657B1 (en) | System and method for immersive and interactive multimedia generation | |
CN109801379B (en) | Universal augmented reality glasses and calibration method thereof | |
US20190371072A1 (en) | Static occluder | |
CN108629830A (en) | A kind of three-dimensional environment method for information display and equipment | |
CN102959616A (en) | Interactive reality augmentation for natural interaction | |
CN105374251A (en) | Mine virtual reality training system based on immersion type input and output equipment | |
CN104536579A (en) | Interactive three-dimensional scenery and digital image high-speed fusing processing system and method | |
US10582190B2 (en) | Virtual training system | |
Saggio et al. | Augmented reality for restoration/reconstruction of artefacts with artistic or historical value | |
CN113678173A (en) | Method and apparatus for graph-based placement of virtual objects | |
CN114546125B (en) | Keyboard tracking method and tracking system | |
CN117826976A (en) | XR-based multi-person collaboration method and system | |
Mathi | Augment HoloLens’ Body Recognition and Tracking Capabilities Using Kinect | |
Siegl et al. | An augmented reality human–computer interface for object localization in a cognitive vision system | |
KR20220083552A (en) | Method for estimating and correcting 6 DoF of multiple objects of wearable AR device and AR service method using the same | |
Mamdouh et al. | Using Azure to construct recent architecture for visualize training in real-time |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |