WO2018058601A1 - Method and system for fusing virtuality and reality, and virtual reality device - Google Patents

Method and system for fusing virtuality and reality, and virtual reality device Download PDF

Info

Publication number
WO2018058601A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
realistic
rendering
user
real
Prior art date
Application number
PCT/CN2016/101255
Other languages
French (fr)
Chinese (zh)
Inventor
骆磊
Original Assignee
深圳达闼科技控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳达闼科技控股有限公司 filed Critical 深圳达闼科技控股有限公司
Priority to PCT/CN2016/101255 priority Critical patent/WO2018058601A1/en
Priority to CN201680002728.5A priority patent/CN107077755B/en
Publication of WO2018058601A1 publication Critical patent/WO2018058601A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering

Definitions

  • Embodiments of the present application relate to the field of virtual reality (VR), and in particular, to a method and system for superimposing rendered content related to a real element in a virtual reality display environment.
  • VR: virtual reality
  • AR: augmented reality
  • VR presents completely virtual content to the user. Wearing a head-mounted VR device, the user is immersed in an entirely virtual world that can be seen and heard but not perceived through smell, taste, or touch, and is completely isolated from reality.
  • Augmented reality (AR), by contrast, uses a semi-transparent lens: the user can see the real world while virtual characters or objects are displayed on the lens, so that virtual content blends into the real environment and the user sees a small amount of virtual content appear in the real world.
  • existing head mounted VR devices typically include a head mounted display and a VR content generating device.
  • the head mounted display can be worn on the user's head and provide the user with an immersive field of view of the virtual scene.
  • the head mounted display also includes sensors for head positioning.
  • the VR content generation device includes a calculation module, a storage module, and a head positioning module.
  • the head positioning module obtains data in real time from the head positioning sensors in the head mounted display; after processing by sensor fusion algorithms, it can derive the user's current head pose.
  • the VR content generating device obtains the current head pose from the head positioning module and the material required for rendering the virtual scene from the storage module; after processing by the calculation module, it renders the virtual scene from the viewpoint of the current head pose and displays it to the user through the head mounted display.
  • Head-mounted displays and VR content-generating devices may be integrated into one unit (as in all-in-one mobile VR headsets) or connected by a display data cable such as HDMI (as in the HTC Vive).
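As background, the conventional pipeline just described (sensor data, fused head pose, scene rendered from that pose) can be summarized in a short sketch. This is a minimal illustration under assumed inputs, not the application's implementation; the sensor-fusion and rendering functions are stand-ins for whatever tracking and graphics stack a real headset uses.

```python
import math

def fuse_head_sensors(gyro, accel, prev_yaw, dt=1 / 90):
    """Stand-in for the sensor-fusion step: derive a (yaw, pitch, roll) head pose.

    A real headset would run a complementary or Kalman filter over IMU/optical data;
    this toy version integrates the gyro yaw rate and tilts from the accelerometer.
    """
    yaw = prev_yaw + gyro[2] * dt
    pitch = math.atan2(-accel[0], math.hypot(accel[1], accel[2]))
    roll = math.atan2(accel[1], accel[2])
    return yaw, pitch, roll

def render_virtual_scene(assets, head_pose):
    """Stand-in for the calculation module: render the scene from the head pose."""
    return f"frame from pose {tuple(round(a, 3) for a in head_pose)} using {assets}"

assets = ["terrain", "sky", "props"]                      # material from the storage module
yaw = 0.0
for _ in range(3):                                        # one iteration per display refresh
    gyro, accel = (0.0, 0.0, 0.2), (0.0, 0.0, 9.8)        # samples from the head-positioning sensors
    yaw, pitch, roll = fuse_head_sensors(gyro, accel, yaw)
    print(render_virtual_scene(assets, (yaw, pitch, roll)))   # shown on the head-mounted display
```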
  • the applicant found that existing head-mounted VR devices can present virtual scenes only in a fixed mode and cannot adjust the degree of rendering according to the user's wishes. The technical problem addressed by the embodiments of the present application is therefore to provide a new virtual reality experience method and system in which the degree of rendering is adjusted according to the user's wishes.
  • the method and system superimpose and fuse real elements of the user's surrounding environment onto the virtual content of the virtual reality system, with an adjustable degree of virtualization, bringing the user a new VR experience.
  • one technical solution adopted by the embodiments of the present application is to provide a method for fusing virtuality and reality, including:
  • acquiring a realistic three-dimensional panoramic image of the real environment around the user; rendering that image according to a rendering virtual degree selected by the user; and outputting the rendered result to the virtual reality display module.
  • another technical solution is a system for fusing virtuality and reality, including a virtual reality system for presenting a virtual three-dimensional scene to the user, and a reality fusion module, wherein
  • the reality fusion module is configured to acquire a realistic three-dimensional panoramic image of the real environment around the user; the virtual reality system is configured to render the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user; and the virtual reality system is further configured to output the rendered result to the virtual reality display module.
  • an electronic device including a processor module, and an interaction module and a virtual reality display module connected to the processor module;
  • the processor module is configured to acquire a realistic three-dimensional panoramic image according to an image of a surrounding environment of the user, and render the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user in the interaction module;
  • the processor module is further configured to output the rendered result to the virtual reality display module.
  • the electronic device is preferably a virtual reality head-mounted device that further comprises a helmet and a reality capture module.
  • the reality capture module is configured to capture images of the user's surroundings from multiple angles, so that when the user puts on the virtual reality head-mounted device, the user is well immersed in the virtual display environment that is created, improving the user experience.
  • the interaction module includes a quantity adjustment interaction device, wherein the processor module is further configured to identify and extract the real elements in the realistic three-dimensional panoramic image and to acquire, from the quantity adjustment interaction device, a first rendering virtual degree selected by the user.
  • the first rendering virtual degree is used to indicate the proportion of real elements that the user wishes to retain; the processor module processes the real elements in the realistic three-dimensional panoramic image according to the acquired first rendering virtual degree.
  • to ensure that the user can use the device safely, the real elements include security reality elements (safety-related elements of the real environment, such as steps, walls, or rivers).
  • the processor module is further configured to retain at least a portion of the security reality elements in the realistic three-dimensional panoramic image when the first rendering virtual degree indicates that the proportion of real elements the user wishes to retain is non-zero.
  • the interaction module further includes a rendering virtuality interaction device, wherein the processor module is further configured to identify and extract the real elements in the realistic three-dimensional panoramic image and to acquire, from the rendering virtuality interaction device, a second rendering virtual degree selected by the user.
  • the second rendering virtual degree is used to indicate the degree to which each real element is rendered (virtualized); each real element in the realistic three-dimensional panoramic image is rendered according to the acquired second rendering virtual degree.
  • to provide an immersive VR experience, the processor module is also configured to determine the sound rendering corresponding to the virtual three-dimensional scene according to the security reality elements and the other real elements.
  • the reality capture module includes a plurality of cameras disposed in different orientations of the helmet.
  • as one implementation, the interaction module includes a knob and an analog-to-digital conversion device; the analog-to-digital conversion device is coupled to the knob and the processor module and generates a digital signal corresponding to the rotation angle of the knob, which it outputs to the processor module.
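As an illustration of how such a knob might be read, the sketch below maps a raw reading from the analog-to-digital conversion device to a rendering virtual degree between 0 and 100. The ADC resolution and the linear scaling are assumptions made for the example, not values taken from the application.

```python
def adc_to_virtual_degree(adc_value, adc_bits=10):
    """Map a raw ADC reading of the knob angle to a 0-100 rendering virtual degree.

    adc_value: integer sample from the analog-to-digital conversion device.
    adc_bits:  assumed ADC resolution (10 bits here, i.e. readings 0..1023).
    """
    full_scale = (1 << adc_bits) - 1
    adc_value = max(0, min(adc_value, full_scale))      # clamp noisy readings
    return round(100 * adc_value / full_scale)

# Example: the processor module polls the ADC and updates the selected degree.
for raw in (0, 512, 1023):
    print(raw, "->", adc_to_virtual_degree(raw))        # 0, 50, 100
```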
  • the beneficial effect of the embodiments of the present application is that images of the surrounding real environment can be rendered according to the rendering virtual degree selected by the user; letting the user choose the degree of virtualization helps improve the user experience.
  • FIG. 1 is a block diagram of a virtual reality integration system according to an embodiment of the present application.
  • FIG. 2 is a perspective view of a virtual reality integration system according to an embodiment of the present application.
  • FIG. 3 is a perspective rear view of a virtual reality integration system according to an embodiment of the present application.
  • FIG. 4 is a flowchart of an embodiment of a virtual and reality fusion method according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of hardware of a virtual reality integration system according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device in a virtual and reality fusion method according to an embodiment of the present application.
  • To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in further detail below with reference to the accompanying drawings; the illustrative embodiments and their descriptions explain the application and are not intended to limit it.
  • FIG. 1 is a schematic diagram of an implementation environment. It applies to an electronic device that contains a processor module 100, a virtual reality display module 200, a reality capture module capable of capturing the surrounding real environment, and an interaction module 300 that interacts with the user to determine the degree of virtualization the user has selected.
  • the components included in the electronic device can be connected through communication interfaces (such as I/O interfaces).
  • the reality capture module can be a single forward-facing three-dimensional image recognition sensor capable of real-time modeling, or several such sensors facing different directions (similar devices are already on the market). If only one forward-facing three-dimensional image recognition sensor is used, the real-time requirements are extreme: turning the head demands modeling and rendering with essentially no latency, and the demands on the field of view are very high.
  • if, instead, several three-dimensional image recognition sensors are used to stitch a realistic three-dimensional panoramic image, this is equivalent to constructing a spatial model, and turning the head merely displays another part of that space.
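The sketch below illustrates the stitching idea behind the multi-sensor case: points from each sensor are expressed in that sensor's frame and transformed by a calibrated sensor-to-helmet pose into one shared coordinate system, so turning the head only changes which part of the merged model is displayed. The poses and point data are invented for the example.

```python
import numpy as np

def stitch_point_clouds(clouds, poses):
    """Merge per-sensor point clouds (N_i x 3 arrays) into one helmet-frame cloud.

    clouds: list of arrays of 3D points in each sensor's local frame.
    poses:  list of 4x4 homogeneous transforms from sensor frame to helmet frame
            (assumed known from calibration).
    """
    merged = []
    for pts, pose in zip(clouds, poses):
        homog = np.hstack([pts, np.ones((len(pts), 1))])   # N x 4 homogeneous points
        merged.append((homog @ pose.T)[:, :3])             # transform into the helmet frame
    return np.vstack(merged)

def pose(yaw_deg, tx, ty, tz):
    """Build a simple yaw-rotation + translation transform for the example."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

front = np.array([[0.0, 1.0, 0.0], [0.1, 1.2, 0.0]])       # points seen by the front sensor
right = np.array([[0.0, 0.8, 0.0]])                        # points seen by the right sensor
panorama = stitch_point_clouds([front, right], [pose(0, 0, 0.05, 0), pose(-90, 0.05, 0, 0)])
print(panorama.round(3))
```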
  • referring also to FIG. 5, the processor module in this embodiment may include a central processing module (comprising a central processing unit CPU 520 and a storage medium 530 connected to it), and may further include an image processing module (comprising a graphics processor GPU 510 and a storage medium 530 coupled to the GPU 510).
  • the latter case is shown in FIG. 1.
  • the virtual and reality fusion system of this embodiment may be a software system stored in a storage medium of the processor module described above. Referring to the figure, the system can include a reality fusion module and a virtual reality system.
  • the reality fusion module acquires a realistic three-dimensional panoramic image of a real environment around the user.
  • the reality fusion module can pre-acquire images of real environments around the user captured by multiple cameras.
  • the plurality of cameras capture images in the same stereo coordinate system so that a three-dimensional panoramic image can be stitched together.
  • the virtual reality system renders the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user; the virtual reality system outputs the rendered result to the virtual reality display module.
  • the virtual reality system identifies and extracts realistic elements in the realistic three-dimensional panoramic image.
  • the virtual reality system provided by the embodiments of the present application can be applied to a device such as a helmet.
  • when it is used in such a device, the user can walk around while wearing the helmet, and the virtual reality system renders the imagery in front of the user in real time according to the helmet the user is wearing.
  • in this case, to ensure the user's safety, the implementation can proceed as follows:
  • the virtual reality system retains at least a portion of the security reality elements in the realistic three-dimensional panoramic image when the first rendering virtual level indicates that the proportion of the real-world elements that the user wishes to retain is non-zero.
  • the first rendering virtual level is used to indicate the proportion of real-world elements that the user wishes to retain.
  • the real elements in the realistic three-dimensional panoramic image are processed according to the acquired first rendering virtual degree.
  • in addition to rendering according to the first rendering virtual degree, each retained real element may also be rendered according to a second rendering virtual degree, which indicates the degree to which that element is rendered (virtualized).
  • the corresponding rendering process can include: identifying and extracting the real elements in the realistic three-dimensional panoramic image; acquiring the second rendering virtual degree selected by the user; and rendering each real element according to the acquired second rendering virtual degree.
  • this lets the user independently choose how strongly each real element is virtualized; in a specific implementation, the real elements may also be rendered according to the second rendering virtual degree alone.
  • in one implementation, the processor module acquires the second rendering virtual degree selected by the user, and the image processing module renders each real element in the realistic three-dimensional panoramic image according to it.
  • the extraction and rendering of the real elements can be performed entirely by the central processing module, entirely by the image processing module, or distributed between the central processing module and the image processing module.
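A minimal sketch of the per-element rendering step, assuming the second rendering virtual degree is expressed on a 0-100 scale and applied as a blend weight between the captured appearance and a virtual substitute; the element names and texture handles are placeholders, not the application's data structures.

```python
def virtualize(element_name, real_texture, virtual_texture, second_degree):
    """Blend one real element toward its virtual substitute.

    second_degree: 0..100; 0 keeps the element fully real, 100 makes it fully virtual.
    The texture arguments are stand-ins (strings) for whatever surface data the renderer uses.
    """
    alpha = max(0, min(second_degree, 100)) / 100
    return {"element": element_name,
            "real_weight": round(1 - alpha, 2),
            "virtual_weight": round(alpha, 2),
            "surfaces": (real_texture, virtual_texture)}

# Each extracted real element can carry its own degree if the interaction device allows it.
for name, degree in [("building", 100), ("car", 60), ("step", 0)]:
    print(virtualize(name, f"{name}_scan", f"{name}_virtual_model", degree))
```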
  • the traditional virtual reality system includes a central processing module, a rendering module, a sensor, an interaction module, and a virtual reality display module.
  • One of the embodiments of the virtual reality display module is VR imaging glasses.
  • the interaction module aggregates the data of all sensors; the user's behavior and interaction instructions reach the interaction module and the central processing module through the sensors.
  • the central processing module renders the virtual content through the rendering module and displays it three-dimensionally on the VR imaging glasses according to the user's instructions at the interaction module.
  • the VR imaging glasses use two discrete display devices, the same as traditional VR imaging glasses.
  • compared with the traditional system, the virtual reality system in the embodiments of the present application additionally provides a reality capture module for acquiring a realistic three-dimensional panoramic image of the user's surroundings, an interaction module for interacting with the user, and a dedicated image processing module.
  • the reality fusion module includes several reality capture modules for acquiring realistic three-dimensional panoramic images of the real environment around the user, an interaction module for adjusting the quantity of fused real elements and their degree of virtuality, and a dedicated image processing module.
  • the several reality capture modules (three-dimensional modeling modules whose viewing angles together cover the surroundings) capture simultaneously, and their outputs are stitched to form a realistic three-dimensional panoramic image.
  • the reality capture module is a three-dimensional image recognition sensor 540.
  • the reality fusion module comprises a real element extraction module, a fusion matching module, an image processing module for fusing images, and a sound processing module for fusing sounds.
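To make the division of labor concrete, here is a hedged structural sketch of how these sub-modules could be wired together; the class and method names are illustrative and not taken from the application.

```python
class RealElementExtractor:
    def extract(self, panorama):
        """Split the recognized content of the panorama into security and other elements."""
        return ([e for e in panorama if e.get("security")],
                [e for e in panorama if not e.get("security")])

class FusionMatcher:
    def rank(self, elements, virtual_scene):
        """Order elements by how well they fit the current virtual content (toy score)."""
        # A real matcher would compare each element's 3D model against the scene's models.
        return sorted(elements, key=lambda e: e.get("match", 0), reverse=True)

class ImageFuser:
    def fuse(self, scene, elements):
        return f"{scene} + images of {[e['name'] for e in elements]}"

class SoundFuser:
    def fuse(self, scene, elements):
        return f"{scene} + sounds for {[e['name'] for e in elements]}"

class RealityFusionModule:
    """Wires the sub-modules together, mirroring the description above."""
    def __init__(self):
        self.extractor, self.matcher = RealElementExtractor(), FusionMatcher()
        self.image_fuser, self.sound_fuser = ImageFuser(), SoundFuser()

    def fuse(self, panorama, virtual_scene):
        security, others = self.extractor.extract(panorama)
        chosen = security + self.matcher.rank(others, virtual_scene)[:2]   # top 2 just for the demo
        return (self.image_fuser.fuse(virtual_scene, chosen),
                self.sound_fuser.fuse(virtual_scene, chosen))

panorama = [{"name": "wall", "security": True}, {"name": "sofa", "match": 0.9}, {"name": "tv", "match": 0.3}]
print(RealityFusionModule().fuse(panorama, "castle scene"))
```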
  • the interaction module lets the user, on the virtual reality head-mounted device, select the rendering virtual degree of the realistic three-dimensional panoramic image or of individual real elements.
  • the virtual reality head-mounted device includes a helmet 1, a central processing module 2, an earphone holder 3, a data line 4, a power port 5, five reality capture modules 6-10, a microphone 20,
  • a rendering virtuality interaction device 30, and a real element number interaction device 40.
  • the central processing module receives the signals from all of the reality capture modules 6-10, the sensors 540, the peripheral devices, the rendering virtuality interaction device 30, the real element number interaction device 40, and the content source, and transmits the processed signals to output devices such as the VR imaging glasses 550 and the headphones. Its processing includes, for example, stitching the images from the individual reality capture modules and updating the displayed image after the head rotates.
  • the reality capture modules 6-10 acquire a realistic three-dimensional panoramic image of the real environment around the user; the image processing module renders the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user; and the central processing module outputs the rendered result to the virtual reality display module.
  • the reality capture module 6-10 identifies and extracts the real elements in the realistic three-dimensional panoramic image, and the central processing module can also perform the recognition and extraction of the real elements.
  • to let the user select how many objects are rendered, the interaction module includes a quantity adjustment interaction device, through which the user sets the first rendering virtual degree.
  • the image processing module retains at least a portion of the security real-life elements in the realistic three-dimensional panoramic image when the first rendering virtual level indicates that the proportion of the real-world elements that the user wishes to retain is non-zero.
  • the interaction module further includes a rendering virtuality interaction device by which the user completes the setting of the second rendering virtual degree.
  • the central processing module acquires, from the rendering virtuality interaction device, the second rendering virtual degree selected by the user, which indicates the rendering degree of each real element.
  • the sound processing module is further configured to determine a sound rendering corresponding to the virtual three-dimensional scene according to the security reality element and the other real-life elements.
  • the first rendering virtual degree or the second rendering virtual degree is determined by a ratio selected by the user on the interactive interface.
  • the reality capture modules 6-10 of this embodiment can measure depth of field and construct a three-dimensional model in real time; the number actually used depends on the viewing angle and processing capability of a single reality capture module.
  • this embodiment takes several real-time-modeling three-dimensional image recognition sensors as an example, distributed over five positions (the top of the helmet 1 and the front, rear, left, and right directions) according to the viewing-angle coverage requirements.
  • the reality capture module is connected to the image recognition module.
  • the image recognition module identifies all of the realistic elements of the stitched realistic three-dimensional panorama.
  • the three-dimensional coordinates of the real elements are consistent with their coordinates in the three-dimensional panorama.
  • the virtual reality system can display a virtual three-dimensional scene for the user.
  • the reality fusion module acquires a realistic three-dimensional panoramic image of the real environment around the user, and the image recognition module identifies all the real elements in the realistic three-dimensional panoramic image.
  • the Reality Element Extraction module extracts security reality elements from all real-world elements.
  • the realistic elements include security reality elements and other realistic elements selected by the user.
  • according to the security reality elements and the other real elements selected by the user, the image processing module of the reality fusion module fuses the security reality elements and the selected other real elements, with the corresponding image rendering, into the virtual three-dimensional scene of the virtual reality system.
  • the reality fusion module also includes a sound processing module.
  • the sound processing module is configured to determine a sound rendering corresponding to the virtual three-dimensional scene according to the security reality element and other real-life elements, and implement the fused sound.
  • the setting of the second rendering virtual degree is described below.
  • the virtual rendering of the security reality elements and other selected real-world elements can be set by the user through the interaction module.
  • the interaction module includes the rendering virtuality interaction device 30; according to the rendering virtual degree that the user selects for the real elements on the rendering virtuality interaction device 30, the reality fusion module performs image rendering corresponding to the selected degree of virtuality on all selected real elements in the virtual three-dimensional scene of the virtual reality system.
  • the rendering virtuality interaction device 30 is used by the user to adjust the rendering virtuality/trueness of the fused real-world elements. It can be a hardware knob or a software knob. In the software knob embodiment, the user implements the selection through the interaction module. The following is an example of a hardware knob.
  • the rendering virtuality interaction device 30 provides a number of gear positions for the user to select. When it is adjusted to the most virtual position, the display of the fused real elements and their content is completely virtualized according to the content source of the current virtual reality, for example a tall building is rendered as a castle and the sky as outer space. When it is adjusted to the most real position, the realistic 3D panoramic image, or the display of the fused real elements, is superimposed directly into the virtual 3D scene of the virtual reality system and appears completely real; in that case the virtual content may fail to blend in, but the user can know the state of the surrounding environment. In practice this position is rarely used.
  • the middle gear positions adjust the realism/virtuality of the displayed real elements. For example, an ordinary moving car is rendered as a running monster at the most virtual position and, as the knob is turned toward the real side, may gradually become a locomotive, tank, or armored vehicle whose style still fits the virtual content.
  • a riverbank rendered as a cliff may, as the knob is turned toward the real side, gradually become a waterfall, sea, or lake that can be integrated into the style of the virtual content. Note that safety factors must be fully considered when rendering real elements.
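One possible way to interpret the gear positions per element is sketched below: the most real position passes the captured element through unchanged, the most virtual position substitutes a fully themed model, and middle positions pick progressively more realistic substitutes. The substitution tables and the ten-position scale are assumptions for the example.

```python
# Hypothetical substitution ladders, ordered from most virtual to most real.
SUBSTITUTES = {
    "car":       ["running monster", "locomotive", "tank", "armored vehicle"],
    "building":  ["castle", "fortified tower", "stone house"],
    "riverbank": ["cliff", "waterfall", "lake shore"],
}

def render_for_gear(element, gear, max_gear=10):
    """Pick how to show one real element for a virtuality gear position.

    gear 0 = most real (pass the captured element through unchanged),
    gear max_gear = most virtual (fully themed substitute).
    """
    if gear <= 0:
        return f"{element} (captured appearance, unchanged)"
    ladder = SUBSTITUTES.get(element, [f"virtual {element}"])
    # Higher gear -> entry closer to the fully virtual end of the ladder.
    idx = round((1 - gear / max_gear) * (len(ladder) - 1))
    return f"{element} -> {ladder[idx]}"

for g in (0, 5, 10):
    print(g, [render_for_gear(e, g) for e in ("car", "building", "riverbank")])
```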
  • the setting of the first rendering virtual degree is described below.
  • the real element number interaction device 40 is used by the user to adjust the number of other realistic elements of the fusion.
  • the security reality element must be fused and displayed in the virtual three-dimensional scene, thereby ensuring the security of the user's activity and preventing collision and fall.
  • the real element number interaction device 40 can be a hardware knob or a software knob.
  • in the software knob embodiment, the user makes the selection through the interaction module. The following takes a hardware knob as an example.
  • the 0 position means that real elements are neither displayed nor rendered, and no objects in the surrounding real environment will be seen; at the 0 position the device behaves like a current VR device and shows none of the surrounding real objects.
  • the 100 position indicates that all real elements extracted from the real environment around the user are superimposed and displayed in the virtual three-dimensional scene;
  • the intermediate positions between 0 and 100 adjust, in proportion to the position, how many real elements are superimposed and displayed in the virtual three-dimensional scene.
  • for example, an intermediate position might display the chair and sofa around the user but not the television, air conditioner, or other equipment.
  • the knob controls only how many real elements are shown; which real elements are or are not displayed is determined by the central processing module according to how well each real element matches the virtual three-dimensional content.
  • the matching fusion module of the reality fusion module ranks the real elements other than the security reality elements by how well they match and fuse with the virtual three-dimensional scene, and this matching degree determines whether each such real element is superimposed and displayed in the virtual three-dimensional scene.
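A hedged sketch of this selection logic: security reality elements are always kept once the quantity position is non-zero, and the remaining slots are filled by the other elements with the highest matching degree. The element records and scores are invented for illustration.

```python
def choose_fused_elements(elements, quantity_position):
    """Decide which real elements are superimposed in the virtual scene.

    elements:          dicts with 'name', 'security' (bool), and 'match' (0..1 fit with
                       the current virtual content, as scored by the matching module).
    quantity_position: 0..100 position of the real element number interaction device.
    """
    if quantity_position == 0:
        return []                                          # behave like a plain VR device
    security = [e for e in elements if e["security"]]      # always shown when non-zero
    others = sorted((e for e in elements if not e["security"]),
                    key=lambda e: e["match"], reverse=True)
    keep = round(len(others) * quantity_position / 100)    # proportion set by the knob
    return [e["name"] for e in security + others[:keep]]

room = [
    {"name": "step",  "security": True,  "match": 0.1},
    {"name": "wall",  "security": True,  "match": 0.2},
    {"name": "sofa",  "security": False, "match": 0.9},
    {"name": "chair", "security": False, "match": 0.8},
    {"name": "tv",    "security": False, "match": 0.3},
]
print(choose_fused_elements(room, 0))     # []
print(choose_fused_elements(room, 50))    # safety elements plus the best-matching half
```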
  • the equipment workflow is as follows:
  • the central processing module reads the virtual content or game to be played and forms the content's three-dimensional model and surface rendering information. If the real element number interaction device 40 is set to 0, or the rendering-realism knob is at the completely real position, the content can be played or run directly without any further processing;
  • otherwise (the real element number interaction device 40 is not set to 0), the several stereo modeling modules 6-10 work simultaneously and realistic three-dimensional images of the surroundings in all directions around the user are obtained.
  • the central processing module splices the three-dimensional data of each of the plurality of stereo modeling modules to obtain a realistic three-dimensional panoramic image and model of the real environment around the user.
  • the image recognition module extracts all realistic elements from the realistic three-dimensional panoramic image and model.
  • the real element extraction module extracts the security reality elements and determines the other real elements selected by the user.
  • the image processing module determines the fusion rendering information of the aforementioned security reality element and other real elements selected by the user.
  • according to the position of the real element number interaction device 40 and the three-dimensional model of the current virtual content, the image processing module ranks all real elements by matching degree and determines which of the other real elements selected by the user can, judged by their shape and structural model, be better integrated into the current virtual content.
  • the VR virtual world may adjust the three-dimensional model of a real element, for example modifying the model of a building into a castle, but the outer dimensions of the castle should be kept basically consistent with those of the building.
  • specifically, the matching fusion module ranks the three-dimensional models of the real elements by how well they can be integrated, and renders the most suitable real elements up to the number indicated by the position of the real element number interaction device 40. The selection must take the user's safety into account: all elements that may pose a threat to personal safety, such as steps, walls, and rivers, must be displayed.
  • the selected respective real elements are subjected to surface rendering capable of being integrated into the VR virtual three-dimensional scene.
  • for example, if the 3D model of a building (a real element) has been modified into a 3D model of a castle with the same footprint, in this step the castle's 3D model is given surface rendering that blends into the virtual content, covering aspects such as style, color, surface material, apparent age, and sunlight reflection.
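A minimal sketch of the footprint constraint mentioned above: a substitute model (for example a castle) is scaled in the ground plane so that its bounding box matches the original building's, keeping the outer dimensions basically consistent. The vertex data is fabricated for the example, and treating the z axis as height is an assumption of this sketch.

```python
import numpy as np

def fit_substitute_to_footprint(original_pts, substitute_pts):
    """Scale the substitute model in x/y so its footprint matches the original's.

    Both inputs are N x 3 vertex arrays; z is treated as height and left untouched.
    """
    def footprint(pts):
        mins = pts[:, :2].min(axis=0)
        return mins, pts[:, :2].max(axis=0) - mins          # origin and x/y extent
    orig_min, orig_size = footprint(original_pts)
    sub_min, sub_size = footprint(substitute_pts)
    scale = orig_size / np.where(sub_size == 0, 1, sub_size)
    fitted = substitute_pts.copy()
    fitted[:, :2] = (fitted[:, :2] - sub_min) * scale + orig_min
    return fitted

building = np.array([[0, 0, 0], [10, 0, 0], [10, 6, 0], [0, 6, 30]], dtype=float)
castle   = np.array([[0, 0, 0], [4, 0, 0], [4, 4, 0], [0, 4, 50]], dtype=float)
print(fit_substitute_to_footprint(building, castle)[:, :2])   # castle footprint now 10 x 6
```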
  • safety issues such as steps, walls, and rivers must be fully considered for the user.
  • the sound processing module for the fused sound is controlled by the central processing module to perform different sound renderings for different three-dimensional virtual scenes. For example, if the user opens a door while the VR virtual three-dimensional world places the user in a prison scene, the opened door is rendered as an iron-barred door, and the sound of opening the door should likewise be rendered as the sound of opening an iron-barred door.
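A hedged sketch of the scene-dependent sound rendering: a recognized real-world sound event is remapped to a themed sound asset according to the current virtual scene. The scene names, event names, and file names are assumptions for illustration.

```python
# Hypothetical mapping from (virtual scene, real sound event) to a themed sound asset.
SOUND_THEMES = {
    ("prison", "door_open"): "iron_barred_door_open.wav",
    ("castle", "door_open"): "heavy_wooden_gate.wav",
}

def render_sound(scene, real_event, default_suffix="_generic.wav"):
    """Pick the sound to play when a real event happens inside a given virtual scene."""
    return SOUND_THEMES.get((scene, real_event), real_event + default_suffix)

print(render_sound("prison", "door_open"))   # iron_barred_door_open.wav
print(render_sound("forest", "door_open"))   # door_open_generic.wav (fallback)
```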
  • the central processing module processes the display and rendering of the synchronized virtual content and the surrounding real-world elements in real time.
  • the technical solution of the present application is based on existing VR devices, providing a completely different experience.
  • even while wearing a VR helmet that completely blocks the view, the user can still walk and move; the experience can be said to go beyond the purely virtual VR experience to a fusion of virtuality and reality.
  • the user can independently adjust how many real elements of the surrounding environment are displayed in the virtual three-dimensional world and how strongly they are virtualized. Reality is added to the purely virtual experience both visually and aurally; for example, sitting on a long bench that is rendered from a real high step, the user can hardly distinguish virtual from real and easily becomes immersed in this new world that fuses virtuality and reality.
  • this embodiment also breaks the limitation that traditional VR can only be used in a purely virtual environment, while technically ensuring the user's safety, so that the user can walk around, even out to the street, and can wear the device at any time.
  • the resulting world, a hybrid of the virtual and the real, is no longer limited to purely virtual environments; it is a leap forward for VR device applications.
  • this embodiment further provides a method for fusing virtuality and reality, which can be performed by the processor module of the electronic device described above. The method may include: rendering the realistic three-dimensional panoramic image or the real elements according to the virtual degree selected by the user, and rendering the realistic three-dimensional panoramic image or the selected real elements into the virtual three-dimensional scene of the virtual reality system.
  • the real elements include security reality elements. According to the security reality elements and the other real elements selected by the user, the security reality elements and the selected other real elements are fused, with the corresponding image rendering, into the virtual three-dimensional scene of the virtual reality system, and a sound rendering corresponding to the virtual three-dimensional scene is determined according to the security reality elements and the other real elements.
  • FIG. 4 shows the flow of one embodiment of the method for fusing virtuality and reality of the present embodiment.
  • the three-dimensional modeling modules 6-10 and the fusion rendering are not necessarily started synchronously;
  • the reality fusion module starts working only after the user activates it.
  • Step 401 The central processing module presents the virtual three-dimensional scene to the user through the VR imaging glasses 550.
  • Step 402 Determine whether the user starts the real fusion module, and if not, continue to present the virtual three-dimensional content and the scene in the virtual reality system working mode; if activated, the real-life capturing module and the merge rendering start working.
  • Step 403 Several reality capture modules 6-10 simultaneously capture stereoscopic images and splicing to form a realistic three-dimensional panoramic image or model.
  • Step 404 The image recognition module identifies all the realistic elements according to the realistic three-dimensional panoramic image.
  • Step 406 The real element extraction module extracts a security reality element with a fusion priority.
  • Step 407 Determine whether the user has selected the number of real elements to fuse; if not, wait in the virtual reality system working mode for the user to select. When the user selects the number of real elements:
  • the 0 position indicates that real elements are neither displayed nor rendered and no objects of the surrounding real environment are seen; the 100 position indicates that all real elements extracted from the real environment around the user are superimposed in the virtual three-dimensional scene; and
  • the intermediate positions between 0 and 100 adjust, in proportion to the position, how many real elements are superimposed and displayed in the virtual three-dimensional scene.
  • Step 408 The fusion matching module fuses the security reality element and other selected real elements in the virtual three-dimensional scene.
  • Step 409 Determine whether the user has selected the rendering virtual degree; if not, wait in the virtual reality system working mode for the user to select. If selected, perform image rendering corresponding to the selected virtual degree on all selected real elements in the virtual three-dimensional scene of the virtual reality system, according to the rendering virtual degree the user selected for the real elements.
  • Step 410 The image and sound processing module renders the security reality element and other real elements that are merged in the virtual three-dimensional scene according to the rendering virtual degree.
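Putting steps 401 through 410 together, the sketch below shows the control flow as a simple per-frame loop; every function and data value is a placeholder standing in for the modules described above, and the step numbers refer to FIG. 4.

```python
def run_fusion_loop(frames, fusion_enabled, quantity, virtuality):
    """Toy driver for the FIG. 4 workflow (steps 401-410), one iteration per frame."""
    for frame in range(frames):
        scene = "virtual 3D scene"                         # step 401: present the scene
        if not fusion_enabled:                             # step 402: reality fusion off?
            print(f"frame {frame}: {scene} (pure VR mode)")
            continue
        panorama = "stitched 3D panorama"                  # step 403: capture and stitch
        elements = ["step", "wall", "sofa", "tv"]          # step 404: recognize elements
        security = [e for e in elements if e in ("step", "wall")]   # step 406: safety first
        if quantity is None or virtuality is None:         # steps 407/409: wait for the user
            print(f"frame {frame}: waiting for user selection")
            continue
        keep = security + [e for e in elements if e not in security][: quantity // 50]
        print(f"frame {frame}: fuse {keep} from {panorama} into {scene} "
              f"at virtuality {virtuality} (steps 408-410)")

run_fusion_loop(frames=2, fusion_enabled=True, quantity=100, virtuality=70)
```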
  • through the real-time three-dimensional modeling modules and the fusion image processing module, the method and system for fusing virtuality and reality of this embodiment integrate real elements, or image renderings based on those real elements, into the user's VR virtual three-dimensional world in real time.
  • by adjusting the settings, the user can make the fused video seen through the VR imaging glasses 550 the same as, or different from, the real surroundings.
  • although AR is also a mixture of virtuality and reality, its main scene is still the real one, and the virtual part is only a small amount of additional information.
  • in this embodiment, by contrast, the virtual and the real are closely integrated: the user can set the ratio of virtual to real, and through the VR imaging glasses 550 can genuinely feel the presence of the real elements, which provides the user with a new experience of virtuality and reality.
  • the present application provides a method and system for seamlessly integrating virtual reality with reality, allowing users to completely immerse themselves between virtual reality and reality.
  • the method and system can superimpose and fuse the real elements of the user's surrounding environment onto the virtual content of the virtual reality system, so that the user can walk around and complete certain actions while wearing the virtual reality helmet.
  • the user can adjust, both visually and aurally, how many real elements are displayed in the virtual world and how strongly they are virtualized, so that reality is added to the purely virtual experience and the user becomes immersed in a new world that fuses virtuality and reality.
  • the method and system of the embodiments of the present application break the boundary between traditional VR and reality, and security reality elements are fused with priority during video fusion, which fully ensures the safety of a user who walks and moves while wearing the helmet.
  • In summary: 1. the present application fuses security reality elements into the virtual three-dimensional scene, so that a user wearing the VR helmet can sense the external real world and walk and move safely;
  • 2. the present application provides an interaction device for setting the rendering of real elements, with which the user selects the degree of virtuality/realism of the real elements, so that rendering follows the user's wishes and provides the best experience;
  • 3. the present application captures and models the realistic 3D panoramic image through several reality capture modules;
  • 4. the present application also provides an interaction device for setting the fusion of real elements, offering a number of gear positions for the number of real elements for the user to select, achieving the best experience according to the user's wishes;
  • 5. in addition to extracting the security reality elements with priority, the present application ranks the other real elements by matching degree, solving the problem of matching and fusing real elements with the virtual three-dimensional scene.
  • FIG. 6 is a schematic diagram of the hardware structure of the electronic device 600 of the virtual and reality fusion method provided by the embodiment of the present application. As shown in FIG. 6, the electronic device 600 includes:
  • one or more processors 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
  • the processor 610 and the memory 620 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
  • the memory 620, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for fusing virtuality and reality in the embodiments of the present application (for example, the fusion matching module, the real element extraction module, and the image recognition module shown in FIG. 1).
  • the processor 610 executes various functional applications and data processing of the terminal or the server by executing non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the virtual and reality fusion method in the foregoing method embodiments.
  • the memory 620 can include a program storage area and a data storage area, where the program storage area can store an operating system and an application required for at least one function, and the data storage area can store data created according to the use of the virtual and reality fusion system, and the like.
  • memory 620 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 620 can optionally include memory remotely located relative to processor 610, which can be connected to the virtual reality fusion system via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the method for fusing virtuality and reality in any of the above method embodiments, for example performing steps 403 to 410 in FIG. 4 described above and implementing the functions of the fusion matching module, the real element extraction module, and the image recognition module of FIG. 1.
  • the electronic device of the embodiment of the present application exists in various forms, such as other electronic devices having data interaction functions.
  • the embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, such as the processor 610 in FIG. 6, to enable the one or more processors to perform the method for fusing virtuality and reality in any of the foregoing method embodiments, for example performing method steps 403 to 410 in FIG. 4 described above and implementing the functions of the modules shown in FIG. 1.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Abstract

Disclosed are a method and system for fusing virtuality and reality, and a virtual reality device. The method comprises: acquiring a realistic three-dimensional panoramic image of a reality environment around a user; rendering the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user; and outputting a rendered result to a virtual reality display module. By means of the method and system and the virtual reality device in the present application, realistic elements of a surrounding environment of a user are superimposed and fused on the virtual content in a virtual degree adjustable manner, so that the user can also walk and complete certain actions while wearing a helmet, thereby providing a new VR experience for the user.

Description

虚拟与现实融合方法、系统和虚拟现实设备Virtual and reality fusion methods, systems and virtual reality devices 技术领域Technical field
本申请实施方式涉及虚拟现实(VR)领域,特别是涉及一种在虚拟现实显示环境中叠加与现实元素相关的渲染内容的融合方法和系统。Embodiments of the present application relate to the field of virtual reality (VR), and in particular, to a method and system for superimposing rendered content related to a real element in a virtual reality display environment.
背景技术Background technique
当前前沿技术研究中,虚拟现实(VR)和增强现实(AR)是关注度很高的技术研究方向。VR是为用户呈现的是完全虚拟的内容,用户带上一个头戴式VR设备,沉浸在完全虚拟的世界中,可以看和听,却无法用嗅觉,味觉,触觉感受到,和现实完全隔离。而增强现实(AR)则采用半透镜的方式,让用户能够看到真实的世界,同时在半透镜上显示虚拟内容的虚拟人物或物体,使虚拟融合到现实环境中,用户看到的是真实世界中出现少量虚拟内容。In the current cutting-edge technology research, virtual reality (VR) and augmented reality (AR) are highly concerned technical research directions. VR is a completely virtual content for users. Users bring a head-mounted VR device, immersed in a completely virtual world, can watch and listen, but can not be sensed by smell, taste, touch, completely isolated from reality. . Augmented Reality (AR) adopts a semi-lens method, allowing users to see the real world, while displaying virtual objects or virtual objects on the semi-lens, so that the virtual fusion into the real environment, the user sees the real A small amount of virtual content appears in the world.
针对虚拟现实设备,现有头戴式VR设备一般包括头戴式显示器和VR内容生成设备。For virtual reality devices, existing head mounted VR devices typically include a head mounted display and a VR content generating device.
头戴式显示器可以穿戴在用户头部并向用户提供虚拟场景的沉浸式视场。除此之外,头戴式显示器还包含用于头部定位的传感器。The head mounted display can be worn on the user's head and provide the user with an immersive field of view of the virtual scene. In addition to this, the head mounted display also includes sensors for head positioning.
VR内容生成设备包含计算模块、存储模块、和头部定位模块。头部定位模块实时从头戴式显示器中的头部定位传感器获得数据,经过传感器融合相关算法处理,头部定位模块能够得出当前用户的头部姿态。The VR content generation device includes a calculation module, a storage module, and a head positioning module. The head positioning module obtains data from the head positioning sensor in the head mounted display in real time, and is processed by the sensor fusion related algorithm, and the head positioning module can obtain the current user's head posture.
VR内容生成设备从头部定位模块中获得当前头部姿态,从存储模块中获得渲染虚拟场景所需的素材,经过计算模块的处理,最终渲染出以当前用户头部姿态为视角的虚拟场景,并通过头戴式显示器显示到用户眼前。头戴式显示器与VR内容生成设备可以是嵌入式集成在一起(如VR移动一体机),也可以是通过显示数据线(如HDMI)连接在一起(如HTC Vive)。The VR content generating device obtains the current head pose from the head positioning module, obtains the material required for rendering the virtual scene from the storage module, and finally processes the virtual scene with the current user's head posture as a perspective. And displayed to the user through the head-mounted display. Head-mounted displays and VR content-generating devices can be embedded integrated (such as VR mobile all-in-ones) or connected together via display data lines (such as HDMI) (such as HTC Vive).
然而,本申请的申请人发现:现有的头戴式VR设备仅能特定的模式为用户呈现虚拟的场景。而不能根据用户的意愿调整渲染的程度。因此, 现有技术还有待于改进和发展。Applicants of the present application, however, have found that existing head-mounted VR devices can only present virtual scenes to a user in a particular mode. The degree of rendering cannot be adjusted according to the user's wishes. Therefore, The prior art has yet to be improved and developed.
申请内容Application content
本申请实施方式主要解决的技术问题是提供一种全新的虚拟现实体验方法和系统,用以根据用户的意愿调整渲染的程度。该方法和系统实现了在虚拟现实系统的虚拟内容上可调节虚拟程度地叠加融合了用户周围环境的现实元素,带给用户全新的VR体验。The technical problem to be solved by the embodiments of the present application is to provide a new virtual reality experience method and system for adjusting the degree of rendering according to the user's wishes. The method and system realize the superposition of the virtual elements on the virtual content of the virtual reality system to superimpose the realistic elements of the user's surrounding environment, and bring the user a new VR experience.
为解决上述技术问题,本申请实施方式采用的一个技术方案是:提供In order to solve the above technical problem, one technical solution adopted by the embodiment of the present application is: providing
一种虚拟与现实融合方法,包括:A fusion method of virtual and reality, including:
获取用户周围现实环境的现实三维全景图像;Obtain a realistic three-dimensional panoramic image of the real environment around the user;
根据用户选择的渲染虚拟程度,对该现实三维全景图像进行渲染;Rendering the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user;
将渲染得到的结果输出到虚拟现实显示模组中。The rendered result is output to the virtual reality display module.
为解决上述技术问题,本申请实施方式采用的另一个技术方案是:提供一种虚拟与现实融合系统,包括虚拟现实系统,用于为用户展现虚拟三维场景,还包括现实融合模组,其中,该现实融合模组用于获取用户周围现实环境的现实三维全景图像;该虚拟现实系统用于根据用户选择的渲染虚拟程度,对该现实三维全景图像进行渲染;该虚拟现实系统还用于将渲染得到的结果输出到虚拟现实显示模组中。In order to solve the above technical problem, another technical solution adopted by the embodiment of the present application is to provide a virtual reality integration system, including a virtual reality system, for displaying a virtual three-dimensional scene for a user, and a reality fusion module, wherein The reality fusion module is configured to acquire a realistic three-dimensional panoramic image of a real environment around the user; the virtual reality system is configured to render the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user; the virtual reality system is further used for rendering The obtained result is output to the virtual reality display module.
为解决上述技术问题,本申请实施例采用的再一个技术方案是:提供一种电子设备,包括:处理器模组,处理器模组连接的交互模块和虚拟现实显示模组;In order to solve the above technical problem, another technical solution adopted by the embodiment of the present application is to provide an electronic device, including: a processor module, an interaction module connected by the processor module, and a virtual reality display module;
处理器模组用于根据用户周围环境的图像获取现实三维全景图像,并根据用户在交互模块选择的渲染虚拟程度,对所述现实三维全景图像进行渲染;The processor module is configured to acquire a realistic three-dimensional panoramic image according to an image of a surrounding environment of the user, and render the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user in the interaction module;
其中,所述处理器模组还用于将渲染得到的结果输出到虚拟现实显示模组中。The processor module is further configured to output the rendered result to the virtual reality display module.
优选的,电子设备为虚拟现实头戴设备,还包括头盔和现实捕获模组,Preferably, the electronic device is a virtual reality wearing device, and further comprises a helmet and a reality capturing module.
所述现实捕获模组用于从多个角度捕获用户周围环境的图像。这样当用户带上该虚拟现实头戴设备后,能够很好的沉浸到所营造的虚拟显示环 境中,提高用户体现。The reality capture module is configured to capture images of a user's surroundings from multiple angles. In this way, when the user brings the virtual reality wearing device, the user can well immerse the virtual display ring. In the environment, improve the user's embodiment.
优选的,该交互模块包括数量调节交互装置,其中,该处理器模组还用于识别和提取该现实三维全景图像中的现实元素;并从数量调节交互装置获取用户选择的第一渲染虚拟程度,该第一渲染虚拟程度用于指示用户希望保留的现实元素的比例;以及用于根据获取到的第一渲染虚拟程度对该现实三维全景图像中的现实元素进行处理。Preferably, the interaction module includes a quantity adjustment interaction device, wherein the processor module is further configured to identify and extract a realistic element in the realistic three-dimensional panoramic image; and acquire a first rendering virtual degree selected by the user from the quantity adjustment interaction device. The first rendering virtual level is used to indicate a proportion of real-world elements that the user wishes to retain; and to process the real-world elements in the real-world three-dimensional panoramic image according to the acquired first rendering virtual degree.
为了保证用户使用安全,该现实元素包括安全性现实元素,其中,该处理器模组还用于在该第一渲染虚拟程度指示用户希望保留的现实元素的比例为非0时,保留该现实三维全景图像中的至少一部分安全性现实元素。In order to ensure user security, the real-life element includes a security reality element, wherein the processor module is further configured to reserve the real-world three-dimensional when the first rendering virtual degree indicates that the proportion of the real-life element that the user wishes to retain is non-zero. At least a portion of the security reality elements in the panoramic image.
优选的,该交互模块还包括渲染虚拟度交互装置数量调节交互装置,其中,该处理器模组还用于识别和提取该现实三维全景图像中的现实元素;从渲染虚拟度交互装置获取用户选择的第二渲染虚拟程度,该第二渲染虚拟程度用于指示对各个现实元素的渲染程度,并根据获取到的第二渲染虚拟程度对该现实三维全景图像中的各个现实元素进行渲染。Preferably, the interaction module further includes a rendering virtuality interaction device quantity adjustment interaction device, wherein the processor module is further configured to identify and extract a realistic element in the realistic three-dimensional panoramic image; and obtain a user selection from the rendering virtuality interaction device. The second rendering virtual degree is used to indicate the rendering degree of each real element, and render each real element in the real three-dimensional panoramic image according to the acquired second rendering virtual degree.
为了提供沉浸式VR体验,该处理器模组用于根据该安全性现实元素和该其它现实元素,确定虚拟三维场景对应的声音渲染。In order to provide an immersive VR experience, the processor module is configured to determine a sound rendering corresponding to the virtual three-dimensional scene according to the security reality element and the other real-life elements.
该现实捕获模组包括多个摄像头,该多个摄像头设置在该头盔的不同方位上。The reality capture module includes a plurality of cameras disposed in different orientations of the helmet.
作为一种实施方式,该交互模块包括旋钮和模数转换装置;该模数转换装置与该旋钮和该处理器模组连接,用于根据旋钮的旋转角度生成对应的数字信号并输出到处理器模组。In an embodiment, the interaction module includes a knob and an analog-to-digital conversion device; the analog-to-digital conversion device is coupled to the knob and the processor module, and configured to generate a corresponding digital signal according to a rotation angle of the knob and output the same to the processor Module.
本申请实施方式的有益效果是:能够根据用户选择的渲染虚拟程度对周围现实环境中的图像进行渲染。使得用户能够对渲染虚拟程度进行选择,有助于提升用户体验。The beneficial effect of the embodiment of the present application is that the image in the surrounding real environment can be rendered according to the rendering virtual degree selected by the user. Allows users to choose the rendering virtual level to help improve the user experience.
附图说明DRAWINGS
图1是本申请实施方式的虚拟与现实融合系统的模块图;1 is a block diagram of a virtual reality integration system according to an embodiment of the present application;
图2是本申请实施方式的虚拟与现实融合系统的立体图; 2 is a perspective view of a virtual reality integration system according to an embodiment of the present application;
图3是本申请实施方式的虚拟与现实融合系统的立体后视图;3 is a perspective rear view of a virtual reality integration system according to an embodiment of the present application;
图4是本申请实施方式的虚拟与现实融合方法的其中一实施例的流程图;4 is a flowchart of an embodiment of a virtual and reality fusion method according to an embodiment of the present application;
图5是本申请实施例提供的虚拟与现实融合系统的硬件结构示意图;以及FIG. 5 is a schematic structural diagram of hardware of a virtual reality integration system according to an embodiment of the present application;
图6是本申请实施例提供的虚拟与现实融合方法的电子设备的硬件结构示意图。FIG. 6 is a schematic diagram of a hardware structure of an electronic device in a virtual and reality fusion method according to an embodiment of the present application.
具体实施方式detailed description
为使本申请实施例的目的、技术方案和优点更加清楚明白,下面结合附图对本申请实施例做进一步详细说明。在此,本申请的示意性实施例及其说明用于解释本申请,但并不作为对本申请的限定。In order to make the objectives, technical solutions, and advantages of the embodiments of the present application more clearly, the embodiments of the present application are further described in detail below with reference to the accompanying drawings. The illustrative embodiments of the present application and the description thereof are for explaining the present application, but are not intended to limit the application.
图1为所应用的一种实施环境的示意图,可以应用于包含了处理器模组100、虚拟现实显示模组200、能够采集周围现实环境的现实捕获模组以及能够实现与用户交互从而确定用户选择的虚拟程度的交互模块300的电子设备中,电子设备所包含的各个组件之间可以通过通信接口(比I/O接口相连)。FIG. 1 is a schematic diagram of an implementation environment, which may be applied to a processor module 100, a virtual reality display module 200, a real-life capture module capable of collecting a surrounding real environment, and capable of interacting with a user to determine a user. In the electronic device of the selected virtual degree interaction module 300, each component included in the electronic device can be connected through a communication interface (compared to an I/O interface).
该现实捕获模组可以是一个面向正方向或多个面向不同方向的可进行实时建模的三维图像识别传感器(市面上已经有类似的设备)。如只采用一个面向正方向的三维图像识别传感器,则对实时性要求极高,头部转动时需要基本无延时的建模和渲染,且对视角的要求极大;而如果是多个三维图像识别传感器拼接现实三维全景图像,则相当于构建了一个空间模型,头部转动时也只是显示空间中的另一部分而已。The reality capture module can be a three-dimensional image recognition sensor that can be modeled in the forward direction or in multiple directions for real-time modeling (similar devices are already available on the market). If only one face-oriented three-dimensional image recognition sensor is used, the real-time requirements are extremely high, and the head rotation requires substantially time-delayed modeling and rendering, and the viewing angle is extremely demanding; and if it is a plurality of three-dimensional The image recognition sensor splicing the realistic three-dimensional panoramic image is equivalent to constructing a spatial model, and the head rotation is only another part of the display space.
请一并参考图5,本实施例中的处理器模组可以包括中央处理器模块(包括中央处理器CPU520以及与中央处理器相连的存储介质530),也可以还包括图像处理模组(图像处理模组可以包括图形处理器GPU510以及与GPU510相连的存储介质530)。图1中示出的是后一种情况。本实施例的虚拟和现实融合系统可以为存储于上述的处理器模组的存储介质中的软件系统。参见图,该系统可以包括现实融合模组和虚拟现实系统。 Referring to FIG. 5 together, the processor module in this embodiment may include a central processing unit module (including a central processing unit CPU 520 and a storage medium 530 connected to the central processing unit), and may further include an image processing module (image The processing module can include a graphics processor GPU 510 and a storage medium 530 coupled to GPU 510). The latter case is shown in FIG. The virtual and reality fusion system of this embodiment may be a software system stored in a storage medium of the processor module described above. Referring to the figure, the system can include a reality fusion module and a virtual reality system.
The reality fusion module acquires a realistic three-dimensional panoramic image of the real environment around the user. For example, in some embodiments, the reality fusion module may acquire in advance images of the user's surrounding real environment captured by multiple cameras.
The multiple cameras capture images based on the same stereoscopic coordinate system, so that a three-dimensional panoramic image can be stitched together.
The virtual reality system renders the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user, and outputs the rendered result to the virtual reality display module.
The virtual reality system identifies and extracts the real elements in the realistic three-dimensional panoramic image.
The virtual reality system provided in the embodiments of the present application can be applied to a device such as a helmet. When used in such a device, the user can walk while wearing the helmet, and the virtual reality system renders the image in front of the user in real time according to the helmet worn by the user. In this case, to ensure the user's safety, the implementation may proceed as follows.
To allow the number of rendered objects to be chosen, the first rendering virtual degree is used to indicate the proportion of real elements that the user wishes to retain. When the first rendering virtual degree indicates that this proportion is non-zero, the virtual reality system retains at least a part of the safety-related real elements in the realistic three-dimensional panoramic image.
The real elements in the realistic three-dimensional panoramic image are then processed according to the acquired first rendering virtual degree.
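The selection logic described above can be pictured with a short sketch. The following Python fragment is only a minimal illustration of one possible implementation and is not part of the disclosed embodiments; the RealElement structure, the keep_ratio parameter, and the fusion_score field are hypothetical names introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class RealElement:
    name: str
    is_safety: bool       # safety-related elements (steps, walls, rivers) are always kept
    fusion_score: float   # how well the element can blend into the current virtual content

def select_elements(elements, keep_ratio):
    """Apply a first rendering virtual degree given as the proportion of
    non-safety real elements to retain (0.0 keeps none, 1.0 keeps all)."""
    safety = [e for e in elements if e.is_safety]
    others = sorted((e for e in elements if not e.is_safety),
                    key=lambda e: e.fusion_score, reverse=True)
    kept = others[:round(len(others) * keep_ratio)]
    # Safety-related elements are retained whenever the ratio is non-zero.
    return safety + kept if keep_ratio > 0 else []

if __name__ == "__main__":
    scene = [RealElement("staircase", True, 0.9),
             RealElement("sofa", False, 0.8),
             RealElement("television", False, 0.3)]
    print([e.name for e in select_elements(scene, 0.5)])  # ['staircase', 'sofa']
```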
In addition to rendering according to the first rendering virtual degree, the retained real elements may also be rendered according to a second rendering virtual degree, which indicates the degree to which each real element is rendered. In that case, the corresponding rendering process may include:
identifying and extracting the real elements in the realistic three-dimensional panoramic image;
acquiring a second rendering virtual degree selected by the user, the second rendering virtual degree indicating the degree to which each real element is rendered; and
rendering each real element in the realistic three-dimensional panoramic image according to the acquired second rendering virtual degree.
This enables the user to choose the degree of virtualization of each real element independently. It is easy to understand that, in a specific implementation, the real elements may also be rendered according to the second rendering virtual degree alone.
In a specific implementation, the processor module acquires the second rendering virtual degree selected by the user, which indicates the degree to which each real element is rendered, and the image processing module renders each real element in the realistic three-dimensional panoramic image according to the acquired second rendering virtual degree.
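To make the idea of a per-element rendering degree concrete, here is a minimal sketch, assuming a hypothetical virtuality value between 0.0 (fully real) and 1.0 (fully virtual) and simple RGB colours standing in for full surface materials; it only illustrates blending a real element's appearance toward a virtual style and is not the rendering used by the embodiments.

```python
def blend_color(real_rgb, virtual_rgb, virtuality):
    """Linearly interpolate a surface colour between its real appearance
    and the style dictated by the virtual content (0.0 = real, 1.0 = virtual)."""
    v = max(0.0, min(1.0, virtuality))
    return tuple(round((1 - v) * r + v * s) for r, s in zip(real_rgb, virtual_rgb))

# A real grey building blended halfway toward a sandy castle texture.
print(blend_color((128, 128, 128), (194, 178, 128), 0.5))  # (161, 153, 128)
```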
As will be apparent to those of ordinary skill in the art, the extraction and rendering of real elements may be performed by the central processing module alone, by the image processing module alone, or by the central processing module and the image processing module in a distributed manner.
A traditional virtual reality system includes a central processing module, a rendering module, sensors, an interaction module, and a virtual reality display module. One embodiment of the virtual reality display module is a pair of VR imaging glasses. The interaction module aggregates the data of all the sensors; the user's actions and interaction instructions are sent through the sensors to the interaction module and the central processing module. According to the user's instructions at the interaction module, the central processing module renders the virtual content through the rendering module and displays it three-dimensionally on the VR imaging glasses.
VR imaging glasses: two discrete display devices, the same as in traditional VR imaging glasses.
Sensors: in addition to traditional motion sensors, various other types of sensors may be included, depending on the needs of the content.
The virtual reality system in the embodiments of the present application is further provided with a reality capture module that acquires a realistic three-dimensional panoramic image of the user's surroundings, an interaction module that interacts with the user, and a dedicated image processing module.
The reality fusion module includes several reality capture modules that acquire realistic three-dimensional panoramic images of the real environment around the user, an interaction module that adjusts the number of fused elements and their degree of virtuality, and a dedicated image processing module.
The several reality capture modules consist of several three-dimensional modeling modules whose fields of view cover one another; they capture images simultaneously, and the images are stitched into a realistic three-dimensional panoramic image. In this embodiment, the reality capture module is a three-dimensional image recognition sensor 540. To realize the fusion of real elements, the image processing module includes a real element extraction module, a fusion matching module, an image processing module for fusing images, and a sound processing module for fusing sounds.
The interaction module allows the user to select, on the virtual reality headset, the rendering virtual degree of the realistic three-dimensional panoramic image or of the real elements.
Referring to FIG. 2 and FIG. 3 together, the illustrated virtual reality headset includes a helmet 1, a central processing module 2, an earphone holder 3, a data line 4, a power port 5, five reality capture modules 6-10, a microphone 20, a rendering virtuality interaction device 30, and a real element quantity interaction device 40.
Central processing module: receives the signals of all the reality capture modules 6-10, the sensors 540, the peripheral devices, the rendering virtuality interaction device 30, the real element quantity interaction device 40, and the content source, and transmits the processed signals to output devices such as the VR imaging glasses 550 and the earphones. Its tasks include, for example, stitching the images of the individual reality capture modules and updating the displayed image after the head is turned.
The reality capture modules 6-10 acquire a realistic three-dimensional panoramic image of the real environment around the user; the image processing module renders the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user; and the central processing module outputs the rendered result to the virtual reality display module.
The reality capture modules 6-10 identify and extract the real elements in the realistic three-dimensional panoramic image; the recognition and extraction of the real elements may also be performed by the central processing module.
In the embodiments of the present application, to allow the number of rendered objects to be selected, the interaction module includes a quantity adjustment interaction device, through which the user sets the first rendering virtual degree. When the first rendering virtual degree indicates that the proportion of real elements the user wishes to retain is non-zero, the image processing module retains at least a part of the safety-related real elements in the realistic three-dimensional panoramic image.
The interaction module further includes a rendering virtuality interaction device, through which the user sets the second rendering virtual degree. The central processing module acquires from the rendering virtuality interaction device the second rendering virtual degree selected by the user, which indicates the degree to which each real element is rendered.
For the sake of the immersive experience, the sound processing module is further configured to determine the sound rendering corresponding to the virtual three-dimensional scene according to the safety-related real elements and the other real elements.
Preferably, the first rendering virtual degree or the second rendering virtual degree is determined by a proportion selected by the user on an interactive interface.
The reality capture modules 6-10 of this embodiment can measure depth of field and build a three-dimensional model in real time; the number actually used depends on the field of view and processing capability of a single reality capture module. In this embodiment the reality capture modules are, by way of example, several real-time-modeling three-dimensional image recognition sensors, distributed over five positions of the helmet 1 (the top and the front, rear, left, and right) according to the field-of-view coverage requirements.
Referring also to FIG. 1, as a specific implementation of the present application, the reality capture module is connected to an image recognition module. The image recognition module identifies all the real elements of the stitched realistic three-dimensional panorama. The stereoscopic coordinates of these real elements are kept consistent with their coordinates in the three-dimensional panorama.
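The coordinate consistency mentioned above can be pictured with a small sketch. The following Python lines are illustrative only; the element and scene structures are hypothetical and merely show that a recognised real element keeps its panorama coordinates when it is placed into the virtual three-dimensional scene.

```python
# Each recognised real element carries the stereoscopic coordinates it had in the
# stitched panorama; fusion reuses those coordinates in the virtual scene so that
# the rendered stand-in appears where the real object actually is.
panorama_elements = [
    {"name": "staircase", "position": (1.2, 0.0, 3.5), "safety": True},
    {"name": "sofa",      "position": (-0.8, 0.0, 2.0), "safety": False},
]

virtual_scene = {"theme": "castle", "objects": []}
for element in panorama_elements:
    virtual_scene["objects"].append({
        "source": element["name"],
        "position": element["position"],   # unchanged panorama coordinates
        "must_render": element["safety"],  # safety-related elements are always drawn
    })

print(virtual_scene["objects"][0]["position"])  # (1.2, 0.0, 3.5)
```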
The virtual reality system can present a virtual three-dimensional scene to the user. The several reality capture modules of the reality fusion module acquire a realistic three-dimensional panoramic image of the real environment around the user, and the image recognition module identifies all the real elements in the realistic three-dimensional panoramic image.
The real element extraction module extracts the safety-related real elements from all the real elements.
The real elements include safety-related real elements and other real elements selected by the user. According to the safety-related real elements and the other real elements selected by the user, the image processing module of the reality fusion module fuses and image-renders the corresponding safety-related real elements and the selected other real elements in the virtual three-dimensional scene of the virtual reality system. The reality fusion module further includes a sound processing module, which determines the sound rendering corresponding to the virtual three-dimensional scene according to the safety-related real elements and the other real elements, thereby realizing fused sound.
The setting of the second rendering virtual degree is described below. The virtual rendering of the safety-related real elements and the selected other real elements can be set by the user through the interaction module. The interaction module includes the rendering virtuality interaction device 30, and the reality fusion module is further configured to perform, in the virtual three-dimensional scene of the virtual reality system, image rendering of all the selected real elements at the selected virtual degree, according to the real element rendering virtual degree selected by the user on the rendering virtuality interaction device 30.
The rendering virtuality interaction device 30 is used by the user to adjust the rendering virtuality/realism of the fused real elements. It may be a hardware knob or a software knob; in the software-knob embodiment, the user makes the selection through the interaction module. The hardware-knob example is described below.
The rendering virtuality interaction device 30 provides several gear positions for the user to select. When adjusted to the most virtual state, the fused real elements are displayed and rendered in a completely virtual form according to the content source of the current virtual reality, for example a tall building rendered as a castle, or the sky rendered as outer space. When adjusted to the most real state, the realistic three-dimensional panoramic image is superimposed directly in the virtual three-dimensional scene of the virtual reality system, or the fused real elements are displayed completely realistically. In that case the real content may fail to blend with the virtual content, but the user knows the state of the surrounding environment; in practice this setting is rarely used. The intermediate gears adjust the realism/virtuality of the displayed real elements. For example, an ordinary moving car is rendered as a running monster in the most virtual state and, as the knob is turned toward the real side, may gradually become a locomotive, tank, or armored vehicle that fits the style of the virtual content. Likewise, for a river, in the most virtual state the bank is rendered as a cliff and, as the knob is turned toward the real side, it may gradually become a waterfall, sea, or lake that fits the style of the virtual content. Note that safety factors should be fully taken into account when rendering real elements.
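The gear behaviour described in this paragraph can be sketched as a simple lookup from a knob position to a rendering choice. The code below is only an illustrative assumption about how such a mapping might look (the thresholds and asset names are invented for the example), not the mechanism actually used by the device.

```python
def choose_rendering(element, knob):
    """Map a virtuality knob position in [0.0, 1.0] to a rendering style:
    0.0 = most real, 1.0 = most virtual, in between = themed stand-ins."""
    # Hypothetical stand-ins per element type, ordered from real to virtual.
    styles = {"car": ["car", "locomotive", "running monster"],
              "river bank": ["river bank", "waterfall", "cliff"]}
    options = styles.get(element, [element])
    if knob <= 0.0:
        return options[0]    # shown as it really is
    if knob >= 1.0:
        return options[-1]   # fully virtualised form
    return options[min(len(options) - 1, int(knob * len(options)))]

print(choose_rendering("car", 0.5))  # 'locomotive'
print(choose_rendering("car", 1.0))  # 'running monster'
```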
The setting of the first rendering virtual degree is described below. The real element quantity interaction device 40 is used by the user to adjust the number of other real elements that are fused. In this embodiment, the safety-related real elements must always be fused and displayed in the virtual three-dimensional scene, so as to ensure the safety of the user's movement and to prevent collisions and falls. The real element quantity interaction device 40 may be a hardware knob or a software knob; in the software-knob embodiment, the user makes the selection through the interaction module. The hardware-knob example is described below.
A gear position of 0 means that no real elements are displayed or rendered, and no objects of the surrounding real environment will be seen; at the 0 position the device behaves like a current VR device. A gear position of 100 means that all real elements extracted from the real environment around the user are superimposed and displayed in the virtual three-dimensional scene. The intermediate positions between 0 and 100 adjust, in proportion to the gear position, how many real elements are superimposed in the virtual three-dimensional scene; for example, the chairs and sofa around the user may be displayed, but not the television, air conditioner, or other appliances. The knob only controls how many elements are shown; which real elements are or are not displayed is decided by the central processing module according to the degree of matching and fusion between the real elements and the virtual three-dimensional content.
To fuse the virtual content with the superimposed real elements, the matching fusion module of the reality fusion module ranks the matching fusion degree of the real elements other than the safety-related real elements, and decides, according to each real element's matching fusion degree with the virtual three-dimensional scene, whether the corresponding real element is superimposed and displayed in the virtual three-dimensional scene.
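One way to picture the ranking performed by the matching fusion module is a toy score based on how closely a real element's bounding size matches an available virtual stand-in. The following sketch is an assumption made only for illustration and does not reflect the actual matching criterion of the embodiments.

```python
def match_score(real_size, stand_in_size):
    """Toy matching fusion degree: 1.0 when the footprints are identical,
    falling toward 0.0 as they diverge."""
    diffs = [abs(a - b) / max(a, b) for a, b in zip(real_size, stand_in_size)]
    return 1.0 - sum(diffs) / len(diffs)

# Real elements (width, depth, height in metres) scored against a castle stand-in.
castle = (20.0, 20.0, 30.0)
elements = {"office building": (22.0, 18.0, 28.0), "kiosk": (2.0, 2.0, 2.5)}

ranked = sorted(elements, key=lambda name: match_score(elements[name], castle),
                reverse=True)
print(ranked)  # ['office building', 'kiosk']
```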
The device workflow is as follows:
The central processing module reads the virtual content or game to be played and forms a three-dimensional model of the content together with surface rendering information. If the real element quantity interaction device 40 is set to 0, or the rendering realism knob is set to completely real, the content is simply played or the virtual content run directly, and no further processing is needed.
When the real element quantity interaction device 40 is not set to 0, the several stereoscopic modeling modules 6-10 work simultaneously to obtain real three-dimensional panoramic images in all directions around the user. The central processing module stitches the three-dimensional data captured by the multiple stereoscopic modeling modules in each direction to obtain a realistic three-dimensional panoramic image and model of the real environment around the user. The image recognition module extracts all real elements from the realistic three-dimensional panoramic image and model.
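The stitching step can be pictured as transforming each sensor's points into one shared coordinate system using the sensor's known pose on the helmet. The short sketch below assumes, purely for illustration, sensors that differ only by a yaw rotation; the real stitching of three-dimensional models is considerably more involved.

```python
import math

def to_helmet_frame(point, yaw_degrees):
    """Rotate a point from a sensor's local frame into the shared helmet
    frame, assuming the sensor differs from that frame only by a yaw angle."""
    x, y, z = point
    a = math.radians(yaw_degrees)
    return (round(x * math.cos(a) - z * math.sin(a), 6),
            round(y, 6),
            round(x * math.sin(a) + z * math.cos(a), 6))

# Points seen by the front (0 degrees) and right (90 degrees) sensors,
# merged into a single cloud expressed in helmet coordinates.
front_points = [(0.0, 0.0, 2.0)]
right_points = [(0.0, 0.0, 1.5)]
panorama = ([to_helmet_frame(p, 0) for p in front_points]
            + [to_helmet_frame(p, 90) for p in right_points])
print(panorama)  # [(0.0, 0.0, 2.0), (-1.5, 0.0, 0.0)]
```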
The real element extraction module extracts the safety-related real elements and determines the other real elements selected by the user.
The image processing module determines the fusion rendering information for the aforementioned safety-related real elements and the other real elements selected by the user.
According to the position of the real element quantity interaction device 40 and the three-dimensional model of the current virtual content, and based on the matching fusion module's ranking of the matching fusion degree of all real elements, the image processing module determines and selects which of the other real elements selected by the user can, in terms of their shape and structure, best be integrated into the current virtual content. In the VR virtual world the three-dimensional model of a real element may be adjusted, for example by modifying the model of a building into a castle while keeping the outer dimensions of the castle essentially consistent with the building. Specifically, the matching fusion module may rank the three-dimensional models of the real elements by how well they can be integrated and, according to the position of the real element quantity interaction device 40, select the top several suitable real element models for rendering. The user's safety must be fully considered during selection: elements that may pose a threat to personal safety, such as steps, walls, and rivers, must be displayed.
According to the position of the rendering virtuality interaction device 30 and the three-dimensional model and rendering data of the current virtual content, each selected real element is then given a surface rendering that allows it to blend into the VR virtual three-dimensional scene. For example, if the three-dimensional model of a real building has been modified into a castle model with the same bottom footprint, this step gives the castle model a surface rendering that blends into the virtual content, covering style, color, surface material, degree of wear, sunlight reflection, and so on. Likewise, user safety must be fully considered: elements that may pose a threat to personal safety, such as steps, walls, and rivers, must be rendered realistically, indeed fairly faithfully. A descending staircase, for example, may be rendered as moss-covered steps to match the virtual scene, but must not be rendered as a downhill slope or a flat path.
The sound processing module for fused sound is controlled by the central processing module and performs different sound renderings to match different three-dimensional virtual scenes. For example, if the user opens a door while the VR virtual three-dimensional world places the user in a prison scene, the opened door is rendered as an iron-barred gate, and when the user opens the door the sound should correspondingly be rendered as the sound of opening an iron-barred gate.
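A minimal sketch of such scene-dependent sound selection follows; the lookup table and asset names are invented purely for illustration and are not part of the disclosed embodiments.

```python
# Hypothetical mapping from (real action, virtual scene theme) to a sound asset.
SOUND_TABLE = {
    ("open door", "prison"): "iron_gate_creak.wav",
    ("open door", "castle"): "wooden_gate_thud.wav",
}

def render_sound(action, scene_theme):
    """Pick the sound that matches the virtual scene, falling back to the
    unmodified real sound when no themed variant is defined."""
    return SOUND_TABLE.get((action, scene_theme), f"{action.replace(' ', '_')}.wav")

print(render_sound("open door", "prison"))  # iron_gate_creak.wav
```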
During the user's movement or head rotation, the central processing module, based on the data from the sensors 540, processes in real time the synchronized display and rendering of the virtual content and of the real elements in the surrounding reality.
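This real-time synchronization can be summarized as a per-frame loop. The sketch below is a schematic outline with placeholder callables (read_head_pose, render_virtual_content, render_real_elements, and display are assumed names, not an actual API).

```python
import time

def run_frame_loop(frames, read_head_pose, render_virtual_content,
                   render_real_elements, display, fps=60):
    """Schematic per-frame loop: read the head pose, then render the virtual
    content and the fused real elements from that viewpoint in the same frame."""
    for _ in range(frames):
        pose = read_head_pose()
        frame = render_virtual_content(pose)
        frame = render_real_elements(frame, pose)
        display(frame)
        time.sleep(1.0 / fps)

# Trivial stand-ins so the loop can be exercised directly.
run_frame_loop(3,
               read_head_pose=lambda: (0.0, 0.0, 0.0),
               render_virtual_content=lambda pose: {"pose": pose, "layers": ["virtual"]},
               render_real_elements=lambda frame, pose: {**frame, "layers": frame["layers"] + ["real"]},
               display=print)
```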
The technical solution of the present application is based on existing VR devices but provides a completely different experience. A user wearing a VR helmet that fully blocks the view of reality can still walk and move about; the experience can be said to surpass a purely virtual VR experience, being a fusion of virtuality and reality. While seeing the virtual world, the user can also independently adjust how many real elements of the surrounding environment are displayed in the virtual three-dimensional world and how virtualized they are, from vision to hearing. The purely virtual experience is enriched by a virtualized reality; for example, the user can sit on a long bench rendered from a real high step. Virtuality and reality become hard to tell apart, and the user is easily immersed in this new world in which they are fused. In addition, this embodiment breaks the limitation of traditional VR being usable only in purely virtual environments, while technically guaranteeing the user's safety, so that the user can walk around, even out onto the street, and can wear the device at any time. The mixed world of virtuality and reality provided by this embodiment is no longer confined to a purely virtual environment and represents a leap forward in the application of VR devices.
This embodiment further provides a virtuality-reality fusion method, which may be executed by the processor module of the electronic device described above. The method may include:
acquiring a realistic three-dimensional panoramic image of the real environment around the user, and identifying and extracting the real elements in the realistic three-dimensional panoramic image;
fusing the realistic three-dimensional panoramic image or the real elements into the virtual three-dimensional scene of the virtual reality system; and
image-rendering the realistic three-dimensional panoramic image, or image-rendering the selected real elements, in the virtual three-dimensional scene of the virtual reality system, according to the rendering virtual degree selected by the user for the realistic three-dimensional panoramic image or the real elements.
The real elements include safety-related real elements. According to the safety-related real elements and the other real elements selected by the user, the corresponding safety-related real elements and the selected other real elements are fused and image-rendered in the virtual three-dimensional scene of the virtual reality system, and the sound rendering corresponding to the virtual three-dimensional scene is determined according to the safety-related real elements and the other real elements.
Referring to FIG. 4, the flow of one embodiment of the virtuality-reality fusion method of this embodiment is shown. When the user starts the virtual reality system, the three-dimensional modeling modules 6-10 and the fusion rendering are not necessarily started at the same time; the reality fusion module starts working once the user has selected the number of real elements to fuse and the rendering virtuality.
Step 401: the central processing module presents the virtual three-dimensional scene to the user through the VR imaging glasses 550.
Step 402: determine whether the user has started the reality fusion module; if not, continue to present virtual three-dimensional content and scenes to the user in the virtual reality system working mode; if so, the reality capture modules and the fusion rendering start working.
Step 403: the several reality capture modules 6-10 simultaneously capture stereoscopic images, which are stitched into a realistic three-dimensional panoramic image or model.
Step 404: the image recognition module identifies all the real elements from the realistic three-dimensional panoramic image.
Step 406: the real element extraction module extracts the safety-related real elements, which have fusion priority.
Step 407: determine whether the user has selected the number of real elements to fuse; if not, wait in the virtual reality system working mode for the user's selection. When choosing the number of real elements, the user selects from several element quantity gear positions: a gear position of 0 means that no real elements are displayed or rendered and no objects of the surrounding real environment are seen; a gear position of 100 means that all real elements extracted from the real environment around the user are superimposed and displayed in the virtual three-dimensional scene; the intermediate positions between 0 and 100 adjust, in proportion to the gear position, the number of real elements superimposed in the virtual three-dimensional scene.
Step 408: the fusion matching module fuses the safety-related real elements and the selected other real elements into the virtual three-dimensional scene.
Step 409: determine whether the user has selected the rendering virtual degree; if not, wait in the virtual reality system working mode for the user's selection; if so, render all the selected real elements in the virtual three-dimensional scene of the virtual reality system at the selected virtual degree, according to the real element rendering virtual degree chosen by the user.
Step 410: the image and sound processing modules render, according to the rendering virtual degree, the safety-related real elements and the other real elements fused into the virtual three-dimensional scene.
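The flow of FIG. 4 can be condensed into a schematic sketch. The function below only mirrors the branching of steps 401-410 with placeholder callables; it is not the device's actual control code, and all names are assumptions made for the example.

```python
def fusion_flow(state, capture_panorama, recognize, extract_safety,
                fuse, render, present_virtual_scene):
    """Schematic mirror of steps 401-410: present the virtual scene, and only
    when fusion is enabled and both user selections exist, capture, recognise,
    extract, fuse, and render the real elements."""
    present_virtual_scene()                                    # step 401
    if not state.get("fusion_enabled"):                        # step 402
        return "virtual only"
    panorama = capture_panorama()                              # step 403
    elements = recognize(panorama)                             # step 404
    safety = extract_safety(elements)                          # step 406
    if state.get("element_quantity") is None:                  # step 407
        return "waiting for element quantity"
    fused = fuse(safety, elements, state["element_quantity"])  # step 408
    if state.get("virtual_degree") is None:                    # step 409
        return "waiting for rendering virtual degree"
    return render(fused, state["virtual_degree"])              # step 410

result = fusion_flow(
    {"fusion_enabled": True, "element_quantity": 50, "virtual_degree": 0.7},
    capture_panorama=lambda: "panorama",
    recognize=lambda p: ["staircase", "sofa"],
    extract_safety=lambda es: [e for e in es if e == "staircase"],
    fuse=lambda s, es, q: s + [e for e in es if e not in s][: q * len(es) // 100],
    render=lambda fused, d: f"rendered {fused} at degree {d}",
    present_virtual_scene=lambda: None)
print(result)  # rendered ['staircase', 'sofa'] at degree 0.7
```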
By providing real-time three-dimensional modeling modules and a fusion image processing module, the virtuality-reality fusion method and system of this embodiment integrate the real elements around the user into the VR virtual three-dimensional world, or perform image rendering based on those real elements, giving the user a unique experience that is both fantastical and beyond reality. Because the real elements actually exist and are presented in the virtual three-dimensional world, the user can perceive the surrounding real world even while wearing the helmet. By setting the proportion of real elements integrated into the virtual three-dimensional world, their degree of realism, and the rendered scene, the user can make the fused view seen through the VR imaging glasses 550 either identical to or different from reality. Although AR is also a mixture of the virtual and the real, its main scene remains real, with the virtual part contributing only a small amount of additional information. The technical solution of this embodiment, by contrast, blends virtuality and reality seamlessly: the user can set the ratio of virtual to real and genuinely feel the presence of real elements through the VR imaging glasses 550, obtaining a completely new virtual-and-real experience.
In summary, the present application provides a method and system that blend the virtual and the real, allowing the user to be fully immersed between virtuality and reality. The method and system can superimpose, on the virtual content of the virtual reality system and with an adjustable degree of virtuality, fused real elements of the user's surroundings, so that the user can walk and perform certain actions while wearing the virtual reality helmet, bringing a brand-new VR experience. While seeing the virtual world, the user can also adjust, from vision to hearing, how many elements of the surrounding real world are displayed in the virtual world and how virtualized they are; virtualized reality is added to the purely virtual experience, immersing the user in a new world in which the virtual and the real are fused. The method and system of the embodiments of the present application break the boundary of traditional VR, which is isolated from reality; safety-related real elements are fused with priority in the visual fusion, which fully guarantees the user's safety when walking and moving with the helmet on.
In this technical solution: 1. By fusing safety-related real elements into the virtual three-dimensional scene with priority, the present application enables the user to perceive the external real world while wearing the VR helmet, allowing safe walking and activity. 2. The present application provides an interaction device for setting the rendering of real elements, by which the user selects how virtually or realistically the real elements are rendered, so that rendering follows the user's wishes and provides the best experience. 3. The present application captures and models the realistic three-dimensional panoramic image using several reality capture modules: several modules whose fields of view cover one another simultaneously capture realistic three-dimensional images, which are stitched into a realistic three-dimensional panoramic image. 4. The present application also provides an interaction device for setting the fusion of real elements, offering several element quantity gear positions for the user to select, so that fusion follows the user's wishes and provides the best experience. 5. In addition to extracting the safety-related real elements with priority, the present application ranks the matching fusion degree of the other real elements, solving the problem of matching real elements to the virtual three-dimensional scene.
FIG. 6 is a schematic diagram of the hardware structure of an electronic device 600 for the virtuality-reality fusion method provided in an embodiment of the present application. As shown in FIG. 6, the electronic device 600 includes:
one or more processors 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
The processor 610 and the memory 620 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
As a non-volatile computer-readable storage medium, the memory 620 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the virtuality-reality fusion method in the embodiments of the present application (for example, the fusion matching module, the real element extraction module, and the image recognition module shown in FIG. 1). By running the non-volatile software programs, instructions, and modules stored in the memory 620, the processor 610 executes the various functional applications and data processing of the terminal or server, thereby implementing the virtuality-reality fusion method of the above method embodiments.
The memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the virtuality-reality fusion system, and the like. In addition, the memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 optionally includes memory located remotely from the processor 610; such remote memory may be connected to the virtuality-reality fusion system through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the virtuality-reality fusion method in any of the above method embodiments, for example performing method steps 403 to 410 in FIG. 4 described above and implementing the functions of the fusion matching module, the real element extraction module, and the image recognition module in FIG. 1.
The above products can execute the method provided by the embodiments of the present application and possess the functional modules and beneficial effects corresponding to executing the method. For technical details not described exhaustively in this embodiment, reference may be made to the method provided by the embodiments of the present application.
The electronic device of the embodiments of the present application exists in various forms, for example as other electronic apparatuses having a data interaction function.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions. When the computer-executable instructions are executed by one or more processors, for example one processor 610 in FIG. 6, they enable the one or more processors to perform the virtuality-reality fusion method in any of the above method embodiments, for example performing method steps 403 to 410 in FIG. 4 described above and implementing the functions of the fusion matching module, the real element extraction module, and the image recognition module in FIG. 1.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above embodiments, those of ordinary skill in the art can clearly understand that the embodiments can be implemented by means of software plus a general-purpose hardware platform, or of course by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Within the spirit of the present application, the technical features of the above embodiments or of different embodiments may also be combined, and the steps may be implemented in any order; many other variations of the different aspects of the present application as described above exist and, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

  1. A virtuality-reality fusion method, characterized by comprising:
    acquiring a realistic three-dimensional panoramic image of the real environment around a user;
    rendering the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user; and
    outputting the rendered result to a virtual reality display module.
  2. The method according to claim 1, characterized in that rendering the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user comprises:
    identifying and extracting real elements in the realistic three-dimensional panoramic image;
    acquiring a first rendering virtual degree selected by the user, the first rendering virtual degree being used to indicate a proportion of the real elements that the user wishes to retain; and
    processing the real elements in the realistic three-dimensional panoramic image according to the acquired first rendering virtual degree.
  3. The method according to claim 2, characterized in that the real elements comprise safety-related real elements, and processing the real elements in the realistic three-dimensional panoramic image according to the acquired first rendering virtual degree comprises:
    retaining at least a part of the safety-related real elements in the realistic three-dimensional panoramic image when the first rendering virtual degree indicates that the proportion of the real elements the user wishes to retain is non-zero.
  4. The method according to claim 1, characterized in that rendering the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user comprises:
    identifying and extracting real elements in the realistic three-dimensional panoramic image;
    acquiring a second rendering virtual degree selected by the user, the second rendering virtual degree being used to indicate a degree to which each real element is rendered; and
    rendering each real element in the realistic three-dimensional panoramic image according to the acquired second rendering virtual degree.
  5. The method according to any one of claims 2-4, characterized in that a sound rendering corresponding to a virtual three-dimensional scene is determined according to the safety-related real elements and the other real elements.
  6. The method according to claim 5, characterized in that acquiring the realistic three-dimensional panoramic image of the real environment around the user comprises:
    acquiring images captured from different angles by a plurality of image capture devices, and performing three-dimensional modeling on the acquired images using a three-dimensional modeling module.
  7. The method according to claim 6, characterized in that the first rendering virtual degree or the second rendering virtual degree is determined by a proportion selected by the user on an interactive interface.
  8. A virtuality-reality fusion system, comprising a virtual reality system for presenting a virtual three-dimensional scene to a user, characterized by further comprising a reality fusion module, wherein
    the reality fusion module is configured to acquire a realistic three-dimensional panoramic image of the real environment around the user;
    the virtual reality system is configured to render the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user; and
    the virtual reality system is further configured to output the rendered result to a virtual reality display module.
  9. The system according to claim 8, characterized in that the virtual reality system is further configured to identify and extract real elements in the realistic three-dimensional panoramic image; and, when rendering the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user, to acquire a first rendering virtual degree selected by the user, the first rendering virtual degree being used to indicate a proportion of the real elements that the user wishes to retain, and to process the real elements in the realistic three-dimensional panoramic image according to the acquired first rendering virtual degree.
  10. The system according to claim 9, characterized in that the real elements comprise safety-related real elements, and the virtual reality system is configured to retain at least a part of the safety-related real elements in the realistic three-dimensional panoramic image when the first rendering virtual degree indicates that the proportion of the real elements the user wishes to retain is non-zero.
  11. The system according to claim 8, characterized in that the virtual reality system is further configured to identify and extract real elements in the realistic three-dimensional panoramic image; and, when rendering the realistic three-dimensional panoramic image according to the rendering virtual degree selected by the user, to acquire a second rendering virtual degree selected by the user, the second rendering virtual degree being used to indicate a degree to which each real element is rendered, and to render each real element in the realistic three-dimensional panoramic image according to the acquired second rendering virtual degree.
  12. An electronic device, characterized by comprising: a processor module, and an interaction module and a virtual reality display module connected to the processor module;
    the processor module is configured to receive images of the user's surroundings captured from multiple angles, to acquire a realistic three-dimensional panoramic image according to the images, and to render the realistic three-dimensional panoramic image according to a rendering virtual degree selected by the user at the interaction module;
    wherein the processor module is further configured to output the rendered result to the virtual reality display module.
  13. The electronic device according to claim 12, characterized in that the electronic device is a virtual reality headset and further comprises a helmet and a reality capture module,
    the reality capture module being configured to capture images of the user's surroundings from multiple angles.
  14. The electronic device according to claim 13, characterized in that the interaction module comprises a quantity adjustment interaction device, wherein the processor module is further configured to identify and extract real elements in the realistic three-dimensional panoramic image; to acquire from the quantity adjustment interaction device a first rendering virtual degree selected by the user, the first rendering virtual degree being used to indicate a proportion of the real elements that the user wishes to retain; and to process the real elements in the realistic three-dimensional panoramic image according to the acquired first rendering virtual degree.
  15. The electronic device according to claim 14, characterized in that the real elements comprise safety-related real elements, wherein the processor module is further configured to retain at least a part of the safety-related real elements in the realistic three-dimensional panoramic image when the first rendering virtual degree indicates that the proportion of the real elements the user wishes to retain is non-zero.
  16. The electronic device according to claim 13, characterized in that the interaction module further comprises a rendering virtuality interaction device, wherein the processor module is further configured to identify and extract real elements in the realistic three-dimensional panoramic image; to acquire from the rendering virtuality interaction device a second rendering virtual degree selected by the user, the second rendering virtual degree being used to indicate a degree to which each real element is rendered; and to render each real element in the realistic three-dimensional panoramic image according to the acquired second rendering virtual degree.
  17. The electronic device according to any one of claims 14-16, characterized in that the processor module is configured to determine a sound rendering corresponding to a virtual three-dimensional scene according to the safety-related real elements and the other real elements.
  18. The electronic device according to any one of claims 14-16, characterized in that the reality capture module comprises a plurality of cameras, the plurality of cameras being disposed at different positions on the helmet.
  19. The electronic device according to any one of claims 14-16, characterized in that the interaction module comprises a knob and an analog-to-digital conversion device; the analog-to-digital conversion device is connected to the knob and to the processor module, and is configured to generate a corresponding digital signal according to the rotation angle of the knob and to output it to the processor module.
  20. An electronic device, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-7.
  21. A non-volatile readable storage medium, characterized in that the readable storage medium stores executable instructions, the executable instructions being used to cause a processor to perform the method according to any one of claims 1-7.
  22. A program product, characterized in that the program product comprises a program stored on a non-volatile readable storage medium, the program comprising program instructions which, when executed by a processor module, cause the processor module to perform the method according to any one of claims 1-7.
Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019041351A1 (en) * 2017-09-04 2019-03-07 艾迪普(北京)文化科技股份有限公司 Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
CN107610127B (en) * 2017-09-11 2020-04-03 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic apparatus, and computer-readable storage medium
EP3680857B1 (en) 2017-09-11 2021-04-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device and computer-readable storage medium
US10616033B2 (en) * 2017-11-06 2020-04-07 Honda Motor Co., Ltd. Different perspectives from a common virtual environment
CN109982096A (en) * 2017-12-27 2019-07-05 艾迪普(北京)文化科技股份有限公司 360 ° of VR content broadcast control systems of one kind and method
CN108320333B (en) * 2017-12-29 2022-01-11 中国银联股份有限公司 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
WO2019143959A1 (en) * 2018-01-22 2019-07-25 Dakiana Research Llc Method and device for presenting synthesized reality companion content
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN110691279A (en) * 2019-08-13 2020-01-14 北京达佳互联信息技术有限公司 Virtual live broadcast method and device, electronic equipment and storage medium
CN111243101B (en) * 2019-12-31 2023-04-18 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence
CN111311757B (en) * 2020-02-14 2023-07-18 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111563357B (en) * 2020-04-28 2024-03-01 纳威科技有限公司 Three-dimensional visual display method and system for electronic device
CN111899349B (en) * 2020-07-31 2023-08-04 北京市商汤科技开发有限公司 Model presentation method and device, electronic equipment and computer storage medium
CN114449251B (en) * 2020-10-31 2024-01-16 华为技术有限公司 Video perspective method, device, system, electronic equipment and storage medium
CN112669671B (en) * 2020-12-28 2022-10-25 北京航空航天大学江西研究院 Mixed reality flight simulation system based on physical interaction
CN114265496A (en) * 2021-11-30 2022-04-01 歌尔光学科技有限公司 VR scene switching method and device, VR head-mounted equipment and storage medium
CN114356087A (en) * 2021-12-30 2022-04-15 北京绵白糖智能科技有限公司 Interaction method, device, equipment and storage medium based on augmented reality

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779512B2 (en) * 2015-01-29 2017-10-03 Microsoft Technology Licensing, Llc Automatic generation of virtual materials from real-world materials
CN105872529A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Virtual reality switching method and device of virtual reality head-mounted display
CN105678831A (en) * 2015-12-30 2016-06-15 北京奇艺世纪科技有限公司 Image rendering method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566756A (en) * 2010-12-16 2012-07-11 微软公司 Comprehension and intent-based content for augmented reality displays
CN102999160A (en) * 2011-10-14 2013-03-27 微软公司 User controlled real object disappearance in a mixed reality display
CN102614665A (en) * 2012-04-16 2012-08-01 苏州市职业大学 Method for adding real object in online game image
US20160140729A1 (en) * 2014-11-04 2016-05-19 The Regents Of The University Of California Visual-inertial sensor fusion for navigation, localization, mapping, and 3d reconstruction

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636919A (en) * 2018-11-29 2019-04-16 武汉中地地科传媒文化有限责任公司 A kind of virtual museum's construction method, system and storage medium based on holographic technique
CN109636919B (en) * 2018-11-29 2023-04-07 武汉中地地科传媒文化有限责任公司 Holographic technology-based virtual exhibition hall construction method, system and storage medium
CN110928402A (en) * 2018-12-13 2020-03-27 湖南汉坤建筑安保器材有限公司 Construction safety experience comprehensive solution based on VR technology
CN109685878A (en) * 2018-12-29 2019-04-26 中铁工程装备集团有限公司 A kind of rock tunnel(ling) machine methods of exhibiting and system based on threedimensional model
CN109685878B (en) * 2018-12-29 2023-06-02 中铁工程装备集团有限公司 Tunnel boring machine display method and system based on three-dimensional model
CN109783926A (en) * 2019-01-09 2019-05-21 郑州科技学院 A kind of soft installing meter systems in interior
CN109872400A (en) * 2019-02-18 2019-06-11 上海电气集团股份有限公司 A kind of generation method of panoramic virtual reality scene
CN110232664A (en) * 2019-06-13 2019-09-13 大连民族大学 A kind of mask restorative procedure of exorcising based on augmented reality
CN110362209A (en) * 2019-07-23 2019-10-22 辽宁向日葵教育科技有限公司 A kind of MR mixed reality intelligent perception interactive system
CN110362209B (en) * 2019-07-23 2023-08-11 辽宁向日葵教育科技有限公司 MR mixed reality intelligent perception interactive system
CN110531865A (en) * 2019-09-20 2019-12-03 深圳市凯达尔科技实业有限公司 Actual situation scene operation platform and application method based on 5G and MR technology
CN110989837A (en) * 2019-11-29 2020-04-10 上海海事大学 Virtual reality system for passenger liner experience
CN110989837B (en) * 2019-11-29 2023-03-24 上海海事大学 Virtual reality system for passenger liner experience
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology
CN111192354A (en) * 2020-01-02 2020-05-22 武汉瑞莱保能源技术有限公司 Three-dimensional simulation method and system based on virtual reality
CN111275612A (en) * 2020-01-17 2020-06-12 成都库珀区块链科技有限公司 VR (virtual reality) technology-based K-line display and interaction method and device
CN111240630A (en) * 2020-01-21 2020-06-05 杭州易现先进科技有限公司 Augmented reality multi-screen control method and device, computer equipment and storage medium
CN111240630B (en) * 2020-01-21 2023-09-26 杭州易现先进科技有限公司 Multi-screen control method and device for augmented reality, computer equipment and storage medium
CN111429348A (en) * 2020-03-20 2020-07-17 中国铁建重工集团股份有限公司 Image generation method, device and system and readable storage medium
CN111383313A (en) * 2020-03-31 2020-07-07 歌尔股份有限公司 Virtual model rendering method, device and equipment and readable storage medium
CN113706722A (en) * 2020-05-21 2021-11-26 株洲中车时代电气股份有限公司 Product appearance and function display system, display method, medium and equipment
CN111665944B (en) * 2020-06-09 2023-08-08 浙江商汤科技开发有限公司 Decoration method and device for space capsule special effect, electronic equipment and storage medium
CN111665944A (en) * 2020-06-09 2020-09-15 浙江商汤科技开发有限公司 Method and device for decorating special effect of space capsule, electronic equipment and storage medium
CN111667588A (en) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Person image processing method, person image processing device, AR device and storage medium
CN112291619B (en) * 2020-10-24 2023-09-29 西北工业大学 Mobile terminal applet frame rendering method based on blocking and pause
CN112291619A (en) * 2020-10-24 2021-01-29 西北工业大学 Mobile terminal small program frame rendering method based on blocking and pausing
CN112891940A (en) * 2021-03-16 2021-06-04 天津亚克互动科技有限公司 Image data processing method and device, storage medium and computer equipment
CN112891940B (en) * 2021-03-16 2024-01-09 天津亚克互动科技有限公司 Image data processing method and device, storage medium and computer equipment
CN113671813B (en) * 2021-08-20 2022-09-13 中国人民解放军陆军装甲兵学院 Virtual and real scene fused full-parallax holographic volume view manufacturing method and system
CN113671813A (en) * 2021-08-20 2021-11-19 中国人民解放军陆军装甲兵学院 Virtual and real scene fused full-parallax holographic volume view manufacturing method and system
CN114416184B (en) * 2021-12-06 2023-08-01 北京航空航天大学 In-memory computing method and device based on virtual reality equipment
CN114416184A (en) * 2021-12-06 2022-04-29 北京航空航天大学 Memory computing method and device based on virtual reality equipment
CN115212565A (en) * 2022-08-02 2022-10-21 领悦数字信息技术有限公司 Method, apparatus, and medium for setting virtual environment in virtual scene
CN115212565B (en) * 2022-08-02 2024-03-26 领悦数字信息技术有限公司 Method, apparatus and medium for setting virtual environment in virtual scene
CN116301481A (en) * 2023-05-12 2023-06-23 北京天图万境科技有限公司 Multi-multiplexing visual bearing interaction method and device
CN116681869A (en) * 2023-06-21 2023-09-01 西安交通大学城市学院 Cultural relic 3D display processing method based on virtual reality application
CN116681869B (en) * 2023-06-21 2023-12-19 西安交通大学城市学院 Cultural relic 3D display processing method based on virtual reality application
CN117152349A (en) * 2023-08-03 2023-12-01 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117152349B (en) * 2023-08-03 2024-02-23 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis

Also Published As

Publication number Publication date
CN107077755A (en) 2017-08-18
CN107077755B (en) 2021-06-04

Similar Documents

Publication Publication Date Title
WO2018058601A1 (en) Method and system for fusing virtuality and reality, and virtual reality device
US10078917B1 (en) Augmented reality simulation
JP2023126303A (en) Method and apparatus for determining and/or evaluating localizing map of image display device
CN106157359B (en) Design method of virtual scene experience system
US11010958B2 (en) Method and system for generating an image of a subject in a scene
US8928659B2 (en) Telepresence systems with viewer perspective adjustment
CN111880644A (en) Multi-user instant location and map construction (SLAM)
KR20180120801A (en) Switching between binocular and monocular time
EP3308539A1 (en) Display for stereoscopic augmented reality
CN106780759A (en) Method, device and the VR systems of scene stereoscopic full views figure are built based on picture
JP2019121224A (en) Program, information processing device, and information processing method
CN107005689B (en) Digital video rendering
US20230105064A1 (en) System and method for rendering virtual reality interactions
JP6126272B1 (en) Method, program, and recording medium for providing virtual space
JP6775669B2 (en) Information processing device
US20240036327A1 (en) Head-mounted display and image displaying method
WO2020017435A1 (en) Information processing device, information processing method, and program
JP2018198025A (en) Image processing device, image processing device control method, and program
WO2017191703A1 (en) Image processing device
JP5864789B1 (en) Railway model viewing device, method, program, dedicated display monitor, scene image data for composition
JP7030075B2 (en) Programs, information processing devices, and information processing methods
WO2017149254A1 (en) Man/machine interface with 3d graphics applications
CN106961592B (en) VR (virtual reality) display method and system for 3D (three-dimensional) video
WO2018173206A1 (en) Information processing device
JP7044846B2 (en) Information processing equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917351

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 21/08/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16917351

Country of ref document: EP

Kind code of ref document: A1