WO2018219336A1 - A modular MR device imaging method - Google Patents

A modular MR device imaging method

Info

Publication number
WO2018219336A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
smart phone
main
data
component
Prior art date
Application number
PCT/CN2018/089434
Other languages
English (en)
French (fr)
Inventor
胡伯涛
Original Assignee
福州光流科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 福州光流科技有限公司 filed Critical 福州光流科技有限公司
Priority to JP2019546878A (granted as JP7212819B2)
Priority to CN201880002360.1A (granted as CN109313342B)
Priority to US16/477,527 (granted as US11709360B2)
Publication of WO2018219336A1


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802Sensor mounted on worn items
    • A61B5/6803Head-worn items, e.g. helmets, masks, headphones or goggles
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/724094Interfacing with a device worn on the user's body to provide access to telephonic functionalities, e.g. accepting a call, reading or composing a message
    • H04M1/724097Worn on the head
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00Evaluating a particular growth phase or type of persons or animals
    • A61B2503/12Healthy persons not otherwise provided for, e.g. subjects of a marketing survey
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/12Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present invention relates to the field of display devices, and more particularly to a modular MR (mixed reality) device imaging method.
  • an MR device is a display device that can superimpose virtual images on a real-environment background.
  • such devices are usually built around high-performance computing hardware and complicated optical paths, which makes them costly and inconvenient to use; portable VR glasses, by contrast, have become popular because
  • the mobile phone serves as the display core, which greatly reduces the cost of forming a virtual-reality image and makes the device convenient to use.
  • the invention provides a modular MR device imaging method with which an inexpensive MR device can be prepared and rich MR interaction effects can be flexibly defined.
  • to achieve this, the present invention adopts the following technical solutions.
  • the MR device includes an MR calculation module, an MR optical path module, and an MR posture module;
  • the MR calculation module includes a display component;
  • the MR posture module includes a shooting component and an IMU component;
  • the shooting component is configured to collect images in a preset angular direction relative to the display component;
  • the IMU component is configured to acquire posture data of the MR device;
  • the MR calculation module is connected to the MR posture module and adjusts the display content of the display component according to the image data and posture data collected by the MR posture module.
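The pose-driven adjustment described above can be sketched as follows. This is an illustrative approximation only, not the patent's implementation: the linear angle-to-pixel mapping and all names (`view_offset`, the field-of-view parameters) are assumptions.

```python
def view_offset(yaw_deg, pitch_deg, screen_w, screen_h,
                fov_h_deg=60.0, fov_v_deg=40.0):
    """Map a change in head pose (from the IMU component) to a pixel
    offset of the virtual overlay, so the virtual image stays anchored
    in the real scene. Angles are pose deltas since the overlay was
    placed; the linear mapping is a small-angle simplification."""
    dx = -yaw_deg / fov_h_deg * screen_w    # turning right shifts the overlay left
    dy = pitch_deg / fov_v_deg * screen_h   # looking up shifts the overlay down
    return int(round(dx)), int(round(dy))
```

For example, turning the head 6 degrees to the right on a 60-degree horizontal field of view shifts the overlay about one tenth of the screen width to the left.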
  • the MR optical path module includes a virtual-image optical path and a mixing optical path; the virtual-image optical path is connected to the display component; the input end of the mixing optical path is connected to the virtual-image optical path and its output end is the observation end; the mixing optical path is provided with a half mirror;
  • one surface of the half mirror is the real-image introduction surface and the other surface is the virtual-image introduction surface; the real-image introduction surface faces the real environment; the virtual-image introduction surface faces the virtual-image optical path; the display content of the display component is transformed by the virtual-image optical path into a virtual image, whose light is reflected by the virtual-image introduction surface toward the observation end; light from the real environment is transmitted through the real-image introduction surface to the observation end and mixes with the virtual image to form a mixed-reality image.
  • the MR calculation module is a main smartphone and the display component is the display module of the main smartphone; the IMU component includes a magnetometer, a gyroscope, and an accelerometer, and comprises a main IMU component and auxiliary IMU components;
  • the main IMU component, located in the main smartphone, collects posture data of the display component; the auxiliary IMU components are located in one or more control devices wirelessly connected to the main smartphone and
  • collect posture data or position data of those control devices; posture data includes attitude angle, angular rate, or acceleration data; the shooting component includes a main shooting component and auxiliary shooting components: the main shooting component is the rear camera of the main smartphone, and an auxiliary shooting component is a camera on a control device.
  • the MR optical path module is a passive MR wearing mechanism; the main smartphone is fixed to the MR wearing mechanism and its rear camera serves as the main shooting component; the control device may be a game controller, a wearable device worn on the hand or foot, a sensor-and-control device attached to the MR headset, or an auxiliary phone held by the user or attached to a limb.
  • the virtual-image path of the MR wearing mechanism comprises a resting plate, a total-reflection mirror, and a field lens; the field lens is composed of two Fresnel lenses; the main smartphone is placed sideways (landscape) on the resting plate; when the headset is working, the main smartphone plays video in landscape dual split-screen (VR split-screen) mode;
  • the image light of the two half-screens is reflected by the total-reflection mirror onto the two Fresnel lenses, which refract it
  • so that it forms two virtual-image beams with a preset angle of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end, while light from the real environment is transmitted through the real-image introduction surface to the observation end,
  • where the real ambient light mixes with the virtual-image light to form the mixed-reality image.
  • the orientation of the rear camera of the main smartphone is the orientation of the MR wearing mechanism, and
  • the posture data of the display component is the posture data of the main smartphone;
  • the IMU component of the main smartphone collects the phone's posture data; when the MR wearing mechanism starts working, the rear camera of the main smartphone collects feature points of the real scene at the mechanism's initial orientation, and it continuously collects images as a posture map while the mechanism is working;
  • the MR calculation module adjusts the image on the dual split-screen according to the changes of the feature points in the posture map and the changes of the main smartphone's posture data.
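The combination of feature-point changes and IMU data described above can be sketched as a simple complementary blend. This is a toy illustration under stated assumptions: matched feature points are given as aligned lists, the visual and inertial estimates are both expressed as pixel shifts, and the weighting `alpha` is arbitrary.

```python
def fuse_shift(feature_pts_prev, feature_pts_curr, imu_shift, alpha=0.8):
    """Blend the image shift measured from tracked feature points with
    the shift predicted by the IMU posture data. Both point lists hold
    matched (x, y) tuples in the same order and must be non-empty."""
    n = len(feature_pts_prev)
    # visual estimate: mean displacement of the matched feature points
    vis_dx = sum(c[0] - p[0] for p, c in zip(feature_pts_prev, feature_pts_curr)) / n
    vis_dy = sum(c[1] - p[1] for p, c in zip(feature_pts_prev, feature_pts_curr)) / n
    # complementary weighting: trust vision mostly, IMU fills the gap
    fused_dx = alpha * vis_dx + (1 - alpha) * imu_shift[0]
    fused_dy = alpha * vis_dy + (1 - alpha) * imu_shift[1]
    return fused_dx, fused_dy
```

The fused shift would then drive the split-screen adjustment; a real implementation would work with full 6-DoF poses rather than 2-D pixel shifts.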
  • the image played in the landscape dual split-screen includes a virtual character and a control identifier; the MR calculation module generates the control identifier from the posture and position data of the control device uploaded by the auxiliary IMU component, so the control identifier moves as the control device moves, and the virtual character can interact with the control identifier.
  • the main smartphone is connected to an external device through a network; the virtual character and the control identifier shown in the landscape dual split-screen are part of the mixed-reality image, and the virtual character corresponds to the external device: when the virtual character interacts with the control identifier, the external device performs the corresponding job based on the interaction content.
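The interaction-to-job chain above can be sketched as a proximity test plus a serialized command. Everything here is an assumption for illustration: the distance threshold, the JSON field names, and the idea that "interaction" means the identifier entering a radius around the character; the patent does not fix a wire format.

```python
import json
import math

def check_interaction(char_pos, marker_pos, radius=0.1):
    """True when the control identifier touches or is adjacent to the
    virtual character (simple distance test in the shared MR space)."""
    return math.dist(char_pos, marker_pos) <= radius

def make_job_message(device_id, action):
    """Serialize the job for the external device bound to the virtual
    character. Field names are illustrative, not from the patent."""
    return json.dumps({"device": device_id, "action": action})
```

A display-end loop would call `check_interaction` each frame and, on a hit, send `make_job_message(...)` to the networked external device.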
  • the imaging method comprises the following steps in order;
  • the user fixes the main smartphone, pre-installed with the MR application APP, to the resting plate of the MR wearing mechanism, and holds the auxiliary phone, which is also a smartphone with the MR application APP pre-installed;
  • the user wears the MR wearing mechanism and brings the eyes close to the observation end to observe the mixed-reality image;
  • the MR application APP on the main smartphone is activated and set as the display end; the main smartphone plays video in landscape dual split-screen mode; the image light of the two half-screens is reflected by the total-reflection mirror onto the two Fresnel lenses, which refract it into two virtual-image beams with a preset angle of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end, light from the real environment is transmitted through the real-image introduction surface to the observation end, and the real ambient light mixes with the virtual-image light there to form the mixed-reality image;
  • the rear camera of the main smartphone collects feature points of the real scene at the initial orientation of the MR wearing mechanism and continuously collects images as a posture map while the mechanism works; the MR calculation module adjusts the image on the dual split-screen according to the changes of the feature points in the posture map and the changes of the main smartphone's posture data;
  • the user lifts the auxiliary phone to a specific point in the mixed-reality image and activates the MR application APP on the auxiliary phone, setting it as the control end; the auxiliary IMU component on the auxiliary phone collects the phone's posture data and position
  • data; the control end connects to the display end wirelessly and uploads the auxiliary phone's posture data and control data to the display end;
  • the MR calculation module generates a control identifier in the mixed-reality image from the auxiliary phone's posture and position data, and the control identifier moves as the auxiliary phone moves; when the control identifier in the mixed-reality image touches or comes adjacent to the virtual character, the virtual character interacts with the control identifier;
  • the virtual character corresponds to an external device, and
  • the external device performs the corresponding job according to the interaction content.
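The control-end upload in the steps above can be sketched as a small pose packet. The schema (field names, a sequence number for dropping stale wireless packets) is an assumption for illustration; the patent only says posture and control data are uploaded over a wireless link.

```python
import json

def encode_pose_packet(role, attitude, position, seq):
    """Pose update the control end uploads to the display end over the
    wireless link. All field names are illustrative assumptions."""
    return json.dumps({
        "role": role,        # "control" or "display"
        "att": attitude,     # (yaw, pitch, roll) in degrees
        "pos": position,     # (x, y, z) in metres
        "seq": seq,          # monotonically increasing counter
    }).encode("utf-8")

def decode_pose_packet(raw, last_seq):
    """Parse a packet on the display end; stale or out-of-order packets
    (sequence number not newer than the last seen) are discarded."""
    msg = json.loads(raw.decode("utf-8"))
    if msg["seq"] <= last_seq:
        return None
    return msg
```

Note that JSON turns the attitude/position tuples into lists on the receiving side.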
  • the main smartphone and the auxiliary phone generate and share unified spatial positioning data by a monocular visual-inertial odometry method, which comprises the following steps;
  • the main smartphone and the auxiliary phone collect images through their cameras to generate their respective posture maps;
  • both phones collect their respective posture data through the built-in IMU components, and each phone associates its posture map with its posture data,
  • forming its spatial-image association data; the phones connect through the network, pool their spatial-image association data, and generate a unified spatial-image association database on both phones;
  • both phones continue to collect posture maps and posture data while moving, adding the newly acquired posture maps and posture data to the spatial-image association database and associating them;
  • (step B3) while moving, both phones compare the currently collected posture map and posture data against the spatial-image association database to obtain the phone's specific position in the current space and to predict changes of its trajectory and posture;
  • (step B4) while moving, both phones read the spatial-image association database and compare the currently acquired posture map against the posture maps and posture data collected at the same coordinates and the same posture over the past N time frames, updating the database when differences are found;
  • in steps B3 and B4 the phones compare and check data against preset tolerance thresholds to improve the efficiency and robustness of spatial positioning.
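The shared database and the tolerance-based matching of steps B3/B4 can be sketched as follows. This is a deliberately toy model: a real visual-inertial system matches high-dimensional image descriptors, whereas here each posture-map entry is reduced to a single scalar "fingerprint", which is purely an assumption for illustration.

```python
class SpatialImageDB:
    """Toy version of the unified spatial-image association database:
    each entry ties a posture-map fingerprint to IMU posture data.
    The scalar fingerprint is an illustrative stand-in for real
    image descriptors."""

    def __init__(self, tol=0.05):
        self.entries = []   # list of (fingerprint, pose) tuples
        self.tol = tol      # preset tolerance threshold (steps B3/B4)

    def add(self, fingerprint, pose):
        """Associate newly collected map data with posture data."""
        self.entries.append((fingerprint, pose))

    def merge(self, other):
        """Network pooling step: both phones combine their entries."""
        self.entries.extend(other.entries)

    def locate(self, fingerprint):
        """Return the stored pose whose fingerprint is nearest, if it
        falls within tolerance; otherwise None (unseen place, so the
        caller would extend the map instead)."""
        best = None
        for fp, pose in self.entries:
            d = abs(fp - fingerprint)
            if d <= self.tol and (best is None or d < best[0]):
                best = (d, pose)
        return best[1] if best else None
```

Each phone would keep one such database, merge over the network, and call `locate` with each new frame to recover its position in the shared space.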
  • the MR wearing mechanism is made from a single sheet of material provided, along its length, with an A fold section, a B fold section, and a C fold section; the A fold section carries the half mirror and the field lens,
  • the B fold section carries the total-reflection mirror, and the C fold section carries the resting plate, which has an imaging hole through which the rear camera collects external images.
  • the preparation method of the MR wearing mechanism comprises the following steps in order;
  • the A fold section and the B fold section are folded to form a rhombic prism;
  • the field lens lies on a line connecting two vertices of the rhombus; of the four sides of the rhombic prism, one is open as the image-light incident surface and the other three are closed;
  • the image-light incident surface faces the total-reflection-mirror wall,
  • which carries the total-reflection mirror;
  • the observation hole is located in the observation-hole wall;
  • the side wall of the rhombic prism facing the observation hole is the half-mirror wall,
  • in which the half mirror is mounted;
  • the observation end includes the observation hole.
  • when the main smartphone plays video in landscape dual split-screen (VR split-screen) mode, the mixed-reality image formed from the phone-screen image and the external scene can be seen through the observation hole.
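The landscape dual split-screen mode above amounts to rendering the same scene twice, once into each half of the screen. The sketch below computes the two viewports; the per-eye horizontal offset (half the interpupillary distance, in pixels) and all names are illustrative assumptions.

```python
def split_screen_viewports(screen_w, screen_h, ipd_px=0):
    """Viewports for landscape dual split-screen (VR split-screen)
    playback: the scene is drawn once per half, each half optionally
    shifted by half the interpupillary distance so the two eyes see
    slightly offset virtual images."""
    half = screen_w // 2
    left  = {"x": 0,    "y": 0, "w": half, "h": screen_h,
             "eye_dx": -ipd_px // 2}   # left eye camera shifted left
    right = {"x": half, "y": 0, "w": half, "h": screen_h,
             "eye_dx": ipd_px // 2}    # right eye camera shifted right
    return left, right
```

A renderer would set each viewport in turn and translate its virtual camera by `eye_dx` before drawing.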
  • the bottom of the resting plate is provided with a damping member; the resting plate is either detachably coupled to the casing via Velcro or a buckle, or fixedly coupled to the casing.
  • in another embodiment, the MR calculation module is a main smartphone and the display component is its display module; the IMU component includes a magnetometer, a gyroscope, and an accelerometer, and comprises a main IMU component and zero or more auxiliary IMU components; the main IMU component, located in the main smartphone, collects posture data of the display component; the auxiliary IMU components are located in one or more control devices wirelessly connected to the main smartphone and collect posture data or position data of those devices; posture data includes attitude angle, angular rate, or acceleration data; the shooting component includes a main shooting component and, optionally, auxiliary shooting components: the main shooting component is the rear camera of the main smartphone and an auxiliary shooting component is a camera on a control device.
  • the MR optical path module is a passive MR wearing mechanism; the main smartphone is fixed to the MR wearing mechanism and its rear camera serves as the main shooting component; the control device may be a game controller, a wearable device worn on the hand or foot, a sensor-and-control device attached to the MR headset, or an auxiliary phone held by the user or attached to a limb.
  • the virtual-image path of the MR wearing mechanism comprises a resting plate, a total-reflection mirror, and a field lens; the field lens is composed of two Fresnel lenses; the main smartphone is placed sideways (landscape) on the resting plate; when the headset is working, the main smartphone plays video in landscape dual split-screen (VR split-screen) mode;
  • the image light of the two half-screens is reflected by the total-reflection mirror onto the two Fresnel lenses, which refract it
  • so that it forms two virtual-image beams with a preset angle of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end, while light from the real environment is transmitted through the real-image introduction surface to the observation end,
  • where the real ambient light mixes with the virtual-image light to form the mixed-reality image.
  • the orientation of the rear camera of the main smartphone is the orientation of the MR wearing mechanism, and
  • the posture data of the display component is the posture data of the main smartphone;
  • the IMU component of the main smartphone collects the phone's posture data; when the MR wearing mechanism is working, the rear camera collects feature points of the real scene the mechanism faces and continuously collects images to form a feature-point posture map;
  • the MR calculation module calculates the spatial position of the main smartphone from the changes of the feature points in the feature-point posture map and the changes of the phone's posture data, and adjusts the image on the dual split-screen accordingly.
  • the image played in the landscape dual split-screen includes a virtual character and a control identifier; the MR calculation module calculates the spatial position of the controller from the posture and position data uploaded by the auxiliary IMU component or the auxiliary shooting component on the control device, and generates the control identifier, which moves as the control device moves; the virtual character can interact with the control identifier.
  • the main smartphone is connected to an external device through a network; the virtual character and the control identifier shown in the landscape dual split-screen are part of the mixed-reality image, and the virtual character corresponds to the external device: when the virtual character interacts with the control identifier, the external device performs the corresponding job based on the interaction content.
  • in this embodiment the imaging method comprises the following steps in order;
  • the user fixes the main smartphone, pre-installed with the MR application APP, to the resting plate of the MR wearing mechanism, and holds the control device, which may be a smartphone with the MR application APP pre-installed;
  • the user wears the MR wearing mechanism and brings the eyes close to the observation end to observe the mixed-reality image;
  • the MR application APP on the main smartphone is activated and set as the display end; the main smartphone plays video in landscape dual split-screen mode; the image light of the two half-screens is reflected by the total-reflection mirror onto the two Fresnel lenses, which refract it into two virtual-image beams with a preset angle of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end, light from the real environment is transmitted through the real-image introduction surface to the observation end, and the real ambient light mixes with the virtual-image light there to form the mixed-reality image;
  • the rear camera of the main smartphone collects feature points of the real scene at the initial orientation of the MR wearing mechanism and continuously collects images to form a feature-point posture map while the MR wearing mechanism works;
  • the MR calculation module calculates the spatial position of the main smartphone from the changes of the feature points in the posture map and the changes of the main smartphone's posture data, and uses this information to adjust the image on the dual split-screen;
  • the user lifts the control device to a specific point in the mixed-reality image;
  • if the control device is a smartphone,
  • the MR application APP on it is activated and set as the control end;
  • the auxiliary IMU component on the control device collects the device's posture data and position data; the control end connects to the display end wirelessly and uploads the control device's posture data and control data to the display end;
  • the MR calculation module generates a control identifier in the mixed-reality image from this posture and position data, and the control identifier moves as the control device moves; when the control identifier in the mixed-reality image touches or comes adjacent to the virtual character, the virtual character interacts with the control identifier;
  • the virtual character corresponds to an external device, and
  • the external device performs the corresponding job according to the interaction content.
  • the MR wearing mechanism is made from a single sheet of material provided, along its length, with an A fold section, a B fold section, and a C fold section; the A fold section carries the half mirror and the field lens,
  • the B fold section carries the total-reflection mirror, and the C fold section carries the resting plate, which has an imaging hole through which the rear camera collects external images.
  • the preparation method of the MR wearing mechanism comprises the following steps in order;
  • the A fold section and the B fold section are folded to form a rhombic prism;
  • the field lens lies on a line connecting two vertices of the rhombus; of the four sides of the rhombic prism, one is open as the image-light incident surface and the other three are closed;
  • the image-light incident surface faces the total-reflection-mirror wall,
  • which carries the total-reflection mirror;
  • the observation hole is located in the observation-hole wall;
  • the side wall of the rhombic prism facing the observation hole is the half-mirror wall,
  • in which the half mirror is mounted;
  • the observation end includes the observation hole.
  • when the main smartphone plays video in landscape dual split-screen (VR split-screen) mode, the mixed-reality image formed from the phone-screen image and the external scene can be seen through the observation hole.
  • the invention adopts a modular design in which the optical path of the MR device is independent of the playing source;
  • an ordinary smartphone can serve as the playing device, and the casing of the MR wearing mechanism
  • can carry no electronic equipment at all, or only a small number of sensors alongside the smartphone, greatly reducing the manufacturing cost of the MR glasses.
  • the virtual video source of the MR device is a smartphone;
  • the smartphone placed on the resting plate becomes part of the MR device during use, so the interactive capability of the MR device improves with the phone's hardware performance
  • and software; when upgrading the MR device it is only necessary to upgrade the phone hardware, or merely the APP inside the phone, without being bound to fixed equipment as with a traditional MR device, which makes upgrading the MR device more convenient and economical.
  • the smart phone has sensors such as a magnetometer, a gyroscope and an accelerometer.
  • when the APP for the MR display works with these sensors, positioned display of the mixed reality image can be realized: the virtual image within the mixed reality image responds to the orientation and motion of the MR device, and as the functions and sampling performance of IMU components improve, so do the functions of the MR device.
  • since the optical path portion of the MR device is simply folded from a thin sheet, different materials can be selected for the optical-path substrate to suit the market's differing requirements for the price and strength of the MR glasses.
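The IMU-driven positioning display described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, screen resolution, and field-of-view values are all assumptions. The idea is that a world-fixed virtual anchor shifts on screen opposite to the headset's rotation, so it appears pinned to the real scene:

```python
def project_virtual_anchor(anchor_yaw_deg, anchor_pitch_deg,
                           device_yaw_deg, device_pitch_deg,
                           fov_h_deg=90.0, fov_v_deg=60.0,
                           screen_w=1280, screen_h=720):
    """Map a world-fixed virtual anchor to screen pixels using only the
    IMU attitude of the device. As the headset rotates, the anchor's
    on-screen position moves the opposite way, which is what makes the
    virtual image appear anchored in the real environment.
    Returns None when the anchor leaves the field of view."""
    dyaw = anchor_yaw_deg - device_yaw_deg
    dpitch = anchor_pitch_deg - device_pitch_deg
    if abs(dyaw) > fov_h_deg / 2 or abs(dpitch) > fov_v_deg / 2:
        return None  # anchor is outside the current view
    x = screen_w / 2 + (dyaw / (fov_h_deg / 2)) * (screen_w / 2)
    y = screen_h / 2 - (dpitch / (fov_v_deg / 2)) * (screen_h / 2)
    return (round(x), round(y))
```

With the device facing the anchor, the anchor sits at screen center; turning the device right moves the anchor left, and turning past half the field of view makes it disappear.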
  • the shelf is provided with an imaging hole through which the rear camera of the image player captures an external image; the image player is a smart phone.
  • in this product, the camera of the smart phone points in the same direction as the MR device's actual observation direction, which is the orientation of the observation hole.
  • this design lets the virtual image of the MR device interact with the external image captured by the phone camera, because the image captured by the smartphone camera and the real scene observable through the observation hole are essentially the same.
  • when the virtual image is interactively associated with the real image, the APP for the MR display in the smart phone can make the virtual image respond accordingly: for example, if the virtual image displayed by the MR display APP is a kettle, then when the observation hole of the MR device faces a fire source, the camera of the smartphone captures the fire-source image and passes it to the MR display APP, which automatically places the virtual kettle onto the fire.
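The kettle-and-fire behavior above can be sketched as a small event handler. Everything here, including the recognizer interface returning labelled positions, is a hypothetical illustration rather than anything specified in the patent:

```python
class MRDisplayApp:
    """Sketch of the kettle/fire example: when an (assumed) recognizer
    reports a fire source in the camera frame, the APP moves the
    virtual kettle onto it in the mixed reality image."""

    def __init__(self, recognizer):
        # recognizer: frame -> list of (label, position) pairs (assumed API)
        self.recognize = recognizer
        self.kettle_pos = (0, 0)

    def on_camera_frame(self, frame):
        """Called per camera frame; returns True if the kettle moved."""
        for label, pos in self.recognize(frame):
            if label == "fire":
                self.kettle_pos = pos  # place the virtual kettle on the fire
                return True
        return False
```

The recognizer is deliberately abstract: on a real phone it could be any on-device vision model feeding the MR display APP.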
  • the control mode of the MR device provides powerful expansion capability: it is only necessary to install the corresponding MR application APP on the control-side smart phone or game handle, and the control-side smart phone can then be virtually displayed as various control identifiers in the mixed reality image, such as a game gun, a shield, a grabber, a robot, a file control icon, and the like.
  • since the optical path module of the MR device of the present invention is simply folded from a thin sheet, its cost is extremely low, and the computing and control device is a smart phone onto which software can be installed, so the user can obtain MR functionality at low cost.
  • for example, the handheld smartphone can be virtualized as a file control icon; after reviewing the manuscript in the mixed reality image, the user can print it directly by throwing the file control icon onto the printer's virtual character in the mixed reality image.
  • the interaction of the file control icon with the printer's virtual character is equivalent to a print command, so the real printer performs the file print job directly.
  • Figure 1 is a schematic view of the present invention.
  • Figure 2 is a schematic view of another aspect of the present invention.
  • Figure 3 is a schematic view of the optical path of the present invention.
  • Figure 4 is a schematic view showing the structure of a lens of the present invention.
  • Figure 5 is a schematic view showing the folding process of the casing of the present invention.
  • Figure 6 is a perspective view of the present invention.
  • Figure 7 is another perspective view of the present invention.
  • Figure 8 is a schematic diagram of the main smart phone playing a VR split-screen-mode image in a horizontal double split screen according to the present invention.
  • 1 - MR optical path module; 2 - total reflection mirror; 3 - field lens; 4 - half mirror; 5 - shelf (resting board); 6 - shield; 601 - observation end; 7 - damping member; a - MR calculation module;
  • 101 - Fresnel lens; 102 - observation hole; 103 - C folding section; 104 - A folding section; 105 - B folding section; 106 - main smartphone; 107 - control identifier; 108 - virtual character; 109 - display component.
  • the MR device includes an MR calculation module a, an MR optical path module 1 and an MR attitude module; the MR calculation module a includes a display component 109;
  • the MR posture module includes a shooting component and an IMU component; the shooting component is configured to acquire an image in a preset angular direction of the display component; the IMU component is configured to acquire posture data of the MR device; the MR calculation module is connected to the MR posture module and adjusts the display content of the display component according to the image data and the posture data collected by the MR posture module.
  • the MR optical path module 1 includes a virtual image optical path and a mixing optical path; the virtual image optical path adjoins the display component; the input end of the mixing optical path adjoins the virtual image optical path, and its output end is the observation end; the mixing optical path is provided with a half mirror 4; one side of the half mirror 4 is a real image introduction surface, and the other is a virtual image introduction surface; the real image introduction surface faces the real environment; the virtual image introduction surface faces the virtual image optical path; the display content of the display component is processed and transmitted through the virtual image optical path to form a virtual image, whose light is reflected by the virtual image introduction surface to the observation end; light from the real environment is transmitted through the real image introduction surface to the observation end 601 and mixes with the virtual image to form a mixed reality image.
  • the MR calculation module a is a main smart phone 106; the display component is the display module of the main smart phone 106; the IMU component includes a magnetometer, a gyroscope and an accelerometer; the IMU component includes a main IMU component and auxiliary IMU components;
  • the main IMU component collects posture data of the display component and is disposed at the main smart phone; the auxiliary IMU components are disposed at one or more control devices wirelessly connected to the main smart phone;
  • the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude angle, angular rate or acceleration data; the shooting component includes a main shooting component and an auxiliary shooting component; the main shooting component is the rear camera of the main smart phone, and the auxiliary shooting component is a camera at the control device.
  • the MR optical path module 1 is a passive MR wearing mechanism; the main smart phone 106 is fixed at the MR wearing mechanism 1; the main shooting component is a rear camera of the main smart phone; the control device is A gamepad, or a wearable device that can be worn at the hand or foot, or a sensor and control device that is attached to the MR headset, or an auxiliary phone that is held by the user or tied to the limb.
  • the virtual image path of the MR wearing mechanism includes a resting plate 5, a total reflection mirror 2, and a field lens 3; the field lens 3 is assembled from two Fresnel lenses 101; the main smartphone 106 is placed horizontally on the shelf 5; when the MR headset is working, the main smartphone plays the image of the VR split-screen mode in the horizontal double split screen, and the image light of the two split screens is reflected by the total reflection mirror to the two Fresnel lenses.
  • the two Fresnel lenses refract the image light of the two split screens so that it forms two virtual image beams with a preset angle of view; the virtual image light is reflected by the virtual image introduction surface to the observation end; light from the real environment is transmitted through the real image introduction surface to the observation end, where it mixes with the virtual image light to form a mixed reality image.
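The claims state that the Fresnel lenses give the split-screen image a preset angle of view. As a rough illustration only, a thin-lens estimate of that field of view can be computed from a half-screen width and a Fresnel focal length; the 65 mm and 45 mm figures below are assumed example values, not dimensions from the patent:

```python
import math

def apparent_fov_deg(half_screen_width_mm, focal_length_mm):
    """Rough thin-lens estimate of the angular field of view that one
    Fresnel lens gives to one half of the phone's split screen:
    fov = 2 * atan(w / (2 f)). Illustrative only."""
    return math.degrees(
        2 * math.atan(half_screen_width_mm / (2 * focal_length_mm)))

# e.g. a 65 mm half-screen viewed through a 45 mm focal-length Fresnel lens
print(round(apparent_fov_deg(65, 45), 1))  # roughly 72 degrees
```

This is why short-focal-length Fresnel lenses are the usual choice in phone-based headsets: they yield a wide apparent field of view from a screen only a few centimetres from the eye.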
  • the orientation of the rear camera of the main smart phone is the orientation of the MR wearing mechanism;
  • the posture data of the display component is the posture data of the main smart phone 106;
  • the IMU component of the main smart phone collects the main smart phone's attitude data; when the MR headwear is working, the rear camera of the main smart phone collects feature points of the real scene at the initial orientation of the MR wearing mechanism, and continuously collects images as a posture map;
  • the MR calculation module adjusts the image on the double split screen according to the changes of the feature points in the posture map and the changes of the posture data of the main smartphone.
  • the image played in the horizontal double split screen includes a virtual character 108 and a control identifier 107; the MR calculation module generates the control identifier according to the posture data and position data of the control device uploaded by the auxiliary IMU component;
  • the control identifier moves as the control device moves; the virtual character can interact with the control identifier.
  • the master smart phone and the external device are connected through a network; the virtual character and the control identifier included in the image played in the horizontal double split screen are part of the mixed reality image; the virtual character corresponds to the external device, and when the virtual character interacts with the control identifier, the external device performs the corresponding job based on the interaction content.
  • the imaging method sequentially includes the following steps:
  • the user fixes the main smart phone 106 pre-installed with the MR application APP to the shelf of the MR wearing mechanism, and holds the auxiliary mobile phone, which is also a smart phone and pre-installed with the MR application APP;
  • the user wears the MR wearing mechanism and brings the eyes close to the observation end to observe the mixed reality image;
  • the MR application APP of the main smart phone is activated and set as the display end; the main smart phone plays the image in the horizontal double split screen; the image light of the two split screens is reflected by the total reflection mirror to the two Fresnel lenses, which refract it so that it forms two virtual image beams with a preset angle of view; the virtual image light is reflected by the virtual image introduction surface to the observation end; light from the real environment is transmitted through the real image introduction surface to the observation end, where it mixes with the virtual image light to form a mixed reality image;
  • the rear camera of the main smart phone collects a feature point at a real scene at an initial orientation of the MR wearing mechanism, and continuously collects an image as a posture map when the MR wearing mechanism works; the MR calculation module according to the feature point Adjusting the image on the double-split screen by the change in the attitude map and the change of the attitude data of the main smartphone;
  • the user lifts the auxiliary mobile phone to a specific point in the mixed reality image, activates the MR application APP on the auxiliary mobile phone and sets it as the control end; the auxiliary IMU component on the auxiliary mobile phone collects the posture data and position data of the auxiliary mobile phone;
  • the control end and the display end are connected wirelessly, and the control end uploads the posture data and control data of the auxiliary mobile phone to the display end;
  • the MR calculation module generates a control identifier in the mixed reality image according to the posture data and position data of the auxiliary mobile phone; the control identifier moves with the movement of the auxiliary mobile phone; when the control identifier in the mixed reality image touches or is adjacent to the virtual character, the virtual character interacts with the control identifier;
  • the virtual character corresponds to an external device.
  • the external device performs a corresponding job according to the interactive content.
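The control-end/display-end flow of the final steps above can be sketched as follows. Class and method names, and the 0.2 m interaction radius, are assumptions made for illustration, not details from the patent:

```python
import math

class DisplayEnd:
    """Minimal sketch of the last imaging steps: the display end (the
    main phone) receives the auxiliary phone's pose, places the control
    identifier there, and when the identifier comes near a virtual
    character it triggers the job of the character's external device."""

    def __init__(self, interaction_radius=0.2):
        self.radius = interaction_radius
        self.characters = {}  # name -> (position, job callback)

    def register_character(self, name, position, on_interact):
        """Associate a virtual character with a position and an external
        device job (e.g. a printer's print command)."""
        self.characters[name] = (position, on_interact)

    def on_control_pose(self, control_position):
        """The control identifier follows the auxiliary phone's pose;
        when it touches or is adjacent to a character, the external
        device performs the corresponding job."""
        triggered = []
        for name, (pos, job) in self.characters.items():
            if math.dist(control_position, pos) <= self.radius:
                job()  # external device performs the job
                triggered.append(name)
        return triggered
```

In use, registering a printer character with a callback that talks to the real printer makes "icon touches character" equivalent to issuing the print command.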
  • the main smart phone and the auxiliary mobile phone generate and share unified spatial positioning data by a monocular visual-inertial odometry method, which is divided into the following steps:
  • the main smart phone and the auxiliary mobile phone collect images through their cameras to generate their respective posture maps; they collect their respective posture data through their built-in IMU components, and each associates its posture map with its posture data to form its own spatial image association data; the two phones pool these data over a network connection and generate a unified spatial image association database in both phones;
  • the main smart phone and the auxiliary mobile phone continue to collect the attitude map and the attitude data during the movement, and add the newly acquired posture map and posture data into the spatial image association database and associate them;
  • the main smart phone and the auxiliary mobile phone, while moving, compare the currently collected posture map and posture data against the spatial image association database to obtain each phone's specific orientation in the current space and to predict the phone's trajectory and posture changes;
  • the main smart phone and the auxiliary mobile phone, while moving, read the spatial image association database and compare the currently acquired posture map with the posture maps and posture data collected at the same coordinates and the same posture within the past N time frames, updating the spatial image association database when differences are found;
  • in the comparison and re-check steps above, the main smart phone and the auxiliary mobile phone compare and verify data against preset tolerance thresholds to improve the efficiency and robustness of spatial positioning.
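The odometry steps above can be sketched as a shared pose-keyed database with tolerance-gated updates. The flat quantized-cell structure and the feature tuples below are assumptions made to keep the sketch short; a real visual-inertial odometry system would use keyframes and descriptor matching:

```python
class SpatialImageDatabase:
    """Sketch of the shared spatial image association database: both
    phones associate posture maps (feature snapshots) with IMU posture
    data, localize by looking up the database, and update an entry when
    a re-observation differs beyond a preset tolerance threshold."""

    def __init__(self, tolerance=0.1):
        self.tolerance = tolerance
        self.entries = {}  # coarse pose cell -> feature descriptor tuple

    @staticmethod
    def _cell(pose, size=0.5):
        # quantize (x, y, yaw) so nearby poses share a database cell
        return tuple(round(v / size) for v in pose)

    def add(self, pose, features):
        """Associate a posture map with posture data and store it."""
        self.entries[self._cell(pose)] = features

    def localize(self, pose, features):
        """Compare the current observation against the database; True
        means this region of space has been mapped (features unused in
        this toy lookup, kept to mirror the described comparison)."""
        return self._cell(pose) in self.entries

    def check_and_update(self, pose, features):
        """Re-check a mapped cell; update only past the tolerance."""
        cell = self._cell(pose)
        old = self.entries.get(cell)
        if old is None:
            self.entries[cell] = features
            return "added"
        drift = max(abs(a - b) for a, b in zip(old, features))
        if drift > self.tolerance:
            self.entries[cell] = features
            return "updated"
        return "unchanged"
```

The tolerance threshold is what keeps sensor noise from churning the database while still catching genuine scene changes, which is the efficiency/robustness trade-off the method describes.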
  • the MR wearing mechanism is made of a single sheet of material provided with an A folding section 104, a B folding section 105 and a C folding section 103 along its length; the A folding section 104 is fixed with the half mirror 4 and the field lens 3;
  • the B folding section 105 is fixed with a total reflection mirror 2; the C folding section 103 is provided with a resting board 5; and the resting board 5 is provided with an imaging hole through which the rear camera of the main smart phone 106 captures the external image.
  • the preparation method of the MR wearing mechanism sequentially includes the following steps:
  • the A folding section and the B folding section are folded to form a diamond-shaped column;
  • the lenses lie on a line connecting the vertices of the diamond; of the four side faces of the diamond-shaped column, one is left open as the image-light entrance face, and the other three are closed to form the observation-hole wall, the half-mirror wall and the total-reflection-mirror wall;
  • the image-light entrance face faces the total-reflection-mirror wall;
  • the total-reflection-mirror wall is provided with a total reflection mirror;
  • the observation hole is located in the observation-hole wall;
  • the side wall of the diamond column that the observation hole faces is the half-mirror wall;
  • the half mirror is disposed at the half-mirror wall;
  • the observation end includes an observation hole 102.
  • when the main smart phone plays the VR split-screen-mode image in the horizontal double split screen, the mixed reality image formed by mixing the phone-screen image with the external image can be seen at the observation hole.
  • the bottom of the shelving plate 5 is provided with a damping member 7; the shelving plate is either detachably coupled to the casing via hook-and-loop fasteners or snap fasteners, or fixedly coupled to the casing.
  • the MR calculation module is a main smart phone; the display component is the display module of the main smart phone; the IMU component includes a magnetometer, a gyroscope and an accelerometer; the IMU component includes a main IMU component and zero or more auxiliary IMU components; the main IMU component collects posture data of the display component and is disposed at the main smart phone; the auxiliary IMU components are disposed at one or more control devices wirelessly connected to the main smart phone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude angle, angular rate or acceleration data; the shooting component includes a main shooting component and an optional auxiliary shooting component; the main shooting component is the rear camera of the main smart phone, and the auxiliary shooting component is a camera at the control device.
  • the MR optical path module is a passive MR wearing mechanism; the main smart phone is fixed at the MR wearing mechanism; the main shooting component is a rear camera of the main smart phone; and the control device is a game handle. Or a wearable device that can be worn at the hand or foot, or a sensor and control device that is attached to the MR headset, or an auxiliary phone that is held by the user or attached to the limb.
  • the virtual image path of the MR wearing mechanism comprises a resting plate, a total reflection mirror and a field lens; the field lens is assembled from two Fresnel lenses; the main smart phone is placed horizontally on the shelf; when the headset is working, the main smartphone plays the VR split-screen-mode image in the horizontal double split screen;
  • the image light of the two split screens is reflected by the total reflection mirror to the two Fresnel lenses, which refract it so that it forms two virtual image beams with a preset angle of view; the virtual image light is reflected by the virtual image introduction surface to the observation end; light from the real environment is transmitted through the real image introduction surface to the observation end;
  • the real ambient light mixes with the virtual image light to form a mixed reality image at the observation end.
  • the orientation of the rear camera of the main smart phone is the orientation of the MR wearing mechanism;
  • the posture data of the display component are the posture data of the main smart phone;
  • the IMU component of the main smart phone collects the posture data of the main smart phone; when the MR wearing mechanism is working, the rear camera of the main smart phone collects feature points of the real scene the MR wearing mechanism faces, and continuously collects images to form a feature-point posture map;
  • the MR calculation module calculates the spatial position of the main smart phone according to the changes of the feature points in the feature-point posture map and the changes of the posture data of the main smart phone, and adjusts the image on the double split screen accordingly.
  • the image played in the horizontal double split screen includes a virtual character and a control identifier; the MR calculation module derives the controller's spatial position from the posture data and position data uploaded by the auxiliary IMU component or the auxiliary shooting component on the control device and forms a control identifier; the control identifier moves with the movement of the control device; the virtual character can interact with the control identifier.
  • the master smart phone and the external device are connected through a network; the virtual character and the control identifier included in the image played in the horizontal double split screen are part of the mixed reality image; the virtual character corresponds to the external device, and when the virtual character interacts with the control identifier, the external device performs the corresponding job based on the interaction content.
  • the imaging method sequentially includes the following steps:
  • the user fixes the main smart phone pre-installed with the MR application APP to the shelf of the MR wearing mechanism, and holds the control device, which may be a smart phone and pre-installed with the MR application APP;
  • the user wears the MR wearing mechanism and brings the eyes close to the observation end to observe the mixed reality image;
  • the MR application APP of the main smart phone is activated and set as the display end; the main smart phone plays the image in the horizontal double split screen; the image light of the two split screens is reflected by the total reflection mirror to the two Fresnel lenses, which refract it so that it forms two virtual image beams with a preset angle of view; the virtual image light is reflected by the virtual image introduction surface to the observation end; light from the real environment is transmitted through the real image introduction surface to the observation end, where it mixes with the virtual image light to form a mixed reality image;
  • the rear camera of the main smart phone collects feature points of the real scene at the initial orientation of the MR wearing mechanism, and continuously collects images to form a feature-point posture map while the MR wearing mechanism is working;
  • the MR calculation module calculates the spatial position of the main smart phone according to the changes of the feature points in the posture map and the changes of the posture data of the main smart phone, and uses this information to adjust the image on the double split screen;
  • the user lifts the control device to a specific point in the mixed reality image;
  • if the control device is a smart phone, the MR application APP on it is activated and set as the control end;
  • the auxiliary IMU component on the control device collects the attitude data and position data of the device; the control end and the display end are connected wirelessly, and the control end uploads the attitude data and control data of the control device to the display end;
  • the MR calculation module generates a control identifier in the mixed reality image according to the posture data and position data of the control device; the control identifier moves with the movement of the control device; when the control identifier in the mixed reality image touches or is adjacent to the virtual character, the virtual character interacts with the control identifier;
  • the virtual character corresponds to an external device.
  • the external device performs a corresponding job according to the interactive content.
  • the MR wearing mechanism is made of a piece of material, the sheet is provided with an A-folded section, a B-folded section and a C-folded section along the length direction; the A-folded section is fixed with a half mirror and a field lens.
  • the B folding section is fixed with a total reflection mirror; the C folding section is provided with a resting board; and the resting board is provided with an imaging hole through which the rear camera collects an external image.
  • the preparation method of the MR wearing mechanism sequentially includes the following steps:
  • the A folding section and the B folding section are folded to form a diamond-shaped column;
  • the lenses lie on a line connecting the vertices of the diamond; of the four side faces of the diamond-shaped column, one is left open as the image-light entrance face, and the other three are closed to form the observation-hole wall, the half-mirror wall and the total-reflection-mirror wall;
  • the image-light entrance face faces the total-reflection-mirror wall;
  • the total-reflection-mirror wall is provided with a total reflection mirror;
  • the observation hole is located in the observation-hole wall;
  • the side wall of the diamond column that the observation hole faces is the half-mirror wall;
  • the half mirror is disposed at the half-mirror wall;
  • the observation end includes an observation hole.
  • when the main smart phone plays the VR split-screen-mode image in the horizontal double split screen, the mixed reality image formed by mixing the phone-screen image with the external image can be seen at the observation hole.
  • the main smart phone needs to set a unified coordinate-system origin in the unified spatial positioning data shared by the main smart phone and the auxiliary mobile phone.
  • one approach is to have the main smart phone and the auxiliary mobile phone, when the device is first used, perform initial posture-map acquisition on the same target in the same posture, identifying and marking a feature point in the initial posture map as the coordinate-system origin.
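The shared-origin idea can be sketched as translating each phone's locally tracked coordinates into the frame of the jointly observed feature point. This is a translation-only illustration; a real system would also need to align rotation between the two phones' frames:

```python
def to_shared_frame(local_pose, local_origin):
    """Express a phone's locally tracked position in the shared frame
    whose origin is the jointly observed feature point (translation
    only; rotational alignment is omitted in this sketch)."""
    return tuple(p - o for p, o in zip(local_pose, local_origin))

# Each phone saw the same marker at a different point of its own frame
# (the coordinates below are made-up example values):
main_origin = (0.0, 0.0, 0.0)   # main phone started right at the marker
aux_origin = (2.0, -1.0, 0.0)   # aux phone saw the marker 2 m ahead

# The same physical spot now maps to the same shared coordinates:
print(to_shared_frame((1.0, 0.5, 0.0), main_origin))
print(to_shared_frame((3.0, -0.5, 0.0), aux_origin))
```

Once both phones subtract their own view of the marker, a control identifier generated from the auxiliary phone's pose lands at the right place in the main phone's mixed reality image.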
  • the MR application APP is installed on the user's main smartphone and auxiliary mobile phone; the main smart phone is fixed on the shelf of the MR wearing mechanism, and the auxiliary mobile phone is held in the hand.
  • the handheld auxiliary mobile phone is used as a handle and is virtually displayed as a file control icon in the mixed reality image.
  • the user directly reviews the paper manuscript in the real environment through the observation hole 102 and the half mirror 4; the paper manuscript is associated with the file control icon, which represents the computer file corresponding to the paper manuscript.
  • when the user, reviewing the manuscript in the mixed reality image, decides it can be officially printed, the user swings the hand-held auxiliary phone and throws the file control icon onto the printer's virtual character in the mixed reality image; since the printer character has been associated with a real printer, the interaction between the file control icon and the printer's virtual character is equivalent to the user formally issuing a print command, so the real printer in the office environment directly executes the file print job.
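The swing-to-throw gesture in this scenario could be recognized from the auxiliary phone's accelerometer stream. The sketch below is one plausible approach, not the patent's method; the 15 m/s² threshold and the 0.7 direction-cosine cut-off are assumed values:

```python
import math

def detect_throw(accel_samples, toward, threshold=15.0):
    """Recognize a 'throw the file icon' swing: the peak acceleration
    sample must exceed a threshold, and its direction must roughly
    match the direction from the user toward the target virtual
    character (e.g. the printer). Threshold values are illustrative."""
    best = max(accel_samples, key=lambda a: math.hypot(*a))
    magnitude = math.hypot(*best)
    if magnitude < threshold:
        return False  # swing too weak to count as a throw
    dot = sum(b * t for b, t in zip(best, toward))
    # direction cosine between the swing and the target direction
    return dot / (magnitude * math.hypot(*toward)) > 0.7
```

Gating on both magnitude and direction is what separates an intentional throw toward the printer character from ordinary hand motion while holding the phone.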


Abstract

A modular MR device imaging method, in which the MR device comprises an MR calculation module (a), an MR optical path module (1) and an MR attitude module. The MR calculation module includes a display component (109); the MR calculation module is connected to the MR attitude module and adjusts the displayed content according to the data collected by the MR attitude module. The MR optical path module includes a virtual-image optical path and a mixing optical path; the virtual-image optical path adjoins the display component; the mixing optical path is provided with a half mirror (4), one face of which is a real-image introduction surface and the other a virtual-image introduction surface. The real-image introduction surface faces the real environment; the virtual-image introduction surface faces the virtual-image optical path. The content shown by the display component is processed and transmitted through the virtual-image optical path to form a virtual image, whose light is reflected by the virtual-image introduction surface to the observation end (601); light from the real environment is transmitted through the real-image introduction surface to the observation end and mixes with the virtual image to form a mixed reality image. Such an MR device is inexpensive and allows rich MR interaction effects to be defined flexibly.

Description

A modular MR device imaging method — Technical Field
The present invention relates to the field of display devices, and in particular to a modular MR device imaging method.
Background Art
An MR device is a display device that superimposes virtual images on a real-environment background. In the past, such devices were usually built from high-performance computing equipment combined with complex optical paths, which made them expensive and inconvenient to use. Today's portable VR glasses instead use a commodity smart phone as the display core, greatly reducing the cost of forming virtual reality images and making them convenient to use.
Technical Problem
The technical problem to be solved by the present invention is how to apply the display principle of portable VR glasses to an MR device so as to greatly reduce the cost of MR devices.
Technical Solution
The present invention proposes a modular MR device imaging method, which allows inexpensive MR devices to be produced and rich MR interaction effects to be defined flexibly.
The present invention adopts the following technical scheme.
A modular MR device imaging method, wherein the MR device comprises an MR calculation module, an MR optical path module and an MR attitude module; the MR calculation module includes a display component; the MR attitude module includes a shooting component and an IMU component; the shooting component collects images in a preset angular direction of the display component; the IMU component collects the attitude data of the MR device; the MR calculation module is connected to the MR attitude module and adjusts the content shown by the display component according to the image data and attitude data collected by the MR attitude module.
The MR optical path module includes a virtual-image optical path and a mixing optical path; the virtual-image optical path adjoins the display component; the input end of the mixing optical path adjoins the virtual-image optical path and its output end is the observation end; a half mirror is arranged in the mixing optical path; one face of the half mirror is a real-image introduction surface and the other a virtual-image introduction surface; the real-image introduction surface faces the real environment; the virtual-image introduction surface faces the virtual-image optical path; the content shown by the display component is processed and transmitted through the virtual-image optical path to form a virtual image, whose light is reflected by the virtual-image introduction surface to the observation end; light from the real environment is transmitted through the real-image introduction surface to the observation end and mixes with the virtual image to form a mixed reality image.
The MR calculation module is a main smart phone; the display component is the display module of the main smart phone; the IMU component includes a magnetometer, a gyroscope and an accelerometer; the IMU component comprises a main IMU component and auxiliary IMU components; the main IMU component collects the attitude data of the display component and is located in the main smart phone; the auxiliary IMU components are located in one or more control devices wirelessly connected to the main smart phone; the auxiliary IMU components collect the attitude data or position data of the control devices; the attitude data include attitude angle, angular rate or acceleration data; the shooting component comprises a main shooting component and auxiliary shooting components; the main shooting component is the rear camera of the main smart phone, and the auxiliary shooting components are cameras on the control devices.
The MR optical path module is a passive MR head-mounted mechanism; the main smart phone is fixed to the MR head-mounted mechanism; the main shooting component is the rear camera of the main smart phone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary mobile phone held by the user or strapped to a limb.
The virtual-image optical path of the MR head-mounted mechanism includes a resting board, a total reflection mirror and a field lens; the field lens is assembled from two Fresnel lenses; the main smart phone lies horizontally on the resting board; when the MR head-mounted mechanism is working, the main smart phone plays VR split-screen video in horizontal dual-split-screen form; the light of the two half-screens is reflected by the total reflection mirror to the two Fresnel lenses, which refract it so that it forms two virtual-image beams with a preset field of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end; light from the real environment is transmitted through the real-image introduction surface to the observation end, where it mixes with the virtual-image light to form a mixed reality image.
The orientation of the rear camera of the main smart phone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are those of the main smart phone; the IMU component of the main smart phone collects its attitude data; when the MR head-mounted mechanism is working, the rear camera of the main smart phone collects feature points of the real scene at the initial orientation of the mechanism and continuously collects images as a posture map; the MR calculation module adjusts the image on the dual split screen according to the changes of the feature points in the posture map and the changes of the attitude data of the main smart phone.
The video played in horizontal dual-split-screen form includes a virtual character and a control identifier; the MR calculation module generates the control identifier from the attitude data and position data of the control device uploaded by the auxiliary IMU component; the control identifier moves with the control device; the virtual character can interact with the control identifier.
The main smart phone and external devices are connected through a network; the virtual character and control identifier included in the dual-split-screen video are part of the mixed reality image; the virtual character corresponds to an external device, and when the virtual character interacts with the control identifier, the external device performs the corresponding job according to the content of the interaction.
The imaging method comprises the following steps in sequence:
A1. The user fixes the main smart phone, pre-installed with the MR application APP, to the resting board of the MR head-mounted mechanism and holds the auxiliary mobile phone, which is also a smart phone pre-installed with the MR application APP;
A2. The user wears the MR head-mounted mechanism and brings both eyes close to the observation end to observe the mixed reality image;
A3. The MR application APP on the main smart phone is started and set as the display end; the main smart phone plays video in horizontal dual-split-screen form; the light of the two half-screens is reflected by the total reflection mirror to the two Fresnel lenses, which refract it so that it forms two virtual-image beams with a preset field of view; the virtual-image light is reflected by the virtual-image introduction surface to the observation end; light from the real environment is transmitted through the real-image introduction surface to the observation end, where it mixes with the virtual-image light to form a mixed reality image;
A4. The rear camera of the main smart phone collects feature points of the real scene at the initial orientation of the MR head-mounted mechanism and continuously collects images as a posture map while the mechanism is working; the MR calculation module adjusts the image on the dual split screen according to the changes of the feature points in the posture map and the changes of the attitude data of the main smart phone;
A5. The user lifts the auxiliary mobile phone to a specific point of the mixed reality image, starts the MR application APP on the auxiliary mobile phone and sets it as the control end; the auxiliary IMU component on the auxiliary mobile phone collects the phone's attitude data and position data; the control end and the display end are connected wirelessly, and the control end uploads the auxiliary mobile phone's attitude data and control data to the display end;
A6. The MR calculation module generates a control identifier in the mixed reality image according to the attitude data and position data of the auxiliary mobile phone; the control identifier moves with the auxiliary mobile phone; when the control identifier touches or is adjacent to the virtual character in the mixed reality image, the virtual character interacts with the control identifier;
A7. The virtual character corresponds to an external device; when the virtual character interacts with the control identifier, the external device performs the corresponding job according to the content of the interaction.
The main smartphone and the auxiliary phone generate and share unified spatial positioning data by a monocular visual-inertial odometry method, which comprises the following steps:
B1. The main smartphone and the auxiliary phone each capture images with their cameras to generate their own attitude maps, collect their own attitude data with their built-in IMU components, and associate their attitude maps with their attitude data to form their own spatial-image association data; the two phones aggregate their spatial-image association data over a network connection and build a unified spatial-image association database on both phones;
B2. While moving, the main smartphone and the auxiliary phone keep collecting attitude maps and attitude data and add the newly collected attitude maps and attitude data to the spatial-image association database with the corresponding associations;
B3. While moving, the main smartphone and the auxiliary phone compare their currently collected attitude maps and attitude data against the spatial-image association database to obtain their exact bearings in the current space and to predict their trajectories and attitude changes;
B4. While moving, the main smartphone and the auxiliary phone read the spatial-image association database and compare the currently collected attitude maps with the attitude maps and attitude data collected at the same coordinates and attitude within the past N time frames, updating the database when differences are found;
B5. In steps B3 and B4, the main smartphone and the auxiliary phone compare and verify data against preset tolerance thresholds, improving the efficiency and robustness of spatial positioning.
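Steps B1 to B5 describe building a pose-indexed database of attitude maps, matching against it with a tolerance threshold, and refreshing stale entries. The toy model below compresses each attitude map to a single scalar descriptor and quantizes poses into grid cells to form keys; both simplifications are assumptions, since the source does not define the matching representation.

```python
def make_key(position, attitude, cell=0.25):
    """Quantize a pose into a grid cell so nearby samples share a key (assumed)."""
    return tuple(round(v / cell) for v in (*position, *attitude))

class SpatialImageDB:
    """Toy model of the shared spatial-image association database (steps B1-B5)."""

    def __init__(self, tolerance=10.0):
        self.entries = {}           # pose key -> attitude-map descriptor
        self.tolerance = tolerance  # preset tolerance threshold (step B5)

    def add(self, position, attitude, descriptor):
        # B1/B2: associate newly collected attitude maps with pose data.
        self.entries.setdefault(make_key(position, attitude), descriptor)

    def locate(self, descriptor):
        # B3: compare the current attitude map against the database.
        best = min(self.entries.items(),
                   key=lambda kv: abs(kv[1] - descriptor), default=None)
        if best is not None and abs(best[1] - descriptor) <= self.tolerance:
            return best[0]
        return None

    def refresh(self, position, attitude, descriptor):
        # B4: update an entry when the newly observed map differs too much.
        key = make_key(position, attitude)
        if key in self.entries and abs(self.entries[key] - descriptor) > self.tolerance:
            self.entries[key] = descriptor
```

In a real system each phone would run this loop locally and synchronize database changes over the network connection described in step B1.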
The MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A, folding section B, and folding section C; a half-mirror and the field lens are fixed to folding section A; the total-reflection mirror is fixed to folding section B; the resting plate is provided at folding section C; the resting plate has a camera hole through which the rear camera of the main smartphone captures images of the outside world.
The method for making the MR head-mounted mechanism comprises the following steps in sequence:
B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
B2. The light shield at folding section A is unfolded and inserted into the viewing-hole wall;
B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
A damping member is provided at the bottom of the resting plate; the resting plate is either detachably connected to the housing by hook-and-loop fastener or snap fit, or fixedly connected to the housing.
The MR computing module is a main smartphone; the display component is the display module of the main smartphone; the IMU components include a magnetometer, a gyroscope, and an accelerometer; the IMU components comprise a main IMU component and zero or more auxiliary IMU components; the main IMU component collects attitude data of the display component; the main IMU component is provided in the main smartphone; the auxiliary IMU components are provided in one or more control devices wirelessly connected to the main smartphone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude-angle, angular-rate, or acceleration data; the camera components comprise a main camera component and auxiliary camera components, the main camera component being the rear camera of the main smartphone and the auxiliary camera components being cameras on the control devices; the auxiliary camera components are optional.
The MR optical path module is a passive MR head-mounted mechanism; the main smartphone is fixed to the MR head-mounted mechanism; the main camera component is the rear camera of the main smartphone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary phone held in the user's hand or strapped to a limb.
The virtual-image optical path of the MR head-mounted mechanism includes a resting plate, a total-reflection mirror, and a field lens; the field lens is formed by joining two Fresnel lenses; the main smartphone is placed horizontally on the resting plate; when the MR head-mounted mechanism operates, the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end.
The orientation of the rear camera of the main smartphone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are the attitude data of the main smartphone; the IMU component in the main smartphone collects the main smartphone's attitude data; when the MR head-mounted mechanism operates, the rear camera of the main smartphone captures feature points of the real scene in the orientation of the mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the feature-point attitude maps and the changes in the main smartphone's attitude data, and adjusts the video images on the dual split screens accordingly.
The video played in landscape dual-split-screen form includes a virtual character and a control marker; the MR computing module derives the controller's spatial position from the attitude and position data uploaded by the auxiliary IMU component or auxiliary camera component on the control device and forms the control marker from it; the control marker moves as the control device moves; the virtual character can interact with the control marker.
The main smartphone is connected to an external device via a network; the virtual character and control marker included in the video played in landscape dual-split-screen form are part of the mixed-reality image; the virtual character corresponds to the external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The imaging method comprises the following steps in sequence:
A1. The user fixes the main smartphone, preinstalled with the MR application APP, onto the resting plate of the MR head-mounted mechanism and holds a control device, which may be a smartphone preinstalled with the MR application APP;
A2. The user wears the MR head-mounted mechanism and brings both eyes close to the viewing end to observe the mixed-reality image;
A3. The MR application APP on the main smartphone is started and set as the display end; the main smartphone plays video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end;
A4. The rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the MR head-mounted mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data, and uses this information to adjust the video images on the dual split screens;
A5. The user raises the control device to a specific point in the mixed-reality image; if the control device is a smartphone, the MR application APP on it is started and set as the control end; the auxiliary IMU component on the control device collects the control device's attitude and position data; the control end and the display end are connected wirelessly, and the control end uploads the control device's attitude data and control data to the display end;
A6. The MR computing module generates a control marker in the mixed-reality image from the auxiliary phone's attitude and position data, and the control marker moves as the auxiliary phone moves; when the control marker touches or is adjacent to the virtual character in the mixed-reality image, the virtual character interacts with the control marker;
A7. The virtual character corresponds to an external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A, folding section B, and folding section C; a half-mirror and the field lens are fixed to folding section A; the total-reflection mirror is fixed to folding section B; the resting plate is provided at folding section C; the resting plate has a camera hole through which the rear camera of the main smartphone captures images of the outside world.
The method for making the MR head-mounted mechanism comprises the following steps in sequence:
B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
B2. The light shield at folding section A is unfolded and inserted into the viewing-hole wall;
B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
Beneficial effects
The present invention adopts a modular design in which the optical path of the MR device is independent of the playback source. Any phone whose display capability falls within the compatibility range of the optical path of the MR head-mounted mechanism can serve the device, and the housing of the mechanism can carry no electronics at all, or only a few sensors matched to the smartphone, which greatly reduces the production cost of MR glasses.
In the present invention, because the virtual-image source of the MR device is a smartphone, the smartphone placed on the resting plate becomes part of the MR device in use, so the interactive capability of the MR device improves as the phone's hardware performance and software functions improve. Upgrading the MR device only requires upgrading the phone hardware, or merely the APP on the phone, without being tied to a fixed equipment vendor as traditional MR devices are, which makes upgrades more convenient and economical. For example, when the smartphone has a magnetometer, gyroscope, and accelerometer, and the APP used for MR display can work with these sensors, positioned display of the mixed-reality image is achieved, that is, the virtual image in the mixed-reality image can interact with the orientation and motion of the MR device; and as the capability and sampling performance of these IMU components improve, so do the functions of the MR device.
In the present invention, because the optical path portion of the MR device is simply folded from a single thin sheet, different materials can be chosen for the optical-path substrate to suit the market's different requirements on the price, strength, and other properties of MR glasses.
In the present invention, the resting plate has a camera hole through which the rear camera of the image player captures images of the outside world; the image player is a smartphone. In this product the smartphone camera points in the same direction as the real-world viewing direction of the MR device, which is the direction of the viewing hole. This design lets the virtual image of the MR device interact with the outside-world images captured by the phone camera: since the image captured by the camera is substantially the same as the real image visible through the viewing hole, once an interactive association is established between virtual and real images, the MR display APP on the phone can make the virtual image respond accordingly. For example, if the virtual image shown by the APP is a kettle and the viewing hole faces a fire source, the phone camera captures the fire image and passes it to the MR display APP, which automatically switches the virtual kettle to a boiling state.
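The kettle-and-fire behavior described here is a mapping from recognized scene content to the state of a virtual object. A deliberately tiny sketch, assuming the MR APP already produces scene labels from the rear-camera frames (the label and state names are invented for illustration):

```python
def virtual_object_state(scene_labels, current_state="idle"):
    """Switch the virtual kettle to 'boiling' when the rear camera's frame is
    recognized as containing a fire source; otherwise keep the current state."""
    return "boiling" if "fire" in scene_labels else current_state
```

Any scene-recognition backend could feed this rule; the point is only that real-world observations drive the virtual image's state.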
In the present invention, because the display-end smartphone and the control-end smartphone are independent devices cooperating over a wireless connection, the control scheme of this MR device is highly extensible: simply by installing the corresponding MR application APP on the control-end smartphone, the control-end phone can be virtually displayed in the mixed-reality image as all kinds of control markers, such as a game gun, a shield, a grab, a robotic hand, or a file-control icon.
In the present invention, because the display-end smartphone and the control-end smartphone or gamepad are independent devices cooperating over a wireless connection, the control scheme of this MR device is highly extensible: simply by installing the corresponding MR application APP on the control-end smartphone or gamepad, the control end can be virtually displayed in the mixed-reality image as all kinds of control markers, such as a game gun, a shield, a grab, a robotic hand, or a file-control icon.
Because the optical path module of the MR device of the present invention is simply folded from a single thin sheet, its cost is extremely low, while the computing and control device is a smartphone on which software can be installed independently, so the user can obtain every module of the MR device at very low cost and, by configuring the APP on the smartphone, combine the MR device fully with office or home scenarios, which greatly improves its practicality. For example, after putting on the MR head-mounted mechanism, the user has the handheld smartphone virtually displayed as a file-control icon, reviews a manuscript in the mixed-reality image and, once satisfied that it can be formally printed, simply throws the file-control icon into the printer virtual character in the mixed-reality image; since the printer character is associated with a real printer, the interaction between the file-control icon and the printer virtual character is equivalent to a print command, and the real printer directly executes the print job.
Brief description of the drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments:
Fig. 1 is a schematic view of the present invention;
Fig. 2 is a schematic view of the present invention from another direction;
Fig. 3 is a schematic view of the optical path of the present invention;
Fig. 4 is a schematic view of the structure of the lens of the present invention;
Fig. 5 is a schematic view of the folding process of the housing of the present invention;
Fig. 6 is a perspective view of the present invention;
Fig. 7 is another perspective view of the present invention;
Fig. 8 is a schematic view of the main smartphone of the present invention playing VR split-screen-mode video in landscape dual-split-screen form;
In the figures: 1 - MR optical path module; 2 - total-reflection mirror; 3 - field lens; 4 - half-mirror; 5 - resting plate; 6 - light shield; 601 - viewing end; 7 - damping member; a - MR computing module;
101 - Fresnel lens; 102 - viewing hole; 103 - folding section C; 104 - folding section A; 105 - folding section B; 106 - main smartphone; 107 - control marker; 108 - virtual character; 109 - display component.
Best mode for carrying out the invention
As shown in Figs. 1-8, in an imaging method for a modular MR device, the MR device includes an MR computing module a, an MR optical path module 1, and an MR attitude module; the MR computing module a includes a display component 109; the MR attitude module includes a camera component and IMU components; the camera component captures images in a preset angular direction of the display component; the IMU components collect attitude data of the MR device; the MR computing module is connected to the MR attitude module and adjusts the content shown on the display component according to the image data and attitude data collected by the MR attitude module.
The MR optical path module 1 includes a virtual-image optical path and a mixing optical path; the virtual-image optical path adjoins the display component; the input end of the mixing optical path adjoins the virtual-image optical path, and its output end is the viewing end; a half-mirror 4 is provided in the mixing optical path; one face of the half-mirror 4 is a real-image input face and the other is a virtual-image input face; the real-image input face faces the real environment; the virtual-image input face faces the virtual-image optical path; the content shown by the display component is processed and transmitted by the virtual-image optical path to form a virtual image, and the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end 601 and mixes with the virtual image to form a mixed-reality image.
The MR computing module a is a main smartphone 106; the display component is the display module of the main smartphone 106; the IMU components include a magnetometer, a gyroscope, and an accelerometer; the IMU components comprise a main IMU component and auxiliary IMU components; the main IMU component collects attitude data of the display component; the main IMU component is provided in the main smartphone; the auxiliary IMU components are provided in one or more control devices wirelessly connected to the main smartphone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude-angle, angular-rate, or acceleration data; the camera components comprise a main camera component and auxiliary camera components, the main camera component being the rear camera of the main smartphone and the auxiliary camera components being cameras on the control devices.
The MR optical path module 1 is a passive MR head-mounted mechanism; the main smartphone 106 is fixed to the MR head-mounted mechanism 1; the main camera component is the rear camera of the main smartphone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary phone held in the user's hand or strapped to a limb.
The virtual-image optical path of the MR head-mounted mechanism includes a resting plate 5, a total-reflection mirror 2, and a field lens 3; the field lens 3 is formed by joining two Fresnel lenses 101; the main smartphone 106 is placed horizontally on the resting plate 5; when the MR head-mounted mechanism operates, the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end.
The orientation of the rear camera of the main smartphone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are the attitude data of the main smartphone 106; the IMU component in the main smartphone collects the main smartphone's attitude data; when the MR head-mounted mechanism operates, the rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the mechanism and keeps capturing images as attitude maps while the mechanism operates; the MR computing module adjusts the video images on the dual split screens according to the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data.
The video played in landscape dual-split-screen form includes a virtual character 108 and a control marker 107; the MR computing module generates the control marker from the control device's attitude and position data uploaded by the auxiliary IMU component, and the control marker moves as the control device moves; the virtual character can interact with the control marker.
The main smartphone is connected to an external device via a network; the virtual character and control marker included in the video played in landscape dual-split-screen form are part of the mixed-reality image; the virtual character corresponds to the external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The imaging method comprises the following steps in sequence:
A1. The user fixes the main smartphone 106, preinstalled with the MR application APP, onto the resting plate of the MR head-mounted mechanism and holds an auxiliary phone, which is also a smartphone preinstalled with the MR application APP;
A2. The user wears the MR head-mounted mechanism and brings both eyes close to the viewing end to observe the mixed-reality image;
A3. The MR application APP on the main smartphone is started and set as the display end; the main smartphone plays video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end;
A4. The rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the MR head-mounted mechanism and keeps capturing images as attitude maps while the mechanism operates; the MR computing module adjusts the video images on the dual split screens according to the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data;
A5. The user raises the auxiliary phone to a specific point in the mixed-reality image, starts the MR application APP on the auxiliary phone, and sets it as the control end; the auxiliary IMU component in the auxiliary phone collects the auxiliary phone's attitude and position data; the control end and the display end are connected wirelessly, and the control end uploads the auxiliary phone's attitude data and control data to the display end;
A6. The MR computing module generates a control marker in the mixed-reality image from the auxiliary phone's attitude and position data, and the control marker moves as the auxiliary phone moves; when the control marker touches or is adjacent to the virtual character in the mixed-reality image, the virtual character interacts with the control marker;
A7. The virtual character corresponds to an external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The main smartphone and the auxiliary phone generate and share unified spatial positioning data by a monocular visual-inertial odometry method, which comprises the following steps:
B1. The main smartphone and the auxiliary phone each capture images with their cameras to generate their own attitude maps, collect their own attitude data with their built-in IMU components, and associate their attitude maps with their attitude data to form their own spatial-image association data; the two phones aggregate their spatial-image association data over a network connection and build a unified spatial-image association database on both phones;
B2. While moving, the main smartphone and the auxiliary phone keep collecting attitude maps and attitude data and add the newly collected attitude maps and attitude data to the spatial-image association database with the corresponding associations;
B3. While moving, the main smartphone and the auxiliary phone compare their currently collected attitude maps and attitude data against the spatial-image association database to obtain their exact bearings in the current space and to predict their trajectories and attitude changes;
B4. While moving, the main smartphone and the auxiliary phone read the spatial-image association database and compare the currently collected attitude maps with the attitude maps and attitude data collected at the same coordinates and attitude within the past N time frames, updating the database when differences are found;
B5. In steps B3 and B4, the main smartphone and the auxiliary phone compare and verify data against preset tolerance thresholds, improving the efficiency and robustness of spatial positioning.
The MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A 104, folding section B 105, and folding section C 103; a half-mirror 4 and the field lens 3 are fixed to folding section A 104; the total-reflection mirror 2 is fixed to folding section B 105; the resting plate 5 is provided at folding section C 103; the resting plate 5 has a camera hole through which the rear camera of the main smartphone 106 captures images of the outside world.
The method for making the MR head-mounted mechanism comprises the following steps in sequence:
B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
B2. The light shield 6 at folding section A is unfolded and inserted into the viewing-hole wall;
B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole 102, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
A damping member 7 is provided at the bottom of the resting plate 5; the resting plate is either detachably connected to the housing by hook-and-loop fastener or snap fit, or fixedly connected to the housing.
In another embodiment of the present invention, the MR computing module is a main smartphone; the display component is the display module of the main smartphone; the IMU components include a magnetometer, a gyroscope, and an accelerometer; the IMU components comprise a main IMU component and zero or more auxiliary IMU components; the main IMU component collects attitude data of the display component; the main IMU component is provided in the main smartphone; the auxiliary IMU components are provided in one or more control devices wirelessly connected to the main smartphone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude-angle, angular-rate, or acceleration data; the camera components comprise a main camera component and auxiliary camera components, the main camera component being the rear camera of the main smartphone and the auxiliary camera components being cameras on the control devices; the auxiliary camera components are optional.
The MR optical path module is a passive MR head-mounted mechanism; the main smartphone is fixed to the MR head-mounted mechanism; the main camera component is the rear camera of the main smartphone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary phone held in the user's hand or strapped to a limb.
The virtual-image optical path of the MR head-mounted mechanism includes a resting plate, a total-reflection mirror, and a field lens; the field lens is formed by joining two Fresnel lenses; the main smartphone is placed horizontally on the resting plate; when the MR head-mounted mechanism operates, the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end.
The orientation of the rear camera of the main smartphone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are the attitude data of the main smartphone; the IMU component in the main smartphone collects the main smartphone's attitude data; when the MR head-mounted mechanism operates, the rear camera of the main smartphone captures feature points of the real scene in the orientation of the mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the feature-point attitude maps and the changes in the main smartphone's attitude data, and adjusts the video images on the dual split screens accordingly.
The video played in landscape dual-split-screen form includes a virtual character and a control marker; the MR computing module derives the controller's spatial position from the attitude and position data uploaded by the auxiliary IMU component or auxiliary camera component on the control device and forms the control marker from it; the control marker moves as the control device moves; the virtual character can interact with the control marker.
The main smartphone is connected to an external device via a network; the virtual character and control marker included in the video played in landscape dual-split-screen form are part of the mixed-reality image; the virtual character corresponds to the external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The imaging method comprises the following steps in sequence:
A1. The user fixes the main smartphone, preinstalled with the MR application APP, onto the resting plate of the MR head-mounted mechanism and holds a control device, which may be a smartphone preinstalled with the MR application APP;
A2. The user wears the MR head-mounted mechanism and brings both eyes close to the viewing end to observe the mixed-reality image;
A3. The MR application APP on the main smartphone is started and set as the display end; the main smartphone plays video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end;
A4. The rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the MR head-mounted mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data, and uses this information to adjust the video images on the dual split screens;
A5. The user raises the control device to a specific point in the mixed-reality image; if the control device is a smartphone, the MR application APP on it is started and set as the control end; the auxiliary IMU component on the control device collects the control device's attitude and position data; the control end and the display end are connected wirelessly, and the control end uploads the control device's attitude data and control data to the display end;
A6. The MR computing module generates a control marker in the mixed-reality image from the auxiliary phone's attitude and position data, and the control marker moves as the auxiliary phone moves; when the control marker touches or is adjacent to the virtual character in the mixed-reality image, the virtual character interacts with the control marker;
A7. The virtual character corresponds to an external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
The MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A, folding section B, and folding section C; a half-mirror and the field lens are fixed to folding section A; the total-reflection mirror is fixed to folding section B; the resting plate is provided at folding section C; the resting plate has a camera hole through which the rear camera of the main smartphone captures images of the outside world.
The method for making the MR head-mounted mechanism comprises the following steps in sequence:
B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
B2. The light shield at folding section A is unfolded and inserted into the viewing-hole wall;
B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
In use of this example, if the user needs to set a unified coordinate-system origin within the unified spatial positioning data shared by the main smartphone and the auxiliary phone, one possible approach is, at initial use of the device, to have the main smartphone and the auxiliary phone collect initial attitude maps of the same target from the same attitude, so as to identify and mark a feature point in the initial attitude maps as the coordinate-system origin.
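The shared-origin step above can be illustrated numerically: once both phones have marked the same feature point in their own frames, subtracting that origin expresses every measurement in one common frame. This translation-only sketch ignores the rotational alignment between the two device frames, which a real system would also need to solve:

```python
import numpy as np

def to_shared_frame(point_in_device, origin_in_device):
    """Re-express a point relative to the jointly marked feature point."""
    return np.asarray(point_in_device, float) - np.asarray(origin_in_device, float)

# Both phones observe the same object from different frames; after
# subtracting the shared origin, their coordinates agree.
obj_main = to_shared_frame([2.0, 0.0, 0.0], [1.0, 0.0, 0.0])    # main smartphone
obj_aux = to_shared_frame([0.0, 2.0, 0.0], [-1.0, 2.0, 0.0])    # auxiliary phone
```

With rotations included, each phone would instead apply a full rigid transform estimated from the common target's appearance in its initial attitude map.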
Embodiment:
The user installs the MR application APP on the main smartphone and the auxiliary phone respectively, fixes the main smartphone on the resting plate of the MR head-mounted mechanism, and holds the auxiliary phone in hand.
After putting on the MR head-mounted mechanism, the user uses the handheld auxiliary phone as a control wand, and the auxiliary phone is virtually displayed in the mixed reality as a file-control icon.
The user reviews a paper manuscript in the real environment directly through the viewing hole 102 and the half-mirror 4 within the mixed-reality image; the paper manuscript is associated with the file-control icon, which represents the computer file corresponding to the manuscript.
When the user finishes reviewing the manuscript in the mixed-reality image and decides that it can be formally printed, the user swings the handheld auxiliary phone and throws the file-control icon straight into the printer virtual character in the mixed-reality image; since the printer character is associated with a real printer, the interaction between the file-control icon and the printer virtual character is equivalent to the user formally issuing a print command, and the real printer in the office directly executes the print job.

Claims (19)

  1. An imaging method for a modular MR device, characterized in that: the MR device comprises an MR computing module, an MR optical path module, and an MR attitude module; the MR computing module comprises a display component; the MR attitude module comprises a camera component and IMU components; the camera component collects images in a preset angular direction of the display component; the IMU components collect attitude data of the MR device; the MR computing module is connected to the MR attitude module and adjusts the content shown on the display component according to the image data and attitude data collected by the MR attitude module;
    the MR optical path module comprises a virtual-image optical path and a mixing optical path; the virtual-image optical path adjoins the display component; the input end of the mixing optical path adjoins the virtual-image optical path, and its output end is the viewing end; a half-mirror is provided in the mixing optical path; one face of the half-mirror is a real-image input face and the other is a virtual-image input face; the real-image input face faces the real environment; the virtual-image input face faces the virtual-image optical path; the content shown by the display component is processed and transmitted by the virtual-image optical path to form a virtual image, and the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end and mixes with the virtual image to form a mixed-reality image.
  2. The imaging method for a modular MR device according to claim 1, characterized in that: the MR computing module is a main smartphone; the display component is the display module of the main smartphone; the IMU components include a magnetometer, a gyroscope, and an accelerometer; the IMU components comprise a main IMU component and auxiliary IMU components; the main IMU component collects attitude data of the display component; the main IMU component is provided in the main smartphone; the auxiliary IMU components are provided in one or more control devices wirelessly connected to the main smartphone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude-angle, angular-rate, or acceleration data; the camera components comprise a main camera component and auxiliary camera components, the main camera component being the rear camera of the main smartphone and the auxiliary camera components being cameras on the control devices.
  3. The imaging method for a modular MR device according to claim 2, characterized in that: the MR optical path module is a passive MR head-mounted mechanism; the main smartphone is fixed to the MR head-mounted mechanism; the main camera component is the rear camera of the main smartphone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary phone held in the user's hand or strapped to a limb.
  4. The imaging method for a modular MR device according to claim 3, characterized in that: the virtual-image optical path of the MR head-mounted mechanism includes a resting plate, a total-reflection mirror, and a field lens; the field lens is formed by joining two Fresnel lenses; the main smartphone is placed horizontally on the resting plate; when the MR head-mounted mechanism operates, the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end.
  5. The imaging method for a modular MR device according to claim 4, characterized in that: the orientation of the rear camera of the main smartphone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are the attitude data of the main smartphone; the IMU component in the main smartphone collects the main smartphone's attitude data; when the MR head-mounted mechanism operates, the rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the mechanism and keeps capturing images as attitude maps while the mechanism operates; the MR computing module adjusts the video images on the dual split screens according to the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data.
  6. The imaging method for a modular MR device according to claim 5, characterized in that: the video played in landscape dual-split-screen form includes a virtual character and a control marker; the MR computing module generates the control marker from the control device's attitude and position data uploaded by the auxiliary IMU component, and the control marker moves as the control device moves; the virtual character can interact with the control marker.
  7. The imaging method for a modular MR device according to claim 6, characterized in that: the main smartphone is connected to an external device via a network; the virtual character and control marker included in the video played in landscape dual-split-screen form are part of the mixed-reality image; the virtual character corresponds to the external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
  8. The imaging method for a modular MR device according to claim 7, characterized in that the imaging method comprises the following steps in sequence:
    A1. The user fixes the main smartphone, preinstalled with the MR application APP, onto the resting plate of the MR head-mounted mechanism and holds an auxiliary phone, which is also a smartphone preinstalled with the MR application APP;
    A2. The user wears the MR head-mounted mechanism and brings both eyes close to the viewing end to observe the mixed-reality image;
    A3. The MR application APP on the main smartphone is started and set as the display end; the main smartphone plays video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end;
    A4. The rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the MR head-mounted mechanism and keeps capturing images as attitude maps while the mechanism operates; the MR computing module adjusts the video images on the dual split screens according to the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data;
    A5. The user raises the auxiliary phone to a specific point in the mixed-reality image, starts the MR application APP on the auxiliary phone, and sets it as the control end; the auxiliary IMU component in the auxiliary phone collects the auxiliary phone's attitude and position data; the control end and the display end are connected wirelessly, and the control end uploads the auxiliary phone's attitude data and control data to the display end;
    A6. The MR computing module generates a control marker in the mixed-reality image from the auxiliary phone's attitude and position data, and the control marker moves as the auxiliary phone moves; when the control marker touches or is adjacent to the virtual character in the mixed-reality image, the virtual character interacts with the control marker;
    A7. The virtual character corresponds to an external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
  9. The imaging method for a modular MR device according to claim 8, characterized in that: the main smartphone and the auxiliary phone generate and share unified spatial positioning data by a monocular visual-inertial odometry method, which comprises the following steps:
    B1. The main smartphone and the auxiliary phone each capture images with their cameras to generate their own attitude maps, collect their own attitude data with their built-in IMU components, and associate their attitude maps with their attitude data to form their own spatial-image association data; the two phones aggregate their spatial-image association data over a network connection and build a unified spatial-image association database on both phones;
    B2. While moving, the main smartphone and the auxiliary phone keep collecting attitude maps and attitude data and add the newly collected attitude maps and attitude data to the spatial-image association database with the corresponding associations;
    B3. While moving, the main smartphone and the auxiliary phone compare their currently collected attitude maps and attitude data against the spatial-image association database to obtain their exact bearings in the current space and to predict their trajectories and attitude changes;
    B4. While moving, the main smartphone and the auxiliary phone read the spatial-image association database and compare the currently collected attitude maps with the attitude maps and attitude data collected at the same coordinates and attitude within the past N time frames, updating the database when differences are found;
    B5. In steps B3 and B4, the main smartphone and the auxiliary phone compare and verify data against preset tolerance thresholds, improving the efficiency and robustness of spatial positioning.
  10. The imaging method for a modular MR device according to claim 5, characterized in that: the MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A, folding section B, and folding section C; a half-mirror and the field lens are fixed to folding section A; the total-reflection mirror is fixed to folding section B; the resting plate is provided at folding section C; the resting plate has a camera hole through which the rear camera of the main smartphone captures images of the outside world;
    the method for making the MR head-mounted mechanism comprises the following steps in sequence:
    B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
    B2. The light shield at folding section A is unfolded and inserted into the viewing-hole wall;
    B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
  11. The imaging method for a modular MR device according to claim 1, characterized in that: the MR computing module is a main smartphone; the display component is the display module of the main smartphone; the IMU components include a magnetometer, a gyroscope, and an accelerometer; the IMU components comprise a main IMU component and zero or more auxiliary IMU components; the main IMU component collects attitude data of the display component; the main IMU component is provided in the main smartphone; the auxiliary IMU components are provided in one or more control devices wirelessly connected to the main smartphone; the auxiliary IMU components collect attitude data or position data of the control devices; the attitude data include attitude-angle, angular-rate, or acceleration data; the camera components comprise a main camera component and auxiliary camera components, the main camera component being the rear camera of the main smartphone and the auxiliary camera components being cameras on the control devices; the auxiliary camera components are optional.
  12. The imaging method for a modular MR device according to claim 11, characterized in that: the MR optical path module is a passive MR head-mounted mechanism; the main smartphone is fixed to the MR head-mounted mechanism; the main camera component is the rear camera of the main smartphone; the control device is a gamepad, a wearable device worn on a hand or foot, a sensor-and-control unit fixed to the MR head-mounted mechanism, or an auxiliary phone held in the user's hand or strapped to a limb.
  13. The imaging method for a modular MR device according to claim 12, characterized in that: the virtual-image optical path of the MR head-mounted mechanism includes a resting plate, a total-reflection mirror, and a field lens; the field lens is formed by joining two Fresnel lenses; the main smartphone is placed horizontally on the resting plate; when the MR head-mounted mechanism operates, the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end.
  14. The imaging method for a modular MR device according to claim 13, characterized in that: the orientation of the rear camera of the main smartphone is the orientation of the MR head-mounted mechanism; the attitude data of the display component are the attitude data of the main smartphone; the IMU component in the main smartphone collects the main smartphone's attitude data; when the MR head-mounted mechanism operates, the rear camera of the main smartphone captures feature points of the real scene in the orientation of the mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the feature-point attitude maps and the changes in the main smartphone's attitude data, and adjusts the video images on the dual split screens accordingly.
  15. The imaging method for a modular MR device according to claim 14, characterized in that: the video played in landscape dual-split-screen form includes a virtual character and a control marker; the MR computing module derives the controller's spatial position from the attitude and position data uploaded by the auxiliary IMU component or auxiliary camera component on the control device and forms the control marker from it; the control marker moves as the control device moves; the virtual character can interact with the control marker.
  16. The imaging method for a modular MR device according to claim 15, characterized in that: the main smartphone is connected to an external device via a network; the virtual character and control marker included in the video played in landscape dual-split-screen form are part of the mixed-reality image; the virtual character corresponds to the external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
  17. The imaging method for a modular MR device according to claim 16, characterized in that the imaging method comprises the following steps in sequence:
    A1. The user fixes the main smartphone, preinstalled with the MR application APP, onto the resting plate of the MR head-mounted mechanism and holds a control device, which may be a smartphone preinstalled with the MR application APP;
    A2. The user wears the MR head-mounted mechanism and brings both eyes close to the viewing end to observe the mixed-reality image;
    A3. The MR application APP on the main smartphone is started and set as the display end; the main smartphone plays video in landscape dual-split-screen form, the light of the two split-screen images is reflected by the total-reflection mirror onto the two Fresnel lenses, and the two Fresnel lenses refract the light of the two split-screen images into two virtual-image beams with a preset field of view; the virtual-image light is reflected via the virtual-image input face to the viewing end; light from the real environment is transmitted through the real-image input face to the viewing end, and the real-environment light mixes with the virtual-image light to form a mixed-reality image at the viewing end;
    A4. The rear camera of the main smartphone captures feature points of the real scene in the initial orientation of the MR head-mounted mechanism and keeps capturing images to form feature-point attitude maps while the mechanism operates; the MR computing module computes the spatial position of the main smartphone from the changes of the feature points in the attitude maps and the changes in the main smartphone's attitude data, and uses this information to adjust the video images on the dual split screens;
    A5. The user raises the control device to a specific point in the mixed-reality image; if the control device is a smartphone, the MR application APP on it is started and set as the control end; the auxiliary IMU component on the control device collects the control device's attitude and position data; the control end and the display end are connected wirelessly, and the control end uploads the control device's attitude data and control data to the display end;
    A6. The MR computing module generates a control marker in the mixed-reality image from the auxiliary phone's attitude and position data, and the control marker moves as the auxiliary phone moves; when the control marker touches or is adjacent to the virtual character in the mixed-reality image, the virtual character interacts with the control marker;
    A7. The virtual character corresponds to an external device; when the virtual character interacts with the control marker, the external device performs the corresponding job according to the content of the interaction.
  18. The imaging method for a modular MR device according to claim 14, characterized in that: the MR head-mounted mechanism is made from a single sheet; along its length the sheet has folding section A, folding section B, and folding section C; a half-mirror and the field lens are fixed to folding section A; the total-reflection mirror is fixed to folding section B; the resting plate is provided at folding section C; the resting plate has a camera hole through which the rear camera of the main smartphone captures images of the outside world.
  19. The imaging method for a modular MR device according to claim 18, characterized in that the method for making the MR head-mounted mechanism comprises the following steps in sequence:
    B1. Folding sections A and B are folded together to form a rhombic prism, with the lens lying on the line joining rhombus vertices; of the four side faces of the prism, one is left open as the image-light entrance face, while the other three are closed to form the viewing-hole wall, the half-mirror wall, and the total-reflection-mirror wall; the image-light entrance face faces the total-reflection-mirror wall, on which the total-reflection mirror is mounted; the viewing hole is located in the viewing-hole wall; the prism side wall that the viewing hole faces is the half-mirror wall, on which the half-mirror is mounted;
    B2. The light shield at folding section A is unfolded and inserted into the viewing-hole wall;
    B3. Folding section C is unfolded, the main smartphone with its rear camera is placed on the resting plate with the rear camera aligned to the camera hole of the resting plate, and folding section C is then folded against the image-light entrance face of the rhombic prism; the viewing end includes the viewing hole, and when the main smartphone plays VR split-screen-mode video in landscape dual-split-screen form, the mixed-reality image formed by mixing the phone-screen image with the outside-world image can be seen at the viewing hole.
     
PCT/CN2018/089434 2017-06-02 2018-06-01 一种模块化mr设备成像方法 WO2018219336A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019546878A JP7212819B2 (ja) 2017-06-02 2018-06-01 モジュラー型mr装置のための撮像方法
CN201880002360.1A CN109313342B (zh) 2017-06-02 2018-06-01 一种模块化mr设备成像方法
US16/477,527 US11709360B2 (en) 2017-06-02 2018-06-01 Imaging method for modular mixed reality (MR) device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710406371.3 2017-06-02
CN201710406371.3A CN107065195B (zh) 2017-06-02 2017-06-02 一种模块化mr设备成像方法

Publications (1)

Publication Number Publication Date
WO2018219336A1 true WO2018219336A1 (zh) 2018-12-06

Family

ID=59617774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089434 WO2018219336A1 (zh) 2017-06-02 2018-06-01 一种模块化mr设备成像方法

Country Status (4)

Country Link
US (1) US11709360B2 (zh)
JP (1) JP7212819B2 (zh)
CN (2) CN107065195B (zh)
WO (1) WO2018219336A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112526757A (zh) * 2020-12-15 2021-03-19 闪耀现实(无锡)科技有限公司 头戴式设备及其增强现实光机模组

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
CN107065195B (zh) * 2017-06-02 2023-05-02 那家全息互动(深圳)有限公司 一种模块化mr设备成像方法
CN107390370B (zh) * 2017-09-21 2020-05-01 中新国际电子有限公司 一种vr头显
TWI679555B (zh) * 2017-10-12 2019-12-11 華碩電腦股份有限公司 擴增實境系統以及提供擴增實境之方法
CN108489482B (zh) * 2018-02-13 2019-02-26 视辰信息科技(上海)有限公司 视觉惯性里程计的实现方法及系统
US11422530B2 (en) * 2018-08-20 2022-08-23 Dell Products, L.P. Systems and methods for prototyping a virtual model
CN109814710B (zh) * 2018-12-27 2022-05-13 青岛小鸟看看科技有限公司 数据处理方法、装置及虚拟现实设备
CN110731882A (zh) * 2019-11-13 2020-01-31 常州大连理工大学智能装备研究院 一种视力恢复训练仪及其工作方法
CN111474723A (zh) * 2020-05-09 2020-07-31 Oppo广东移动通信有限公司 显示光学系统及头戴显示设备
WO2024010220A1 (ko) * 2022-07-06 2024-01-11 삼성전자 주식회사 거리 센서 활성화 방법 및 전자 장치

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104995583A (zh) * 2012-12-13 2015-10-21 微软技术许可有限责任公司 用于混合现实环境的直接交互系统
CN105144030A (zh) * 2013-02-27 2015-12-09 微软技术许可有限责任公司 混合现实增强
US20160216760A1 (en) * 2015-01-23 2016-07-28 Oculus Vr, Llc Headset with strain gauge expression recognition system
CN206115040U (zh) * 2016-10-31 2017-04-19 京东方科技集团股份有限公司 显示装置及可穿戴设备
CN106610527A (zh) * 2017-02-24 2017-05-03 关春东 一种近眼显示光学装置
CN107065195A (zh) * 2017-06-02 2017-08-18 福州光流科技有限公司 一种模块化mr设备成像方法

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
US20160267720A1 (en) * 2004-01-30 2016-09-15 Electronic Scripting Products, Inc. Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
US7513668B1 (en) * 2005-08-04 2009-04-07 Rockwell Collins, Inc. Illumination system for a head up display
JP4777182B2 (ja) * 2006-08-01 2011-09-21 キヤノン株式会社 複合現実感提示装置及びその制御方法、プログラム
US20120050144A1 (en) 2010-08-26 2012-03-01 Clayton Richard Morlock Wearable augmented reality computing apparatus
US9041622B2 (en) * 2012-06-12 2015-05-26 Microsoft Technology Licensing, Llc Controlling a virtual object with a real controller device
US20140146394A1 (en) * 2012-11-28 2014-05-29 Nigel David Tout Peripheral display for a near-eye display device
JP3183228U (ja) 2013-02-19 2013-05-09 博史 田原 映像観察装置
US10330931B2 (en) 2013-06-28 2019-06-25 Microsoft Technology Licensing, Llc Space carving based on human physical data
US9311718B2 (en) * 2014-01-23 2016-04-12 Microsoft Technology Licensing, Llc Automated content scrolling
JP2016081021A (ja) 2014-10-09 2016-05-16 礼二郎 堀 画像表示装置を着脱できるヘッドマウントディスプレイ
JP6492531B2 (ja) 2014-10-27 2019-04-03 セイコーエプソン株式会社 表示装置、及び、表示装置の制御方法
CN204203553U (zh) * 2014-10-31 2015-03-11 成都理想境界科技有限公司 头戴式显示装置
CN104483753A (zh) * 2014-12-04 2015-04-01 上海交通大学 自配准透射式头戴显示设备
CN204613516U (zh) * 2015-04-21 2015-09-02 杨振亚 头戴式手机类小屏幕多功能拓展装置
CN105068659A (zh) * 2015-09-01 2015-11-18 陈科枫 一种增强现实系统
KR20180104056A (ko) * 2016-01-22 2018-09-19 코닝 인코포레이티드 와이드 필드 개인 디스플레이
CN105929958B (zh) * 2016-04-26 2019-03-01 华为技术有限公司 一种手势识别方法,装置和头戴式可视设备
US10802147B2 (en) * 2016-05-18 2020-10-13 Google Llc System and method for concurrent odometry and mapping
EP3943888A1 (en) * 2016-08-04 2022-01-26 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
US10402663B1 (en) * 2016-08-29 2019-09-03 Trifo, Inc. Visual-inertial positional awareness for autonomous and non-autonomous mapping
US10859713B2 (en) * 2017-01-04 2020-12-08 Qualcomm Incorporated Position-window extension for GNSS and visual-inertial-odometry (VIO) fusion
US10347001B2 (en) * 2017-04-28 2019-07-09 8th Wall Inc. Localizing and mapping platform

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN104995583A (zh) * 2012-12-13 2015-10-21 微软技术许可有限责任公司 用于混合现实环境的直接交互系统
CN105144030A (zh) * 2013-02-27 2015-12-09 微软技术许可有限责任公司 混合现实增强
US20160216760A1 (en) * 2015-01-23 2016-07-28 Oculus Vr, Llc Headset with strain gauge expression recognition system
CN206115040U (zh) * 2016-10-31 2017-04-19 京东方科技集团股份有限公司 显示装置及可穿戴设备
CN106610527A (zh) * 2017-02-24 2017-05-03 关春东 一种近眼显示光学装置
CN107065195A (zh) * 2017-06-02 2017-08-18 福州光流科技有限公司 一种模块化mr设备成像方法

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN112526757A (zh) * 2020-12-15 2021-03-19 闪耀现实(无锡)科技有限公司 头戴式设备及其增强现实光机模组
CN112526757B (zh) * 2020-12-15 2023-04-07 闪耀现实(无锡)科技有限公司 头戴式设备及其增强现实光机模组

Also Published As

Publication number Publication date
CN109313342B (zh) 2021-05-11
CN107065195A (zh) 2017-08-18
CN107065195B (zh) 2023-05-02
JP2020521992A (ja) 2020-07-27
US20190361236A1 (en) 2019-11-28
US11709360B2 (en) 2023-07-25
CN109313342A (zh) 2019-02-05
JP7212819B2 (ja) 2023-01-26

Similar Documents

Publication Publication Date Title
WO2018219336A1 (zh) 一种模块化mr设备成像方法
KR102060453B1 (ko) 화상 표시 시스템, 화상 표시 시스템의 제어방법, 화상 전송 시스템 및 헤드 마운트 디스플레이
JP4869430B1 (ja) 画像処理プログラム、画像処理装置、画像処理システム、および、画像処理方法
JP5541974B2 (ja) 画像表示プログラム、装置、システムおよび方法
JP2022502800A (ja) 拡張現実のためのシステムおよび方法
JP5525923B2 (ja) 画像処理プログラム、画像処理装置、画像処理システム、および画像処理方法
JP6027747B2 (ja) 空間相関したマルチディスプレイヒューマンマシンインターフェース
CN107438804B (zh) 一种用于控制无人机的穿戴式设备及无人机系统
CN104995583A (zh) 用于混合现实环境的直接交互系统
JP6452440B2 (ja) 画像表示システム、画像表示装置、画像表示方法、およびプログラム
US20160171780A1 (en) Computer device in form of wearable glasses and user interface thereof
JP2017187667A (ja) 頭部装着型表示装置およびコンピュータープログラム
JP2010102215A (ja) 表示装置、画像処理方法、及びコンピュータプログラム
JP2018097160A (ja) 表示システム、表示装置、及び、表示装置の制御方法
JP2012243147A (ja) 情報処理プログラム、情報処理装置、情報処理システム、および、情報処理方法
JP2018165066A (ja) 頭部装着型表示装置およびその制御方法
JP2019164420A (ja) 透過型頭部装着型表示装置および透過型頭部装着型表示装置の制御方法、透過型頭部装着型表示装置の制御のためのコンピュータープログラム
CN106327583A (zh) 一种实现全景摄像的虚拟现实设备及其实现方法
CN108830944B (zh) 光学透视式三维近眼显示系统及显示方法
CN103517061A (zh) 一种终端设备的显示控制方法及装置
JP5602702B2 (ja) 画像処理プログラム、画像処理装置、画像処理システム、および、画像処理方法
US20220351442A1 (en) Animation production system
JP5525924B2 (ja) 立体画像表示プログラム、立体画像表示装置、立体画像表示システム、および、立体画像表示方法
US20220351446A1 (en) Animation production method
JP2017182413A (ja) 頭部装着型表示装置およびその制御方法、並びにコンピュータープログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18808994

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019546878

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18808994

Country of ref document: EP

Kind code of ref document: A1