WO2019016820A1 - A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCESSING AND METHODS THEREOF - Google Patents


Publication number
WO2019016820A1
Authority
WO
WIPO (PCT)
Prior art keywords
beacons
eyes
reality
environment
based environment
Prior art date
Application number
PCT/IL2018/050813
Other languages
French (fr)
Inventor
Alon Melchner
Original Assignee
Alon Melchner
Priority date
Filing date
Publication date
Application filed by Alon Melchner filed Critical Alon Melchner
Publication of WO2019016820A1 publication Critical patent/WO2019016820A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • The present invention relates to methods and devices ("MR beacons") for defining an accurate position, indoors and outdoors, of an immersive reality-virtuality continuum-based environment and the vectors and/or positioning and/or rotation of a physical device or devices (which may include sensors, beacons and/or IoT devices and/or Smart City devices and/or an unlimited number of mobile devices or other mechanisms that define and send or transmit data and/or energy) relative to a visual representation device's positioning (smart glasses / camera / screen / mobile / holographic etc.) in a real environment or virtual 3D space, without using visual processing methods.
  • The invention includes a device ("MR beacon") or several devices ("MR beacons") for estimating position and rotation according to a position in a matrix created by "MR beacons" in real space, compared to the position and point of view of the visual representation device (smart glasses/contact lenses/camera/screen/mobile etc.) ("MR eyes"), so that the immersive reality-virtuality continuum-based environment will be placed in the appropriate position/rotation/vector relative to the point of view of the visual representation, or of a plurality of visual representations, each from its own point of view.
  • Virtual, augmented or mixed reality environments are generated by computers using, in part, data that is analyzed from the environment.
  • Virtual, augmented or mixed realities generally refer to altering a view of reality.
  • Artificial information about a real environment can be overlaid over a view of the real environment.
  • The artificial information can be interactive or otherwise manipulable, providing the user of such information with an altered, and often enhanced, perception of reality.
  • The reality-virtuality continuum-based environment is still a relatively new area of interest with limited present-day applications.
  • The present invention relates to those devices and infrastructure that will be placed by others, and also to special devices designed by the inventor or by others based on this invention, for presenting, rendering or projecting, and also tracking, an immersive reality-virtuality continuum-based environment on top of or superimposed upon the estimated real position.
  • The present invention offers new alternatives to visual processing and analysis for realistically placing and tracking virtual environments as layers on see-through devices, camera-rendered environments, mobile devices, holograms or any method of mixing virtual and real environments.
  • The mixing process that will enable placement and tracking of virtual environments on top of real environments will be done by calculating and analyzing different sensors and technologies like IoT, BLE, WiFi and Smart City infrastructure and others that are sometimes used for location-based services, or other sensing methods like proximity, level of energy, signal strength or other sensing options to define position.
  • The present invention also enables synchronizing the accurate starting position of an immersive reality-virtuality continuum-based environment, content or 3D model so that it will be placed accurately in its physical position; from that point the invention may use visual processing technologies such as ARKit or ARCore to continue the experience, but without the need to scan the environment first or to recognize the accurate position of predefined content.
  • The present invention also enables loading the predefined spatial 3D mapping of a certain location in the real environment, so that an immersive reality-virtuality continuum-based environment, content or 3D model will be placed accurately in its physical position without the need to spatially scan it first.
  • The present invention may also use a visual image and/or a visual picture and/or a QR code and/or any other visual representation recognized by the system/server/software to synchronize the accurate starting position of an immersive reality-virtuality continuum-based environment, content or 3D model so that it will be placed accurately in its physical position; from that point the invention may use visual processing technologies such as ARKit or ARCore to continue the experience, but without the need to scan the environment first or to recognize the accurate position of predefined content.
  • The same usage of a visual representation also enables loading the predefined spatial 3D mapping of a certain location in the real environment, so that an immersive reality-virtuality continuum-based environment, content or 3D model will be placed accurately in its physical position without the need to spatially scan it first.
  • The placement of virtual environments on top of the real environment with this invention may also use other sensors on the visual representation device, like a gyro, an accelerometer and others, to better adjust the virtual environment's representation, but these will be secondary to the main invention.
  • MR BEACONS and EYES offer advantages like low energy and processing use, the ability to offer results not dependent on visual analysis, and more; moreover, combining any visual analysis with MR BEACONS will improve its abilities dramatically due to the reduced computer resources (CPU, GPU, memory and battery) needed for implementation of the present invention.
  • Object recognition and tracking currently use several processes and place heavy demands on device resources, consisting mainly of visual processing, to enable placement of virtual environments on top of real environments (Fig. 1).

SUMMARY OF THE INVENTION
  • The aforesaid system comprises: (a) at least 3 mixed reality (MR) beacons, each MR beacon further configured to define individual positions thereof and relative positions thereof; (b) at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons; (c) a source of an immersive reality-virtuality continuum-based environment; and (d) at least one IoT device.
  • MR: mixed reality
  • Another object of the invention is to disclose a mutual arrangement of said MR beacons and said MR eyes which is calibrated by means of measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
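Both calibration signals named above (an arrival time of a signal, or its level of energy) can be converted into a beacon distance. A minimal sketch, assuming a one-way time-of-flight measurement and a log-distance path-loss model with an assumed 1 m calibration constant; the function names and parameter values are illustrative, not from the patent:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_arrival_time(t_emit: float, t_arrive: float) -> float:
    """Distance from the one-way time of flight of a radio signal."""
    return (t_arrive - t_emit) * SPEED_OF_LIGHT

def distance_from_rssi(rssi_dbm: float, tx_power_dbm: float = -59.0,
                       path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: tx_power_dbm is the RSSI measured
    at 1 m (an assumed calibration constant for the beacon hardware)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

For example, an RSSI 20 dB below the 1 m calibration value corresponds to roughly 10 m under a free-space exponent of 2.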
  • A further object of the invention is to disclose the mixed reality experience comprising at least one element selected from the group consisting of a cinema experience, a theatre experience, an indoor environment, an outdoor environment and any combination thereof.
  • A further object of the invention is to disclose that the mutual arrangement is calculated by means of triangulation.
  • A further object of the invention is to disclose the triangulation being performed by an ear clipping method or a monotone polygon method.
  • A further object of the invention is to disclose that at least one of said MR beacons or MR eyes is carried by an internet-of-things article.
  • A further object of the invention is to disclose that the MR eyes unit functions as an MR beacon.
  • A further object of the invention is to disclose a method of providing a mixed reality experience;
  • The aforesaid method comprises the steps of: (a) providing a system further comprising (i) at least 3 mixed reality (MR) beacons, each MR beacon configured to define individual positions thereof and relative positions thereof; (ii) at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons; (iii) a camera configured for capturing an image of a real environment; (iv) a source of an immersive reality-virtuality continuum-based environment; said MR eyes unit being configured for combining said image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronously with said MR eyes unit; (b) triangulating said MR eyes relative to said MR beacons; (c) calculating said spatial position of said MR eyes relative to said beacons; (d) providing an immersive reality-virtuality continuum-based content corresponding to said real environment; (e)
  • A further object of the invention is to disclose the step of triangulating said MR eyes relative to said MR beacons comprising measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
  • A further object of the invention is to disclose the step of triangulating said MR eyes relative to said MR beacons being performed by an ear clipping method or a monotone polygon method.
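The ear clipping method named above is a standard polygon-triangulation routine. A compact sketch for a simple polygon given as a counter-clockwise vertex list (an illustrative textbook implementation, not the patent's own):

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def _point_in_triangle(p, a, b, c):
    d1, d2, d3 = _cross(a, b, p), _cross(b, c, p), _cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def ear_clip(polygon):
    """Triangulate a simple polygon (CCW vertex list) by ear clipping.
    Returns triangles as index triples into the input list."""
    verts = list(range(len(polygon)))
    triangles = []
    while len(verts) > 3:
        for i in range(len(verts)):
            prev, cur, nxt = verts[i-1], verts[i], verts[(i+1) % len(verts)]
            a, b, c = polygon[prev], polygon[cur], polygon[nxt]
            if _cross(a, b, c) <= 0:          # reflex or degenerate corner
                continue
            if any(_point_in_triangle(polygon[v], a, b, c)
                   for v in verts if v not in (prev, cur, nxt)):
                continue                       # another vertex inside: not an ear
            triangles.append((prev, cur, nxt))
            verts.pop(i)                       # clip the ear
            break
        else:
            break                              # degenerate input; stop
    triangles.append(tuple(verts))
    return triangles
```

A polygon with n vertices yields n - 2 triangles; e.g. a unit square yields two.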
  • FIG. 1 is a flowchart of a prior art method of displaying virtual reality
  • FIG. 2 is a flowchart of a method of displaying virtual reality according to the present invention
  • FIGS 3 and 4 are schematic diagrams of triangle pyramids
  • FIG. 5 is a schematic diagram illustrating pyramid triangulation with IoT objects
  • FIG. 6 is a schematic diagram which illustrates a user travelling within an immersive reality-virtuality continuum-based environment
  • FIGS 7a to 7c are schematic diagrams presenting beacon arrangements for appropriate orientation of immersive reality-virtuality continuum-based environment;
  • FIG. 7d presents views of the method and technology to fill, map and cover the world with digital layers;
  • FIG. 8 presents views of a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object;
  • Figs 9(a, b) and 10(a, b) illustrate a method to change rooms and corridors in hospitals into a new reality with MR Beacons; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it.
  • Figs 11(a, b) illustrate a method to change rooms and corridors at EVENTS & EXHIBITIONS into a new reality with MR Beacons; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it;
  • Fig. 12 illustrates a method and technology to fill the world and cover the world and map it with digital layers
  • Fig. 13 illustrates a method and technology that includes immersive reality-virtuality continuum- based environment data and/or 3D model data and/or code data within any physical object
  • The invention's core innovation is the mechanism, which may include different algorithms, that enables presenting an immersive reality-virtuality continuum-based environment at a defined position/vector/angle/object relative to the camera or device that aims at it, without any visual calculation or processing.
  • Fig. 1 illustrates the existing prior art method for tracking and understanding where to display virtual environments by using visual devices and visual processing.
  • Fig. 2 illustrates diagrammatically the present invention, which enables low processing and no visual-feed processing in order to display virtual environments by means of sensors.
  • Fig. 3 illustrates the triangle pyramid created from MR EYES and 3 or more MR beacons.
  • (a), (b), (c) can be the same distance or totally different; they may be in millimeters, centimeters, meters or kilometers.
  • MR EYES is the visual representation device whose position, rotation and direction are compared (1, 2, 3 & 5). Thanks to the 3 or more triangles of the pyramid, combined with the base shape of the pyramid, there is enough data to know the exact angles and distances.
  • A one-time sync may be done inside the system to translate real-world distance and size to the immersive reality-virtuality continuum-based environment, and all calculations will be made accordingly.
  • The same method will be applied with a square pyramid (MR EYES and 4 or more MR beacons that create the pyramid's square base).
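The apex of the pyramid (the "MR eyes" position) follows from the three base-beacon positions and the three measured edge distances by standard three-sphere trilateration. A sketch, assuming the beacons are not collinear; the two returned points are the mirror solutions above and below the base plane:

```python
import math

# Small 3-vector helpers (tuples).
def _sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def _add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def _scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def _norm(a): return math.sqrt(_dot(a, a))
def _unit(a): return _scale(a, 1.0 / _norm(a))
def _vcross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate the pyramid apex from three non-collinear base beacons
    p1..p3 and the measured distances r1..r3 to each. Returns the two
    mirror solutions above and below the beacon plane."""
    ex = _unit(_sub(p2, p1))                        # x-axis along p1->p2
    i = _dot(ex, _sub(p3, p1))
    ey = _unit(_sub(_sub(p3, p1), _scale(ex, i)))   # y-axis in the base plane
    ez = _vcross(ex, ey)                            # normal to the base plane
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    # Intersect the three distance spheres in the local frame.
    x = (r1*r1 - r2*r2 + d*d) / (2.0*d)
    y = (r1*r1 - r3*r3 + i*i + j*j) / (2.0*j) - (i/j)*x
    z = math.sqrt(max(r1*r1 - x*x - y*y, 0.0))
    base = _add(p1, _add(_scale(ex, x), _scale(ey, y)))
    return _add(base, _scale(ez, z)), _add(base, _scale(ez, -z))
```

With a fourth beacon (the square-pyramid base) or any extra triangle, the ambiguity between the two mirror solutions disappears and measurement noise can be averaged out.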
  • Fig. 4 illustrates calculation of vectors.
  • FIG. 5 illustrates another aspect of the present invention: multiple MR BEACONS, or the use of IoT or Smart City devices to map a city with MR.
  • The world will become a matrix of billions of IoT devices; they will be used to act as MR beacons, and some will act as MR eyes too.
  • FIG. 6 illustrates streaming 3D data while walking.
  • This streaming data concept will include all types of data including video, 3D model, logic and code, sound and more.
  • The invention will include different data types, priority definitions for any content, understanding and forecasting the walking or driving path, and streaming enough data to fill 360 degrees of content with enough information to cover the line of sight.
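The prioritized streaming described above can be sketched as a simple scoring scheme: content nearer the user and closer to the line of sight streams first, until a data budget is spent. The weighting constant and all item fields below are assumptions for illustration, not values from the patent:

```python
import heapq
import math

def plan_streaming(user_pos, heading, items, budget):
    """Order content items for streaming. Each item is a tuple
    (name, (x, y) position, size_bytes); heading is the line-of-sight
    direction in radians. Lower score = streamed earlier."""
    heap = []
    for name, pos, size in items:
        dx, dy = pos[0] - user_pos[0], pos[1] - user_pos[1]
        dist = math.hypot(dx, dy)
        angle = abs(math.atan2(dy, dx) - heading)
        angle = min(angle, 2 * math.pi - angle)   # wrap to [0, pi]
        score = dist + 10.0 * angle               # weight is an assumption
        heapq.heappush(heap, (score, name, size))
    plan, used = [], 0
    while heap:
        _, name, size = heapq.heappop(heap)
        if used + size > budget:                  # skip items over budget
            continue
        plan.append(name)
        used += size
    return plan
```

Content directly ahead streams before content behind the user, and far-away items drop out once the budget is exhausted.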
  • Fig. 7a illustrates a man using his eyes with glasses, contact lenses or any other direct-vision device or technology to become MR EYES and the peak of the pyramid, looking at the pyramid base.
  • Fig. 7b illustrates another point of view, showing the pyramid from all angles and heights: by creating the pyramid and calculating triangles we can define the virtual environment's vectors.
  • Fig. 7c illustrates mobile MR EYES as another example, and the view of the real-world angle with the layer of virtual environments in a perfect vector from the MR EYES point of view.
  • Fig. 7d illustrates reflection of the real environment; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it.
  • Fig. 8 illustrates a method and technology to fill, cover and map the world with digital layers.
  • FIGs 9(a, b) and 10(a, b) illustrate different embodiments of a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object.
  • Figs 11(a, b) illustrate different embodiments of the technology applied to a room and corridors in an exhibition, wherein the room and corridors at EVENTS & EXHIBITIONS may be altered to a new mixed reality with MR Beacons by placing immersive reality-virtuality continuum-based environments that appear with the right angle and point of view, reflecting the real world around them.
  • Figs 12(a, b) illustrate a method and technology to fill, cover and map the world with digital layers. The need is to know the type of content, its 3D shape/model, size, angle, position and more.
  • Figs 13 to 16 illustrate a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object.
  • Input data includes the type of content, its 3D shape/model, size, angle, position and more.
  • The present invention relates to different devices and technologies ("Mixed Reality beacons") for sensing and calculating distances and positions (like IoT, WiFi, RF, BLE, magnets, infra-red and all other types of technologies that use the same method as the invention).
  • The present invention provides means and a method to present an immersive reality-virtuality continuum-based environment at a relatively accurate position in real space, without the need to visually analyze the data, by calculating the relative position of the camera/eyes.
  • The invention will include mathematical and/or physical and/or logical mechanisms that will enable the visual representation device to get and send its estimated position compared to one, two, three or more MR beacons. The visual representation device will be the "top of the pyramid" and the 3 or 4 MR beacons will be the base of the pyramid, so that a 3-dimensional pyramid shape is created with 3+1 (visual representation device) or 4+1 corners. The visual representation device will also create more triangles with every 2 MR beacons to improve accuracy and to understand the 3D shape, but mainly to act as the viewing point between its vector and the pyramid base. (Fig. 3)
  • The methodology is based on a "pyramid" concept, but the present invention also encompasses the use of different types of connections and data, or a matrix of infinite devices, to place a virtual object in a physical place without the need for a camera and/or visual analysis.
  • The physical device that is an embodiment of the invention is based on beacons and sensors that may already exist (for IoT, for example); but instead of using them to compare the place of the mobile device or the user to the world and to their location in the world, the invention is the technical means and algorithmic method for calculating the place of the device ("Mixed Reality beacon") compared to the point of view of the visual representation device (smart glasses/camera/screen/mobile etc.) ("Mixed Reality eyes"), so that the immersive reality-virtuality continuum-based environment will appear on top of the "MR beacon" position, or in an area of a group of "MR beacons".
  • MR beacons will define one XYZ position, vector and rotation compared to the "MR eyes" position and vector, so if the "MR eyes" changes position or rotation, its point of view on the "MR beacon" will be relative to its new vector.
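The relative XYZ/vector relationship above can be sketched as a change of frame: a beacon's world offset is rotated into the viewer's frame, so content anchored to the beacon tracks the "MR eyes" as it moves. For brevity this sketch assumes only the viewer's yaw about the vertical axis is tracked (a full solution would use a quaternion or rotation matrix), with x-z as the ground plane and y up:

```python
import math

def beacon_in_view(eyes_pos, eyes_yaw, beacon_pos):
    """Express a beacon's world XYZ in the MR eyes' frame so virtual
    content can be placed relative to the viewer. eyes_yaw is the
    viewer's rotation about the vertical (y) axis, in radians."""
    dx = beacon_pos[0] - eyes_pos[0]
    dy = beacon_pos[1] - eyes_pos[1]
    dz = beacon_pos[2] - eyes_pos[2]
    cos_y, sin_y = math.cos(-eyes_yaw), math.sin(-eyes_yaw)
    # Rotate the world-frame offset into the viewer's frame.
    x_view = dx * cos_y - dz * sin_y
    z_view = dx * sin_y + dz * cos_y
    return (x_view, dy, z_view)
```

When the viewer turns half a circle, a beacon that was in front (positive z offset) ends up behind (negative z in the view frame), which is exactly the re-anchoring behavior described above.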
  • An MR beacon will transmit radio, magnetic, IR, sound-wave or any other type of energy or communication.
  • MR eyes will act as another beacon for the purpose of having another point of reference to calculate positioning, but all positioning calculations of the beacons will be compared to it.
  • MR eyes will always be the sharp edge of the pyramid, whether it is a triangle-based pyramid, a square-based pyramid or more. The result is like a spotlight based at the sharpest point of the pyramid, or a virtual camera in 3D space that aims at the center of its base. (Fig. 3)
  • MR beacons can be arranged in different ways, and they represent an area, a room or an object compared to the MR eyes and the place where virtual content should be placed. For example: if 4 to 8 MR beacons are placed in 4 to 8 corners of a room, one can obtain a box area inside which the immersive reality-virtuality continuum-based environment "knows" the size and space to cover or fill with content. A group of MR beacons can represent a place, and by defining or placing an immersive reality-virtuality continuum-based environment that represents the real 3D shapes, they can be covered by or mixed with the immersive reality-virtuality continuum-based environment.
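The box area spanned by 4 to 8 corner beacons reduces to a bounding-box computation. A sketch, assuming the beacon coordinates have already been resolved into one common frame:

```python
def room_box(beacons):
    """Axis-aligned box spanned by MR beacons placed in a room's corners
    (4 to 8 beacons, per the text). Returns (min_corner, max_corner):
    the volume virtual content 'knows' it may cover or fill."""
    xs, ys, zs = zip(*beacons)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

Eight beacons at the corners of a room return the room's full extent; four beacons on one wall or floor plane return a degenerate (flat) box, which still anchors a planar layer of content.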
  • The invention will include a way to have 3 small "mini MR beacons" installed in one "MR beacon" that may be 1 mm in diameter and up to 5 cm (for the small size), or larger for the medium and bigger "MR beacons" that will be used for larger distances and power.
  • The "MR eyes" enables the visual representation of the immersive reality-virtuality continuum-based environment and will place inside it a 3D virtual digital camera or representation screen, such that when aimed toward the relative location of the "MR beacon" or "MR beacons" it places the relevant virtual objects or content at the exact vector/position and angle, so that the result and the experience of the user will be that the visual representation of the immersive reality-virtuality continuum-based environment sits on top of the real environment.
  • The placement over the real "MR beacon" will be seamless and as realistic as possible.
  • Continuous tracking will ensure that the visual representation of the immersive reality-virtuality continuum-based environment stays in the same position in the real environment and rotates and adjusts according to the "MR eyes" movement and point of view.
  • The result will be a light 360-degree immersive reality-virtuality continuum-based environment fixed to the defined "MR beacon", with no need for visual processing and no loss of sight or understanding as to where the visual representation of the immersive reality-virtuality continuum-based environment should be.
  • This invention will enable defining a 3D mesh and grid of meshes on top of real environments without any visual limitation related to light, distance or camera quality. It will enable placing 3D virtual layers in predefined fixed places, covering even a building or a big object, and looking at it from any distance or angle.
  • The pyramid base can be horizontal, vertical or at any angle, and the top of the pyramid, represented by the visual representation device's angle, position and vector relative to the pyramid, can be at any angle, position or vector.
  • When the visual representation device is aiming toward the pyramid base, it will also present the immersive reality-virtuality continuum-based environment.
  • When the visual representation device's angle, position and vector do not aim directly toward the pyramid base, it may still show the immersive reality-virtuality continuum-based environment, if that is how it is supposed to be presented according to its size, the amount of content etc.
  • The pyramid base size, or the distance between the MR beacons, may be millimeters or miles. A small base may be used for high-resolution positioning, or for small devices that may be enclosed inside the MR beacons' pyramid base so that they may be covered with the immersive reality-virtuality continuum-based environment, changing their visual appearance and tracking and modifying that appearance at any angle, rotation or position relative to the visual representation device, the "MR eyes".
  • A large pyramid base may be used to place large immersive reality-virtuality continuum-based environments to cover buildings or large areas of land.
  • A pyramid base and MR beacons may offer an immersive reality-virtuality continuum-based environment to an infinite number of visual representation devices ("MR eyes"), each of which will act as its own pyramid top and will show the user the relevant angle, position and vector of the immersive reality-virtuality continuum-based environment.
  • The pyramid base can be created from 3, 4 or more separate MR beacons that communicate with each other, or they can be placed on one electronic device that includes the pyramid base and acts as a standalone base for the pyramid.
  • The pyramid can be created with 3, 4 or more MR beacons placed in one device, or with any 3, 4 or more separate MR beacons; they can also be 3, 4 or more other IoT, BLE or even mobile devices that can transmit their location to any other MR EYES and create a pyramid.
  • MR Beacons can also be MR Eyes: if 3, 4 or more MR eyes also act as beacons, they can each act as a beacon to the others.
  • MR Eyes and MR beacons may include other sensors, like a gyro or an accelerometer, to improve their accuracy.
  • IoT devices are going to cover the world; they can communicate with each other and with other devices, including MR EYES. AI systems will enable them all to talk to each other and to transmit information and location, and will enable the invention to use their existence to create MR beacons out of them, and MR EYES to use them.
  • This invention will include the use of different IoT devices, with or without AI systems, to support and basically to use their matrix, or to create one, for mapping all positions relative to GPS or maps, but also to remember and cover the world/matrix with multiple realities/multiple layers of immersive reality-virtuality continuum-based environment.
  • When an immersive reality-virtuality continuum-based environment is created at a specific place on Earth or any other planet, it will be recorded with all possible data: position, maps, GPS, MR beacons, angles, vectors and rotation, but also all data that will be collected to affect the immersive reality-virtuality continuum-based environment, like pictures and physical objects that can later be transformed into immersive reality-virtuality continuum-based environments too, or at least be known, so that the immersive reality-virtuality continuum-based environment will react to their position or existence.
  • MR BEACONS as a main infrastructure for MR EYES: some existing systems use AR to show directions or information with different levels of AR UI, and they also use their technology to map the beacons and the rooms or the streets, but they do not use it in a way that locates the beacons and translates the environment into mixed reality, augmented reality or virtual reality on top of, and combined with, the real world. This ability to use this growing infrastructure as MR BEACONS is a main invention.
  • Using MR beacons to transform a room or area into a different environment, with a layer of immersive reality-virtuality continuum-based environment that covers this area, is a core invention.
  • MR BEACONS for cinemas: with MR Beacons it is possible to cover the walls of the cinema with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the movie experience.
  • The present invention includes other technologies to enhance the cinema space with mixed reality, with content parallel to the movie's content and/or changing the cinema's environment to a reality decided before, during and after the movie is presented, such as S.L.A.M., depth cameras, and devices and technologies like HoloLens etc.
  • A movie featuring a spaceship can be programmed to change the walls of the cinema to spaceship interior walls, with windows that show the stars, space and action outside.
  • Real 3D immersive reality-virtuality continuum-based environments can be added to the cinema space and environment: birds can fly above the heads of the visitors, and the immersive reality-virtuality continuum-based environment can be interacted with, such as flapping or gesturing a hand to drive a bird away. Characters and objects can fly out of the movie screen toward the people or onto the floor of the cinema and stay there, and any other interaction or creative idea that combines the immersive reality-virtuality continuum-based environment with the real cinema can be implemented by use of the present invention.
  • v. Activation of MR content can be done according to the time in the film, by BLE or other types of communication to trigger MR events, or by any other method of activation, even manually.
  • Immersive reality-virtuality continuum-based environment actors, characters, art, scenery, objects, content or any other idea can be part of the film, in or out of the screen, as part of the cinema room and/or its new reality created by immersive reality-virtuality continuum-based environment.
  • Advertisements, promotional activities, sponsorships and other methods of publication with MR beacons and/or mixed reality in cinemas are included.
  • MR BEACONS for theaters, performing halls, live shows, live concerts etc.: the same as cinemas.
  • MR BEACONS for HOSPITALS (Fig.): with MR Beacons it is possible to cover the hospitals' walls, the floor, the ceiling and the corridors with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the experience.
  • GUIDED VISIT, EFFECTIVE BUSINESS: visitors will now be guided with an interactive MR host, floating direction arrows and marked destinations.
  • MR BEACONS for EVENTS & EXHIBITIONS: with MR Beacons it is possible to cover the walls, the floor, the ceiling and the corridors with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the experience.
  • GUIDED VISIT, EFFECTIVE BUSINESS: visitors will now be guided with an interactive MR host, floating direction arrows and marked destinations.
  • v. ATTRACT VISITORS Enable attractions, games, selfie areas, guided tours
  • SMART ANALYTICS Learn what worked, where they went, how long.
  • MR BEACONS for RETAIL, STORES, SUPERMARKETS - with MR Beacons it is possible to cover the walls, the floor, the celling, the corridors with a layer of immersive reality-virtuality continuum-based environment in any light level even near darkness to complete the movie experience. Advertisements, promotion activities, sponsorships and other method of publications with MR beacons and/or mixed reality in cinemas are included.
  • Pyramid MR BEACONS enable the creation immersive reality-virtuality continuum-based environment in any visual condition at night, in darkness, fog or any visual limitation thanks to the methodology of MR BEACONS the ability to use non-visual sensing mechanism it is possible to place immersive reality-virtuality continuum-based environment in the dark, in bad weather and far away.
  • Shadows will appear on the real environment when is needed according to light source, shadows needs to be presented as immersive reality-virtuality continuum-based environment.
  • the position of the immersive reality-virtuality continuum-based environment in the real environment need to appear real and natural in its environment, that includes the right reflections according to the place of the MR eyes, shadows, light etc..
  • The reflection map on the immersive reality-virtuality continuum-based environment can come from (a) pictures collected from the Internet, the cloud or search engines that represent the place, and/or (b) AI search engines for pictures relevant to the position, and/or (c) Google Earth or another source of location-based images, and/or (d) live camera feeds from the location, and/or (e) camera feeds from mobile or other devices or IoT that record the location.
  • Those sources will create a reflection map and/or color map and/or specular map and/or any other mapping that covers the immersive reality-virtuality continuum-based environment to create a realistic real-time look and feel: an image, changing images or video reflected on the immersive reality-virtuality continuum-based environment according to its materials, making it look more realistic in the real environment. They can be used as a skybox or in any other way to cover the virtual model. (FIG. 12)
  • MR eyes present the immersive reality-virtuality continuum-based environment related to the physical object: on it, covering it, interacting with it, upgrading it and more.
  • By placing one or more micro MR beacons inside or with a physical object (a wooden chair, a toy, a building or any other real object), the object may or may not include its own 3D shape, either to be manipulated or to act as a "mask" (a ghost virtual shape) so that the virtual content added to it will "know" its shape: where to be hidden and what to cover, thereby creating a realistic composition of the real and the virtual.
  • The tagged area cannot be tagged by another user; it may be sold with real or virtual currency; it may be viewed, visited or interacted with according to the controller's rules; it may charge virtual currency from visitors, which is paid to the controller.
  • Neighboring cube areas may be tagged by others. All those cube areas will be marked on the real-world map and/or grid and/or streets and will create a second or further realities of the same spots in the real world. Tagging a place and owning it, creating demand for hot places in the real world, earning currency from the traffic of real people into the area, and the viral results of that, are part of the invention.
  • A method and technology is provided to fill, cover and map the world with digital layers.
  • the present invention includes means to provide characteristics of content, such as 3D shape/model, size, angle, position and more.
  • the present invention includes mechanisms for recording, saving, uploading and downloading 3D and/or digital content and/or interactive logic and/or code and/or any immersive reality-virtuality continuum-based environment data from a cloud server and/or another device (like mobile phone, laptop, hard drive) and/or any other device that may contain data and connect and place and show it in a predefined place/area in real environment with AR/VR/MR based on exact location with MR Beacons and/or GPS and/or any other ways of presenting immersive reality-virtuality continuum-based environment.
  • the area is defined as a 2D rectangle area that is part of a predefined grid/matrix on the map.
  • Each rectangle or Mixed.Place can be a flat rectangle on the ground or wall and/or as 3D cube area based in the real world. This area may be changed with immersive reality-virtuality continuum-based environment, updated, interacted with and more.
  • Any user with the specific technology and app on any device (mainly mobile phones, smart glasses or other MR eyes) can now TAG their own area for free or by paying a defined amount of virtual coins or real money.
  • a place is rendered free or defined as free of any other owner.
  • The user marks this Mixed.Place as his own and can then manipulate the virtual/digital reality of his Mixed.Place.
  • The owner can then create content in his area, draw, build, offer messages, interact and create any type of reality he desires so others can see it too. If he wants to TAG another Mixed.Place he needs to have enough coins/money to buy it.
  • The need is to know what type of content it is, download/install its 3D shape/model/immersive reality-virtuality continuum-based environment and/or size and/or angle and/or position, and be able to accurately load and/or cover and/or replace and/or update and/or change the appearance of the physical shape with digital layers (Fig. 16), place it more realistically in the real world, and use its shape as a mask (depth mask) to show other immersive reality-virtuality continuum-based environment behind it. (Fig. 14)
  • This mechanism includes immersive reality-virtuality continuum-based environment data and/or 3D model and/or any other digital code inside an RF and/or BLE and/or QR code and/or any other data containing or transmitting device that will become part of the physical object.
  • The transmitting device can be inserted into or connected to a device at any stage; a QR code may help identify the data quicker and also give extra info like the exact orientation of the 3D model relative to the device.
  • When activated by a device (mobile, computer, glasses etc.), the data is downloaded from the cloud to the device and/or MR EYES, which can then use it according to need.
  • The data can be unique per device, including a unique ID code, unique color and any other relevant data that describes the physical object better, so it can be represented accurately in the immersive reality-virtuality continuum-based environment.
  • The data can contain fabric type, flexibility of different parts of the object, mass and any other physical data that needs to be transformed into digital form.
  • This mechanism includes recording, updating, saving, uploading and downloading 3D and/or digital content and/or interactive logic and/or code and/or any immersive reality-virtuality continuum-based environment data from a cloud server and/or another device (like a mobile phone, laptop or hard drive) and/or any other device that may contain data, and connecting, placing and showing it in a predefined place/area in the real environment with AR/VR/MR and/or MR Beacons and other ways of presenting immersive reality-virtuality continuum-based environment.
  • Updating the data may be allowed with a code that may be created by the creator of the physical object, or the data can be fixed to avoid data loss or misuse.
  • This patent is a part of the future where physical and digital will be combined. It will be used in any type of object for different usages: furniture, 3D/2D printed objects, home decorations, statues, art, outdoor fountains, monuments, buildings, shapes of stairs and walls and more.
  • The flow will be: scanning the object's ID » downloading data from the cloud or device » placing or replacing the immersive reality-virtuality continuum-based environment on top of, or combined with, the physical object » interacting with it.
  • Virtual/digital content will be added to the product, on top of it or around it, the product can be covered with digital materials or its surrounding by covering the product with depth mask so content will be hidden behind it.
  • A method and technology is provided that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object that is manufactured and/or created and/or printed with 3D printers, and/or can be implemented inside an existing device, so that when needed the object can transmit this data to replace and/or cover and/or map the physical object with digital layers.
  • The need is to know what type of content should be downloaded and placed at a predefined real environment.
  • A method for including immersive reality-virtuality continuum-based environment 3D drawing in real environments, represented as mixed reality and combinable with the real environment, comprises the steps of:
  • This method and technology can be used in all virtual environments - augmented reality, virtual reality and mixed reality. Drawing can be done by mobile as described but also with controllers, hand gestures, digital pen, wearable devices and more. Drawing - recognition - make it 3D and place it on Mixed.Places and make it stay there, enable it for anyone to see and interact whether it is a drawing or a 3D model made out of the drawing.
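The scan-and-download flow described above (object ID » cloud data » placement) can be sketched in a few lines of Python. Everything below is an illustrative assumption, not part of the patent: the field names, the in-memory "cloud" lookup and the handler are hypothetical stand-ins for a real server and MR engine.

```python
from dataclasses import dataclass

@dataclass
class ObjectData:
    """Digital payload attached to a physical object (illustrative fields)."""
    object_id: str
    model_url: str   # where the 3D shape/model would be downloaded from
    size_cm: float
    angle_deg: float

# A hypothetical in-memory "cloud"; in practice this would be a server lookup.
CLOUD = {
    "chair-001": ObjectData("chair-001", "https://example.com/chair.glb", 90.0, 0.0),
}

def handle_scanned_id(object_id):
    """Scan the object's ID, fetch its data from the cloud, and hand it to
    the placement step. Placement and interaction belong to the MR engine;
    here we simply return the data the engine would need."""
    data = CLOUD.get(object_id)
    if data is None:
        return None  # unknown object: nothing to place
    return data
```

The design point is simply that the object carries only an ID; the heavy content (model, size, orientation) lives in the cloud and is streamed on demand.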


Abstract

A system for providing a mixed reality experience comprises: (a) at least 3 mixed reality (MR) beacons positioned at predetermined distances from each other; each MR beacon further configured to define individual positions thereof and relative position thereof on the basis of cosines; (b) at least one MR eyes unit comprising means for calculating a spatial position thereof relative to the MR beacons; and (c) a source of an immersive reality-virtuality continuum-based environment. The MR eyes unit is configured for combining a real environment with the immersive reality-virtuality continuum-based environment oriented synchronically with the MR eyes unit.

Description

A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCESSING AND METHODS THEREOF
FIELD OF THE INVENTION
The present invention relates to methods and devices ("MR beacons") for defining an accurate position, indoors and outdoors, of an immersive reality-virtuality continuum-based environment and the vectors and/or positioning and/or rotation of a physical device or devices (that may include sensors, beacons and/or IoT devices and/or Smart City devices and/or an unlimited number of mobile devices or other mechanisms that define and send or transmit data and/or energy) compared to a visual representation device's positioning (smart glasses / camera / screen / mobile / holographic etc.) in a real environment or virtual 3D space, without using visual processing methods.
The invention includes a device ("MR beacon") or several devices ("MR beacons") for estimating position and rotation according to a position in a matrix created by "MR Beacons" in real space, compared to the position and point of view of the visual representation device ("MR eyes": smart glasses/contact lenses/camera/screen/mobile etc.), so that the immersive reality-virtuality continuum-based environment will be placed in the appropriate position/rotation/vector relative to the point of view of the visual representation, or a plurality of visual representations, each from its own point of view.
BACKGROUND OF THE INVENTION
Virtual, augmented or mixed reality environments are generated by computers using, in part, data that is analyzed from the environment. Virtual, augmented or mixed realities generally refer to altering a view of reality. Artificial information about a real environment can be overlaid over a view of the real environment. The artificial information can be interactive or otherwise manipulable, providing the user of such information with an altered, and often enhanced, perception of reality. However, the reality-virtuality continuum-based environment is still a relatively new area of interest with limited present-day applications.
IoT, WiFi, Bluetooth, BLE, RF, magnetic fields, infra-red, sound waves, smart cities and many other technologies are soon going to cover the world with their infrastructure. The present invention relates to those devices and infrastructure, which will be placed by others, and also to special devices that will be designed by the inventor or by others based on this invention, for presenting, rendering or projecting, and also tracking, immersive reality-virtuality continuum-based environment on top of or superimposed upon the estimated real position.
Currently, virtual, augmented or mixed reality environments are placed, mixed and tracked with visual/digital processing of the environment by 2D or 3D cameras, or by using an image target to determine where to visualize the 3D virtual imagery. To create an intuitive, realistic mix of virtual environments with real environments, it is necessary to constantly process the visual data from visual sensors like cameras, calculate the vectors and the 3D space, and place virtual environments accordingly. See-through head-mounted displays like glasses or contact lenses enable the user to see the real environment through the glasses; they only need to render the virtual environments on the lens or in front of the user to combine and mix with the real environment, but they still need a camera or visual device to constantly process the visual data and adjust the virtual content accordingly.
The present invention offers new alternatives for visual processing and analysis to realistically place and track virtual environments as layers on see- through devices, camera rendered environments, mobile devices, holograms or any method of mixing virtual and real environments.
The mixing process that will enable placement and tracking of virtual environments on top of real environments will be done by calculating and analyzing different sensors and technologies like IoT, BLE, WiFi and Smart City infrastructure and others that are sometimes used for location-based services, or other sensing methods like proximity, level of energy, signal strength or other sensing options to define position. Instead of just letting the user know where he is, indoors or outside, he will be able to compare his location with the position of the virtual environments, so that when looking at that area he will see the virtual environments in the right position and angle relative to his own. The present invention also enables synchronizing the accurate starting position of an immersive reality-virtuality continuum-based environment or content or 3D model so it will be placed accurately in its physical position; from that point on the invention may use visual processing technologies such as ARKit or ARCore to continue the experience, but without the need to scan the environment first or to recognize the accurate position of predefined content. The present invention also enables loading the predefined spatial 3D mapping of a certain location in the real environment so that an immersive reality-virtuality continuum-based environment or content or 3D model will be placed accurately in its physical position without the need to spatially scan it first.
The present invention may also use a visual image and/or a visual picture and/or QR code and/or any other visual representation that is recognized by the system/server/software to synchronize the accurate starting position of an immersive reality-virtuality continuum-based environment or content or 3D model so it will be placed accurately in its physical position; from that point on the invention may use visual processing technologies such as ARKit or ARCore to continue the experience, but without the need to scan the environment first or to recognize the accurate position of predefined content. The same usage of visual representation also enables loading the predefined spatial 3D mapping of a certain location in the real environment so that an immersive reality-virtuality continuum-based environment or content or 3D model will be placed accurately in its physical position without the need to spatially scan it first. The placement of virtual environments on top of the real environment with this invention may also use other sensors on the visual representation device, like a gyroscope, accelerometer and others, to better adjust the virtual environment representation, but these will be secondary to the main invention.
The placement of virtual environments on top of the real environment with this invention may also use S.L.A.M. technology (simultaneous localization and mapping) or other AR technologies like Apple's ARKit. MR BEACONS and EYES offer advantages such as low energy and processing use and the ability to offer results not dependent on visual analysis, so the use of any visual analysis together with MR BEACONS will improve their abilities dramatically due to the reduced computer resources (CPU, GPU, memory and battery) needed for implementation of the present invention. Object recognition and tracking currently uses several processes and places heavy demands on device resources, consisting mainly of visual processing, to enable placement of virtual environments on top of real environments (Fig. 1).
SUMMARY OF THE INVENTION
It is hence one object of the invention to disclose a system for providing a mixed reality experience. The present invention will use fewer resources and fewer processes to enable accurate placement of virtual environments on top of real environments, and there will be no need for a camera or visual processing devices and software. (Fig. 2)
The aforesaid system comprises: (a) at least 3 mixed reality (MR) beacons; each MR beacon further configured to define individual positions thereof and relative position thereof; (b) at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons; (c) a source of an immersive reality-virtuality continuum-based environment; (d) and at least one IoT device.
It is a core purpose of the invention to provide the MR eyes unit configured for combining said image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
Another object of the invention is to disclose a mutual arrangement of said MR beacons and said MR eyes which is calibrated by means of measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
A further object of the invention is to disclose the mixed reality experience comprising at least one element selected from the group consisting of cinema experience, theatre experience, indoor environment, outdoor environment and any combination thereof.
A further object of the invention is to disclose that the mutual arrangement is calculated by means of triangulation.
A further object of the invention is to disclose the triangulation performed by an ear clipping method or a monotone polygon method.
A further object of the invention is to disclose at least one of said MR beacons or MR eyes being carried by an internet-of-things article.
A further object of the invention is to disclose the MR eyes unit functioning as an MR beacon.
A further object of the invention is to disclose a method of providing a mixed reality experience; The aforesaid method comprises the steps of (a) providing a system further comprising (i) at least 3 mixed reality (MR) beacons; each MR beacon configured to define individual positions thereof and relative position thereof; (ii) at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons; (iii) a camera configured for capturing an image of a real environment; (iv) a source of an immersive reality-virtuality continuum-based environment; said MR eyes unit is configured for combining said image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit; (b) triangulating said MR eyes relative to said MR beacons; (c) calculating said spatial position of said MR eyes relative to said beacons; (d) providing an immersive reality-virtuality continuum-based content corresponding to said real environment; (e) combining said real environment and corresponding immersive reality- virtuality continuum-based content.
A further object of the invention is to disclose the step of triangulating said MR eyes relative to said MR beacons which comprises measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
A further object of the invention is to disclose the step of triangulating said MR eyes relative to said MR beacons performed by an ear clipping method or a monotone polygon method.
BRIEF DESCRIPTION OF THE FIGURES
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration:
FIG. 1 is a flowchart of a prior art method of displaying virtual reality;
FIG. 2 is a flowchart of a method of displaying virtual reality according to the present invention;
FIGS 3 and 4 are schematic diagrams of triangle pyramids;
FIG. 5 is a schematic diagram illustrating pyramid triangulation with IoT objects;
FIG. 6 is a schematic diagram which illustrates the user's travel within the immersive reality-virtuality continuum-based environment;
FIGS 7a to 7c are schematic diagrams presenting beacon arrangements for appropriate orientation of the immersive reality-virtuality continuum-based environment;
FIG. 7d presents views of the method and technology to fill, map and cover the world with digital layers;
FIG. 8 presents views of a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object;
Figs 9(a, b) and 10(a, b) illustrate a method to change rooms and corridors in hospitals to a new reality with MR Beacons; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it;
Figs 11a and 11b illustrate changing rooms and corridors at EVENTS & EXHIBITIONS to a new reality with MR Beacons; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it;
Fig. 12 illustrates a method and technology to fill the world and cover the world and map it with digital layers; and
Fig. 13 illustrates a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object.
DETAILED DESCRIPTION OF THE INVENTION
The invention's core innovation is the mechanism, which may include different algorithms, that enables presenting an immersive reality-virtuality continuum-based environment at a defined position/vector/angle/object relative to the camera or device that aims at it, without any visual calculation or processing.
Reference is now made to Fig. 1 illustrating the existing prior art method for tracking and understanding where to display virtual environments by using visual devices and visual processing.
Reference is now made to Fig. 2 illustrating diagrammatically the present invention enabling low processing and no visual feed process in order to display virtual environments by sensors.
Reference is now made to Fig. 3 illustrating the triangle pyramid created from the MR EYES and 3 or more MR beacons. (1),(2),(3) are predefined MR Beacons with fixed, predefined distances (a),(b),(c) between them; alternatively they can be 3 chosen beacons that will create a triangle and transmit the (a),(b),(c) distances/energy/power/data to (4), where it is analyzed into distances.
(a),(b),(c) can be the same distance or totally different; they may be in millimeters, centimeters, meters or kilometers. (4) "MR EYES" is the visual representation device's position, rotation and direction compared to (1), (2), (3) and (5). Thanks to the 3 or more triangles of the pyramid, combined with the base shape of the pyramid, there is enough data to know the exact angles and distances.
(5) "POINT ZERO" - the vector (0,0,0) - will be found by defining the center of the triangle (1),(2),(3); this point will be the vector zero (0,0,0) for the immersive reality-virtuality continuum-based environment.
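As a minimal sketch, this "POINT ZERO" origin can be computed as the centroid of the three beacon positions; plain Python, assuming each position is an (x, y, z) tuple in a common coordinate frame (the function name is an assumption):

```python
def point_zero(p1, p2, p3):
    """Centre of the beacon triangle (1),(2),(3), used as the (0,0,0)
    origin for the virtual environment: the per-axis average of the
    three beacon positions."""
    return tuple((a + b + c) / 3.0 for a, b, c in zip(p1, p2, p3))
```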
(D),(E),(F) are the distances between MR EYES to each of the MR Beacons.
Using (D),(E),(a) we can obtain the angles of all of this triangle's corners; likewise using (D),(F),(c) and using (E),(F),(b). Creating 3 vectors is now possible from (4) to (1), (2) and (3), so the immersive reality-virtuality continuum-based environment that will be dropped on vector (0,0,0) will be found by (4), the MR EYES, at the right distance, angle, rotation and size.
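Recovering all corner angles of a triangle from its three side lengths is a standard application of the law of cosines; the sketch below is an illustrative implementation (the function name is an assumption), applicable to each pyramid face such as the triangle with sides (D),(E),(a):

```python
import math

def triangle_angles(a, b, c):
    """Corner angles (radians) of a triangle with side lengths a, b, c.
    Each returned angle is opposite the side of the same name, via the
    law of cosines: cos(A) = (b^2 + c^2 - a^2) / (2*b*c)."""
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))
    C = math.pi - A - B  # the angles of a triangle sum to pi
    return A, B, C
```

For a 3-4-5 right triangle, the angle opposite the side of length 5 comes out as 90 degrees, which is a convenient sanity check.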
A one-time sync may be done inside the system to translate real-world distance and size to the immersive reality-virtuality continuum-based environment, and all calculations will be made accordingly. The same method will be used with a square pyramid (MR EYES and 4 or more MR beacons that create the pyramid's square base).
Reference is now made to Fig. 4 illustrating calculation of vectors. An optional formula and example show how to calculate the first triangle out of 3 in a triangle pyramid, or 4 in a square-based pyramid. The same calculation will be made for all other triangles to help obtain the vectors and/or the angles and position of the pyramid base compared to the MR EYES position and angle.
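One conventional way to recover the MR EYES position (the pyramid's apex) from three beacon positions and the measured distances (D),(E),(F) is classic three-sphere trilateration. The sketch below assumes ideal, noise-free distances and returns the solution on the positive side of the beacon plane; it is an illustration of the geometry, not the patent's own formula:

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(v, s): return tuple(x * s for x in v)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def _norm(v): return math.sqrt(_dot(v, v))
def _unit(v):
    n = _norm(v)
    return tuple(x / n for x in v)

def trilaterate(p1, p2, p3, d1, d2, d3):
    """MR EYES position from three beacon positions p1, p2, p3 and the
    measured distances d1, d2, d3 to them. Builds a local frame on the
    beacon plane, solves for (x, y), then lifts to z above the plane."""
    ex = _unit(_sub(p2, p1))                      # x axis: p1 -> p2
    i = _dot(ex, _sub(p3, p1))
    ey = _unit(_sub(_sub(p3, p1), _scale(ex, i))) # y axis in the plane
    ez = _cross(ex, ey)                           # normal to the plane
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    x = (d1*d1 - d2*d2 + d*d) / (2 * d)
    y = (d1*d1 - d3*d3 + i*i + j*j - 2*i*x) / (2 * j)
    z = math.sqrt(max(d1*d1 - x*x - y*y, 0.0))    # clamp tiny negatives
    return _add(p1, _add(_scale(ex, x), _add(_scale(ey, y), _scale(ez, z))))
```

Real measurements are noisy, so a practical system would use more than three beacons and a least-squares fit, but the closed-form case shows why three base beacons plus the apex suffice to fix the pyramid.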
Reference is now made to Fig. 5 illustrating another aspect of the present invention: multiple MR BEACONS, or the use of IoT or smart city infrastructure to map a city with MR. The world will become a matrix of billions of IoT devices; they will be used to act as MR beacons, and some will act as MR eyes too. The ability to use their positions to create MR pyramids multiplies accuracy and offers a multi-base pyramid infrastructure to achieve the goal of MR EYES; there will be no need for several companies to build an infrastructure, as it will be created and used by itself by millions of people and billions of communicating IoT devices. Constant updating of the pyramids will create seamless information and better accuracy for the MR eyes.
Reference is now made to Fig. 6 illustrating another aspect of the present invention. Streaming 3D data while walking.
Thanks to IoT, smart cities and other matrices of sensors and/or the MR beacons, it will be possible for a user to go from place to place non-stop and get the immersive reality-virtuality continuum-based environment relevant to his position by streaming the data while he advances on foot, by car or any other means. This streaming data concept will include all types of data including video, 3D models, logic and code, sound and more.
Given the need for instant feedback, the near-infinite amount of data to be transferred, and the limitations of MR eyes devices like mobile phones or smart glasses, the ability to stream data as needed according to position is critical. The invention will include different data types, priority definitions for any content, understanding and forecasting the walking or driving path, and streaming enough data to fill 360 degrees of content with enough information to cover the line of sight.
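A possible priority function for such position-driven streaming weights each piece of content by its distance and by how far it lies off the user's forecast direction of travel. This is purely illustrative: the function name, the 2D simplification and the weighting are assumptions, not specified by the invention:

```python
import math

def stream_priority(user_pos, heading_rad, content_pos):
    """Lower value = stream sooner. Combines distance to the content with
    its angular offset from the user's forecast direction of travel, so
    nearby content ahead of the user downloads before content behind him."""
    dx = content_pos[0] - user_pos[0]
    dy = content_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0  # content at the user's own position: top priority
    bearing = math.atan2(dy, dx)
    # Wrapped angular difference in [0, pi] between bearing and heading.
    off_axis = abs(math.atan2(math.sin(bearing - heading_rad),
                              math.cos(bearing - heading_rad)))
    return dist * (1.0 + off_axis)
```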
Reference is now made to Fig. 7a illustrating a man using his eyes with glasses, contact lenses or any other direct-vision device or technology to become MR EYES and the peak of the pyramid, looking at the pyramid base.
Reference is now made to Fig. 7b illustrating another point of view, showing the pyramid from all angles and heights; by creating the pyramid and calculating its triangles we can define the virtual environment vectors. Reference is now made to Fig. 7c illustrating mobile MR EYES as another example, and the view of the real-world angle with the layer of virtual environments at a perfect vector from the MR EYES point of view.
Reference is now made to Fig. 7d illustrating reflection of the real environment; the placed immersive reality-virtuality continuum-based environment will appear with the right angle and point of view and will reflect the real world around it.
Reference is now made to Fig. 8 illustrating a method and technology to fill the world and cover the world and map it with digital layers.
Reference is now made to Figs 9(a, b) and 10(a , b) illustrating different embodiments of a method and technology that includes immersive reality- virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object.
Reference is now made to Figs 11(a, b) illustrating different embodiments of the technology applied to rooms and corridors in an exhibition, wherein the rooms and corridors at EVENTS & EXHIBITIONS may be altered to a new mixed reality with MR Beacons by placing immersive reality-virtuality continuum-based environments that appear with the right angle and point of view, reflecting the real world around them. Reference is now made to Figs 12(a, b) illustrating a method and technology to fill, cover and map the world with digital layers. The need is to know what type of content it is, its 3D shape/model, size, angle, position and more.
Reference is now made to Figs 13 to 16 illustrating a method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object.
Input data includes the type of content, its 3D shape/model, size, angle, position and more.
The present invention relates to different devices and technologies ("Mixed Reality beacons") for sensing and calculating distances and positions (like IoT, WiFi, RF, BLE, magnets, infra-red and all other types of technologies that use the same method as the invention). The present invention provides means and methods to present immersive reality-virtuality continuum-based environment at a relatively accurate position in real space, without the need to visually analyze the data, by calculating the relative position of the camera/eyes.
The invention will include mathematical and/or physical and/or logical mechanisms that will enable the visual representation device to get and send its estimated position compared to one, two, three or more MR beacons; the visual representation device will be the "top of the pyramid" and the 3 or 4 MR beacons will be the base, so that a 3-dimensional pyramid shape will be created with 3+1 (visual representation device) or 4+1 corners, and the visual representation device will also create more triangles with every 2 MR beacons to improve accuracy and to understand the 3D shape, but mainly to act as the viewing point between its vector and the pyramid base (Fig. 3). The methodology is based on a "pyramid" concept, but the present invention also encompasses the use of different types of connections and data, or a matrix of infinite devices, to place a virtual object in a physical place without the need for a camera and/or visual analysis. The physical device that is an embodiment of the invention is based on beacons and sensors that may already exist (for IoT, for example); but instead of using them to compare the place of the mobile device or the user to the world and to their location in the world, the invention is the technical means and algorithmic method for calculating the place of the device ("Mixed Reality beacon") compared to the point of view of the visual representation device ("Mixed Reality eyes": smart glasses/camera/screen/mobile etc.), so that the immersive reality-virtuality continuum-based environment will appear on top of the "MR beacon" position, or in an area of a group of "MR beacons".
"MR beacons" will define one XYZ position, vector and rotation compared to the "MR eyes" position and vector, so if the "MR eyes" changes position or rotation, its point of view on the "MR beacon" will be relative to its new vector.
"MR beacon" will transmit radio, magnetic, IR, sound wave or any other type of energy or communication. Two or more different "MR beacons" can then calculate and compare the distance and position of each relative to the others by measuring the speed of feedback, the energy level, or any other means of calculating the distance between each other, with different formulas like the Euclidean distance d = sqrt((x2-x1)^2 + (y2-y1)^2), trilateration, triangulation etc.
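For radio signals, the "speed of feedback" measurement mentioned above reduces to a time-of-flight calculation; a sketch, assuming RF propagation at the speed of light and a known fixed responder delay (the function name and parameters are assumptions):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, for RF signals

def distance_from_round_trip(rtt_seconds, processing_delay_seconds=0.0):
    """Distance between two beacons from a measured round-trip time,
    subtracting any known fixed processing delay at the responder.
    The signal travels the separation twice, hence the division by 2."""
    return SPEED_OF_LIGHT * (rtt_seconds - processing_delay_seconds) / 2.0
```

Acoustic or BLE-RSSI ranging would use the same structure with a different propagation model; only the constant and the error characteristics change.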
"MR eyes" will act as another beacon for the purpose of having another point of reference to calculate positioning, but all positioning calculations of the beacons will be compared to it. "MR eyes" will always be the sharp edge of the pyramid, whether it is a triangle-based pyramid, a square-based pyramid or more. The result is like a spotlight based at the sharpest point of the pyramid, or a virtual camera in 3D space that aims at the center of its base. (Fig. 3)
The use of MR beacons can be done in different ways; they represent an area, a room or an object compared to the MR eyes, and the place where virtual content should be placed. For example, if 4 to 8 MR beacons are placed in 4 to 8 corners of a room, a box area is obtained, inside which the immersive reality-virtuality continuum-based environment "knows" the size and space to cover or fill with content. A group of MR beacons can represent a place, and by defining or placing an immersive reality-virtuality continuum-based environment that represents the real 3D shapes, those shapes can be covered by or mixed with the immersive reality-virtuality continuum-based environment.
One of the main novel and innovative features of the present invention is the concept of automatic distance calibration with a predefined design of one "MR beacon" containing 3 or more transmitters/receivers that create a triangle shape, or 4 units to create a square shape, or more, with a predefined accurate position and distance between them, so that they will act as anchors to calibrate distance against level of energy, time or other means of calculating and calibrating distance and position; e.g. if (TIME) = 2 cm then (TIME) x 5 = 10 cm distance. The invention will include a way to have 3 small "mini MR beacons" installed in one "MR beacon" that may be 1 mm in diameter and up to 5 cm (for the small size), or larger for medium and bigger size "MR beacons" that will be used for larger distances and power.
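The automatic distance calibration against the known anchor spacing can be illustrated with a short sketch (the values mirror the (TIME) = 2 cm example above; the function names are illustrative assumptions):

```python
def calibrate_scale(anchor_time, anchor_distance_cm):
    """The known spacing between the built-in mini MR beacons anchors the
    conversion from a measured time (or energy level) to a distance."""
    return anchor_distance_cm / anchor_time

def time_to_distance(measured_time, scale_cm_per_unit):
    # Apply the calibrated scale to any new measurement.
    return measured_time * scale_cm_per_unit

scale = calibrate_scale(1.0, 2.0)    # one time unit spans the 2 cm anchor
print(time_to_distance(5.0, scale))  # 5x the anchor time -> 10.0 cm
```

Because the anchor spacing is fixed at manufacture, the same measurement channel calibrates itself continuously without any external reference.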
As a result, the "MR eyes" enables the visual representation of the immersive reality-virtuality continuum-based environment and will place inside the immersive reality-virtuality continuum-based environment a 3D virtual digital camera or representation screen such that, when aimed toward the relative location of the "MR beacon" or "MR beacons", it places the relevant virtual objects or content on the exact vector/position and angle, so that the result and the experience of the user will be that the visual representation of the immersive reality-virtuality continuum-based environment is on top of the real environment. The real "MR beacon" will be seamless and as realistic as possible. Continuous tracking will ensure that the visual representation of the immersive reality-virtuality continuum-based environment will stay in the same position in the real environment and will rotate and adjust relative to the "MR eyes" movement and point of view. The result will be a light 360-degree immersive reality-virtuality continuum-based environment fixed to the defined "MR beacon", with no need for visual processing and no loss of sight or understanding as to where the visual representation of the immersive reality-virtuality continuum-based environment should be.
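The aiming of the 3D virtual camera toward the beacon base can be sketched as a look-at computation (a minimal sketch; a real renderer would build a full view matrix from this direction):

```python
import math

def centroid(points):
    # Center of the pyramid base formed by the MR beacons.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def view_direction(eye, beacons):
    """Unit vector from the "MR eyes" position (the pyramid top) toward
    the center of the MR beacon base - the axis along which virtual
    content is rendered over the real scene."""
    c = centroid(beacons)
    v = tuple(c[i] - eye[i] for i in range(3))
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

# Eyes hovering 10 units above a beacon triangle centered at the origin:
print(view_direction((0.0, 0.0, 10.0), [(1, 0, 0), (-1, 1, 0), (0, -1, 0)]))
```

As the "MR eyes" moves, recomputing this direction keeps the virtual content fixed over the beacon base from every point of view.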
This invention will enable defining a 3D mesh and grid of meshes on top of real environments without any visual limitation related to light, distance or quality of camera. It will enable the placing of 3D virtual layers in predefined fixed places, covering even a building or a big object, and looking at it from any distance or angle.
The pyramid base can be horizontal, vertical or at any angle, and the top of the pyramid, represented by the visual representation device's angle, position and vector compared to the pyramid, can be at any angle, position or vector. When the visual representation device is aimed toward the pyramid base, it will also present the immersive reality-virtuality continuum-based environment. When the visual representation device's angle, position and vector do not aim directly toward the pyramid base, it may still show the immersive reality-virtuality continuum-based environment if it is supposed to be presented according to its size, the amount of content etc. The pyramid base size, or the distance between the MR beacons, may be millimeters or miles. A small base may be used for high-resolution positioning, or for small devices that may be included inside the MR beacon pyramid base so that they may be covered with the immersive reality-virtuality continuum-based environment so as to change their visual appearance, and to track and modify that appearance at any angle, rotation or position at which they appear compared to the visual representation device ("MR eyes").
A large pyramid base may be used to place large immersive reality-virtuality continuum-based environments to cover buildings or a large area of land.
A pyramid base and MR beacons may offer an immersive reality-virtuality continuum-based environment to an infinite number of visual representation devices ("MR eyes"), each of which will act as its own pyramid top and will show the user the relevant angle, position and vector of the immersive reality-virtuality continuum-based environment.
The pyramid base can be created from 3, 4 or more separate MR beacons that communicate with each other, or they can be placed in one electronic device that includes the pyramid base and acts as a standalone base for the pyramid.
The pyramid can be created with 3, 4 or more MR beacons placed in one device, or with any 3, 4 or more separate MR beacons; they can also be 3, 4 or more other IoT, BLE or even mobile devices that can transmit their location to any other MR eyes and create a pyramid.
MR beacons can also be MR eyes: if 3, 4 or more MR eyes also act as beacons, they can each act as beacons to each other.
Existing and future infrastructure for IoT, smart cities and any future technology can act as an MR beacon and become part of a pyramid of 3, 4 or more with MR eyes. MR eyes and MR beacons may include other sensors, like a gyro or an accelerometer, to improve their accuracy.
IoT devices are going to cover the world; they can communicate with each other and with other devices, including MR eyes, and AI systems will enable them all to talk to each other and transmit information and location, enabling the invention to create MR beacons out of them and letting MR eyes use them. This invention will include the use of different IoT devices, with or without AI systems, to use their matrix or to create one for mapping all positions against GPS or maps, but also to remember and cover the world/matrix with multiple realities/multiple layers of immersive reality-virtuality continuum-based environment. When an immersive reality-virtuality continuum-based environment is created at a specific place on Earth or any other planet, it will be recorded with all possible data: position, maps, GPS, MR beacons, angles, vectors and rotation, but also all data that will be collected to affect the immersive reality-virtuality continuum-based environment, like pictures and physical objects that can later be transformed into immersive reality-virtuality continuum-based environments too, or at least be known, so that the immersive reality-virtuality continuum-based environment will react to their position or existence.
Indoor navigation beacons, outdoor methods and technologies, BLE or other sensors, and all present or future technologies that enable connectivity, positioning and data infrastructure will become MR BEACONS too, and a main infrastructure for MR EYES. Some of them use AR to show direction or information with different levels of AR UI, and they also use their technology to map the beacons and the rooms or the streets, but they do not use it in a way that locates the beacons and translates the environment into mixed reality, augmented reality or virtual reality on top of and combined with the real world. This ability to use this growing infrastructure as MR BEACONS is a main aspect of the invention.
The use of MR beacons for transforming a room or area into a different environment, with a layer of immersive reality-virtuality continuum-based environment that covers this area, is a core aspect of the invention.
a. MR BEACONS for cinemas - with MR Beacons it is possible to cover the walls of the cinema with a layer of immersive reality-virtuality continuum-based environment at any light level, or even near darkness to complete the movie experience.
b. The present invention includes other technologies to enhance the cinema space with mixed reality, with content parallel to the movie's content and/or changing the cinema's environment to a reality decided before, while and after the movie is presented, such as S.L.A.M., depth cameras, and devices and technology like HoloLens etc.
c. Scenarios:
i. A movie featuring a spaceship can be programmed to change the walls of the cinema to spaceship interior walls with windows that show the stars, space and action outside. ii. Real 3D immersive reality-virtuality continuum-based environments can be added to the cinema space and environment: birds can fly above the heads of the visitors, and the immersive reality-virtuality continuum-based environment can be interacted with, such as by flapping or gesturing the hand to drive a bird away. Characters and objects can fly out of the movie screen toward the people or the floor of the cinema and stay there, and any other interaction or creative idea that combines the immersive reality-virtuality continuum-based environment with the real cinema can be implemented by use of the present invention.
iii. Visitors will be provided with MR glasses based on MR EYES technology instead of current simple 3D glasses. Such MR glasses will dramatically enhance the user experience.
iv. Movie producers will be able to create advanced content and experiences for their films with MR BEACONS that can adjust automatically to the size and/or spatial mapping of each cinema room.
v. Activation of MR content can be done according to time of film, BLE or other types of communication to trigger MR events or any other method of activating it even manually.
vi. Immersive reality-virtuality continuum-based environment actors, characters, art, scenery, objects, content or any other idea can be part of the film, in or out of the screen, as part of the cinema room and/or its new reality created by immersive reality-virtuality continuum-based environment.
vii. Advertisements, promotion activities, sponsorships and other methods of publication with MR beacons and/or mixed reality in cinemas are included.
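Activation keyed to the film's running time (item v above) can be sketched as a simple event timeline (the event names and trigger times below are hypothetical):

```python
def due_events(timeline, film_time_s, fired):
    """Return MR events whose trigger time has been reached and that
    have not fired yet; `timeline` is a list of (time_s, event) pairs."""
    return [e for t, e in timeline if t <= film_time_s and e not in fired]

timeline = [(30.0, "walls_to_spaceship"), (95.0, "birds_fly_out")]
fired = set()
events = due_events(timeline, 60.0, fired)
fired.update(events)
print(events)  # only the 30 s event has been reached at t = 60 s
```

The same loop could be driven by a BLE trigger or a manual cue instead of the film clock, per item v.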
MR BEACONS for theaters, performing halls, live shows, live concerts etc. - the same as cinemas.
MR BEACONS for HOSPITALS (Fig.) - with MR Beacons it is possible to cover the hospital's walls, floor, ceiling and corridors with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the experience.
i. ENHANCE THE PLACE Digital magic will cover the walls, the banners, the floor and the ceiling with interactive rich content.
ii. GUIDED VISIT = EFFECTIVE BUSINESS Visitors will now be guided with an interactive MR host, floating direction arrows and marked destinations.
iii. INTERACTIVE FRIENDS Kids can now have an interactive friend that can talk to them, answer questions & play with them.
iv. ENTERTAINMENT Enable attractions, games, selfie areas and more to help time pass faster.
v. PERSONALIZED INFO Each visitor will enjoy personalized information, host, messages and even atmosphere.
vi. SMART ANALYTICS Learn what worked, the behavior of kids and their requests, visitor behavior and more.
MR BEACONS for EVENTS & EXHIBITIONS - with MR Beacons it is possible to cover the walls, the floor, the ceiling and the corridors with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the experience.
i. ENHANCE THE PLACE Digital magic will cover the walls, the banners, the floor and the ceiling with interactive rich content.
ii. GUIDED VISIT = EFFECTIVE BUSINESS Visitors will now be guided with an interactive MR host, floating direction arrows and marked destinations.
iii. PERSONALIZED INFO Each visitor will enjoy a personalized tour, information, host, messages and even atmosphere.
iv. MARK YOUR TARGETS Visitors will be recognized easily and marked when they are relevant for another visitor or exhibitor.
v. ATTRACT VISITORS Enable attractions, games, selfie areas, guided tours.
vi. SMART ANALYTICS Learn what worked, where visitors went and for how long.
MR BEACONS for RETAIL, STORES, SUPERMARKETS - with MR Beacons it is possible to cover the walls, the floor, the ceiling and the corridors with a layer of immersive reality-virtuality continuum-based environment at any light level, even near darkness, to complete the experience. Advertisements, promotion activities, sponsorships and other methods of publication with MR beacons and/or mixed reality in retail are included.
Pyramid MR BEACONS enable the creation of immersive reality-virtuality continuum-based environments in any visual condition: at night, in darkness, in fog or under any visual limitation. Thanks to the MR BEACONS methodology and its ability to use non-visual sensing mechanisms, it is possible to place an immersive reality-virtuality continuum-based environment in the dark, in bad weather and far away.
Placing the immersive reality-virtuality continuum-based environment in the real environment and assigning the right light level, color and atmosphere will make the immersive reality-virtuality continuum-based environment appear even more realistic.
Shadows will appear on the real environment when needed, according to the light source; shadows need to be presented as part of the immersive reality-virtuality continuum-based environment.
Reflections of the real environment will appear on the immersive reality-virtuality continuum-based environment.
The position of the immersive reality-virtuality continuum-based environment in the real environment needs to appear real and natural in its environment; that includes the right reflections according to the place of the MR eyes, shadows, light etc.
The reflection map on the immersive reality-virtuality continuum-based environment can be (a) pictures collected on the Internet, in the cloud or from search engines that represent the place, and/or (b) AI search engines for pictures relevant to positions, and/or (c) Google Earth or another source of location-based images, and/or (d) live camera feeds from the location, and/or (e) camera feeds from mobile or other devices or IoT that record the location. These sources will create a reflection map and/or color mapping and/or specular mapping and/or any other mapping that covers the immersive reality-virtuality continuum-based environment to create a realistic real-time look and feel: an image, changing images or video that will be reflected on the immersive reality-virtuality continuum-based environment according to its materials, making it look more realistic in the real environment. They can be used as a skybox or in any other way to cover the virtual model. (FIG. 12)
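One possible way to choose among the imagery sources (a)-(e) is a priority fallback, sketched below; the priority order and the source keys are assumptions for illustration, not prescribed by the method:

```python
def pick_reflection_source(available):
    """Pick the first available imagery source for the reflection map,
    preferring live, location-specific feeds over generic search results."""
    priority = ["live_camera_feed", "device_camera_feed",
                "location_imagery", "ai_picture_search", "web_pictures"]
    for kind in priority:
        if available.get(kind):  # skip missing or empty sources
            return kind, available[kind]
    return None, None

kind, images = pick_reflection_source({"web_pictures": ["sky.jpg"],
                                       "location_imagery": ["street.jpg"]})
print(kind)  # location_imagery
```

Whatever source wins, its images would then be baked into the skybox or specular map described above.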
A small MR BEACON pyramid can cover objects: a small device that includes an MR BEACON can be implemented on a product (a milk box, a toy, a chair etc.). It will be possible to look at it with a visual representation device (smart glasses/contact lenses/camera/screen/mobile etc.) ("MR eyes") and see the immersive reality-virtuality continuum-based environment related to it, on it or covering it, and to interact with it, upgrade it and more. By using one or more micro MR beacons inside or with a physical object like a wooden chair, a toy, a building or any other real object, the beacons may or may not include its 3D shape, to be manipulated or to act as a "mask" or ghost virtual shape, so that the virtual content added to them will "know" their shape, where to be hidden and what to cover, and thereby create a realistic composition of the real and the virtual.
A method and technology to define, tag and control a predefined size of space, a 2D land rectangle or a 3D cube area in the real world; this control or tag will enable the controller to create a new virtual real-estate space for the immersive reality-virtuality continuum-based environment. This area may be changed with immersive reality-virtuality continuum-based environment, updated, interacted with and more.
The tagged area cannot be tagged by another user; it may be sold for real or virtual currency, and it may be viewed, visited or interacted with according to the controller's rules. It may charge virtual currency from visitors, which will be paid to the controller. Neighboring cube areas may be tagged by others; all those cube areas will be marked on the real-world map and/or grid and/or streets and will create a second or further realities of the same spots in the real world. Tagging a place and owning it, creating demand for hot places in the real world, earning currency from the traffic of real people into this area, and the viral results of that, are part of the invention.
A method and technology is provided to fill, cover and map the world with digital layers. The present invention includes means to provide characteristics of content, such as 3D shape/model, size, angle, position and more. The present invention includes mechanisms for recording, saving, uploading and downloading 3D and/or digital content and/or interactive logic and/or code and/or any immersive reality-virtuality continuum-based environment data from a cloud server and/or another device (like a mobile phone, laptop or hard drive) and/or any other device that may contain data, and for connecting, placing and showing it in a predefined place/area in the real environment with AR/VR/MR, based on an exact location with MR Beacons and/or GPS and/or any other way of presenting immersive reality-virtuality continuum-based environment. The area is defined as a 2D rectangular area that is part of a predefined grid/matrix on the map. Each rectangle, or Mixed.Place, can be a flat rectangle on the ground or wall and/or a 3D cube area based in the real world. This area may be changed with immersive reality-virtuality continuum-based environment, updated, interacted with and more.
Any user with the specific technology and app on any device, mainly mobile phones, smart glasses or other MR eyes, can now TAG their own area for free or by paying a defined amount of virtual coins or real money, provided the place is free of any other owner. The user marks this Mixed.Place as his own and can then manipulate the virtual/digital reality of his Mixed.Place.
a. The owner can then create content in his area: draw, build, offer messages, interact and create any type of reality he desires so that others can see it too. If he wants to TAG another Mixed.Place, he needs enough coins/money to buy it.
b. Any visitor that interacts with another Mixed.Place earns the owner more coins; owners that get lots of traffic get richer and gain more virtual land, so making a place more relevant to its real place in the world and choosing a heavy-traffic area is better.
c. The idea of enabling users to own virtual Mixed.Places at real, accurate places in the world, show their content, get paid for visits and connect all this data to a main server is an aspect of the present invention. d. When a user tries to TAG a place, he can see the grid and see which places are taken and which are free. Icons or other visual representations will appear to show what types of places surround the user's position in the world, a representation of the traffic around, and more.
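The TAG mechanism over a predefined grid can be sketched as follows (the 10 m cell size, the local metric coordinates and the class names are illustrative assumptions, not part of the method as claimed):

```python
CELL_METERS = 10.0  # assumed resolution of the Mixed.Place grid/matrix

def cell_of(x_m, y_m):
    # Map a position (meters in a local map frame) to its grid cell.
    return (int(x_m // CELL_METERS), int(y_m // CELL_METERS))

class MixedPlaceRegistry:
    """Tracks which grid cells (Mixed.Places) are owned, and by whom."""
    def __init__(self):
        self.owners = {}

    def tag(self, user, x_m, y_m):
        cell = cell_of(x_m, y_m)
        if cell in self.owners:
            return False  # already tagged; cannot be tagged by another user
        self.owners[cell] = user
        return True

reg = MixedPlaceRegistry()
print(reg.tag("alice", 12.0, 3.0))  # True: cell (1, 0) was free
print(reg.tag("bob", 15.0, 8.0))    # False: cell (1, 0) is already owned
```

A production system would key the grid to geodetic coordinates and persist ownership on the main server, but the free/taken check is the same.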
An analytics system will record all types of places, number of visits, interaction, income, demographics and behaviors to create deep learning and data for advertisements or any other use. (FIG 12). a. A method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object that is manufactured and/or created and/or printed with 3D printers, and/or can be implemented inside an existing device, so that when needed the object can transmit this data to replace and/or cover and/or map the physical object with digital layers for use in AR/VR/MR and/or any other immersive reality-virtuality continuum-based environment, and/or just to contain the data for any other usage as part of the physical object. The need is to know the type of content, download/install its 3D shape/model/immersive reality-virtuality continuum-based environment and/or size and/or angle and/or position, and to be able to accurately load and/or cover and/or replace, update and/or change the appearance of the physical shape with digital layers (Fig. 16), place it more realistically in the real world, and use its shape as a mask (depth mask) to show other immersive reality-virtuality continuum-based environment behind it. (Fig. 14)
b. This mechanism includes immersive reality-virtuality continuum-based environment data and/or 3D model and/or any other digital code inside an RF and/or BLE and/or QR code and/or any other data containing or transmitting device that will become part of the physical object.
c. The transmitting device can be inserted or connected to a device in any stage, QR code may help identify the data quicker and also give extra info like the exact direction on the 3D model compared to the device.
d. When activated by a device (mobile, computer, glasses etc.), the data is downloaded from the cloud to the device and/or MR EYES, which can then use it as needed. The data can be unique per device, including a unique ID code, unique color and any other relevant data to describe the physical object better and then represent it accurately in the immersive reality-virtuality continuum-based environment.
e. The data can contain fabric type, flexibility of different parts of the object, mass and any other physical data that needs to be transformed into digital form. f. This mechanism includes recording, updating, saving, uploading and downloading 3D and/or digital content and/or interactive logic and/or code and/or any immersive reality-virtuality continuum-based environment data from a cloud server and/or another device (like a mobile phone, laptop or hard drive) and/or any other device that may contain data, and connecting, placing and showing it in a predefined place/area in the real environment with AR/VR/MR and/or MR Beacons and other ways of presenting immersive reality-virtuality continuum-based environment.
g. Updating the data may be allowed with a code that may be created by the creator of the physical object, or the data can be fixed to avoid data loss or misuse. h. This patent is part of a future where the physical and the digital will be combined; it will be used in any type of object for different usages: furniture, 3D/2D-printed objects, home decorations, statues, art, outdoor fountains, monuments, buildings, the shape of stairs and walls, and more.
i. The flow will be: scanning the object's ID » downloading data from the cloud or a device » placing or replacing the immersive reality-virtuality continuum-based environment on top of, or combined with, the physical object » interacting with it. j. Virtual/digital content will be added to the product, on top of it or around it; the product or its surroundings can be covered with digital materials by covering the product with a depth mask so that content will be hidden behind it.
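The flow in item i can be sketched end to end (a dictionary stands in for the cloud; all names and the data shape are illustrative assumptions):

```python
def present_object(object_id, cloud, scene):
    """Scan ID -> download data from the cloud -> place the content over
    the physical object in the scene -> return it for interaction."""
    data = cloud.get(object_id)  # downloading data from cloud or device
    if data is None:
        return None              # unknown object: nothing to place
    scene[object_id] = data      # placing/replacing the content on top
    return data

cloud = {"chair-42": {"model": "chair.glb", "mask": "depth"}}
scene = {}
print(present_object("chair-42", cloud, scene))
print("chair-42" in scene)  # True: content is now placed in the scene
```

The returned data (model plus depth mask) is what lets later content hide correctly behind the physical object, per item j.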
A method and technology that includes immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object that is manufactured and/or created and/or printed with 3D printers, and/or can be implemented inside an existing device, so that when needed the object can transmit this data to replace and/or cover and/or map the physical object with digital layers. The need is to know what type of content should be downloaded and placed at a predefined real environment.
A method for including immersive reality-virtuality continuum-based environment 3D drawing in real environments that will be represented as mixed reality and is combinable with the real environment. The aforesaid method comprises steps of:
a. drawing 3D pixels in a Mixed.Place
b. using a drawing mechanism in a real 3D environment with the use of a mobile phone: its movement while the screen is touched draws a pixel, and when dragged, multiple 3D pixels are created, thereby creating a line, a shape or any other creation made according to hand movements.
c. interpreting and analyzing movement of the mobile phone compared to the environment by sensing mechanisms including the camera, accelerometer, gyro, MR beacons and other technologies known at the time.
d. placing the drawings with Mixed.Place technology and/or MR beacons and/or image targets.
e. saving drawings to the cloud including connections to the user, placing on earth and all related data with Mixed.Place and MR beacons.
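Steps a-e above can be sketched as a minimal stroke accumulator, where sampled phone positions stand in for the sensed movement (class and method names are illustrative assumptions):

```python
class Doodle3D:
    """Accumulates 3D pixels while the screen is touched; dragging
    appends one pixel per sampled phone position, forming a stroke."""
    def __init__(self):
        self.strokes = []      # finished lines/shapes, ready to save
        self._current = None   # the stroke being drawn right now

    def touch_down(self, position_xyz):
        self._current = [position_xyz]          # first 3D pixel

    def drag(self, position_xyz):
        if self._current is not None:
            self._current.append(position_xyz)  # one pixel per movement sample

    def touch_up(self):
        if self._current:
            self.strokes.append(self._current)  # stroke complete
        self._current = None

d = Doodle3D()
d.touch_down((0, 0, 0)); d.drag((0, 1, 0)); d.drag((0, 2, 0)); d.touch_up()
print(len(d.strokes), len(d.strokes[0]))  # 1 stroke of 3 pixels
```

In the full method, each sampled position would come from the camera/accelerometer/gyro/MR-beacon fusion of step c, and the finished strokes would be saved to the cloud as in step e.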
A method for drawing a 2D doodle and transforming it into a 3D model by digitally recognizing the 2D drawing (optionally with AI) and tagging it as a category representative of an archetype object, such as a dog, a flower or a chair; drawing said drawings with 3D tools in the real 3D environment; and modifying the 3D model with or without animation; said 3D models optionally stored in a bank of models, categorized, defined and superimposable as a virtual environment on top of the real one. This method and technology can be used in all virtual environments - augmented reality, virtual reality and mixed reality. Drawing can be done by mobile as described, but also with controllers, hand gestures, a digital pen, wearable devices and more. The flow is: drawing - recognition - making it 3D - placing it on Mixed.Places and making it stay there, enabling anyone to see and interact with it, whether it is a drawing or a 3D model made out of the drawing.

Claims

CLAIMS
What is claimed is:
1. A system for providing a mixed reality experience; said system comprising:
a. at least 3 mixed reality (MR) beacons positioned at predetermined distances from each other; each MR beacon further configured to define individual positions thereof and relative position thereof on the basis of cosines;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
2. The system according to claim 1, wherein a mutual arrangement of said MR beacons and said MR eyes is calibrated by means of measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
3. The system according to claim 1, wherein mixed reality experience comprises at least one element selected from the group consisting of cinema experience, theatre experience, indoor environment, outdoor environment and any combination thereof.
4. The system according to claim 1, wherein said mutual arrangement is calculated by means of triangulation.
5. The system according to claim 4, wherein said triangulation is performed by an ear clipping method or a monotone polygon method.
6. The system according to claim 1, wherein at least one of said MR beacon or MR eyes is carried by an internet-of-things article.
7. The system according to claim 1 comprises at least one transponder attachable or embeddable into an object of interest; said at least one transponder is configured for determining a spatial position thereof, receiving data and commands from surrounding said MR beacon or MR eyes and transmitting data comprising said spatial position of said transponder and details of said object of interest.
8. The system according to claim 7, wherein said object of interest is selected from the group consisting of a piece of houseware, a piece of furniture, an art object and any combination thereof.
9. The system according to claim 7, wherein said at least one transponder is a sticker-shaped attachable circuitry.
10. The system according to claim 1, wherein MR eyes functions as MR beacon.
11. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
12. A system for providing a mixed reality experience; said system comprising:
a. at least 3 MR beacons; each MR beacon further comprising at least three transmitters configured for calibrating an individual position of said MR beacon;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
13. A system for providing a mixed reality experience; said system comprising:
a. a plurality of articles interconnected therebetween by an ad hoc IoT network and functioning as MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
14. A system for providing a mixed reality experience; said system comprising:
a. at least 3 MR beacons; each MR beacon further comprising at least three transmitters configured for calibrating an individual position of said MR beacon;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
d. a server configured for interrogating said MR beacons, collecting positioning data thereof and providing said positioning data to said MR eyes;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
15. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising image-processing-free means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
16. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit;
further wherein said MR eyes are configured for displaying a predetermined piece of said reality-virtuality continuum-based environment in response to positioning said MR eyes in proximity with a predetermined beacon.
17. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an indoor space; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
18. A system for orientation in conditions of reduced visibility; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
19. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit;
further wherein multimedia content relevant to said real environment is streamed to said MR eyes in real time.
20. A system for providing a mixed reality experience at an event at a public place; said system comprising:
a. a plurality of MR beacons distributed over said public place; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining an image of said public place with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
21. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment; said source is selected from the group consisting of Internet content, a result obtained by an artificial intelligence search engine, content provided by an internet geographic service, content fed by a live camera, content fed by an internet-of-things network and any combination thereof;
wherein said MR eyes unit is configured for combining an image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit.
22. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons; at least one of said plurality of MR beacons is embedded into a consumer good;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of an immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining an image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit, such that said MR eyes unit displays corresponding content relevant to said consumer good and instructions for navigation thereto.
23. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an area of mixed reality; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of a dynamic immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining an image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit;
wherein said MR eyes unit comprises a user interface enabling a user to tag a location of said real environment; said tagging is selected from the group consisting of marking an area of interest, banning access to said area of interest, permitting paid access and any combination thereof.
24. A system for providing a mixed reality experience; said system comprising:
a. a plurality of MR beacons distributed over an indoor space; each MR beacon further configured to define an individual position thereof and position thereof relative to other beacons;
b. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
c. a source of a dynamic immersive reality-virtuality continuum-based environment;
wherein said MR eyes unit is configured for combining an image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit;
wherein said system further comprises a plurality of stickers attachable within said indoor space; said stickers are configured for providing data relevant to locations thereof; said MR eyes unit is configured for scanning said stickers and collecting said data.
25. A method of providing a mixed reality experience; said method comprising the steps of:
a. providing a system further comprising
i. at least 3 mixed reality (MR) beacons; each MR beacon configured to define an individual position thereof and a position thereof relative to other beacons;
ii. at least one MR eyes unit comprising means for calculating a spatial position thereof relative to said MR beacons;
iii. a camera configured for capturing an image of a real environment;
iv. a source of an immersive reality-virtuality continuum-based environment; said camera being orientable synchronically with said MR eyes unit;
wherein said MR eyes unit is configured for combining said image of a real environment with said immersive reality-virtuality continuum-based environment oriented synchronically with said MR eyes unit;
b. triangulating said MR eyes relative to said MR beacons;
c. calculating said spatial position of said MR eyes relative to said beacons;
d. capturing a real environment observable by a user of said MR eyes by said camera;
e. providing an immersive reality-virtuality continuum-based content corresponding to said real environment;
f. combining said real environment and corresponding immersive reality-virtuality continuum-based content.
26. The method according to claim 25, wherein said step of triangulating said MR eyes relative to said MR beacons comprises measuring an arrival time of a signal emitted by at least one transponder or a level of energy of said signal.
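The positioning step of claim 26 can be illustrated in code: a received-signal-energy reading is converted to a distance estimate (here via the textbook log-distance path-loss model), and three or more beacon distances are combined into a position by linearized least squares. This is a minimal sketch only; the function names, the `tx_power_dbm`/`path_loss_exp` constants and the 2D simplification are illustrative assumptions, not part of the application.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from received signal strength using the
    log-distance path-loss model. tx_power_dbm is the RSSI expected at 1 m;
    both parameters are assumed values needing per-beacon calibration."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """Solve for the 2D position of the MR eyes unit from >= 3 beacon
    positions and distance estimates. Subtracting the first circle
    equation from the others removes the quadratic terms, leaving a
    linear system solved in the least-squares sense."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate(beacons, dists))  # recovers approximately (4, 3)
```

With a fourth beacon the same least-squares solve extends to 3D; time-of-arrival measurements would feed the same `trilaterate` after multiplying by the propagation speed.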
27. The method according to claim 25, wherein mixed reality experience comprises at least one element selected from the group consisting of cinema experience, theatre experience, indoor environment, outdoor environment and any combination thereof.
28. The method according to claim 25, wherein said step of triangulating said MR eyes relative to said MR beacons is performed by an ear clipping method or a monotone polygon method.
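Ear clipping, named in claim 28, is the classic polygon-triangulation routine: repeatedly remove a convex vertex whose triangle contains no other vertex. The sketch below handles a simple counter-clockwise polygon; it is a generic textbook implementation, not code from the application.

```python
def cross(o, a, b):
    # z-component of (a-o) x (b-o); > 0 means a left (CCW) turn.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def point_in_triangle(p, a, b, c):
    # p lies inside CCW triangle abc iff it is left of (or on) every edge.
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(polygon):
    """Triangulate a simple CCW polygon by clipping 'ears': convex
    vertices whose triangle contains no other polygon vertex."""
    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:            # reflex or degenerate vertex
                continue
            if any(point_in_triangle(p, a, b, c)
                   for p in verts if p not in (a, b, c)):
                continue                       # another vertex inside: not an ear
            triangles.append((a, b, c))
            del verts[i]                       # clip the ear
            break
        else:
            break                              # degenerate input; bail out
    triangles.append(tuple(verts))
    return triangles

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(ear_clip(square))  # two triangles covering the square
```

Ear clipping runs in O(n^2); the monotone-polygon method mentioned in the same claim achieves O(n log n) at the cost of a more involved sweep-line decomposition.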
29. The method according to claim 25, wherein at least one of said MR beacons or MR eyes is carried by an internet-of-things article.
30. A method for providing immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object, comprising the steps of:
a. loading, transmitting or scanning the object's ID;
b. downloading data from a cloud or device;
c. placing or replacing the immersive reality-virtuality continuum-based environment on top of, or combined with, the physical object;
d. interacting with said physical object;
e. adding virtual/digital content to said object by covering the product with a depth mask so that content can be hidden behind it.
31. The method of claim 30, wherein said data is provided by means of mobile phones, smart glasses or other MR eyes.
32. The method of claim 31, wherein said method includes uniquely tagging an area that is free, or defined as free, of any other owner, and defining a Mixed Place virtual/digital reality of said Mixed Place manipulable exclusively by the tagger; said manipulation comprising adding content in said tagged area (a real-world area of certain size), drawing, building, messaging, interacting, transmitting and creating any predetermined reality.
33. The method of claim 31, wherein said method comprises steps of authorizing, verifying and certifying said tagged Mixed Places; further wherein said tagged Mixed Places are exchangeable between users upon release of certain permissions.
34. The method of claim 31, wherein said tags are represented on a virtual grid representing occupied and free Mixed Places.
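One way to picture the virtual grid of claim 34 is a sparse map from grid cells to owners, where a cell is occupied exactly when a tagger owns it. The class, field names and 10 m cell size below are invented for illustration and claim no correspondence to the application's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MixedPlaceGrid:
    """Sparse virtual grid of Mixed Places: a cell is occupied iff some
    tagger owns it. Cell size and (col, row) keying are assumptions."""
    cell_size_m: float = 10.0
    owners: dict = field(default_factory=dict)  # (col, row) -> owner id

    def cell_of(self, x_m, y_m):
        # Map a world coordinate (metres) to its grid cell.
        return (int(x_m // self.cell_size_m), int(y_m // self.cell_size_m))

    def is_free(self, x_m, y_m):
        return self.cell_of(x_m, y_m) not in self.owners

    def tag(self, x_m, y_m, owner):
        # Only a free cell may be uniquely tagged; tagging grants the
        # tagger exclusive manipulation rights over that Mixed Place.
        cell = self.cell_of(x_m, y_m)
        if cell in self.owners:
            raise ValueError(f"cell {cell} already owned by {self.owners[cell]}")
        self.owners[cell] = owner
        return cell

grid = MixedPlaceGrid()
grid.tag(12.0, 7.0, "alice")
print(grid.is_free(15.0, 3.0))  # False: same 10 m cell as alice's tag
print(grid.is_free(25.0, 3.0))  # True: neighbouring cell is still free
```

Exchanging a tagged Mixed Place between users (claim 33) would then reduce to reassigning the `owners` entry after the permission checks pass.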
35. The method of claim 30, comprising steps of recording data selected from the group consisting of places, visits, interactions, income, demographics and behaviors, said data being provided to a deep learning system.
36. A system including immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object that is manufactured and/or created and/or printed with 3D printers, and/or that can be implemented inside an existing device, such that said object can transmit this data to replace and/or cover and/or map the physical object with digital layers.
37. The system of claim 36, comprising means for downloading content; downloading and installing 3D shape/model/immersive reality-virtuality continuum-based environment data, size data, angle and position; and loading, covering, replacing, updating or changing a physical shape with digital layers, placing it in the real world and using said shape as a depth mask for showing another immersive reality-virtuality continuum-based environment.
38. The system of claim 36, including immersive reality-virtuality continuum-based environment data and/or a 3D model and/or any other digital code inside an RF and/or BLE and/or QR code and/or any other data-containing or transmitting device that will become part of said physical object.
39. The system of claim 38, wherein said transmitting device can be inserted into or connected to a device at any stage.
40. The system of claim 36, comprising a QR code for identifying data more quickly and providing additional information such as the exact direction of the 3D model in reference to said device.
41. The system of claim 36, wherein said system comprises means such that, when activated by a device such as a mobile phone, computer or glasses, said data is downloaded from the cloud to said device and/or MR eyes for storage or use on demand.
42. The system of claim 36, wherein said data can be unique per unique device, including a unique ID code, a unique color and any other relevant data to describe said physical object better and then represent it accurately in an immersive reality-virtuality continuum-based environment.
43. The system of claim 36, wherein said data comprises material type, flexibility of different parts of said object, mass and any combination thereof.
44. The system of claim 33, comprising modules for recording, updating, saving, uploading and downloading 3D and/or digital content and/or interactive logic and/or code and/or any immersive reality-virtuality continuum-based environment data from a cloud server and/or an additional device selected from the group consisting of a mobile phone, a laptop, a hard drive and any combination thereof, and/or a device containing data; said modules for connecting, placing and presenting in a predefined place/area in a real environment with AR/VR/MR and/or MR beacons for presenting said immersive reality-virtuality continuum-based environment.
45. A module for including immersive reality-virtuality continuum-based environment data and/or 3D model data and/or code data within any physical object of manufacture, comprising computer-implemented instructions for causing said object to transmit data to replace and/or cover and/or map said physical object with digital layers.
46. A method for including immersive reality-virtuality continuum-based environment 3D drawings in real environments, represented as mixed reality and combinable with the real environment, by drawing 3D pixels in a mixed place; said method comprising the steps of:
a. using a drawing mechanism in a real 3D environment with a mobile phone: while the screen is touched, drawing a pixel, and when dragged, creating multiple 3D pixels, thereby creating a line, a shape or any other creation made according to hand movements;
b. interpreting and analyzing movement of the mobile phone relative to the environment by sensing mechanisms including the camera, accelerometer, gyroscope, MR beacons and other conventional technologies;
c. placing the drawings as MixedPlace technology and/or MR beacons and/or image targets; and
d. saving the drawings to the cloud, including connections to the user, placement on Earth and all related data, with Mixed.Place and MR beacons.
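The drawing step of claim 46(a) — a screen touch becoming a 3D pixel in the real environment — can be sketched as an unprojection: invert the camera intrinsics to get a viewing ray, pick a point at a fixed drawing depth, and transform it by the phone pose (which step (b) would obtain from the accelerometer, gyroscope and MR beacons). The intrinsics, pose and 0.5 m depth below are placeholder assumptions.

```python
import numpy as np

def touch_to_world(u, v, K, cam_to_world, depth_m=0.5):
    """Unproject a screen touch (u, v) in pixels to a 3D point at a fixed
    distance in front of the camera, then move it into world coordinates.
    K is the 3x3 pinhole intrinsics matrix; cam_to_world is a 4x4 pose
    (e.g. fused from accelerometer, gyroscope and MR-beacon positioning)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray_cam / ray_cam[2] * depth_m        # point at depth_m on the ray
    p_world = cam_to_world @ np.append(p_cam, 1.0)
    return p_world[:3]

# Illustrative intrinsics and an identity pose (camera at the origin).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
pose = np.eye(4)

# Dragging across the screen leaves a trail of 3D pixels half a metre ahead.
stroke = [touch_to_world(u, 240.0, K, pose) for u in (300.0, 320.0, 340.0)]
print(stroke[1])  # centre-screen touch maps to [0, 0, 0.5]
```

Saving the stroke (step d) would then amount to uploading the list of world points together with the user and Mixed.Place/MR-beacon anchoring data.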
47. A method for drawing a 2D doodle and transforming said drawing into a 3D model by digitally recognizing the 2D drawing, optionally with AI, and tagging it under a category representative of an archetype object such as a dog, a flower or a chair; drawing said drawings with 3D tools on the real 3D environment; and modifying the 3D model with or without animation; said 3D models optionally stored in a bank of models, categorized and defined, and superimposable as a virtual environment on top of the real one, wherein said environment is augmented reality, virtual reality or mixed reality.
48. The method of claim 47, wherein said drawing is done by a device selected from the group consisting of a mobile phone, a controller, a hand gesture, a digital pen, a wearable device and any combination thereof.
49. The method of claim 47, wherein said drawing is 3D and placed on mixed.places enabled for interaction, whether it is a drawing or a 3D model made from said drawing.
PCT/IL2018/050813 2017-07-20 2018-07-22 A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF WO2019016820A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762534697P 2017-07-20 2017-07-20
US62/534,697 2017-07-20

Publications (1)

Publication Number Publication Date
WO2019016820A1 true WO2019016820A1 (en) 2019-01-24

Family

ID=65016558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2018/050813 WO2019016820A1 (en) 2017-07-20 2018-07-22 A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF

Country Status (1)

Country Link
WO (1) WO2019016820A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140225916A1 (en) * 2013-02-14 2014-08-14 Research In Motion Limited Augmented reality system with encoding beacons
US20140285519A1 (en) * 2013-03-22 2014-09-25 Nokia Corporation Method and apparatus for providing local synchronization of information for augmented reality objects
EP3062253A1 (en) * 2015-02-25 2016-08-31 BAE Systems PLC Method and apparatus for data verification in a mixed reality system
WO2018035362A1 (en) * 2016-08-19 2018-02-22 Pcms Holdings, Inc. System and methods for communications in mixed reality applications


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10681183B2 (en) 2014-05-28 2020-06-09 Alexander Hertel Platform for constructing and consuming realm and object featured clouds
US11368557B2 (en) 2014-05-28 2022-06-21 Alexander Hertel Platform for constructing and consuming realm and object feature clouds
US11729245B2 (en) 2014-05-28 2023-08-15 Alexander Hertel Platform for constructing and consuming realm and object feature clouds
CN111625093A (en) * 2020-05-19 2020-09-04 昆明埃舍尔科技有限公司 Dynamic scheduling display method of massive digital point cloud data in MR glasses
CN113384868A (en) * 2021-06-25 2021-09-14 歌尔光学科技有限公司 Hand model establishing method and device, electronic equipment and storage medium
CN113566829A (en) * 2021-07-19 2021-10-29 上海极赫信息技术有限公司 High-precision positioning technology-based mixed reality navigation method and system and MR (magnetic resonance) equipment
FR3135856A1 (en) * 2022-05-23 2023-11-24 Ary METHOD FOR ANCHORING A VIRTUAL OBJECT, ASSOCIATED SYSTEM
WO2023227611A1 (en) * 2022-05-23 2023-11-30 Ary Method and associated system for anchoring a virtual object

Similar Documents

Publication Publication Date Title
US12008719B2 (en) Wide area augmented reality location-based services
US11120628B2 (en) Systems and methods for augmented reality representations of networks
US10275945B2 (en) Measuring dimension of object through visual odometry
WO2019016820A1 (en) A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF
CA2949543C (en) Platform for constructing and consuming realm and object feature clouds
CN104484327A (en) Project environment display method
CN105051648A (en) Mixed reality filtering
US20210038975A1 (en) Calibration to be used in an augmented reality method and system
CN106464773A (en) Augmented reality apparatus and method
CN112684893A (en) Information display method and device, electronic equipment and storage medium
CN117419713A (en) Navigation method based on augmented reality, computing device and storage medium
US11189097B2 (en) Simulated reality transition element location
Hew et al. Markerless Augmented Reality for iOS Platform: A University Navigational System
Woodward et al. Case Digitalo-A range of virtual and augmented reality solutions in construction application
US11568616B1 (en) Display apparatuses and methods for facilitating location-based virtual content
US20240194093A1 (en) Information processing device, information processing method, and program
Ghimire Augmented Reality in Historical Museum: A case study at Fjell Fortress
Ali et al. Design an augmented reality application for Android smart phones
Blanco Pons Analysis and development of augmented reality applications for the dissemination of cultural heritage
KR20230166760A (en) Method for generating metaverse space of hyper-personalized design and a metaverse system for performing the same
Saha et al. CHAPTER THIRTEEN Indoor Navigation System using Augmented Reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18836079

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18836079

Country of ref document: EP

Kind code of ref document: A1