EP2850609A1 - A system worn by a moving user for fully augmenting reality by anchoring virtual objects - Google Patents
A system worn by a moving user for fully augmenting reality by anchoring virtual objects
- Publication number
- EP2850609A1 (application number EP12876696.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- virtual
- objects
- real
- world
- real world
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
- G09G3/003—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/02—Networking aspects
- G09G2370/022—Centralised management of display operation, e.g. in a server instead of locally
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/12—Use of DVI or HDMI protocol in interfaces along the display data pipeline
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/16—Use of wireless transmission of display information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6812—Motion detection based on additional sensors, e.g. acceleration sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N5/2723—Insertion of virtual advertisement; Replacing advertisements physical present in the scene by virtual advertisement
Definitions
- the present invention generally relates to augmented reality systems, and more particularly to a system that visually anchors virtual objects to real-world objects, functionally and behaviorally, to create an integrated, comprehensive, rational augmented reality environment. The virtual objects hold a fixed position, and the user/observer can move around without loss of context, including the relative position, 3D perspective and viewing angle of the virtual objects in the real world, and the interaction of the virtual objects with the real world and with other virtual objects. Interaction with virtual objects by multiple users is also enabled, where each of the multiple users has been provided with the system and the systems are in communication with each other.
- Augmented reality is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
- AR is related to the more general concept of mediated reality (MR), in which a view of reality is modified by a computer, rather than merely augmented.
- the technology functions by enhancing the current perception of reality.
- virtual reality replaces the real world with a completely simulated one.
- Augmentation is conventionally in real-time and in semantic context with environmental elements, such as current and external sports scores on TV during a match.
- with advanced AR technology (e.g., adding computer vision and object recognition), the information about the user's surrounding real world becomes interactive and digitally manipulable.
- Artificial information about the environment and its objects can be overlaid on the real world.
- AR technology includes head-mounted displays and virtual retinal displays for visualization and building of controlled environments, enabled by sensors and actuators.
- See-through glasses are an existing technology comprising an electro-optic device and a pair of transparent glasses that project a given display screen into the user's eyesight, as if it were a real display screen in the real world with a focal length at infinity, so the displayed images can be seen even though the screen is positioned in close proximity to the eyes. Since see-through glasses project to each eye separately, the displayed image can be a very realistic three-dimensional hologram. Because black does not return light, the color black is seen as transparent in the see-through glasses, so objects within a black screen are isolated and seen normally, as they exist.
- Total Immersion is an augmented reality company whose D'Fusion technology uses the black-frame feature to merge real-time interactive 3D graphics into live video sources.
- CV computer vision
- a set of logic rules dedicated to processing computer generated images according to their defined nature of interaction, including positioning, perspective, functionality and behavior.
- SDK software development kit
- the source image is the computer generated image (CGI), as opposed to the image displayed on the black frame.
- the video image received by the cameras installed on the see-through device is the reference of the real world from which the software identifies real objects as markers, using computer vision applications and respective algorithms, in order to "hard anchor" (i.e., create a tight relative connection between the virtual object and said real object) the CGI to the real world.
- the source image (CGI) is to be distinguished from the reference image for anchoring using computer vision
- Soft anchoring is anchoring to a certain point in space, not to a specific object in the real world, independent of any modifications and circumstances in the real world.
- Hard anchoring is anchoring to objects in the real world, pin-pointed by the markers, following their modifications and transformations (perspective, broken, etc.).
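- The distinction can be made concrete with a minimal sketch (hypothetical 4x4 pose matrices and offsets, not taken from the patent): a soft-anchored object keeps one constant world pose, while a hard-anchored object is re-derived every frame from the tracked pose of its real-world marker.

```python
import numpy as np

def make_pose(R=np.eye(3), t=np.zeros(3)):
    """Build a 4x4 homogeneous pose from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Soft anchoring: the pose is fixed once, at a point in space,
# regardless of what happens to objects in the real world.
soft_anchor_pose = make_pose(t=np.array([0.0, 1.5, -2.0]))

def hard_anchored_pose(marker_pose_world, offset_from_marker):
    """Hard anchoring: recompute the virtual object's pose every frame
    from the marker's tracked pose, so the object follows the real one."""
    return marker_pose_world @ offset_from_marker

# Example: the real table (the marker) is turned 90 degrees; the virtual
# cup, offset to sit on the table top, turns with it.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
table_pose = make_pose(R=np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]))
cup_offset = make_pose(t=np.array([0.1, 0.75, 0.0]))
print(hard_anchored_pose(table_pose, cup_offset))
```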
- 3D image view and anchoring: if the viewer is watching a virtual sculpture while walking around it, he will see the sculpture from different angles, according to his viewing angle in relation to the sculpture.
- a head-mounted display comprising see-through glasses, a virtual retina display device or any other device or technology which allows a computer generated image (CGI) to be superimposed on a real-world view.
- the head-mounted display is worn on the head or as part of a helmet that has a small display optic in front of one eye (monocular HMD) or each eye (binocular HMD).
- a typical HMD has either one or two small displays with lenses and semi-transparent mirrors embedded in a helmet, eye-glasses (also known as data glasses) or visor.
- the display units are miniaturized and may include cathode ray tube (CRT), liquid crystal display (LCD), liquid crystal on silicon (LCoS), or organic light-emitting diode (OLED) technology.
- multiple micro-displays are implemented to increase total resolution and field of view.
- the device enables a computer generated image (CGI) to be superimposed on a real-world view.
- Combining real-world view with CGI is accomplished by projecting the CGI through a partially reflective mirror and viewing the real world directly. This method is often called Optical See-Through.
- Combining real-world view with CGI can also be done electronically by accepting video from a camera and mixing it electronically with CGI. This method is often called Video See-Through.
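- A minimal sketch of the Video See-Through mix, assuming numpy image arrays and treating (near-)black CGI pixels as transparent, in the same spirit as the optical path where black returns no light:

```python
import numpy as np

def composite_video_see_through(camera_frame, cgi_frame, threshold=8):
    """Electronically mix a CGI layer over a camera frame.
    CGI pixels that are (near-)black are treated as transparent,
    mirroring how black projects as 'nothing' on an optical
    see-through display."""
    opaque = cgi_frame.max(axis=2) > threshold   # non-black CGI pixels
    out = camera_frame.copy()
    out[opaque] = cgi_frame[opaque]
    return out

# Toy example: 4x4 RGB frames.
camera = np.full((4, 4, 3), 100, dtype=np.uint8)       # grey "real world"
cgi = np.zeros((4, 4, 3), dtype=np.uint8)              # black = transparent
cgi[1, 1] = (255, 0, 0)                                # one red virtual pixel
print(composite_video_see_through(camera, cgi)[1, 1])  # -> [255 0 0]
```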
- a virtual retinal display also known as a retinal scan display (RSD) or retinal projector (RP) is a display technology that draws a raster display, typical of television, directly onto the retina of the user's eyes.
- the user sees what appears to be a conventional display floating in space in front of his eyes.
- the present invention provides a computer generated image integrated on a real-world view, seen by the observer on the see-through display glasses he is wearing at the time.
- the virtual objects will be seen by the observer, all around him, as if they were real objects in the real world, displayed on the glasses he is wearing at the time and seen in his eyesight only.
- the method involves defining each virtual object relevant to the specific application, and the nature of its interaction with the rest of the world.
- the solution software utilizes the data input in order to anchor the virtual objects as seen through the glasses, to the real world, using different approaches.
- An inertial measurement unit (IMU) is an electronic device that measures and reports on a craft's velocity, orientation, and gravitational forces, using a combination of accelerometers and gyroscopes.
- IMUs are typically used to maneuver aircraft, including unmanned aerial vehicles (UAVs), among many others, and spacecraft, including shuttles, satellites and landers.
- IMU-enabled GPS devices: an IMU allows a GPS receiver to work when GPS signals are unavailable, such as in tunnels, inside buildings, or when electronic interference is present.
- a wireless IMU is known as a WIMU.
- the IMU is the main component of inertial navigation systems used in air, space and water vehicles, and in guided missiles, among others.
- the data collected from the IMU's sensors allow computer tracking of a vehicle's position, using dead reckoning.
- an IMU detects the current rate of acceleration using accelerometers, and detects changes in rotational attributes such as pitch, roll and yaw using one or more gyroscopes.
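- As a minimal illustration of dead reckoning from gyroscope data (a first-order sketch; real systems use quaternion integration and sensor fusion to limit drift):

```python
import numpy as np

def integrate_gyro(orientation, gyro_rad_s, dt):
    """Dead-reckon orientation by integrating angular rate.
    orientation: 3x3 rotation matrix (body -> world).
    gyro_rad_s:  (roll, pitch, yaw) rates from the gyroscope in rad/s.
    First-order (small-angle) update; the result slowly loses
    orthonormality, which real systems periodically correct."""
    wx, wy, wz = gyro_rad_s * dt
    # Skew-symmetric matrix of the incremental rotation vector.
    omega = np.array([[0.0, -wz, wy],
                      [wz, 0.0, -wx],
                      [-wy, wx, 0.0]])
    return orientation @ (np.eye(3) + omega)

R = np.eye(3)
for _ in range(100):  # 1 s of a steady 0.1 rad/s yaw, sampled at 100 Hz
    R = integrate_gyro(R, np.array([0.0, 0.0, 0.1]), 0.01)
print(R)  # approximately a 0.1 rad rotation about the vertical axis
```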
- the source image size is reduced and projected into a full-size black frame in which the source image floats.
- the floating aspect of the black frame may be defined as an aspect of a "black hydraulic frame.”
- the black frame containing the source image is transmitted to the glasses projector for display. Because black does not return light, the black color is seen as transparent in the glasses; objects within a black screen are isolated and seen as they are. That is, the observer does not see the black frame but only the source image.
- the source image is inserted in the black frame with inverse movement to the head movement, according to the IMU data input, using a compensation calculation formula.
- the source image, as seen through the glasses, is therefore steady at a certain point in space within the user's field of view.
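- A minimal sketch of that compensation (the pixels-per-radian constant and image sizes are hypothetical stand-ins for the real optics calibration): the reduced source image is re-positioned inside the black frame opposite to the measured head rotation, so it appears fixed in space.

```python
import numpy as np

def place_in_black_frame(source_img, frame_shape, yaw_rad, pitch_rad,
                         px_per_rad=800.0):
    """Render the reduced source image inside a full-size black frame,
    displaced opposite to the measured head rotation so the image
    appears fixed in space. px_per_rad maps rotation to display pixels
    (a stand-in for the real calibration of the optics)."""
    H, W = frame_shape[:2]
    h, w = source_img.shape[:2]
    # Inverse motion: head turns right -> image shifts left in the frame.
    dx = int(round(-yaw_rad * px_per_rad))
    dy = int(round(pitch_rad * px_per_rad))
    x0 = int(np.clip((W - w) // 2 + dx, 0, W - w))
    y0 = int(np.clip((H - h) // 2 + dy, 0, H - h))
    frame = np.zeros((H, W, 3), dtype=source_img.dtype)  # black = transparent
    frame[y0:y0 + h, x0:x0 + w] = source_img
    return frame

cgi = np.full((90, 160, 3), 255, dtype=np.uint8)     # small white source image
frame = place_in_black_frame(cgi, (720, 1280, 3), yaw_rad=0.05, pitch_rad=0.0)
```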
- a virtual object is anchored to the real world, visually and functionally, using computer vision applications and a video camera integrated on the glasses.
- the combination of the cameras and the IMU enables hard anchoring of virtual objects to real objects while the viewer is on the move, since the system can separate the viewer's movements from the objects' movements.
- a recent CV development has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics and learning theory.
- computer vision is concerned with the theory behind artificial systems that extract information from images.
- the image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.
- Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation and image restoration.
- the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.
- CV is used to recognize structures and objects in the real world. Using a set of rules regarding the characteristics of each and every virtual object, and concerning its visual and functional behavior within the real world, including different view angles and perspective, markers are created in the real world and the virtual objects are anchored to them.
- a virtual tea cup cannot float in the air and should be bound to a solid surface. If the virtual tea cup sits on a real table, and somebody turns the real table around, the tea cup should turn around accordingly relative to the table, and should be seen as a 3D tea cup turning around from all perspectives.
- a user is looking at a particular field of vision. His head is naturally moving, smoothly.
- the virtual objects seen on the certain field of vision are steady, as if they were real objects. This is because they (the source CGI images) are displayed, and float, within a black frame.
- the user is looking around. He sees a table. On the table there is a virtual cup of tea. This is because the algorithm recognizes that this specific cup of tea should be on that specific table, with the very same geo-position. When he looks around, the algorithm recognizes, through computer vision, this particular table (as a marker) and displays this particular cup of tea (this CGI).
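- A minimal sketch of such marker-driven recognition, using OpenCV's contrib ArUco module (pre-4.7 API) as a stand-in for the computer-vision marker system the patent leaves unspecified; the camera intrinsics and marker id are placeholder values:

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system would use calibrated values.
CAMERA_MATRIX = np.array([[800.0, 0.0, 640.0],
                          [0.0, 800.0, 360.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_SIZE = 0.05  # marker edge length in metres
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# 3D corners of the marker in its own frame (z = 0 plane).
half = MARKER_SIZE / 2
OBJ_POINTS = np.array([[-half, half, 0], [half, half, 0],
                       [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

def table_marker_pose(frame_bgr, table_marker_id=7):
    """Find the table's marker in the camera frame and return its
    (rvec, tvec) pose, i.e. the anchor for the virtual tea cup."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
    if ids is None:
        return None  # table not in view: nothing to anchor to
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if marker_id == table_marker_id:
            ok, rvec, tvec = cv2.solvePnP(OBJ_POINTS, marker_corners[0],
                                          CAMERA_MATRIX, DIST_COEFFS)
            if ok:
                return rvec, tvec  # render the cup at this pose + offset
    return None
```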
- the present invention relies on 2D information, 3D static information and dynamic computer vision (CV).
- CV is the inverse of computer graphics. While computer graphics produces image data from 3D models, CV often produces 3D models from image data.
- in binocular CV, two cameras are used together. Having two cameras confers advantages over having one, notably the recovery of depth by triangulating the same point seen from two viewpoints.
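- As a minimal illustration of that depth advantage (a rectified-pair sketch with illustrative numbers, e.g. for finding how far away the table surface is):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo relation: Z = f * B / d.
    disparity_px: horizontal pixel shift of the same point
                  between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6.5 cm camera spacing (roughly the
# eye spacing on glasses), a feature 20 px apart between the images.
print(depth_from_disparity(20, 800.0, 0.065))  # -> 2.6 m
```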
- Optical character recognition (OCR) is the mechanical or electronic conversion of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used as a form of data entry from an original paper data source, whether documents, sales receipts, mail or any number of other printed records. It is crucial to the computerization of printed texts so that they can be electronically searched, stored more compactly, displayed on-line and used in machine processes such as machine translation, text-to-speech and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
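- A minimal OCR sketch using the pytesseract wrapper (assumes the Tesseract engine is installed; the file name is hypothetical):

```python
import cv2
import pytesseract

image = cv2.imread("sign.jpg")  # hypothetical photo of printed text
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Otsu thresholding usually helps the engine on real-world photos.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary)
print(text)  # machine-encoded text, ready for search, TTS or translation
```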
- Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face, hands and voice. Current focus in the field includes emotion recognition from the face and hand gesture recognition. Enhanced results have been achieved using cameras and CV algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behavior are also subjects of gesture recognition techniques.
- Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces (GUIs), which still limit the majority of input to keyboard and mouse.
- Gesture recognition enables humans to interface with the machine (HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor will move accordingly. This could potentially make conventional input devices such as mouse, keyboard and even touch-screen redundant. Gesture recognition can be implemented with techniques from CV and image processing.
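- A minimal sketch of the finger-pointing cursor: given a fingertip position in normalized camera coordinates from a hand-tracking stage (assumed, not shown here), the mapping to screen coordinates is a simple scale plus optional mirroring.

```python
def fingertip_to_cursor(fingertip_norm, screen_w, screen_h, mirror=True):
    """Map a fingertip position in normalized camera coordinates
    (0..1, as produced by a separate hand-tracking stage) to a screen
    cursor position. Mirroring matches the user's expectation when
    the camera faces them."""
    x, y = fingertip_norm
    if mirror:
        x = 1.0 - x
    return int(x * screen_w), int(y * screen_h)

print(fingertip_to_cursor((0.25, 0.5), 1920, 1080))  # -> (1440, 540)
```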
- the present invention can be implemented using any smartphone as a computer, any other computer system or using cloud computing.
- Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility, such as the electricity grid, over a network, typically the Internet.
- Cloud computing typically entrusts centralized services with data, software, and computation on a published application programming interface (API) over a network. It has considerable overlap with software as a service (SaaS).
- End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and data are stored on servers at a remote location.
- Cloud application providers strive to give the same or better service and performance than if the software programs were installed locally on end-user computers.
- Converged Infrastructure (CI) is a type of data center environment that allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enables information technology (IT) departments to more rapidly adjust IT resources, such as servers, storage, and networking, to meet fluctuating and unpredictable business demand.
- the present invention provides for implementing individual applications, implementing multi-user applications, and using wireless technologies and self-supply energy technologies.
- the present invention provides a software development kit (SDK).
- the SDK includes a database containing specific definitions for each and every virtual object concerning the nature of its visual, functional and behavioral characteristics, including its interaction characteristics with the rest of the world, virtual and real.
- the SDK includes a set of logic rules concerning features of any kind relevant to the development of applications which integrate a virtual world with a real world.
- Information sharing is defined between individuals to create a common virtual world integrated with the real world. For example:
- a tennis match is a sequence of interactions between the two players.
- the ball should be displayed in both applications reflecting the sequence of actions and reactions, concerning speed and direction, calculated in each application as a result of the previous action, and creating a mutual logic between the two independent applications.
- this mutual logic represents the "intelligence of the ball.”
- this is an essential requirement in order to implement mobile augmented reality.
- the "intelligence of the ball" logic enables mobile augmented reality as a personal solution for each user, as well as a common solution for all users, allowing many to implement a common experience, each one from his unique point of view and by his unique sequence of interactions with the general experience, and with each of the others, separately.
- the present invention enables a dedicated microprocessor to activate and implement applications, including IMU anchoring, computer vision anchoring, external computer system emulation, application interfacing, OCR applications, gesture control applications and other applications relevant to the invention. All the above and other characteristics and advantages of the invention will be further understood through the following illustrative and non-limitative description of preferred embodiments thereof.
- Fig. 1 is a schematic illustration for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention
- Fig. 2 is a functional structure diagram for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention
- Fig. 3 is a general block diagram for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention
- Fig. 4 is a wireless connection block diagram for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention
- Fig. 1 is a schematic illustration for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention.
- An exemplary embodiment includes a user's smartphone 110, printed circuit hardware 120 and smart glasses 130 worn by the user.
- Smartphone 110 provides an image input 111 to printed circuit hardware 120, which includes processing software 122 and receives orientation data coming from an inertial measurement unit (IMU) on the smart glasses.
- Mobile augmented reality output 124 is returned from printed circuit hardware 120 back to smart glasses 130.
- Smart glasses 130 include 3D orientation sensors.
- Fig. 2 is a functional structure diagram 200 for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention.
- High-Definition Multimedia Interface (HDMI) 212 is a compact audio/video adapter for transferring encrypted, uncompressed digital audio/video data from an HDMI-compliant source device (the "source" or "input": a smartphone, digital audio device, computer monitor or video projector) to box 220.
- Smartphone source 210 includes a built-in interface 211, which receives data from HDMI adapter 212 and returns images to microprocessor 221.
- a pair of smart glasses 230, worn by a user, houses a microcamera 231 and an IMU 232, both of which provide data input to a microprocessor/software unit 221 in box 220, which also houses a battery 222, in an exemplary embodiment.
- Smart glasses 230 also include a left screen 233 and a right screen 233, which receive the display images output from microprocessor 221 to be viewed by the user.
- Fig. 3 is a general block diagram 300 for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention. Connections are shown between a smartphone 310 (or other computing source in other embodiments) and the glasses. Smartphone 310 modules include a video injection application 311, a display interface application 312, an Application Programming Interface (API) 3rd-party application 313, a rightside video stream pull application 314, a leftside video stream pull application 315 and an inertial measurement unit (IMU) communication interface 316.
- Interfaces between smartphone 310 and the glasses include a video and command interface 321, a rightside video streamer interface 322 and a leftside video streamer interface 323.
- Glasses modules include a rightside display 324, a leftside display 325, a rightside camera 326, a leftside camera 327 and an IMU 328.
- the glasses also have a bias adjustment 329.
- Video and command interface 321 receives video from video injection application 311 via a video display channel 317, rightside video streamer interface 322 transmits a rightside camera video stream 318 to rightside video stream pull application 314, and leftside video streamer interface 323 transmits a leftside camera video stream 319 to leftside video stream pull application 315.
- Fig. 4 is a wireless connection block diagram 400 for a system to anchor virtual objects to the real world, constructed according to the principles of the present invention. Wireless connections are shown between a smartphone 410 (or other computing source) and the glasses.
- Smartphone 410 modules include a video injection application 411, a display interface application 412, an Application Programming Interface (API) 3rd-party application 413, a rightside stream input application 414, a leftside stream input application 415 and an inertial measurement unit (IMU) communication interface 416.
- Video and command interface 421 receives video from video injection application 411 via WiFi display transmission 441.
- Glasses modules include a rightside display 424, a leftside display 425, a rightside WiFi IP camera 426, a leftside WiFi IP camera 427 and an IMU 428. A WiFi buffer 440 passes information to rightside display 424 and leftside display 425.
- the glasses also have a bias adjustment 429 powered by a battery 450.
- Rightside WiFi IP camera 426 transmits a rightside WiFi stream 442 to rightside stream input application 414, and leftside WiFi IP camera 427 transmits a leftside WiFi stream 443 to leftside stream input application 415.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IL2012/050173 WO2013171731A1 (en) | 2012-05-16 | 2012-05-16 | A system worn by a moving user for fully augmenting reality by anchoring virtual objects |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2850609A1 true EP2850609A1 (en) | 2015-03-25 |
EP2850609A4 EP2850609A4 (en) | 2017-01-11 |
Family
ID=49583229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12876696.1A Withdrawn EP2850609A4 (en) | 2012-05-16 | 2012-05-16 | A system worn by a moving user for fully augmenting reality by anchoring virtual objects |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP2850609A4 (en) |
CN (1) | CN104603865A (en) |
HK (1) | HK1207918A1 (en) |
WO (1) | WO2013171731A1 (en) |
Families Citing this family (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103900473A (en) * | 2014-03-31 | 2014-07-02 | 浙江大学 | Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor |
US9600743B2 (en) | 2014-06-27 | 2017-03-21 | International Business Machines Corporation | Directing field of vision based on personal interests |
US9471837B2 (en) | 2014-08-19 | 2016-10-18 | International Business Machines Corporation | Real-time analytics to identify visual objects of interest |
US9697383B2 (en) | 2015-04-14 | 2017-07-04 | International Business Machines Corporation | Numeric keypad encryption for augmented reality devices |
WO2016185845A1 (en) * | 2015-05-21 | 2016-11-24 | 日本電気株式会社 | Interface control system, interface control device, interface control method and program |
US10799792B2 (en) | 2015-07-23 | 2020-10-13 | At&T Intellectual Property I, L.P. | Coordinating multiple virtual environments |
CN106371571B (en) * | 2015-11-30 | 2019-12-13 | 北京智谷睿拓技术服务有限公司 | Information processing method, information processing device and user equipment |
DE102016200956A1 (en) * | 2016-01-25 | 2017-07-27 | Robert Bosch Gmbh | Method and device for visualizing software |
US10169922B2 (en) * | 2016-02-16 | 2019-01-01 | Microsoft Technology Licensing, Llc | Reality mixer for mixed reality |
CN205430338U (en) * | 2016-03-11 | 2016-08-03 | 依法儿环球有限公司 | Take VR content to gather smart mobile phone or portable electronic communication device of subassembly |
CN106546238B (en) * | 2016-10-26 | 2020-09-01 | 北京小鸟看看科技有限公司 | Wearable device and method for determining user displacement in wearable device |
CN117251053A (en) * | 2016-12-29 | 2023-12-19 | 奇跃公司 | Automatic control of wearable display device based on external conditions |
KR102464296B1 (en) * | 2017-01-24 | 2022-11-04 | 론자 리미티드 | Methods and systems for performing industrial maintenance using a virtual or augmented reality display |
US11287292B2 (en) | 2017-02-13 | 2022-03-29 | Lockheed Martin Corporation | Sensor system |
CN108427195A (en) * | 2017-02-14 | 2018-08-21 | 深圳梦境视觉智能科技有限公司 | A kind of information processing method and equipment based on augmented reality |
CN108427194A (en) * | 2017-02-14 | 2018-08-21 | 深圳梦境视觉智能科技有限公司 | A kind of display methods and equipment based on augmented reality |
CN107168619B (en) * | 2017-03-29 | 2023-09-19 | 腾讯科技(深圳)有限公司 | User generated content processing method and device |
PL3392987T3 (en) * | 2017-04-21 | 2021-06-14 | Rittal Gmbh & Co. Kg | Method and system for automated support of a connection process, in particular for components in a switch cabinet or on a mounting system |
WO2018209515A1 (en) * | 2017-05-15 | 2018-11-22 | 上海联影医疗科技有限公司 | Display system and method |
CN110869980B (en) * | 2017-05-18 | 2024-01-09 | 交互数字Vc控股公司 | Distributing and rendering content as a spherical video and 3D portfolio |
GB2567012B (en) * | 2017-10-02 | 2021-05-12 | Advanced Risc Mach Ltd | Motion Sensing |
CN107657574A (en) * | 2017-10-06 | 2018-02-02 | 杭州昂润科技有限公司 | It is a kind of based on the underground utilities asset management system of AR technologies and method |
US10976982B2 (en) * | 2018-02-02 | 2021-04-13 | Samsung Electronics Co., Ltd. | Guided view mode for virtual reality |
US10908419B2 (en) | 2018-06-28 | 2021-02-02 | Lucyd Ltd. | Smartglasses and methods and systems for using artificial intelligence to control mobile devices used for displaying and presenting tasks and applications and enhancing presentation and display of augmented reality information |
US11036284B2 (en) * | 2018-09-14 | 2021-06-15 | Apple Inc. | Tracking and drift correction |
CN109343815A (en) * | 2018-09-18 | 2019-02-15 | 上海临奇智能科技有限公司 | A kind of implementation method of virtual screen device and virtual screen |
US11321768B2 (en) * | 2018-12-21 | 2022-05-03 | Shopify Inc. | Methods and systems for an e-commerce platform with augmented reality application for display of virtual objects |
WO2020153946A1 (en) * | 2019-01-22 | 2020-07-30 | Hewlett-Packard Development Company, L.P. | Mixed reality presentation |
USD900205S1 (en) | 2019-03-22 | 2020-10-27 | Lucyd Ltd. | Smart glasses |
USD900203S1 (en) | 2019-03-22 | 2020-10-27 | Lucyd Ltd. | Smart glasses |
USD899495S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD900204S1 (en) | 2019-03-22 | 2020-10-27 | Lucyd Ltd. | Smart glasses |
USD899493S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD899498S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD899494S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD900920S1 (en) | 2019-03-22 | 2020-11-03 | Lucyd Ltd. | Smart glasses |
USD899500S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD900206S1 (en) | 2019-03-22 | 2020-10-27 | Lucyd Ltd. | Smart glasses |
USD899497S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD899496S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
USD899499S1 (en) | 2019-03-22 | 2020-10-20 | Lucyd Ltd. | Smart glasses |
CN110850977B (en) * | 2019-11-06 | 2023-10-31 | 成都威爱新经济技术研究院有限公司 | Stereoscopic image interaction method based on 6DOF head-mounted display |
USD954136S1 (en) | 2019-12-12 | 2022-06-07 | Lucyd Ltd. | Smartglasses having pivot connector hinges |
USD958234S1 (en) | 2019-12-12 | 2022-07-19 | Lucyd Ltd. | Round smartglasses having pivot connector hinges |
USD954135S1 (en) | 2019-12-12 | 2022-06-07 | Lucyd Ltd. | Round smartglasses having flat connector hinges |
USD955467S1 (en) | 2019-12-12 | 2022-06-21 | Lucyd Ltd. | Sport smartglasses having flat connector hinges |
USD954137S1 (en) | 2019-12-19 | 2022-06-07 | Lucyd Ltd. | Flat connector hinges for smartglasses temples |
USD974456S1 (en) | 2019-12-19 | 2023-01-03 | Lucyd Ltd. | Pivot hinges and smartglasses temples |
US11282523B2 (en) | 2020-03-25 | 2022-03-22 | Lucyd Ltd | Voice assistant management |
US12020379B2 (en) * | 2020-04-17 | 2024-06-25 | Apple Inc. | Virtual anchoring systems and methods for extended reality |
CN112684883A (en) * | 2020-12-18 | 2021-04-20 | 上海影创信息科技有限公司 | Method and system for multi-user object distinguishing processing |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5742264A (en) * | 1995-01-24 | 1998-04-21 | Matsushita Electric Industrial Co., Ltd. | Head-mounted display |
US7002551B2 (en) * | 2002-09-25 | 2006-02-21 | Hrl Laboratories, Llc | Optical see-through augmented reality modified-scale display |
US6867753B2 (en) * | 2002-10-28 | 2005-03-15 | University Of Washington | Virtual image registration in augmented display field |
WO2006081198A2 (en) * | 2005-01-25 | 2006-08-03 | The Board Of Trustees Of The University Of Illinois | Compact haptic and augmented virtual reality system |
IL172797A (en) * | 2005-12-25 | 2012-09-24 | Elbit Systems Ltd | Real-time image scanning and processing |
US8730156B2 (en) * | 2010-03-05 | 2014-05-20 | Sony Computer Entertainment America Llc | Maintaining multiple views on a shared stable virtual space |
JP4795091B2 (en) * | 2006-04-21 | 2011-10-19 | キヤノン株式会社 | Information processing method and apparatus |
JP2013521576A (en) * | 2010-02-28 | 2013-06-10 | オスターハウト グループ インコーポレイテッド | Local advertising content on interactive head-mounted eyepieces |
US20110213664A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
US8884984B2 (en) * | 2010-10-15 | 2014-11-11 | Microsoft Corporation | Fusing virtual content into real content |
-
2012
- 2012-05-16 WO PCT/IL2012/050173 patent/WO2013171731A1/en active Application Filing
- 2012-05-16 CN CN201280074650.XA patent/CN104603865A/en active Pending
- 2012-05-16 EP EP12876696.1A patent/EP2850609A4/en not_active Withdrawn
-
2015
- 2015-09-01 HK HK15108522.3A patent/HK1207918A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
CN104603865A (en) | 2015-05-06 |
EP2850609A4 (en) | 2017-01-11 |
WO2013171731A1 (en) | 2013-11-21 |
HK1207918A1 (en) | 2016-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9210413B2 (en) | System worn by a moving user for fully augmenting reality by anchoring virtual objects | |
EP2850609A1 (en) | A system worn by a moving user for fully augmenting reality by anchoring virtual objects | |
US11747618B2 (en) | Systems and methods for sign language recognition | |
CN113168007B (en) | System and method for augmented reality | |
US9934614B2 (en) | Fixed size augmented reality objects | |
Anthes et al. | State of the art of virtual reality technology | |
CN107209386B (en) | Augmented reality view object follower | |
US11217024B2 (en) | Artificial reality system with varifocal display of artificial reality content | |
US9824499B2 (en) | Mixed-reality image capture | |
EP3137982B1 (en) | Transitions between body-locked and world-locked augmented reality | |
EP3137976B1 (en) | World-locked display quality feedback | |
US9165381B2 (en) | Augmented books in a mixed reality environment | |
CN107209565B (en) | Method and system for displaying fixed-size augmented reality objects | |
CN106489171B (en) | Stereoscopic image display | |
CN105393158A (en) | Shared and private holographic objects | |
US9989762B2 (en) | Optically composited augmented reality pedestal viewer | |
Huang | Virtual reality/augmented reality technology: the next chapter of human-computer interaction | |
US11656679B2 (en) | Manipulator-based image reprojection | |
Piszczek et al. | Photonic input-output devices used in virtual and augmented reality technologies | |
Hamadouche | Augmented reality X-ray vision on optical see-through head mounted displays | |
NZ792186A (en) | Sensory eyewear | |
NZ792193A (en) | Sensory eyewear |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20141211 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAX | Request for extension of the european patent (deleted) | ||
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06T 19/00 20110101ALI20160426BHEP Ipc: G06F 3/01 20060101ALI20160426BHEP Ipc: G09G 5/00 20060101AFI20160426BHEP Ipc: H04N 5/232 20060101ALI20160426BHEP Ipc: G09G 3/00 20060101ALI20160426BHEP Ipc: G06T 15/20 20110101ALI20160426BHEP Ipc: H04N 5/272 20060101ALN20160426BHEP |
|
RA4 | Supplementary search report drawn up and despatched (corrected) |
Effective date: 20161208 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 5/232 20060101ALI20161202BHEP Ipc: H04N 5/272 20060101ALN20161202BHEP Ipc: G06T 15/20 20110101ALI20161202BHEP Ipc: G09G 5/00 20060101AFI20161202BHEP Ipc: G06F 3/01 20060101ALI20161202BHEP Ipc: G09G 3/00 20060101ALI20161202BHEP Ipc: G06T 19/00 20110101ALI20161202BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20171012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: GRINBERG, DANIEL Inventor name: SARUSI, GABBY |
|
19U | Interruption of proceedings before grant |
Effective date: 20161030 |
|
19W | Proceedings resumed before grant after interruption of proceedings |
Effective date: 20231201 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20240604 |