GB2519744A - Augmented reality systems and methods - Google Patents

Augmented reality systems and methods

Info

Publication number
GB2519744A
GB2519744A GB1317629.2A GB201317629A GB2519744A GB 2519744 A GB2519744 A GB 2519744A GB 201317629 A GB201317629 A GB 201317629A GB 2519744 A GB2519744 A GB 2519744A
Authority
GB
United Kingdom
Prior art keywords
data
camera
images
attitude
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1317629.2A
Other versions
GB201317629D0 (en)
Inventor
Crispin Hoult
George Banfill
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LINKNODE Ltd
Original Assignee
LINKNODE Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LINKNODE Ltd filed Critical LINKNODE Ltd
Priority to GB1317629.2A priority Critical patent/GB2519744A/en
Publication of GB201317629D0 publication Critical patent/GB201317629D0/en
Publication of GB2519744A publication Critical patent/GB2519744A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V3/00Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation
    • G01V3/40Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation specially adapted for measuring magnetic field characteristics of the earth
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13Receivers
    • G01S19/14Receivers specially adapted for specific applications
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B19/00Cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C19/00Gyroscopes; Turn-sensitive devices using vibrating masses; Turn-sensitive devices without moving masses; Measuring angular rate using gyroscopic effects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P15/00Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V7/00Measuring gravitational fields or waves; Gravimetric prospecting or detecting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering

Abstract

Image capture apparatus comprising: digital camera 102 (eg. video camera) capturing images 150 and defining movable reference frame; location sensor 120 (eg. satellite positioning such as GPS) measuring camera position in space during image capture; attitude sensor 122 (eg. magnetometer measuring Earth's magnetic field, accelerometer measuring Earth's gravity or gyroscope measuring rotation about axes) measuring camera orientation in three dimensions; processor 110 generating metadata 152 associating geospatial location 158, attitude 160 and, optionally, field of view with images 150. Image data 150 and metadata 152 are delivered to storage device 140 or communication channel 142 that may link to remote server. Also disclosed is augmented reality image generating apparatus (eg. portable computing device (210, fig. 2) such as tablet 110 or phone) comprising: 3D model data source (276, fig. 2) representing structures (280, fig. 2) to be visualised within captured scene; means (274, fig. 2) for rendering 2D images of structures (280, fig. 2); compositing means (272, fig. 2) for combining the rendered images to appear correctly located within camera images. 3D topographic models 286 may be used to mask 290 structures (280, fig. 2), simulating correct obscuration by foreground landscape features. Elements of mask 290 may be created, drawn or edited by users via user interface (232, fig. 2).

Description

Augmented Reality Systems and Methods
FIELD OF THE INVENTION
The present invention relates generally to augmented reality systems which synthesise images from the real world with computer-generated images. The invention relates further to computer program products for use in implementing such systems, and to augmented reality products produced by such systems.
BACKGROUND
Augmented Reality (AR) is a computer science discipline for the integration of computer generated content (overlays) onto images of a real-world environment. The overlays are commonly a visual (image) representation of icons/graphics, text, video, pictures or 3D models. The resultant "augmented" display is designed to enhance the perception and understanding of the real world. The augmentation is conventionally performed in real-time utilising a camera sensor displayed on a visual display device. Existing AR systems are typically marker-based, using a visual registration system to overlay information based on known markers placed in (or at least recognisable in) the real environment. This imposes technical challenges and restricts their applicability, especially outdoors.
Markerless systems are known which require integration with input from at least one other MEMS (microelectromechanical systems) sensor. Such sensors include compasses, accelerometers and gyroscopes for geospatial attitude. Geospatial AR is known, which uses GPS for geospatial location and hence absolute or relative positioning relative to the viewed environment. Geospatial AR is described for example by Crispin Hoult in "Glality - Geospatial Data for Augmented Reality", Director, Linknode Ltd, proceedings of AGI GeoCommunity conference 2012 (URL http://www.agi.org.uk/storage/GeoCommunity/AGIT2/Papers/Crispinhoult.pdf). A different approach to avoid tracking markers is proposed by Jason Wither et al. in "Indirect augmented reality", Computers & Graphics 35 (2011) 810-822 (available at URL http://www.ronaldazuma.com/papers/CG_IndirectAR.pdf). "Indirect Augmented Reality" involves a pre-determined panoramic image being captured at a known viewpoint, which is then used in on-site visualisation.
SUMMARY OF THE INVENTION
The inventors have sought to allow augmented reality to work in several scenarios where it is not currently employed, for example access to remote sites for safety & security and for assessing different designs and overlays.
According to a first aspect of the invention, there is provided an image capture apparatus for supporting the generation of augmented reality images comprising:
- a camera for capturing images of a scene in digital form, the camera defining a movable reference frame;
- a location sensor mounted to move with the camera for measuring an absolute location of the camera in three-dimensional space at the time of capturing said images;
- an attitude sensor mounted to move with the camera for measuring an attitude of the camera in three dimensions at the time of capturing said images; and
- a data processor for associating location data and attitude data from said location sensor and attitude sensor as metadata with image data representing the images, and delivering the image data and metadata together to at least one of a storage device and a communication channel.
The inclusion of location data and attitude data with the captured image data allows AR image generation in a variety of "reconstructed geospatial AR" modes, that is not tied to the time and place of capture.
The location sensor may be for example a satellite positioning system receiver. For use in situations where satellite positioning is not available or not reliable (for example within buildings) other types of location sensors can be used.
The attitude sensor may comprise a magnetometer and an accelerometer for measuring attitude relative to the Earth's magnetic field and gravity. The attitude sensor may further comprise a gyroscopic sensor for detecting rotation about one or more axes, the attitude data being calculated using a combination of signals from the gyroscopic sensor, magnetometer and accelerometer.
The camera may be a motion video camera and the location and attitude data may record variations in associated location and attitude during a motion picture sequence represented in the image data.
The data processor may be further arranged to include field of view data within said metadata.
The data processor and camera may be integrated within a portable computing device such as a tablet or phone computer. In other embodiments, the data processor is within a portable computing device such as a tablet computer, while the camera is a separate unit, for example a digital SLR camera.
The location sensor and attitude sensor may be integrated with said data processor in said portable computing device.
According to a second aspect of the invention, there is provided an apparatus for use in generating augmented reality images, the apparatus comprising:
- a source of image data representing in digital form images of a scene taken by a camera;
- a source of 3-D model data representing one or more structures to be visualised within said scene by augmented reality, the model data specifying locations for said structures in said three-dimensional space;
- a computer image generator for rendering 2-D images of said structures from the 3-D model data;
- a composite image generator for combining the rendered images of said structures with camera images reproduced from the image data; and
- a source of metadata by which a camera location and a camera attitude are associated with each of the camera images represented in the image data,
wherein the computer image generator is arranged to control the rendering of images of said structures such that the 2-D images rendered from the 3-D model data, when combined with the camera images, appear correctly located in said three-dimensional space.
This apparatus, which may be regarded as a "reconstructed augmented reality" generator, can be used in conjunction with the apparatus of the first aspect of the invention to realise a complete AR capture and production system, in which the benefits of geospatial augmented reality can be combined with the ability to generate and view AR images at a different time and/or a different place from the one in which the direct images (still images or videos) are captured.
The source of 3-D model data may be operable to supply alternative sets of 3-D model data selected by a user, whereby said computer image generator and composite image generator are operable to repeat the rendering of 2-D images using different sets of model data, while using the same metadata to provide a visualisation of different structures in combination with same camera images.
The apparatus may further comprise a source of 3-D topographic model data, the topographic model data being used by said computer image generator in combination with said metadata so that the 2-D images rendered from the 3-D model data appear correctly obscured by foreground features, the foreground features being features of structure and/or landscape that are located in said three-dimensional space between the camera location represented in the metadata and a part of the modelled structure.
The apparatus may further comprise a source of masking data, while at least one of said computer image generator and said composite image generator is responsive to the masking data to simulate obscuration of the modelled structure by features not represented in the topographic model data. Said masking data may be generated with reference to one or more specific camera locations. The apparatus may be operable to use masking data for a specific camera location with camera images taken with different attitudes at the same location.
The apparatus may include a user interface by which a user can create or edit elements of said masking data by drawing around features reproduced in a selected camera image.
Said source of 3-D model data may be a communication link to a remote server, whereby said 3-D model data can be updated remotely.
The features and functions of the first and second aspects of the invention can be integrated in one apparatus, if desired, either completely or in subsets. As one example, in a capture system according to the first aspect, it is useful to incorporate also the computer image generator using model data and topographic model data, so that a user may obtain a virtual view of a scene, before deciding to visit a particular viewpoint and capture direct imagery using the camera.
According to other aspects of the invention, there are provided methods of image capture for supporting the generation of augmented reality images, methods of generating augmented reality images. There are further provided computer program products comprising machine readable instructions for causing a general purpose computer to implement the data processing functions of apparatus according to the first and/or second aspects of the invention as set forth above.
These computer program products may be provided on non-transitory recording media, or in the form of data streams sent through a communications network.
The above and other aspects, features and advantages of the invention will be understood by the skilled reader from a consideration of the following detailed description of exemplary embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying schematic drawings in which corresponding reference symbols indicate corresponding parts.
Figure 1 (a) is a functional schematic view of a capture system forming part of an augmented reality system embodying the present invention;
Figure 1 (b) is a schematic view of the capture system in use;
Figure 2 is a functional schematic view of a viewing system forming another part of an augmented reality system embodying the present invention;
Figures 3 to 11 show various user interface screens presented by the viewing system and/or capture system in an embodiment of the invention.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
Various embodiments are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that such embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more embodiments.
Before describing embodiments of the invention, some technical background will be provided, as an introduction to augmented reality (AR) systems.
Background - Augmented Reality Hardware and APIs
To support geospatial augmented reality a computing device with a visual display, a programmable operating system and a set of integrated sensors may be provided.
The main function of the sensors is to measure in real-time the geospatial attitude of a camera. In the present application, the geospatial attitude can be defined as a device's attitude (orientation) compared to a real-world frame of reference. This attitude may be expressed for example by angular coordinates comprising an X-rotation/pitch and a Y-rotation/roll measured relative to the geoid (equipotential plane due to gravity) and a Z-rotation/yaw against a North value (magnetic or locally corrected to a projection or grid).
For geospatial augmented reality, therefore, the sensors may be a combination of these below:
* Still/Video Camera - for imagery of a real scene
* Magnetometer - for bearing/heading (attitude direction)
* Accelerometer - for gravity (attitude plane)
* Gyroscope - for rotation velocity (attitude stability)
* Positioning - for location (eg GPS)
* Clock - for timing and sensor integration/alignment
* Camera Metrics - for accurate reconstruction within a field-of-view

For implementation of reconstructed geospatial augmented reality according to embodiments of the invention described below, it is assumed that all of the above are provided in a capture system. An example capture system will be described below, with reference to Figure 1. Additional sensors/metadata sources may be provided, for example to record lighting conditions directly or indirectly (for example by time of day in the case of an exterior scene). Ambient light can be useful in subsequently rendering computer-generated images to appear natural in the setting of the captured camera images. Some portable computing devices are starting to include light sensors, so that a dedicated device would not be needed.
Other sensors that may be included in the future include proximity, infrared, barometer and temperature.
Sensors may be accessed as individual electronic MEMS (micro-electromechanical systems) components that integrate into existing computing devices such as a PC or laptop (such as by hardware cable-based USB or wireless communications). Alternatively, we see sensor devices and associated interfaces that are inherently inbuilt within mobile or tablet computers, phones and laptops. Sensors in a modern tablet computer such as an Apple (R) iPad (R) may include all of the sensors above, including the camera.
Sensors are accessed and used via application programming interfaces (APIs) with different names on different devices. APIs may be either available as individual components or as integrated access. Device operating system manufacturers often provide an improved "motion" or "fusion" API that applies advanced mathematical techniques to the raw sensor data to "smooth" and "stabilise" the sensor readings.
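Purely by way of illustration, the kind of processing such fusion APIs perform can be sketched with a simple complementary filter: the accelerometer and magnetometer give an absolute but noisy attitude, while the gyroscope gives a smooth but drifting one. The following Python sketch is not any particular vendor's API, and the axis and sign conventions are assumptions that vary from device to device.

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (radians) estimated from a single accelerometer (gravity) reading."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def tilt_compensated_heading(mx, my, mz, pitch, roll):
    """Yaw/heading (radians from magnetic north) from a magnetometer reading,
    corrected for the tilt of the device."""
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    return math.atan2(-yh, xh)

def smooth(prev_angle, gyro_rate, absolute_angle, dt, alpha=0.98):
    """Complementary filter: integrate the gyroscope for short-term stability,
    pull towards the accelerometer/magnetometer angle for long-term accuracy."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * absolute_angle
```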
Through the use of combined or aggregated sensor API measurements, it is readily possible to realise the geospatial attitude of a device. By knowing the location and geospatial attitude of a device camera (and viewing display), it is then possible to integrate additional image overlays in the form of augmented reality using software integration.
Accordingly, an augmented reality software application creates a composite of a live scene with computer-generated information to create so-called augmented reality.
Existing augmented reality systems utilise the image sensor (camera) of a computing device to stream image data that is then rendered on the display of the same computing device to give the appearance of live video. This is essential to the perception of reality, as it creates the association between the user's visual perception of their environment and the view on the mobile display screen. The image is most relevant when it is live and oriented to the direction the camera is pointed - hence a device with an integrated camera fixed in relation to the screen is ideal. Most modern mobile tablets and phones incorporate a rear-facing camera and high quality display.
Augmented reality applications are most commonly seen and used where the live-streamed camera image and sensors are both synchronised in real-time. However, systems also exist where the sensors are real-time and the image is pre-captured or rendered - such implementations are called "indirect reality" as described above.
Background - Geospatial Augmented Reality
Augmented Reality utilising geospatial information is the integration of real-world data into the AR visualisation environment. This relies on additional datasets relevant to the location, such as points of interest (PoIs), in order to create a relationship between the camera/observer location and the location of the reference objects.
The location of the device is determined by a positioning sensor, such as GPS or Wi-Fi location, and this position is recorded or transformed or projected onto a geospatial frame of reference. A geospatial frame of reference may be a worldwide geodetic coordinate system like WGS84, a local country projection such as OSGB36 in the UK, or a local frame of reference such as a Cartesian position within a particular site, for example a factory or shopping centre.
Once the observer's location is determined, the local PoIs may be looked up from a local or remote database using a local spatial query. The relative locations may be compared and used to determine the distance and bearing from the observer to each PoI. Using this and the geospatial attitude sensors enables accurate geospatial AR. It is known for AR systems to display points of interest on a view of the environment, for example to indicate the location of a particular premises in a street scene or a star in the night sky; these are generally based simply on knowledge of the direction from a user's position to the PoI.
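By way of illustration only, the distance and bearing lookup can be as simple as the following sketch (Python; WGS84 latitude/longitude is assumed and a spherical-earth approximation is used for brevity):

```python
import math

EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (degrees clockwise from
    north) from the observer (lat1, lon1) to a PoI (lat2, lon2)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)

    # Haversine great-circle distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing towards the PoI
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

The bearing obtained in this way can then be compared with the heading reported by the attitude sensors to decide where, if anywhere, the PoI falls within the camera's field of view.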
A more complex example is one which identifies the location of an observer and then looks up not simple PoIs but 3D models in the same real-world environment.
This can then be used to display the 3D model as augmented reality integrated into the device screen for planned construction or environmental/visual impact assessment. In other words, each PoI can in fact represent a model of a structure.
"Structure" in this context could relate to structure such as a bridge, a building or wind turbine. In other applications, structure could be any 3-D object, on any scale.
In the present text, including in the claims, where the term "modelled structure" is used for ease of understanding, it should be understood that the modelled structure can represent a conventional, built structure, but in principle also could just be a set of points of interest having some other significance. The structure may be physical in one sense, but not solid or visible in camera images. It may be rendered visible by computer-generated imagery. Useful examples of PoIs that represent non-visible structures would include for example a 3-D model of radio coverage, or a plume of pollution. In the context of wind turbines as examples, structures and PoIs may also be defined by 3-D models showing different average wind speeds at different heights, showing turbulence from a turbine in a visual way.
In such an example, it also becomes important to consider camera metrics in geospatial augmented reality, in order to determine the relationships between the camera resolution, aspect ratio and field of view and the display device resolution and aspect ratio.
Example capture system and viewing system for reconstructed augmented reality
Figure 1 (a) is a functional block diagram of a capture system 100, forming part of a novel system for what will be called "reconstructed geospatial augmented reality" in the present application. Figure 1 (b) shows schematically the physical form of the capture system 100 in one example. Capture system 100 in this example comprises an SLR camera 102 and sensing module 104, mounted on the common support 106, for example a tripod. A computing module 110 comprises for example a tablet computer or laptop. The computing module and sensing module may be combined in one piece of hardware mounted on the support 106, or they may be separate and communicating by wire or wireless connection, as illustrated. It will be understood that the capture system 100 is for use by a "capture user" U1. Unlike conventional augmented reality systems, however, a distinction will be drawn between the capture user and a viewing user, as explained further below.
In other examples, the assembly of camera and sensing module may be handheld. As mentioned already, some tablet computers already combine the hardware necessary to implement in one unit basic camera functions together with the necessary sensing and computing functions of the capture system 100. For certain applications, however, an SLR or other specialist camera may be required.
Returning to the functional block diagram of Figure 1 (a), we see the camera 102 and the support 106. Support 106 provides a common frame of reference for sensors including a geospatial position sensor 120 and attitude sensors 122.
Referring to the list of sensors described in the background section above, position sensor 120 comprises for example a satellite positioning receiver (GPS receiver), while attitude sensors 122 encompass the magnetometer, accelerometer, and gyroscope. Other data sources, such as a clock 124, are provided. The capture system further comprises a display 130. A user interface 132 uses the display and, for example, a touchscreen or other pointing/keyboard input devices. In the example where capture system 100 includes a computing module 110 in the form of a tablet computer, the user interface naturally relies upon touch-based input from the capture user. Display 130 may be the tablet display. Display 130 may also or additionally be provided by a screen viewfinder on camera 102.
Capture system 100 further comprises storage 140 and/or a communication interface 142. In operation, image data (still images and/or motion video data) 150 is sent to the storage 140 and/or communication interface 142, while also typically being displayed on the display 130 for the capture user to see what is being recorded/transmitted. The capture user may be free to choose the viewpoints and camera angles from which images are captured. Alternatively or in addition, guidance as to desired viewpoints and angles may be provided through the user interface, based on desired views being specified in advance and based on current position and attitude data. In other applications, the support may be part of a vehicle, remotely operated or autonomous. In addition to the image data 150, camera metadata 152 is generated, for example to record field of view (FOV) for the lens and image sensor. A data stream 154 delivered for storage or transmission includes the image data and the camera metadata, and also importantly includes the time data 156 from clock 124, position data 158 from position sensor 120 and attitude data 160 from attitude sensors 122. The position and attitude data is synchronised with the capture of still and/or motion video data, so that the data stream 154 can be used to identify the geospatial position and attitude of the camera 102 at the time of image capture.
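As a purely illustrative sketch of what such a data stream might contain, the following Python fragment packages one captured image together with its time, position, attitude and camera metadata. The field names and the JSON "sidecar" layout are assumptions made for the example, not a prescribed format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CaptureMetadata:
    timestamp: float      # time data 156 (seconds since epoch)
    latitude: float       # position data 158
    longitude: float
    altitude_m: float
    quaternion: tuple     # attitude data 160 as (w, x, y, z)
    h_fov_deg: float      # camera metadata 152: horizontal field of view
    image_width: int
    image_height: int

def package_frame(image_bytes: bytes, meta: CaptureMetadata, basename: str) -> None:
    """Write image data and its metadata side by side, ready for storage 140
    or transmission over communication interface 142."""
    with open(basename + ".jpg", "wb") as f:
        f.write(image_bytes)
    with open(basename + ".json", "w") as f:
        json.dump(asdict(meta), f, indent=2)
```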
Different implementations are possible to convey the geospatial position and attitude data (158, 160) along with the video data 150 so that the position and attitude associated with each frame of the video can be identified correctly by reference frame generator 278. In one embodiment of the novel system, the association of position and attitude with video data is achieved by the creation of "sidecar" files to provide a geospatial attitude and position. Also, existing 3-D model file formats such as FBX, OBJ and other formats do not have a geospatial reference. Although they can be recorded (and hence located) from a coordinate, this is only in some arbitrary local reference frame. The model store 276 in the novel system therefore includes geospatial attitude data for use in positioning and orienting the local reference frame of a 3-D model correctly in relation to the geospatial reference frame generated by reference frame generator 278. It is the additional geospatial attitude as well as position that allows the service to deliver a model that is both locatable and correctly aligned with the real-world visualisation.
Figure 2 is a functional block diagram of a viewing system 200 of the reconstructed geospatial augmented reality system. Viewing system 200 may be implemented for example in a computer 210 operated by a viewing user U2. Computer 210 is illustrated in an inset as a laptop computer; it may equally be a tablet computer, or a display connected to a larger computer system. Viewing system 200 comprises a display 230 forming part of a user interface 232. Storage 240 and/or a communications receiver 242 are provided, which receive the image data and associated position and attitude data from the capture system. The various data items are contained in files or stream 154', which has the same content as the captured stream 154, but may differ in format if desired. Similarly the data items 150', 152', 156', 158' and 160' contain the content of captured streams 150 etc. but may be in the same or a different form from that in which they were captured.
A video player module 270 receives the image data stream 250' and provides it to one input of an image synthesiser 272. The second input of image synthesiser 272 is connected to a computer image generator 274. Image synthesiser 272 receives 3-D and optionally 2-D model data from a model store 276, together with geospatial attitude information from a reference frame generator 278. In this way, computer-generated imagery is rendered based on stored models, in accordance with geospatial position and attitude information received with the image data 250'. In particular, the rendering of computer-generated imagery is synchronised with the rendering of the video imagery, using the known synchronisation between the position and attitude data and the image data that have been captured simultaneously by the capture system 100. Image synthesiser 272 combines these to create an augmented reality image and supply it to display 230 for display to the user U2.
Within the model store 276, different types of model data may be stored. At 280, one or more 3-D models are stored, representing objects that are to be displayed in a simulated form, for example for visualisation of a proposed construction. Under control of the user or pre-stored instructions, a selection 282 can be made from among alternative models, to provide object data 284 to computer image generator 274. At 286 in the model store, terrain model data is stored, representing the topography of a landscape or other environment surrounding the image capture position. Relevant terrain model data 288 is supplied to the computer image generator 274, in order that visibility constraints dictated by the environment can be applied to the object data 284.
Finally, in this embodiment, model store 276 contains mask data 290. In one embodiment, mask data is stored in 2-D form, for use with certain predefined views (views defined by geospatial position and attitude, together with field of view). This causes the release of 2-D masking data 292 for masking image data generated by computer image generator 274 when it is synthesised with the images from video player module 270. In alternative embodiments, mask data 290 can be stored as a further 3-D model, with 3-D masking data 294 being supplied to computer image generator 274. The purpose of the mask data is to allow objects represented in the object data 284 to be obscured by features that are not represented in the terrain model data 288. For example, mask data may represent buildings or trees that are too small or too local to be represented in an available terrain model, or even persons, vehicles or animals that happened to be present in the field of view at the time of image capture.
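A minimal sketch of the compositing step, including the application of such a 2-D mask, might look as follows (Python with NumPy; purely illustrative, assuming the computer image generator 274 produces an RGBA overlay and the mask data 292 is a per-pixel foreground weight):

```python
import numpy as np

def composite(camera_frame, overlay_rgba, mask=None):
    """Alpha-composite a rendered overlay onto a camera frame, suppressing
    overlay pixels where a foreground mask indicates an obscuring feature.

    camera_frame: HxWx3 uint8; overlay_rgba: HxWx4 uint8;
    mask: optional HxW float in [0, 1], 1 meaning 'foreground obscures the model'."""
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    if mask is not None:
        alpha = alpha * (1.0 - mask[..., None])   # masked pixels show the camera image
    out = (overlay_rgba[..., :3].astype(np.float32) * alpha
           + camera_frame.astype(np.float32) * (1.0 - alpha))
    return out.astype(np.uint8)
```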
At 296, gallery information is stored, representing a library of views (still or video) that can be synthesised with different selected object data at will. As mentioned in the background section, existing augmented reality systems use integrated devices and sensors to provide "live" augmented overlay from the camera image onto the display screen in real-time. Thus the novel system has much greater flexibility and convenience than existing systems in a wide range of practical situations. Of course, instead of being displayed only transiently, the synthesised video and/or still images can be stored for reproduction at a later date. Indeed, while the novel system provides great flexibility to explore both the landscape and varieties of models, the end product is likely to be selected images or video clips in an everyday file format, showing the final results of an investigation. Therefore image/video storage 298 is provided for this purpose. It goes without saying that storage 298 does not need to be physically separate from storage 240 or 276. Any of these individual storage functions may of course use shared storage hardware, and may even share with storage 140, where the capture and viewing systems 100 and 200 are combined in a single unit.

The novel reconstructed augmented reality system implemented using the capture system 100 and viewing system 200 described above provides for all the sensor data (needed to calculate and create the augmented overlay image) to be captured along with the image and stored and/or transmitted for use at a remote time and/or location. All the data from sensors 120, 122 etc. (location, time and geospatial attitude) are captured as a snapshot of the local environment, along with the still and/or video images captured by the camera 102.
Currently there are some existing cameras (and mobile devices with imaging sensors) that use geotagging to store the location within an image. The Exchangeable image file format (Exif) is a standard that specifies the formats for images, sound, and ancillary tags, hence many digital images also record individual camera and image metrics within an image file.
In order to store and associate all the sensor information with the image, two implementations are mentioned, by way of example. One is to use a non-standard extension to Exif, hence embedding all the metadata within the image file.
The other is to provide a secondary, additional associated companion ('sidecar') file, hence requiring additional management.
The Exif standard version 2.2 (JEITA CP-3451) is documented at http://www.exif.org/Exif2-2.PDF. Embedding the data simplifies data transfer, but Exif does not support tags for full geospatial attitude (even though GPS location is supported). Thus, Exif does not allow recording of, for example, the angle of the camera to the horizon and other values. In order to create "downstream" or "off-line" reconstructed augmented reality, the novel system described here provides a mechanism that goes beyond the basic Exif storage systems for image metadata.
In the other implementation, a companion file could record binary compressed data, simple separated values or XML compatible with a schema. This may be stored as an associated sidecar file or embedded in the image as XMP, the XML-based "Extensible Metadata Platform" developed by Adobe in 2001. XMP is an open-source, public standard, making it easier for developers to adopt the specification in third-party software. XMP metadata can be added to many file types, but for graphic images it is generally stored in JPEG and TIFF files.
Either of these formats will provide the desired functionality and capability of being able to record, associate and decode the camera's geospatial attitude along with the camera image(s).
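Purely as an illustration of the sidecar route (the element names below are invented for the example and are not the schema of the Appendix), a minimal writer could be sketched as:

```python
import xml.etree.ElementTree as ET

def write_sidecar(path, lat, lon, alt, quaternion, h_fov_deg, width, height):
    """Write a minimal XML sidecar associating location, geospatial attitude
    and camera metrics with a captured image."""
    root = ET.Element("capture")
    loc = ET.SubElement(root, "location")
    ET.SubElement(loc, "latitude").text = str(lat)
    ET.SubElement(loc, "longitude").text = str(lon)
    ET.SubElement(loc, "altitude").text = str(alt)
    att = ET.SubElement(root, "attitude")
    ET.SubElement(att, "quaternion").text = " ".join(str(q) for q in quaternion)
    cam = ET.SubElement(root, "camera")
    ET.SubElement(cam, "horizontalFieldOfView").text = str(h_fov_deg)
    ET.SubElement(cam, "imageWidth").text = str(width)
    ET.SubElement(cam, "imageHeight").text = str(height)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```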
There is (currently) no equivalent of the Exif standard for attaching metadata to video files, so for motion video capture, the sensor data is recorded as an associated file. The associated file will log the location and geospatial attitude in such a format that it can be time-synchronised with the image frames in the captured video. This will enable a reconstructed AR to be created from the video in a similar manner to real-time live AR, where the overlay image is calculated on-demand at a similar frame-rate to the "real world" video (many times per second). It will be understood that the associated file does not need to contain fresh location and geospatial attitude data for every frame: the camera may be static for long periods of time, or moving only slowly, so that the same location and attitude can be recorded as applying to a range of frames. The frequency of recording fresh sensor data can be determined dynamically, according to whether the camera is moving or static.
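One possible shape for such an associated log, sketched here purely for illustration (Python; the thresholds, pose layout and decision rule are assumptions, not part of any defined format):

```python
import bisect

class SensorLog:
    """Sparse, timestamped log of location/attitude samples for a video capture.

    A sample is appended only when the pose has changed appreciably or a
    keepalive interval has elapsed, so a static camera produces few records."""

    def __init__(self, angle_threshold_deg=0.5, max_gap_s=1.0):
        self.times, self.samples = [], []
        self.angle_threshold_deg = angle_threshold_deg
        self.max_gap_s = max_gap_s

    def record(self, t, pose):
        # pose is assumed to be (lat, lon, alt, yaw, pitch, roll)
        if self.samples:
            moved = max(abs(a - b) for a, b in zip(pose[3:], self.samples[-1][3:]))
            if moved < self.angle_threshold_deg and (t - self.times[-1]) < self.max_gap_s:
                return  # camera effectively static: previous sample still applies
        self.times.append(t)
        self.samples.append(pose)

    def pose_for_frame(self, frame_time):
        """Most recent recorded sample at or before the frame's timestamp."""
        i = bisect.bisect_right(self.times, frame_time) - 1
        return self.samples[max(i, 0)]
```

On playback, pose_for_frame resolves each video frame to the most recent recorded sample, which is the time-synchronisation behaviour described above.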
The additional geospatial attitude data may be stored as either raw data or processed data. In raw data form, the digital representations of all the sensors are captured, for example signals from the 3-axis gyroscope are maintained as numeric values for location and geospatial attitude. In processed data form, a composite calculation of the device location and geospatial attitude is obtained, which can be stored in a more efficient manner. In principle, however, the information content in either case is the same.
The Appendix below includes a listing of an example "sidecar" file in XML format, conveying camera metadata and geospatial positioning data for association with a camera image. This file may be stored for example in storage 242 of the viewing system 200. Explanatory text is included between the marks <!-- and -->. It will be seen that the file includes the following blocks of information, in order:
* Location information from GPS, including Altitude, Latitude and Longitude;
* Angles of correction between reference frames: Grid north and Magnetic north, and True north and Magnetic north;
* Geospatial orientation in the form of a quaternion and/or magnetometer & accelerometer data;
* Capture (camera) metadata such as image size and field of view;
* URLs for finding the image and sidecar files;
* Roll and pitch correction matrix for translating sensor orientation to camera orientation and/or for relating different sensors together. One application is to correct for misalignment between accelerometer & magnetometer, as a few degrees of misalignment can make a huge difference over a 10km viewing distance (an illustrative sketch of applying such a correction follows this list);
* Foreground traces (these will be added after capture, as described further below); and
* Capture date and time.
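Purely as an illustration of how such a roll/pitch correction might be applied (Python with NumPy; the axis conventions, the rotation order and the use of 3x3 matrices rather than quaternions are assumptions made for the example):

```python
import numpy as np

def correction_matrix(roll_offset_deg, pitch_offset_deg):
    """Small fixed rotation taking the sensor module's frame into the camera's
    frame, e.g. to compensate for how the module is mounted on the support."""
    r = np.radians(roll_offset_deg)
    p = np.radians(pitch_offset_deg)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p),  np.cos(p)]])   # pitch about the x axis
    rz = np.array([[np.cos(r), -np.sin(r), 0.0],
                   [np.sin(r),  np.cos(r), 0.0],
                   [0.0, 0.0, 1.0]])                # roll about the viewing axis
    return rz @ rx

def corrected_attitude(sensor_rotation, correction):
    """Apply the fixed correction to a 3x3 sensor attitude matrix."""
    return sensor_rotation @ correction
```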
A similar file can be stored at storage 142 when the data is captured. That file may have more raw sensor data and less processed data than the one illustrated. The processed data can be added to the file without deleting the original sensor data, for maximum flexibility.
Model store 276 may be stored within a single computer 210 that provides the entire viewing system 200. Alternatively, model store 276 may be located wholly or partly at a central server, with object model data 284 and/or terrain model data 288 and/or mask data 292/294 being served by the server as and when needed. In other embodiments, renderer 274 is also at a remote server, and the geospatial attitude is transmitted to the renderer, from which 2-D images are received over a communication link. In this way, for example, the system can access high performance processing servers, and can share the augmented reality images via a webpage or produce reports in a document format such as PDF.
In a particular implementation, the 3-D model is handled by using a remote management and delivery facility. Within the remote facility a file-system of 3-D models is stored along with a coordinate and geospatial attitude on the visualisation frame of reference. That is to say, the object model data store 280 is located remotely from other components of the viewing system 200. A lookup service on the 3-D models as PoIs can be used to find local or relevant 3-D models from the library.
For example a 3-D model 280 of a wind turbine could be stored in a 3-D file format. The location of several instances of the 3-D model could be stored in some reference index or database. This allows a remote visualisation system to query a cloud/web/internet service to determine what local information is relevant using a spatial query. The models that are relevant to the current location of the viewing computer 200 (such as within a 50 km view) may be downloaded along with the position and geospatial attitude of the model. Communications to the service can be through wireless networks such as a cellular data network, to allow live access to the models from remote locations. A local database or a cached version of the entire database can be provided so that the queries, results and model building could be done without a communications network to external services.
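The kind of lookup involved can be sketched as follows (Python; the catalogue layout and the flat-earth distance approximation are assumptions made for brevity; a real service would use an indexed spatial query):

```python
import math

def nearby_models(observer_lat, observer_lon, catalogue, radius_m=50000.0):
    """Filter a model catalogue down to entries within radius_m of the viewer.

    catalogue is assumed to be an iterable of dicts with 'lat', 'lon' and
    'url' keys pointing at stored 3-D model files."""
    results = []
    for entry in catalogue:
        # Equirectangular approximation: adequate for a local lookup radius
        x = math.radians(entry["lon"] - observer_lon) * math.cos(math.radians(observer_lat))
        y = math.radians(entry["lat"] - observer_lat)
        if math.hypot(x, y) * 6371000.0 <= radius_m:
            results.append(entry)
    return results
```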
As mentioned, in addition to the 3-D models, a 3-D terrain or landscape model 286 is provided in order to determine if the PoI object represented in object model store 280 is visible. The terrain model comprises for example a grid of geolocated heights.
The terrain model may be stored within viewing system 200, or may be downloaded on demand from a remote server. A local digital terrain model (DTM) can be created and then the observer and the 3-D PoI digitally located within the model. When viewing the AR visualisation from the observer's location within the model, the 3D PoI will be appropriately masked by the intermediate terrain, giving a more realistic impression of the object within the landscape.
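An illustrative line-of-sight test against such a DTM might look as below (Python; the sampling approach and the local metric coordinate frame are assumptions chosen for simplicity):

```python
def point_visible(observer, target, terrain_height, samples=200):
    """Return True if target is visible from observer over a terrain model.

    observer, target: (x, y, z) in a local metric frame; terrain_height(x, y)
    returns the ground elevation from the DTM grid (e.g. by bilinear interpolation)."""
    ox, oy, oz = observer
    tx, ty, tz = target
    for i in range(1, samples):
        f = i / samples
        x, y = ox + f * (tx - ox), oy + f * (ty - oy)
        line_of_sight_z = oz + f * (tz - oz)
        if terrain_height(x, y) > line_of_sight_z:
            return False        # intermediate terrain masks the modelled point
    return True
```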
The use of 3-D modelled structures and geolocated terrain model integration, loaded from remote services, allows for flexibility and ensures that the mobile viewing system remains up-to-date and accurate. For example, design changes or new variants of a proposed construction can be made and immediately viewed by augmented reality on the site. Terrain models may be sourced commercially, and storing a complete terrain model within the portable viewing system may be prohibitively expensive, in licence fees as well as in volume of data.
Also shown in Figure 2 is a further communication link 300 for communicating the synthesised augmented reality images to a third display 302, where they can be viewed by a third user U3. In other words, the rendering of the models and the synthesising with captured images does not have to happen on the computer that is being viewed by the end user. The viewing system 200 may rather become a rendering system, with the end user viewing at another location. The other location could even be back on the site. The third display may be a passive terminal, or may provide a user interface by which the user U3 can select different models at 282, or even pass commands to the capture system 100, in the case of simultaneous capture. In another scenario, the systems 100 and 200 are combined into a single capture and rendering unit, with remote viewing by display 302.
The capture and viewing systems 100, 200 illustrated in Figures 1 and 2 enable novel embodiments and applications of what we shall call reconstructed geospatial augmented reality. Some of these applications will now be described. Depending on the application, capture and viewing systems may be implemented with one or more features omitted, or with the features integrated in different combinations in different hardware. For example, it will be seen that the viewing system may be the same hardware as the capture system, but with the generation and viewing of the augmented realities displaced in time. In such case, storage 140 and storage 240 may be the same, while the communication interfaces 142 and 242 may be redundant and need not be provided. As another example, the capture system 100 and viewing system 200 may be at physically remote locations, but the viewing user U2 watches the augmented reality in real-time, simultaneously with the video images being captured by the capture user U1. In such case, the communication interfaces 142, 242 are used, while storage 140, 240 may be redundant.
Any of the storage 140, 240, 298 can be made removable to allow transfer and reproduction of captured data, models, gallery and synthesised images or videos.
The synthesised images and videos may be supplied as commercial products, either on physical storage media or by electronic file transfer. Program instructions (software) for causing a tablet computer or other computer to implement the features of the embodiments can be delivered on a removable medium and/or delivered by electronic file transfer. In a commercial embodiment, the software may be pre-loaded on a computer device, with both hardware and software being supplied as a self-contained unit. Each of these implementation and/or application scenarios brings a particular set of technical requirements or problems, and the following discussion of the examples presents also example solutions. While the applications to be described generally refer to the example of visualising new constructions in a landscape, the same system and principles can be applied to many other situations, both exterior and interior.
Embodiment/Application example: Reconstructed Augmented Reality
Exploiting the features of the novel systems described above, reconstructed augmented reality enables the augmented reality overlay to be calculated and presented within the image independently from the original capture. This frees the user from restrictions of existing augmented reality systems that are based on an interaction of the live sensors. The augmented reality can be "reconstructed" for one or more users at a later time.
In the first case, reconstructed augmented reality allows for different modelled points of interest or structures to be displayed on an image based on different dynamic lookups from a source of object model data, as described above. By using the AR-sensor data captured by capture system 100 along with the image data, the viewing system 200 can know what location and geospatial attitude are associated with the image (even with each frame in a motion video). The stored location and geospatial attitude are thus made available to be combined with the modelled structure in the same method as with live augmented reality, in order to create an augmented view for a user.
An example of use could be in visualising different 3D models of a wind farm development. At an early point in development, a preliminary design could be used in-field to assess impact, and at each location the image and sensor values captured.
A subsequent redesign of the wind farm layout could be viewed with reconstructed AR, in order to compare different designs rapidly and efficiently, without having to return to the field and relocate an image location or environment, but having a digital record of the sensor capture. In other words, the integration of pre-captured imagery, associated sensor information and AR processing can be used to create flexible reconstructive AR visualisations on existing imagery with dynamic modelled structures.
In implementing such a system, certain technical issues should be addressed. Firstly, of course, geospatial augmented reality requires captured imagery, image location & geospatial attitude, 3-D models and model location & geospatial attitude all to be combined. This is the same process as with live AR, however, and can be based on known techniques. Secondly, the camera metrics which define for example the image field of view (stored as part of the standard image metadata) should be used in association with a current display screen (for example, display 230), to ensure that appropriate stretch and distortion are applied. The display ratios of the captured image and assorted reconstructed display devices may all be different, so dynamic processing and adjustment of the 3D image into the augmented reality display must be calculated and applied on each viewing. Rendering of captured images by player module 270 and rendering of computer-generated images by computer image generator 274 both need to be performed with regard to these parameters.
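For illustration only, the projection and rescaling concerned might be sketched as below (Python; a simple pinhole model and uniform scaling are assumptions, and a real implementation would letterbox rather than stretch when the aspect ratios differ):

```python
import math

def project_to_display(point_cam, h_fov_deg, image_w, image_h, display_w, display_h):
    """Project a point (x right, y down, z forward, camera coordinates) into
    pixel coordinates of the captured image, then rescale to the display.

    The focal length in pixels follows from the recorded horizontal field of
    view; the vertical field of view is implied by the image aspect ratio."""
    x, y, z = point_cam
    if z <= 0:
        return None                              # behind the camera
    f_px = (image_w / 2.0) / math.tan(math.radians(h_fov_deg) / 2.0)
    u = image_w / 2.0 + f_px * x / z
    v = image_h / 2.0 + f_px * y / z
    # Rescale into the (possibly differently sized) reconstruction display
    return u * display_w / image_w, v * display_h / image_h
```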
Embodiment/application example: Location-Offset Reconstructed Augmented Reality
In this embodiment, camera images and geospatial attitude data are sent to and presented on a separate, remote display screen, more or less simultaneously with their capture.
The hardware in this environment could be for example a high-quality lens camera (such as a digital SLR) with image communication APIs linked to a remote laptop by cable/wireless. For example, current guidelines for imaging for photomontages for planning matters in Scotland require the use of an SLR camera with a 50mm or 75mm lens for landscape and visual impact analysis image capture. Laptop, phone or tablet cameras do not comply with this specification, but conversely, the SLR camera typically lacks built-in geospatial attitude sensors and/or GPS position sensors.
Separate MEMS (micro-electromechanical systems) sensors are then aligned with the camera imaging plane through the common support 106 in order to record the other sensor information required for reconstructive AR. The design and data representation need to take into account the physical relationship between the imaging data capture and the location and geospatial attitude data capture, for example the way the sensor module 104 is mounted on tripod 106 in the example of Figure 1 (b).
Also, SLR cameras may require skilled operation to achieve the best image quality and composition, while a wide range of persons may want to view and/or create customised augmented reality views. Similarly, the capture of wanted images may require the capture user to access inconvenient or even hazardous locations.
With the technique of Location-Offset Reconstructed AR disclosed herein, the practicalities of image capture are separated from the time and/or place of viewing a 3-D geospatial AR visualisation. Visualisation software, which may be similar to software known and used already in live AR systems, can be implemented in remote devices in order to achieve an AR visualisation at any location.
In one embodiment, from a remote computer such as viewing system 200, a capture request can be sent via software and hardware interfaces to record image and sensor data at an instance in time. The capture user U1 need not be present or involved in control of the capture system. The camera and sensors can for example be on a remotely operated vehicle in that case. As soon as the image and associated environmental information is captured and transferred, the user of the viewing system 200 can use the image with the associated location and geospatial attitude information to create the augmented reality image on a display screen or device that is remote from the actual imaging camera and sensors.
Remote Reconstructed AR allows the integration of high-quality imaging hardware and sensors at a location independent of the AR viewer. This brings benefits of, for example, hardware compliance and remote access safety. For implementation of this embodiment the designer should of course attend properly to hardware management, to provide smooth, near-live transmission of the image and associated location and geospatial attitude information. The data inputs must be accessed across streams or files that are delivered and synchronised. Where the previous application example allows for files to be saved and processed using conventional file systems and data transfer, in this case more specialised file handling and/or streaming protocols may be required, in order that the data transfer and re-alignment or synchronisation happen substantially in real-time.
Existing systems for requesting and transferring imagery from remote cameras already exist (examples include the Canon and Nikon "Camera Control" APIs and SDKs). A parallel API can be provided, in order that the location and geospatial attitude sensor information can be accessed, for example through a hard-wired (eg USB) or Wi-Fi link to the MEMS sensors.
Embodiment/application example: Capture and Associate All Sensor Information with a Video (Images)
As already mentioned above, existing augmented reality systems use integrated devices and sensors to provide "live" augmented overlay from the camera image onto the display screen in real-time. The embodiments described above cover the capture of sensor information for reconstructed AR based on a static (still) image and image data. The present example extends the teaching to capture of sensor information aligned with a series of images - i.e. motion video.
As mentioned already, for motion video captures, sensor data can be recorded in an associated file. In future, of course, a video file format may be defined that has fields within it for such data. In the case of a motion video signal, the augmented reality image can be displayed to the user of the viewing system 200 while further frames of the same video are still being recorded by the capture system 100.
Alternatively, for live video transmission, a live transmission of the sensor-AR location and geospatial attitude can be used to create a transient remote augmented reality. With video data, the location and sensor data for location and geospatial attitude must also be captured and transmitted (implicitly or explicitly) for each frame of the video stream. This data could be as a sidecar file or an embedded Extensible Metadata Platform file. Where the video data is transmitted as a stream in real-time, the sensor data can be transmitted as a "sidecar stream", rather than video and sensor data being contained within finite files. It will be understood that real-time streams of data may be implemented as sequences of individual files, but packaged and delivered using stream management protocols.
Embodiment/application example: Dynamic Reconstructed Augmented Reality
Following on from the previous two embodiments, it is possible to both store sensor-AR information associated with a video file and also use a remote recording device to stream the AR onto a separate display.
Thus a practical system may include the functions of more than one of the above examples, so that it may be used, for example, in either the Live Location-Offset Reconstructed AR mode or the Offline Subsequent Reconstructed AR mode. In the Live Location-Offset Reconstructed AR mode, with a remote viewing display screen and communications link 142/242, it is possible to display augmented overlays onto the display much as in an integrated device, but with the advantages of remote operation for practicality, safety etc. In the Offline Subsequent Reconstructed AR mode, alternative geolocated 3-D structure models and terrain can be used to visualise alternative augmented reality views of a development overlaid on video. The video could be captured from a static location with a moving camera (as in a panorama), or from a moving viewpoint such as the view from a vehicle or train.
User Interface Example
Figures 3 to 11 show examples of a user interface such as the user interface 232 on the display 230 of the viewing system 200 in Figure 2. Further, it is assumed that the viewing system 200 is integrated in a portable device with the capture system 100, so that full functionality of both systems is accessible. Of course, individual devices may have subsets of the functionality, as already mentioned.
Figure 3 shows a "home' screen interface, which is designed for use on a touch tablet computing device 500. The user interface features several different screens which will be described. Generally the screens have a common "look and feel' with panel 502 at the left hand side for a column of list items, and a larger panel 504 to the right hand side for a map or image/video display. A status bar 506 is visible on every page, with common status items such as clock, battery, connectivity and the like. An action bar 508 includes a page title & common actions (for example refresh, zoom and other view control options). A navigation bar 510 is also visible on all pages.
Using the action bar, a user can select different views. Views available are: Nearby, All, and Nearby with my captures. Action buttons are: Refresh, Search, Location correction. On the right-hand panel 504 we see a map including the device's location 511, derived from sensor data, and a project location 512. In the left column 502 we see a number of different project items 513-516. (Place names in this example are genuine, but the projects are fictitious.) The content of this list and/or the appearance of an item changes based on whether there is already a capture or not. Choosing an item in the left column expands the item to give more details and highlights it on the map. A button 518 can be used to go to a "project details" screen for one of the projects.
Figure 4 shows the "project details screen" for item 514 in Figure 3. The status bar and the general layout of the same as the home screen. In the right-hand panel, shows a present location 511 and the location of project features 512 such as, for example, wind turbines. On the action bar, the project name appears, and a "back" button 520, for reverting to the home screen. The left panel provides a horizontal scroll bar 522 to change project versions. e.g. vi, v2, v3 as tabs. Project details [for the present version) are shown at the top of left panel. Where there is a recent capture, this will be shown at the top right of the panel. A turbine list is shown as a shown as map expanding list Choosing an item 524 expands to revea' more details and highlights it on the map. The list will scroll, if such spaces required. Lower in the left-hand panel there are links to three more details views: Tlive view'T, Tfly through" and "gallery" Figure 5 shows a "relocate" screen which can be used to access different viewpoints iO of the same project In the right-hand panel, a map is shown, marked with a present location 532 (from GPS) and alternative viewpoints such as 533. The map view can be switched to a satellite view, in a conventional manner. In the left-hand panel, more detail of the availaNe viewpoints can be seen, with a button 534 to override the GPS with a selected location. The current location is displayed, including a look iS Address. Buttons can be provided for storing/cancelling different viewpoints.
Viewpoints can be freely chosen, while predefined locations are also loaded as part of the project data. These may be, for example, locations which have been identified from technical requirements imposed by the Local Authority. These locations are listed at 535, with names starting "VP1", "VP2" etc. The UI may also allow the user to add a viewpoint to this list when they are out on site.
The "relocate" function has a variety of purposes. It does not allow the user to capture video from the other location) and they cannot view video from that location if it has not been captured. However, one application is to go in "virtual reality" to that location. For example, using appropriate terrain model data 288, the computer image generator 274 can produce a textured or wireframe view to see if the modelled structure such as turbines would be visible from that viewpoint The user can then decide whether to that viewpoint should be visited to take a capture. The Relocate function also allows the user to enter a trusted GPS location if they don't trust the accuracy of the built in GPS location. It may only correct by a few meters, but that may be important in some application.
Figure 6 shows a "my view" which is a live camera view from the present location or from a relocated/override location. On this screen, a selective version of a selected project is rendered as an augmented reality image from the position of the user. The model data for the project, in this case showing three wind turbines 536, is rendered into a 2-D image, based on the position and geospatial attitude of the camera used to take the view. This may be a camera in the back of the tablet computer, or it may be a separate camera, as seen in Figure 1 (b). The combination of the camera image, together with the terrain model and the geospatial attitude and position data from the sensors allows the computer to render the wind turbines in the correct position in the camera image of the landscape. By use of the terrain model in particular, the computer image generator 274 is able to ensure that the model wind turbines 537 appear in front of background landscape features 537, but are obscured by foreground landscape features 538.
A sidebar 540 can be hidden or opened to view/edit parameters of the displayed image. For example, the project version can be changed, or even the project itself. In view of the wind turbine application, sliders are provided for altering the wind speed and wind direction of a simulation. (These alter the attitude and rotational speed of the turbine blades.) Other controls are provided to activate/deactivate still image capture (541) or video capture (542), and to enable/disable various terrain model display overlays (544). These may also have PoI (point of interest) labels. In the case where the device in hand includes the capture system 100, naturally captures are taken from the present location. The device in hand may also be communicating with one or more remote capture systems 100, so that the capture location is the location of a selected capture system 100.
Figure 7 shows a "fly through" screen. This provides in a main panel 546 virtual reality overview of the area around the project, based entirely on the terrain model data. As mentioned already in reference to the Relocate screen, such computer-generated imagery can be useful in selecting sites for real image captures. This can be supplemented with aerial or satellite imagery if available, similarly to the well-known Google Earth ® system. It does not use the camera view so does not use the novel data capture systems. . Controls are similar to the "live view" screen, except that capture functions are not provided.
Figure 8 shows the "gallery" screen. A drop-down selector 550 allows different views, such as "all" "my captures". The screen in the state shown is substantially filled with a great of square thumbnails 552 of available images/videos. In the present embodiment) the available images/videos include one is stored locally on the device, and also others from a published image service. Various details can be shown under each item, including for example its position and address. Buttons can be provided also to change the current project) noting that different projects may be visible in the same field of view presented by one of the image/video captures.
Sharing of captured images between projects is enabled or not according to implementation rules and goals. A set of captured images (with sensor data) may be associated with a single project (and any of its versions), and so not be visible to any other projects. However, if a user has two different development projects next to each other, they may want to take one set of captured images for Project A and then reuse them for Project B. This can be addressed in the database structures and UI design.
Figure 9 shows a static (still image) view screen. A main part of the screen is taken up with an image of a landscape with superimposed modelled objects (wind turbines). The landscape view is one that has been captured previously and retrieved from the gallery. As in the case of the "live view" screen, as the viewpoint represented in the camera image changes, the object model and terrain model are used by the computer image generator 274 to ensure that the wind turbines appear correctly positioned, in front of background landscape features 570, but obscured by foreground landscape features 572.
As may be appreciated, the terrain model is a bare earth model, and limited in resolution. It will not contain every detail of the foreground in a scene (trees, houses etc.), leading to anomalies in the rendering of the composite image. To allow for manual correction of such anomalies in a static image, the user can press a "start trace" button 574. This takes the user to a "foreground mask" screen.
Figure 10 shows the "foreground mask" screen in use. The camera angle from which the scene has been taken includes foreground objects [trees and buildings] which are not represented in the terrain model.
The need for masking is illustrated by one of the trees, labelled 576, for which no mask has yet been defined. One of the wind turbines, labelled 578, has been rendered by the computer image generator 274 and superimposed on the camera image by the image synthesiser 272, so as to appear wrongly in front of the tree 576. The same happened originally with the tree labelled 580 and a house. However, using a drawing tool 584, the user has traced a mask pattern around the tree 580. As seen in Figure 2, this mask pattern is represented in mask data 290. The mask pattern is associated with the particular 2-D view shown in Figure 10, and a mask signal 292 has been generated to prevent the wind turbines from being displayed in the masked area. With respect to the tree 580, the image now appears correct. A pop-up panel 586 allows the user to manage the creation, editing and deletion of masks for objects in each view, and also for each project. As can be seen, an entry entitled "Tree1" has been created, corresponding to the tree 580. An entry entitled "House" has been created, and is in the course of editing using the drawing tool 584.
A dotted outline 582 is being drawn around the house by hand. Once the house mask pattern has been drawn, it can be saved, and a mask pattern for tree 576 can be added by clicking on the "+ New" button 590.
While in principle it would be an option to create 3-D models of features such as trees and buildings in order to supplement the terrain model, it will be simpler in many cases simply to create a foreground mask appropriate to a particular 2-D view. This is what is provided by the "foreground mask" screen in the present embodiment. Note, however, that the foreground mask can be used with any images taken from the same viewpoint, by panning it according to the camera angle, and/or scaling it according to the field of view. In other words, as long as the actual location of the observer does not move, the viewing system can use that 2-D mask without a mask being created manually for each frame. This is useful in motion video captures. So long as the camera is located at the same coordinate, the same mask can be used in generating an augmented reality video, even when the pitch/roll/yaw are recorded as changing.
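A minimal sketch of this mask re-use is given below; it assumes a simple pinhole-style mapping of angular change to pixel offset and small changes of yaw and pitch, which is an approximation of the true projection. The mask is held as a polygon of pixel coordinates, as produced by the tracing tool.

def pan_mask(polygon_px, d_yaw_rad, d_pitch_rad, hfov_rad, vfov_rad, width_px, height_px):
    # convert the change in camera angle into an equivalent pixel shift of the mask
    px_per_rad_x = width_px / hfov_rad
    px_per_rad_y = height_px / vfov_rad
    dx = -d_yaw_rad * px_per_rad_x     # panning right shifts the scene (and mask) left
    dy = d_pitch_rad * px_per_rad_y    # tilting up shifts the scene (and mask) down
    return [(x + dx, y + dy) for (x, y) in polygon_px]

Scaling of the polygon about the image centre can be applied in the same way when the field of view (zoom) changes between captures.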
In summary, then, the "foreground mask" screen allows 2-D masking details to be added, which is supplementary to the terrain model. As mentioned ah'eady, 3-D models could be added supplementary to a downloaded terrain model, if desired.
This would be particularly useful if a motion video sequence is to be presented from a moving viewpoint, which otherwise might require the user to draw a 2-D mask pattern around objects in each frame of the video. The mask editing functions can be enhanced, to facilitate copying masks from one image to a similar image. Even where a 3-D model of an environment is available in addition to bare earth terrain data, the fine detail of a 2-D foreground mask is likely to be useful.
Figure 11 shows a "settings" screen. In the left-hand panel various tabs 592 are provided to access different categories of settings and "about" information.
Further Fields of Use:
The following fields of use can be envisaged for the novel system in its various forms.
Utilities & Infrastructure/Monitoring & Management: For remote survey of utilities by unmanned vehicles, for example but not limited to unmanned airborne vehicles (UAVs), one can equip the UAV with AR sensors and store and/or transmit the location and geospatial attitude along with the video feed. A 3-D model of a utility installation such as a pipeline or railway line can be superimposed on the video images to aid interpretation.
Security & Surveillance/Command & Control: In command and control, integrating the locations of personnel from their tracking devices into a geolocated frame of reference can be combined with helicopter surveillance camera and sensor feeds. A video feed can be augmented to show geospatial-based augmented information to identify individuals within the image. (In such an application, the 3-D model data is not static and hypothetical, but rather derived from other 3-D sensor data. This illustrates the range of object data that can be used within the principles of the present invention.)

Health & Safety/Search & Rescue: A remote-controlled vehicle (ROV) having geospatial sensors and video communication can integrate with a geospatial modelled structure source or full 3-D geospatial model/BIM (building information modelling) to assist in navigation and exploration of dangerous locations.
In search and rescue, this could be used by an ROV within an unsafe building or landscape (fire, structural, radiation). Remote reconstructed AR could assist in search and rescue, enabling navigation with AR information assisting the operator in an unfamiliar environment. Naturally, when extending the application to places that do not have GPS signals, other ways of identifying the capture location are required.
The reconstruction of video imagery requires alignment not of a single image frame, but of a continuous live stream with the location and sensor information, plus the 3-D data to augment, in order to create the visualisation experience. Therefore, for such applications, the transfer APIs for video streams and sensor data streams must be tightly synchronised to ensure accurate and timely reconstruction. Time codes can be included implicitly or explicitly in the streams to allow re-synchronisation after transmission. This allows the synchronisation requirements of the transmission channel to be relatively relaxed.
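A minimal sketch of such re-synchronisation on the receiving side is shown below, assuming that each video frame carries a time code and that the sensor records are of the form produced by a sidecar stream (a timestamp plus location and attitude). Each frame is simply matched to the nearest sensor record in time; interpolation between records would be a refinement.

import bisect

def align_records_to_frames(frames, records):
    # frames: list of (frame_index, timestamp); records: time-sorted list of dicts
    if not records:
        return {}
    times = [r["timestamp"] for r in records]
    aligned = {}
    for frame_index, t in frames:
        i = bisect.bisect_left(times, t)
        # choose whichever neighbouring record is closer in time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(records)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        aligned[frame_index] = records[best]
    return aligned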
The descriptions above are intended to be illustrative, not limiting. Thus, it will be apparent to one skilled in the art that various modifications may be made to the invention as described without departing from the spirit and scope of the invention.
APPENDIX
The following is a listing of an example "sidecar" file in XML format, conveying camera metadata and geospatial positioning data for association with a camera image. Explanatory text is included between the marks <!-- and -->.
<?xml version="l. 0" encodi ng="utf-8'"?> <Pub] i shedimage> c!--Location Information --> <Locati onMode>From GPSc/Locati onMode> <CurrentLocation> <Al ti tude>3.2616081237792969</Altitude> cLati tude>56. 115798121703662</Lad tude> <Longi tude>-3.9351607000956457</Longitude> </CurrentLocati on> <Hon zontalAccuracy>10</Hori zontalAccuracy> <Verti calAccuracy>19</Verti calAccuracy> <FormattedAddress>-52 upper Craigs, Stirling, FK8 2.c/Fo rmattedAdd ress> <cal cul atedAl ti tude>18. 6m</cal cul atedAl ti tude> c!--Geographical correction information --> <!--Convergence: Angle of correction between Grid north and Magnetic north --> <Convergence>-1.60677266</Convergence> <ConvergenceAlti tude>0</ConvergenceAlti tude> <ConvergenceLatitude>56. 1154</ConvergenceLatitude> <ConvergenceLongi tude>-3. 9352c/convergenceLongi tude> <!--Declination: Angle of correction between True north and magnetic north--> <Dec11 nati on>-3.93</Declination> <DeclinationAltitude>0c/DeclinationAltitude> <Decl i nati onLati tude>56. 1154</Dec] i nati onLati tude> <Decli nationLongi tude>--3. 9352</Dec] i nationLongitude> <Dec] inationsecularvaniation>10. 48</Dec] i nationSecular Van ati on> <Hei ghtAbovecround>1. 7</Hei ghtAboveGnound> <!--The Geospatial orientation as a Quaternion--> <Attitude > cw>0.09194812</w> cx>-0. 9841044</x> <Y>-0. 06990147 38c/Y> <z>-0. 134899244c/z> </Atti tude> <!--The heading of the device (derived from Attitude) --> <Headi ng>345.0571</Heading> <!--The inclination of the device (derived from Attitude) --> <mci i nati on>-3-cz/Incl i nati on> <!--Capture Metadata --> cFul lCaptureHei ghtPx>1936</Ful lCapturel-Iei ghtPx> <FullCaptureHorizontalFieldOfView>-0.7714355</FullCaptu
reHorizontal FieldOfview>
<FullCaptureVerticalFieldofView>0.5934119</FullCapture verti cal Fi el dofvi ew> <Full CaptureWi dthpx>2 592</Full Captu reWi dthpx> cHorizontalViewAngleRadins>0.7714355c/HorizontalViewAn gl eRadi ns> cverticalviewAngleRadins>0.5934119</verticalviewAngleR adins> cprevi ewAspectRatio>1. 3c/Previ ewAspectRatio> <Previ ewi-iei ghtPx>1280</rrevi ewuei ghtPx> <Previ ewWi dthPx>720</Previ ewWi dthPx> c!--Published locations--> <ImageURL>http://ardevt. blob. core.windows. net/ventusar up] oads/dSd4Ob3c-39a6-4b70-8ab3-b2cOdfO3fd9d.1 pgc/ImageURL> <Xm1IJRL>http://ardevt.b]ob.core.windows.net/ventusarup 1 oads/ba407b60-fbe2-46d3-9fOe-d668bd93b096 xm] c/Xml URL> <!--Rol] and pitch correction matrix --> <Ro]]Correction xni]ns d2pl="http://schemas datacontract.org/2004/07/Pl atforniinterface.Data"> <M11>1</M11> <M12>O</M12> <M13>O</M13> <M14>O</M14> <M21>O</M21> <M22>1</M22> <i'i2 3>O</rvi2 3> <M24>-O</M24> <M31>O</M31> <M32>O</M32> <M33>-1</M33> <M34>O</M34> <M41>O</M41> <M42>O</M42> <M43>O</M43> <M44>1</M44> </Ro]lCorrection> <!--Foreground traces --> <Trace> <[nab] eTrace>true-c/Enab] CT race> clrace>185, 339, 185, 339, 324, 3 58, 324, 3 58, 361, 364 361, 364 35,396,373,396,373,430,385,462,398,462,398,491,412,491,4 699, 836, 699, 840, 708, 840, 708, 844, 723, 844, 723, 845, 734, 8 5,814,804,814,804,806,817,806,817,797,828,797,828,781,8 92,368,770,368,770,348,738,348,738,335,717,335,717,320 15,352,291,352,185,339c/Trace> c/Trace> cDateCaptured>2013-09- 27T13:14:01.204812+00: 00cz/DateCaptu red> </Publ i shedirnage>
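By way of further illustration only, the short sketch below (in Python, using the standard xml.etree module, and following the element names in the listing above) shows how a viewing system might read such a sidecar file and recover the camera location and attitude quaternion for use by the computer image generator. It is a minimal sketch, not a definition of the file format.

import xml.etree.ElementTree as ET

def read_sidecar(path):
    root = ET.parse(path).getroot()              # the <PublishedImage> element
    loc = root.find("CurrentLocation")
    location = (
        float(loc.findtext("Latitude")),
        float(loc.findtext("Longitude")),
        float(loc.findtext("Altitude")),
    )
    att = root.find("Attitude")
    # quaternion (w, x, y, z) giving the geospatial attitude of the camera
    attitude = tuple(float(att.findtext(tag)) for tag in ("W", "X", "Y", "Z"))
    heading = float(root.findtext("Heading"))    # derived value, in degrees
    return location, attitude, heading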

Claims (27)

1. CLAIMS
1. An image capture apparatus for supporting the generation of augmented reality images comprising: -a camera for capturing images of a scene in digital form, the camera defining a movable reference frame; -a location sensor mounted to move with the camera for measuring an absolute location of the camera in three-dimensional space at the time of capturing said images; -an attitude sensor mounted to move with the camera for measuring an attitude of the camera in three dimensions at the time of capturing said images; and -a data processor for associating location data and attitude data from said location sensor and attitude sensor as metadata with image data representing the images, and delivering the image data and metadata together to at least one of a storage device and a communication channel.
  2. 2. An apparatus as claimed in claim 1 wherein said location sensor is a satellite positioning system receiver.
  3. 3. An apparatus as claimed in claim 1 or 2 wherein the attitude sensor comprises at least a magnetometer and an accelerometer for measuring attitude relative to the Earth's magnetic field and gravity.
  4. 4. An apparatus as claimed in claim 3 wherein the attitude sensor further comprises a gyroscopic sensor for detecting rotation about one or more axes, the attitude data being calculated using a combination of signals from the gyroscopic sensor, magnetometer and accelerometer.
5. 5. An apparatus as claimed in any preceding claim wherein the camera is a motion video camera and the location and attitude data records variations in location and attitude associated during a motion picture sequence represented in the image data.
  6. 6. An apparatus as claimed in any preceding claim wherein said data processor is further arranged to include field of view data within said metadata.
  7. 7. An apparatus as claimed in any preceding claim wherein said data processor and camera are integrated within a portable computing device such as a tablet or phone computer.
8. 8. An apparatus as claimed in any of claims 1 to 6 wherein said data processor is within a portable computing device such as a tablet computer, while the camera is a separate unit.
  9. 9. An apparatus as claimed in claim 7 or 8 wherein said location sensor and attitude sensor are integrated with said data processor in said portable computing device.
  10. 10. An apparatus as claimed in any preceding claim further comprising: -a source of 3-D model data representing one or more structures to be visualised within said scene by augmented reality, the model data specifying locations for said structures in said three-dimensional space; -a computer image generator for rendering 2-D images of said structures from the 3-D model data; and -a composite image generator for combining the rendered images of said structures with camera images reproduced from the image data, wherein the computer image generator is arranged to use said metadata to control the rendering of images of said structures such that the 2-D images rendered from the 3-D model data when combined with the camera images appear correctly located in said three-dimensional space.
  11. 11. An apparatus as claimed in claim 10 wherein said source of 3-D model data is operable to supply alternative sets of 3-D model data selected by a user, whereby said computer image generator and composite image generator are operable to repeat the rendering of 2-D images using different sets of model data, while using the same metadata to provide a visualisation of different structures in combination with same camera images.
  12. 12. An apparatus as claimed in claim 10 or 11 further comprising a source of 3-D topographic model data, the topographic model data being used by said computer image generator in combination with said metadata so that the 2-D image rendered from the 3-D model data appear correctly obscured by foreground features, the foreground features being features of structure and/or landscape that are located in said three-dimensional space between the camera location represented in the metadata and a part of the modelled structure.
  13. 13. An apparatus as claimed in claim 12 further comprising a source of masking data and wherein at least one of said computer image generator and said composite image generator is responsive to the masking data to simulate obscuration of the modelled structure by features not represented in the topographic model data.
14. 14. An apparatus as claimed in claim 13 wherein said masking data is generated with reference to one or more specific camera locations.
  15. 15. An apparatus as claimed in claim 14 further comprising a user interface by which a user can create or edit elements of said masking data by drawing around features reproduced in a selected camera image.
  16. 16. An apparatus as claimed in any of claims 10 to 15 wherein said source of 3-D model data is a communication link to a remote server, whereby said 3-D model data can be updated remotely.
  17. 17. An apparatus for use in generating augmented reality images, the apparatus comprising: -a source of image data representing in digital form images of a scene taken by a camera; -a source of 3-D model data representing one or more structures to be visualised within said scene by augmented reality, the model data specifying positions for said structures in said three-dimensional space; -a computer image generator for rendering 2-D images of said structures from the 3-D model data; -a composite image generator for combining the rendered images of said structures with camera images reproduced from the image data; and -a source of metadata by which a camera location and a camera attitude are associated with each of the camera images represented in the image data, wherein the computer image generator is arranged to control the rendering of images of said structures such that the 2-D images rendered from the 3-D model data when combined with the camera images appear correctly located in said three-dimensional space.
  18. 18. An apparatus as claimed in claim 17 wherein said source of 3-D model data is operable to supply alternative sets of 3-D model data selected by a user, whereby said computer image generator and composite image generator are operable to repeat the rendering of 2-D images using different sets of model data, while using the same metadata to provide a visualisation of different structures in combination with same camera images.
  19. 19. An apparatus as claimed in claim 18 or 19 further comprising a source of 3-D topographic model data, the topographic model data being used by said computer image generator in combination with said metadata so that the 2-D image rendered from the 3-D model data appear correctly obscured by foreground features, the foreground features being features of structure and/or landscape that are located in said three-dimensional space between the camera location represented in the metadata and a part of the modelled structure.
  20. 20. An apparatus as claimed in claim 19 further comprising a source of masking data and wherein at least one of said computer image generator and said composite image generator is responsive to the masking data to simulate obscuration of the modelled structure by features not represented in the topographic model data.
21. 21. An apparatus as claimed in claim 20 wherein said masking data is generated with reference to one or more specific camera locations.
  22. 22. An apparatus as claimed in claim 21 further comprising a user interface by which a user can create or edit elements of said masking data by drawing around features reproduced in a selected camera image.
  23. 23. An apparatus as claimed in any of claims 17 to 22 wherein the camera is a motion video camera and the location and attitude data records variations in location and attitude associated during a motion picture sequence represented in the image data.
  24. 24. An apparatus as claimed in any of claims 17 to 23 wherein said data processor is further arranged to include field of view data within said metadata.
25. 25. An apparatus as claimed in any of claims 17 to 24 wherein said data processor and camera are integrated within a portable computing device such as a tablet computer.
  26. 26. An apparatus as claimed in any of claims 17 to 24 wherein said data processor is within a portable computing device such as a tablet computer, while the camera is a separate unit.
27. 27. An apparatus as claimed in claim 25 or 26 wherein said location sensor and attitude sensor are integrated with said data processor in said portable computing device.

Amendments to the claims have been made as follows

CLAIMS

1. An image capture apparatus for supporting the generation of augmented reality images comprising: -a camera for capturing images of a scene in digital form, the camera defining a movable reference frame; -a location sensor mounted to move with the camera for measuring an absolute location of the camera in three-dimensional space at the time of capturing said images; -an attitude sensor mounted to move with the camera for measuring an attitude of the camera in three dimensions at the time of capturing said images; and -a data processor for embedding location data from said location sensor as metadata in image data representing each of the respective images and associating attitude data from said attitude sensor as metadata with said image data, and delivering the image data and metadata together to at least one of a storage device and a communication channel.

2. An apparatus as claimed in claim 1 wherein said location sensor is a satellite positioning system receiver.

3. An apparatus as claimed in claim 1 or 2 wherein the attitude sensor comprises at least a magnetometer and an accelerometer for measuring attitude relative to the Earth's magnetic field and gravity.

4. An apparatus as claimed in claim 3 wherein the attitude sensor further comprises a gyroscopic sensor for detecting rotation about one or more axes, the attitude data being calculated using a combination of signals from the gyroscopic sensor, magnetometer and accelerometer.

5. An apparatus as claimed in any preceding claim wherein the camera is a motion video camera and the location and attitude data records variations in location and attitude associated during a motion picture sequence represented in the image data.

6. An apparatus as claimed in any preceding claim wherein said data processor is further arranged to include field of view data within said metadata.

7. An apparatus as claimed in any preceding claim wherein said data processor and camera are integrated within a portable computing device such as a tablet or phone computer.

8. An apparatus as claimed in any of claims 1 to 6 wherein said data processor is within a portable computing device such as a tablet computer, while the camera is a separate unit.

9. An apparatus as claimed in claim 7 or 8 wherein said location sensor and attitude sensor are integrated with said data processor in said portable computing device.

10. An apparatus as claimed in any preceding claim further comprising: -a source of 3-D model data representing one or more structures to be visualised within said scene by augmented reality, the model data specifying locations for said structures in said three-dimensional space; -a computer image generator for rendering 2-D images of said structures from the 3-D model data; and -a composite image generator for combining the rendered images of said structures with camera images reproduced from the image data, wherein the computer image generator is arranged to use said metadata to control the rendering of images of said structures such that the 2-D images rendered from the 3-D model data when combined with the camera images appear correctly located in said three-dimensional space.

11. An apparatus as claimed in claim 10 wherein said source of 3-D model data is operable to supply alternative sets of 3-D model data selected by a user, whereby said computer image generator and composite image generator are operable to repeat the rendering of 2-D images using different sets of model data, while using the same metadata to provide a visualisation of different structures in combination with same camera images.

12. An apparatus as claimed in claim 10 or 11 further comprising a source of 3-D topographic model data, the topographic model data being used by said computer image generator in combination with said metadata so that the 2-D image rendered from the 3-D model data appear correctly obscured by foreground features, the foreground features being features of structure and/or landscape that are located in said three-dimensional space between the camera location represented in the metadata and a part of the modelled structure.

13. An apparatus as claimed in claim 12 further comprising a source of masking data and wherein at least one of said computer image generator and said composite image generator is responsive to the masking data to simulate obscuration of the modelled structure by features not represented in the topographic model data.

14. An apparatus as claimed in claim 13 wherein said masking data is generated with reference to one or more specific camera locations.

15. An apparatus as claimed in claim 14 further comprising a user interface by which a user can create or edit elements of said masking data by drawing around features reproduced in a selected camera image.

16. An apparatus as claimed in any of claims 10 to 15 wherein said source of 3-D model data is a communication link to a remote server, whereby said 3-D model data can be updated remotely.

17. An apparatus for use in generating augmented reality images, the apparatus comprising: -a source of image data representing in digital form images of a scene taken by a camera; -a source of 3-D model data representing one or more structures to be visualised within said scene by augmented reality, the model data specifying positions for said structures in said three-dimensional space; -a computer image generator for rendering 2-D images of said structures from the 3-D model data; -a composite image generator for combining the rendered images of said structures with camera images reproduced from the image data; and -a source of metadata by which a camera location is embedded in and a camera attitude is associated with each of the camera images represented in the image data, wherein the computer image generator is arranged to control the rendering of images of said structures such that the 2-D images rendered from the 3-D model data when combined with the camera images appear correctly located in said three-dimensional space, and wherein the apparatus is operable to store the metadata along with each of the respective camera images represented in the image data.

18. An apparatus as claimed in claim 17 wherein said source of 3-D model data is operable to supply alternative sets of 3-D model data selected by a user, whereby said computer image generator and composite image generator are operable to repeat the rendering of 2-D images using different sets of model data, while using the same metadata to provide a visualisation of different structures in combination with same camera images.

19. An apparatus as claimed in claim 18 or 19 further comprising a source of 3-D topographic model data, the topographic model data being used by said computer image generator in combination with said metadata so that the 2-D image rendered from the 3-D model data appear correctly obscured by foreground features, the foreground features being features of structure and/or landscape that are located in said three-dimensional space between the camera location represented in the metadata and a part of the modelled structure.

20. An apparatus as claimed in claim 19 further comprising a source of masking data and wherein at least one of said computer image generator and said composite image generator is responsive to the masking data to simulate obscuration of the modelled structure by features not represented in the topographic model data.

21. An apparatus as claimed in claim 20 wherein said masking data is generated with reference to one or more specific camera locations.

22. An apparatus as claimed in claim 21 further comprising a user interface by which a user can create or edit elements of said masking data by drawing around features reproduced in a selected camera image.

23. An apparatus as claimed in any of claims 17 to 22 wherein the camera is a motion video camera and the location and attitude data records variations in location and attitude associated during a motion picture sequence represented in the image data.

24. An apparatus as claimed in any of claims 17 to 23 wherein said data processor is further arranged to include field of view data within said metadata.

25. An apparatus as claimed in any of claims 17 to 24 wherein said data processor and camera are integrated within a portable computing device such as a tablet computer.

26. An apparatus as claimed in any of claims 17 to 24 wherein said data processor is within a portable computing device such as a tablet computer, while the camera is a separate unit.

27. An apparatus as claimed in claim 25 or 26 wherein said location sensor and attitude sensor are integrated with said data processor in said portable computing device.
GB1317629.2A 2013-10-04 2013-10-04 Augmented reality systems and methods Withdrawn GB2519744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1317629.2A GB2519744A (en) 2013-10-04 2013-10-04 Augmented reality systems and methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1317629.2A GB2519744A (en) 2013-10-04 2013-10-04 Augmented reality systems and methods

Publications (2)

Publication Number Publication Date
GB201317629D0 GB201317629D0 (en) 2013-11-20
GB2519744A true GB2519744A (en) 2015-05-06

Family

ID=49630230

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1317629.2A Withdrawn GB2519744A (en) 2013-10-04 2013-10-04 Augmented reality systems and methods

Country Status (1)

Country Link
GB (1) GB2519744A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291786A (en) * 2016-04-13 2017-10-24 北京四维益友信息技术有限公司 A kind of three-dimensional geographic information acquisition system
CN115035626A (en) * 2022-05-19 2022-09-09 成都中科大旗软件股份有限公司 Intelligent scenic spot inspection system and method based on AR


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090073034A1 (en) * 2007-05-19 2009-03-19 Ching-Fang Lin 4D GIS virtual reality for controlling, monitoring and prediction of manned/unmanned system
US20090153550A1 (en) * 2007-12-18 2009-06-18 Disney Enterprises, Inc. Virtual object rendering system and method
US20090248300A1 (en) * 2008-03-31 2009-10-01 Sony Ericsson Mobile Communications Ab Methods and Apparatus for Viewing Previously-Recorded Multimedia Content from Original Perspective
EP2175636A1 (en) * 2008-10-10 2010-04-14 Honeywell International Inc. Method and system for integrating virtual entities within live video
US20110137561A1 (en) * 2009-12-04 2011-06-09 Nokia Corporation Method and apparatus for measuring geographic coordinates of a point of interest in an image
US20110249122A1 (en) * 2010-04-12 2011-10-13 Symbol Technologies, Inc. System and method for location-based operation of a head mounted display
US20120008931A1 (en) * 2010-07-07 2012-01-12 Samsung Electronics Co., Ltd. Apparatus and method for displaying world clock in portable terminal
WO2012115593A1 (en) * 2011-02-21 2012-08-30 National University Of Singapore Apparatus, system, and method for annotation of media files with sensor data
US20120293550A1 (en) * 2011-05-17 2012-11-22 National Chiao Tung University Localization device and localization method with the assistance of augmented reality
US20120317825A1 (en) * 2011-06-14 2012-12-20 Pentax Ricoh Imaging Company, Ltd. Direction determining method and apparatus using a triaxial electronic compass

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2539182A (en) * 2015-06-02 2016-12-14 Vision Augmented Reality Ltd Dynamic augmented reality system
CN107762499A (en) * 2017-10-17 2018-03-06 深圳市晓控通信科技有限公司 A kind of intelligent magnetic force detection device for oil exploration based on Internet of Things
US10832558B2 (en) 2018-01-08 2020-11-10 Honeywell International Inc. Systems and methods for augmenting reality during a site survey using an unmanned aerial vehicle
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology
EP3901668A1 (en) * 2020-04-24 2021-10-27 Trimble Inc. Methods of displaying an augmented reality model on an augmented reality device
US11250624B2 (en) 2020-04-24 2022-02-15 Trimble Inc. Methods of displaying an augmented reality model on an augmented reality device

Also Published As

Publication number Publication date
GB201317629D0 (en) 2013-11-20

Similar Documents

Publication Publication Date Title
US11860923B2 (en) Providing a thumbnail image that follows a main image
US20210209857A1 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
GB2519744A (en) Augmented reality systems and methods
Behzadan et al. Georeferenced registration of construction graphics in mobile outdoor augmented reality
Zollmann et al. Augmented reality for construction site monitoring and documentation
AU2014295814B2 (en) Geo-located activity visualisation, editing and sharing
KR101260576B1 (en) User Equipment and Method for providing AR service
Kikuchi et al. Future landscape visualization using a city digital twin: Integration of augmented reality and drones with implementation of 3D model-based occlusion handling
Santos et al. Methodologies to represent and promote the geoheritage using unmanned aerial vehicles, multimedia technologies, and augmented reality
US20190088025A1 (en) System and method for authoring and viewing augmented reality content with a drone
Keil et al. The House of Olbrich—An augmented reality tour through architectural history
US20190356936A9 (en) System for georeferenced, geo-oriented realtime video streams
CN112396686A (en) Three-dimensional scene engineering simulation and live-action fusion system and method
US11403822B2 (en) System and methods for data transmission and rendering of virtual objects for display
CN107248194A (en) A kind of CAE data three-dimensionals based on cloud computing show exchange method
CN105205853A (en) 3D image splicing synthesis method for panoramic view management
Gomez-Jauregui et al. Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM
JP2017212510A (en) Image management device, program, image management system, and information terminal
Adăscăliţei et al. The Influence of Augmented Reality in Construction and Integration into Smart City.
TW201126451A (en) Augmented-reality system having initial orientation in space and time and method
CN109598786B (en) Simulation method of measuring instrument
Luley et al. Mobile augmented reality for tourists–MARFT
KR101873681B1 (en) System and method for virtual viewing based aerial photography information
KR101265554B1 (en) 3D advertising method and system
Chen et al. Integration of Augmented Reality and indoor positioning technologies for on-site viewing of BIM information

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)