GB2544827A - Viewer and viewing method - Google Patents

Viewer and viewing method

Info

Publication number
GB2544827A
Authority
GB
United Kingdom
Prior art keywords
dataset
real world
point data
application
screen
Prior art date
Legal status
Withdrawn
Application number
GB1605545.1A
Inventor
Hands Stephen
Malcolm Tulloch Andrew
Current Assignee
Pixel Matter Ltd
Original Assignee
Pixel Matter Ltd
Priority date
Filing date
Publication date
Priority claimed from GBGB1517033.5A external-priority patent/GB201517033D0/en
Priority claimed from GBGB1519376.6A external-priority patent/GB201519376D0/en
Application filed by Pixel Matter Ltd filed Critical Pixel Matter Ltd
Publication of GB2544827A publication Critical patent/GB2544827A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0261Targeted advertisements based on user location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • H04W4/043

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Computer Hardware Design (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Graphics (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality (AR) system comprising a viewer, a scanning device 2 and a link to a database having point datasets of real world objects of interest. The first database is linked to a second database which enables material from a dataset of animations, information, graphics, videos and/or sound to be displayed on a screen or played. The scanner and display may be integrated into the same mobile device. The data from the second database may be overlaid on the real world object in the viewer, so as to create a mixed reality environment, and the viewer might be a retina projection device. A method of augmenting the image of real world objects is also disclosed.

Description

Viewer and viewing method
This invention relates to a viewer and viewing method.
Virtual Reality (VR) and Augmented Reality (AR) technologies have expanded massively in the last three years, with all major suppliers in the field investing hugely. Additionally, there has been a significant rise in products from smaller companies. However, these systems are still limited in what they can achieve.
The invention mixes reality with artificial intelligence to produce a viewing system in which real world objects can be seen overlaid with additional graphics in the form of animation, information or still images. No current viewing system achieves this.
According to the present invention a viewing system for real world objects, having a scanner and a screen on which the real world view can be displayed, comprises a link to a database having point datasets of real world objects of interest, which is in turn linked to an application dataset of animations, information, graphics, videos and/or sound associated with each object point dataset, in which a display selected from the artificial intelligence augmented dataset is displayed on the screen and/or played on a speaker.
Normally a display selected from the application dataset is displayed overlaying the direct image of the real world object on the screen.
In many cases, but not exclusively, the scanner and screen are part of the same device, such as a mobile phone or retina glasses, but they could be a separate camera and screen. In the latter case they may even be remote from one another, with the real world view and the overlaying animation, information, graphics, video or sound being seen remotely by one person to give instruction to another, or to control a machine or robot arm remotely.
In a preferred embodiment the datasets are augmented using artificial intelligence.
In another aspect of the invention a method of augmenting the image of real world objects comprises: a. directing a scanner at an object; b. finding in a database having point datasets of real world objects, point data corresponding to the object and linking that to a dataset of individual animations, information and/or graphics, each individual application dataset being associated with an object point dataset; c. selecting from the dataset a display and/or sound; d. displaying said selected dataset on a screen and/or playing said selected dataset through speakers overlaying the real world view seen on the screen.
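By way of illustration only, the following minimal sketch shows how steps a. to d. of the method might be orchestrated in software. It is an outline under stated assumptions, not the invention's implementation; the scanner, database, screen and speaker interfaces (capture, find_match, lookup, select, overlay, play) are hypothetical names adopted for the example.

```python
# Illustrative sketch of method steps a.-d.; all interface names are hypothetical.

def augment_view(scanner, screen, speaker, point_db, app_db):
    # a. direct a scanner at an object and capture its point data
    scan = scanner.capture()

    # b. find matching object point data and its linked application dataset
    object_points = point_db.find_match(scan)          # point datasets of real world objects
    if object_points is None:
        return                                         # nothing recognised; plain real world view
    application = app_db.lookup(object_points.object_id)  # animations, information, graphics, etc.

    # c. select a display and/or sound from the application dataset
    display, sound = application.select()

    # d. overlay the selection on the real world view and/or play it
    if display is not None:
        screen.overlay(display, anchor=object_points)
    if sound is not None:
        speaker.play(sound)
```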
Normally the scanner would be a device camera but can be a separate camera.
Normally the display selected from the artificial intelligence augmented dataset is displayed in association with, or over, the direct image of the real world object on the screen.
In a further step in the method, objects of potential interest are prescanned to create the database of the scanned objects in the form of object point datasets.
In a further step in the invention the database of the scanned objects in the form of point datasets may be further enhanced by scanning and adding point datasets of further objects. The object dataset may also be created directly from a CAD file or other input data, as sketched below.
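Where the object dataset is created from a CAD file, one simple approach is to sample vertices from the exported geometry. The sketch below assumes a Wavefront OBJ export and a hypothetical dictionary-based point dataset record; it is illustrative only.

```python
# Hypothetical converter from a CAD export (Wavefront OBJ) to an object point dataset.

def obj_file_to_point_dataset(path, object_id):
    points = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):              # vertex record: "v x y z"
                _, x, y, z = line.split()[:4]
                points.append((float(x), float(y), float(z)))
    return {"object_id": object_id, "points": points}
```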
For clarity, in the following description of the invention and subsequent examples of its application, there are two categories of user of the invention or its sub-components. Firstly, there are developers, who may use the object point datasets to develop their own dataset of animations, information, graphics, videos and/or sound related to real world objects; secondly, there are end-use customers, who use the invention as a whole to augment views of real world objects.
The point dataset database may be hosted in the viewing system, in a computer system associated with the viewing system, or remotely with on-line access from the viewer. In the remote configuration, access may be by subscription. In this configuration too, access to point datasets added by an individual developer may be restricted to the developer concerned or to a specific group of developers.
This invention also gives developers of such viewing systems the ability quickly to create Virtual Reality (VR) and Augmented Reality (AR) ready content to close the gap between that which is recognizable to the human eye/brain and that which the current best software can deliver.
On-line access to a point dataset database of objects allows developers of viewing systems to draw on a library of object point data to which they can also contribute their own scanned and custom made content, enhancing their ability to add content and to link these real objects to their own developed dataset of animations, information and/or graphics to achieve the mixed reality output they seek on the screen.
The augmented dataset of animations, information and/or graphics may also be hosted on the viewing system, on a computer associated with the viewing system, or remotely with on-line access. In the remote configuration, access may be by subscription. In this configuration too, access to all or part of the augmented dataset of animations, information and/or graphics may be restricted to an individual developer or to a specific group of developers. A number of existing commercial platforms provide a method of scanning real world objects and turning them into object point data.
The viewing system may comprise devices such as mobile devices, standalone web cameras and Head Mounted Displays (HMD) to identify and view real world objects; in the future other devices may have the same or a similar function.
It can be seen therefore that the present invention provides a three dimensional object search algorithm. A scanner is pointed at an object, the object is recognised in the object point database from the object points, and, once the object is recognised, animations, information and/or graphics related to the object are linked to it.
The invention uses artificial intelligence to bridge between shapes recognizable at, say, 70% certainty and 100% certainty, by using additional cues in the object to move towards certainty. For example, the more bottles from a particular manufacturer that are shown to the system, even if of different sizes, the better the system will identify design cues, such as the waist of the bottle and the fluting, which will confirm that it is a bottle of that manufacturer. Scaling can be used to deal with bottles of different sizes.
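One way such cue-based bridging could be modelled is as a base shape-match score boosted by weighted design cues. The cue names and weights in the sketch below are assumptions made purely for illustration, not values taken from the invention.

```python
# Illustrative certainty model: a base shape match is boosted by design cues
# (e.g. the waist and fluting of a manufacturer's bottle). The cue names and
# weights are assumptions for illustration only.

CUE_WEIGHTS = {"waist_profile": 0.15, "fluting": 0.10, "cap_shape": 0.05}

def recognition_certainty(base_match, cues_found):
    """base_match: shape-match score in [0, 1]; cues_found: set of cue names."""
    certainty = base_match
    for cue in cues_found:
        certainty += CUE_WEIGHTS.get(cue, 0.0)
    return min(certainty, 1.0)

# A 70%-certain bottle with a matching waist and fluting reaches 95% certainty:
print(recognition_certainty(0.70, {"waist_profile", "fluting"}))  # 0.95
```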
In a further development the invention may be used as part of a training aid: an object in the real world seen through the scanner is recognised in the object point database, and a training graphic, video or sound track linked to the object point data is played overlaying the real world view on the screen. In this way a trainee can, by looking at the screen, compare what he/she is doing with the training material. For example, the object point dataset may contain information about a particular surgical glove to be worn in a training procedure. The glove is recognised when it comes into the real world field of view, triggering the playing of a training video of the procedure overlaying the real world view of the training exercise. Indeed, such a process may extend to overlaying instruction graphics, video and/or sound on a real world view of a real process or procedure, such as replacing a machine part or carrying out a medical procedure.
The screen can be co-located with the scanner or remote from it. For example, the scanner can be placed in a tight space or unsafe place with the screen mounted elsewhere. This could be particularly helpful when the real world view is inside an engine compartment or a nuclear reactor, for example.
The more content that is added to the object point database, the more efficiently the system can identify objects.
The invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 is a flow chart illustrating capture of information concerning objects and combining that information with animations to be linked on a viewing system according to the invention;
Figure 2 shows a viewing system according to the invention with a camera directed at an object; and
Figure 3 shows the display on a screen of a viewing system according to the invention in which the object has been linked with an animation.

Turning to figure 1, one method of preparing for an object image to be linked to an animation displayed on the screen of a viewing system is illustrated. In this example the animation could be replaced by information, videos and/or sound, and the example should be construed accordingly.

Box 1 (Select Objects): A real world object of interest 13 is identified; in this example it is a cube.
Box 2 (Scan): The identified real world object 13 is scanned by a scanning device 11.
Box 3 (Captured Raw Data): The captured scan data of the real world object 13 is converted into object point data 15, which is a digital reproduction of key points of the real object 13. A virtual object could also be used by converting it to object point data.
The captured object point data 15 is used to link real world objects to virtual content and allow mobiles, tablets, camera devices and other technologies to align object point data with real world objects, so as to project augmented virtual content from the dataset of animations, information and/or graphics related to the point datasets onto the real world objects. The object recognition could be undertaken by any smart camera technology, such as Head Mounted Displays (HMD), smart phones, tablet devices, web cameras, 3D cameras and any similar future device performing similar functions (including wearable cameras, glasses, or devices projecting onto the retina).
During the scanning process data points are augmented on top of the real object, picking up edges and contrasting sections of the three dimensional object. The scanning process can be repeated several times for the object, to ensure all relevant point data is captured. These data points form a virtual mesh of the object when saved as an object point data file. The captured raw object point dataset of an object, a three dimensional mesh of the object, is created and stored locally on the viewing system. This data may then be sent to an external cloud store such as Drop Box, Google Drive, Apple Storage or FTP, or by local drive and/or email to an external machine or suitable local installation.
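A minimal sketch of this store-and-export step follows, assuming a JSON representation of the mesh and a placeholder upload endpoint; no particular cloud provider's API is implied.

```python
import json
import urllib.request

# Sketch: persist a captured point mesh locally, then optionally export it to a
# remote store. The export URL is a placeholder, not a real service endpoint.

def save_and_export(points, local_path, export_url=None):
    mesh = {"points": points}              # three dimensional mesh of the object
    with open(local_path, "w") as f:
        json.dump(mesh, f)                 # stored locally on the viewing system
    if export_url:                         # optional export to an external store
        data = json.dumps(mesh).encode("utf-8")
        req = urllib.request.Request(export_url, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
```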
Box 4 (Developer toolkit): Someone who wishes to develop one or more application datasets of animations, information, graphics, videos and/or sound associated with the object has access to an object raw data editor, a basic editing tool which developers install on their desktop system or use through browser based editors. The developer can be a user of a viewing system according to the invention or be independent of the end user.
Box 5 (Edit raw data): A developer has access to the raw object point data 15 and will import it and use editing tools to interact with the raw data to improve its quality and the positioning of data points. Any erroneous data points 17 captured during the scanning process can be removed.
Additional raw object point data can also be added by using the editing tools to link together the data points in a digital 'dot to dot' method, creating data lines of the virtual object to increase the data mesh and building the first version of the virtual object. At this stage, it is also possible to custom build object data files without using the scanning tools. In addition to uploading object point data derived from scanned objects or created from scratch, it is possible to upload three dimensional content from computer aided design files. Both editing operations are sketched below.
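The two editing operations just described, removal of erroneous points and 'dot to dot' linking, might look like the following sketch; the outlier threshold and the data structures are illustrative assumptions only.

```python
import math

# Sketch of two editing operations on raw object point data: removing
# erroneous points far from the point cloud, and 'dot to dot' linking of
# points into data lines. The distance threshold is illustrative.

def remove_outliers(points, max_dist=2.0):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    return [p for p in points
            if math.dist(p, (cx, cy, cz)) <= max_dist]   # drop stray capture noise

def link_dot_to_dot(points):
    # Each consecutive pair becomes a data line of the virtual object's mesh.
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]
```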
Box 6 (Export final data): The final edited object point data file 19 is exported to a central server or to more local installations.
Box 7 (Upload to server): The final object point data 19 may be exported to a server. It is possible for one such server to offer a range of services to enable the process of data capture, editing, exporting, hosting, managing and linking to a central artificial intelligence resource.
Box 8 (Data Server): The server is an online storage space to which final point data files are uploaded, with management tools for object point data files and an interface for developers to interact with their uploaded content.
The server provides developers with a database library of scanned objects stored as object point data packages, enabling developers to contribute to and improve the artificial intelligence system via artificial intelligence learning and developer meta-tagging (stage 9) of object point datasets. Such a central server would, today, be a cloud based solution for storing data, available as a further option for importing point data directly from the scanning device to a connected location for management.
Once the custom data has been uploaded to the server, the server software scans the database for possible duplications, presenting other possible duplicate data files to the developer before the developer undertakes the tagging process (see Box 10, Meta Tags).
This would enable developers who own design information or who specifically design an object to work in this environment to support a high degree of optical correlation between the object and the scanning / recognition phase, automatic or manual, which is a precursor to the mixed reality augmentation created using this invention.
Box 9 (Server portal account): In a server based system, a developer would have an account on the server to access all tools and will work across all products that that developer can use or is allowed to use. The account will enable the developer to access and download the files for scanning and subscribe to the service package they wish to use.
All data uploaded via a developer account will be linked to the developer with an easy to use interface for accessing and managing content.
Box 10 (Meta tag data fields): When a developer uploads object point data to their developer account there are multiple input fields the developer will use to tag the data uploaded.
For example, if a cube has been scanned and a 3D mesh created, uploaded to the server and linked to the developer account, the developer will then tag the cube using a database of options to define the object. A dataset of images is made available to developers, and continually added to, from which they may pick. An image of a cube will be selected from this database and bound to the point data; key words are then applied to the object point datasets, such as 'Cube', 'Cuboid', 'Box', etc. The measurements of the cube are added.
Once the object point data is uploaded, a visual player on the server presents the data, which a developer can then navigate, spin, rotate, zoom and scroll. A description of the object can be added in free text to provide a detailed overview of the object uploaded.
Once all necessary options have been completed, the developer will save the object to the server.
During the process of uploading the object point data to the database, the system scans for other objects of similar design and, where applicable, feeds back to the developer that a similar object exists in the database. Depending on the percentage of similarity between the newly created package and existing data packages, the developer will be informed of the similarity; when the new data has a high degree of similarity to an already saved package, upload will not be allowed, to avoid duplication. A sketch of such a check follows.
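The sketch below illustrates one possible form of the duplicate check; the similarity function is supplied by the caller, and the feedback and blocking thresholds are assumptions made for illustration.

```python
# Illustrative duplicate check: the similarity measure and the thresholds
# (50% for feedback, 90% for blocking) are assumptions, not claimed values.

SIMILARITY_BLOCK_THRESHOLD = 0.90

def check_for_duplicates(new_package, existing_packages, similarity):
    """similarity(a, b) returns a score in [0, 1]; the caller supplies it."""
    scored = [(pkg, similarity(new_package, pkg)) for pkg in existing_packages]
    similar = [(pkg, s) for pkg, s in scored if s > 0.5]   # fed back to the developer
    blocked = any(s >= SIMILARITY_BLOCK_THRESHOLD for _, s in similar)
    return similar, blocked    # a blocked upload is refused to avoid duplication
```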
If the object is derived specifically from a CAD design, the developer will not need to use the object recognition phase and can rely on automated recognition of his/her own object.
Box 11 (Virtual Object Code Applied): As each package of object point data is uploaded it is given a unique identifier and database code. A Virtual Object Code will be linked to an object indefinitely and be the basis of the artificial intelligence's ability to learn, bind data and present data against objects.
Box 12 (Download Data): Once the object point data package is complete and uploaded to the server, a developer can download the package to work with software development kits such as Unity3D™. Other possibilities include the Unreal Development Kit (UDK)™, Blender™, Crisis Engine™ and other development packages which enable 3D data creation.

Box 13 (Plugin Toolkit): As an example, Unity3D™ has its own file infrastructure which enables developers to create a wide variety of animations, with the ability to import content from, and develop plugins to, the core object point data files. A plug-in tool for Unity3D™ provides developers with an interface for managing the saved object point data. Thus a developer can download their Unity3D™ object package, allowing them to import their content quickly and efficiently. A custom editor plugin is also downloaded; it enables developers to access other application datasets of animations, information, graphics, videos and/or sound developed by third parties and available on the server, providing a custom set of features, functions and options designed to give the developer all the tools they need to begin developing their own application dataset. The plugin manages imported data and files to allow the process of binding object point data to custom content.
Box 14 (Creation of the artificial intelligence augmented dataset): Using Unity3D™, the developer can now create their own applications and use the plugin to link their augmented dataset application design to the object point data on the server for an object of interest. This allows a wide range of custom applications to be developed and connected. Once the developer has linked their account to the Unity3D™ infrastructure, the functionality enables the artificial intelligence to use device based cameras to scan real world objects, search back through the database and link, in a matter of seconds, recognised or custom created content to be augmented or mapped on the device.
Box 15 (Artificial Intelligence/Mixed Reality Engine): Data is streamed from a server's object point data database to created applications, whether created by the developer, the patentee or third parties, through cloud based technologies.
This streaming could include not only "scan recognition" data but also augmentation video, data, still image modification, audio, force feedback and Bluetooth®/wireless circuit based devices.
Thus artificial intelligence functionality is configured within a developed application, whether for testing or distribution. If an object has been scanned and added to the object point data database, applications can scan a multitude of point data and compare real objects with the database packages. This in turn creates an engine mixing reality with applications derived by use of artificial intelligence: an artificial intelligence mixed reality engine.
If an object scanned exists in the object point database the real object can be linked using the package code, custom content and a device with a camera.
Once a camera is pointed at an object and it is recognised any custom content created using the development tools mentioned in Box 14 will be displayed and linked as the developer intended.
Box 16 (Developer Created Mobile Apps): The developers can finalise their applications and build their application datasets using tools such as Apple's X-Code® and Google Android® SDK platforms to distribute their applications via online application stores.
Box 17 (Download application from appropriate application stores): End-customers, if not the developer, would download applications developed as above to their devices and systems.
Box 18 (End user scans objects): End-users open the application dataset in applications downloaded to their devices and point a camera 21 of their viewing system (which may or may not be the scanner 11) at objects. Depending on the subject material or theme of the application, for example an application designed to augment technical data on particular motorcycle engine components, the customer uses the application to see information provided by the application in association with the real world objects, the motorcycle engine components, on the screen of the camera device.
Box 19 (Feedback): As scans are carried out by end users and developers, data is fed back to the servers to enhance the quality of the data, to learn more shapes and to evolve.
Box 20 (App Augments content attached to real world object): The outcome of the development is that the application datasets evolved as described are able to display virtual content, in the way that the developer intended, on to the real world objects on the screen of a camera 21 or similar device. The camera 21 may or may not be the same device as scanner 11. This projection of additional information can be used for a vast array of applications from, for example, training, assembly of objects and playthings, to entertainment and games, and to emergency response.
Figures 2 and 3 illustrate more fully the configuration of an example viewing system according to the invention.
The camera 22 of a mobile phone, the mobile phone comprising the viewing system 21 of the invention, is directed towards a real life object 23, a cooking pot in this illustration; the image of the cooking pot i23 is focussed through the camera 22 onto a liquid crystal screen 24. At the same time the cooking pot 23 is scanned and object point data 25 from the scan is exported by the mobile phone 21 to an object data store 27; this store can be local within the viewing system, but when using a mobile phone as the viewing system it will almost certainly be remote, as shown. The tools on the store relate the cooking pot 23 to any previously scanned object point data concerning cooking pots. If a match is made, key data concerning the scanned cooking pot is fed back 29 to the viewing system 21. Here the feedback releases an application 33, which will have been preloaded from an application dataset 31. This release allows the application a23, that of flames coming from the cooking pot 23, to be shown on the display 24 overlaying the image i23 of the cooking pot.
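Purely as an illustration, the figure 2 and 3 flow might be expressed as follows; the camera, store and application objects and their methods are hypothetical names assumed for this sketch.

```python
# Illustrative sketch of the figure 2/3 flow; all names are hypothetical.

def view_object(camera, screen, object_store, preloaded_apps):
    image = camera.frame()                     # image i23 focussed onto the screen 24
    screen.show(image)
    point_data = camera.scan()                 # object point data 25 from the scan
    match = object_store.match(point_data)     # compared with stored cooking-pot data
    if match is not None:                      # key data fed back 29 to the viewer
        app = preloaded_apps[match.object_id]  # release the preloaded application 33
        screen.overlay(app.content, anchor=match)  # e.g. flames over the cooking pot
```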
Using this invention, it can be seen that a myriad of individual application datasets of animations, information, graphics, videos and/or sound can be linked for display on a screen with different objects. It can also be recognised that a user can associate a particular application to be displayed with an object on one occasion, and a different application on another occasion.
In particular, using the illustration in figures 2 and 3, rather than overlaying the reproduction of the cooking pot with a picture of fire, the cooking pot image could be overlain with marketing information about the cooking pot, say for example price information, discounts or promotional information, or basic information concerning the cooking pot itself, such as materials of manufacture, nature of the base, etc. In this way the invention, with the right applications, becomes a powerful marketing tool. Other applications include animation of toys, or providing a moving foreground in which the image of a toy is placed, training, and assembly information for a kit. It is also possible to apply identical overlays to objects which the system recognises but which have been adjusted in some way in the x- or y-axis, or both, say, for example, a promotional container of drink that is 25% oversize. Thus by choosing appropriate object data points, the artificial intelligence of the invention will recognise that an object presented to a viewing system is bigger or smaller than an otherwise identical object (as occurs with different sized containers of an otherwise identical product).
Expanding on the theme discussed in the preceding paragraph, it would also be possible to rescale object point data, enabling a scan of an object to be resized and saved so that the original sized object and a bigger or smaller version would both be recognised and linked to the same application dataset, as sketched below.
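One way to achieve such scale-independent recognition is to normalise each point dataset to a common size while keeping the scale factor separately; the sketch below is one illustrative, assumed approach, not the invention's prescribed method.

```python
# Sketch of rescaling object point data so that an otherwise identical object
# at a different size is recognised and linked to the same application dataset.

def normalise_scale(points):
    xs, ys, zs = zip(*points)
    size = max(max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    scaled = [(x / size, y / size, z / size) for x, y, z in points]
    # 'scaled' is matched against the database regardless of object size;
    # 'size' distinguishes, say, a 25%-oversize promotional container.
    return scaled, size
```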
Physical environments may be made up of simple geometric shapes, such as walls and boxes, that create a textureless environment for people to navigate. Digital content from an application dataset, scaled to the size of the physical environment viewed through the viewing system, links the physical environment with the application software. The user navigates the real geometry corridors formed by the shapes but views and interacts with the digital content mapped onto the real geometry. For example, when the digital content is mapped to a particular real world object such as a box, the box is displayed to the user via the software as a digital keyboard. The user uses their real hand and fingers to interact with the digital keyboard and, as a result of the mixed reality software, touches the box in the real world while interacting only with the digital keyboard mapped to the real world box.
In this way, functional and functionless content can serve as a physical entity onto which digital content is mapped and scaled from the application dataset. Using device or retina projection, the user is able to see a virtual world, to navigate it and to touch the original environmental features. A wall in the real world would then have a digital interface with which the user can interact by pressing their fingers against the real world and so interacting with the blended digital content.
As a further example of the use of the invention, devices are already being manufactured for quick ordering of consumables; the technology of this invention will enable such e-commerce functionality. A user in the form of a customer can point a camera at a marker, product, product box or image and automatically re-order the consumable, the derived object point dataset linking to an appropriate application dataset which either orders automatically or puts an application on the screen to enable the customer to order. Data could be overlaid on the object, such as statistics of use, how frequently the product is ordered, usage instructions and video.
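A hedged sketch of such a re-ordering flow follows; the database, application dataset fields and ordering callback are hypothetical names assumed for the example.

```python
# Hypothetical e-commerce flow: point a camera at a product and re-order it.
# The dataset fields (auto_order, product_code) are illustrative assumptions.

def scan_to_reorder(camera, point_db, app_db, place_order, confirm_on_screen):
    match = point_db.find_match(camera.scan())   # marker, product, box or image
    if match is None:
        return
    app = app_db.lookup(match.object_id)         # linked application dataset
    if app.auto_order:
        place_order(app.product_code)            # automatic re-order
    else:
        confirm_on_screen(app)                   # let the customer order manually
```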
It should be noted that throughout this specification and claims, where viewing systems are mentioned, these include retina projection devices, and where cameras are noted as the tool to point at an object, that expression includes such retina projection devices.

Claims (24)

1. A viewing system for real world objects having a scanner and a screen on which the real world view can be displayed, comprising a link to a database having point datasets of real world objects of interest which is in turn linked to an application dataset of animations, information, graphics, videos and/or sound associated with each object point dataset, in which a display selected from the artificial intelligence augmented dataset is displayed on the screen and/or played on a speaker.
2. A viewing system according to claim 1 in which a display selected from the application dataset is displayed overlaying the direct image of the real world object on the screen.
3. A viewing system according to claim 1 or 2 in which the scanner and screen are part of a single mobile device.
4. A viewer according to any preceding claim in which the application selected from the application dataset is displayed overlain on the direct image of the real world object in the viewer.
5. A viewer according to any preceding claim in which one or both datasets are augmented using artificial intelligence.
6. A viewer according to any preceding claim in which the viewer is a retina projection device.
7. A method of augmenting the image of real world objects comprising: • directing a scanner at an object; • finding in a database having point datasets of real world objects, point data corresponding to the object and linking that to a dataset of individual animations, information and/or graphics, each individual application dataset being associated with an object point dataset; • selecting from the dataset a display and/or sound; • displaying said selected dataset on a screen and/or playing said selected dataset through speakers overlaying the real world view seen on the screen.
8. A method according to claim 7 including the step of augmenting one or other or both datasets using artificial intelligence.
9. A method according to claim 7 or 8 including the step of pre-scanning an object and storing object point dataset of the object.
10. A method according to claim 9 in which the object point data is stored on a server.
11. A method according to any one of claims 7 to 10 in which the object point data is created from a computer aided design file of the object.
12. A method according to any one of claims 7 to 10 in which the object point data is generated from an uploaded digital file of the object concerned.
13. A method according to any one of claims 10 to 12 in which the object point data of an object is available to developers of application datasets and to users of viewers employing the method of the invention.
14. A method according to any one of claims 7 to 13 including the step of enhancing the object point data for an object by undertaking multiple scans of the object.
15. A method according to any one of claims 7 to 13 including the step of enhancing the object point data for an object by undertaking scans of objects which are otherwise identical but having different sizes and scaling.
16. A method according to claim 15 in which the method recognises that an object is bigger or smaller than an otherwise identical object.
17. A method according to any one of claims 7 to 16 including the step of amending the object point data of an object.
18. A method according to claim 17 in which the amending is as a result of further scans of the object during use of the method.
19. A method according to claim 17 in which amending is done manually.
20. A method according to any one of claims 7 to 19 in which the viewer is a retina projection device.
21. A method according to any one of claims 7 to 20 in which the application dataset linked by an object point dataset is an ordering/purchasing application.
22. A method according to claim 21 in which the ordering/purchasing application orders items automatically.
23. A viewing system substantially as hereinbefore described with reference to the accompanying drawings.
24. A method of augmenting the image of real world objects substantially as hereinbefore described with reference to the accompanying drawings.
GB1605545.1A 2015-09-25 2016-04-01 Viewer and viewing method Withdrawn GB2544827A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1517033.5A GB201517033D0 (en) 2015-09-25 2015-09-25 Visualisation system
GBGB1519376.6A GB201519376D0 (en) 2015-11-03 2015-11-03 Viewer and viewing method

Publications (1)

Publication Number Publication Date
GB2544827A true GB2544827A (en) 2017-05-31

Family

ID=58667333

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1605545.1A Withdrawn GB2544827A (en) 2015-09-25 2016-04-01 Viewer and viewing method

Country Status (1)

Country Link
GB (1) GB2544827A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6175343B1 (en) * 1998-02-24 2001-01-16 Anivision, Inc. Method and apparatus for operating the overlay of computer-generated effects onto a live image
US20060038833A1 (en) * 2004-08-19 2006-02-23 Mallinson Dominic S Portable augmented reality device and method
US20110213664A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20130107021A1 (en) * 2010-07-20 2013-05-02 Primesense Ltd. Interactive Reality Augmentation for Natural Interaction
US20140028712A1 (en) * 2012-07-26 2014-01-30 Qualcomm Incorporated Method and apparatus for controlling augmented reality
US20140362111A1 (en) * 2013-06-07 2014-12-11 Samsung Electronics Co., Ltd. Method and device for providing information in view mode
US20150187141A1 (en) * 2013-12-26 2015-07-02 Empire Technology Development Llc Out-of-focus micromirror to display augmented reality images


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)