GB2564715A - Systems and methods of forming virtual models - Google Patents

Systems and methods of forming virtual models

Info

Publication number
GB2564715A
Authority
GB
United Kingdom
Prior art keywords
virtual model
image
tokens
physical tokens
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1711792.0A
Other versions
GB2564715B (en)
GB201711792D0 (en)
Inventor
Thorold Lyons Edwin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Firebolt Games Ltd
Original Assignee
Firebolt Games Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Firebolt Games Ltd
Priority to GB1711792.0A (patent GB2564715B)
Publication of GB201711792D0
Publication of GB2564715A
Application granted
Publication of GB2564715B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/24 Constructional details thereof, e.g. game controllers with detachable joystick handles
    • A63F13/245 Constructional details thereof, e.g. game controllers with detachable joystick handles specially adapted to a particular type of game, e.g. steering wheels
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/69 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A63H33/04 Building blocks, strips, or similar building parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06018 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking one-dimensional coding
    • G06K19/06028 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking one-dimensional coding using bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1413 1D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/22 Character recognition characterised by the type of writing
    • G06V30/224 Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks
    • G06V30/2247 Characters composed of bars, e.g. CMC-7
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2250/00 Miscellaneous game characteristics
    • A63F2250/28 Miscellaneous game characteristics with a two-dimensional real image

Abstract

Forming a virtual model of a plurality of physical tokens 1201 to 1204, each physical token comprising a visual design, comprising: arranging the plurality of tokens (e.g. construction elements) into a pattern 1240 based on the visual designs of the tokens; capturing multiple images of the tokens; and forming a master virtual model of the physical tokens in accordance with the pattern, based on a recognisable aspect of each physical token (e.g. a barcode, a QR code (RTM), or the 3D shape of the token). Also disclosed is forming a virtual model, comprising: capturing multiple images of the tokens, each image including one or more markers; from a first image, generating an intermediate virtual model comprising virtual representations of the markers captured in that image; and, for each subsequent image: generating an image-specific virtual model comprising virtual representations of the markers in that image; comparing the image-specific virtual model with the intermediate virtual model to determine a quality score representing the similarity between them; and, in the event that the quality score exceeds a threshold, merging the image-specific virtual model with the intermediate model to update the intermediate virtual model. The recognisable aspect may be a machine-readable marker, a standard image, or the 3D shape of the token.

Description

Systems and Methods of Forming Virtual Models
The present invention relates to systems and methods of forming virtual models. In particular, the present invention relates to systems and methods of forming virtual models corresponding to a plurality of physical tokens.
Background to the Invention
Virtual models are digital representations of real-world objects. Some virtual models are exact replicas of real-world objects. Exact replica virtual models are typically created by using exhaustive laser or optical scanning techniques to map every feature of a real-world object. Such exact models generally require expensive scanning equipment and are time-consuming to make.
Alternatively, a virtual model may be created by extracting a recognisable feature of an object from an image and then retrieving a known virtual representation of the entire object from a local or network-connected source. This alternative method is quicker and computationally far simpler, although it relies on having a virtual representation of a given real-world object available.
These recognisable features within an image are known in the art as fiducial markers: measurement or reference points that allow a computer to recognise objects within an image. Systems that can extract one or more fiducial markers from an image can use these markers as the basis for augmented reality techniques, in which the markers represent physical objects and make them recognisable to the software. Such techniques superimpose a three-dimensional model over a captured image and update the model as the image alters.
There are a number of systems that use augmented reality techniques, for example the NINTENDO 3DS system and the GOOGLE GLASS system. Whilst these systems are capable of generating virtual models from real-world objects, they provide a virtual model which is only an instantaneous snapshot of an object, or objects, and which cannot be manipulated thereafter by a user.
There are some systems which can provide a more permanent virtual model of a real-world object using fiducial marker extraction techniques. An example of such a system is the computer game level builder system, BLOXELS BUILDER. This system enables a user to arrange real-world objects within a pre-defined grid, each object relating to a computer game element. An imaging device then takes a photo or video of the arranged real-world objects, and a virtual model is generated by assembling virtual representations of each object. This virtual model is then saved by the system and can be added to and/or manipulated thereafter by a user. However, this system and method of generating a virtual model is both time-consuming and limited in its functionality.
Therefore, there exists a need to provide improved systems and methods for generating virtual models of real-world objects.
Summary of the Invention
In a first aspect, the present invention provides a method of forming a virtual model of a plurality of physical tokens, each physical token comprising a visual design, the method comprising the steps of: arranging the plurality of physical tokens into a pattern based on the visual design on each of the plurality of physical tokens; capturing a plurality of digital images of the arranged plurality of physical tokens with a computing device; forming a master virtual model of the plurality of physical tokens in accordance with the pattern, based on an aspect of each of the plurality of physical tokens that is recognisable to the computing device.
In a second aspect, the present invention provides a method of forming a virtual model, comprising: capturing a plurality of digital images of a plurality of physical tokens, wherein each image includes one or more markers; from a first of said images, generating an intermediate virtual model comprising virtual representations of the markers captured in that image; for each subsequent digital image: a) generating an image-specific virtual model comprising virtual representations of the markers in the subsequent image; b) comparing the image-specific virtual model with the intermediate virtual model to determine a quality score representing a degree of similarity between the image-specific virtual model and the intermediate virtual model; and c) in the event that the quality score exceeds a threshold, merging the image-specific virtual model with the intermediate model to update the intermediate virtual model; and repeating steps a) to c) for each subsequent digital image.
Further features of the invention are defined in the appended dependent claims.
Brief Description of the Drawings
By way of example only, the present invention will now be described with reference to the drawings, in which:
Figure 1 is a diagram of four physical tokens in accordance with an embodiment of the present invention;
Figure 2 is a representation of an imaging device in accordance with an embodiment of the present invention;
Figure 3 is a flow chart showing a method in accordance with an embodiment of the present invention;
Figure 4 is a flow chart of a further method in accordance with an embodiment of the present invention;
Figure 5 is a flow chart showing a further method in accordance with an embodiment of the present invention;
Figure 6 is a diagram of the graphical user interface of an imaging device in accordance with an embodiment of the present invention;
Figure 7 is a diagram of four physical tokens in accordance with an embodiment of the present invention;
Figure 8 is a diagram of the graphical user interface of an imaging device in accordance with an embodiment of the present invention;
Figure 9 is a diagram of four physical tokens in accordance with an embodiment of the present invention;
Figure 10 is a diagram of the graphical user interface of an imaging device in accordance with an embodiment of the present invention;
Figure 11 is a flow chart of a method in accordance with an embodiment of the present invention;
Figure 12 is a diagram of four physical tokens in accordance with an embodiment of the present invention; and
Figure 13 is a diagram of the graphical user interface of an imaging device in accordance with an embodiment of the present invention.
Detailed Description of Preferred Embodiments
Figure 1 shows a plurality of physical tokens in accordance with an embodiment of the present invention. Four physical tokens 101 to 104 are shown; however, the present invention works equally well with any number of physical tokens. The physical tokens 101 to 104 have images printed on one surface (not shown). These images correspond to elements of a virtual model that may be generated as described below. For example, the physical tokens may depict different sections of a racetrack, and the user arranges the physical tokens to form a racetrack. As described below, in embodiments of the invention, an imaging device produces a virtual model corresponding to the racetrack formed by the physical tokens.
Each physical token 101 to 104 comprises an identifier (not shown). The identifier is a fiducial marker which enables the physical token to be recognised within an image of the physical token. Additionally, the identifier is orientation specific, meaning the identifier enables the orientation of the physical token to be recognised within an image. Preferably, the identifier is a machine-readable code such as a matrix barcode (hereinafter known as a QR code), or a barcode. The identifier may, however, be any feature of the physical token that is machine recognisable and orientation specific. The physical token may also include an image with integrated high-contrast features (for example, high-contrast corners or edges) that can be detected using imaging techniques. The physical token may also be a three-dimensional object that can be recognised by its colour, outline or texture.
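By way of illustration only, an off-the-shelf QR decoder already returns enough information to recover both the identity and the in-plane orientation of a token. The following is a minimal sketch assuming OpenCV is used as the detector; the patent does not prescribe any particular library, and the function name is illustrative:

```python
import cv2
import numpy as np

def detect_token(image):
    """Detect a single QR-coded token: return its identifier, centre and
    in-plane rotation, or None if no code is found."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(image)
    if not data or points is None:
        return None
    corners = points.reshape(-1, 2)   # four corners, starting at top-left
    centre = corners.mean(axis=0)
    # Orientation: angle of the top edge (corner 0 to corner 1) in the image plane.
    dx, dy = corners[1] - corners[0]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    return data, centre, angle
```

Because the decoded string gives identity and the corner order gives orientation, a single detection satisfies both requirements placed on the identifier above.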
The physical tokens 101 to 104 may be any 2-dimensional or 3-dimensional object. Preferably, the physical tokens 101 to 104 are substantially flat shapes cut out of plastic, paper or metal. The identifiers may be transferred onto the physical tokens 101 to 104 by any suitable method such as printing, etching or engraving. Optionally, the physical tokens 101 to 104 are shaped such that they tessellate with each other. The physical tokens 101 to 104 shown in Figure 1 are hexagons.
Figure 2 is a schematic diagram of an imaging device 200 in accordance with an embodiment of the present invention. The imaging device 200 comprises a processor 201, and a non-volatile memory 202. Processor 201 may comprise one or more individual processor cores. The imaging device 200 comprises an imaging means 203 capable of capturing still and/or video images. The imaging device 200 further comprises a display 210 and a user interface 220. The imaging device 200 may be a desktop computer, laptop or tablet device, smartphone or other computing device. In some examples, the imaging device 200 will be a single complete unit. In other examples the imaging device 200 may be two or more discrete units, in data communication with each other. In some examples the display 210 will be integrated into the imaging device 200. The user interface 220 may be a keyboard and/or mouse arrangement, or may utilise a touch screen where appropriate. The computing device 200 also comprises an operating system (not shown). The methods described herein may run on the operating system (not shown) of computing device 200.
A method of forming a virtual model in accordance with the present invention will now be described by reference to Figure 3. Figure 3 is a flow chart setting out the method steps in accordance with the present invention.
Firstly, the physical tokens 101, 102, 103 and 104 are arranged by a user into a desired pattern (S301), as illustrated in Figure 1. As noted above, this may be, for example, a racetrack. A video stream of the physical tokens is captured by the imaging device (S302). An image-specific virtual model is created for each image of the video stream (S303). These models are combined to create a master virtual model (S304).
The method of forming the image-specific virtual models is described in connection with Figure 4.
At step S401, the user begins capturing a stream of images of the physical tokens 101 to 104 with the imaging device 200. This may be done by capturing a video stream, which is a series of digital images, or by otherwise capturing multiple digital images. As the imaging device moves to capture all of the physical tokens, each digital image is different. Each image may include different subsets of the physical tokens, captured from different angles.
Depending on the arrangement of the physical tokens 101 to 104 and the capabilities of the imaging device 200, a single image may capture all of the physical tokens. However, as will be seen below, there are advantages to capturing multiple images of the tokens.
The imaging device 200 analyses the captured images in real-time. As such, a virtual model is built up as the digital images are captured. At step S402, the imaging device 200 analyses the first image that has been captured and extracts the identifier of each of the physical tokens 101 to 104.
At step S403, the imaging device 200 determines, based on the identifiers, the orientation of each of the physical tokens 101 to 104 present in the first image.
At step S404, the imaging device 200 retrieves a virtual representation of each physical token present in the first image. The virtual representation may be a virtual copy of the physical token, or may be a virtual element related to the physical token. The imaging device 200 may retrieve the virtual representations from local non-volatile storage if available. Alternatively, the imaging device 200 may contact a storage server and request suitable virtual representations based on the determined identifiers. Such contact may be, for example, over a wired or wireless internet connection.
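To illustrate step S404, retrieval can be a simple cache-then-network lookup. This is a sketch only: the cache directory and server endpoint named below are hypothetical and not part of the invention.

```python
import json
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("token_cache")          # hypothetical local store
SERVER_URL = "https://example.com/tokens/{id}"   # hypothetical endpoint

def fetch_representation(identifier: str) -> dict:
    """Return the virtual representation for a token identifier,
    preferring local non-volatile storage and falling back to a
    storage server (step S404)."""
    local = CACHE_DIR / f"{identifier}.json"
    if local.exists():
        return json.loads(local.read_text())
    with urllib.request.urlopen(SERVER_URL.format(id=identifier)) as resp:
        data = json.loads(resp.read())
    CACHE_DIR.mkdir(exist_ok=True)
    local.write_text(json.dumps(data))           # cache for next time
    return data
```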
At step S405, the imaging device 200 orientates each virtual representation depending on the determined orientation of the respective physical tokens.
At step S406, the imaging device 200 determines the relative position of the physical tokens 101 to 104 present in the first image, based on the position of the identifier of each physical token. In the present example, each of the physical tokens 101 to 104 can be considered to be a 2-dimensional physical token, enabling the imaging device to determine relative position and orientation of each physical token relative to a common 2-dimensional plane. If the physical tokens 101 to 104 are all arranged on a single plane, the imaging device may produce a more accurate estimation of their position and rotation in 3D space.
At step S407, the imaging device 200 arranges the orientated virtual representations in accordance with the determined relative positions of the physical tokens 101 to 104, to form an image-specific virtual model of the physical tokens.
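Putting steps S402 to S407 together, an image-specific virtual model can be represented as little more than the set of detected tokens with their poses in a common 2D frame. A minimal sketch, with illustrative names and the simplifying assumption that detections arrive as (identifier, centre, angle) tuples already registered to that plane:

```python
from dataclasses import dataclass, field

@dataclass
class TokenPose:
    identifier: str   # decoded from the token's fiducial marker
    x: float          # position in a common 2D plane
    y: float
    angle: float      # in-plane orientation, degrees

@dataclass
class VirtualModel:
    tokens: list = field(default_factory=list)  # list of TokenPose

def build_image_model(detections):
    """Steps S402 to S407 condensed: assemble an image-specific model
    from (identifier, (x, y), angle) tuples emitted by the detector."""
    model = VirtualModel()
    for identifier, (x, y), angle in detections:
        model.tokens.append(TokenPose(identifier, x, y, angle))
    return model
```

A fuller implementation would first project each detection onto the common plane using the estimated camera pose; here that registration is assumed to have happened upstream.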
As each new image-specific virtual model is created, it is compared with earlier models to determine if the models can be combined. As long as there is sufficient overlap between the models, they can be combined. If the only virtual model available is the first image-specific model, the new image-specific model is compared with that model, and if appropriate combined with it (as will be described in more detail below). Once two virtual models are combined, an intermediate virtual model is created. Any subsequent image-specific virtual models are then compared and combined with the intermediate virtual model.
The method of generating a master model will now be described with reference to Figure 5.
Firstly, the imaging device 200 compares the new image-specific virtual model with any earlier image-specific virtual models (S501). The imaging device 200 looks for a subset of virtual representations that have similar relative positions and orientations.
Where a match is found between a subset of the virtual representations, a score is calculated that measures the quality of the match (S502). If the score is over a certain threshold, the imaging device merges the image-specific virtual models to create an intermediate virtual model (S503).
Any further image-specific virtual models may be combined with the intermediate model in the same manner. Assuming all image-specific models are combined with the intermediate model, once all images have been analysed, the intermediate model becomes the master virtual model.
In the event that an image-specific virtual model does not achieve a quality score above the threshold, the image-specific model is saved separately. Any further image-specific models are first compared with the intermediate model; if there is no match, they are compared with the saved image-specific model, and if there is a match, a second intermediate model is created. In this manner, more than one intermediate model may be active at any one time. After processing each image, the intermediate models are also compared with each other, and if there is a match with a quality score above the threshold, the intermediate models are combined.
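A concrete reading of steps S501 to S503, building on the TokenPose/VirtualModel sketch above, follows. It scores two models by how well the poses of their common tokens agree relative to a shared anchor token, and merges by re-expressing the new tokens in the intermediate model's frame. It assumes each identifier appears at most once per model; the patent's support for duplicate tokens would instead require spatial matching of candidate correspondences. All tolerances and thresholds are illustrative:

```python
import math

def _relative_poses(model, anchor):
    """Express every token's pose relative to an anchor token: translate
    so the anchor sits at the origin, then undo the anchor's rotation."""
    cos_a = math.cos(math.radians(-anchor.angle))
    sin_a = math.sin(math.radians(-anchor.angle))
    rel = {}
    for t in model.tokens:
        dx, dy = t.x - anchor.x, t.y - anchor.y
        rel[t.identifier] = (dx * cos_a - dy * sin_a,
                             dx * sin_a + dy * cos_a,
                             (t.angle - anchor.angle) % 360.0)
    return rel

def quality_score(model_a, model_b, tol_pos=5.0, tol_ang=15.0):
    """Fraction of common tokens whose relative pose agrees between the
    two models (step S502). Returns 0.0 if no identifier is shared."""
    common = ({t.identifier for t in model_a.tokens}
              & {t.identifier for t in model_b.tokens})
    if not common:
        return 0.0
    anchor_id = sorted(common)[0]
    rel_a = _relative_poses(model_a, next(t for t in model_a.tokens
                                          if t.identifier == anchor_id))
    rel_b = _relative_poses(model_b, next(t for t in model_b.tokens
                                          if t.identifier == anchor_id))
    agree = 0
    for i in common:
        (xa, ya, aa), (xb, yb, ab) = rel_a[i], rel_b[i]
        d_ang = min((aa - ab) % 360.0, (ab - aa) % 360.0)
        if math.hypot(xa - xb, ya - yb) <= tol_pos and d_ang <= tol_ang:
            agree += 1
    return agree / len(common)

def merge(intermediate, image_model, threshold=0.8):
    """Step S503: if the score passes the threshold, re-express the new
    tokens in the intermediate frame via a shared anchor and add them."""
    if quality_score(intermediate, image_model) < threshold:
        return False
    by_id_int = {t.identifier: t for t in intermediate.tokens}
    by_id_img = {t.identifier: t for t in image_model.tokens}
    anchor_id = sorted(by_id_int.keys() & by_id_img.keys())[0]
    a_int, a_img = by_id_int[anchor_id], by_id_img[anchor_id]
    delta = a_int.angle - a_img.angle
    cos_d = math.cos(math.radians(delta))
    sin_d = math.sin(math.radians(delta))
    for t in image_model.tokens:
        if t.identifier in by_id_int:
            continue  # already modelled; a fuller version would average poses
        dx, dy = t.x - a_img.x, t.y - a_img.y
        intermediate.tokens.append(TokenPose(
            t.identifier,
            a_int.x + dx * cos_d - dy * sin_d,
            a_int.y + dx * sin_d + dy * cos_d,
            (t.angle + delta) % 360.0))
    return True
```

Anchoring on a single common token keeps the sketch short; a production system would estimate the rigid transform from all common tokens at once, which is more robust to per-detection noise.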
The skilled person would understand that the present invention is not restricted to the exact order of steps as defined above.
Figure 6 is a diagram of a graphical user interface 600 of the imaging device 200 illustrating a master virtual model resulting from the above described method. The graphical user interface 600 is a visual representation of an operating system, such as MICROSOFT WINDOWS ™, and any applications running on the operating system. In Figure 6, the graphical user interface 600 is displaying the operating system 610 and a virtual model application window 620. The virtual model application window 620 is a graphical representation of software executing computer-readable instructions in accordance with the above described method.
Within the virtual model application window 620 is displayed a master virtual model 630. The virtual model 630 corresponds to the physical tokens 101 to 104 shown in Figure 1. The virtual model 630 can now be manipulated as required within the virtual model application window 620.
One advantage of the present invention is that it supports the presence and use of multiple identical physical tokens within a single image. Since the system recognises the position of each physical token relative to every other physical token identified in an image, multiple identical physical tokens can be identified and discriminated within a single image. Consequently, the number of unique physical tokens may be drastically reduced for a given pattern of physical tokens. In this manner, the number of possible physical token identifiers is not quickly exhausted. Furthermore, manufacturing of the physical tokens is simplified.
Figures 7 and 8 illustrate the method of amending the virtual model 630. Figure 7 shows the physical tokens 101, 102, 103 and 104 of Figure 1. However, physical token 104 has been moved from its original position (as indicated by the dashed line) to a new position 704, as illustrated by arrow 710.
Moving one or more of the physical tokens in this way allows the virtual model to be altered and previewed, enabling a user to quickly and easily make design alterations. Furthermore, adjusting just part of the virtual model is computationally far simpler than generating a completely new virtual model. To enable the virtual model to be updated, at least one of the physical tokens will have to remain in its original position to enable the system to identify the relative positions of the moved physical tokens to the physical tokens in the original image. Furthermore, physical tokens should not be moved during the image capturing process as this will confuse the system as to how to combine the moved physical tokens with the originally captured physical tokens. Figure 8 is a diagram of the graphical user interface 600 of the imaging device 200 illustrating the virtual model 630 resulting from the movement of physical token 104.
Within the virtual model application window 620 of Figure 8 is displayed an amended virtual model 801. The virtual model 801 corresponds to the physical tokens 101, 102, 103 and 704 shown in Figure 7.
Figures 9 and 10 illustrate the method of expanding the virtual model 630. Figure 9 shows the physical tokens 101, 102 of Figure 1. However, Figure 9 includes new physical tokens 901 and 902, arranged adjacent to physical tokens 101 and 102. Physical tokens 901 and 902 may be completely new physical tokens, or may be original physical tokens 103 and 104 moved to a new position or orientation.
Moving one or more of the physical tokens, or adding additional physical tokens, allows a large virtual model to be constructed from a small number of physical tokens. However, at least one of the physical tokens will have to remain in its original position to enable the system to identify the relative positions of the moved physical tokens to the physical tokens in the original image. Furthermore, physical tokens should not be moved during the image capturing process as this will confuse the system as to how to combine the moved physical tokens with the originally captured physical tokens.
Figure 10 is a diagram of the graphical user interface 600 of the imaging device 200 illustrating the virtual model 1001 resulting from the addition of physical tokens 901 and 902. Within the virtual model application window 620 of Figure 10 is displayed the expanded virtual model 1001. The virtual model 1001 corresponds to the physical tokens 101, 102, 103, 104 shown in Figure 1, with additional physical tokens 901 and 902 shown in Figure 9.
As an alternative to creating multiple virtual models from multiple images and then combining them into a single virtual model, the multiple images may first be stitched together using panorama stitching methods, and a single virtual model then created from the stitched image.
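For the stitching alternative, OpenCV's high-level stitcher is one plausible building block; whether its SCANS mode copes with a given marker layout and lighting is an assumption to verify rather than a guarantee:

```python
import cv2

def stitch_frames(images):
    """Alternative pipeline: stitch captured frames into one panorama,
    then run the single-image model builder over the result."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS suits flat scenes
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama  # feed this to the marker detector / model builder
```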
In some examples, the user creating a second virtual model may be a second user, using a second imaging device 200. In this case, the second imaging device 200 will be in data communication (such as by BLUETOOTH or WIFI) with the first imaging device so virtual models can be shared. This enables multiple users to scan in the same environment collaboratively using their own imaging devices. Implementing this invention with two, or more, users thus enables large virtual models to be quickly created from a large number of physical tokens.
An advantage of the present invention is that it provides a computationally efficient system and method to work around the fact that information captured by an imaging device is often noisy and unreliable. As a result of this noise, the presence of physical tokens within an image frame may not always be correctly detected. Further, even when physical tokens are detected, the exact position and/or rotation of the physical tokens between frames may not always be consistent. These problems can be exacerbated by changes in lighting, low-resolution imaging devices, and motion blur from the user, or users, moving their devices too fast.
To overcome these difficulties, the present invention is built to be capable of dealing with physical tokens that appear and disappear from frame to frame. The present invention can use multiple images to gradually build up a virtual model of all of the physical tokens, based upon physical tokens detected in the same position and/or rotation across multiple images. Moreover, the present invention may also combine multiple sightings of a particular physical token to gradually build a more accurate view of the relative positions of the physical tokens.
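Combining multiple sightings of one token can be as simple as a running mean, with a circular mean for orientation so that angles near 0/360 degrees average correctly. A sketch of that idea, with illustrative names:

```python
import math

class PoseAccumulator:
    """Fuse repeated sightings of one token into a single estimate:
    running mean for position, circular mean for orientation."""
    def __init__(self):
        self.n = 0
        self.sx = self.sy = 0.0
        self.sin_sum = self.cos_sum = 0.0

    def add(self, x, y, angle_deg):
        self.n += 1
        self.sx += x
        self.sy += y
        self.sin_sum += math.sin(math.radians(angle_deg))
        self.cos_sum += math.cos(math.radians(angle_deg))

    def estimate(self):
        angle = math.degrees(math.atan2(self.sin_sum, self.cos_sum)) % 360.0
        return self.sx / self.n, self.sy / self.n, angle
```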
In some of the circumstances set out above, a second virtual model may not be combinable with the first virtual model if a match between physical tokens in the two models cannot be obtained. In this case, the second virtual model may be stored and used as a potential target for merging with subsequent virtual models. If this second virtual model is not stored, the user will have to be prompted to re-image the physical tokens until the two virtual models can be successfully combined. This can lead to a frustrating user experience.
Therefore, the system may at any one time be storing a number of different partial virtual models. Each newly created virtual model is then compared with each stored virtual model. Once a satisfactory match has been obtained, two or more of the virtual models can be combined as described above. This method is explained in the numbered example below, and simulated in the code sketch that follows the example:
1. A first image is taken, comprising three physical tokens: physical tokens 1, 2 and 3. This image forms frame 1. A virtual model of the three physical tokens may now be created, as described above.
2. A second image is taken, comprising physical tokens 2, 3 and 4. This image forms frame 2. Frame 2 can be directly merged into frame 1 using common physical tokens 2 and 3, and a combined virtual model created based on frames 1 and 2. Alternatively, a second virtual model may be created based on frame 2; subsequently, the first and second virtual models may be combined into a single virtual model using common physical tokens 2 and 3.
3. A third image is taken, comprising physical tokens 5, 6 and 7. This image forms frame 3. There is no commonality between the physical tokens in frame 3 and the previously imaged physical tokens. Therefore, frame 3's virtual model, once created, is stored without being merged.
4. A fourth image is taken, comprising physical tokens 6, 7 and 8. This image forms frame 4. Frame 4 can be directly merged into frame 3 using common physical tokens 6 and 7, and a combined virtual model created based on frames 3 and 4. Alternatively, a virtual model may be created based on frame 4; subsequently, the virtual models of frames 3 and 4 may be combined into a single virtual model using common physical tokens 6 and 7. However, there are still no common physical tokens between the virtual model of step 2 and that of step 4, so at this step of the method there are still two disparate virtual models.
5. A fifth image is taken, comprising physical tokens 1, 2, 7 and 8. This image forms frame 5. Frame 5 could be merged into either of the existing virtual models; assume it is merged into the one created in respect of frames 3 and 4. Frame 3 and 4's model now comprises physical tokens 1, 2, 5, 6, 7 and 8, while frame 1 and 2's model comprises physical tokens 1, 2, 3 and 4. Because physical tokens 1 and 2 are now common to both virtual models, they can be merged into a single virtual model comprising all 8 physical tokens.
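The bookkeeping in this walkthrough can be simulated with plain sets standing in for partial virtual models. This is a deliberate simplification: real merging also checks the pose-based quality score described above, not just shared identifiers.

```python
def absorb(partials, frame_tokens):
    """Merge a new frame's token set into any partial model it overlaps,
    then collapse partial models that now overlap each other."""
    merged = [set(frame_tokens)]
    for p in partials:
        if p & merged[0]:
            merged[0] |= p
        else:
            merged.append(p)
    # second pass: intermediate models are compared with each other
    result = []
    for m in merged:
        for r in result:
            if r & m:
                r |= m
                break
        else:
            result.append(m)
    return result

models = []
for frame in ([1, 2, 3], [2, 3, 4], [5, 6, 7], [6, 7, 8], [1, 2, 7, 8]):
    models = absorb(models, frame)
# after frame 4 there are two disparate models: {5, 6, 7, 8} and {1, 2, 3, 4}
# after frame 5 they collapse into one master model containing tokens 1 to 8
assert models == [{1, 2, 3, 4, 5, 6, 7, 8}]
```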
In a further embodiment of the invention, systems and methods are provided for tracking the imaging device 200 and adjusting the first virtual model 630 based on the tracking of the imaging device 200.
To enable device tracking, the imaging device 200 may further comprise one or more sensors for position and/or orientation tracking (not shown). The sensors may be one or more of an accelerometer, a magnetometer and a gyroscope. Such sensors are generally provided in imaging devices such as mobile phones and laptops. Alternatively, tracking of the imaging device 200 may be achieved by an optical technique such as optical flow or feature extraction. These optical techniques work on the presumption that objects in a digital image stream are fixed and thus any movement of the object in the image must relate to movement of the imaging device 200.
An exemplary method of adjusting the first virtual model 630 based on the tracking of the imaging device 200 will now be explained by reference to Figure 11. Figure 11 is a flow chart showing an exemplary method.
At step S1101, a virtual model is created in accordance with an embodiment of the invention described above.
At step S1102, the movement of the imaging device 200 is tracked. The tracking comprises monitoring the orientation of the device 200 and/or the relative position of the device 200.
At step S1103, a change in the orientation and/or the relative position of the imaging device 200 is detected.
At step S1104, the imaging device adjusts the virtual model based on the detected change in the orientation and/or relative position of the device.
The virtual model may be adjusted by orientating the virtual model in correspondence to changes in orientation of the imaging device 200. This enables a user of the imaging device to intuitively rotate and examine the virtual model.
Additionally, the virtual model may be adjusted by being translated in correspondence to changes in the relative position of the imaging device 200. This enables a user of the imaging device to intuitively move and explore the virtual model.
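As an illustration of steps S1102 to S1104, the displayed model can simply be re-posed by the sensed change in device yaw and position. A 2D sketch with illustrative names; a full implementation would work with the device's 3D pose:

```python
import numpy as np

def adjust_view(model_points, yaw_delta_deg, translation):
    """Re-pose the displayed model in response to device tracking: rotate
    by the sensed change in device yaw, then shift by the sensed change
    in device position. model_points is an (N, 2) array."""
    theta = np.radians(yaw_delta_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return model_points @ rotation.T + np.asarray(translation)
```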
An exemplary use of the present invention will now be described by reference to Figures 12 and 13. Figure 12 shows a plurality of physical tokens 1201, 1202, 1203 and 1204 in accordance with the present invention. Figure 13 shows a graphical user interface 1300 illustrating the virtual model resulting from physical tokens 1201, 1202, 1203 and 1204.
Each of the plurality of physical tokens 1201, 1202, 1203 and 1204 comprises an identifier (not shown) as explained previously. In addition, each of the plurality of physical tokens 1201, 1202, 1203 and 1204 has a pattern on its surface. In the present example, the pattern relates to sections of a car racing track. For example, on physical token 1201 the pattern may be a representation of a straight section of track, on physical token 1202 the pattern may be a representation of a corner in the track, on physical token 1203 the pattern may be a representation of a start/finish line, and so on. A user may arrange these physical tokens 1201, 1202, 1203 and 1204 in many different arrangements and orientations, each creating a different combined pattern. This in turn results in a different car racing track being produced. Figure 12 shows one potential arrangement of physical tokens 1201, 1202, 1203 and 1204 which results in one particular combined pattern 1240.
Figure 13 shows a graphical user interface 1300 illustrating the virtual model resulting from physical tokens 1201, 1202, 1203 and 1204, by following a method of forming a virtual model as described above.
Figure 13 shows the graphical user interface 1300 displaying the operating system 610 and a virtual model application window 620. Within the virtual model application window 620 of Figure 13 is displayed a virtual model 1330. The virtual model 1330 corresponds to the physical tokens 1201, 1202, 1203 and 1204 shown in Figure 12.
Virtual model 1330 is formed of four virtual representations (not individually marked) corresponding to the physical tokens 1201, 1202, 1203 and 1204. In the present example, each of the four virtual representations is a playable video game element corresponding to a section of a car racing track. In contrast to the virtual models shown in Figures 6, 8 and 10, virtual model 1330 is not an exact copy of the respective physical tokens. Instead, the four virtual representations forming virtual model 1330 conform to pattern 1240 in the shape of the resultant race track but comprise further visual elements. These further elements include the sides 1335 of the race track and a moveable element in the form of a racing car 1340.
The race track may now be extended by the user rearranging physical tokens 1201, 1202, 1203 and 1204.
When the user is finished creating the virtual model, the virtual model may be finalised, published, or otherwise completed for use. To allow the system to know that the virtual model is complete, the user may provide a suitable input to the system to indicate completion. Alternatively, the system may automatically detect that it has scanned in the full set of physical tokens that is required. For example, in the case of the race track, the system may detect that the virtual model now forms a complete circuit. The system may also detect completion by the incorporation of particular physical tokens that are used to indicate a start and a finish. Alternatively, the system may detect completion upon the number of scanned physical tokens reaching a threshold number, or upon recognition of a special pattern of physical tokens (for example, a 4 by 4 grid of physical tokens).
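One way to realise the complete-circuit check is to treat the scanned tokens as a graph and test that it forms a single closed loop. A sketch under that assumption, with an adjacency map derived from the tokens' relative positions:

```python
def is_complete_circuit(adjacency):
    """A race-track model is a closed circuit when every token joins
    exactly two neighbours and all tokens are mutually reachable.
    `adjacency` maps token id -> set of neighbouring token ids."""
    if not adjacency or any(len(n) != 2 for n in adjacency.values()):
        return False
    seen, stack = set(), [next(iter(adjacency))]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adjacency[node])
    return seen == set(adjacency)

# a four-token loop, like the arrangement in Figure 12
loop = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}
assert is_complete_circuit(loop)
```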
Forming a virtual model of a car racing track is provided as an example of a use of the present invention. The present invention is not limited however to modelling car racing tracks or for creating playable video games. Instead, the present invention provides a powerful and intuitive method of creating virtual models of real-world objects.
The above embodiments describe one way of implementing the present invention. It will be appreciated that modifications of the features of the above embodiments are possible within the scope of the independent claims.
Features of the present invention are defined in the appended claims. While particular combinations of features have been presented in the claims, it will be appreciated that other combinations, such as those provided above, may be used.

Claims (33)

Claims
1. A method of forming a virtual model of a plurality of physical tokens, each physical token comprising a visual design, the method comprising the steps of:
arranging the plurality of physical tokens into a pattern based on the visual design on each of the plurality of physical tokens;
capturing a plurality of digital images of the arranged plurality of physical tokens with a computing device;
forming a master virtual model of the plurality of physical tokens in accordance with the pattern, based on an aspect of each of the plurality of physical tokens that is recognisable to the computing device.
2. A method according to claim 1, wherein the recognisable aspect is one of a machine-readable marker, a standard image or a three-dimensional shape of the token.
3. A method according to claims 1 or 2, wherein the plurality of digital images are frames of a video stream.
4. A method according to claim 3, further comprising recognising the recognisable aspect in each digital image and extracting one or more parameters relating to each token.
5. A method according to claim 4, further comprising building a virtual representation of each token, based on the extracted parameters.
6. A method according to claims 4 or 5, wherein the one or more parameters include one or more of an identifier, token position, and token orientation.
7. A method according to claim 6, wherein the virtual representation includes position and orientation information.
8. A method according to claim 5, wherein the virtual representations are used to generate an image-specific virtual model representing the physical tokens captured in a respective digital image.
9. A method according to claim 8, wherein the master virtual model is generated by combining two or more image-specific virtual models.
10. A method according to any preceding claim, further comprising generating an image-specific virtual model in connection with each digital image, and combining two or more image-specific models to generate at least one intermediate virtual model.
11. A method according to claim 10, further comprising updating the at least one intermediate model by combining further image-specific virtual models with the at least one intermediate virtual model.
12. A method according to claim 11, wherein each image-specific virtual model is compared with the at least one intermediate model to determine a similarity quality score, and in the event that the score exceeds a predetermined threshold, the models are combined.
13. A method according to claim 12, wherein, in the event that the score does not exceed a threshold, the virtual models are not combined, and a new intermediate virtual model is generated.
14. A method according to claims 12 or 13, wherein the similarity quality score is based on one or more of an identifier, token position relative to adjacent tokens, and token orientation relative to adjacent tokens.
15. A method according to claim 14, wherein if it is not possible to determine position and orientation relative to other tokens, the similarity quality score is based on screen-based position and orientation.
16. A method according to claim 13, further comprising, in the event of two or more intermediate models, comparing each subsequent image-specific virtual model with each intermediate virtual model.
17. A method according to claim 16, further comprising, in the event of two or more intermediate models, comparing the intermediate models with each other after each image-specific virtual model has been processed, and combining the intermediate models, in the event of a match, to generate the master virtual model.
18. A method according to any preceding claim, further comprising the steps of determining the position of the computing device relative to the plurality of physical tokens; and forming the master virtual model further based on the relative position of the computing device and the plurality of physical tokens.
19. A method according to claim 18, wherein the position of the computing device relative to the plurality of physical tokens is determined from the output of one or more accelerometers, magnetometers and/or gyroscopes provided in the computing device.
20. A method according to claim 18, wherein the position of the computing device relative to the plurality of physical tokens is determined from tracking relative movement of one or more of the plurality of physical tokens across the plurality of digital images.
21. A method according to any preceding claim, further comprising the steps of determining the orientation of the computing device relative to the plurality of physical tokens; and forming the master virtual model further based on the relative orientation of the computing device and the plurality of physical tokens.
22. A method according to claim 21, wherein the orientation of the computing device relative to the plurality of physical tokens is determined from the output of one or more accelerometers, magnetometers and/or gyroscopes provided in the computing device.
23. A method according to claim 21, wherein the position of the computing device relative to the plurality of physical tokens is determined from tracking relative movement of one or more of the plurality of physical tokens across the plurality of digital images.
24. A method according to any preceding claim, wherein the digital images are processed in a sequence.
25. A method according to claim 24, wherein the digital images are frames of a video stream.
26. A method according to any preceding claim, wherein data extracted from each digital image is compared with data extracted from other digital images, and in the event that the degree of similarity exceeds a threshold, the digital images are combined.
27. A method according to any preceding claim, wherein the machine-readable marker is a QR code or a barcode.
28. A method according to any preceding claim, further comprising the steps of receiving a second plurality of digital images of the arranged plurality of physical tokens from a second computing device; and forming the master virtual model further based on the received second plurality of digital images.
29. A method according to any of claims 1 to 27, further comprising the steps of receiving a second master virtual model of the arranged plurality of physical tokens from a second computing device; and comparing the master virtual models with each other, and combining the master virtual models in the event of a match.
30. An apparatus for performing the method of any of claims 1 to 29.
31. A computer program comprising code which, when run on a computer, would cause the computer to perform the method of any of claims 1 to 29.
32. A computer readable medium having code stored thereon which, when run on a computer, causes the computer to perform the method of any of claims 1 to 29.
33. A method of forming a virtual model, comprising:
capturing a plurality of digital images of a plurality of physical tokens, wherein each image includes one or more markers;
from a first of said images, generating an intermediate virtual model comprising virtual representations of the markers captured in that image;
for each subsequent digital image:
a) generating an image-specific virtual model comprising virtual representations of the markers in the subsequent image;
b) comparing the image-specific virtual model with the intermediate virtual model to determine a quality score representing a degree of similarity between the image-specific virtual model and the intermediate virtual model; and
c) in the event that the quality score exceeds a threshold, merging the image-specific virtual model with the intermediate model to update the intermediate virtual model;
repeating steps a) to c) for each subsequent digital image.
GB1711792.0A 2017-07-21 2017-07-21 Systems and methods of forming virtual models Active GB2564715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1711792.0A GB2564715B (en) 2017-07-21 2017-07-21 Systems and methods of forming virtual models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1711792.0A GB2564715B (en) 2017-07-21 2017-07-21 Systems and methods of forming virtual models

Publications (3)

Publication Number Publication Date
GB201711792D0 GB201711792D0 (en) 2017-09-06
GB2564715A (en) 2019-01-23
GB2564715B GB2564715B (en) 2022-08-24

Family

ID=59771751

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1711792.0A Active GB2564715B (en) 2017-07-21 2017-07-21 Systems and methods of forming virtual models

Country Status (1)

Country Link
GB (1) GB2564715B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010150232A1 (en) * 2009-06-25 2010-12-29 Zyx Play Aps A game system comprising a number of building elements
EP3042704A1 (en) * 2011-05-23 2016-07-13 Lego A/S A toy construction system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220118375A1 (en) * 2019-01-31 2022-04-21 Lego A/S A modular toy system with electronic toy modules

Also Published As

Publication number Publication date
GB2564715B (en) 2022-08-24
GB201711792D0 (en) 2017-09-06
