AU2018203909A1 - A User Interface - Google Patents

A User Interface

Info

Publication number
AU2018203909A1
Authority
AU
Australia
Prior art keywords
user
iteration
panoramic
image
images
Legal status
Abandoned
Application number
AU2018203909A
Inventor
Akash Nigam
Priyanka Nigam
Current Assignee
Virtuality-360 Pty Ltd
Original Assignee
Virtuality 360 Pty Ltd
Priority claimed from AU2017902103A0
Application filed by Virtuality 360 Pty Ltd
Publication of AU2018203909A1


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A USER INTERFACE

The present invention relates in general to computer implemented methods for providing a user interface and in particular, to a user interface which allows a user to create a virtual tour of a space. The computer implemented method provides a user interface for a virtual tour application program, specifying details and uploading user information of a project to a server, and creating a first iteration of the project at a first time and/or date. The method receives through the user interface user-input specifying information regarding or relating to the project and a plurality of panoramic images of the project, and adds the plurality of panoramic images to the first iteration. The first iteration is saved to the server and a multi-pane web page is generated in response to the user interface, which is transmitted via the Internet and viewed by the user using a web browser.

Description

FIELD OF THE INVENTION
The invention relates generally to computer implemented methods for providing a user interface and in particular, to a user interface which allows a user to create a virtual tour of a space. The present invention has been found to be particularly useful in a number of industries in which it can be used as a means to digitise a space.
The present invention also extends to a user interface and process for creating iterations of a space which show the visual change in the space over time. The present invention further extends to a system and process for an analysis tool that provides consistent, transparent, and efficient analysis of a project.
BACKGROUND OF THE INVENTION
It should be noted that reference to the prior art herein is not to be taken as an acknowledgement that such prior art constitutes common general knowledge in the art.
A user interface (UI) is the means by which a person controls a software application or hardware device. A good user interface provides a user-friendly experience, allowing the user to interact with the software or hardware in a natural and intuitive way. A web-based user interface accepts inputs from a user and provides an output by generating web pages which are transmitted via the Internet and viewed by the user using a web browser. A cloud-based UI can equally be used to access online services over the Internet like a web-based UI, but is not always exclusively dependent on web browsers to work. It is possible for a custom-built cloud application or user interface to be installed on internet-connected devices such as desktops and mobile phones.
In today's changing world users are storing their data on remote data servers, and using remotely provided web-based applications to manipulate and organise that data. Cloud computing provides computing resources such as application programs and file storage which are remotely provided over the Internet, typically through a web browser. These web browsers are capable of running applications, which can themselves be application programming interfaces ("APIs") to more sophisticated applications that are running on remote servers. A web browser interfaces with and allows a user to control an application program that is running on a remote server. Cloud computing provides the user with the advantages of being able to quickly log onto their computer, launch a web browser, and access data and programs of interest to them, which are accessible through the world wide web.
One such application which can take advantage of both the web-based and cloud-based user interfaces is the virtual tour or 360 degree virtual tour. Virtual tours have become a frequently used technique for providing viewers with information about three-dimensional spaces of interest. A virtual tour is basically a simulation of an existing location, usually composed of a sequence of images forming a panorama. A panorama is any wide-angle view or representation of a physical space and when used in a virtual tour is mainly associated with images created using still cameras. Panoramic images are usually considered to be any images with an aspect ratio greater than 2:1. This means that they are twice as long as they are high. The panorama is used to create completely spherical virtual tours that show an environment with a viewing angle of up to 360 by 180 degrees.
Virtual tours are typically created in a number of known steps. The first step is to provide the panoramic images. This can be done by taking the imagery using the proper equipment such as a digital camera, or through computer graphics to produce computer generated imagery (CGI). The second step is to stitch the raw photos together to form the 360 by 180 degrees panoramic image. If an omnidirectional camera is used to produce the panoramic images there is no need to stitch the raw photos together. The stitching process requires specialised equipment and software to produce the final product. The final step is to import the 360 by 180 degrees image into the virtual tour software. The importing of the 360° by 180° images into the software allows the user to build a virtual tour console. Files are then exported and uploaded onto a web server. Typically an executable HTML file is used as the URL to run the virtual tour, normally placed within an iframe or called by an application programming interface (API). The process can be carried out manually or fully automated.
The application of virtual tours and 360 degree virtual tours has been found to be a very effective method for the presentation of a location or service to the online public. Virtual tours can allow a user to view an environment while online. Currently a variety of industries use such technology to help market their services and products. Over the last few years the quality and accessibility of virtual tours has improved considerably, with some websites allowing the user to navigate the tours by clicking on maps or integrated floor plans.
Virtual tours or walkthroughs are very popular in the real estate industry. Such tours can be a simple interactive floorplan with attached images, or an advanced virtual tour including professionally photographed images run through expensive specialised stitching software to produce 360 degree panoramic images with embedded hotspots, floorplans and additional linked information, including multimedia elements, to produce a realistic 3D view of a home or property which can be presented to customers using the world wide web. However, these conventional methods fail to provide a flexible method to present and organise all types of highly desirable information that is required for an effective tour and marketing tool.
Another area in which these virtual tours are useful is in the construction industry, where it is important to track every stage of a project. Virtual tours enable a company to track the progress of a project over time through a virtual tour or walkthrough. The progress can be tracked at different times and dates and visually produced as a virtual tour incremented over time. The incremented virtual tours are typically known as iterations within a project. However, the current methods of tracking progress fail to provide a seamless way of maintaining perspective as a user moves between different iterations of a project over time.
Clearly it would be advantageous if a user interface could be devised that helped to at least ameliorate some of the shortcomings described above. In particular, it would be beneficial to provide a user interface which allows a user to create a virtual tour of a space that is suitable for many applications and industries.
SUMMARY OF THE INVENTION
Development of a user interface which allows a user to easily create a 360 degree virtual tour is an attractive proposition, especially in view of the potential number of different applications and industries that will benefit from this technology. The present invention has been developed to create a user-friendly interface that simplifies the creation of a 360 degree virtual tour or walkthrough.
In accordance with a first aspect, the present invention provides a computer-implemented method comprising: providing a user interface for a virtual tour application program specifying details and uploading user information of a project to a server; creating a first iteration of the project at a first time and/or date; receiving through the user interface, user-input specifying information regarding or relating to the project; receiving through the user interface a plurality of panoramic images of the project and adding the plurality of panoramic images to the first iteration; saving the first iteration to the server; and generating a multi-pane web page and/or generating an application programming interface return call in response to the user interface which is transmitted via the Internet and viewed by the user using a web browser.
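By way of illustration only, the following Python sketch models the project and iteration objects referred to in this aspect. The class and field names (Project, Iteration, Panorama, create_iteration) are assumptions introduced for the example and do not form part of the claimed method.

```python
# Illustrative sketch only: hypothetical data model for a project and its
# first iteration. All names are assumptions, not the claimed method.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class Panorama:
    file_path: str               # location of the uploaded equirectangular image
    captured_at: datetime        # original file creation time and date stamp
    heading: float = 0.0         # saved heading (degrees) used to align views


@dataclass
class Iteration:
    created_at: datetime                       # first (or later) time and/or date
    panoramas: List[Panorama] = field(default_factory=list)


@dataclass
class Project:
    name: str
    location: Optional[str] = None             # user-input project information
    iterations: List[Iteration] = field(default_factory=list)

    def create_iteration(self, when: datetime) -> Iteration:
        """Create an iteration of the project at a given time and/or date."""
        iteration = Iteration(created_at=when)
        self.iterations.append(iteration)
        return iteration


# Usage: create a project, add a first iteration and attach uploaded panoramas.
project = Project(name="12 Example Street", location="Melbourne")
first = project.create_iteration(datetime(2018, 6, 2, 9, 0))
first.panoramas.append(Panorama("room1.jpg", datetime(2018, 6, 2, 9, 5)))
```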
Preferably, the user interface may be a web based or a cloud based user interface. The user-input information may comprise any one or more of the following: (i) a location of the project; (ii) at least one contact for the project if different from the user; (iii) at least one event related to the project; (iv) any project branding; or (v) any information that adds context to the project, including but not only limited to audio, video and floorplans.
Preferably, the user information may comprise user details and/or company details.
Preferably, each one of the plurality of panoramic images may be recorded with an original file creation time and date stamp and each image is saved on the server in time and date order starting from an earliest time and date to a latest time and date. Each one of the plurality of panoramic images may be recorded using an omnidirectional camera or created by a three-dimensional software application.
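A minimal Python sketch of this ordering step is given below, assuming the Pillow library is available. Reading EXIF tag 36867 (DateTimeOriginal) is one possible way to obtain the original file creation time and date stamp; the helper names are assumptions.

```python
# Illustrative sketch (not part of the claimed method): order uploaded
# panoramas by their original capture time. Tag 0x8769 is the EXIF sub-IFD
# pointer, 36867 is DateTimeOriginal and 306 is DateTime.
from datetime import datetime
from pathlib import Path

from PIL import Image


def capture_time(path: Path) -> datetime:
    """Return the capture time of a panorama, preferring EXIF metadata."""
    with Image.open(path) as img:
        exif = img.getexif()
    stamp = exif.get_ifd(0x8769).get(36867) or exif.get(306)
    if stamp:
        return datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    # Fall back to the filesystem modification time if no EXIF stamp exists.
    return datetime.fromtimestamp(path.stat().st_mtime)


def sort_panoramas(paths):
    """Earliest capture first, matching the order the images are saved on the server."""
    return sorted(paths, key=capture_time)
```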
Preferably, creating the first iteration at the first time and/or date of the project may be performed manually by a user interacting with the user interface to produce the first iteration. Creating the first iteration at the first time and/or date by the user may further comprise the steps of: (i) choosing a first panoramic image from the plurality of panoramic images; and (ii) rendering the selected panoramic image in a pane of the multi-pane web page. Creating the first iteration may further comprise the user manually adding at least one hotspot to the first panoramic image to assist in an alignment of the images to produce the virtual tour.
Preferably, adding the at least one hotspot may comprise the steps of: (i) rotating the first panoramic image to a position to add the hotspot; (ii) locating the hotspot on the first panoramic image; (iii) saving the co-ordinates of the hotspot, the co-ordinates including the location, time and direction of the hotspot; and (iv) zooming into the location of the hotspot on the first panoramic image.
Preferably, creating the first iteration may further comprise selecting a further panoramic image to link to the first panoramic image. Selecting the further panoramic image may comprise the steps of: (i) selecting the date and time information for each of the remaining plurality of panoramic images; (ii)
calculating a time difference of each of the remaining plurality of panoramic images with respect to the time and date of the first panoramic image; (iii) creating a list of the remaining panoramic images and the corresponding time differences and displaying the list by overlaying the list over the first panoramic image in the pane of the multi-pane web page; and (iv) selecting from the list the further panoramic image based on a shortest time difference calculated with respect to the first panoramic image.
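By way of example only, the shortest-time-difference selection of steps (i) to (iv) could be sketched as follows, reusing the hypothetical capture_time helper from the earlier sketch.

```python
# Illustrative sketch: choose the next panorama to link by the shortest time
# difference from the current (first) panorama. `capture_time` is the
# EXIF-based helper sketched earlier (an assumption).
def next_panorama(first_path, remaining_paths):
    """Return (path, time difference) of the candidate closest in time."""
    first_time = capture_time(first_path)
    candidates = [
        (path, abs(capture_time(path) - first_time)) for path in remaining_paths
    ]
    # The list of remaining images and their time differences could be shown
    # to the user as an overlay; here we simply pick the smallest difference.
    return min(candidates, key=lambda item: item[1])
```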
Preferably, linking the first panoramic image to the further panoramic image may comprise the steps of: (i) rendering the further panoramic image in an overlapping window over the pane of the first panoramic image; (ii) aligning the first and further panoramic images by aligning a visible perspective of both the first image and the further image by rotating the further image; (iii) checking the visible perspective is maintained between the first image and the further image; (iv) adding at least one corresponding hotspot to the further panoramic image; (v) saving a heading of the first panoramic image on the server once the user is satisfied that the alignment is correct and the first and further panoramic images and the hotspots in each image are linked in the first iteration; (vi) updating the further panoramic image as the first panoramic image; (vii) linking each one of the remaining panoramic images to the first iteration by performing steps (i) to (vi) until each further panoramic image is linked to form the virtual tour; and (viii) saving the first iteration to the server.
Preferably, the method further comprises automatically adding a mirror hotspot in the further panoramic image that connects the further panoramic image to the first panoramic image.
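A minimal sketch of such a mirror hotspot is shown below, assuming hotspot positions are stored as yaw and pitch angles in degrees; the Hotspot structure and mirroring rule are illustrative assumptions rather than the claimed implementation.

```python
# Illustrative sketch: a hotspot records where it sits on the panorama (yaw
# and pitch in degrees) and which panorama it leads to. A "mirror" hotspot in
# the further image points back along the opposite direction of travel.
from dataclasses import dataclass


@dataclass
class Hotspot:
    yaw: float        # horizontal position on the panorama, degrees
    pitch: float      # vertical position on the panorama, degrees
    target: str       # identifier of the linked panorama


def mirror_hotspot(forward: Hotspot, source_id: str) -> Hotspot:
    """Create the return hotspot, facing back the way the viewer came."""
    return Hotspot(
        yaw=(forward.yaw + 180.0) % 360.0,   # opposite heading
        pitch=-forward.pitch,                # assumption: mirror the pitch too
        target=source_id,
    )


# Example: a doorway hotspot at yaw 30 degrees in room 1 leading to room 2
# gets a mirrored hotspot at yaw 210 degrees in room 2 leading back to room 1.
door = Hotspot(yaw=30.0, pitch=-5.0, target="room2")
back = mirror_hotspot(door, source_id="room1")
```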
Alternatively, rotating the first panoramic image may further comprise rotating the first panoramic image to orient the first panoramic image with respect to a physical feature such as those on a map.
Alternatively, rotating the first panoramic image may further comprise rotating the first panoramic image to orient the first panoramic image with respect to the compass heading north.
Preferably, creating the first iteration further comprises an image matching algorithm which automatically adds hotspots to the plurality of panoramic images to assist in an alignment of the images to produce the virtual tour. The image matching algorithm may identify similarities between the plurality of panoramic images and, based on the similarities, automatically add at least one hotspot that connects each one of the plurality of panoramas to each other.
Preferably, the image matching algorithm may perform the following steps: (a) selecting from the saved original file creation time and date a first panoramic image based on the earliest time and date of the plurality of panoramic images, the first panoramic image having a first heading; (b) scanning the first panoramic image for features by splitting the first panoramic image into discrete portions and analysing each discrete portion within the first image; (c) saving each discrete portion with features on the server as an object with a defined size, form and shape; (d) selecting from the saved original file creation time and date a further panoramic image based on a shortest time difference calculated from the first panoramic image; (e) searching for the saved objects within the further panoramic image to identify any matching objects; (f) comparing the size of the matching objects from the first and further images to determine a difference in each object's size; (g) identifying the objects with a largest difference in size, the largest difference in size showing a direction of motion within the first and further panoramic images; (h) updating the first heading as per the direction of motion; (i) adding at least one hotspot in the direction of motion in the first panoramic image; (j) repeating steps (d) to (i) for each further panoramic image of the plurality of panoramic images in the first iteration; and (k) saving the first iteration to the server.
Preferably, the image matching algorithm may further comprise in step (c) removing any saved objects which are a matching object within the saved objects to remove any duplicate saved objects.
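The following Python sketch approximates steps (b) to (i) of the image matching algorithm, by way of illustration only. It substitutes ORB keypoints (via OpenCV) for the fixed 10-pixel discrete portions described above, and uses the matched feature whose apparent size changes most between the two panoramas to place a hotspot in the inferred direction of motion; it is a simplified approximation, not the claimed algorithm.

```python
# Illustrative sketch: infer the direction of motion between two panoramas
# from the matched feature whose apparent size changes most, and express it
# as a yaw heading (degrees) at which a hotspot could be placed. Assumes
# OpenCV (cv2), NumPy and equirectangular input images.
import cv2
import numpy as np


def motion_hotspot_yaw(first_path: str, further_path: str) -> float:
    first = cv2.imread(first_path, cv2.IMREAD_GRAYSCALE)
    further = cv2.imread(further_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(first, None)      # "saved objects"
    kp2, des2 = orb.detectAndCompute(further, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)                 # matching objects

    # Difference in apparent size of each matched feature between the images.
    size_change = [
        abs(kp2[m.trainIdx].size - kp1[m.queryIdx].size) for m in matches
    ]
    best = matches[int(np.argmax(size_change))]          # largest difference

    # Convert the feature's horizontal position in the first panorama to a
    # yaw heading, where the image spans 360 degrees horizontally.
    x = kp1[best.queryIdx].pt[0]
    return (x / first.shape[1]) * 360.0 - 180.0
```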
Preferably, the image matching algorithm may automatically add a mirror hotspot in the further panoramic image that connects the further panoramic image to the first panoramic image.
Preferably, the discrete portions may have a size which varies dependent upon the size of the image. The size of the discrete portions may be approximately a 10 pixel square or a user defined size for object identification.
Preferably, the direction of motion may show the direction in which a photographer or the omnidirectional camera is moving from the first panoramic image through to a last panoramic image of the plurality of panoramic images in the first iteration.
Preferably, the first heading may comprise a pitch, a yaw and a horizontal field of view of the first panoramic image.
Preferably, creating the first iteration may further comprise an image matching algorithm for automatically creating the virtual tour using 3D mesh structures of the first iteration, the 3D mesh structures allowing the user to browse the plurality of panoramic images and allowing the plurality of panoramic images to be located within a co-ordinate based system.
Preferably, the image matching algorithm for creating the virtual tour using the 3D mesh structures may comprise: receiving through the user interface a plurality of panoramic images of a 3D scene; reconstructing, by the image matching algorithm, geometry of a plurality of 3D bubble-views from the panoramic images, wherein reconstructing includes: using a structure from motion framework for camera localisation; generating a 3D surface mesh model of the scene using multi-view stereo via cylindrical surface sweeping for each bubble-view, wherein the cylindrical surface sweeping quantizes the scene with multiple depth surfaces with respect to a bubble view centre and hypothesizes a depth of each light ray to be intersecting with one of the depth surfaces, wherein an intersecting point of each light ray is projected on each depth surface, and thereafter the cylindrical surface sweeping performs forward projection to find correspondences across multiple cameras; and registering multiple 3D bubble-views in a common coordinate system, wherein registering multiple 3D bubble-views in a common coordinate system comprises registering partial images from different bubble-views to form a new coordinate system, and estimating a relative pose from each bubble-view to map images from each bubble-view to the new coordinate system; and displaying the surface mesh models.
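By way of illustration only, the geometric core of the cylindrical surface sweeping can be sketched as follows: a pixel of an equirectangular bubble-view defines a light ray, and intersecting that ray with a vertical cylinder of radius r about the bubble-view centre gives a candidate 3D point that can then be forward-projected into other views. The coordinate conventions and function names are assumptions; the full multi-view stereo and registration steps are not shown.

```python
# Illustrative sketch: hypothesise the depth of each light ray by intersecting
# it with a cylindrical depth surface around the bubble-view centre.
import numpy as np


def pixel_to_ray(u, v, width, height):
    """Unit ray direction for equirectangular pixel (u, v)."""
    theta = (u / width) * 2.0 * np.pi - np.pi       # azimuth, -pi..pi
    phi = np.pi / 2.0 - (v / height) * np.pi        # elevation, pi/2..-pi/2
    return np.array([
        np.cos(phi) * np.sin(theta),                # x
        np.sin(phi),                                # y (up)
        np.cos(phi) * np.cos(theta),                # z
    ])


def intersect_depth_cylinder(ray, radius):
    """3D point where a ray from the bubble centre meets x^2 + z^2 = radius^2."""
    horizontal = np.hypot(ray[0], ray[2])
    if horizontal < 1e-9:                           # ray points straight up/down
        return None
    t = radius / horizontal
    return t * ray


# Sweep one pixel over several hypothesised depth surfaces (radii in metres).
ray = pixel_to_ray(u=1024, v=512, width=4096, height=2048)
hypotheses = [intersect_depth_cylinder(ray, r) for r in (0.5, 1.0, 2.0, 4.0)]
```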
Preferably, reconstructing may further comprise refining the 3D surface mesh model of each bubble-view using a depth optimisation technique that utilises a smoothness constraint to resolve issues caused by textureless regions of the scene.
Preferably, the method may further comprise adding at least one information hotspot to any one of the plurality of panoramic images to identify any one or more of the features within the panoramic image, using any one of the following: (a) an image; (b) a video; (c) a title; (d) a hyperlink; or (e) a description.
Preferably, the method may further comprise the user adding at least one floorplan to the project to be displayed in another pane of the multi-pane web page. The at least one floorplan may comprise a map or site layout or anything with a spatial significance, the map being a view from above of the relationships between rooms, spaces and other physical features at one level of the project. The floorplan may further comprise at least one active region within the floorplan. Each one of the active regions may be linked to at least one of the plurality of panoramic images, allowing the user to select one of the active regions and the corresponding panoramic image will be displayed within the pane of the multi-pane web page.
Preferably, the method may further comprise allowing the use of image metadata in any one of the plurality of panoramic images to automatically locate the image on a map, such as by locating the image on a web mapping service
using embedded global positioning system (GPS) co-ordinates in the image metadata.
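One possible way to read such embedded GPS co-ordinates is sketched below, assuming the Pillow library; the GPS IFD (tag 0x8825) holds the latitude and longitude as degree/minute/second rationals, converted here to decimal degrees suitable for a web mapping service.

```python
# Illustrative sketch: extract GPS co-ordinates from a panorama's EXIF
# metadata. Returns (latitude, longitude) in decimal degrees, or None if the
# image carries no GPS information.
from PIL import Image


def gps_coordinates(path):
    with Image.open(path) as img:
        gps = img.getexif().get_ifd(0x8825)        # GPSInfo IFD
    if not gps:
        return None

    def to_degrees(dms, ref):
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60.0 + seconds / 3600.0
        return -value if ref in ("S", "W") else value

    lat = to_degrees(gps[2], gps[1])               # GPSLatitude, GPSLatitudeRef
    lon = to_degrees(gps[4], gps[3])               # GPSLongitude, GPSLongitudeRef
    return lat, lon
```

The decimal co-ordinates could then be passed to a web mapping service, for example as a "lat,lon" query parameter in a map URL.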
Preferably, the method may further comprise the user creating at least one further iteration of the project at a different time and/or date to the first iteration. The at least one further iteration may allow the user to show changes in the project over time. Each one of the further iterations is created in the same way in which the first iteration is created and hotspots are manually added.
Preferably, each one of the further iterations may be created in the same way in which the first iteration is created and hotspots from the first iteration are automatically added to each one of the further iterations by the user interacting with a bridge algorithm.
Preferably, the bridge algorithm may comprise the steps of: (i) retrieving all of the plurality of panoramic images of the further iteration from the server and adding the images to a first list and displaying the first list on the multi-pane web page; and (ii) retrieving all of the plurality of panoramic images of the first iteration from the server and adding the images to a second list and rendering the second list as a hidden list on the multi-pane web page.
Preferably, the bridge algorithm further comprises the steps of: (i) allowing the user to select a first panoramic image from the first list and displaying that image on the pane of the multi-pane web page; (ii) allowing the user to rotate the selected first panoramic image to a desired visual angle; (iii) allowing the user to select the iteration connect button which reveals the hidden second list to the user; (iv) allowing the user to select from the second list a further panoramic image which has the closest similar visual features to the selected first panoramic image from the first list; (v) overlaying the further panoramic image selected over the selected first panoramic image; (vi) allowing the user to rotate the further panoramic image to align the similar visual features in the first panoramic image; (vii) calculating an updated heading of the first panoramic image with respect to the rotated further panoramic image and
saving the updated heading on the server; and (viii) performing steps (i) to (vii) for each one of the plurality of panoramic images of the further iteration.
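A minimal sketch of the heading bookkeeping behind steps (vii) and (viii) is given below, assuming headings are stored in degrees and that the rotation applied by the user while visually aligning the overlaid panorama is recorded; the attribute names (heading, counterpart_id) are assumptions for the purpose of the example.

```python
# Illustrative sketch: when the user rotates the overlaid panorama from the
# new iteration by `rotation` degrees to line its features up with the
# reference panorama, the stored heading is offset so that both present the
# same visible perspective.
def bridged_heading(reference_heading: float, rotation: float) -> float:
    """Heading to save for the new iteration's panorama, in degrees 0-360."""
    return (reference_heading + rotation) % 360.0


def bridge_iteration(reference_panoramas, new_panoramas, user_rotations):
    """Align every panorama of a further iteration with its counterpart.

    `user_rotations` maps a new panorama id to the rotation (degrees) the
    user applied while visually aligning it over the reference panorama.
    """
    for new in new_panoramas:
        reference = reference_panoramas[new.counterpart_id]
        new.heading = bridged_heading(reference.heading, user_rotations[new.id])
        # Saving `new.heading` on the server lets hotspots from the first
        # iteration appear at the correct position in the further iteration.
```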
Preferably, each one of the further iterations once linked by the bridge algorithm to a previous iteration may show the created hotspots and any information hotspots in any one of the plurality of panoramic images in each one of the further iterations, such that any one hotspot can transcend through each iteration.
Preferably, calculating the updated heading of the first panoramic image with respect to the rotated further panoramic image and saving the updated heading on the server to align each one of the plurality of panoramic images across iterations may allow the user to control the heading of individual panoramas across iterations and to therefore seamlessly maintain a visible perspective throughout each iteration.
Preferably, the bridge algorithm may further comprise using the image matching algorithm to automatically find visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of the further iteration and automatically aligning the panoramic images in the further iteration with the first iteration.
Preferably, each one of the further iterations may be automatically compared with a previous iteration to find visual similarities between the plurality of panoramic images of each further iteration and the plurality of panoramic images of the previous iteration and automatically aligning the panoramic images in each further iteration with the previous iteration.
Preferably, the method may further comprise analysing changes in visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of each one of the further iterations using an artificial intelligence process to infer differences and changes in a location across multiple iterations to reach conclusions based on common understanding. The artificial intelligence process may comprise any one or
more of the following: (a) a machine learning algorithm; or (b) a pattern recognition algorithm; or (c) a machine vision algorithm.
Preferably, the method may further comprise allowing the removal of any one of the further iterations without removing any of the linked hotspots from any other one of the remaining further iterations.
Preferably, a link to each one of the iterations may be displayed in another pane of the multi-pane web page.
Preferably, the method may further comprise allowing the use of image metadata in any one of the plurality of panoramic images to automatically locate the image on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
Preferably, the method may further comprise providing an application programming interface (API) which allows a third party technology to interact with and use the web based user interface or parts thereof. The third party technologies may comprise any one or more of the following: (i) a floorplan design program; (ii) a web mapping service; (iii) any technology which provides a plurality of panoramic images sourced over or at different times and/or dates; or (iv) any technology which requires the iterative sorting of panoramic images over any domain.
Preferably, the API may provide a sorted and linked plurality of panoramic images which are connected to any one or more iteration of the plurality of panoramic images at a different time and/or date, and may allow, through the use of image metadata in any one of the plurality of panoramic images, the image to be automatically located on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
Preferably, the method may further comprise providing an analysis tool that provides consistent, transparent, and efficient analysis of the project to understand where viewers are looking in the panoramic images to gauge interest in specific panoramas across multiple iterations and then developing at least one graphical representation which provides the user with insights on user behaviour within the project.
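By way of illustration only, one way such an analysis tool could aggregate where viewers are looking is to bin recorded viewing directions into a per-panorama heat map, as sketched below with NumPy; the sample format and bin sizes are assumptions.

```python
# Illustrative sketch: each recorded gaze sample is a (yaw, pitch) pair in
# degrees; binning the samples yields a heat map per panorama that can be
# charted per iteration as a graphical representation of viewer interest.
import numpy as np


def gaze_heatmap(samples, yaw_bins=36, pitch_bins=18):
    """Return a (pitch_bins x yaw_bins) count grid of viewing directions."""
    yaws = np.array([s[0] for s in samples]) % 360.0
    pitches = np.clip([s[1] for s in samples], -90.0, 90.0)
    heatmap, _, _ = np.histogram2d(
        pitches, yaws,
        bins=[pitch_bins, yaw_bins],
        range=[[-90.0, 90.0], [0.0, 360.0]],
    )
    return heatmap


# Example: most samples cluster around yaw 90 degrees (a feature of interest).
samples = [(92.0, -3.0), (88.5, 0.0), (91.0, 2.0), (270.0, 10.0)]
print(gaze_heatmap(samples).max())
```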
Preferably, the method may further comprise allowing the user to pixelate or blur out faces or vehicle license or number plates as required to censor each one of the plurality of panoramic images.
Alternatively, the method may further comprise providing an algorithm which allows for automatic recognition of items which require censorship such as faces or vehicle number plates in the plurality of panoramic images, wherein the automatic recognition algorithm automatically pixelates or blurs out faces or number plates within the images.
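An illustrative sketch of such automatic censorship is given below using an OpenCV Haar cascade for face detection and pixelation of each detected region; number plate detection would follow the same pattern with a suitable plate detector. This is one possible realisation, not the claimed algorithm.

```python
# Illustrative sketch: detect faces in a panorama and pixelate each region.
import cv2


def pixelate_faces(image_path: str, output_path: str, block: int = 12) -> int:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        # Downscale then upscale with nearest-neighbour to pixelate the face.
        small = cv2.resize(region, (max(1, w // block), max(1, h // block)))
        image[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST
        )

    cv2.imwrite(output_path, image)
    return len(faces)
```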
Preferably, the user interface may be utilized in any one or more of the following industries: (i) real estate industry; (ii) travel and hospitality industries; (iii) education; (iv) automotive industry; (v) e-commerce industry; (vi) construction industry; (vii) 3D graphics industry; (viii) warehouse and storage industry; (ix) disaster management and risk assessment industries; (x) traffic management including parking and city resources industries; or (xi) any industry or domain which can provide a plurality of panoramic images sourced over different times and/or dates which can be iteratively sorted and connected to form a virtual tour.
Preferably, the method may further comprise allowing a user to register to use the user interface. Allowing the user to register may comprise the steps of: (i) entering the user's details including any company or organization details into a user's detail file; (ii) choosing a payment and/or billing plan; (iii) allowing the user to invite other users from within their organization to use the interface and setting those other users' access rights or levels; and (iv) providing login details to the user and other users.
Preferably, the user once registered may log in and use the user interface, wherein logging in comprises the steps of: (i) receiving at the server a request from a requesting computer to log in the user; (ii) authenticating the user at the server, and upon authenticating the user, retrieving from a database stored on the server the user's detail file, wherein the user's detail file includes any user's preference for configuring the user interface including any user's branding and/or company branding; and (iii) sending to the requesting computer the user's detail file, wherein the preference file contains information to allow the requesting computer to implement and configure the user interface by directing output on the requesting computer to the user interface component that processes the output to provide the user interface to the user.
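Purely as an illustration of this exchange, a minimal login endpoint that returns the user's detail file on successful authentication could be sketched with Flask as follows; the route, field names and in-memory store are assumptions, and a real implementation would hash passwords and query the database on the server.

```python
# Illustrative sketch only: a hypothetical login endpoint returning the
# user's detail file (preferences and branding) to the requesting computer.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-in for the database of registered users.
USER_DETAIL_FILES = {
    "alice": {
        "password": "secret",                 # stored hashed in a real system
        "company_branding": {"logo": "logo.png", "colours": ["#004080", "#ffffff"]},
        "preferences": {"default_view": "floorplan"},
    }
}


@app.route("/login", methods=["POST"])
def login():
    credentials = request.get_json(force=True)
    user = USER_DETAIL_FILES.get(credentials.get("username"))
    if user is None or user["password"] != credentials.get("password"):
        return jsonify({"error": "authentication failed"}), 401
    # On success the requesting computer receives the user's detail file so it
    # can configure the user interface, including any branding.
    detail_file = {k: v for k, v in user.items() if k != "password"}
    return jsonify(detail_file)
```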
Preferably, the method may further comprise allowing the user or other users to create a project and to create a first iteration, and generating a virtual tour on a multi-pane web page in response to the user interface which is transmitted via the Internet and viewed by the user using a web browser.
Preferably, the method may further comprise the step of allowing the user or other users to create a further iteration and generating an iterative virtual tour on a multi-pane web page in response to the user interface which is transmitted via the Internet and viewed by the user using a web browser which shows changes over time of a space.
In accordance with a further aspect, the present invention provides a system for creating, managing and publishing an interactive virtual tour, the system comprising: a user interface; a client device having one or more processors, memory, and one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs comprising instructions for generating and displaying the user interface on a display; a web server for allowing the client device to access and store the interactive virtual tour, virtual tour data, information and commands and for generating web pages for display in response to commands from the user interface; and a communications network for connecting the client device, server and displaying the user interface.
In accordance with a still further aspect, the present invention provides a web based user interface comprising: a panorama data acquisition unit implementing means of capturing panoramic data and saving the panoramic data to a server for further processing; a package generator adapted to generate virtual tour packages containing the panoramic data, commands and virtual tour data; a viewing engine responsive to the panoramic data and virtual tour packages and implementing means for perspective correction, and user interaction with, said panoramic data and virtual tour data when necessary; a control engine adapted to facilitate interaction with the panoramic data and virtual tour data, wherein the control engine is connected operatively to and communicates bi-directionally with the viewing engine, renders representative information about all or parts of the virtual tour, permits a particular portion to be selected from the virtual tour and sends signals to the viewing engine that cause the viewing engine to permit the interactive navigation of the selected portion of the virtual tour, wherein the control engine also indicates or causes to be indicated what portion of the virtual tour is currently selected and what subpart of said selected portion of the virtual tour is currently rendered, wherein the control engine is responsive to user input and/or commands from the viewing engine and is in turn capable of modifying the representative information about all or parts of the virtual tour in response to the user input and/or said commands from the viewing engine and is capable of communicating information indicative of such externally induced modifications to the user and/or the viewing engine; and a display means for rendering output of the viewing engine, control engine, package generator and panoramic data acquisition unit.
In accordance with a still further aspect, the present invention provides a computer system for providing a user interface of an application program having multiple panes, each pane providing access to functionality of the application program to create a virtual tour of a space, the computer system comprising: a first component which displays a first pane of the user interface of the application program, the first pane of the first component allowing a first user to access a first function to: register and enter details of the first user and/or organization details of a project, the details of the first user and/or organization details being saved to a server; upload user-input specifying information relating to the project; and upload a plurality of panoramic images of the project to create a first iteration of the project and saving the first iteration to the server at a first time and/or date; a second component that replaces the display of the first pane of the user interface by displaying a second function which allows the user to link and align each one of the panoramic images in the first iteration by adding at least one hotspot and manually rotating images to align visual perspectives; and a third component, that upon receiving from the user a selection of a save iteration icon or button of the user interface of the application program, the third component displays the virtual tour of the first iteration showing the linked and aligned plurality of panoramic images and in a second pane displays a first iteration button or icon.
Preferably, the computer system may further comprise a fourth component that displays in the second pane of the user interface a fourth function which allows the user to: add a floorplan showing a map or site layout or anything with a spatial significance; and add at least one active region within the floorplan which is linked to at least one of the plurality of panoramic images.
Preferably, the computer system may further comprise a fifth component that replaces the display of the first pane of the user interface by displaying a fifth function which allows the user to add a further iteration at a different time and/or date to the first iteration by adding a further plurality of panoramic images of the project.
Preferably, the computer system may further comprise a sixth component to create a bridge between the first and further iteration which allows a user to: rotate and align each one of the further panoramic images from the further iteration with each one of the corresponding panoramic images from the first iteration, or optionally, align all panoramic images of each iteration with a feature on a map/floorplan; and calculate an updated heading for each one of the first panoramic images with respect to each one of the rotated further panoramic images and saving the updated heading to automatically align visible features in the panoramic images and automatically add a hotspot from the first iteration to the further iteration.
Preferably, the computer system may further comprise a seventh component that upon receiving from the user a selection of a save iteration icon or button of the user interface of the application program, the seventh component displays the virtual tour of the second iteration showing the linked and aligned plurality of panoramic images and in a second pane displays a second iteration button or icon.
Preferably, the computer system may further comprise repeating the steps of this aspect for each new further iteration at a different time and/or date to the previous further iteration.
Preferably, the computer system may further comprise generating a multi-pane web page in response to the user interface to display the virtual tour of the space, wherein the virtual tour is transmitted via the Internet and viewed by a plurality of users using a web browser. The user interface may be a web based or a cloud based user interface.
Preferably, the user-input information comprises any one or more of the following: (i) a location of the project; (ii) at least one contact for the project if different from the user; (iii) at least one event related to the project; (iv) any project branding; or (v) any information that adds context to the project, including but not only limited to audio, video and floorplans.
Preferably, each one of the plurality of panoramic images may be recorded with an original file creation time and date stamp and each image is saved on a server in time and date order starting from an earliest time and date to a latest time and date.
Preferably, each one of the plurality of panoramic images may be recorded with image metadata and the metadata is saved on the server, the metadata is used to automatically locate the image on a map, such as by
locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
Preferably, each one of the plurality of panoramic images may be recorded by an omnidirectional camera.
Preferably, the second component may further comprise rotating the panoramic images to orient the panoramic images with respect to a physical feature such as those on a map. Alternatively, the second component may further comprise rotating the panoramic images to orient the panoramic images with respect to the compass heading north.
Preferably, the second component may further comprise an image matching algorithm which automatically adds hotspots to the plurality of panoramic images by identifying similarities between the plurality of panoramic images and, based on the similarities, automatically adds at least one hotspot that connects each one of the plurality of panoramas to each other.
Preferably, the sixth component to create the bridge may further comprise using the image matching algorithm to automatically find visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of the further iteration and automatically aligning the panoramic images in the further iteration with the first iteration. Each one of the further iterations may be automatically compared with a previous iteration to find visual similarities between the plurality of panoramic images of each further iteration and the plurality of panoramic images of the previous iteration and automatically aligning the panoramic images in each further iteration with the previous iteration.
Preferably, the second component may further comprise an image matching algorithm which automatically creates the virtual tour of the first iteration using a plurality of 3D mesh structures of the plurality of panoramic images, the 3D mesh structures allowing the user to browse the plurality of
panoramic images and allowing the plurality of panoramic images to be located within a co-ordinate based system.
Preferably, the computer system may further comprise analysing changes in visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of each one of the further iterations using an artificial intelligence engine. The artificial intelligence engine may comprise any one or more of the following: (a) a machine learning algorithm; or (b) a pattern recognition algorithm; or (c) a machine vision algorithm.
Preferably, the computer system may further comprise an analysis tool that provides consistent, transparent, and efficient analysis of the project.
Preferably, the computer system may further comprise a further component to allow a user to pixelate or blur out faces or vehicle license or number plates as required to censor each one of the plurality of panoramic images. Alternatively, the computer system may further comprise an algorithm which allows for automatic recognition of items which require censorship such as faces or vehicle number plates in the plurality of panoramic images, wherein the automatic recognition algorithm automatically pixelates or blurs out faces or number plates within the images.
Preferably, the user interface may be utilized in any one or more of the following industries: (i) real estate industry; (ii) travel and hospitality industries; (iii) education; (iv) automotive industry; (v) e-commerce industry; (vi) construction industry; (vii) 3D graphics industry; (viii) warehouse and storage industry; (ix) disaster management and risk assessment industries; (x) traffic management including parking and city resources industries; or (xi) any industry or domain which can provide a plurality of panoramic images sourced over different times and/or dates which can be iteratively sorted and connected to form a virtual tour.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood more fully from the detailed description given hereinafter and from the accompanying drawings of the preferred embodiment of the present invention, which, however, should not be taken to be limitative to the invention, but are for explanation and understanding only.
Fig. 1 illustrates an overview of the computer system showing some of the components of the system in accordance with an aspect of the present invention;
Fig. 2 shows a flowchart illustrating the application architecture of the project using the user interface in accordance with an embodiment of the present invention;
Fig. 3 shows the flow diagram relating to project branding from A and B of the flowchart of Fig. 2;
Fig. 4 shows the flow diagram relating to the addition of floorplans from C of the flowchart of Fig. 2;
Fig. 5 illustrates the flow diagram relating to both iterations and the bridge from E and D respectively of the flowchart of Fig. 2;
Fig. 6 shows the flow diagram relating to the event information from F of the flowchart of Fig. 2;
Fig. 7 shows the flow diagram relating to the contact information from G of the flowchart of Fig. 2;
Fig. 8 shows the flow diagram relating to user information from H of the flowchart of Fig. 2;
Fig. 9 illustrates the flow diagram for the addition of hotspots from HSP1 to HSPX of the flowchart of Fig. 5;
Fig. 10 illustrates the flow diagram of the iterations section of the user interface in accordance with an embodiment of the present invention;
Fig. 11 shows the flow diagram for the addition of a hotspot in the iteration from B1 of the flowchart of Fig. 10;
Fig. 12 shows the flow diagram of the direction alignment from B1a of the flowchart of Fig. 11;
Fig. 13 shows the flow diagram of the add information hotspot from B1b of the flowchart of Fig. 11;
Fig. 14 illustrates the flow diagram of the set target and use auto buttons from B1d and B1e of the flowchart of Fig. 12;
Fig. 15 shows the flow diagram of the remove hotspot button from B2 of the flowchart of Fig. 10;
Fig. 16 shows the flow diagram of the gallery setting button from B3 of the flowchart of Fig. 10;
Fig. 17 illustrates the flow diagram of the disable panorama button from B4 of the flowchart of Fig. 10;
Fig. 18 shows the flow diagram of the add background audio/video button from B5 of the flowchart of Fig. 10;
Fig. 19 shows the flowchart for creating a bridge for the user interface in accordance with an embodiment of the present invention;
Fig. 20 shows the flow diagram for the connect button from CB2 of the flowchart of Fig. 19;
Fig. 21 shows the flowchart for creating or adding a floorplan to the project using the user interface in accordance with an embodiment of the present invention;
Fig. 22 shows the flow diagram for the work on this button from FA3 of the flowchart of Fig. 21;
Fig. 23 shows the flow diagram for the floorplan highlighter button from FA4 of the flowchart of Fig. 21;
Fig. 24 shows the flow diagram for the delete floorplan button from FA5 of the flowchart of Fig. 21;
Fig. 25 illustrates a screenshot of the user interface as will be utilised in the following case studies;
Fig. 26 shows the flowchart for the retrieval of the required project from the server;
Fig. 27 shows the application architecture for the project PR12;
Fig. 28 shows the flowchart of case study 1 for project PR12;
Fig. 29 shows the flowchart of case study 2 for project PR12;
Fig. 30 illustrates the flowchart of the analytics program in accordance with an embodiment of the present invention;
Fig. 31 shows a screenshot of the user interface in accordance with an embodiment of the present invention; and
Fig. 32 shows a flowchart of a process for capturing and visualising three-dimensional scenes in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The following description, given by way of example only, is provided in order to give a more precise understanding of the subject matter of a preferred embodiment or embodiments.
The virtual tour or 360 degree virtual tour has become a frequently used technique for providing viewers with information about three-dimensional spaces of interest. Virtual tours have been found to be useful in, but are not limited to, such industries as real estate, travel and hospitality, universities, automotive, e-commerce, education and construction. The virtual tour opens your location or services to the online public, allowing you to showcase key features of your location, product or service.
The present invention provides a user interface which allows a user to create a 360 degree virtual tour and visual iterations of a space with the ability to maintain perspectives as a user jumps between multiple locations and multiple iterations of that space. A web based user interface allows the user to create a first iteration of a space by adding hotspots which allow motion in space and aligning corresponding images linked in time within the first iteration to create a virtual tour of the space. Once the first iteration is completed and saved the user interface can be utilised to add further items related to the project such as floorplans, information hotspots, event times and dates such as open house events, and user contact details.
The user interface platform allows the user the ability to create 360 degree iterations of the space. Iterations are a way of showing the same space over and over again to show visual change in the space. When the user
uploads the second iteration, they interact with the bridge. The bridge through the user interface allows a user to visually line up the panoramas from the second iteration with the first iteration to simply provide a seamless transition through space of a 360 degree virtual tour with the ability to maintain perspectives as a user jumps between multiple locations and multiple iterations of that space.
The user interface and the bridge align the panoramas in each iteration and ensure that the hotspots from the first iteration automatically appear in the next iteration. For example, if a viewer is viewing room 1 in iteration 2, when they click on room 2 in the floorplan, they land in room 2 from iteration 2. If they were in any other iteration of the same space, they will land in room 2 from that iteration. Furthermore, as the user moves from one iteration to another the same visual perspective is maintained. For example, if a viewer is in room 2 in iteration 1 and then they click on iteration 2, they still land in room 2 in iteration 2, looking in the same direction as they were when they were in iteration 1.
The user interface ensures that any interactive elements added in the first iteration are accessible in each and every further iteration of the space. The ability to easily align panoramas visually in every iteration makes the process of creating iterations fast and accurate when replicating changes over time in a project. One of the advantages of the present invention is the ability to maintain the visual perspective between multiple panoramic images of a space even when the heading or north offset of panoramas has no physical reference point, like a map, or GPS coordinates.
While the above described user interface is mainly defined as a manual process, and in particular the alignment of images in an iteration and between iterations relies on the user to visually align panoramic images, the present invention also provides the ability to automatically align panoramic images. The present invention extends to the use of image recognition algorithms to automatically identify and infer the similarities and differences in spaces. This includes the ability to automatically generate hotspots and to enable the automatic alignment of images in iterations by using only two-dimensional imagery.
As described previously, a panoramic image is a view synthesis technique that provides 360-degree view visualisation for a scene. In its most common form, the two-dimensional (2D) panorama, visualisation is restricted with respect to a fixed view centre. While it is sufficient for most visualisation purposes, it does not provide functionalities such as three-dimensional (3D) navigation and measurement, which are useful for virtual tours.
The image recognition algorithms of the present invention can also be used to reconstruct an image-based 3D panorama to automatically create a virtual tour of the first iteration and subsequent iterations by creating a plurality of 3D mesh structures using the plurality of panoramic images. The 3D mesh structures allow the user to browse the plurality of panoramic images and allow the plurality of panoramic images to be located within a co-ordinate based system.
The present invention integrates automated image recognition algorithms with reliable and precise photogrammetric methods such as a structure from motion (SfM) range imaging technique for automated 3D reconstructions from large image datasets. The present invention uses SfM to recover a 3D structure of a panoramic scene from a set of projective measurements, represented as a collection of 2D images, via estimation of motion of the cameras corresponding to these images. The automated image recognition algorithms are used to extract the features in images (e.g., points of interest, lines, etc.) and to match these features between the plurality of 2D panoramic images. Then, using the extracted features from the plurality of 2D panoramic images, the camera motion is estimated by using relative pairwise camera positions from the extracted features. Finally, the 3D structure is recovered using the estimated motion and features by minimising the reprojection error to produce the 3D mesh structures.
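The two-view core of such an SfM pipeline can be sketched with OpenCV as follows, by way of example only: features are extracted and matched, the relative camera motion is recovered from the essential matrix, and inlier correspondences are triangulated. The sketch assumes a pinhole intrinsic matrix K, whereas full panoramic imagery would in practice require a spherical camera model, and bundle adjustment over many views is not shown.

```python
# Illustrative sketch: two-view structure from motion with OpenCV.
import cv2
import numpy as np


def two_view_reconstruction(img1_path, img2_path, K):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative pairwise camera pose estimated from the matched features.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences to recover sparse 3D structure.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    points4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (points4d[:3] / points4d[3]).T, R, t


# Example intrinsics for a 1920x1080 pinhole image (values are assumptions):
# K = np.array([[1000.0, 0, 960.0], [0, 1000.0, 540.0], [0, 0, 1.0]])
```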
The 3D mesh structures are basically resolved from a series of overlapping, offset images. By using the geometry of the scene, camera positions and orientations are solved automatically without the need to specify a network of targets which have known 3D positions. Instead, these are solved simultaneously using a highly redundant, iterative bundle adjustment procedure, based on a database of features automatically extracted by the image recognition algorithms from a set of multiple overlapping 2D panoramic images. This approach has been found to be particularly suited to sets of images with a high degree of overlap that capture full 3D structure of the scene viewed from a wide array of positions, or 2D panoramic images derived from a moving sensor.
In order to determine the 3D location of points within a scene, the present invention reconstructs the camera pose and scene geometry simultaneously through the automatic identification of matching features in multiple images using the image matching algorithm. These features are tracked from image to image, enabling initial estimates of camera positions and object coordinates which are then refined iteratively using nonlinear least-squares minimisation.
The camera positions derived above lack the scale and orientation provided by ground-control coordinates. Consequently, the 3D point clouds are generated in a relative image-space coordinate system, which must be aligned to a real-world, object-space co-ordinate system. The present invention achieves the transformation of image-space coordinates to an absolute coordinate system using a 3D similarity transform based on a small number of known ground-control points (GCPs) with known object-space coordinates. Such GCPs can be derived post-hoc, identifying candidate features clearly visible in both the resulting point cloud and in the field, and obtaining their coordinates by ground survey such as by GPS.
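By way of illustration, the 3D similarity transform from the relative image-space system to the absolute co-ordinate system can be estimated in closed form from a handful of GCPs, for example with the standard Umeyama solution sketched below; the variable names and example co-ordinates are assumptions.

```python
# Illustrative sketch: closed-form 3D similarity transform (scale, rotation,
# translation) mapping image-space GCPs onto object-space GCPs.
import numpy as np


def similarity_transform(src, dst):
    """Return (scale, R, t) such that dst ~ scale * R @ src + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)

    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Cross-covariance and its SVD give the rotation.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])                  # guard against reflections
    R = U @ D @ Vt

    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t


# Usage with (at least) three ground-control points known in both systems:
image_space = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
world_space = [[10, 20, 5], [12, 20, 5], [10, 22, 5], [10, 20, 7]]
s, R, t = similarity_transform(image_space, world_space)
```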
Once transformed to the coordinate system and georeferenced, the 3D mesh structures are generated and rendered for display. Typically the 3D mesh structures are generated using a polygonal or polyhedral mesh that approximates a geometric domain. Preferably, the 3D mesh structures are created using tetrahedra, pyramids, prisms or hexahedra. Alternatively, when used for a finite volume method, structures can be generated using arbitrary polyhedra. Those used for finite difference methods usually need to consist of piecewise structured arrays of hexahedra known as multi-block structured meshes.
The present invention also extends to both a web-based and cloud-based user interface. In particular, a web-based and cloud-based panoramic creation tool that provides a user of the user interface with the ability to customize the panoramic viewer's interface, including branding and colour schemes, to suit that particular user.
In order to simplify the description process, the following description will refer mainly to a user interface for generating a virtual tour of a real estate property. However, it should be understood that the present invention is not only limited to this use and could be used in and for a number of different industries and applications, in both real (existing) and 3D generated (non-existing) environments.
In accordance with a first embodiment, the present invention provides a computer-implemented method providing a user interface for a virtual tour application program specifying details and uploading user information of a project to a server. The user can create a first iteration of the project at a first 25 time and/or date. The first step in the process is for the user to input through the user interface any user-input specifying information regarding or relating to the project. The user-input can include but is not only limited to the location of the project, one or more contacts for the project if different from the user, one or more events related to the project; or any project branding. The user can then 30 interact with the user interface to upload a plurality of panoramic images of the project and adding the plurality of panoramic images to the first iteration. The first iteration is then saved to the server. The first iteration can then be displayed through the user interface by generating a multi-pane web page and/or generating an application programming interface return call in response
to the user interface which is transmitted via the Internet and viewed by the user using a web browser.
Fig. 1 illustrates a graphical overview of the system 10. A user is able to log in to the user interface 30 through any suitable web browser 40 using a computer 50, mobile phone 52 or tablet 51 connected to the cloud 11. Also connected in the cloud are the server 20 and storage 21. The server 20 mainly comprises storage of the application program and the processors that manipulate the algorithms of the user interface 30. The majority of the services and storage of the plurality of panoramic images is performed by the cloud 11.
The cloud 11 is basically a network of servers 20, and each server has a different function. Some servers 20 use computing power to run applications or deliver a service and others simply provide the ability to store and access data.
In order to capture the plurality of panoramic images to generate the virtual tour, a camera 13 is shown being used to take images of the house 12. The camera 13 is an omnidirectional camera capable of producing a 360 degree field of view in the horizontal plane, or with a visual field that covers (approximately) the entire sphere. Omnidirectional cameras are important in areas where large visual field coverage is needed, such as in panoramic photography. By way of example only, the Ricoh Theta is a spherical digital camera for capturing full spherical photos. Likewise, the Nikon KeyMission 360 and Samsung Gear VR are also spherical digital cameras. The omnidirectional camera 13 allows the user to photograph an entire scene or the floor, ceilings, and all four walls of a room 12 with the touch of a button. The camera 13 produces a 360 degree representation of the surrounding area or space, allowing users to pan and zoom around a scene as if they are standing in the space. Users can also view the photographs on iOS, Android, PC, and Mac using any associated camera applications.
The present invention also allows for any image to be captured using a digital camera and the images can then be processed using a software image stitching program to produce any panoramic image with a field of view of up to and including 360 by 180 degrees.
The panoramic image can also be digitally produced using any two-dimensional (2D) or three-dimensional (3D) software application, for example any computer program used for developing a mathematical representation of any two or three dimensional surface of objects, such as Autodesk 3DS Max, Maya computer animation and modelling software and Google SketchUp.
The user interface 30 as shown in Fig. 1 consists of a number of panes in a multi-pane display. The main window shows the representation of the virtual tour of a space, in this case a view of a real estate property. The user application 30 allows the user to move through the space as though they were actually standing in the space, being able to view a room as it would be seen in real time. The side pane or window shows further information about the property, including details of the property, sales contacts and an interactive floorplan.
Fig. 2 shows an overview of the application architecture showing the major components of the user interface 30. The first step in the process is allowing a user to register to use the user interface 30. The user 65 must first enter their details, including but not limited to their name, address, phone number, email and any company or organisation details 60, into a user's detail file. Any number of users 65 can be registered to use the user interface. This can also include any company branding 80. For example, in the real estate industry each particular company has a standard layout, logo and colour scheme on their company website; these colours can be carried over to the user interface to further customise the user interface 30 in their unique colour scheme. As illustrated in Fig. 2, the branding off page reference (A) 81 leads to Fig. 3, which shows a simplified version of the flow diagram for branding 80. In respect of the organisation's or user's branding, this can include a specific hotspot fill colour 84, a floorplan highlighter colour 85, a company logo 86, the choice of one or two colours 87, 88 representative of the user's or organisation's colour scheme, and if required background music 89. All of these items are used to customise the user interface 30 for the particular user 65 or company
60.
The next step in the registration process includes the user 65 choosing a payment and/or billing plan. This is simply how the user 65 wishes to pay to use the user interface 30. As described above, the user 65 can now invite other users 65 from within their organisation 60 to use the interface 30. This can include the setting of those other users' 65 access rights or levels. Each user 65 can be given either complete access or limited access to functions of the user interface 30 or access to the application in its entirety. This helps to regulate the number of users and control what they can manipulate or use when it comes to the user interface 30. Finally, each user 65 is required to enter their login details, for example a username and password which is unique to each individual user 65.
Once the registration process is completed the user 65 can log in and use the user interface 30. When logging in, the following steps are performed. The server 20 receives a request from a requesting computer 50 to log in the user 65. The server 20 then authenticates the user 65 at the server 20 and, upon authenticating the user 65, retrieves from a database 21 stored on the server 20 the user's detail file. The user's detail file includes any user's preference for configuring the user interface 30, including any user's branding and/or company branding 80. The server 20 then sends the user's detail file to the requesting computer 50. The preference file contains information to allow the requesting computer 50 to implement and configure the user interface 30 by directing output on the requesting computer 50 to the user interface component that processes the output to provide the user interface 30 to the user 65.
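By way of illustration only, the following Python sketch shows how such a login step could return the stored detail file on successful authentication. A dictionary stands in for the database 21, and the names login(), UserDetailFile and the stored fields are illustrative assumptions rather than the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserDetailFile:
    username: str
    branding: dict = field(default_factory=dict)      # hotspot colours, logo, etc.
    preferences: dict = field(default_factory=dict)   # other UI configuration

# Stand-in for the database 21; the hash value is a placeholder only.
USER_DB = {
    "agent01": {
        "password_hash": "hashed-password",
        "detail_file": UserDetailFile(
            username="agent01",
            branding={"hotspot_fill": "#ff6600", "logo": "logo.png"}),
    },
}

def login(username: str, password_hash: str):
    """Authenticate the user and, on success, return the stored detail
    file so the requesting computer can configure the user interface."""
    record = USER_DB.get(username)
    if record is None or record["password_hash"] != password_hash:
        return None                   # authentication failed
    return record["detail_file"]      # sent back to the requesting computer
```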
As illustrated in Fig. 2, user 3 is now able to create a first iteration of a project 70; in this example user 3 is about to begin with project (2) 71. Any user 65 can create any number of projects 71 once they have registered. Under the projects tab 71, and in particular project (2), are a number of options available to the user 65. The project information is entered and saved on the server 20. The project information can include but is not limited to the location of the project including the address and/or map co-ordinates, contact details for the project if different from the user, any planned events related to the project, any
project branding and any information that adds context to the project, including but not limited to audio, video and floorplans.
Other options include project branding 82, floorplans 90, bridges 110, iterations 120, events 140, contacts 150 and users 160. Each will be explained in the typical order in which a project and virtual tour is created from start to finish and eventually published on a web page of a web browser. Each of the options above leads to an off page reference which will be described in further detail below.
As described previously with regard to adding branding 80 to the user interface 30, it is also possible to add branding 82 to each individual project 71. The branding 82 and off page reference (B) 83 is directed to Fig. 3, which has previously been described above in relation to branding 80 and will not be described again.
The user 65 is now able to create the first iteration 120 of the project 71. The iteration process 120 will be described with reference to Figs 5 and 9 to 18. The overview of the process is shown in Fig 5, in which off page reference (E) 121 shows the flow diagram of the creation of each iteration 122, 123, 124, 125.
In this instance we are looking at the process of creating the very first iteration 122. As described above, the plurality of panoramic images 130 are recorded using the camera 13 or could be produced using any 2D or 3D software application. With each image file the image metadata is saved, as well as the original file creation time and date stamp. The user 65 creates the first iteration 122 at a first time and/or date by using the user interface 30 to upload the plurality of panoramic images 130 relating to the first iteration 122 of the project (2) 71. Each one of the plurality of panoramic images 130 is saved on the server 20 in time and date order, starting from the earliest time and date of original creation, labelled Panorama 1, to the latest time and date of original creation, labelled Panorama X.
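By way of illustration only, the following Python sketch shows how uploaded images could be ordered and labelled Panorama 1 to Panorama X from their saved creation timestamps. The metadata field names and the EXIF-style timestamp format are assumptions, not part of the claimed method.

```python
from datetime import datetime

def order_panoramas(uploads):
    """uploads: list of dicts with the saved metadata, e.g.
    {"file": "room1.jpg", "created": "2018:06:02 10:15:00"}.
    Returns the panoramas labelled Panorama 1..X in creation order."""
    def created(entry):
        # EXIF DateTimeOriginal style timestamp assumed.
        return datetime.strptime(entry["created"], "%Y:%m:%d %H:%M:%S")
    ordered = sorted(uploads, key=created)
    return {f"Panorama {i + 1}": entry["file"]
            for i, entry in enumerate(ordered)}
```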
Referring now to Fig. 10, which shows a more detailed flow diagram of the iteration section 120 of the project 71. As described above, the first iteration,
or in this case a new iteration 200, is created and the user 65 can add details
201 about the current iteration prior to uploading the panoramic images 202 to the iteration and saving the panoramas in list L1. Alternatively, if the user 65 has already created an existing iteration 203, the user 65 may choose a saved iteration 203 to begin the step of working 204 on that iteration. With either option the user 65 now begins the process of working 205 on the first or saved iteration. The first step 206 involves the user 65 using the user interface 30 to select from list L1 a first panoramic image, which is labelled P1. The first panoramic image P1 is then rendered and displayed in window pane W1 of the multi-pane web page as the active panoramic image P1. From here a number of processes are available to the user 65 to create the virtual tour, and these processes are typically displayed to the user 65 as buttons on the user interface for the user 65 to select the appropriate process required.
In creating the first iteration, the user 65 manually adds at least one navigation hotspot to the first panoramic image to assist in an alignment of the images to produce the virtual tour. The first step in this process is selecting the add hotspot button 210. This leads to off page reference (B1) 211 and Fig. 11, which describes the flow diagram steps of the add hotspot button 210. In the first step 212 the user can rotate the first panoramic image P1 to a position or location for the first navigation hotspot. Alternatively, step 212 can also include rotating the first panoramic image P1 to orient the first panoramic image with respect to a physical feature such as those on a map. Further alternatively, step 212 can include rotating P1 to orient the first panoramic image with respect to the compass heading north.
This is followed by step 213, in which the user 65 double clicks on P1 to locate the hotspot on P1. At the next step 214, the front-end software used to generate the user interface 30 saves the co-ordinates of the navigation hotspot, or the location which has been double clicked by the user 65, in variable V1.
Typically, user interface variables are auxiliary variables that are used to control specific aspects of the user interface behaviour. The viewport or window pane W1 then zooms into the location of the co-ordinates of the hotspot at step 215.
The next stage of the process in generating the first iteration is selecting a further panoramic image from the panoramic images in this iteration to link to the first panoramic image. At step 216 all of the panoramic images from the first iteration are retrieved from the server 20. The front-end software at step 217 retrieves the time and date stamp of the original file creation for each one of the panoramic images 130 in the first iteration and calculates at step 218 the time differences between the original creation times for each panoramic image with respect to the active panoramic image P1. The viewport at step 219 creates and displays a hidden list L2 comprising each one of the panoramic images 130 of iteration 1 with the calculated time difference displayed beneath each one of the plurality of panoramic images 130. This is followed by removing the active panoramic image P1 from the list L2 at step 220 and further removing any panoramic images already connected to P1 from the list L2 at step 221. Given that this is the first creation of the first iteration, there should be no connected panoramic images in list L2. The viewport at step 222 then displays the list L2 as an overlay window on the active panoramic image P1 in the main window W1.
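By way of illustration only, the following Python sketch shows how list L2 could be assembled: every panorama of the iteration except the active image and any image already connected to it, each paired with its time difference from the active image. The dictionary keys ("id", "created", "connected") are assumed field names, not the actual data model.

```python
def build_candidate_list(active, panoramas):
    """Return list L2 for the active panoramic image P1.
    active and each entry of panoramas are dicts with an "id", a
    datetime "created" timestamp and an optional "connected" list."""
    l2 = []
    for p in panoramas:
        if p["id"] == active["id"] or p["id"] in active.get("connected", []):
            continue                                      # steps 220 and 221
        delta = abs(p["created"] - active["created"]).total_seconds()
        l2.append({"id": p["id"], "file": p["file"], "delta_s": delta})
    l2.sort(key=lambda item: item["delta_s"])             # shortest difference first
    return l2
```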
The off page reference (B1a) 230 takes us to Fig. 12, in which the user 65 selects at step 231 the next panoramic image P2 to link to P1. Typically the selection of P2 is based at least in part on the calculated shortest time difference from P1. This is also an indication of the direction in which the original photographer or the camera 13 is moving through the space which is being photographed for the virtual tour. At the next step 232 the viewport renders and displays P2 in an overlapping window W2 over the active panoramic image P1. At step 233 the user 65 then aligns a visible portion of P1 by rotating P2 in the overlapping window W2. This step allows the user 65 to align any visible perspective which is located in both images P1 and P2 to link the second panoramic image P2 to the active first panoramic image P1.
The user 65 now has three options: two of these options allow the linking of the two images by adding the navigational hotspots, and the third option allows the user to return to the overlay list L2. The viewport displays three buttons which allow the user to select the set target button 234, the use auto button 235 or
the go back button 236. The go back button 236 allows the user 65 to return to the overlay list L2 as displayed at step 222 of Fig. 11. This is illustrated by off page reference (B1c) 237.
If the user 65 chooses to add a navigational hotspot by selecting the set target button 234, the off page reference (B1d) 250 takes us to Fig. 14. The set target button 234 allows the user 65 to specifically set the pitch and yaw of the second panoramic image P2 with respect to the alignment between P2 and P1 as set in step 233 of Fig. 12. At steps 259 and 260 respectively, the front-end software retrieves the pitch PI2 and yaw Y2 of panoramic image P2. The server 20 at step 261 then saves the hotspot with the specific target yaw Y2 and, if PI2 is set, saves the specific target pitch as PI2. The hotspot is therefore added by setting the target pitch and yaw of the panoramic image P2.
If the user 65 chooses to add a navigational hotspot by selecting the Use
Auto button 235, the off page reference (B1e) 251 takes us to Fig. 14. Firstly, at step 252 the server 20 checks to see if the heading value HU for P2 has been updated. If HU=1 then the heading of P2 has previously been updated; this could only be the case if the iteration has previously been worked on. Given that this is the first time the first iteration is being created, the HU flag will be HU=0. Therefore the next step would be step 253 and the server 20 would save the heading of P2 as H2 and set the HU flag to HU=1. The server at step 256 would then save the hotspot without a specific target yaw in order to inherit the yaw on panoramic image changes. Finally, the server 20 would check at step 257 if a hotspot exists within P2 which is connected to P1; if no hotspot connected to P1 exists in P2, at step 258 the server would save a mirror hotspot H2 with the same heading as H1 but adding 180 degrees to the location yaw of H1.
In the case where at step 252 the HU flag is HU=1, the next step 254 in the use auto button 235 process is that the front-end software checks if the updated heading for P2 which is saved on the server 20 is within ±10% of the current P2 heading which is shown in the overlapping window W2. If the check is correct and the heading for P2 is within ±10% of the current P2 heading, then at the next
step 255 the server will not change the heading of P2 and sets HU=1. As above, the server at step 256 would then save the hotspot without a specific target yaw in order to inherit the yaw on panoramic image changes. Finally, the server 20 would check at step 257 if a hotspot exists within P2 which is connected to P1; if no hotspot connected to P1 exists in P2, at step 258 the server would automatically save a mirror hotspot H2 with the same heading as H1 but adding 180 degrees to the location yaw of H1.
If the check is not correct and the heading for P2 is not within ±10% of the current P2 heading, then the next step is step 260. At step 260 the front-end software retrieves the yaw Y2 of panoramic image P2. The server 20 at step 261 then saves the hotspot with the specific target yaw Y2 and, if PI2 is set, saves the specific target pitch as PI2. The hotspot is therefore added by setting the target pitch and yaw of the panoramic image P2.
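By way of illustration only, the following Python sketch summarises the "use auto" branch described above (steps 252 to 258): update the heading of P2 only when it has not been set before or has drifted outside the ±10% tolerance, save the hotspot, and add a mirror hotspot back to P1 where needed. The dictionary fields and return structure are assumptions for the purpose of the sketch.

```python
def auto_set_hotspot(p1, p2, tolerance=0.10):
    """Sketch of the use auto branch; p1 and p2 are dicts holding the
    saved heading, the heading currently shown in window W2, the HU
    flag and the list of already connected panoramas."""
    hotspots = []
    if not p2["heading_updated"]:                       # HU == 0 (step 253)
        p2["heading"] = p2["current_heading"]           # save H2
        p2["heading_updated"] = True                    # HU = 1
        hotspots.append({"source": p1["id"], "target": p2["id"],
                         "target_yaw": None})           # inherit yaw (step 256)
    else:                                               # HU == 1 (steps 254-255)
        saved, shown = p2["heading"], p2["current_heading"]
        if abs(saved - shown) <= tolerance * abs(shown):
            hotspots.append({"source": p1["id"], "target": p2["id"],
                             "target_yaw": None})       # heading unchanged
        else:                                           # fall back to set target
            hotspots.append({"source": p1["id"], "target": p2["id"],
                             "target_yaw": p2["yaw"],
                             "target_pitch": p2.get("pitch")})
    # Mirror hotspot so the user can also travel back from P2 to P1 (step 258).
    if p1["id"] not in p2.get("connected", []):
        hotspots.append({"source": p2["id"], "target": p1["id"],
                         "yaw": (p1["hotspot_yaw"] + 180.0) % 360.0})
    return hotspots
```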
The addition of navigation hotspots for the alignment of P1 and P2 in the above steps is an alignment of direction which ensures that, as the user 65 progresses through the virtual tour, they are looking in the right direction and the visual perspective is maintained between P1 and P2 and each further panoramic image. For each additional panoramic image 130, from P3 to PX, the process of alignment is carried out until all of the plurality of panoramic images are aligned to form the virtual tour.
The viewport at step 222, which displays the list L2 as an overlay window on the active panoramic image P1 in the main window W1, also provides the ability to add at least one information hotspot, as shown by off page reference (B1b) 240 which leads us to Fig. 13. An information hotspot can include any one or more of the following: a simple text hotspot which shows an extra block of text to point out details about any aspect of the environment or space; a still photo hotspot which displays any supporting image from the supplied photos (with or without text); a video hotspot which can seamlessly incorporate existing videos from YouTube or Vimeo channels; or a photo gallery hotspot which can incorporate live data feeds through any image hosting and video hosting website and web services suite. All of the above can also include a uniform
resource locator (URL) link to a web resource that specifies its location on a computer network and a mechanism for retrieving it, or a hyperlink or link which is a reference to data that the reader can directly follow either by clicking, tapping, or hovering over the link.
Fig. 13 shows at step 241 that a user 65 can add any one or more of the above information hotspots. For example, at step 242 the user 65 can add a title, description, image, video or hyperlink, and also choose a custom information hotspot icon to be displayed on the user interface 30. At step 243 the viewport renders the at least one information hotspot on P1 and at step 244 the information hotspot details are saved on the server 20.
Returning now to Fig. 10 and describing the remaining buttons available to the user 65 of the user interface 30. With the selected panoramic image P1 displayed in the main window W1, the following button options are displayed to the user 65.
The remove hotspot button 270 and off page reference (B2) 271 take us to Fig. 15. Fig. 15 describes the process should a user 65 decide to delete a navigation hotspot. With reference to the first active panoramic image P1, the front-end software will find all hotspots in P1 at step 272. The viewport at step 273 will then display all hotspots found in step 272 with a delete button. The user 65 at step 274 then simply clicks the hotspot they wish to delete and at step 275 the server 20 will locate the hotspot by its unique identification and delete that hotspot. Finally, at step 276 the viewport will exit the delete or remove hotspot mode and change all of the hotspots remaining in P1 to the default icon.
The set default view button 280 simply allows the user 65 to set the current active panoramic image (in this case P1) and save to the server at step 281 the current yaw, pitch and horizontal field of view (HFOV) of P1 as seen in the main window W1.
The gallery setting button 290 and off page reference (B3) 291 take us to Fig. 16. Fig. 16 describes the process by which a panoramic image is displayed as a thumbnail image in another pane of the user interface 30, which allows the user 65 to work on that particular image in the gallery of the first iteration. At step 292 the user 65 selects the show in gallery button and the images of the current iteration are all displayed as thumbnails in the gallery. If yes is selected, the user then proceeds to step 294 where the user 65 can add a title to the selected image in the gallery, and at step 295 the server saves all set values for that particular image in the first iteration. If no is selected, the user 65 at step 293 does nothing.
The disable panorama button 300 and off page reference (B4) 301 take us to Fig. 17. The disable panorama button 300 allows the user 65, in the first or current iteration, to delete all panoramas that have navigation hotspots pointing to the current active panoramic image (P1). Fig. 17 describes the process and begins at step 302 with the viewport displaying a prompt for confirmation showing that all incoming navigation hotspots will be deleted. The user 65 must then decide at step 303 if they are to be deleted or not. If the user 65 selects no, then at step 304 nothing happens. If the user 65 selects yes, then at step 305 the server searches for all panoramic images in the first iteration that have a navigation hotspot pointing to the current active panoramic image (P1). At step 306 the server 20 then deletes all records found in step 305.
The last button, the add background audio/video button 310, simply allows the user 65 to add background audio or video to the iteration. The off page reference (B5) 311 takes us to Fig. 18. Step 312 allows the user 65 to decide if they wish to add any background audio or video. If a video is to be added, at step 313 a link to the hosted video is added. Likewise, if the user 65 wishes to add audio then an audio file is uploaded at step 314. The background audio/video is saved to the server. The video and audio file formats are not restricted to any particular format provided they are capable of being displayed by the user interface in the virtual tour.
The first iteration of panoramic images can now be saved along with all current headings, navigation hotspots and information hotspots for each image of the plurality of panoramic images 130. The virtual tour of the first iteration 122 can now be rendered in the main window W1 of the user interface 30. With the first iteration 122 completed, the user 65 can now add such items as a floorplan 90 or further iterations 123, 124, 125 at a different time and/or date from the first iteration 122. Both floorplans 90 and the further iterations 123, 124, 125 and their interaction with the bridges 110 will be described in detail below.
The manual process of aligning the first, second and any further panoramic image 130 has been described above. However, as an alternative, the present invention also extends to a process which allows for the automatic image matching and addition of navigation hotspots based on similarities identified by the image matching algorithm. While the automatic alignment image matching has not been illustrated, it will be described below with reference to some of the items or features described above.
The image matching algorithm can automatically add hotspots to the plurality of panoramic images 130 to assist in the alignment of the images to produce the virtual tour. This is achieved by the image matching algorithm identifying similarities between the plurality of panoramic images 130 and, based on the similarities, automatically adding at least one navigation hotspot to each image that connects each image of the plurality of panoramic images 130 to form the virtual tour of a space.
The algorithm simply replaces the steps described above in relation to Figs. 11, 12 and 14 with the algorithm described below. The steps for creating the iteration 120 up to step 205 of working on the iteration remain unchanged.
The image matching algorithm first selects from the saved original file creation time and date a first panoramic image P1 based on the earliest time and date of the plurality of panoramic images 130. The first panoramic image P1 has a first heading consisting of the yaw, pitch and HFOV. The algorithm will then scan the first panoramic image P1 for any features by splitting the first panoramic
image into discrete portions and analysing each discrete portion within the first image. Each discrete portion with an identified feature will have a size, form and shape which varies dependent upon the size of the image and the identified feature. By way of example only, a typical size of the discrete portions is approximately a 10 pixel by 10 pixel square. Alternatively, the user 65 may define or set the size of the discrete portions.
The server then saves each discrete portion with features as an object with a defined size. The algorithm will then start looking for the identified features of P1 in P2. P2 is the next closest earliest creation time and/or date from P1. It is therefore assumed that P2 and P1, being shot adjacent to one another, will contain a lot of matching feature elements or objects. The discrete portions of P2 are saved as objects on the server with a defined size. The algorithm compares the size of each returned matching object from P1 and P2. The elements with the highest difference in size signify a direction of motion of the photographer or camera 13. The algorithm uses the differences in size of the objects identified in the compared images P1 and P2 to determine the direction of motion within P1 and P2. If the algorithm does not find any matches between P2 and P1, it assumes that the shots are taken in a scattered manner and will then start comparing each image against every other image to determine matches.
The direction of motion shows the direction in which a photographer or the omnidirectional camera 13 is moving from the first panoramic image through to the last panoramic image of the plurality of panoramic images 130 in the first iteration.
The algorithm then updates the heading of P1 in accordance with the direction of motion and at least one navigation hotspot is added in the direction of motion to P1. The above steps are then repeated between P1 and P3, the next panoramic image 130 in the iteration based on the time difference from the original creation time and/or date of P1. Distant navigation hotspots are then added to P3 to connect P1, P2 and P3. Alternatively, a mirror navigation hotspot can be added in P2 and P3 that connects each one to P1. The above
steps are performed for each one of the plurality of panoramic images 130 until all of the panoramic images have been linked and navigation hotspots have been added. The server will then save the first iteration 122.
The image matching algorithm also takes steps to avoid being confused by similarities in structures. For example, in an image of a structure which consists of a room with a number of identical arches joined in sequence across the structure, all of the same shape and colour, the arches, though analysed individually in 10 pixel by 10 pixel segments, will be understood as different arches by the image matching algorithm. However, due to the arches being identical, some sections of each arch will be recognised as the same. In order to prevent the algorithm from becoming confused, the algorithm while scanning the image will compare every 10 pixel by 10 pixel segment with every other segment of the image and remove all matching segments from the stored set of objects saved on the server.
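By way of illustration only, the following Python sketch shows one way an image could be split into 10 pixel by 10 pixel segments with repeated segments discarded, so that only unique segments are kept for the later cross-image comparison. It assumes numpy and a greyscale image array; exact byte-for-byte matching is used here purely for simplicity and is an assumption rather than the claimed matching criterion.

```python
import numpy as np

PATCH = 10  # default segment size in pixels; the user may override this

def unique_patches(image):
    """Split a greyscale panorama (2-D numpy array) into PATCH x PATCH
    segments and discard any segment that occurs more than once in the
    same image, so repeating structures (e.g. identical arches) cannot
    confuse the later comparison between P1 and P2."""
    h, w = image.shape
    seen, counts = {}, {}
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = image[y:y + PATCH, x:x + PATCH]
            key = patch.tobytes()            # exact-match key for the segment
            counts[key] = counts.get(key, 0) + 1
            seen.setdefault(key, (x, y, patch))
    # Keep only segments that appear exactly once in this image.
    return [v for k, v in seen.items() if counts[k] == 1]
```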
The image recognition or matching algorithms of the present invention can also be used to reconstruct an image-based 3D panorama to automatically create a virtual tour of the first iteration and subsequent iterations by creating a plurality of 3D mesh structures using the plurality of panoramic images. Fig. 32 illustrates the process and system for capturing and visualising three-dimensional scenes and is described below.
The process starts at 800 by the system receiving the plurality of panoramic images 130 of a 3D scene at 805. At step 810 the automated image recognition algorithm is executed and produces at 811 the identification of features in individual images which may be used for image correspondence. This identifies features in each image that are invariant to image scaling and rotation and partially invariant to changes in illumination conditions and 3D camera viewpoint. Points of interest, or 'keypoints', are automatically identified over all scales and locations in each image, followed by the creation of a feature descriptor, computed by transforming local image gradients into a representation that is largely insensitive to variations in illumination and
orientation. These descriptors are unique enough to allow features to be matched in large datasets.
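By way of illustration only, the following Python sketch shows how such scale- and rotation-invariant keypoints could be detected and matched between two images using OpenCV's SIFT implementation and Lowe's ratio test. OpenCV (with SIFT available) is assumed; the function name and threshold value are illustrative, not the claimed algorithm.

```python
import cv2

def match_features(img_path_a, img_path_b, ratio=0.75):
    """Detect invariant keypoints in two images and keep only the
    matches that pass the ratio test; these correspondences can feed
    the structure from motion stage."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for pair in raw if len(pair) == 2
            for m, n in [pair] if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```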
In the next step in the process the system reconstructs the geometry of a plurality of 3D bubble-views from the plurality of panoramic images 130. The 3D scene reproduction begins with the system utilising a structure from motion framework for camera localisation at step 815 to recover intrinsic parameters such as focal length and principal points, and extrinsic parameters such as camera orientation and translation for each camera. Camera localisation 815 is the foundation for a multi-view 3D reconstruction. A standard structure from motion pipeline is used for recovering the camera pose (the motion) and obtaining a sparse point cloud (the structure) 816. The point cloud is sparse because only a few distinct points such as object corners can be easily identified and matched across multiple images. Although it may not be sufficient for 3D navigation, a sparse point cloud can serve as a rough representation of the 3D scene and also provides reliable landmarks for camera localisation.
The next stage in reconstructing the 3D surface is to execute the multi-view stereo algorithm at step 820 to produce the dense point cloud 821. When reconstructing, a 3D surface mesh model of each bubble-view is generated using multi-view stereo via cylindrical surface sweeping. For example, when reconstructing, a cylindrical surface sweeping process quantises the scene with multiple depth surfaces with respect to the bubble-view centre and hypothesises the depth of each light ray to be intersecting with one of these surfaces, as illustrated by the hypothetical surfaces. The intersecting point of each light ray is projected on each depth surface, and then the process performs forward projection to find the correspondences across multiple cameras. This additional processing produces a significant increase in point density which yields the dense point cloud 821.
In addition, when reconstructing, multiple bubble-view fusion is used to register multiple 3D bubble-views in a common coordinate system. For example, partial images from different bubble-views may be registered to form
a new coordinate system, and then the relative pose from each individual bubble-view is estimated to map images from each bubble-view to the new coordinate system. In addition, the camera pose from different bubble-views may be jointly estimated while keeping the camera pose intact for one bubble-view that serves as the reference bubble-view.
At step 825 in the process the 3D surface mesh model of each bubble-view may be refined using a depth optimisation technique 825, to generate a smoother and more correct geometry based on the raw depth estimation obtained by surface sweeping. A key part of depth optimisation is to explore the depth continuity between each point and its neighbouring points. By way of example only, the optimisation can be formulated as a Markov Random Field (MRF) energy minimisation problem.
At step 830, and prior to the generation and display of the 3D mesh structure, further post-processing and digital elevation model generation is carried out. This can include but is not limited to transformation from a relative to an absolute co-ordinate system. The transformation of SfM image-space coordinates to an absolute coordinate system can be achieved using a 3D similarity transform based on a small number of ground-control points (GCPs) with known object-space coordinates. Such GCPs can be derived post hoc by identifying candidate features clearly visible in both the resulting point cloud and in the field, and obtaining their coordinates by ground survey (i.e., by GPS).
Likewise, georeferencing may be required as the panoramic images may not contain any spatial reference information. In these cases accurate location data is used to align or georeference the images to a map coordinate system.
At step 840, the final step in the process includes generating and displaying the surface mesh model. As described above, the output of bubble-view reconstruction is a surface mesh model instead of a full 3D model.
Therefore, walking through different bubble-views is achieved by dynamically
blending multiple mesh models on the fly, where the blending weight is determined by the inverse of the squared distance between the current viewing position and nearby bubble-view centres. As all bubble-views acquired from one set of panoramic images can be registered in the common coordinate system as described above, a smooth transition can be accomplished as if a user is walking through the panoramic image.
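By way of illustration only, the following Python sketch computes such inverse-squared-distance blending weights for a set of nearby bubble-view centres; numpy is assumed and the small epsilon term is an assumption added to avoid division by zero when the viewer stands exactly at a centre.

```python
import numpy as np

def blending_weights(view_pos, bubble_centres, eps=1e-9):
    """Weight each nearby bubble-view by the inverse of the squared
    distance between the current viewing position and its centre,
    normalised so the weights sum to one."""
    view_pos = np.asarray(view_pos, dtype=float)
    centres = np.asarray(bubble_centres, dtype=float)
    d2 = ((centres - view_pos) ** 2).sum(axis=1)   # squared distances
    w = 1.0 / (d2 + eps)                           # inverse squared distance
    return w / w.sum()

# Example: viewer between two bubble-view centres, closer to the first.
weights = blending_weights([1.0, 0.0, 0.0],
                           [[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
```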
The techniques disclosed include the "bubble-view", an image-based 3D panorama of a scene. As described above, the images can be easily acquired with a single camera for creation and visualisation of a 3D scene. Fusion of multiple bubble-views is also addressed, which registers all bubble-views in the common coordinate system and enables 3D navigation and measurement. Also disclosed is an integrated device combining 2D and 3D sensors for 3D panorama reconstruction, and technology that uses multiple RGB images and depth images for 3D panorama reconstruction that enables 3D navigation and measurement. Disclosed embodiments solve the indoor 3D reconstruction problem using camera localisation, data fusion, and depth optimisation via energy minimisation.
From Fig. 2, the off page reference (C) 91 takes us to Figs. 4 and 21 to 24. Fig. 4 shows the simple application architecture of the floorplans 90, with the user able to enter any number of floorplans 1 to X 92 and, as illustrated for floorplan 2, multiple highlighters 1 to X 93, 94, 95, 96 can also be added. The highlighters and floorplans are linked back to Fig. 5 via off page references (P1) to (PX).
Figs. 21 to 24 describe the processes by which a user 65 can add a floorplan to a current iteration. The floorplan 90, once added to the current first iteration 122, is displayed on the user interface in a second window or pane. Typically a floorplan 90 comprises a map or site layout or anything with a spatial significance. The map is usually viewed from above, showing the relationships between rooms, spaces and other physical features at one level of the project. As will be described below, each floorplan 90 can also have highlighters to show active regions within each floorplan 90. The user interface
links each one of the active regions to at least one of the plurality of panoramic images 130, allowing the user 65 to select one of the active regions so that the corresponding panoramic image 130 will be displayed within the main pane W1 of the multi-pane web page. Multiple floorplans 90 can be added to each project; for example, a multi-storey building would have multiple floorplans, one for each floor of the building.
The process starts at step 380 with the server first checking to see if a floorplan 90 already exists in the current project 71. If a floorplan 90 does exist, then the process moves to step 383 in which the viewport shows a list of all floorplans 90 in list L1. The server at step 384 then retrieves any floorplan highlighters P1 to PX 97, 98, 99, 100. Each floorplan highlighter is used to highlight a particular region on the floorplan 90. The viewport then displays the highlighters 97, 98, 99, 100 over the floorplan 90 in the user interface 30.
If at step 380 there are no floorplans 90 in the current project 71, the viewport at step 381 will display the message add a first floorplan to the user 65. The user 65 is now able to upload a first floorplan 90 at step 382 and the uploaded floorplan is displayed by the viewport in step 383 described above. Floorplans 90 are designed using any third party floorplan software and uploaded at step 382. Alternatively, and as will be described below, the user 65 can utilise the user interface 30 to design and add the floorplan 90. The user 65 creates their own floorplan using any available software, or alternatively uses an existing map. The user 65 can also use a Google map to signify the locations, as the images already have GPS coordinates in the metadata which are saved to the database on the server 20.
Once the viewport displays the floorplan 90 and any associated highlighters 97, 98, 99, 100 in step 385, the user is presented with the following button selection options: the add a floorplan button 415, the work on this floorplan button 390, the floorplan highlighter button 400 and the delete floorplan button 405. The add a floorplan button 415 allows the user to prepare a floorplan 90 using the user interface, which once completed can be uploaded at step 382.
The work on this floorplan button 390 takes us to off page reference (FA3) 391, which leads us to Fig. 22. Fig. 22 shows the process steps which allow the user 65 to work on the current floorplan 90 and add and remove highlighters to the floorplan 90. At step 392 the viewport displays the floorplan in the main window W1. The user 65 at step 393 can then click on multiple locations on the floorplan to highlight a region within the floorplan 90, which is designated highlighter H. By way of example only, the highlighted region H could be a room or space within a structure such as a house.
The user 65 now has two further options. The user 65 may clear the current highlighted region H by selecting button 397, which then removes the highlighted region H from the floorplan 90 and returns to step 392 above. The user may also decide to add the selected highlighted region H to the current floorplan 90 by selecting the add selection button 394. The server 20 will then save at step 395 the co-ordinates of the highlighted region H and return to step 383 of Fig. 21 via off page reference (FA2) 396.
The floorplan highlighter button 400 takes us to off page reference (FA4) 401 and Fig. 23. This button simply allows the user 65 to delete a highlighter 97, 98, 99, 100 from the current floorplan 90. The viewport at step 402 displays the delete symbol on mouse over of a particular region within the floorplan 90 and at step 403 the server 20 will delete the selected highlighter upon the mouse click by a user 65.
Finally, the delete floorplan button 405 takes us to off page reference (FA5) 406 and Fig. 24. The delete floorplan button 405 simply allows a user 65 to delete the current floorplan 90. When the button 405 is selected, the user 65 will be prompted by the user interface 30 at step 407 to confirm whether they wish to delete it or not. If no is selected, the user interface does nothing at step 409 and the process ends. If yes is selected, the server 20 at step 408 will delete the current floorplan and all corresponding highlighters 97, 98, 99, 100 and returns via off page reference (FA1) 410 to step 380 on Fig. 21.
As referred to above in Fig. 2, the user 65 can also add events 140, contacts 150 and users 160 to a project 70. The process for adding an event 140 is further defined by off page reference (F) 141 in Fig. 6. An event 140 could include anything that happens or takes place, especially one of importance. By way of example only, and using the real estate industry as a guide, an event could include an open house time and date for a real estate property or an auction time and date for the property. The application architecture shown in Fig. 6 shows that the user interface includes any number of events 142 from 1 to X. The illustrated event 3 shows that an event 142 can include information saved to the server including but not limited to the event date and time 143, event title 144, event description 145 and, if applicable, the event URL 146.
The process for adding a contact 150 is further defined by off page reference (G) 151 in Fig. 7. A contact can include the user 65 or any other user within the organisation. However, the contact may also include someone who is not a user 65, for example an employee of the user 65 who is simply employed by the user 65. Typically the contact 150 is simply a person who can be reached and who has details of the project 70; the person is the designated point of call for information about or relating to the project 70. As illustrated in Fig. 7, two contacts 152 can be added, however any number of contacts could be added if required. There is no limit on the number of contacts 150 for any project 70, however one contact 150 must be provided for each project 70. The information typically provided for each contact 152 is a photograph 153 of the contact 152 and includes an email 154 and a contact phone number 155.
As referred to above in Fig. 2, the user 65 can also enter further users 162 to a project 70 using the user interface 30. The application architecture is further defined by off page reference (H) 161, which refers to Fig. 8. In Fig. 8 the user 65 can add multiple further users 162 and can also allocate to each one of those additional users 162 various access levels 163, and they can be assigned various tasks 164. This can include providing an additional user 162 with a private access level 163 or an open access level 163. For example, a project
could be a plurality of panoramic images 130 relating to a rental property. A first iteration 122 of the rental property could be created when a new renter moves into the property and the property is uploaded to the user interface 30 to provide a virtual tour recording of the property at the first time and/or date to be used for the initial condition report. In this situation the access rights 163 for the real estate agent and the owner of the property may be a private access right 163 and the person renting the property may have a more limited viewing access right 163.
From Fig. 5 we also show the overview of the application architecture with regards to navigation hotspots 170, 171, 172 and 173. Off page references (HSP1 to HSPX) direct us to Fig. 9. As described above with reference to the creation of the first iteration 122, any number of navigational hotspots 174 can be added to any one or more of the plurality of panoramic images 130. In the context of the user interface 30, a navigational hotspot links one panoramic image of the plurality of panoramic images 130 to another to form a seamless passage of a virtual tour of a space. As illustrated in Fig. 9, the navigational hotspot (HSP1) 170 is linked to panoramic 1 on Fig. 5, (HSP2) 171 is linked to panoramic 2 on Fig. 5, (HSP3) 172 is linked to panoramic 3 on
Fig. 5 and (HSPX) 173 is linked to panoramic X on Fig. 5. Any number or plurality of navigational hotspots 174 can be linked to any number of panoramic images in the iteration 122.
The present invention also provides the ability to create multiple iterations 120 to show changes of the same space over time, that is at a different time and/or date from the first iteration 122. Like the first iteration 122 described above and as illustrated in Figs. 10 to 18, each further iteration 120 is created in the same way in which the first iteration 122 was created, and navigation hotspots can be either manually added or automatically added through the use of the image matching algorithm, both of which have been described above. Each further iteration 120 is linked to the first iteration 122 through the bridge 110. The bridge 110 allows the user 65 to manually link all of the navigational and information hotspots created in the first iteration 122 to each and every further iteration 120 by simply aligning the panoramic images
130 visually. This ensures that the visual perspective is maintained as the user 65 jumps from one location to another within the iteration 120 or between iterations 120.
As illustrated in Figs. 2 and 5, the application architecture for the user interface 30 consists of a bridge 110 for each one of the plurality of panoramic images 130; the off page reference (D) 111 directs us to Fig. 5. Therefore each project 70 can consist of multiple iterations 122, 123, 124, 125 and multiple bridges 112, 113, 114, 115 linking each of the plurality of panoramic images 130 in each iteration to each and every further iteration. Fig. 5 shows that bridge (1) 112 links panoramic image (1) in iteration (1) 122 to panoramic image (1) in iteration (2) 123, panoramic image (1) in iteration (3) 124 and panoramic image (1) in iteration (X) 125. Likewise, bridge (2) 113 links panoramic image (2) in iteration (1) 122 to panoramic image (2) in iteration (2) 123, panoramic image (2) in iteration (3) 124 and panoramic image (2) in iteration (X) 125. Additional bridge (3) 114 links panoramic image (3) to each iteration and so forth up to bridge (X). For each one of the plurality of panoramic images 130 a bridge 110 is created which links that image to each and every iteration 120 within the project 70.
As noted above, a bridge 110 is only created when a further iteration 120 of the project 70 is created. By way of example only, if a user 65 created four iterations 120 of a space at different times to show the changes in the space over time and each iteration 120 contained four panoramic images 130, then in order to link each and every panoramic image 130 within each iteration 120 to provide a seamless virtual tour the front-end software would need to create a total of four bridges 110. The user 65 would then need to visually align images 130 within corresponding iterations 120; this will be described in further detail below.
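By way of illustration only, the following Python sketch shows one possible data structure for the bridges: one bridge per panorama position, linking that image across every iteration of the project. The dictionary layout and identifiers are assumptions made purely to illustrate the four-iterations, four-images, four-bridges example above.

```python
def build_bridges(iterations):
    """iterations: dict mapping an iteration id to its ordered list of
    panorama ids.  One bridge is created per panorama position, linking
    that image in every iteration of the project."""
    positions = max(len(panos) for panos in iterations.values())
    bridges = []
    for i in range(positions):
        bridge = {it_id: panos[i]
                  for it_id, panos in iterations.items() if i < len(panos)}
        bridges.append(bridge)
    return bridges

# Four iterations of four panoramas each -> four bridges.
bridges = build_bridges({
    "iteration1": ["p1", "p2", "p3", "p4"],
    "iteration2": ["p1b", "p2b", "p3b", "p4b"],
    "iteration3": ["p1c", "p2c", "p3c", "p4c"],
    "iteration4": ["p1d", "p2d", "p3d", "p4d"],
})
assert len(bridges) == 4
```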
Figs. 19 and 20 describe the process of manually adding each further iteration 120 and the interaction of the user 65 with the bridge algorithm to create the bridge 110. Creating a bridge 110 starts with the server at step 350 retrieving all panoramic images 130 from the current iteration (I1) 123 and
adding these to the list L1. As illustrated in Fig. 5, the current iteration will be iteration (2) 123. The viewport at step 351 then displays a bridged symbol on all panoramic images 130 that show the bridged flag BF1=1. In this case none of the panoramic images 130 would show the bridged symbol, as no image has previously been aligned with a previous iteration. The viewport will at step 352 display the list L1 on one side of the user interface 30, in this case the left side of the display.
Also, at the same time as retrieving, generating and displaying the list L1, the server will at step 357 retrieve all panoramic images 130 from the previous iteration (I2) 122 and add these to list L2. As illustrated in Fig. 5, the previous iteration will be iteration (1) 122. The viewport will then render the list L2 as a hidden list at step 358 and will remove from L2 any panoramic image 130 which shows the bridged flag BF2=1 at step 359. In this case none of the panoramic images 130 would show the BF2=1 flag, as no image has previously been aligned with a previous iteration.
The user 65 would then at step 353 select a first panoramic image P1 from the list L1 by clicking on the required image. The viewport will then at step 354 render and display the selected image P1 in the main window W1 of the user interface 30. The user 65 at step 355 then has the ability to rotate the image P1 to any desired visual viewing angle. Alternatively, the user 65 can also rotate the image P1 to orient the panoramic image P1 with respect to a physical feature such as those on a map. By way of example only, image metadata in the panoramic image P1 could be used to automatically locate the image on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata. Further alternatively, the user 65 could rotate P1 to orient the image with respect to the compass heading north.
The next step 356 in the process allows the user 65 to connect the selected image P1 with an image with similar characteristics or features from the list L2. The off page reference (CB2) 360 directs us to Fig. 20. The user 65 selects the connect button 356 and the viewport at step 361 reveals and
displays the list L2 of panoramic images from the previous iteration. The user 65 then selects from L2 the panoramic image P2 which has the closest similarities to image P1 at step 362. The viewport at step 363 will then render and display the image P2 in a small overlay window W2 over the image P1 which is displayed in window W1. The user 65 is now able to rotate P2 to align it with the visible features in P1 at step 364. The viewport at step 365 will then calculate the updated heading of P1 with respect to P2's rotation. The heading of P1 includes the pitch, yaw and HFOV of P1. This step ensures that the same heading is maintained in each iteration 120 and in each further iteration 120. The server 20 at step 366 will then save the updated heading for P1 and set the flags HU=1 and BF1=1. At step 367 the server 20 saves the flag BF2=1 for P2. This then returns to the off page reference (CB1) 370 of Fig. 19, in which the viewport will add the bridged symbol to panorama P1 in the list L1 since P1 now has the bridged flag BF1=1. As BF2=1 has been updated on the server 20 for P2, this will also remove P2 from the list L2.
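By way of illustration only, the following Python sketch summarises the bridging steps 365 to 367: applying the rotation offset from the user's alignment of P2 to update the heading of P1, then setting the flags that mark both panoramas as bridged. Only the yaw component of the heading is adjusted here for simplicity; the field names are assumptions for the purpose of the sketch.

```python
def bridge_align(p1, p2, rotation_offset_deg):
    """Sketch of steps 365-367: after the user rotates P2 to line up
    with P1, update P1's heading by the rotation offset and set the
    flags that mark both panoramas as bridged."""
    p1["yaw"] = (p1["yaw"] + rotation_offset_deg) % 360.0  # updated heading of P1
    p1["heading_updated"] = True    # HU = 1
    p1["bridged"] = True            # BF1 = 1: shows the bridged symbol in list L1
    p2["bridged"] = True            # BF2 = 1: removes P2 from the hidden list L2
    return p1, p2
```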
The user can now perform the above steps for each one of the plurality of panoramic images 130 of the further iteration which do not have the bridged symbol. Once all panoramic images 130 from the first iteration and the further iteration have been linked by the bridge 110, the server 20 will save the iterations. Each iteration 120, once linked by the bridge algorithm to a previous iteration, shows the created navigation and any information hotspots linked in any one of the plurality of panoramic images 130 in each one of the further iterations, such that any one hotspot can transcend through each iteration. For example, if three navigation hotspots are added to panoramic image 1 in iteration 1 and the user 65 then adds iteration 2 at a different time and/or date and panoramic image 1 is linked from iteration 2 to iteration 1, the three hotspots from panoramic image 1 in iteration 1 will transcend to panoramic image 1 in iteration 2.
Allowing the user 65 to rotate similar images in different iterations through interaction with the bridge 110 allows the user 65 to seamlessly maintain the visible perspective throughout each iteration. The rotation by the user 65, and the server 20 then calculating the updated heading of the first
panoramic image with respect to the rotated further panoramic image, allows the user 65 to align each one of the plurality of panoramic images 130 across iterations 120. This allows the user 65 to control the heading of individual panoramas 130 across iterations 120.
Alternatively, the bridge 110 and the bridge algorithm may also use the image matching algorithm to automatically find visual similarities between the plurality of panoramic images 130 in each iteration 120. The image matching algorithm allows the user interface 30 to automatically align the panoramic images in the further iteration 123 with the first iteration 122. Each image in the further iterations can be automatically compared with the previous iteration to find visual similarities between the plurality of panoramic images, and this can then be utilised to automatically align the panoramic images in each further iteration with the previous iteration.
As previously discussed with the creation of the first iteration 122, the image matching algorithm could also utilise any artificial intelligence (AI) process to compare the images between iterations and automatically link the iterations and the navigational and information hotspots between iterations. The AI process can be used to infer differences and changes in a location across multiple iterations to reach conclusions based on common understanding. For example, the AI process could include but is not limited to a machine learning algorithm, a pattern recognition algorithm, a machine vision algorithm or any device that perceives its environment and takes actions that maximise its chance of success at some goal, in this case the analysis of panoramic images in one or more iterations for similarities. For example, a machine learning algorithm is basically an algorithm which improves automatically through experience. Likewise, pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in the plurality of panoramic images 130 and determines a pattern based on the similarities between those images and between iterations over time.
The machine learning algorithm or artificial intelligence algorithm would recognise the similarities and/or differences between different panoramic images of a space to automatically generate navigation hotspots. Likewise, once a further iteration of a space is generated, the algorithms could be used to automatically align visual similarities between images from different iterations of a space at a different time and/or date.
The design of the bridge 110 also allows the user 65 to easily delete or remove one or more further iterations without disturbing the link between the navigation hotspots of the remaining iterations.
Like the first iteration 122, each further iteration once generated and saved to the server 20 will be identified in a window of the user interface 30 to allow the user 65 to access each iteration respectively.
To enable other technologies to plug in directly and use the features of the user interface 30, the present invention also provides an application programming interface (API). The API is simply a set of subroutine definitions, protocols, and tools for building application software that can allow other technologies to interface with the user interface 30 and other components of the present invention. The API will allow other technologies to communicate with the user interface 30 to enable a programmer to develop a computer program based on and utilising the unique features of the present invention. The API may be a web-based system, operating system, database system, computer hardware or software library. The API also provides a developer with the ability to access other parts of the application, including but not limited to panorama creation, floorplan addition and highlighter creation, using the AI to create hotspots, and using iterations to show iterative visuals of a space.
The API could be utilised with the user interface 30 to provide access to such third party technologies as a floorplan design program, a web mapping service, any technology which provides a plurality of panoramic images sourced over or at different times and/or dates, or any technology which requires the iterative sorting of panoramic images over any domain. For example, the API
may provide a sorted and linked plurality of panoramic images which are connected to any one or more iterations generated at a different time and/or date. The API may automatically locate the image on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image, through the API accessing metadata located in any one of the plurality of panoramic images.
By way of a further example, another application which could benefit from the user interface 30 and the API of the present invention is any industry which gathers a lot of images of the same space over time, for example a natural disaster management company. In these types of companies there is a need to map and record disasters and to correlate this data to weather charts or seismic maps, etc. In a typical scenario the company would send out drones set to a specific GPS path that take 360° panoramic images of the same location or space every hour. Once completed, the company has a large number of 360° panoramic images of the space collected at different times and/or dates that are scattered across multiple folders and are hard to browse through. The present invention provides access to the API, which allows the natural disaster company to produce imagery sorted into iterations which are automatically connected to any previous iteration (via the GPS data). To further simplify the process the API could also be designed to automatically upload the latest shoot cycle from the drone to the user interface to sort into each iteration and automatically connect to any previous iteration.
The output provided via the API to this type of company differs significantly from what would typically be provided to a real estate company to produce a virtual tour of a space. What is provided is raw data, which the company's servers can then make use of in any way they want. They can also use the differentiation that has been derived out of the image matching algorithm or AI analysis of the images, which simply highlights the regions that have changed over a time duration.
Figs. 25 to 29 illustrate two examples of the interaction between the bridge 110, iterations 120 and the plurality of panoramic images 130. Fig. 25
shows the display screen or viewport for the user interface 30. The screen is broken into two distinct panes or windows 32, 33. The main window (W1) 32 in this illustration shows the active panoramic image 130 with ID = 12, which has a heading with pitch = -1 degree, yaw = 41.25 degrees and horizontal field of view (HFOV) = 84 degrees. The other window (W2) 33 shows the floorplan 92 broken into four highlighted regions 93, 94, 95, 96 and buttons for accessing each iteration 122, 123, 124.
Fig. 26 shows the initial steps involved in retrieving, opening and viewing the saved virtual tour in the web browser. The process starts at 500 by the user logging into the web browser using the URL 31 provided. The server at step 501 will obtain the project ID from the input URL 31 and at step 502 retrieve all information pertaining to the selected project including all saved iterations 122, 123, 124, all panoramic images 130, all floorplans 93, 94, 95, 96 and all input information which relates to the current project ID. At step 503 the viewport will retrieve the starting panoramic image 130 from the heading information provided in the URL 31. If the heading information is correct and the viewport can identify the current panoramic image from the saved project then the viewport at step 505 will render and display the starting panoramic image. If the information for the starting panoramic image is not available or does not exist, the viewport at step 504 will set a first panoramic image 130 from the list of panoramic images of the project as the starting panoramic image 130 and at step 505 as above will render and display that image.
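The loading sequence of steps 500 to 505 can be summarised, by way of illustration only, in the following TypeScript sketch; the data shapes and helper names are assumptions for this example rather than the actual front-end software.

```typescript
// Illustrative sketch of steps 500-505 of Fig. 26; names and shapes are assumptions.
interface ViewHeading { pitch: number; yaw: number; hfov: number }
interface TourPanorama { id: number; heading: ViewHeading }
interface TourProject { id: string; panoramas: TourPanorama[] }

// Step 501: obtain the project ID from the input URL 31.
function parseProjectId(url: URL): string {
  return url.searchParams.get("project") ?? "";
}

// Steps 503-504: resolve the starting panorama from the information in the
// URL, falling back to the first panorama in the project's list.
function selectStartingPanorama(project: TourProject, url: URL): TourPanorama {
  const requestedId = Number(url.searchParams.get("pano"));
  const match = project.panoramas.find(p => p.id === requestedId);
  return match ?? project.panoramas[0];
}

async function openVirtualTour(
  url: URL,
  fetchProject: (projectId: string) => Promise<TourProject>,   // step 502: retrieve the saved project
  render: (pano: TourPanorama) => void,                        // step 505: render and display
): Promise<void> {
  const project = await fetchProject(parseProjectId(url));
  render(selectStartingPanorama(project, url));
}
```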
Fig. 27 shows the application architecture which applies to the current project (PR12) 71. The project 71 consists of floorplans 90, iterations 120 and bridges 110. The floorplans 90 consist of floorplan 91 broken into four highlighted regions 93, 94, 95, 96. The highlighter (H1) 93 is located or linked to panoramic image (P1) 510, highlighter (H2) 94 is linked to panoramic image (P2) 511, highlighter (H3) 95 is linked to panoramic image (P3) 512 and highlighter (H4) 96 is linked to panoramic image (P4) 513.
The project 71 consists of three iterations 120 with four panoramic images in each iteration 120. Iteration 1 (IT1) 122 consists of panoramic
images Pano 1 (P1) 510, Pano 2 (P2) 511, Pano 3 (P3) 512 and Pano 4 (P4) 513. Iteration 2 (IT2) 123 consists of panoramic images Pano 1 (P5) 514, Pano 2 (P6) 515, Pano 3 (P7) 516 and Pano 4 (P8) 517. Iteration 3 (IT3) 124 consists of panoramic images Pano 1 (P9) 518, Pano 2 (P10) 519, Pano 3 (P11) 520 and Pano 4 (P12) 521.
For each panoramic image in the iteration there must be an equal number of bridges 110 created to ensure that each panoramic image in each iteration is able to be linked successfully as the user jumps from one iteration to the next. As shown in Fig. 27 the project 71 consists of four bridges 112, 113, 114, 115. Bridge (B1) 112 links Pano 1 in each iteration 122, 123, 124, bridge (B2) 113 links Pano 2 in each iteration 122, 123, 124, bridge (B3) 114 links Pano 3 in each iteration 122, 123, 124 and bridge (B4) 115 links Pano 4 in each iteration 122, 123, 124.
Also shown in Fig. 27 are navigation hotspots 170, 171 which connect each image in the iteration to form the virtual tour. As illustrated, navigation hotspots 170 consist of (HS1) 172 in Pano 2 (P2) 511, (HS2) 173 in Pano 3 (P3) 512 and (HS3) 174 in Pano 4 (P4) 513. Navigation hotspots 171 consist of (HS4) 175 in Pano 1 (P1) 510, (HS5) 176 in Pano 3 (P3) 512 and (HS6) 177 in Pano 4 (P4) 513.
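One possible data model for the project architecture of Fig. 27 is sketched below in TypeScript, by way of illustration only; the interface and field names are assumptions and the actual stored representation on the server 20 may differ.

```typescript
// Illustrative data model only; names are assumptions for this example.
interface NavigationHotspot { id: string; targetPanoId: string; yaw: number; pitch: number }
interface Panorama {
  id: string;                                   // e.g. "P2"
  iterationId: string;                          // e.g. "IT1"
  heading: { pitch: number; yaw: number; hfov: number };
  hotspots: NavigationHotspot[];
}
interface Highlighter { id: string; panoId: string }     // active region on the floorplan
interface Floorplan { id: string; highlighters: Highlighter[] }
interface Iteration { id: string; capturedAt: string; panoIds: string[] }
// A bridge links the "same" panorama position across every iteration, for
// example B2 would map { IT1: "P2", IT2: "P6", IT3: "P10" }.
type Bridge = Record<string, string>;
interface Project {
  id: string;                                   // e.g. "PR12"
  floorplans: Floorplan[];
  iterations: Iteration[];
  bridges: Bridge[];
}
```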
The first example, case (1) 530 shown in Fig. 28 will illustrate how the user interface 30 uses the bridges 112, 113, 114, 115 to ensure that if a user 65 is in iteration 1 (IT1) 122 looking at Pano 2 (P2) 511 and clicks on iteration 3 (IT3) 124 in window (W2) the user 65 should land in iteration 3 (IT3) at image
Pano 2 (P10) 519. At the first step 531 the server 20 counts all bridges 112, 113, 114, 115 in the current project (PR12) 71 and saves the count in variable C. The server 20 will then set flag i = 1 at step 532. At decision step 533 the server 20 will check if the flag i is less than or equal to the count variable C. In this example, with the count variable C = 4 and i = 1, i is less than or equal to 4, therefore the decision is true and the process moves to step 534 in which the server 20 will search bridge 1 for Pano 2 (P2) 511. If the decision had returned a false answer then the process would move to step 536 and the viewport would display the error message that the iteration or panoramic image does not exist.
In this case the statement is true and the server 20 in step 534 will then search bridge 1 for Pano 2 (P2) 511. If Pano 2 (P2) 511 did exist in bridge 1 the true decision would be returned and the process would move to step 535. However, in the current example Pano 2 (P2) 511 does not exist in bridge 1, so the false decision is returned and the i flag is incremented by 1 as in step 537. The server 20 would then check if i was less than or equal to C in step 533 and if true the server at step 534 would now search bridge 2 for Pano 2 (P2) 511.
In this case Pano 2 (P2) is found in bridge 2 and the true decision is returned and the process moves on to step 535. In step 535 the server searches bridge 2 to find any corresponding images in iteration 3. In bridge 2 the corresponding iteration 3 image is Pano 2 (P10) 519. If there was no corresponding iteration 3 image found in bridge 2 then a false decision would be returned and the process would move to step 536 and the viewport would display the error message that the iteration or panoramic image does not exist.
Given that at step 535 the server has found a corresponding image in iteration 3 the true decision is returned and the process moves on to step 538. In step 538 the server will retrieve the current heading, yaw, pitch and HFOV of Pano 2 (P2) 511 and save those variables in H1 (heading), Ya (yaw), Pi (pitch) and Hf (HFOV). The server would then retrieve at step 539 the heading (H2) of Pano 2 (P10) 519, the corresponding image in iteration 3. The viewport at step 540 would then render Pano 2 (P10) 519 with the Ya (yaw), adjusted H1 and H2 (heading), Pi (pitch) and Hf (HFOV). Finally at step 541 the viewport would retrieve any navigational hotspots from Pano 2 (P2) 511 and render those on Pano 2 (P10) 519.
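By way of illustration only, the bridge search of Fig. 28 can be sketched as follows in TypeScript, reusing the Project and Bridge shapes assumed above; the function names and the exact yaw adjustment formula are assumptions, not the literal implementation.

```typescript
// Steps 531-536: scan the bridges for the panorama currently in view and
// return the corresponding panorama in the target iteration, or null if the
// iteration or panoramic image does not exist (the error case of step 536).
function findCorrespondingPano(project: Project, currentIterationId: string,
                               currentPanoId: string, targetIterationId: string): string | null {
  for (const bridge of project.bridges) {
    if (bridge[currentIterationId] === currentPanoId) {
      return bridge[targetIterationId] ?? null;   // step 535: corresponding image, if any
    }
  }
  return null;
}

// Steps 538-540: the target panorama is rendered with the viewer's current
// yaw (Ya), pitch (Pi) and HFOV (Hf), the yaw being adjusted by the difference
// between the saved headings H1 and H2 so that the perspective is maintained.
function viewAfterJump(view: { yaw: number; pitch: number; hfov: number },
                       h1: number, h2: number) {
  return { yaw: view.yaw + (h1 - h2), pitch: view.pitch, hfov: view.hfov };
}
```

Step 541, the re-rendering of the navigational hotspots of the source panorama onto the target panorama, would follow the same adjustment.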
The second example, case (2) 550 shown in Fig. 29 will illustrate how the user interface 30 uses the bridges 112, 113, 114, 115 to ensure that if a user 65 is in iteration 2 (IT2) 123 looking at Pano 3 (P7) 516 and clicks on highlighter (H2) 94 on the floorplan 92 in window (W2) the user 65 should remain in iteration 2 (IT2) 123 but be directed to image Pano 2 (P6) 515 as will be described below. At the first step 551 the server 20 counts all bridges 112, 113, 114, 115 in the current project (PR12) 71 and saves the count in variable C. The server at step 552 will then retrieve the identification number of the
current iteration, in this case ID = IT2, and save the identification number in variable IT. The front-end software will then find what highlighter (H2) 94 points to; in this case highlighter (H2) 94 points to panoramic image (P2) 511.
The server 20 will then set flag i = 1 at step 554. At decision step 555 the server 20 will check if the flag i is less than or equal to the count variable C. In this example, with the count variable C = 4 and i = 1, i is less than or equal to 4, therefore the decision is true and the process moves to step 556 in which the server 20 will search bridge 1 for Pano 2 (P2) 511. If the decision returned a false answer then the process would move to step 558 and the viewport would display the error message that the iteration or panoramic image does not exist.
In this case the statement is true and the server 20 in step 556 will then search bridge 1 for Pano 2 (P2) 511. If Pano 2 (P2) 511 did exist in bridge 1 the true decision would be returned and the process would move to step 557.
However, in the current example Pano 2 (P2) 511 does not exist in bridge 1, so the false decision is returned and the i flag is incremented by 1 as in step 559. The server 20 would then check if i was less than or equal to C in step 555 and if true the server at step 556 would now search bridge 2 for Pano 2 (P2) 511.
In this case Pano 2 (P2) is found in bridge 2 and the true decision is returned and the process moves on to step 557. In step 557 the server takes the variable IT = IT2 and looks in bridge 2 to find any corresponding images. In bridge 2 the corresponding iteration 2 image is Pano 2 (P6) 515. If there was no corresponding iteration 2 image found in bridge 2 then a false decision would be returned and the process would move to step 558 and the viewport would display the error message that the iteration or panoramic image does not exist.
Given that at step 557 the server has found a corresponding image in iteration 2 the true decision is returned and the process moves on to step 560.
In step 560 the server will retrieve the current heading, yaw, pitch and HFOV of Pano 2 (P2) 511 and save those variables in H1 (heading), Ya (yaw), Pi (pitch) and Hf (HFOV). The server would then retrieve at step 561 the heading (H2) of Pano 2 (P6) 515, the corresponding image in iteration 2 that the highlighter (H2)
is pointing to. The viewport at step 562 would then render Pano 2 (P6) 515 with the Ya (yaw), adjusted H1 and H2 (heading), Pi (pitch) and Hf (HFOV). Finally at step 563 the viewport would retrieve any navigational hotspots from Pano 2 (P2) 511 and render those on Pano 2 (P6) 515.
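Continuing the illustrative sketch above, the highlighter case of Fig. 29 differs only in that the target iteration is the iteration the user is already in; the helper below (names assumed) maps the panorama that the highlighter points to into the current iteration via the same bridge search.

```typescript
// Steps 552-557 (illustrative): the highlighter points at a panorama of some
// iteration; the bridges are used to find the corresponding panorama in the
// iteration currently being viewed (variable IT), or null for the error case.
function resolveHighlighterTarget(project: Project, currentIterationId: string,
                                  highlighter: Highlighter): string | null {
  const sourceIteration = project.iterations
    .find(it => it.panoIds.includes(highlighter.panoId));
  if (!sourceIteration) return null;
  return findCorrespondingPano(project, sourceIteration.id,
                               highlighter.panoId, currentIterationId);
}
```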
Both examples above show that as the user 65 moves through an iteration or from one iteration to another, or from one highlighted region in an iteration to another highlighted region, the user interface 30 maintains the perspective from one image to another and from one iteration to another. Furthermore the virtual tour of the space produced by the user interface allows the user 65 to be immersed in a more realistic image by providing a wider field of vision with respect to a scene.
Fig. 30 shows a further extension of the present invention in which the user interface 30 can also be utilised to provide analysis of certain aspects of the present invention. In particular, the user interface 30 provides an analysis tool that provides consistent, transparent, and efficient analysis of the project 70. Analytics can be used to understand where viewers are looking in the panoramic images 130 to gauge interest in specific panoramas across multiple iterations and then develop a graphical representation or report which provides the user 65 with insights on user behaviour within the project 70.
As illustrated in Fig. 30 the process is started or initiated at 600 and at 601 the first step is that whenever a panoramic image (P) 130 is loaded the following steps occur. Firstly, at step 602 the server 20 checks the URL for referral hash H and saves it if present. The server then at step 603 saves the time the URL and panoramic image is opened in T1. At this point the server at step 604 also sets the user clicks in panorama counter UC1 to zero (UC1 = 0). At the same time the server 20 at step 605 saves the user IP address, browser and any user device information. The user 65 will now either click in the panorama 130 or will click on another or next panoramic image 130 as shown in step 608 or could simply click on the share button as shown in step 610.
If the user 65 clicks within the current panoramic image (P) then the process moves to step 607 in which the user click counter UC1 is incremented by one and the updated UC1 is saved to the server 20. This can occur until the user 65 stops clicking within the current panoramic image (P) and the UC1 user click counter will continue to update and save. If the user 65 clicks onto another or next panoramic image at step 608 the process will move to step 609 in which the server will save both the time of exit from panoramic image (P) as T2 and the identification number of the next panoramic image 130 and returns to step 601 and the process begins for the next panoramic image.
Alternatively, if the user 65 decides to click on the share button 610 then the process moves to step 611 where the server 20 will generate the share hash H and at step 612 the server 20 will save the share hash H, the time and/or date of the share, the user ID of the user who shared the project 70, the share identifier and the identification number of the panoramic image 130 that has generated the share request. The process then returns to step 601 and the process begins for the next panoramic image.
In light of the above, having saved times T1 and T2 allows the analytics program to determine how long a viewer was viewing each one of the plurality of panoramic images 130. Having the user click counter UC1 allows the analytics program to show how intently a viewer interacts with each one of the plurality of panoramic images 130. Finally, having H allows the analytics program to help track all of the shares of each project and also to identify whether anyone who received a share invite actually viewed the plurality of panoramic images as a result of receiving the share invite.
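By way of illustration only, the event capture of Fig. 30 could be structured as in the following TypeScript sketch; the class, event names and payload fields are assumptions for this example and the actual analytics program may record the data differently.

```typescript
// Illustrative sketch of steps 600-612; names and payloads are assumptions.
interface PanoramaViewRecord {
  panoId: number;
  referralHash?: string;   // step 602: hash H taken from the URL, if present
  openedAt: number;        // step 603: T1
  closedAt?: number;       // step 609: T2
  clickCount: number;      // steps 604 and 607: UC1
  userAgent: string;       // step 605: browser and device information
}

class TourAnalytics {
  private current?: PanoramaViewRecord;
  constructor(private send: (event: object) => void) {}    // persists events to the server

  onPanoramaLoaded(panoId: number, url: URL): void {
    this.closeCurrent();                                    // step 609 for the previous panorama
    this.current = {
      panoId,
      referralHash: url.searchParams.get("share") ?? undefined,
      openedAt: Date.now(),
      clickCount: 0,
      userAgent: navigator.userAgent,
    };
  }

  onClickInPanorama(): void {                               // step 607: increment and save UC1
    if (!this.current) return;
    this.current.clickCount += 1;
    this.send({ type: "click", ...this.current });
  }

  onShare(shareHash: string, userId: string): void {        // steps 611-612
    this.send({ type: "share", shareHash, userId,
                panoId: this.current?.panoId, sharedAt: Date.now() });
  }

  private closeCurrent(): void {
    if (!this.current) return;
    this.current.closedAt = Date.now();
    this.send({ type: "view", ...this.current });
  }
}
```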
In today’s society we also need to take into consideration the censorship of certain information found within the plurality of panoramic images 130. For example, the pixelation of vehicle license plates and the faces of people found in the plurality of images needs to be considered by the user interface 30 in order to avoid infringing any rights of individuals or inadvertently displaying an image which should not be displayed. In the present invention, pixelisation is any technique used in editing images, whereby an image is blurred by displaying
part or all of it at a markedly lower resolution. It is primarily used for censorship. The effect is a standard graphics filter, available in all but the most basic bitmap graphics editors.
The user interface 30 may include a further algorithm which allows the user to manually select and edit each image which contains a face of a person or a vehicle number plate or licence plate. Optionally the program may automatically search for images of people and faces or vehicle number plates and then allow the user 65 to select which of those images should be pixelated or blurred. As a further option an algorithm may be included that automatically finds and pixelates or blurs any image which meets certain criteria, such as faces of people and vehicle number plates.
The image matching algorithm which provides the automatic comparison of similarities and differences in the plurality of images may also be modified to provide the automatic recognition of items which require censorship, such as faces or vehicle number plates, and to perform the censorship of those items.
Alternatively the user interface 30 may comprise a separate program designed to automatically pixelate or blur the required images. Such a program would include an automatic face and/or number plate detector which runs in the browser and provides the user 65 with the ability to censor certain required images.
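By way of illustration only, the following TypeScript sketch shows one generic way to pixelate user-selected or automatically detected regions (such as faces or number plates) of an image drawn on an HTML canvas; the detection step itself is outside the scope of the sketch and the function names are assumptions for this example.

```typescript
interface CensorRegion { x: number; y: number; width: number; height: number }

// Paint each region in coarse blocks so that part of the image is displayed
// at a markedly lower resolution, as described above.
function pixelateRegions(canvas: HTMLCanvasElement, regions: CensorRegion[], blockSize = 12): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  for (const r of regions) {
    for (let y = r.y; y < r.y + r.height; y += blockSize) {
      for (let x = r.x; x < r.x + r.width; x += blockSize) {
        const [red, green, blue] = ctx.getImageData(x, y, 1, 1).data; // sample one pixel
        ctx.fillStyle = `rgb(${red}, ${green}, ${blue})`;
        ctx.fillRect(x, y, blockSize, blockSize);                     // fill the whole block with it
      }
    }
  }
}
```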
The user interface 30 has been largely developed to enable a user 65 to easily and intuitively design a virtual tour of a space. The user 65 can create a first iteration of a project, which includes aligning the plurality of panoramic images either manually or automatically through the use of the image matching algorithm, to produce and generate a virtual tour on a multi-pane web page or API return call in response to user inputs into the user interface 30.
The virtual tour is transmitted via the Internet and able to be viewed by users using a web browser. The user is also able to create multiple further iterations of the space over time to show any changes in the space. The user interface through the intuitive bridge ensures that each iteration is either manually or
automatically aligned with each other iteration. The user interface will then in accordance with the user input generate an iterative virtual tour on a multi-pane web page of a space, and as above the virtual tour is transmitted via the Internet and able to be viewed by users using a web browser which shows changes over time of the space. Alternatively, the virtual tour may be generated through an API return call. When the API is accessed through a specified URL a return call will allow access to the features or data of the user interface 30.
The user interface 30 will produce a web page using either a web-based interface or a cloud-based interface. The user 65 creates content using a web-based browser, and the cloud-based interface helps applications communicate with the cloud-based service. Alternatively, for applications which interpret data, such as the natural disaster application described above, a web-based browser is not necessarily needed. They can take this data, manipulate it in any way they need in order to achieve their desired goal and then provide an adequate output.
In order to cater to as many technologies as possible the virtual tour produced by the user interface 30 is compatible with iOS, Android and all other computer devices. Also, with the move to wearable technologies and mobile devices, exporting the virtual tours as HTML5 makes them viewable across all platforms. The user 65 has the ability to split the viewport into two windows allowing the user 65 to use head mounted devices to see a stereoscopic view of the space. The virtual tours currently produced use HTML5 and so are compatible with all mobile platforms, and are mobile responsive.
The present invention is based on a user interface 30 which allows the user 65 to generate the virtual tour of a space. The user interface 30 also includes components or a system for creating, managing and publishing an interactive virtual tour. The system consists of the user interface 30 for use in either a web-based or cloud-based environment, allowing a user 65 to create through user input multiple iterations of a space. The system also embodies a client device having one or more processors, memory, and one or
more programs stored in the memory and configured for execution by the one or more processors. The one or more programs comprise instructions for generating and displaying the user interface on a display. The system further consists of a web server for allowing the client device to access and store the interactive virtual tour, virtual tour data, information and commands. The primary function of the web server is to store, process and deliver web pages to clients. The final component of the system is a communications network for connecting the client device, the server and the display showing the user interface. The communications network can be either a local area network (LAN) or a wide area network (WAN), with multiple communication connections, including microwave radio links and satellites, used to connect computers and other terminals over large geographic distances. Both communication networks can be wired or wireless.
The present invention also extends to a computer system for providing a user interface 30 of an application program having multiple panes, each pane providing access to functionality of the application program to create a virtual tour of a space. The computer system consists of a number of components which provide the user interface 30. In its broadest form the user interface 30 consists of front-end software saved on the server 20 and a viewport or display which is shown or displayed in the form of a multi-pane web page. The first component, which is displayed in the first window or pane, allows the user 65 to interact with the user interface to register to use the user interface 30. This can include, but is not only limited to, registering and entering details of the first user and/or organization details, all details being saved to the server 20. Once the user 65 is registered they are able to upload user-input specifying information relating to the project and upload a plurality of panoramic images of the project. This can then be used to create a first iteration of the project which is saved to the server 20 at a first time and/or date.
The second component of the computer system for producing the user interface 30 replaces the display of the first pane of the user interface by displaying a second function which allows the user to link and align each one of the panoramic images in the first iteration by manually adding navigation
hotspots. This is performed by the user 65 manually rotating images to align visual perspectives.
Once all of the images in the first iteration have been manually linked by the user 65, a third component, upon receiving from the user 65 a selection of the save iteration icon or button of the user interface 30, will save the first iteration to the server 20. The third component also allows the application program to display the virtual tour of the first iteration showing the linked and aligned plurality of panoramic images and, in a second pane, displays a first iteration button or icon.
The system further has a fourth component which allows a user to display in the second pane of the user interface 30 floorplans showing a map or site layout or anything with a spatial significance and to add active highlighted regions within the floorplan which are linked to one of the plurality of panoramic images. The user can now interact with the user interface to create and add a further iteration at a different time and/or date to the first iteration. This is carried out in the same manner as was previously described above for the first iteration. Creating the further iteration also means that the user interface 30 will implement the creation of a bridge. For each one of the plurality of images a bridge will be created which allows the user to rotate and align each one of the further panoramic images from the further iteration with each one of the corresponding panoramic images from the first iteration or, optionally, align all panoramic images of each iteration with a feature on a map or floorplan. The user interface 30 will then calculate an updated heading for each one of the first panoramic images with respect to the rotated further panoramic images and save the updated heading to automatically align visible features in the panoramic images and automatically add a hotspot from the first iteration to the further iteration.
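The updated heading calculation can be illustrated, under the assumption that it reflects the rotation the user applied when visually aligning the further panorama with the first, by the short TypeScript sketch below; the names and the exact formula are assumptions for this example.

```typescript
function normalizeDegrees(angle: number): number {
  return ((angle % 360) + 360) % 360;
}

// rotationApplied: how far (in degrees) the user rotated the further panoramic
// image before its visible features lined up with the first panoramic image.
// The saved result is later used as the H1/H2 adjustment when jumping between
// iterations so that hotspots and perspective carry across.
function updatedHeading(firstHeading: number, rotationApplied: number): number {
  return normalizeDegrees(firstHeading + rotationApplied);
}
```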
A further component of the user interface 30, upon receiving from the user 65 a selection to save the further iteration icon or button of the user interface 30, will display the virtual tour of the second iteration showing the linked and aligned plurality of panoramic images and in a second pane displays
a second iteration button or icon. The computer system will repeat the above steps for each further iteration at a different time and/or date to the previous iteration.
Finally the computer system will generate a multi-pane web page in response to the user interface to display the virtual tour of the space. The virtual tour is transmitted via the Internet and viewed by a plurality of users using a web browser. The web browser could be any one of the popular web browsers, such as Google Chrome, Microsoft Internet Explorer and Firefox.
As previously described, the present invention allows the user 65, with the second component, to rotate the panoramic images to orient the panoramic images with respect to a physical feature such as those on a map. This could include any GPS co-ordinates. Alternatively, the user 65 using the second component could rotate the panoramic images to orient the panoramic images with respect to the compass heading north.
The second component could also be optionally replaced with an image matching algorithm which automatically adds hotspots to the plurality of panoramic images by identifying similarities and differences between the images and based on those similarities or differences automatically adds a navigational hotspot that connects each one of the panoramic images to each other.
Likewise the creation of the bridge component can also utilise the image matching algorithm to automatically find visual similarities between the panoramic images of the first iteration and the panoramic images of the further iteration and automatically align the further iteration with the first iteration. This includes automatically comparing each further iteration with the previous iteration to find visual similarities between the panoramic images of the further iterations and automatically aligning the panoramic images in each further iteration with the previous iteration. The image matching algorithm could also include using an artificial intelligence engine such as a machine learning algorithm, a pattern recognition algorithm, or a machine vision algorithm.
Fig. 31 shows an exemplary screenshot of the user interface 30. The screenshot or creator window of the user interface 30 includes along the left hand side thumbnails of each of the panoramic images 130 in the current project. It also identifies the selected active panoramic image P1. The selected active panoramic image P1 is also displayed in the main window W1. Located across the top of the screenshot are a number of buttons which include the view this project button 70, the view contacts button 150, the work on floorplans button 90, the view events button 140 and the see or view final virtuality button 700. Each of these buttons is related to the current project 70 and iteration 120. The button on the top right hand corner always shows the “View Final Virtuality” button 700 and the user 65 can click on this button 700 to view the virtual tour which has been created for that iteration 120. Likewise as each further iteration is created the button 700 will display each and every iteration and the virtual tour formed with those iterations 120.
Along the bottom of the user interface creator window are a number of further buttons which allow a user 65 to add a navigational hotspot 210, set a default view 280, disable a panorama 300, delete a panorama 710 and remove a navigational hotspot 270.
Another application of the present invention involves the user interface 30 being utilised as a location identifier for a user 65. The user 65 can take a photograph at their current location and upload that photograph to the user interface 30. The user interface 30 uses the image matching algorithm to match a saved panoramic image 130 with the photograph uploaded by the user 65. The user interface 30 determines the location of the matched images through saved GPS data and provides a panoramic image to the user 65 which contains navigational hotspots and information hotspots to identify information to the user 65 about their current location. This can include providing a virtual tour of the location. For example, if a user 65 is standing in front of a museum and takes a photograph of the museum, the user interface can then match that photograph with panoramic images stored on the server 20. Using the uploaded image to locate the user at the museum can allow the user interface
to provide a virtual tour of the museum to the user 65. This is beneficial in particular to the tourism industry.
The present invention is useful for a number of different applications and a number of different industries. The following are provided by way of exemplary uses only and should not be limited to only these following uses or industries. The first is the real estate industry, including both residential and commercial property industries, where a realistic 3D view of a home or property can be presented to interested purchasers. The virtual tour produced by the user interface 30 would include an interactive floorplan which showed highlighted regions of the property which are linked to panoramic images of the virtual tour. Within the virtual tour arrows or uniquely designed icons would indicate where each photograph was taken. Clicking on the arrows or icons shows the user where the camera was and which way the camera was pointing. The arrows or icons would lead the user around the virtual tour as though the user was immersed within the space.
Another application which benefits from the present invention is any object, item or physical space which can be displayed to show changes in that object or space over time. For example, an exhibition hall which has multiple uses can be displayed using the user interface in any number of different configurations. This could include the hall being dressed for a wedding, an exhibition, a conference including different seating plans and setup options.
Another industry which will benefit from the user interface is the travel and hospitality industries. The ability to provide an online virtual tour of a hotel showing different rooms, amenities and dining experiences will allow the user to experience a virtual stay in the property prior to making a decision to actually travel. Likewise, showing highlights of a location with the user immersed in that location is something which the travel industry can utilise with the user interface.
Another industry which can take advantage of the present invention is the automotive industry. A user can take a virtual tour of a new car prior to
purchasing that vehicle and prior to actually going to see the vehicle at the showroom. Being able to take a tour of the interior and exterior of a car will give customers a feeling of actually being in the vehicle in the showroom.
The world of e-commerce is another industry which will benefit from the present invention. The display and promotion of products and an enhanced user experience will lead to increased sales of those products. Being able to display in a virtual tour a 360 degree panorama of a product will provide customers with a realistic view of the product without actually seeing or being able to handle the product. This will also add to the popular online shopping sites and the visual presentation of a product. Customers can not only see the products displayed in 3D mode, but also experience the real store shopping surroundings, thus increasing customer confidence.
Another industry which will benefit greatly from the present invention is the construction industry. The present invention provides the ability to view a construction site from day one through to completion by showing 360° iterations of a space over and over again to show visual change in the location. A construction site, which keeps changing and evolving every week, can be shown as 360° virtual tour construction updates while allowing interactions with a floorplan and hotspots that allow navigation within the space. Changes over time are shown when a person selects a different iteration; different iterations are recorded at different times and/or dates.
Within the construction industry such projects as the construction of a road or a bridge can be recorded at each stage and each stage linked to form a number of iterations of the project to give an overall investigation and evaluation of the quality, visual effect and the surroundings of the construction. A visual record of the construction of any project over time can be linked to form a virtual tour of the project using the user interface of the present invention. A 360 degree tour linked over time can be used as a visual record of project analysis and research.
Each stage of the residential construction of a house can be documented to provide a visual tour of the project over time. Recording each stage through the use of a simple omnidirectional camera, such as the Ricoh Theta, allows the user to produce linked iterations each containing a plurality of panoramic images to produce a visual representation of the construction of the property over time. The present invention maintains perspectives as a user jumps between multiple locations and multiple iterations. Each interactive element a user added into the first iteration is reused for the next iteration by having the user simply align panoramas visually. The present invention ensures that the perspective is maintained between multiple panoramas, even when the heading or north offset of panoramas has no physical reference point, like a map, or GPS coordinates. Optionally, the present invention also provides the ability to align north offsets of panoramas to a physical reference point like a map or a floorplan.
ADVANTAGES
The present invention provides computer implemented methods for providing a user interface and in particular, to a user interface which allows a user to create a virtual tour of a space. This present invention also extends to a user interface and process for creating iterations of a space which show the visual change in the space over time.
The present invention ensures that as a user jumps between multiple locations and multiple iterations the visual perspective is maintained. Allowing the user to interact with the user interface to visually align panoramic images of a space ensures that the perspective is maintained. Likewise the image matching algorithm provides the automatic visual alignment of the panoramic images through the comparison of similarities and differences in those images. It is also important that the present invention maintains the perspective between multiple panoramas, even when the heading or north offset of panoramas has no physical reference point, like a map, or GPS coordinates.
The ability to create multiple iterations of a space and to transpose any interactive elements a user added into the first iteration, reusing them for the next iteration by having the user simply align panoramas visually, makes the process of iterating much faster than it would otherwise be.
Adding a machine learning algorithm, which would recognise the similarities and differences between different spaces to automatically generate hotspots and also align iterative visuals using only the panoramic images provided by the omnidirectional camera, provides a truly unique user interface which will provide significant advances for virtual tours of any space in any domain.
The present invention also provides the advantage of having a cloud-based user interface creation tool which allows a user to customise the panoramic viewer's interface, branding and colour schemes.
The present invention also provides for mirror hotspots to be created in panoramic images. For example, when you add a hotspot that goes from panorama A to panorama B, the user interface algorithm automatically adds a hotspot in panorama B that connects to panorama A.
The bridge which forms an integral component of the present invention provides the user with the ability to easily create multiple iterations of many panoramic images and seamlessly link each image and each iteration. Because the present invention is an iterative method, the bridge provides the advantage of being able to remove an iteration from the multiple iterations without causing a break between the remaining iterations. A user can still, after the iteration has been removed, seamlessly move through each iteration. The present invention provides an editable iterative process such that once you have connected a panoramic image from a previous iteration to the next iteration, it can be removed or deleted without affecting the overall virtual tour.
The present invention and in particular the bridge allows the user to align panoramas across iterations. The present invention does not assume the gyro
reading from the image's metadata to be its true heading. Most omnidirectional cameras available today have a heading which is incremented in 36 degree increments, which leaves the alignment between images way out. The present invention allows the control of the heading of individual panoramic images across iterations.
The present invention also provides the advantage of being able to have navigational hotspots transcending through iterations. If you added three navigational hotspots to a first panoramic image in the first iteration, and then you added the second iteration and connected the first panoramic image from the second iteration to the first iteration, the navigational hotspots created in the first iteration will be generated in the first panoramic image of the second iteration.
Analysis is provided to understand where viewers are looking in the panorama to gauge interest in specific panoramas across multiple iterations and then to develop charts that provide the creator with insights on user behaviour within the project.
The user interface also allows viewers to be in a geo-spatial canvas (a physical location) and point their phone camera to take a picture of the physical features of that location and upload that image to the user interface to allow the viewer to be transported to that location on their phone with all sorts of information marked up to convey deeper context and understanding of the location. The user interface uses the image matching algorithm or AI to match a saved panoramic image with the photograph uploaded by the viewer. The user interface determines the location of the matched images through saved GPS data and provides a panoramic image to the viewer which contains navigational hotspots and information hotspots to identify information to the viewer about their current location.
VARIATIONS
It will be realized that the foregoing has been given by way of illustrative
example only and that all other modifications and variations as would be apparent to persons skilled in the art are deemed to fall within the broad scope and ambit of the invention as herein set forth.
In this specification, the term panoramic image denotes a combined image generated by connecting a series of images shot in a plurality of directions. A panoramic image may include an image or collection of images having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. Some panoramas may provide a 360-degree view of a location.
In this specification, adjectives such as first and second, left and right, top and bottom, and the like may be used solely to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. Where the context permits, reference to an integer or a component or step (or the like) is not to be interpreted as being limited to only one of that integer, component, or step, but rather could be one or more of that integer, component, or step etc.
The above description of various embodiments of the present invention is provided for purposes of description to one of ordinary skill in the related art. It is not intended to be exhaustive or to limit the invention to a single disclosed embodiment. As mentioned above, numerous alternatives and variations to the present invention will be apparent to those skilled in the art of the above teaching. Accordingly, while some alternative embodiments have been discussed specifically, other embodiments will be apparent or relatively easily developed by those of ordinary skill in the art. The invention is intended to embrace all alternatives, modifications, and variations of the present invention that have been discussed herein, and other embodiments that fall within the scope of the above described invention.
In the specification the term “comprising” shall be understood to have a broad meaning similar to the term “including” and will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps. This definition also applies to variations on the term “comprising” such as “comprise” and “comprises”.

CLAIMS

1. A computer-implemented method comprising:
providing a user interface for a virtual tour application program specifying
details and uploading user information of a project to a server;
creating a first iteration of the project at a first time and/or date;
receiving through the user interface, user-input specifying information regarding or relating to the project;
receiving through the user interface a plurality of panoramic images of the project and adding the plurality of panoramic images to the first iteration;
saving the first iteration to the server; and
generating a multi-pane web page and/or generating an application programming interface return call in response to the user interface which is transmitted via the Internet and viewed by a user using a web browser.
2. A method as claimed in claim 1, wherein the user interface is a web-based or a cloud-based user interface.
3. A method as claimed in claim 1 or claim 2, wherein the user-input information comprises any one or more of the following:
(i) a location of the project;
(ii) at least one contact for the project if different from the user;
(iii) at least one event related to the project;
(iv) any project branding; or
(v) any information that adds context to the project, including but not only limited to audio, video and floorplans.
4. A method as claimed in claim 1, wherein the user information comprises user details and/or company details.
5. A method as claimed in claim 1, wherein each one of the plurality of panoramic images is recorded with an original file creation time and date stamp and each image is saved on the server in time and date order starting from an earliest time and date to a latest time and date.
6. A method as claimed in claim 5, wherein each one of the plurality of panoramic images is recorded using an omnidirectional camera or created by a three-dimensional software application.
7. A method as claimed in claim 1, wherein creating the first iteration at the first time and/or date of the project is performed manually by a user interacting with the user interface to produce the first iteration.
8. A method as claimed in claim 7, wherein creating the first iteration at the first time and/or date by the user further comprises the steps of:
(i) choosing a first panoramic image from the plurality of panoramic images; and (ii) rendering the selected panoramic image in a pane of the multi-pane web page.
9. A method as claimed in claim 8, wherein creating the first iteration further comprises the user manually adding at least one hotspot to the first panoramic image to assist in an alignment of the images to produce the virtual tour.
10. A method as claimed in claim 8 or claim 9, wherein adding the at least one hotspot comprises the steps of:
(i) rotating the first panoramic image to a position to add the hotspot;
(ii) locating the hotspot on the first panoramic image;
(iii) saving the co-ordinates of the hotspot, the co-ordinates including the location, time and direction of the hotspot; and
(iv) zooming into the location of the hotspot on the first panoramic image.
11. A method as claimed in any one of claims 8 to 10, wherein creating the first iteration further comprises selecting a further panoramic image to link to the
first panoramic image.
12. A method as claimed in claim 11, wherein selecting the further panoramic image comprises the steps of:
(i) selecting the date and time information for each of the remaining plurality of panoramic images;
(ii) calculating a time difference of each of the remaining plurality of panoramic images with respect to the time and date of the first panoramic
image;
(iii) creating a list of the remaining panoramic images and the corresponding time differences and displaying the list by overlaying the list over the first panoramic image in the pane of the multi-pane web page; and
(iv) selecting from the list the further panoramic image based on a shortest time difference calculated with respect to the first panoramic image.
13. A method as claimed in claim 11 or claim 12, wherein linking the first panoramic image to the further panoramic image comprises the steps of:
(i) rendering the further panoramic image in an overlapping window over the pane of the first panoramic image;
(ii) aligning the first and further panoramic images by aligning a visible perspective of both the first image and the further image by rotating the further image;
(iii) checking the visible perspective is maintained between the first image and the further image;
(iv) adding at least one corresponding hotspot to the further panoramic image;
(v) saving a heading of the first panoramic image on the server once the user is satisfied that the alignment is correct and the first and further panoramic
images and the hotspots in each image are linked in the first iteration;
(vi) updating the further panoramic image as the first panoramic image;
(vii) linking each one of the remaining panoramic images to the first iteration by performing steps (i) to (vi) until each further panoramic image is linked to form the virtual tour; and
(viii) saving the first iteration to the server.
14. A method as claimed in any one of claims 8 to 13, further comprising automatically adding a mirror hotspot in the further panoramic image that connects the further panoramic image to the first panoramic image.
15. A method as claimed in claim 10, wherein rotating the first panoramic image further comprises rotating the first panoramic image to orient the first panoramic image with respect to a physical feature such as those on a map.
16. A method as claimed in claim 10, wherein rotating the first panoramic image further comprises rotating the first panoramic image to orient the first panoramic image with respect to the compass heading north.
17. A method as claimed in claim 8, wherein creating the first iteration further comprises an image matching algorithm which automatically adds hotspots to the plurality of panoramic images to assist in an alignment of the images to produce the virtual tour.
18. A method as claimed in claim 17, wherein the image matching algorithm identifies similarities between the plurality of panoramic images and based on the similarities automatically adds at least one hotspot that connects each one of the plurality of panoramas to each other.
19. A method as claimed in claim 17 or claim 18, wherein the image matching algorithm performs the following steps:
(a) selecting from the saved original file creation time and date a first panoramic image based on the earliest time and date of the plurality of panoramic images, the first panoramic image having a first heading;
(b) scanning the first panoramic image for features by splitting the first panoramic image into discrete portions and analysing each discrete portion within the first image;
(c) saving each discrete portion with features on the server as an object with a defined size, form and shape;
(d) selecting from the saved original file creation time and date a further panoramic image based on a shortest time difference calculated from the first panoramic image;
(e) searching for the saved objects within the further panoramic image to identify any matching objects;
(f) comparing the size of the matching objects from the first and further images to determine a difference in each object's size;
(g) identifying the objects with a largest difference in size, the largest difference in size showing a direction of motion within the first and further
panoramic images;
(h) updating the first heading as per the direction of motion;
(i) adding at least one hotspot in the direction of motion in the first panoramic image;
(j) repeating steps (d) to (i) for each further panoramic image of the plurality of panoramic images in the first iteration; and
(k) saving the first iteration to the server.
20. A method as claimed in any one of claims 17 to 19, wherein the image matching algorithm further comprises in step (c) removing any saved objects
which are a matching object within the saved objects to remove any duplicate saved objects.
21. A method as claimed in any one of claims 17 to 20, wherein the image matching algorithm automatically adds a mirror hotspot in the further panoramic
image that connects the further panoramic image to the first panoramic image.
22. A method as claimed in any one of claims 17 to 21, wherein the discrete portions have a size which varies dependent upon the size of the image.
23. A method as claimed in claim 22, wherein the size of the discrete portions is approximately a 10 pixel square or a user defined size for object identification.
24. A method as claimed in any one of claims 17 to 23, wherein the direction of motion shows the direction in which a photographer or the omnidirectional camera is moving from the first panoramic image through to a last panoramic image of the plurality of panoramic images in the first iteration.
25. A method as claimed in any one of claims 17 to 24, wherein the first heading comprises a pitch, a yaw and a horizontal field of view of the first panoramic image.
26. A method as claimed in claim 8, wherein creating the first iteration further comprises an image matching algorithm for automatically creating the virtual tour using 3D mesh structures of the first iteration, the 3D mesh structures allowing the user to browse the plurality of panoramic images and allowing the plurality of panoramic images to be located within a co-ordinate based system.
27. A method as claimed in claim 26, wherein the image matching algorithm for creating the virtual tour using the 3D mesh structures comprises:
receiving through the user interface a plurality of panoramic images of a 3D scene;
reconstructing, by the image matching algorithm, geometry of a plurality of 3D bubble-views from the panoramic images, wherein reconstructing includes:
using a structure from motion framework for camera localisation;
generating a 3D surface mesh model of the scene using multi-view stereo via cylindrical surface sweeping for each bubble-view, wherein the cylindrical surface sweeping quantizes the scene with multiple depth surfaces with respect to a bubble view center and hypothesizes a depth of each light ray to be intersecting with one of the depth surfaces, wherein an intersecting point of each light ray is projected on each depth surface, and thereafter the cylindrical surface sweeping performs forward projection to find correspondences across multiple cameras; and
registering multiple 3D bubble-views in a common coordinate system, wherein registering multiple 3D bubble-views in a common coordinate system comprises registering partial images from different bubble-views to form a new coordinate system, and estimating a relative pose from each bubble-view to map images from each bubble-view to the new coordinate system; and
displaying the surface mesh models.
2018203909 02 Jun 2018
28. A method as claimed in claim 26 or claim 27, wherein reconstructing further comprises refining the 3D surface mesh model of each bubble-view using a depth optimisation technique that utilises a smoothness constraint to resolve issues caused by textureless regions of the scene.
29. A method as claimed in any one of the preceding claims, further comprising adding at least one information hotspot to any one of the plurality of panoramic images to identify any one or more of the features within the panoramic image, using any one of the following:
(a) an image;
(b) a video;
(c) a title;
(d) a hyperlink; or (e) a description.
30. A method as claimed in any one of the preceding claims, further comprising the user adding at least one floorplan to the project to be displayed in another pane of the multi-pane web page.
31. A method as claimed in claim 30, wherein the at least one floorplan comprises a map or site layout or anything with a spatial significance, the map being a view from above, of the relationships between rooms, spaces and other physical features at one level of the project.
32. A method as claimed in claim 30 or claim 31, wherein the floorplan further comprises at least one active region within the floorplan.
33. A method as claimed in any one of claims 30 to 32, wherein each one of the active regions is linked to at least one of the plurality of panoramic images,
allowing the user to select one of the active regions and the corresponding panoramic image will be displayed within the pane of the multi-pane web page.
34. A method as claimed in any one of the preceding claims, further comprising allowing the use of image metadata in any one of the plurality of
panoramic images to automatically locate the image on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
35. A method as claimed in any one of the preceding claims, further comprising the user creating at least one further iteration of the project at a different time and/or date to the first iteration.
36. A method as claimed in claim 35, wherein the at least one further iteration allows the user to show changes in the project over time.
37. A method as claimed in claim 35 or claim 36, wherein each one of the further iterations is created in the same way in which the first iteration is created and hotspots are manually added in accordance with claims 8 to 16.
38. A method as claimed in claim 35 or claim 36, wherein each one of the further iterations is created in the same way in which the first iteration is created and hotspots from the first iteration are automatically added to each one of the further iterations by the user interacting with a bridge algorithm.
39. A method as claimed in claim 38, wherein the bridge algorithm comprises the steps of:
(i) retrieving all of the plurality of panoramic images of the further iteration from the server and adding the images to a first list and displaying the
first list on the multi-pane web page; and
(ii) retrieving all of the plurality of panoramic images of the first iteration from the server and adding the images to a second list and rendering the second list as a hidden list on the multi-pane web page.
40. A method as claimed in claim 38 or claim 39, wherein the bridge algorithm further comprises the steps of:
(i) allowing the user to select a first panoramic image from the first list and displaying that image on the pane of the multi-pane web page;
(ii) allowing the user to rotate the selected first panoramic image to a desired visual angle;
(iii) allowing the user to select the iteration connect button which reveals the hidden second list to the user;
(iv) allowing the user to select from the second list a further panoramic image which has the closest similar visual features to the selected first panoramic image from the first list;
(v) overlaying the further panoramic image selected over the selected first panoramic image;
(vi) allowing the user to rotate the further panoramic image to align the similar visual features with those in the first panoramic image;
(vii) calculating an updated heading of the first panoramic image with respect to the rotated further panoramic image and saving the updated heading on the server; and
(viii) performing steps (i) to (vii) for each one of the plurality of panoramic images of the further iteration.
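By way of non-limiting illustration of step (vii), the sketch below shows one plausible reading of the heading calculation: the updated heading is the first panorama's stored heading adjusted by the rotation the user applied to bring the overlaid further panorama into alignment. The `pairs` structure and the `save_heading` callable are hypothetical, not interfaces defined by the claims.

```python
# Minimal sketch of step (vii): derive an updated heading from the rotation the
# user applied to align the further-iteration panorama, then persist it.
# `pairs` and `save_heading` are hypothetical stand-ins for the claimed steps.

def updated_heading(first_heading_deg: float, user_rotation_deg: float) -> float:
    """Heading of the first-iteration panorama once the overlaid further
    panorama has been rotated into visual alignment (degrees, 0-360)."""
    return (first_heading_deg + user_rotation_deg) % 360.0

def bridge_iteration(pairs, save_heading):
    # `pairs` holds one entry per further-iteration panorama, as assembled in
    # steps (i)-(vi): (panorama_id, current heading, rotation applied by user).
    for panorama_id, heading_deg, rotation_deg in pairs:
        save_heading(panorama_id, updated_heading(heading_deg, rotation_deg))
```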
41. A method as claimed in claim 39 or claim 40, wherein each one of the further iterations once linked by the bridge algorithm to a previous iteration
shows the created hotspots and any information hotspots in any one of the plurality of panoramic images in each one of the further iterations, such that any one hotspot can carry through each iteration.
42. A method as claimed in any one of claims 39 to 41, wherein calculating the updated heading of the first panoramic image with respect to the rotated further panoramic image and saving the updated heading on the server to align each one of the plurality of panoramic images across iterations allows the user to control the heading of individual panoramas across iterations and therefore to seamlessly maintain a visible perspective throughout each iteration.
43. A method as claimed in claim 38 or claim 39, wherein the bridge algorithm further comprises using the image matching algorithm of claims 18 to 25 to automatically find visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of the further
iteration and automatically aligning the panoramic images in the further iteration with the first iteration.
44. A method as claimed in claim 43, wherein each one of the further
iterations is automatically compared with a previous iteration to find visual similarities between the plurality of panoramic images of each further iteration and the plurality of panoramic images of the previous iteration and to automatically align the panoramic images in each further iteration with the previous iteration.
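Claims 43 and 44 automate this alignment by matching visual features across iterations. By way of non-limiting illustration only, the Python sketch below estimates a yaw offset between two equirectangular panoramas with OpenCV ORB features; it is one possible matching approach and is not necessarily the image matching algorithm of claims 18 to 25.

```python
# Minimal sketch: estimate the yaw offset between two equirectangular panoramas
# by matching ORB features and converting the median horizontal shift to degrees.
# One possible approach only; not necessarily the claimed image matching algorithm.
import cv2
import numpy as np

def estimate_yaw_offset_deg(prev_path: str, curr_path: str) -> float:
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    width = prev.shape[1]
    # Horizontal shift per match, wrapped to the panorama width.
    shifts = [(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0]) % width
              for m in matches]
    median_shift = float(np.median(shifts))
    return median_shift / width * 360.0  # degrees of yaw to rotate the new panorama
```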
45. A method as claimed in any one of claims 35 to 44, further comprises analysing changes in visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of each one of the further iterations using an artificial intelligence process to infer differences
and changes in a location across multiple iterations to reach conclusions based on common understanding.
46. A method as claimed in claim 45, wherein the artificial intelligence process comprises any one or more of the following:
(a) a machine learning algorithm; or
(b) a pattern recognition algorithm; or
(c) a machine vision algorithm.
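By way of non-limiting illustration of the machine vision option, the Python sketch below scores per-region change between two aligned panoramas using structural similarity (SSIM); a deployed artificial intelligence process would likely layer learned models on top of a measure such as this.

```python
# Minimal sketch: flag regions that changed between two aligned panoramas using
# structural similarity (SSIM). A stand-in for the machine vision step only.
import cv2
from skimage.metrics import structural_similarity

def changed_regions(prev_path, curr_path, grid=(8, 16), threshold=0.75):
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.resize(curr, (prev.shape[1], prev.shape[0]))  # match sizes

    rows, cols = grid
    h, w = prev.shape
    flagged = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            score = structural_similarity(prev[ys, xs], curr[ys, xs])
            if score < threshold:          # low similarity => likely change
                flagged.append((r, c, round(float(score), 3)))
    return flagged  # grid cells where the scene appears to have changed
```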
47. A method as claimed in any one of claims 35 to 46, further comprising allowing the removal of any one of the further iterations without removing any of the linked hotspots from any other one of the remaining further iterations.
48. A method as claimed in any one of claims 35 to 47, wherein a link to each one of the iterations is displayed in the another pane of the multi-pane
web page.
49. A method as claimed in any one of claims 35 to 48, further comprising allowing the use of image metadata in any one of the plurality of panoramic images to automatically locate the image on a map, such as by locating the
image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
50. A method as claimed in any one of the preceding claims, further
comprises providing an application programming interface (API) which allows a third party technology to interact with and use the web based user interface or parts thereof.
51. A method as claimed in claim 50, wherein the third party technologies comprise any one or more of the following:
(i) a floorplan design program;
(ii) a web mapping service;
(iii) any technology which provides a plurality of panoramic images sourced over or at different times and/or dates; or
(iv) any technology which requires the iterative sorting of panoramic images over any domain.
52. A method as claimed in claim 50 or claim 51, wherein the API provides a sorted and linked plurality of panoramic images which are connected to any one
or more iterations of the plurality of panoramic images at a different time and/or date, and allowing, through the use of image metadata in any one of the plurality of panoramic images, the image to be automatically located on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
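By way of non-limiting illustration of claims 50 to 52, the Flask sketch below exposes the sorted and linked panoramas of one iteration to a third party over HTTP; the route, record shape and in-memory store are assumptions for illustration only.

```python
# Minimal sketch of an API endpoint exposing the sorted, linked panoramas of an
# iteration. The Flask route, record shape and in-memory store are assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import List
from flask import Flask, jsonify

@dataclass
class Panorama:                      # illustrative record only
    id: str
    captured_at: datetime
    heading_deg: float
    linked_ids: List[str]
    lat: float
    lon: float

# Hypothetical in-memory store standing in for the server-side database.
PANORAMAS = {
    ("demo-project", "iteration-1"): [
        Panorama("pano-001", datetime(2018, 6, 2, 9, 30), 12.0, ["pano-002"], -33.87, 151.21),
        Panorama("pano-002", datetime(2018, 6, 2, 9, 42), 275.5, ["pano-001"], -33.87, 151.21),
    ],
}

app = Flask(__name__)

@app.route("/api/projects/<project_id>/iterations/<iteration_id>/panoramas")
def list_panoramas(project_id, iteration_id):
    panos = PANORAMAS.get((project_id, iteration_id), [])
    return jsonify([
        {
            "id": p.id,
            "captured_at": p.captured_at.isoformat(),   # preserves time/date ordering
            "heading_deg": p.heading_deg,                # alignment across iterations
            "linked_panoramas": p.linked_ids,            # hotspot connections
            "gps": {"lat": p.lat, "lon": p.lon},          # from embedded metadata
        }
        for p in panos
    ])
```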
53. A method as claimed in any one of the preceding claims, further comprises providing an analysis tool that provides consistent, transparent, and efficient analysis of the project to understand where viewers are looking in the panoramic images to gauge interest in specific panoramas across multiple
iterations and then developing at least one graphical representation which provides the user with insights on user behavior within the project.
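By way of non-limiting illustration of claim 53, the sketch below aggregates recorded view headings into a per-panorama histogram, which could back a graphical representation such as a heat strip; the (panorama_id, heading) event shape is an assumption.

```python
# Minimal sketch: aggregate recorded view headings into per-panorama histograms.
# The (panorama_id, heading_deg) event shape is an assumption for illustration.
from collections import defaultdict

def heading_histograms(view_events, bin_size_deg=30):
    """view_events: iterable of (panorama_id, heading_deg) samples captured
    while viewers look around a panorama."""
    bins_per_pano = defaultdict(lambda: defaultdict(int))
    for panorama_id, heading_deg in view_events:
        bucket = int(heading_deg % 360 // bin_size_deg) * bin_size_deg
        bins_per_pano[panorama_id][bucket] += 1
    return {pid: dict(sorted(b.items())) for pid, b in bins_per_pano.items()}

# Example: most attention falls between 90 and 120 degrees in pano-001.
events = [("pano-001", 95.0), ("pano-001", 102.3), ("pano-001", 310.0)]
print(heading_histograms(events))   # {'pano-001': {90: 2, 300: 1}}
```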
54. A method as claimed in any one of the preceding claims, further comprises allowing the user to pixelate or blur out faces or vehicle license or
number plates as required to censor each one of the plurality of panoramic images.
55. A method as claimed in any one of claims 1 to 53, further comprising
providing an algorithm which allows for automatic recognition of items which require censorship such as faces or vehicle number plates in the plurality of panoramic images, wherein the automatic recognition algorithm automatically pixelates or blurs out faces or number plates within the images.
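By way of non-limiting illustration of claim 55, the Python sketch below pixelates detected faces using OpenCV's bundled frontal-face Haar cascade; number-plate detection would need a separate detector, and the distortion of equirectangular panoramas near the poles is ignored in this sketch.

```python
# Minimal sketch: automatically pixelate detected faces in a panoramic image.
# Uses OpenCV's bundled frontal-face Haar cascade; plate detection would need a
# separate detector, and equirectangular distortion is not handled here.
import cv2

def pixelate_faces(in_path: str, out_path: str, block: int = 16) -> int:
    image = cv2.imread(in_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # Downscale then upscale with nearest-neighbour to pixelate the region.
        small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
        image[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)

    cv2.imwrite(out_path, image)
    return len(faces)   # number of censored regions
```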
56. A method as claimed in any one of the preceding claims, wherein the user interface can be utilized in any one or more of the following industries:
(i) real estate industry;
(ii) travel and hospitality industries;
(iii) education;
(iv) automotive industry;
(v) e-commerce industry;
(vi) construction industry;
(vii) 3D graphics industry;
(viii) warehouse and storage industries;
(ix) disaster management and risk assessment industries;
(x) traffic management including parking and city resource management industries; or
(xi) any industry or domain which can provide a plurality of panoramic images sourced over different times and/or dates which can be iteratively
sorted and connected to form a virtual tour.
57. A method as claimed in any one of the preceding claims, further comprising allowing a user to register to use the user interface.
58. A method as claimed in claim 57, wherein allowing the user to register comprises the steps of:
(i) entering the user’s details including any company or organization details into a user’s detail file;
(ii) choosing a payment and/or billing plan;
(iii) allowing the user to invite other users from within their organization to use the interface and setting those other users’ access rights or levels; and
(iv) providing login details to the user and other users.
59. A method as claimed in claim 57 or claim 58, wherein the user once registered can login and use the user interface, wherein logging in comprises the steps of:
(i) receiving at the server a request from a requesting computer to login the user;
(ii) authenticating the user at the server, and upon authenticating the user, retrieving from a database stored on the server the user’s detail file, wherein the user’s detail file includes any user’s preference for configuring the user interface including any user’s branding and/or company branding; and
(iii) sending to the requesting computer the user’s detail file, wherein the preference file contains information to allow the requesting computer to implement and configure the user interface by directing output on the requesting computer to the user interface component that processes the output to provide the user interface to the user.
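By way of non-limiting illustration of claim 59, the Flask sketch below authenticates a login request and returns the stored detail file used to configure the user interface; the credential check and the user store are placeholders only, not the claimed implementation.

```python
# Minimal sketch of the login exchange: authenticate the user, then return the
# stored detail/preference file used to configure the interface.
# The credential check and user store are placeholders for illustration only.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

USERS = {   # hypothetical stand-in for the server-side database
    "user@example.com": {
        "password": "demo-only",                     # never store plaintext in practice
        "detail_file": {
            "organisation": "Example Pty Ltd",
            "branding": {"logo_url": "https://example.com/logo.png"},
            "ui_preferences": {"default_pane_layout": "two-pane"},
        },
    },
}

@app.route("/api/login", methods=["POST"])
def login():
    body = request.get_json(force=True)
    user = USERS.get(body.get("email"))
    if user is None or user["password"] != body.get("password"):
        abort(401)                                   # authentication failed
    # On success, send the detail file so the client can configure the UI.
    return jsonify(user["detail_file"])
```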
60. A method as claimed in any one of claims 57 to 59, further comprising allowing the user or other users to create a project and to create a first iteration in accordance with claims 1 to 34, and generating a virtual tour on a multi-pane web page in response to the user interface which is transmitted via the Internet and viewed by the user using a web browser.
61. A method as claimed in any one of claims 57 to 60, further comprises the step of allowing the user or other users to create a further iteration in accordance with any one of claims 35 to 49 and generating an iterative virtual tour on a multi-pane web page in response to the user interface which is
transmitted via the Internet and viewed by the user using a web browser which shows changes over time of a space.
62. A system for creating, managing and publishing an interactive virtual tour, the system comprising:
a user interface;
a client device having one or more processors, memory, and one or more programs stored in the memory and configured for execution by the one or more processors, the one or more programs comprising instructions for generating and displaying the user interface on a display;
a web server for allowing the client device to access and store the interactive virtual tour, virtual tour data, information and commands and for generating web pages for display in response to commands from the user interface; and
a communications network for connecting the client device and the server and for displaying the user interface.
63. A web based user interface comprising:
a panorama data acquisition unit implementing means of capturing panoramic data and saving the panoramic data to a server for further processing;
a package generator adapted to generate virtual tour packages containing the panoramic data, commands and virtual tour data;
a viewing engine responsive to the panoramic data and virtual tour packages and implementing means for perspective correction of, and user interaction with, said panoramic data and virtual tour data when necessary;
a control engine adapted to facilitate interaction with the panoramic data and virtual tour data, wherein the control engine is connected operatively to and communicates bi-directionally with the viewing engine, renders representative information about all or parts of the virtual tour, permits a particular portion to be selected from the virtual tour and sends signals to the viewing engine that cause the viewing engine to permit the interactive navigation of the selected portion of the virtual tour, wherein the control engine also indicates or causes to be indicated what portion of the virtual tour is currently selected and what sub-part of said selected portion of the virtual tour is currently rendered, wherein the control engine is responsive to user input and/or commands from the viewing engine and is in turn capable of modifying the representative information about all or parts of the virtual tour in response to the user input and/or said commands from the viewing engine and is capable of communicating
information indicative of such externally induced modifications to the user and/or the viewing engine; and
a display means for rendering output of the viewing engine, control engine, package generator and panoramic data acquisition unit.
64. A computer system for providing a user interface of an application program having multiple panes, each pane providing access to functionality of the application program to create a virtual tour of a space, the computer system comprising:
a first component which displays a first pane of the user interface of the application program, the first pane of the first component allowing a first user to access a first function to:
register and enter details of the first user and/or organization details of a project, the details of the first user and/or organization details being saved to a server;
upload user-input specifying information relating to the project; and
upload a plurality of panoramic images of the project to create a first iteration of the project and save the first iteration to the server at a first time and/or date;
a second component that replaces the display of the first pane of the user interface by displaying a second function which allows the user to link and align each one of the panoramic images in the first iteration by adding at least one hotspot and manually rotating images to align visual perspectives; and
a third component that, upon receiving from the user a selection of a save iteration icon or button of the user interface of the application program, displays the virtual tour of the first iteration showing the linked and aligned plurality of panoramic images and in a second pane displays a first iteration button or icon.
65. A computer system as claimed in claim 64, further comprising a fourth component that displays in the second pane of the user interface a fourth function which allows the user to:
add a floorplan showing a map or site layout or anything with a spatial significance; and
add at least one active region within the floorplan which is linked to at least one of the plurality of panoramic images.
66. A computer system as claimed in claim 64 or claim 65, further comprising a fifth component that replaces the display of the first pane of the user interface by displaying a fifth function which allows the user to add a further iteration at a different time and/or date to the first iteration by adding a
further plurality of panoramic images of the project.
67. A computer system as claimed in any one of claims 64 to 66, further comprising a sixth component to create a bridge between the first and further iteration which allows a user to:
rotate and align each one of the further panoramic images from the further iteration with the corresponding panoramic image from the first iteration, or optionally, align all panoramic images of each iteration with a feature on a map/floorplan;
calculate an updated heading for each one of the first panoramic images with respect to each one of the rotated further panoramic images and save the updated heading to automatically align visible features in the panoramic images and automatically add a hotspot from the first iteration to the further iteration.
68. A computer system as claimed in any one of claims 64 to 67, further comprising a seventh component that, upon receiving from the user a selection of a save iteration icon or button of the user interface of the application program, displays the virtual tour of the second iteration showing the linked and aligned plurality of panoramic images and in a second pane displays a second iteration button or icon.
69. A computer system as claimed in any one of claims 64 to 68, further comprising repeating the steps of claims 66 to 68 for each new further iteration at a different time and/or date to the previous further iteration.
70. A computer system as claimed in any one of claims 64 to 69, further comprising generating a multi-pane web page in response to the user interface to display the virtual tour of the space, the virtual tour being transmitted via the Internet and viewed by a plurality of users using a web browser.
71. A computer system as claimed in any one of claims 64 to 70, wherein the user interface is a web based or a cloud based user interface.
72. A computer system as claimed in any one of claims 64 to 71, wherein the user-input information comprises any one or more of the following:
(i) a location of the project;
(ii) at least one contact for the project if different from the user;
(iii) at least one event related to the project;
(iv) any project branding; or
(v) any information that adds context to the project, including but not limited to audio, video and floorplans.
73. A computer system as claimed in any one of claims 64 to 72, wherein each one of the plurality of panoramic images is recorded with an
original file creation time and date stamp and each image is saved on a server in time and date order starting from an earliest time and date to a latest time and date.
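By way of non-limiting illustration of claim 73, the Python sketch below orders panoramas from earliest to latest capture time using the EXIF DateTimeOriginal tag when present, falling back to the file's modification time; it assumes JPEG images and omits error handling.

```python
# Minimal sketch: order uploaded panoramas from earliest to latest capture time,
# using EXIF DateTimeOriginal when present and file modification time otherwise.
import os
from datetime import datetime
from PIL import Image, ExifTags

DATETIME_ORIGINAL = next(k for k, v in ExifTags.TAGS.items() if v == "DateTimeOriginal")

def capture_time(path: str) -> datetime:
    exif = Image.open(path)._getexif() or {}
    stamp = exif.get(DATETIME_ORIGINAL)
    if stamp:                                   # EXIF format: "YYYY:MM:DD HH:MM:SS"
        return datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    return datetime.fromtimestamp(os.path.getmtime(path))

def sort_panoramas(paths):
    return sorted(paths, key=capture_time)      # earliest to latest
```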
74. A computer system as claimed in claim 73, wherein each one of the plurality of panoramic images is recorded with image metadata and the metadata is saved on the server, and the metadata is used to automatically locate the image on a map, such as by locating the image on a web mapping service using embedded global positioning system (GPS) co-ordinates in the image metadata.
75. A computer system as claimed in any one of claims 64 to 74, wherein each one of the plurality of panoramic images is recorded by an omnidirectional camera.
76. A computer system as claimed in any one of claims 64 to 75, wherein the second component further comprises rotating the panoramic images to orient the panoramic images with respect to a physical feature such as those on a map.
77. A computer system as claimed in any one of claims 64 to 75, wherein the second component further comprises rotating the panoramic images to orient the panoramic images with respect to the compass heading north.
78. A computer system as claimed in any one of claims 64 to 77, wherein the second component further comprises an image matching algorithm which automatically adds hotspots to the plurality of panoramic images by identifying similarities between the plurality of panoramic images and based on the similarities automatically adds at least one hotspot that connects each one of the plurality of panoramas to each other.
79. A computer system as claimed in any one of claims 64 to 78, wherein the sixth component to create the bridge further comprises using the image matching algorithm to automatically find visual similarities between the plurality
of panoramic images of the first iteration and the plurality of panoramic images of the further iteration and automatically aligning the panoramic images in the further iteration with the first iteration.
80. A computer system as claimed in claim 79, wherein each one of the further iterations is automatically compared with a previous iteration to find visual similarities between the plurality of panoramic images of each further iteration and the plurality of panoramic images of the previous iteration and to automatically align the panoramic images in each further iteration with the previous iteration.
81. A computer system as claimed in any one of claims 64 to 77, wherein the second component further comprises an image matching algorithm which automatically creates the virtual tour of the first iteration using a plurality of 3D mesh structures of the plurality of panoramic images, the 3D mesh structures
allowing the user to browse the plurality of panoramic images and allowing the plurality of panoramic images to be located within a co-ordinate based system.
82. A computer system as claimed in claim 78 or claim 79, further comprises
analysing changes in visual similarities between the plurality of panoramic images of the first iteration and the plurality of panoramic images of each one of the further iterations using an artificial intelligence engine.
83. A computer system as claimed in claim 82, wherein the artificial intelligence engine comprises any one or more of the following:
(a) a machine learning algorithm; or
(b) a pattern recognition algorithm; or
(c) a machine vision algorithm.
84. A computer system as claimed in any one of claims 64 to 83, further comprises an analysis tool that provides consistent, transparent, and efficient analysis of the project.
85. A computer system as claimed in any one of claims 64 to 84, further comprises a further component to allow a user to pixelate or blur out faces or vehicle license or number plates as required to censor each one of the plurality of panoramic images.
86. A computer system as claimed in any one of claims 64 to 84, further comprising an algorithm which allows for automatic recognition of items which require censorship such as faces or vehicle number plates in the plurality of panoramic images, wherein the automatic recognition algorithm automatically pixelates or blurs out faces or number plates within the images.
87. A computer system as claimed in any one of claims 64 to 86, wherein the user interface can be utilized in any one or more of the following industries:
(i) real estate industry;
(ii) travel and hospitality industries;
(iii) education;
(iv) automotive industry;
(v) e-commerce industry;
(vi) construction industry;
(vii) 3D graphics industry;
(viii) warehouse and storage industries;
(ix) disaster management and risk assessment industries;
(x) traffic management including parking and city resource management industries; or
(xi) any industry or domain which can provide a plurality of panoramic images sourced over different times and/or dates which can be iteratively sorted and connected to form a virtual tour.
AU2018203909A 2017-06-02 2018-06-02 A User Interface Abandoned AU2018203909A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2017902103A AU2017902103A0 (en) 2017-06-02 A User Interface
AU2017902103 2017-06-02

Publications (1)

Publication Number Publication Date
AU2018203909A1 true AU2018203909A1 (en) 2018-12-20

Family

ID=64662285

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2018203909A Abandoned AU2018203909A1 (en) 2017-06-02 2018-06-02 A User Interface

Country Status (1)

Country Link
AU (1) AU2018203909A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947650A (en) * 2021-09-30 2022-01-18 完美世界(北京)软件科技发展有限公司 Animation processing method, animation processing device, electronic equipment and medium
CN113947650B (en) * 2021-09-30 2023-04-07 完美世界(北京)软件科技发展有限公司 Animation processing method, animation processing device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
US11783409B1 (en) Image-based rendering of real spaces
US10593104B2 (en) Systems and methods for generating time discrete 3D scenes
US11113882B2 (en) Generating immersive trip photograph visualizations
US9542778B1 (en) Systems and methods related to an interactive representative reality
JP2023036602A (en) Augmented and virtual reality
RU2491638C2 (en) 3d content aggregation built into devices
US6972757B2 (en) Pseudo 3-D space representation system, pseudo 3-D space constructing system, game system and electronic map providing system
Kopf et al. Street slide: browsing street level imagery
US20120128205A1 (en) Apparatus for providing spatial contents service and method thereof
US20140181630A1 (en) Method and apparatus for adding annotations to an image
JP4153761B2 (en) 3D model space generation device, 3D model space generation method, and 3D model space generation program
Miles et al. Alternative representations of 3D-reconstructed heritage data
Bolkas et al. Creating a virtual reality environment with a fusion of sUAS and TLS point-clouds
Maiellaro et al. Digital data, virtual tours, and 3D models integration using an open-source platform
EP2936442A1 (en) Method and apparatus for adding annotations to a plenoptic light field
Pintore et al. Mobile mapping and visualization of indoor structures to simplify scene understanding and location awareness
Adorjan Opensfm: A collaborative structure-from-motion system
Cui et al. Fusing surveillance videos and three‐dimensional scene: A mixed reality system
KR101724676B1 (en) System for recording place information based on interactive panoramic virturl reality
Kim et al. Multimodal visual data registration for web-based visualization in media production
AU2018203909A1 (en) A User Interface
Krasić et al. Comparative analysis of terrestrial semi-automatic and automatic photogrammetry in 3D modeling process
Buckley et al. Virtual field trips: Experience from a global pandemic and beyond
Netek et al. From 360° camera toward to virtual map app: Designing low‐cost pilot study
Devaux et al. Increasing interactivity in street view web navigation systems

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period