AU2020277242A1 - Method and system for augmented reality - Google Patents

Method and system for augmented reality

Info

Publication number
AU2020277242A1
Authority
AU
Australia
Prior art keywords
virtual model
computing device
portable computing
processor
view
Prior art date
Legal status
Abandoned
Application number
AU2020277242A
Inventor
Troy Cavallaro
Current Assignee
CAVTEC Pty Ltd
Original Assignee
CAVTEC Pty Ltd
Priority date
Filing date
Publication date
Priority claimed from AU2012903240A external-priority patent/AU2012903240A0/en
Application filed by CAVTEC Pty Ltd filed Critical CAVTEC Pty Ltd
Priority to AU2020277242A priority Critical patent/AU2020277242A1/en
Publication of AU2020277242A1 publication Critical patent/AU2020277242A1/en

Abstract

A system and method of augmenting a virtual model and image data is provided. The method includes receiving a virtual model on a portable computing device, estimating a field of view of the portable computing device, anchoring the virtual model relative to the field of view, and rendering the virtual model on image data associated with the field of view. The virtual model includes at least an anchor point to a physical location and an orientation, and the virtual model is anchored using the anchor point and the orientation of the virtual model. The field of view of the portable computing device is based upon a location and an orientation of the portable computing device, and the virtual model is rendered on image data based upon the anchoring of the virtual model.

Description

TITLE METHOD AND SYSTEM FOR AUGMENTED REALITY
FIELD OF THE INVENTION The present invention relates to augmented reality. In particular, although not exclusively, the invention relates to augmented reality with 3D building models.
BACKGROUND TO THE INVENTION Augmented reality relates to the augmentation of real world data with virtual data, and can provide a greater user experience than presentation of the virtual data alone. Augmented reality applications can be implemented on portable computing devices, wherein a camera captures real world data, a processor overlays virtual data, and a data display presents the augmented real world and virtual data. Augmented reality applications can, for example, be used to view a room with a virtual piece of furniture in order to assist in the purchase of furniture, or be used to overlay textual information on an image of a city. In certain augmented reality applications of the prior art, 3D virtual data is overlaid, and movement of the portable computing device results in a change of the 3D virtual data, such as viewing angle, corresponding to a change in orientation of the device. This is typically performed by analysing a camera input, and detecting motion therefrom. In this case, well defined anchor points in the camera input are necessary, such as edges or other distinctive points. Articulated Naturality Web (ANW) is an example of such augmented reality. A disadvantage of many augmented reality applications is that significant input from the user is required. For example, a user may be required to manually place the virtual data in the model. In other augmented reality applications, the virtual data is displayed based upon a location and direction of the portable computing device. This can, for example, be information relating to peaks of mountains in a landscape overlaid on an image of the mountains. Based upon the location of the portable computing device, which may, for example, be retrieved using a Global Positioning System (GPS) sensor, and an orientation of the device, which may be retrieved from a compass, for example, the direction of each mountain can be determined and overlaid.
While these approaches do not require well defined anchor points, a problem with approaches based upon GPS positioning is that they do not perform well on objects that are close to the portable computing device. Due to parallax, coordinates from the GPS sensor may be sufficiently accurate when an object is far away, but for close objects the error may be significant. Furthermore, the GPS sensor may provide coordinates that fluctuate even though the portable computing device is stationary. This can cause the virtual data to flutter, levitate, drift, jump, or otherwise move with respect to the background, rather than remaining stationary. Accordingly, there is a need for improved augmented reality systems and methods.
OBJECT OF THE INVENTION It is an object of some embodiments of the present invention to provide consumers with improvements and advantages over the above described prior art, and/or overcome and alleviate one or more of the above described disadvantages of the prior art, and/or provide a useful commercial choice.
SUMMARY OF THE INVENTION According to one aspect, the invention resides in a method of augmenting a virtual model and image data, the method including: receiving, on a data interface of a portable computing device and from a remote server, a virtual model, the virtual model including at least an anchor point to a physical location and an orientation; estimating, by a processor, a field of view of the portable computing device based upon a location and an orientation of the portable computing device; anchoring, by the processor, the virtual model relative to the field of view of the portable computing device using the anchor point and the orientation of the virtual model; and rendering the virtual model on image data associated with the field of view based upon the anchoring of the virtual model. Preferably, the field of view corresponds to a viewing direction of a display screen of the portable computing device. Alternatively or additionally, the field of view of the portable computing device corresponds to a view captured by a camera of the portable computing device. Preferably, anchoring the virtual model comprises anchoring the anchor point to an object identified in the field of view of the portable computing device. Preferably, rendering the virtual model comprises rendering a plurality of sequential images, wherein the virtual model is rotated and/or translated between images of the plurality of sequential images based upon movement of the portable computing device and the anchoring of the virtual model. Preferably, the image data corresponds to data captured by a camera of the portable computing device. Preferably, the method further comprises: sending, on the data interface and to the remote server, a location of the portable computing device, wherein the virtual model is associated with the location of the portable computing device. Preferably, the anchor point is a Global Positioning System (GPS) coordinate. Preferably, the virtual model is a house model.
Preferably, the physical location is a plot of land. Preferably, the virtual model includes interior and exterior data, wherein the end user can navigate between the interior data and the exterior data. More preferably again, the user can navigate to an interior of the virtual model by moving to a location of the virtual model. Preferably, data describing a location of the portable computing device is filtered, by a processor, according to a predetermined pathway. Preferably, the method further includes: receiving, on the data interface and from the remote server, data associated with the physical location. More preferably, the data comprises price data. Preferably, the method further comprises: capturing, via a camera, a plurality of images corresponding to the field of view of the portable computing device; and generating, by the processor, a three dimensional model of the field of view; wherein rendering the virtual model comprises rendering the virtual model as a three dimensional object on the three dimensional model. Preferably, the method further comprises: receiving, on a data interface, a bid associated with the physical location; determining, by the processor, that the bid is a winning bid; associating, by the processor, the virtual model with the physical location based upon the winning bid. Preferably, the method further comprises presenting, on a data interface, a map of physical locations, wherein a bid on the physical location is made by selecting the physical location on the map. 
According to a second aspect, the invention resides in an augmented reality system including: a processor; a data interface coupled to the processor; a display screen coupled to the processor; and a memory coupled to the processor, the memory including computer readable instruction code for: receiving, on the data interface and from a remote server, a virtual model, the virtual model including at least an anchor point to a physical location and an orientation; estimating, by the processor, a field of view corresponding to the display screen based upon a location and an orientation of the display screen; anchoring, by the processor, the virtual model relative to the field of view using the anchor point and the orientation of the virtual model; rendering, by the processor, the virtual model on image data associated with the field of view based upon the anchoring of the virtual model; and presenting, on the display screen, the rendered virtual model and image data.
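The claimed pipeline (receive a virtual model, estimate a field of view from the device's location and orientation, anchor the model, render it onto the frame) can be outlined in code. The sketch below is illustrative only: the class and function names, the default 60° horizontal field of view, and the stubbed rendering step are assumptions of this example, not anything the specification prescribes.

```python
from dataclasses import dataclass

# Illustrative sketch; all names and defaults are assumptions of this example.

@dataclass
class VirtualModel:
    anchor_lat: float       # anchor point: latitude of the physical location
    anchor_lon: float       # anchor point: longitude
    orientation_deg: float  # model orientation, clockwise from due north

@dataclass
class FieldOfView:
    lat: float
    lon: float
    heading_deg: float      # viewing direction of the display screen
    hfov_deg: float         # horizontal angle covered by the camera

def estimate_field_of_view(location, heading_deg, hfov_deg=60.0):
    """Estimate the field of view from the device's location (e.g. a GPS
    fix) and orientation (e.g. a compass heading)."""
    lat, lon = location
    return FieldOfView(lat, lon, heading_deg, hfov_deg)

def augment(frame, model, fov):
    """Anchor the model relative to the field of view, then render it onto
    the camera frame.  Rendering is stubbed here: the frame is simply
    tagged with the anchored model pose."""
    pose = {
        "anchor": (model.anchor_lat, model.anchor_lon),
        # Yaw relative to the device so the model keeps its real-world
        # orientation (stored relative to due north) as the device turns.
        "yaw_deg": (model.orientation_deg - fov.heading_deg) % 360.0,
    }
    return {"image": frame, "model_pose": pose}
```

For example, a model oriented 90° from north, viewed by a device heading 30°, would be drawn at a relative yaw of 60°.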
BRIEF DESCRIPTION OF THE DRAWINGS To assist in understanding the invention and to enable a person skilled in the art to put the invention into practical effect, preferred embodiments of the invention are described below by way of example only with reference to the accompanying drawings, in which: FIG. 1 illustrates an augmented reality system, according to an embodiment of the present invention; FIG. 2 illustrates input data to the system of FIG. 1, according to an embodiment of the present invention; FIG. 3 illustrates a screenshot of a main user screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 4 illustrates a screenshot of an interior view screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 5 illustrates a screenshot of a details screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 6a illustrates predetermined pathways of the data of FIG. 2 and a plurality of location points as determined by a mobile computing device of the system of FIG. 1;
FIG. 6b illustrates the predetermined pathways of FIG. 6a and a plurality of adjusted location points, based upon the plurality of location points and the predetermined pathways of FIG. 6a; FIG. 7 illustrates input data to the system of FIG. 1, according to an alternative embodiment of the present invention; FIG. 8 illustrates a screenshot of a developer portal search screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 9 illustrates a screenshot of a developer portal search results screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 10 illustrates a screenshot of a developer portal home screen of the system of FIG. 1, according to an embodiment of the present invention; FIG. 11 illustrates a screenshot of a bidding screen of the system of FIG. 1, according to an alternative embodiment of the present invention; and FIG. 12 diagrammatically illustrates a computing device, according to an embodiment of the present invention. Those skilled in the art will appreciate that minor deviations from the layout of components as illustrated in the drawings will not detract from the proper functioning of the disclosed embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION Embodiments of the present invention comprise systems and methods for augmenting a virtual model and image data. Elements of the invention are illustrated in concise outline form in the drawings, showing only those specific details that are necessary to the understanding of the embodiments of the present invention, but so as not to clutter the disclosure with excessive detail that will be obvious to those of ordinary skill in the art in light of the present description.
In this patent specification, adjectives such as first and second, left and right, front and back, top and bottom, etc., are used solely to define one element or method step from another element or method step without necessarily requiring a specific relative position or sequence that is described by the adjectives. Words such as "comprises" or "includes" are not used to define an exclusive set of elements or method steps. Rather, such words merely define a minimum set of elements or method steps included in a particular embodiment of the present invention. The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that the prior art forms part of the common general knowledge. According to one aspect, the invention resides in a method of augmenting a virtual model and image data, the method including: receiving, on a data interface of a portable computing device and from a remote server, a virtual model, the virtual model including at least an anchor point to a physical location and an orientation; estimating, by a processor, a field of view of the portable computing device based upon a location and an orientation of the portable computing device; anchoring, by the processor, the virtual model relative to the field of view of the portable computing device using the anchor point and the orientation of the virtual model; and rendering the virtual model on image data associated with the field of view based upon the anchoring of the virtual model. Advantages of certain embodiments of the present invention include an ability to efficiently visualise a house on a particular block of land. The house can be visualised inside and out, appearing as if it had actually been built on the particular block of land. This enables a potential purchaser of a block of land and/or house to more efficiently compare houses and/or blocks of land.
Advantages of other embodiments include enabling a developer to efficiently advertise by placing their house proposal on a block of land virtually. This can be done using a bidding process, which provides efficient distribution of blocks among developers.
FIG. 1 illustrates an augmented reality system 100, according to an embodiment of the present invention. The augmented reality system 100 includes a server 105, a database 110 coupled to the server 105, and a portable computing device 115 coupled to the server 105. The portable computing device is coupled to the server 105 by a data communication network 130, such as the Internet. The portable computing device 115 includes a Global Positioning System (GPS) sensor (not shown), which interprets signals from a plurality of GPS satellites 125 in order to determine a position of the portable computing device 115. The augmented reality system 100 can, for example, be used to present virtual houses on an image of a real plot or block of land using augmented reality. In this case, augmented reality refers to the augmentation of the virtual house and the image of the plot. Accordingly, a developer can choose house colours, an orientation of the house, or other features which are particularly suited to the block of land, and use the augmented reality system 100 to sell such houses. A field of view of the portable computing device 115 is estimated based upon a location and an orientation of the portable computing device 115. The field of view corresponds to a viewing direction of a display screen 135 of the portable computing device 115, which in turn also corresponds to a view captured by a camera (not shown) of the portable computing device 115. Accordingly, the display screen 135 acts in a similar way to a viewfinder of the camera. The virtual model is then anchored relative to the field of view of the portable computing device 115 using an anchor point and an orientation of the virtual model. The anchor point can comprise a GPS coordinate and the orientation can comprise an orientation in which the virtual model is to be displayed relative to a known coordinate system, such as due north. Accordingly, the virtual model is associated with a well defined position.
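Estimating whether a GPS anchor point falls within the field of view implied by the device's fix and compass heading might look like the following. This is a minimal sketch: the equirectangular approximation (adequate over the scale of a building plot), the function names, and the 60° default field of view are assumptions of this example, not taken from the specification.

```python
import math

# Illustrative sketch; names and the flat-earth approximation are assumptions.

EARTH_R = 6_371_000.0  # mean Earth radius, metres

def bearing_and_distance(device, anchor):
    """Bearing (degrees clockwise from due north) and distance (metres)
    from the device's GPS fix to the model's anchor point, using an
    equirectangular approximation that is adequate over short ranges."""
    (lat1, lon1), (lat2, lon2) = device, anchor
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
    return bearing, EARTH_R * math.hypot(dlat, dlon)

def anchor_in_view(device, heading_deg, anchor, hfov_deg=60.0):
    """True when the anchor point lies inside the horizontal field of view
    implied by the device's location and compass heading."""
    bearing, _ = bearing_and_distance(device, anchor)
    # Signed angular offset in (-180, 180] between bearing and heading.
    off = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(off) <= hfov_deg / 2.0
```

An anchor roughly 100 m due north of the device is in view when the device faces north, and out of view when it faces east.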
The anchoring can further comprise anchoring the anchor point to an object identified in the field of view of the portable computing device
115, such as a tree, a building, or a combination thereof. The anchoring is performed to provide smooth rendering of the virtual model when navigating around the model. The virtual model is then rendered on image data associated with the field of view based upon the abovementioned anchoring. The image data can be captured by a camera of the portable computing device to provide a realistic augmented reality experience. The virtual model is rotated and/or translated between subsequent images based upon movement of the portable computing device 115 and the anchoring of the virtual model. As will be readily understood by a person skilled in the art, the 3D models need not be houses, and the image need not relate to a plot of land. Similarly, the image of the plot of land need not show any of the actual land, but instead can include only scenery around the plot such as views. The location of the portable computing device 115 is sent to the server 105, and at least one 3D house model is returned to the portable computing device 115 based upon the location. The database 110 includes a plurality of 3D house models, each associated with at least one physical location. Furthermore, the database 110 can include other data, such as roads, predetermined pathways, or other data associated with a physical location. According to certain embodiments, the location of the portable computing device 115 is refined using differential GPS (DGPS). In this case, a second GPS receiver is located at a known location and is used to compute an introduced error and calculate corrections to the GPS satellite measurements. The DGPS refinements can then be sent to the portable computing device 115 via a radio transmitter, and such service can be part of a subscription that is independent of the system 100. Alternatively or additionally, other systems can be used to determine or refine a location of the portable computing device 115.
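The per-frame update described above, where the anchored model is translated and rotated between subsequent images so that it stays fixed relative to the scene as the device moves, can be sketched as follows. The linear angle-to-pixel mapping and all names are assumptions of this illustration, not part of the specification.

```python
# Illustrative sketch; the linear angle-to-pixel mapping is an assumption.

def screen_x(bearing_to_anchor_deg, heading_deg, hfov_deg, image_width_px):
    """Horizontal pixel column at which the anchor point should be drawn on
    the current camera frame (0 = left edge).  Returns None when the anchor
    lies outside the horizontal field of view."""
    off = (bearing_to_anchor_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(off) > hfov_deg / 2.0:
        return None
    return (off / hfov_deg + 0.5) * image_width_px

def model_yaw(orientation_deg, heading_deg):
    """Yaw at which to draw the model so that it keeps its real-world
    orientation (stored relative to due north) as the device rotates."""
    return (orientation_deg - heading_deg) % 360.0
```

As the device's heading changes frame to frame, recomputing `screen_x` and `model_yaw` translates and rotates the model so that it appears static relative to the background.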
If the portable computing device 115 is connected to a cellular network, details of the cellular network can be used to determine a location of the portable computing device 115. An example of such a method is the Cell Global Identity and Time Advance (CGI-TA) positioning method. The Cell Global Identity (CGI) is a parameter that identifies within which network cell the portable computing device 115 is located. The accuracy of the CGI is limited by the size of the network cell; however, by using timing advance (TA) information, i.e. the time taken for information to reach the portable computing device 115 from a base transceiver station, the accuracy can be improved. Uplink Time of Arrival Positioning (UL TOA) and Enhanced Observed Time Difference Method (E-OTD) are other examples of methods that can be used to determine or refine a location of the portable computing device 115. Both methods use positional information from several base station transceivers. In UL TOA, calculations are performed away from the client at a dedicated server in the mobile network. In contrast, the calculations in E-OTD are client based. Yet another example of technology suitable for determining or refining a location of the portable computing device 115 is magnetic anomaly-based positioning, wherein measurements of a magnetic field are used to determine or refine the location. Examples of such software include the IndoorAtlas Maps API of IndoorAtlas Ltd. of Oulu, Finland. According to certain embodiments, location data of the portable computing device 115 is filtered according to one or more predetermined pathways. If, based upon the location data, the portable computing device 115 appears to be travelling substantially along a predetermined pathway, the differences between the predetermined pathway and the location data can be determined.
The differences can be considered noise and removed from the location data, thus providing a smooth flow along a path, or be considered substantial, corresponding to a deviation from the predetermined pathway. Filtering location data based upon a predetermined pathway is advantageous, for example, when driving around an estate viewing 3D house models. In such a case, the predetermined pathway will typically follow one or more roads of the estate, and the location data can be filtered to remove variations in location that do not correspond to that predetermined pathway, thus removing annoying random movements caused by errors in a location sensor, or similar. FIG. 2 illustrates data 200 relating to an estate, according to an embodiment of the present invention. The data 200 includes a plurality of predetermined pathway identifiers 205, road identifiers 210 and land identifiers 215. The road identifiers 210 correspond to roads of the estate, the land identifiers correspond to blocks of land of the estate, and the pathway identifiers 205 correspond to paths along roads of the estate. The data 200 further includes coordinate points (not shown), which can be used to map the data 200 to a physical location. Each block of land in the estate can be associated with one or more 3D house models (not shown), and each 3D house model can have a location and orientation on the block of land. The data 200 can be downloaded to the mobile computing device 115 when it is determined that the mobile computing device 115 is within the estate. Alternatively, estate data can be downloaded to the mobile computing device 115 prior to entering the estate. Alternatively again, the data can be stored on a server, wherein the data 200 is made available to the mobile computing device 115 as required.
Upon receiving the data 200 (or relevant part thereof), the mobile computing device 115 can display a 3D house model associated with a block of land, overlaid on an image of the block of land as received by the computing device, as discussed further below. As will be understood by the skilled addressee, the mobile computing device 115, the server 105, or any other associated device may perform part or all of the overlay operations. FIG. 3 illustrates a screenshot 300 of a main user screen, according to an embodiment of the present invention. The main user screen includes a 3D house model image 305 and a background image 310. As discussed earlier, the background image 310 is location specific, and can, for example, correspond to image data captured by a camera of the mobile computing device 115 when at the location. The 3D house model image 305 is rendered onto the background image 310 to provide the user with an experience of how the house will look on a particular block of land. This can be done by anchoring the 3D house model to the background image 310 captured by the camera, as discussed above. The 3D house model image 305 can be dynamically rendered onto a moving background, the moving background defined by a plurality of background images 310 captured by the camera. As the mobile computing device 115 is portable, the background image 310 as captured by the camera will vary based upon translational and rotational movement of the mobile computing device 115. The 3D house model is thus rotated, shifted and resized to correspond to the movements of the mobile computing device 115, while being anchored to the background image 310 such that the 3D house model appears to be static relative to objects in the background image 310. According to certain embodiments, a three dimensional model of the block of land and surrounding areas is generated based upon one or more background images 310, e.g. the field of view of the portable computing device 115.
This can be done by estimating a depth from the mobile computing device 115 to certain objects, and by allocating each object a point in a three dimensional plot model. The 3D house model is then anchored in the three dimensional plot model. This allows for objects in the three dimensional plot model to be rendered relative to the house model, such as trees and other objects. This is particularly relevant as outdoor tracking becomes more precise and three dimensional models of outdoor environments become more available. An example of such three dimensional rendering is Articulated Naturality Web (ANW) by QderoPateo Communications of Tianjin, People's Republic of China.
Several sensors can be present on the mobile computing device 115 to determine movement of the mobile computing device 115 and/or depth information relating to the plot. According to certain embodiments, a gyroscope, accelerometer, motion sensor, or other similar sensor can be used together with a sequence of captured images to estimate motion of the mobile computing device 115. According to other embodiments, the images alone are used to estimate motion of the mobile computing device 115. The main user screen may allow several different 3D house models to be rendered on a particular background image 310. In this case a user may navigate between 3D house models using navigation buttons 315. Additionally, as a house may be able to be placed on a site in different configurations, a 3D house model can be rendered onto a background image 310 in a number of different configurations. For example, a 3D house model may be rendered on the background image in several orientations. In these cases, each orientation of the 3D house model is advantageously rendered independently and can include separate location and/or orientation data. The main user screen further includes a details button 320, an enquiry button 325, a snapshot button 330, and an interior button 335. The details button 320 is for providing further details of the house and/or block of land, as discussed below with reference to FIG. 5. Examples of details include a price of the house and/or the block of land. The enquiry button 325 is for enabling the user to make an enquiry about the house and/or block of land. Upon selection of the enquiry button 325, an email client is automatically opened and at least partially pre-populated with fields relating to the house and/or land. The user can then send the enquiry to a developer associated with the house and/or the plot. Alternatively, a proprietary messaging system may be used, in which an electronic message is sent to the developer. 
This can be done without any further interaction from the user if the user's contact details have been previously entered.
The snapshot button 330 is for saving a copy of the image presently being displayed, i.e. the 3D house model image 305 rendered on the background image 310. The image can, for example, be saved to a memory of the mobile computing device 115, or automatically sent to an email address of the user. This enables a user to share the image with others, or simply save the image for later viewing. The interior button 335 enables the user to navigate to an interior of the 3D house model. According to certain embodiments, rendering the 3D house model image 305 can automatically move into an interior view when present in a particular part of the land/plot. This can be, for example, when the user enters the location where the house would be built. According to an alternative embodiment, the background image 310 is provided to the mobile computing device 115. The background image 310 can be a professional photograph, for example, that illustrates the plot and its surroundings. Similarly, the background image 310 can be a professionally generated 3D model of the plot and surrounding areas, which is particularly suited for rendering on a portable device. As will be readily understood by a person skilled in the art, the background image 310 can include virtual objects, such as a modified skyline according to future developments. FIG. 4 illustrates a screenshot 400 of an interior view screen, according to an embodiment of the present invention. Similar to the main user screen of FIG. 3, the user can navigate between different house plans, i.e. virtual models, using navigation buttons 405. The interior view screen enables a user to visualise an interior of different 3D house models while present on a plot. The interior can be rendered together with images of the plot in a similar manner to that described with reference to FIG. 3. This can, for example, include rendering views through windows according to real views captured by the portable computing device 115.
The interior view screen is advantageously updated as the user navigates through the house. This can include moving to different rooms as the user moves across the plot, but can also include rendering the interior according to movement of the portable computing device 115. The interior view screen additionally includes a floor selection tab 410, which enables the user to select different floor levels of the interior of the 3D house model. This enables a user to virtually navigate vertically in a house model without any physical structure being present to enable the user to physically navigate vertically. FIG. 5 illustrates a screenshot 500 of a details screen, according to an embodiment of the present invention. The details screen includes a plan image 505 associated with the relevant block/plot, and a house image 510 of the selected house. The details screen further includes plot information 515 associated with the plan image 505, such as size and price. Similarly, house information 520 is associated with the house image 510, and can include size and price information. The details screen further includes navigation buttons 525, for enabling a user to navigate between different house plans. Upon navigation, at least the house image 510 and house information 520 are updated according to the selected house model. According to certain embodiments, the plot information 515 can be updated according to a selected house plan. This can be the case when, for example, a specific developer offers a discounted purchase price on the plot, or pays for council fees. In this case, selecting a house plan from that developer will result in a lower purchase price for the plot as the discount or council fees will be subtracted from the regular price. FIG. 6a illustrates the predetermined pathway identifiers 205 of FIG. 2 and a plurality of location point identifiers 605a as determined by the mobile computing device 115, and FIG.
6b illustrates the predetermined pathway identifiers 205 of FIG. 2 and a plurality of adjusted location point identifiers 605b. The location point identifiers 605a correspond to location points, and the adjusted location point identifiers 605b correspond to location points that have been adjusted according to a predetermined pathway.
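One way to implement the adjustment illustrated in FIG. 6b, snapping fixes that run close to a pathway onto it while leaving genuine deviations untouched, is sketched below. The 5 m threshold, the run-of-two rule, the local east/north coordinates and all names are assumptions of this example.

```python
import math

# Illustrative sketch; threshold, coordinates and names are assumptions.

def project_onto_segment(p, a, b):
    """Closest point to p on the segment a-b (2D local east/north metres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return (ax + t * dx, ay + t * dy)

def filter_track(points, pathway, threshold_m=5.0):
    """Snap each location fix onto the nearest pathway segment when it is
    part of a run of at least two sequential fixes within threshold_m of
    the pathway; isolated or distant fixes are treated as genuine
    deviations and left untouched."""
    segments = list(zip(pathway, pathway[1:]))
    def nearest(p):
        return min((project_onto_segment(p, a, b) for a, b in segments),
                   key=lambda q: math.hypot(q[0] - p[0], q[1] - p[1]))
    snapped = [nearest(p) for p in points]
    near = [math.hypot(s[0] - p[0], s[1] - p[1]) <= threshold_m
            for p, s in zip(points, snapped)]
    out = []
    for i, p in enumerate(points):
        prev_near = i > 0 and near[i - 1]
        next_near = i + 1 < len(near) and near[i + 1]
        out.append(snapped[i] if near[i] and (prev_near or next_near) else p)
    return out
```

Fixes a metre or two off a straight pathway are pulled onto it, while a fix 30 m away is kept as a deviation from the pathway.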
The server 105 or the mobile computing device 115 initially compares the location points with the predetermined pathways. If at least two sequential location points are within a threshold distance of the predetermined pathway, the at least two sequential location points are adjusted to fall on the predetermined pathway, as illustrated by adjusted location point identifiers 605b. This essentially provides a filtering of location points that removes noise and provides a smooth flow when viewing a 3D house model. According to certain embodiments, professional images of the block of land can be provided when travelling along a predetermined pathway 205, whereas images captured by the mobile computing device 115 are used when travelling away from a predetermined pathway. FIG. 7 illustrates data 700 relating to a plot of land, according to an alternative embodiment of the present invention. The data 700 includes a plot identifier 705, identifying a plot of land, and a plurality of points 710, corresponding to points in space equally spaced in and around the plot of land. The data 700 can further include coordinates (not shown), and 3D house models (not shown), similar to the data 200 of FIG. 2. According to an alternative embodiment, the plurality of points 710 are not equally spaced, but instead are located at points of interest in the plot of land, or otherwise non-uniformly spaced. The data 700 can be used to assist in generating an augmented reality experience by reducing movement of the 3D model with respect to the background. For each point 710 of the plurality of points 710, an image of the 3D model can be provided. The location of the portable computing device 115 is determined as discussed above in respect of FIG. 1. A point 710 of the plurality of points 710 is then chosen which corresponds to an estimated location of the portable computing device 115. The 3D house model is then rendered according to the chosen point 710, and displayed to the user.
The user can then view the 3D house model as seen from that location, and gain a feel for how the house will fit on the block of land and influence its surroundings.
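Returning to the pathway-based filtering of FIG. 6b, the snapping of sequential location points onto a predetermined pathway could be sketched as follows. This Python fragment is illustrative only; the polyline representation of the pathway and all names are assumptions, not part of the specification:

```python
import math

def project_to_segment(p, a, b):
    """Project 2D point p onto segment a-b; return (projection, distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), math.hypot(px - qx, py - qy)

def snap_to_pathway(points, pathway, threshold):
    """Replace runs of two or more consecutive location points lying
    within `threshold` of the pathway polyline by their projections
    onto it; other points pass through unchanged."""
    projected = []
    for p in points:
        best = min((project_to_segment(p, a, b)
                    for a, b in zip(pathway, pathway[1:])),
                   key=lambda r: r[1])
        projected.append(best)
    out = list(points)
    i = 0
    while i < len(points):
        j = i
        while j < len(points) and projected[j][1] <= threshold:
            j += 1
        if j - i >= 2:  # at least two sequential points near the pathway
            for k in range(i, j):
                out[k] = projected[k][0]
        i = max(j, i + 1)
    return out
```

Requiring at least two sequential points before snapping, as the description suggests, prevents a single noisy GPS fix from being pulled onto a pathway the user merely crossed.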
As the house model is only rendered according to a limited number of points 710, part or all of the rendering can take place offline. This has the advantage of not requiring large amounts of processing power in the terminal, as a rendered image may be downloaded rather than generated. The rendered image may then be manipulated using simple operations such as translation and warping, based upon movements of the portable computing device.

The plurality of points 710 may, for example, be defined with respect to a plane, which is then projected onto a 3D surface. The points may accordingly be equally spaced on the plane, but after mapping may not be equally spaced on the land due to terrain differences. According to an alternative embodiment, the points 710 can be manually placed at positions corresponding to desired points on the 3D landscape, for example based upon features of the land such as natural entrance points, pathways or similar.

According to another aspect of the invention, only one 3D house model is available per block of land. This has the advantage that a user viewing several blocks of land, for example in a new estate, does not have to explicitly choose a 3D house model per block, but is instead presented with a predetermined 3D house model. A bidding system can be used, wherein a developer can bid on a plot to enable the developer's 3D model to be displayed. Alternatively, a default 3D house model may be presented to the user, which can then be manually changed, as discussed earlier. Even in such a case, a developer can bid on a plot to have the developer's model as the default 3D house model.

FIG. 8 illustrates a screenshot 800 of a developer portal search screen, according to an embodiment of the present invention. The developer portal search screen includes details of a developer 805, which are obtained during a registration phase. The developer can log in and log out according to known methods, which may include password based authentication.
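The plane-then-project construction of the points 710 described earlier might be sketched as follows. This Python fragment is illustrative only; the `height_at` terrain lookup is an assumed helper, not part of the specification:

```python
def grid_points(origin, width_m, depth_m, spacing_m, height_at):
    """Generate viewpoints equally spaced on a horizontal plane over the
    plot, then project each onto the terrain via height_at(x, y).
    The resulting 3D points are equally spaced in plan view but not
    necessarily along the ground, mirroring the terrain caveat above."""
    x0, y0 = origin
    points = []
    y = 0.0
    while y <= depth_m:
        x = 0.0
        while x <= width_m:
            points.append((x0 + x, y0 + y, height_at(x0 + x, y0 + y)))
            x += spacing_m
        y += spacing_m
    return points
```

An offline renderer could then produce one image of the 3D house model per generated point, leaving the terminal to perform only the simple translation and warping operations mentioned above.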
The developer can search for a plot using a unique plot identifier in a plot identifier field 810, or search based upon location using a suburb field 815, an estate name field 820 and/or a post code field 825. The developer can further limit the search based upon a status of the plots. The developer portal search screen includes a bidding checkbox 830, for selecting plots that are currently being bid on, an available checkbox 835, for selecting plots that are currently available, and a latest checkbox 840, for selecting only the most recently listed plots. A search button 845 is provided for submitting the search query to a server, which responds with a developer portal search results screen.

FIG. 9 illustrates a screenshot 900 of a developer portal search results screen, according to an embodiment of the present invention. The developer portal search results screen includes a plurality of plot entries 905 corresponding to a particular search query. Each plot entry 905 of the plurality of plot entries 905 includes a plot image 910, a plot number field 915, a plot location field 920, a plot owner field 925, a plot status field 930, a bids field 935 and an average bid field 940. This enables a developer to navigate between different plot entries 905 and potentially place bids associated with one or more plots. Upon selection of a plot entry 905, a bidding screen is presented to the developer, similar to auction sites of the prior art. The developer can then place a bid on the associated plot, or navigate back to the developer portal search results screen.

FIG. 10 illustrates a screenshot 1000 of a developer portal home screen, according to an embodiment of the present invention. The developer portal home screen includes a list of plot entries 1005 that are currently under the control of the developer. This includes plot entries associated with plots that the developer currently has a 3D model associated with, and plots that the developer has won bids on.
The list of plot entries 1005 may further include entries associated with plots that the developer has previously bid on, but not won, as these may be of interest to the developer again in the future.
The developer portal home screen enables a developer to efficiently manage their plots and associated 3D house models. Each plot entry 1005 includes a plot number field 1010, a location field 1015, and an owner field 1020, similar to the plot entry 905 of FIG. 9. Additionally, the plot entry 1005 includes details of 3D house models 1025 that have been associated with the corresponding plot, and an image 1030. The image 1030 can, for example, comprise a 3D house model rendered on an image of the block.

FIG. 11 illustrates a screenshot 1100 of a bidding screen, according to an alternative embodiment of the present invention. In this case, the plots of land are presented graphically using plot identifiers 1105 on a map 1110. If a plot of land is available for auction, a bid now button 1115 is presented in association with the corresponding plot identifier 1105. A developer can then click on the bid now button 1115 to place a bid on the plot of land. The current bid may be presented on or in association with the bid now button 1115. If an auction is about to finish for a particular block of land, a final bids button 1120 may be presented in association with the corresponding plot identifier 1105. The final bids button 1120 can be similar to the bid now button 1115, but with the addition of information that the auction is ending soon. The information that the auction is ending soon may comprise a colour coding, textual information, or any other suitable information. Finally, if the developer is the highest bidder on a plot of land, a highest bidder button 1125 may be displayed in association with the corresponding plot identifier 1105. The highest bidder button 1125 can be similar to the bid now button 1115, but allows the developer to increase their bid instead of placing a bid. The bidding screen is advantageously updated live as bids are placed by other developers. According to certain embodiments, each plot of land is auctioned separately.
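The three button states of the bidding screen can be summarised in a small dispatch function. This Python sketch is illustrative only; the `auction` field names are assumptions rather than part of the specification:

```python
def bid_button(auction, developer_id):
    """Choose which button to show next to a plot identifier 1105 on the
    map, following the three states described above. `auction` is a dict
    with illustrative 'open', 'ends_soon' and 'highest_bidder' fields."""
    if not auction["open"]:
        return None  # plot not currently up for auction; show no button
    if auction["highest_bidder"] == developer_id:
        return "highest_bidder"  # button 1125: increase an existing bid
    if auction["ends_soon"]:
        return "final_bids"      # button 1120: bid now, ending-soon cue
    return "bid_now"             # button 1115: place a new bid
```

A live-updating bidding screen would re-evaluate this mapping whenever another developer's bid changes the auction state.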
Advantageously, an open auction with a one-month listing period is used. In the final week of the auction, no new bidders are allowed to bid on the block, essentially leaving a bidding war amongst existing bidders. At the end of the auction, a winning bid entitles the bidder, upon payment of the winning bid, to place a 3D house model on the plot of land, either as default or as the only alternative, depending on the configuration of the system and as discussed earlier. The bidder may be entitled to place the 3D house model on the block of land until the block is sold, or for a predetermined period of time. For example, the winning bidder may have rights to display a 3D house model on the block of land for a period of 3 months.

FIG. 12 diagrammatically illustrates a computing device 1200, according to an embodiment of the present invention. The server 105 of FIG. 1 and/or the portable computing device 115 can be identical or similar to the computing device 1200 of FIG. 12. The computing device 1200 includes a central processor 1202, a system memory 1204 and a system bus 1206 that couples various system components, including coupling the system memory 1204 to the central processor 1202. The system bus 1206 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The structure of the system memory 1204 is well known to those skilled in the art and may include a basic input/output system (BIOS) stored in a read only memory (ROM) and one or more program modules such as operating systems, application programs and program data stored in random access memory (RAM). The computing device 1200 can also include a variety of interface units and drives for reading and writing data. The data can include, for example, the 3D house models.
In particular, the computing device 1200 includes a hard disk interface 1208 and a removable memory interface 1210, respectively coupling a hard disk drive 1212 and a removable memory drive 1214 to the system bus 1206. Examples of removable memory drives 1214 include magnetic disk drives and optical disk drives. The drives and their associated computer-readable media, such as a Digital Versatile Disc (DVD) 1216, provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the computing device 1200. A single hard disk drive 1212 and a single removable memory drive 1214 are shown for illustration purposes only, with the understanding that the computing device 1200 can include several similar drives. Furthermore, the computing device 1200 can include drives for interfacing with other types of computer readable media.

The computing device 1200 may include additional interfaces for connecting devices to the system bus 1206. FIG. 12 shows a universal serial bus (USB) interface 1218 which may be used to couple a device to the system bus 1206. Similarly, an IEEE 1394 interface 1220 may be used to couple additional devices to the computing device 1200. Examples of additional devices include GPS modules, cellular communication modules, gyroscopes, and motion sensors.

The computing device 1200 can operate in a networked environment using logical connections to one or more remote computers or other devices, such as a server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant. The computing device 1200 includes a network interface 1222 that couples the system bus 1206 to a local area network (LAN) 1224. Networking environments are commonplace in offices, enterprise-wide computer networks and home computer systems.
A wide area network (WAN), such as the Internet, can also be accessed by the computing device 1200, for example via a modem unit connected to a serial port interface 1226 or via the LAN 1224. It will be appreciated that the network connections shown and described are exemplary and that other ways of establishing a communications link between computers can be used. The existence of any of various well-known protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP and the like, is presumed, and the computing device 1200 can be operated in a client-server configuration to permit a user to retrieve data from, for example, a web-based server.

The operation of the computing device can be controlled by a variety of different program modules. Examples of program modules are routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. The present invention may also be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants and the like. Furthermore, the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

The above description of various embodiments of the present invention is provided for purposes of description to one of ordinary skill in the related art. It is not intended to be exhaustive or to limit the invention to a single disclosed embodiment. As mentioned above, numerous alternatives and variations to the present invention will be apparent to those skilled in the art from the above teaching.
Accordingly, while some alternative embodiments have been discussed specifically, other embodiments will be apparent or relatively easily developed by those of ordinary skill in the art. Accordingly, this patent specification is intended to embrace all alternatives, modifications and variations of the present invention that have been discussed herein, and other embodiments that fall within the spirit and scope of the above described invention.

Claims (20)

The claims defining the invention are:
1. A method of augmenting a virtual model and image data, the method including: receiving, on a data interface of a portable computing device and from a remote server, a virtual model, the virtual model including at least an anchor point to a physical location and an orientation; estimating, by a processor, a field of view of the portable computing device based upon a location and an orientation of the portable computing device; anchoring, by the processor, the virtual model relative to the field of view of the portable computing device using the anchor point and the orientation of the virtual model; and rendering the virtual model on image data associated with the field of view based upon the anchoring of the virtual model.
2. The method of claim 1, wherein the field of view of the portable computing device corresponds to a viewing direction of a display screen of the portable computing device.
3. The method of claim 1 or claim 2, wherein the field of view of the portable computing device corresponds to a view captured by a camera of the portable computing device.
4. The method of any one of the preceding claims, wherein anchoring the virtual model comprises anchoring the anchor point to an object identified in the field of view of the portable computing device.
5. The method of any one of the preceding claims, wherein rendering the virtual model comprises rendering a plurality of sequential images, wherein the virtual model is rotated and/or translated between images of the plurality of sequential images based upon movement of the portable computing device and the anchoring of the virtual model.
6. The method of any one of the preceding claims, wherein the image data corresponds to data captured by a camera of the portable computing device.
7. The method of any one of the preceding claims, further comprising: sending, on the data interface and to the remote server, a location of the portable computing device, wherein the virtual model is associated with the location of the portable computing device.
8. The method of any one of the preceding claims, wherein the anchor point is a Global Positioning System (GPS) coordinate.
9. The method of any one of the preceding claims, wherein the virtual model is a house model and the physical location is a plot of land.
10. The method of any one of the preceding claims, wherein the virtual model includes interior and exterior data, wherein a user can navigate between the interior data and the exterior data.
11. The method of claim 10, wherein the user can navigate to an interior of the virtual model by moving to a corresponding location of the virtual model.
12. The method of any one of the preceding claims, wherein data describing the location of the portable computing device is filtered, by a processor, according to a predetermined pathway.
13. The method of any one of the preceding claims, further including: receiving, on the data interface and from the remote server, data associated with the physical location.
14. The method of any one of the preceding claims, further including: capturing, via a camera, a plurality of images corresponding to the field of view of the portable computing device; and generating, by the processor, a three dimensional model of the field of view; wherein rendering the virtual model comprises rendering the virtual model as a three dimensional object on the three dimensional model.
15. The method of any one of the preceding claims, further including: receiving, on a data interface, a bid associated with the physical location; determining, by the processor, that the bid is a winning bid; and associating, by the processor, the virtual model with the physical location based upon the winning bid.
16. The method of any one of the preceding claims, further comprising: presenting, on a data interface, a map of physical locations, wherein a bid on the physical location is made by selecting the physical location on the map.
17. An augmented reality system including: a processor; a data interface coupled to the processor; a display screen coupled to the processor; and a memory coupled to the processor, the memory including computer readable instruction code for: receiving, on the data interface and from a remote server, a virtual model, the virtual model including at least an anchor point to a physical location and an orientation; estimating, by the processor, a field of view corresponding to the display screen based upon a location and an orientation of the display screen; anchoring, by the processor, the virtual model relative to the field of view using the anchor point and the orientation of the virtual model; rendering, by the processor, the virtual model on image data associated with the field of view based upon the anchoring of the virtual model; and presenting, on the display screen, the rendered virtual model and image data.
18. The augmented reality system of claim 17, further comprising a camera, coupled to the processor, wherein the field of view corresponds to a view captured by the camera.
19. The augmented reality system of claim 17 or claim 18, further comprising a Global Positioning System (GPS) coupled to the processor, wherein the anchor point comprises a GPS coordinate.
20. The augmented reality system of any one of claims 17 to 19, wherein the computer readable instruction code further includes instruction code for filtering the data describing the location of the display screen according to a predetermined pathway.
AU2020277242A 2012-07-27 2020-11-26 Method and system for augmented reality Abandoned AU2020277242A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020277242A AU2020277242A1 (en) 2012-07-27 2020-11-26 Method and system for augmented reality

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
AU2012903240 2012-07-27
AU2012903240A AU2012903240A0 (en) 2012-07-27 Method and system for augmented reality
AU2013213701A AU2013213701A1 (en) 2012-07-27 2013-07-29 Method and system for augmented reality
AU2019200032A AU2019200032A1 (en) 2012-07-27 2019-01-03 Method and system for augmented reality
AU2020277242A AU2020277242A1 (en) 2012-07-27 2020-11-26 Method and system for augmented reality

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2019200032A Division AU2019200032A1 (en) 2012-07-27 2019-01-03 Method and system for augmented reality

Publications (1)

Publication Number Publication Date
AU2020277242A1 true AU2020277242A1 (en) 2020-12-24

Family

ID=50070130

Family Applications (3)

Application Number Title Priority Date Filing Date
AU2013213701A Abandoned AU2013213701A1 (en) 2012-07-27 2013-07-29 Method and system for augmented reality
AU2019200032A Abandoned AU2019200032A1 (en) 2012-07-27 2019-01-03 Method and system for augmented reality
AU2020277242A Abandoned AU2020277242A1 (en) 2012-07-27 2020-11-26 Method and system for augmented reality

Family Applications Before (2)

Application Number Title Priority Date Filing Date
AU2013213701A Abandoned AU2013213701A1 (en) 2012-07-27 2013-07-29 Method and system for augmented reality
AU2019200032A Abandoned AU2019200032A1 (en) 2012-07-27 2019-01-03 Method and system for augmented reality

Country Status (1)

Country Link
AU (3) AU2013213701A1 (en)




Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application