US20170278294A1 - Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System - Google Patents

Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System

Info

Publication number
US20170278294A1
Authority
US
United States
Prior art keywords
texture
fragment
view
dependent
stretching factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/621,345
Inventor
Scott E. Dillard
Brett A. Allen
Aleksey Golovinskiy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US15/621,345
Assigned to GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALLEN, BRETT A., DILLARD, SCOTT E., GOLOVINSKIY, ALEKSEY
Publication of US20170278294A1
Assigned to GOOGLE LLC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images

Definitions

  • the present disclosure relates generally to interactive geographic information systems, and more particularly to rendering view-dependent textures in conjunction with at least a portion of a three-dimensional model in a geographic information system.
  • Geographic information systems provide for the archiving, retrieving, and manipulating of data that has been stored and indexed according to geographic coordinates of its elements.
  • a geographic information system can be used for storing, manipulating, and displaying a three-dimensional model of a geographic area.
  • An interactive geographic information system can present a graphical representation of the three-dimensional model to a user in a suitable user interface, such as a browser.
  • a user can navigate the three-dimensional model by controlling a virtual camera that specifies what portion of the three-dimensional model is rendered and presented to a user.
  • the three-dimensional model can include a polygon mesh, such as a triangle mesh, used to model the geometry (e.g. terrain, buildings, and other objects) of the geographic area.
  • Geographic imagery, such as aerial or satellite imagery, can be texture mapped to the three-dimensional model so that the three-dimensional model provides a more accurate and realistic representation of the scene.
  • a single base texture is texture mapped to the polygon mesh regardless of the viewpoint of the three-dimensional model.
  • the single base texture can be optimized based on viewing the three-dimensional model from a plurality of differing viewpoints for the scene. For instance, the geographic imagery mapped to each polygon face (e.g. triangle face) in the polygon mesh can be selected according to a selection mechanism or algorithm that favors geographic imagery with a direct or near direct view of the polygon face.
  • a view-dependent texture can be rendered in conjunction with the three-dimensional model when the virtual camera views the three-dimensional model from a perspective associated with a reference viewpoint, such as a canonical viewpoint (e.g. a top-down or nadir perspective, a north perspective, a south perspective, an east perspective, and a west perspective).
  • the view-dependent texture can be optimized for viewing the three-dimensional model from a single view direction associated with the reference viewpoint.
  • objects rendered near the edges of the field of view defined by the virtual camera can be viewed from a direction that is different from the reference direction associated with the view-dependent texture. This can cause visual artifacts in the representation of the three-dimensional model. For instance, taller buildings rendered near the edge of the field of view can have a streaky appearance along the face of the building.
  • One exemplary aspect of the present disclosure is directed to a computer-implemented method of rendering a three-dimensional model of a geographic area.
  • the method includes rendering on a display of a computing device a polygon mesh from a virtual camera viewpoint.
  • the polygon mesh models geometry of the geographic area.
  • the method further includes identifying, with the computing device, a reference direction associated with a view-dependent texture to be rendered in conjunction with the polygon mesh.
  • the view-dependent texture is optimized for viewing the three-dimensional model from a reference viewpoint associated with the reference direction.
  • the method further includes determining, with the computing device, a viewpoint direction associated with a fragment of the polygon mesh. The viewpoint direction extends from the virtual camera towards the fragment.
  • the method further includes determining, with the computing device, a stretching factor for the fragment based at least in part on the viewpoint direction associated with the fragment and the reference direction.
  • the stretching factor is indicative of the amount that a texture mapped image is stretched when mapped to the fragment.
  • the method further includes selecting, with the computing device, a texture for rendering at the fragment based at least in part on the stretching factor.
  • the view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a threshold and a base texture can be selected for rendering at the fragment when the stretching factor is greater than the threshold.
  • a blended texture can be selected for rendering at the fragment.
  • the blended texture can be a blend between the base texture and the view-dependent texture.
  • exemplary aspects of the present disclosure are directed to systems, apparatus, non-transitory computer-readable media, user interfaces and devices for rendering a view-dependent texture in conjunction with a geographic information system.
  • FIG. 1( a )-1( c ) depict a graphical representation of an exemplary three-dimensional model including a view-dependent texture mapped to at least a portion of the polygon mesh;
  • FIG. 2 depicts an exemplary system for rendering a graphical representation of a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure
  • FIG. 3 depicts a flow diagram of an exemplary method for rendering a view-dependent texture in conjunction with a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure
  • FIG. 4 depicts a flow diagram of an exemplary method for selecting a texture for rendering at a fragment based on a stretching factor for the fragment according to an exemplary embodiment of the present disclosure
  • FIG. 5 graphically depicts the exemplary selection of a texture for rendering at a fragment based on a stretching factor for the fragment according to an exemplary embodiment of the present disclosure
  • FIG. 6 depicts the exemplary determination of a stretching factor for different fragments of a polygon mesh according to an exemplary embodiment of the present disclosure
  • FIG. 7 depicts a flow diagram of an exemplary method for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure
  • FIG. 8 depicts a representation of a circular element in two-dimensional texture space projected as an ellipse onto a polygon mesh
  • FIG. 9 depicts a flow diagram of an exemplary method for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure.
  • FIG. 10 depicts an exemplary computing environment for rendering a view-dependent texture in conjunction with a three-dimensional model according to an exemplary embodiment of the present disclosure.
  • the present disclosure is directed to rendering a view-dependent texture and a base texture in the same field of view of a virtual camera to provide a graphical representation of a three-dimensional model of a geographic area.
  • the view-dependent texture can be optimized for viewing the three-dimensional model from a single reference direction.
  • the base texture can be optimized based on viewing the three-dimensional model from a plurality of different viewpoints of the three-dimensional model.
  • the base texture can be optimized for providing a direct (or near direct) and/or non-occluded (or near non-occluded) view of various portions of the three-dimensional model.
  • the view-dependent texture and the base texture can be mapped to a polygon mesh to provide an interactive three-dimensional model of the geographic area for presentation to a user, for instance, on a display device.
  • a user can navigate a virtual camera using controls provided in a suitable user interface to a particular camera viewpoint of the three-dimensional model.
  • the virtual camera defines the field of view of the three-dimensional model to be rendered and presented to the user.
  • a view-dependent texture can be rendered in conjunction with at least portions of the three-dimensional model.
  • not all portions of the three-dimensional model within the field of view of the virtual camera will have the same orientation relative to the reference direction associated with the reference viewpoint.
  • portions of the three-dimensional model near the edges of the field of view, such as portions of the model associated with tall buildings, can be viewed from a slightly different direction than the reference direction.
  • a base texture can be rendered at such portions of the three-dimensional model. Combining or “blending” the use of base textures and view-dependent textures can improve the appearance of the three-dimensional model by removing visual artifacts that can occur when rendering a view-dependent texture in conjunction with objects viewed from a slightly different perspective than the reference direction associated with the view-dependent texture.
  • FIG. 1( a ) depicts a graphical representation of a three-dimensional model 50 rendered on a display device from a perspective of a virtual camera at a first virtual camera viewpoint.
  • the virtual camera defines the field of view of the three-dimensional model for presentation on the display device.
  • the virtual camera provides a perspective of the three-dimensional model from a viewpoint associated with a reference direction, namely a nadir perspective (i.e. a top-down view).
  • a view-dependent texture 52 is rendered in conjunction with the three-dimensional model.
  • the view-dependent texture 52 can be texture mapped to a three-dimensional polygon mesh representing geometry of the geographic area.
  • the view-dependent texture 52 can be optimized for viewing the three-dimensional model from the reference direction.
  • the view-dependent texture can be generated from source images that are more closely aligned with the reference direction.
  • window 60 calls attention to certain portions of the three-dimensional model.
  • the portions of the three-dimensional model 50 in the window 60 are near the edges of the field of view defined by the virtual camera and are viewed from a slightly different perspective than the remainder of the three-dimensional model 50 , such as the portions near the center of the field of view.
  • rendering the view-dependent texture associated with the nadir perspective in conjunction with these portions of the three-dimensional model 50 can lead to visual anomalies. For instance, the sides of the buildings 55 have a streaky appearance.
  • FIG. 1( c ) depicts a blow up of window 60 when a base texture 54 is rendered in conjunction with the three-dimensional model 50 .
  • the sides of the buildings 55 have an improved and more realistic appearance when compared to the view-dependent texture 52 depicted in FIG. 1( b ) . Accordingly, rendering the base texture for these portions of three-dimensional model 50 can improve the quality of the three-dimensional model 50 .
  • a computing device can decide, for each portion (e.g. pixel) of the graphical representation of the three-dimensional model, whether to render the base texture or the view-dependent texture in conjunction with the polygon mesh.
  • a stretching factor can be determined for each fragment in the polygon mesh.
  • Each fragment in the polygon mesh can correspond to a pixel in the graphical representation of the three-dimensional model to be rendered on a display device.
  • the stretching factor can be indicative of the amount a texture mapped image is stretched when mapped to the fragment.
  • the stretching factor can be determined based on the relationship between the reference direction and a viewpoint direction at the fragment.
  • the viewpoint direction can extend from the virtual camera towards the fragment.
  • Other suitable factors can be used in the determination of the stretching factor, such as a surface normal associated with the fragment.
  • the computing device can select the base texture, the view-dependent texture, or a blended texture for rendering at the fragment based at least in part on the stretching factor. For instance, in one implementation, the stretching factor can be compared to a threshold. The view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a threshold. The base texture can be selected for rendering at the fragment when the stretching factor is greater than the threshold. In certain cases, a blended texture can be selected for rendering at the fragment. The blended texture can be a blend of the base texture and the view-dependent texture.
  • the stretching factor can be determined by accessing a mathematical model that projects a circular element in the two-dimensional space associated with the view-dependent texture (e.g. the texture atlas associated with the view-dependent texture) as an ellipse onto the polygon mesh.
  • the ellipse can include a minor axis and a major axis.
  • the major axis can be indicative of the stretch of a texture mapped image when mapped to a fragment of the polygon mesh.
  • the stretching factor can be determined, based at least in part on the major axis of the ellipse. For instance, the stretching factor can be determined based on the length of the projection of the major axis of the ellipse in the graphical representation of the three-dimensional model presented on a display device.
  • the stretching factor can include an inverse texture stretch component and a view stretch component.
  • the inverse texture stretch component can be based on the relationship between the reference direction and a surface normal.
  • the view stretch component can be based on the relationship between the surface normal associated with the fragment and the viewpoint direction associated with the fragment.
  • the computing device can select whether to render the view-dependent texture or the base texture at a fragment based on both the inverse texture stretch and the view stretch.
  • a three-dimensional model with an improved appearance can be presented to the user.
  • a view-dependent texture can be presented to the user to provide a more realistic appearing graphical representation of the geographic area to the user.
  • the appearance of the view-dependent texture can be even further enhanced by rendering a base texture in conjunction with the view-dependent texture for portions of the three-dimensional model within the same field of view that are observed from a direction that is different from the reference direction.
  • FIG. 2 depicts an exemplary system 100 for rendering a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure.
  • the system 100 can include a server 110 for hosting a geographic information system 120 .
  • the server 110 can be any suitable computing device, such as a web server.
  • the server 110 can be in communication with a user device 130 over a network 140 , such as the Internet.
  • the user device 130 can be any suitable computing device, such as a laptop, desktop, smartphone, tablet, mobile device, wearable computing device, or other computing device.
  • the server 110 can host an interactive geographic information system 120 that serves geographic data stored, for instance, in a geographic database 118.
  • the geographic database 118 can include geographic data for rendering an interactive graphical representation of the three-dimensional model of a geographic area.
  • the geographic data can include a polygon mesh representing the geometry of the geographic area and one or more textures for mapping to the polygon mesh.
  • the geographic data can be stored in a hierarchical tree data structure, such as a quadtree or octree data structure, that spatially partitions the geographic data according to geospatial coordinates.
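  • as a concrete illustration of such spatial partitioning, the sketch below derives a quadtree key for a latitude/longitude point at a given depth, where each character of the key selects one of four child cells; the Mercator-style projection, function name, and key encoding are illustrative assumptions, not the partitioning scheme of any particular geographic information system:

```python
import math

def quadtree_key(lat_deg: float, lon_deg: float, depth: int) -> str:
    """Illustrative quadtree key: each character picks one of four child cells.

    Assumes a Web Mercator-style projection; an actual geographic information
    system may partition its data differently.
    """
    # Project to normalized [0, 1) x [0, 1) tile space.
    x = (lon_deg + 180.0) / 360.0
    lat_rad = math.radians(lat_deg)
    y = 0.5 - math.log(math.tan(math.pi / 4 + lat_rad / 2)) / (2 * math.pi)

    key = []
    for _ in range(depth):
        x *= 2
        y *= 2
        cx, cy = int(x), int(y)       # which child cell (0 or 1) on each axis
        key.append(str(cx + 2 * cy))  # encode the quadrant as a digit 0-3
        x -= cx
        y -= cy
    return "".join(key)

# Example: key for a point at depth 10.
print(quadtree_key(37.4220, -122.0841, 10))
```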
  • the polygon mesh can include a plurality of polygons (e.g. triangles) interconnected by vertices and edges to model the geometry of the geographic area.
  • the polygon mesh can be represented in any suitable format, such as a depth map, height field, closed mesh, signed distance field, or any other suitable type of representation.
  • the polygon mesh can be a stereo reconstruction generated from aerial or satellite imagery of the geographic area.
  • the imagery can be taken by overhead cameras, such as from an aircraft, at various oblique or nadir perspectives. In the imagery, features are detected and correlated with one another.
  • the correlated points can be used to determine a stereo mesh from the imagery such that a three-dimensional model can be determined from two-dimensional imagery.
  • the geographic data can also include a plurality of textures that can be mapped to the polygon mesh.
  • the textures can be generated from aerial or satellite imagery of the geographic area.
  • the geographic data can include a base texture for the geographic area and one or more view-dependent textures for reference viewpoints (e.g. canonical viewpoints) of the geographic area.
  • the geographic data can include a base texture and five different view-dependent textures, one for each of a north 45° oblique viewpoint, a south 45° oblique viewpoint, an east 45° oblique viewpoint, a west 45° oblique viewpoint, and a nadir viewpoint.
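  • for illustration only, the reference directions of such canonical viewpoints can be represented as unit vectors; the sketch below assumes an east-north-up coordinate frame and view directions tilted 45° below horizontal toward the named compass direction, which is one possible convention rather than one specified by this disclosure:

```python
import math

C = math.cos(math.radians(45.0))  # horizontal component of a 45-degree oblique view direction
S = math.sin(math.radians(45.0))  # downward component

# Unit reference-direction vectors in an assumed east-north-up (x, y, z) frame,
# pointing from the virtual camera toward the scene. Names and sign conventions
# are illustrative assumptions.
REFERENCE_DIRECTIONS = {
    "nadir": (0.0, 0.0, -1.0),
    "north_45_oblique": (0.0, C, -S),
    "south_45_oblique": (0.0, -C, -S),
    "east_45_oblique": (C, 0.0, -S),
    "west_45_oblique": (-C, 0.0, -S),
}
```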
  • the textures can be stored in any suitable format, such as using texture atlases.
  • a canonical viewpoint can refer to a standard and/or a predominate view of a geographic area, such as a north view, a south view, an east view, or a west view.
  • Other suitable canonical viewpoints can include a northeast view, a northwest view, a southeast view, or southwest view.
  • the canonical views can be standard or default views of the three-dimensional model in the geographic information system.
  • the base texture can be optimized based on a plurality of differing viewpoints of the three-dimensional model.
  • An exemplary base texture is optimized for providing a direct (or near direct) and/or a non-occluded (or near non-occluded) view of various portions of the three-dimensional model.
  • the base texture can be generated by selecting source images for texture mapping to polygon faces in the polygon mesh using selection criteria that favors the selection of source images that have a non-occluded and/or direct or near direct view of the polygon face.
  • the source images used for the base texture can be associated with a variety of different view perspectives.
  • the view-dependent textures can be optimized for viewing the three-dimensional model from a single view direction (i.e. the reference direction) associated with the reference viewpoint.
  • a view-dependent texture can be generated from source images that are more closely aligned with the reference direction.
  • the view-dependent texture can be generated by creating a texture atlas mapping texture to the polygon mesh providing a representation of the geometry of the geographic area. The texture for each portion of the polygon mesh can be selected using a texture selection algorithm that favors source imagery more closely aligned with the single view direction associated with the reference direction.
  • a view-dependent texture can be generated by determining a score for each source image that views a polygon face of the polygon mesh.
  • the score can favor the selection of a source image for texturing a polygon face that is aligned more closely with the reference direction.
  • the score computed for each source image can include a base component and a view dependent component.
  • the base component can be determined to favor source images that directly point to a surface normal associated with the polygon face.
  • the base component can also take into account other factors, such as occlusion of the polygon face in the source image.
  • the view dependent component can be based on the relationship between the camera view direction associated with the source image and the reference direction associated with the view-dependent texture.
  • a graph cut algorithm can also be used in the generation of the base texture and the view-dependent texture to avoid choosing images that cause large color discontinuities when textured onto the polygon mesh.
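  • to make the scoring idea described above concrete, the sketch below scores a candidate source image for one polygon face as a base component (alignment of the image's view direction with the face normal, zeroed when the face is occluded) plus a view dependent component (alignment with the reference direction); the particular score terms and weighting are illustrative assumptions, not the algorithm of this disclosure:

```python
import numpy as np

def score_source_image(image_view_dir: np.ndarray,
                       face_normal: np.ndarray,
                       reference_dir: np.ndarray,
                       occluded: bool,
                       view_weight: float = 1.0) -> float:
    """Illustrative score for texturing one polygon face from one source image.

    image_view_dir: unit vector from the source camera toward the face
    face_normal:    unit outward normal of the polygon face
    reference_dir:  unit reference direction of the view-dependent texture
    """
    if occluded:
        return 0.0  # never texture a face from an image in which it is hidden
    # Base component: favor images that view the face head-on.
    base = max(0.0, float(np.dot(-image_view_dir, face_normal)))
    # View-dependent component: favor images aligned with the reference direction.
    view_dep = max(0.0, float(np.dot(image_view_dir, reference_dir)))
    return base + view_weight * view_dep

# The highest-scoring image would be selected for the face, e.g.:
# best = max(candidate_images, key=lambda img: score_source_image(...))
```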
  • the user device 130 can implement a user interface 134 that allows a user 132 to interact with the geographic information system 120 hosted by the server 110 .
  • the user interface 134 can be a browser or other suitable client application that can render a graphical representation of a three-dimensional model of the geographic area on a display associated with the user device 130 .
  • the user 132 can interact with the user interface 134 to navigate a virtual camera to view the three-dimensional model from a variety of different virtual camera viewpoints.
  • the user interface 134 can present a variety of different control tools to allow the user to pan, tilt, zoom, search, or otherwise navigate the virtual camera to view different portions of the three-dimensional model of the geographic area from different perspectives.
  • requests for geographic data can be provided from the user device 130 over the network 140 to the server 110 .
  • the server 110 can provide geographic data, such as a polygon mesh and one or more textures, to the user device 130 .
  • the user device 130 can then render one or more of the textures in conjunction with the polygon mesh from a viewpoint associated with the virtual camera to present a graphical representation of the three-dimensional model of the geographic area to the user.
  • the base texture can be rendered in conjunction with the three-dimensional model.
  • a view-dependent texture for the reference direction can be rendered in conjunction with at least portions of the three-dimensional model.
  • the user device 130 can render a base texture in the same field of view as the view-dependent texture for portions of the three-dimensional model that are viewed from a view direction that is different than the reference direction.
  • FIG. 3 depicts a flow diagram of an exemplary method ( 200 ) for rendering a view-dependent texture in conjunction with a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure.
  • the method ( 200 ) can be implemented using any suitable computing system, such as the user device 130 depicted in FIG. 2 .
  • FIG. 3 depicts steps performed in a particular order for purposes of illustration and discussion.
  • One of ordinary skill in the art, using the disclosures provided herein, will understand that the steps of any of the methods discussed herein can be omitted, adapted, rearranged, or expanded in various ways without deviating from the scope of the present disclosure.
  • the method includes receiving a user input requesting a view of a three-dimensional model of a geographic area from a virtual camera viewpoint.
  • a user can navigate a virtual camera using a suitable user interface to view the three-dimensional model from a perspective of the virtual camera viewpoint.
  • the virtual camera viewpoint can be associated with a position and orientation of the virtual camera relative to the three-dimensional model.
  • the virtual camera can define the field of view of the three-dimensional model.
  • the virtual camera can be non-orthographic such that certain portions of the three-dimensional model are viewed from a different direction than other portions of the three-dimensional model. For instance, portions of the three-dimensional model proximate the edges of the field of view can be viewed from different directions than portions of the three-dimensional model near the center of the field of view.
  • a polygon mesh modeling geometry of the geographic area and a base texture for the three-dimensional model can be obtained, for instance, over a network or from memory. For example, if data associated with the polygon mesh and the base texture have previously been fetched from a remote server, the polygon mesh and the base texture can be accessed from a local memory. If the data associated with the polygon mesh and the base texture are not available in a local memory, a request can be made to fetch the data from a remote server over a network, such as the Internet.
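  • a minimal sketch of this fetch-or-reuse pattern is shown below; the cache structure, server URL, tile key format, and function name are hypothetical placeholders:

```python
from urllib.request import urlopen

_local_cache: dict[str, bytes] = {}

def get_geographic_data(tile_key: str,
                        server_url: str = "https://example.com/gis") -> bytes:
    """Return mesh/texture data for a tile, fetching from the server only on a cache miss.

    The endpoint layout and tile_key format are illustrative assumptions.
    """
    if tile_key in _local_cache:          # previously fetched: reuse the local copy
        return _local_cache[tile_key]
    with urlopen(f"{server_url}/{tile_key}") as response:  # cache miss: fetch over the network
        data = response.read()
    _local_cache[tile_key] = data
    return data
```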
  • the method includes determining a stretching factor for each fragment of the polygon mesh.
  • the stretching factor can be indicative of how much the view of the fragment from the perspective of the virtual camera differs from the reference direction associated with the view-dependent texture. More particularly, the stretching factor can be determined based on a viewpoint direction associated with the fragment and the reference direction. A fragment associated with a viewpoint direction that is closely aligned with the reference direction can have a stretching factor favoring selection of a view-dependent texture for rendering at the fragment. A fragment associated with a viewpoint direction that differs sufficiently from the reference direction can have stretching factor favoring the selection of the base texture for rendering at the fragment.
  • the stretching factor can be determined based on other factors as well, such as a surface normal associated with the fragment. Exemplary techniques for determining a stretching factor according to aspects of the present disclosure will be discussed in detail below with reference to FIGS. 7-9 .
  • a texture is selected for rendering at the fragment based on the stretching factor.
  • the view-dependent texture or the base texture can be selected for rendering at the fragment based on the stretching factor.
  • the base texture can be selected for rendering at the fragment if the stretching factor is greater than a threshold.
  • the view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than the threshold.
  • a blended texture can be selected for rendering at the fragment.
  • the blended texture can be a blend between the color defined by the base texture for the fragment and the color defined by the view-dependent texture for the fragment.
  • FIG. 4 depicts a flow diagram of one exemplary method for selecting a texture for rendering at a fragment according to an exemplary embodiment of the present disclosure.
  • the stretching factor for the fragment is accessed, for instance, from a memory.
  • it can then be determined whether the stretching factor is less than a first threshold. The first threshold can be set to any suitable value depending on desired performance. If the stretching factor is less than the first threshold, the view-dependent texture can be selected for rendering at the fragment ( 226 ).
  • otherwise, the method proceeds to ( 228 ) where it is determined whether the stretching factor exceeds a second threshold.
  • the second threshold can be set to any suitable value depending on desired performance. If the stretching factor exceeds the second threshold, the base texture can be selected for rendering at the fragment ( 230 ).
  • a blended texture can be selected for rendering at the fragment ( 232 ).
  • the blended texture can be a blend between the view-dependent texture and the base texture.
  • the amount of the blended texture attributable to the view-dependent texture and the amount of the blended texture attributable to the base texture can be determined based on the stretching factor. For instance, alpha values associated with the base texture and the view-dependent texture can be controlled based on the stretching factor.
  • FIG. 5 depicts a graphical representation of selecting a texture for rendering at the fragment using the exemplary method ( 220 ) shown in FIG. 4 .
  • the view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a first threshold ST 1 .
  • if the stretching factor is greater than a second threshold ST 2 , the base texture can be selected for rendering at the fragment.
  • a blended texture can be selected for rendering at the fragment. The ratio of the blend of the blended texture can be determined based on the stretching factor.
  • as the stretching factor approaches the second threshold ST 2 , more and more of the blended texture can be attributable to the base texture while less and less of the blended texture is attributable to the view-dependent texture.
  • the blend can vary linearly with the stretching factor as shown in FIG. 5 .
  • other suitable relationships can be used to determine the blend ratio of the blended texture without deviating from the scope of the present disclosure.
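  • a minimal sketch of this two-threshold selection with a linear blend is shown below, assuming per-fragment colors from the base texture and the view-dependent texture are already available; the default threshold values standing in for ST 1 and ST 2 are arbitrary placeholders, not values taken from this disclosure:

```python
def select_fragment_color(stretch: float,
                          view_dependent_rgb: tuple[float, float, float],
                          base_rgb: tuple[float, float, float],
                          st1: float = 1.5,
                          st2: float = 3.0) -> tuple[float, float, float]:
    """Pick the view-dependent texture, the base texture, or a linear blend.

    st1 and st2 are the first and second stretching-factor thresholds;
    their default values here are arbitrary placeholders.
    """
    if stretch < st1:   # viewpoint closely aligned with the reference direction
        return view_dependent_rgb
    if stretch > st2:   # viewed from a sufficiently different direction
        return base_rgb
    # Between the thresholds, blend: the weight of the base texture grows linearly.
    alpha = (stretch - st1) / (st2 - st1)
    return tuple((1.0 - alpha) * v + alpha * b
                 for v, b in zip(view_dependent_rgb, base_rgb))
```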
  • the selected texture can be rendered at the fragment on the display of the computing device in conjunction with the three-dimensional model.
  • the method ( 200 ) can render the view-dependent texture at fragments where the viewpoint direction is closely aligned with the reference direction and can render the base texture at fragments where the viewpoint direction differs from the reference direction.
  • FIG. 6 depicts the exemplary determination of a stretching factor for different fragments of a polygon mesh according to an exemplary embodiment of the present disclosure.
  • FIG. 6 depicts a polygon mesh 300 having a plurality of polygon faces.
  • First and second fragments 320 and 330 of the polygon mesh are singled out for analysis for purposes of illustration and discussion.
  • Each of the first and second fragments 320 and 330 can be associated with a different pixel in a graphical representation of the polygon mesh 300 presented on a display device.
  • a view-dependent texture can be identified for rendering in conjunction with the polygon mesh 300 .
  • the view-dependent texture can have a reference direction 310 .
  • a user can request a view of the polygon mesh 300 from the perspective associated with a virtual camera 340 .
  • Stretching factors can be determined for the first and second fragments 320 and 330 .
  • the stretching factor for the first fragment 320 can be determined based on the viewpoint direction 322 associated with the first fragment 320 and the reference direction 310 .
  • the viewpoint direction 322 associated with the first fragment 320 extends from the virtual camera 340 to the first fragment 320 .
  • the stretching factor can also be determined based at least in part on the surface normal 324 associated with the first fragment 320 .
  • the stretching factor for the second fragment 330 can be determined based on the viewpoint direction 332 associated with the second fragment 330 and the reference direction 310 .
  • the viewpoint direction 332 associated with the second fragment 330 extends from the virtual camera 340 to the second fragment 330 .
  • the stretching factor can also be determined based at least in part on the surface normal 334 associated with the second fragment 330 .
  • the viewpoint direction 332 associated with the second fragment 330 is more closely aligned with the reference direction 310 than the viewpoint direction 322 associated with the first fragment 320 .
  • the stretching factor associated with the first fragment 320 can be greater than the stretching factor associated with the second fragment 330 . Accordingly, a base texture can be selected for rendering at the first fragment 320 and a view-dependent texture can be selected for rendering at the second fragment 330 .
  • with reference to FIGS. 7-9 , exemplary techniques for determining a stretching factor for a fragment of a polygon mesh will be set forth. These exemplary techniques are presented for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that other suitable techniques can be used for determining the stretching factor without deviating from the scope of the present disclosure.
  • FIG. 7 depicts an exemplary method ( 400 ) for determining a stretching factor for a fragment according to one exemplary embodiment of the present disclosure.
  • the method ( 400 ) determines a stretching factor using a mathematical model that projects a circular element in a two-dimensional space associated with the view-dependent texture as an ellipse onto the polygon mesh.
  • FIG. 8 depicts a circular element 350 in a texture space associated with the view-dependent texture.
  • when the circular element 350 is projected onto different fragments of the polygon mesh, the circular element will stretch to form an ellipse 360 .
  • the size and shape of the ellipse 360 will vary depending on the fragment to which the circular element is projected.
  • the ellipse 360 includes a minor axis r and a major axis s. There is no stretch in the minor axis direction. The stretch is along the major axis s. Accordingly, the stretching factor can be determined based at least in part on the major axis of the ellipse.
  • a reference vector associated with the reference direction of the view-dependent texture can be obtained at ( 402 ).
  • the reference vector can be a unit vector that points in the reference direction associated with the view-dependent texture. Referring to FIG. 6 , the reference vector can extend in the reference direction 310 .
  • a viewpoint vector can be obtained for the fragment at ( 404 ).
  • the viewpoint vector can also be a unit vector and can point along the viewpoint direction associated with the fragment.
  • the viewpoint direction extends from the virtual camera to the fragment.
  • the first fragment 320 can have a viewpoint vector that points along the viewpoint direction 322 .
  • the second fragment 330 can have a viewpoint vector that points along the viewpoint direction 332 .
  • a surface normal can also be obtained for the fragment ( 406 ).
  • the surface normal can be obtained by either determining the surface normal for the fragment or accessing a previously determined surface normal for the fragment stored in a memory.
  • Many different techniques are known for determining the surface normal of a fragment of a polygon mesh. Any suitable technique can be used without deviating from the scope of the present disclosure.
  • the first fragment 320 has a surface normal 324 .
  • the second fragment 330 has a surface normal 334 .
  • a mathematical model projecting a circular element as an ellipse on the polygon mesh can be accessed.
  • the mathematical model can specify the minor axis of the ellipse based at least in part on the relationship between the surface normal at the fragment and the reference direction.
  • the mathematical model can specify the minor axis of the ellipse as the cross product of the reference vector and the surface normal as follows: r = v × n, where r is the minor axis of the ellipse, v is the reference vector, and n is the surface normal
  • the mathematical model can specify the direction of the major axis of the ellipse based on the relationship between the minor axis of the ellipse and the surface normal. For instance, the mathematical model can specify that the major axis extends in the direction defined by the cross product of the minor axis and the surface normal, i.e. s is directed along r × n, where s is the major axis of the ellipse, r is the minor axis of the ellipse, and n is the surface normal
  • the mathematical model can further specify that the magnitude of the major axis is determined based on the relationship between the reference direction and the surface normal. For instance, the mathematical model can specify that the magnitude of the major axis is determined based on the dot product of the reference vector and the surface normal (e.g. |s| = 1/(v · n)), where s is the major axis of the ellipse, v is the reference vector, and n is the surface normal
  • the major axis and the minor axis of the ellipse are determined from the mathematical model.
  • the reference vector and the surface normal obtained for the fragment can be used to solve for minor axis and major axis of the ellipse using the mathematical model.
  • the stretching factor is determined from the major axis of the ellipse.
  • the stretching factor can be determined based on the relationship between the major axis of the ellipse and the viewpoint direction associated with the fragment. In one particular implementation, the stretching factor can be the length of the projection of the major axis perpendicular to the viewpoint direction (e.g. Stretch f = ||s × e||), where Stretch f is the stretching factor associated with the fragment, s is the major axis of the ellipse, and e is the viewpoint vector associated with the fragment.
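  • the following sketch implements one plausible reading of this ellipse-based computation: the minor axis as the cross product of the reference vector and the surface normal, the major axis directed along the cross product of the minor axis and the normal, and, as assumptions not quoted from this disclosure, a major-axis magnitude equal to the reciprocal of the dot product of the reference vector and the normal and a stretching factor equal to the length of the major axis component perpendicular to the viewpoint vector:

```python
import numpy as np

def stretching_factor(reference: np.ndarray,
                      normal: np.ndarray,
                      viewpoint: np.ndarray,
                      eps: float = 1e-6) -> float:
    """Stretching factor for a fragment from the ellipse model (one plausible reading).

    reference:  unit vector along the reference direction of the view-dependent texture
    normal:     unit surface normal at the fragment
    viewpoint:  unit vector from the virtual camera toward the fragment
    """
    # Minor axis of the ellipse: no stretch along r.
    r = np.cross(reference, normal)
    # The major axis is directed along r x n.
    s_dir = np.cross(r, normal)
    norm = np.linalg.norm(s_dir)
    if norm < eps:  # reference direction ~parallel to the normal: treat as no stretch
        return 1.0
    s_dir /= norm
    # Assumed magnitude: texture mapped along `reference` stretches by 1 / (v . n).
    denom = max(abs(float(np.dot(reference, normal))), eps)
    s = s_dir / denom
    # Assumed projected length of the major axis as seen along the viewpoint direction.
    return float(np.linalg.norm(np.cross(s, viewpoint)))
```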
  • the stretching factor can be used to select a texture for rendering at the fragment.
  • FIG. 9 depicts a flow diagram of another exemplary method ( 500 ) for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure.
  • the method ( 500 ) determines a stretching factor having an inverse texture stretch component for the fragment and a view stretch component for the fragment.
  • the inverse texture stretch component can be determined based on the relationship between the reference direction and a surface normal associated with the fragment.
  • the view stretch component can be determined based on the relationship between the viewpoint direction and the surface normal associated with the fragment.
  • a reference vector and surface normal can be obtained for the fragment at ( 502 ).
  • the reference vector can be a unit vector that points in the reference direction associated with the view-dependent texture.
  • the surface normal for the fragment can be accessed from memory or determined using any suitable surface normal determination algorithm.
  • the inverse texture stretch component is determined based on the reference vector and the surface normal.
  • the inverse texture stretch component can have a value ranging from 0 to 1.
  • the inverse texture stretch component can be computed as the dot product of the reference vector and the surface normal as follows: s0 = v · n, where s0 is the inverse texture stretch component, v is the reference vector, and n is the surface normal
  • the inverse texture stretch component can be 0 for fragments associated with vertical walls and other geometry that has a surface normal perpendicular to the reference direction.
  • the inverse texture stretch component can be 1 for fragments associated with roofs and other geometry that has a surface normal parallel to the reference direction.
  • a viewpoint vector can be obtained for the fragment.
  • the viewpoint vector can be a unit vector and can point along the viewpoint direction associated with the fragment.
  • the view stretch component is determined based on the viewpoint vector at ( 508 ).
  • the view stretch component can be determined based on the relationship between the viewpoint vector and the surface normal.
  • the view stretch component can be 0 when squarely viewing the fragment and can be 1 when looking at the polygon face associated with the fragment edge-on.
  • the view stretch component can be determined based on the dot product of the viewpoint vector and the surface normal (e.g. s1 = 1 − |e · n|, so that s1 is 0 when the fragment is viewed squarely and 1 when it is viewed edge-on), where s1 is the view stretch component, e is the viewpoint vector, and n is the surface normal
  • a texture can be selected for rendering at the fragment based on both the inverse texture stretch component associated with the fragment and the view stretch component associated with the fragment.
  • the view-dependent texture can be selected for rendering at the fragment when s1 is sufficiently small relative to s0, as controlled by a first constant.
  • a base texture can be selected for rendering at the fragment when s0 is sufficiently small relative to s1, as controlled by a second constant.
  • the constants can be controlled based on the desired amount of stretching to be allowed for the view-dependent texture.
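  • a sketch of this two-component test follows; the exact expression for the view stretch component and the form of the comparisons against the constants are assumptions chosen to match the behavior described above (s1 near 0 when viewing squarely, near 1 edge-on, and more view stretch tolerated where the texture itself is not stretched), not formulas quoted from this disclosure:

```python
import numpy as np

def select_texture(reference: np.ndarray,
                   normal: np.ndarray,
                   viewpoint: np.ndarray,
                   alpha: float = 0.5,
                   beta: float = 0.5) -> str:
    """Choose a texture from the inverse-texture-stretch and view-stretch components.

    alpha and beta are tunable constants controlling how much stretching of the
    view-dependent texture is allowed; their values and the comparison form are assumptions.
    """
    # Inverse texture stretch: 1 for roof-like geometry (normal parallel to the
    # reference direction), 0 for vertical walls (normal perpendicular to it).
    s0 = abs(float(np.dot(reference, normal)))
    # View stretch: 0 when the fragment is viewed squarely, 1 when viewed edge-on.
    s1 = 1.0 - abs(float(np.dot(viewpoint, normal)))
    # Allow the view-dependent texture only where the view stretch is small
    # relative to the inverse texture stretch.
    if s1 < alpha * s0:
        return "view_dependent"
    if s0 < beta * s1:
        return "base"
    return "blended"
```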
  • FIG. 10 depicts an exemplary computing system 600 that can be used to implement the methods and systems for generating and rendering view-dependent textures according to aspects of the present disclosure.
  • the system 600 is implemented using a client-server architecture that includes a server 610 that communicates with one or more client devices 630 over a network 640 .
  • the system 600 can be implemented using other suitable architectures, such as a single computing device.
  • the system 600 includes a server 610 , such as a web server used to host a geographic information system.
  • the server 610 can be implemented using any suitable computing device(s).
  • the server 610 can have a processor(s) 612 and a memory 614 .
  • the server 610 can also include a network interface used to communicate with one or more client computing devices 630 over a network 640 .
  • the network interface can include any suitable components for interfacing with one or more networks, including for example, transmitters, receivers, ports, controllers, antennas, or other suitable components.
  • the processor(s) 612 can be any suitable processing device, such as a microprocessor, microcontroller, integrated circuit, or other suitable processing device.
  • the memory 614 can include any suitable computer-readable medium or media, including, but not limited to, non-transitory computer-readable media, RAM, ROM, hard drives, flash drives, or other memory devices.
  • the memory 614 can store information accessible by processor(s) 612 , including instructions 616 that can be executed by processor(s) 612 .
  • the instructions 616 can be any set of instructions that when executed by the processor(s) 612 , cause the processor(s) 612 to provide desired functionality. For instance, the instructions 616 can be executed by the processor(s) 612 to implement a geographic information system module 620 .
  • the geographic information system module 620 can be configured to perform functionality associated with hosting a geographic information system, such as responding to requests for geographic data used to render a three-dimensional model of a geographic area.
  • module refers to computer logic utilized to provide desired functionality.
  • a module can be implemented in hardware, application specific circuits, firmware and/or software controlling a general purpose processor.
  • the modules are program code files stored on the storage device, loaded into memory and executed by a processor or can be provided from computer program products, for example computer executable instructions, that are stored in a tangible computer-readable storage medium such as RAM, hard disk or optical or magnetic media.
  • Memory 614 can also include data 618 that can be retrieved, manipulated, created, or stored by processor(s) 612 .
  • the data can include geographic data to be served as part of the geographic information system, such as polygon meshes, base textures, view-dependent textures, and other geographic data.
  • the geographic data can be stored in a hierarchical tree data structure, such as a quadtree or octree data structure, that spatially partitions the geographic data according to geospatial coordinates.
  • the data 618 can be stored in one or more databases.
  • the one or more databases can be connected to the server 610 by a high bandwidth LAN or WAN, or can also be connected to server 610 through network 640 .
  • the one or more databases can be split up so that they are located in multiple locales.
  • the server 610 can exchange data with one or more client devices 630 over the network 640 . Although two client devices 630 are illustrated in FIG. 10 , any number of client devices 630 can be connected to the server 610 over the network 640 .
  • the client devices 630 can be any suitable type of computing device, such as a general purpose computer, special purpose computer, laptop, desktop, mobile device, smartphone, tablet, wearable computing device, or other suitable computing device.
  • a client device 630 can include a processor(s) 632 and a memory 634 .
  • the processor(s) 632 can include one or more central processing units, graphics processing units dedicated to efficiently rendering images, etc.
  • the memory 634 can store information accessible by processor(s) 632 , including instructions 636 that can be executed by processor(s) 632 .
  • the memory 634 can store instructions 636 for implementing an application that provides a user interface (e.g. a browser) for interacting with the geographic information system.
  • the memory 634 can also store instructions 636 for implementing a rendering module and a stretching factor module.
  • the rendering module can be configured to render a textured polygon mesh to provide a graphical representation of a three-dimensional model of a geographic area.
  • the stretching factor module can be configured to determine a stretching factor for each fragment of the polygon mesh to be presented to a user.
  • the rendering module can select a view-dependent texture or a base texture for rendering at each fragment based at least in part on the stretching factor.
  • the memory 634 can also store data 638 , such as polygon meshes, base textures, view-dependent textures, and other geographic data received by the client device 630 from the server 610 over the network.
  • the geographic data can be stored in a hierarchical tree data structure that spatially partitions the geographic data according to geospatial coordinates associated with the data.
  • the client device 630 can include various input/output devices for providing and receiving information from a user, such as a touch screen, touch pad, data entry keys, speakers, and/or a microphone suitable for voice recognition.
  • the computing device 630 can have a display 635 for rendering the graphical representation of the three-dimensional model.
  • the client device 630 can also include a network interface used to communicate with one or more remote computing devices (e.g. server 610 ) over the network 640 .
  • the network interface can include any suitable components for interfacing with one or more networks, including for example, transmitters, receivers, ports, controllers, antennas, or other suitable components.
  • the network 640 can be any type of communications network, such as a local area network (e.g. intranet), wide area network (e.g. Internet), or some combination thereof.
  • the network 640 can also include a direct connection between a client device 630 and the server 610 .
  • communication between the server 610 and a client device 630 can be carried via network interface using any type of wired and/or wireless connection, using a variety of communication protocols (e.g. TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g. HTML, XML), and/or protection schemes (e.g. VPN, secure HTTP, SSL).

Abstract

Systems and methods for rendering a view-dependent texture in conjunction with a three-dimensional model of a geographic area are provided. A view-dependent texture can be rendered in conjunction with at least portions of the three-dimensional model. A base texture can be rendered for portions of the three-dimensional model in the same field of view that are viewed from a slightly different perspective than a reference direction associated with the view-dependent texture. For instance, a stretching factor can be determined for each portion of the three-dimensional model based on the reference direction and a viewpoint direction associated with the portion of the three-dimensional model. A base texture, a view-dependent texture, or a blended texture can be selected for rendering at the portion of the three-dimensional model based on the stretching factor.

Description

    PRIORITY CLAIM
  • The present application is a continuation of U.S. application Ser. No. 13/921,631 having a filing date of Jun. 19, 2013 and U.S. application Ser. No. 14/875,886 having a filing date of Oct. 6, 2015. Applicants claim priority to and benefit of all such applications and incorporate all such applications herein by reference.
  • FIELD
  • The present disclosure relates generally to interactive geographic information systems, and more particularly to rendering view-dependent textures in conjunction with at least a portion of a three-dimensional model in a geographic information system.
  • BACKGROUND
  • Geographic information systems provide for the archiving, retrieving, and manipulating of data that has been stored and indexed according to geographic coordinates of its elements. A geographic information system can be used for storing, manipulating, and displaying a three-dimensional model of a geographic area. An interactive geographic information system can present a graphical representation of the three-dimensional model to a user in a suitable user interface, such as a browser. A user can navigate the three-dimensional model by controlling a virtual camera that specifies what portion of the three-dimensional model is rendered and presented to a user.
  • The three-dimensional model can include a polygon mesh, such as a triangle mesh, used to model the geometry (e.g. terrain, buildings, and other objects) of the geographic area. Geographic imagery, such as aerial or satellite imagery, can be texture mapped to the three-dimensional model so that the three-dimensional model provides a more accurate and realistic representation of the scene. Typically, a single base texture is texture mapped to the polygon mesh regardless of the viewpoint of the three-dimensional model. The single base texture can be optimized based on viewing the three-dimensional model from a plurality of differing viewpoints for the scene. For instance, the geographic imagery mapped to each polygon face (e.g. triangle face) in the polygon mesh can be selected according to a selection mechanism or algorithm that favors geographic imagery with a direct or near direct view of the polygon face.
  • In certain circumstances, a view-dependent texture can be rendered in conjunction with the three-dimensional model when the virtual camera views the three-dimensional model from a perspective associated with a reference viewpoint, such as a canonical viewpoint (e.g. a top-down or nadir perspective, a north perspective, a south perspective, an east perspective, and a west perspective). The view-dependent texture can be optimized for viewing the three-dimensional model from a single view direction associated with the reference viewpoint.
  • In cases where the virtual camera is not orthographic, objects rendered near the edges of the field of view defined by the virtual camera can be viewed from a direction that is different from the reference direction associated with the view-dependent texture. This can cause visual artifacts in the representation of the three-dimensional model. For instance, taller buildings rendered near the edge of the field of view can have a streaky appearance along the face of the building.
  • SUMMARY
  • Aspects and advantages of the invention will be set forth in part in the following description, or may be obvious from the description, or may be learned through practice of the invention.
  • One exemplary aspect of the present disclosure is directed to a computer-implemented method of rendering a three-dimensional model of a geographic area. The method includes rendering on a display of a computing device a polygon mesh from a virtual camera viewpoint. The polygon mesh models geometry of the geographic area. The method further includes identifying, with the computing device, a reference direction associated with a view-dependent texture to be rendered in conjunction with the polygon mesh. The view-dependent texture is optimized for viewing the three-dimensional model from a reference viewpoint associated with the reference direction. The method further includes determining, with the computing device, a viewpoint direction associated with a fragment of the polygon mesh. The viewpoint direction extends from the virtual camera towards the fragment. The method further includes determining, with the computing device, a stretching factor for the fragment based at least in part on the viewpoint direction associated with the fragment and the reference direction. The stretching factor is indicative of the amount that a texture mapped image is stretched when mapped to the fragment. The method further includes selecting, with the computing device, a texture for rendering at the fragment based at least in part on the stretching factor.
  • In a particular implementation, the view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a threshold and a base texture can be selected for rendering at the fragment when the stretching factor is greater than the threshold. In certain aspects, a blended texture can be selected for rendering at the fragment. The blended texture can be a blend between the base texture and the view-dependent texture.
  • Other exemplary aspects of the present disclosure are directed to systems, apparatus, non-transitory computer-readable media, user interfaces and devices for rendering a view-dependent texture in conjunction with a geographic information system.
  • These and other features, aspects and advantages of the present invention will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A full and enabling disclosure of the present invention, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures, in which:
  • FIGS. 1(a)-1(c) depict a graphical representation of an exemplary three-dimensional model including a view-dependent texture mapped to at least a portion of the polygon mesh;
  • FIG. 2 depicts an exemplary system for rendering a graphical representation of a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure;
  • FIG. 3 depicts a flow diagram of an exemplary method for rendering a view-dependent texture in conjunction with a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure;
  • FIG. 4 depicts a flow diagram of an exemplary method for selecting a texture for rendering at a fragment based on a stretching factor for the fragment according to an exemplary embodiment of the present disclosure;
  • FIG. 5 graphically depicts the exemplary selection of a texture for rendering at a fragment based on a stretching factor for the fragment according to an exemplary embodiment of the present disclosure;
  • FIG. 6 depicts the exemplary determination of a stretching factor for different fragments of a polygon mesh according to an exemplary embodiment of the present disclosure;
  • FIG. 7 depicts a flow diagram of an exemplary method for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure;
  • FIG. 8 depicts a representation of a circular element in two-dimensional texture space projected as an ellipse onto a polygon mesh;
  • FIG. 9 depicts a flow diagram of an exemplary method for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure; and
  • FIG. 10 depicts an exemplary computing environment for rendering a view-dependent texture in conjunction with a three-dimensional model according to an exemplary embodiment of the present disclosure.
  • DETAILED DESCRIPTION
  • Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents.
  • Overview
  • Generally, the present disclosure is directed to rendering a view-dependent texture and a base texture in the same field of view of a virtual camera to provide a graphical representation of a three-dimensional model of a geographic area. The view-dependent texture can be optimized for viewing the three-dimensional model from a single reference direction. The base texture can be optimized based on viewing the three-dimensional model from a plurality of different viewpoints of the three-dimensional model. For instance, the base texture can be optimized for providing a direct (or near direct) and/or non-occluded (or near non-occluded) view of various portions of the three-dimensional model. The view-dependent texture and the base texture can be mapped to a polygon mesh to provide an interactive three-dimensional model of the geographic area for presentation to a user, for instance, on a display device.
  • More particularly, a user can navigate a virtual camera using controls provided in a suitable user interface to a particular camera viewpoint of the three-dimensional model. The virtual camera defines the field of view of the three-dimensional model to be rendered and presented to the user. When a user navigates the virtual camera to a camera viewpoint of the three-dimensional model associated with or near a reference viewpoint, a view-dependent texture can be rendered in conjunction with at least portions of the three-dimensional model.
  • Not all portions of the three-dimensional model within the field of view of the virtual camera will have the same orientation relative to the reference direction associated with the reference viewpoint. For instance, portions of the three-dimensional model near the edges of the field of view, such as portions of the model associated with tall buildings, can be viewed from a slightly different direction than the reference direction. According to particular aspects of the present disclosure, a base texture can be rendered at such portions of the three-dimensional model. Combining or “blending” the use of base textures and view-dependent textures can improve the appearance of the three-dimensional model by removing visual artifacts that can occur when rendering a view-dependent texture in conjunction with objects viewed from a slightly different perspective than the reference direction associated with the view-dependent texture.
  • For instance, FIG. 1(a) depicts a graphical representation of a three-dimensional model 50 rendered on a display device from a perspective of a virtual camera at a first virtual camera viewpoint. The virtual camera defines the field of view of the three-dimensional model for presentation on the display device. In FIG. 1(a), the virtual camera provides a perspective of the three-dimensional model from a viewpoint associated with a reference direction, namely a nadir perspective (i.e. a top-down view). A view-dependent texture 52 is rendered in conjunction with the three-dimensional model. In particular, the view-dependent texture 52 can be texture mapped to a three-dimensional polygon mesh representing geometry of the geographic area. The view-dependent texture 52 can be optimized for viewing the three-dimensional model from the reference direction. In particular, the view-dependent texture can be generated from source images that are more closely aligned with the reference direction.
  • Due to the non-orthographic nature of the virtual camera, certain portions of the three-dimensional model 50 are viewed from a slightly different perspective than the reference direction. For instance, window 60 calls attention to certain portions of the three-dimensional model. The portions of the three-dimensional model 50 in the window 60 are near the edges of the field of view defined by the virtual camera and are viewed from a slightly different perspective than the remainder of the three-dimensional model 50, such as the portions near the center of the field of view. As demonstrated in the blowup of window 60 depicted in FIG. 1(b), rendering the view-dependent texture associated with the nadir perspective in conjunction with these portions of the three-dimensional model 50 can lead to visual anomalies. For instance, the sides of the buildings 55 have a streaky appearance.
  • FIG. 1(c) depicts a blow up of window 60 when a base texture 54 is rendered in conjunction with the three-dimensional model 50. As shown, the sides of the buildings 55 have an improved and more realistic appearance when compared to the view-dependent texture 52 depicted in FIG. 1(b). Accordingly, rendering the base texture for these portions of three-dimensional model 50 can improve the quality of the three-dimensional model 50.
  • According to particular aspects of the present disclosure, a computing device can decide, for each portion (e.g. pixel) of the graphical representation of the three-dimensional model, whether to render the base texture or the view-dependent texture in conjunction with the polygon mesh. In particular, a stretching factor can be determined for each fragment in the polygon mesh. Each fragment in the polygon mesh can correspond to a pixel in the graphical representation of the three-dimensional model to be rendered on a display device. The stretching factor can be indicative of the amount a texture mapped image is stretched when mapped to the fragment. The stretching factor can be determined based on the relationship between the reference direction and a viewpoint direction at the fragment. The viewpoint direction can extend from the virtual camera towards the fragment. Other suitable factors can be used in the determination of the stretching factor, such as a surface normal associated with the fragment.
  • The computing device can select the base texture, the view-dependent texture, or a blended texture for rendering at the fragment based at least in part on the stretching factor. For instance, in one implementation, the stretching factor can be compared to a threshold. The view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a threshold. The base texture can be selected for rendering at the fragment when the stretching factor is greater than the threshold. In certain cases, a blended texture can be selected for rendering at the fragment. The blended texture can be a blend of the base texture and the view-dependent texture.
  • Any suitable technique can be used to determine the stretching factor for each fragment of the polygon mesh. In one exemplary embodiment, the stretching factor can be determined by accessing a mathematical model that projects a circular element in the two-dimensional space associated with the view-dependent texture (e.g. the texture atlas associated with the view-dependent texture) as an ellipse onto the polygon mesh. The ellipse can include a minor axis and a major axis. The major axis can be indicative of the stretch of a texture mapped image when mapped to a fragment of the polygon mesh. The stretching factor can be determined, based at least in part on the major axis of the ellipse. For instance, the stretching factor can be determined based on the length of the projection of the major axis of the ellipse in the graphical representation of the three-dimensional model presented on a display device.
  • In another exemplary embodiment, the stretching factor can include an inverse texture stretch component and a view stretch component. The inverse texture stretch component can be based on the relationship between the reference direction and a surface normal. The view stretch component can be based on the relationship between the surface normal associated with the fragment and the viewpoint direction associated with the fragment. The computing device can select whether to render the view-dependent texture or the base texture at a fragment based on both the inverse texture stretch and the view stretch.
  • In this way, a three-dimensional model with an improved appearance can be presented to the user. When a user navigates to a reference direction, a view-dependent texture can be presented to the user to provide a more realistic appearing graphical representation of the geographic area to the user. The appearance of the view-dependent texture can be even further enhanced by rendering a base texture in conjunction with the view-dependent texture for portions of the three-dimensional model within the same field of view that are observed from a direction that is different from the reference direction.
  • Exemplary System for Rendering a Three-Dimensional Model of a Geographic Area
  • FIG. 2 depicts an exemplary system 100 for rendering a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure. The system 100 can include a server 110 for hosting a geographic information system 120. The server 110 can be any suitable computing device, such as a web server. The server 110 can be in communication with a user device 130 over a network 140, such as the Internet. The user device 130 can be any suitable computing device, such as a laptop, desktop, smartphone, tablet, mobile device, wearable computing device, or other computing device.
  • The server 110 can host an interactive geographic information system 120 that serves geographic data stored, for instance, in a geographic database 118. The geographic database 118 can include geographic data for rendering an interactive graphical representation of the three-dimensional model of a geographic area. The geographic data can include a polygon mesh representing the geometry of the geographic area and one or more textures for mapping to the polygon mesh. The geographic data can be stored in a hierarchical tree data structure, such as a quadtree or octree data structure, that spatially partitions the geographic data according to geospatial coordinates.
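  • As one illustration of such spatial partitioning, the short Python sketch below derives a quadtree path for a point by repeatedly halving a latitude/longitude bounding box. It is a minimal, hypothetical example: the quadrant numbering, the depth, and the function name are assumptions made for illustration and are not taken from the disclosure.

    def quadtree_key(lat, lng, depth):
        # Hypothetical tile key: each level splits the current bounding box
        # into four quadrants (0 = NW, 1 = NE, 2 = SW, 3 = SE).
        south, north = -90.0, 90.0
        west, east = -180.0, 180.0
        key = []
        for _ in range(depth):
            mid_lat = (south + north) / 2.0
            mid_lng = (west + east) / 2.0
            if lat >= mid_lat:
                south = mid_lat
                quadrant = 0 if lng < mid_lng else 1
            else:
                north = mid_lat
                quadrant = 2 if lng < mid_lng else 3
            if lng < mid_lng:
                east = mid_lng
            else:
                west = mid_lng
            key.append(str(quadrant))
        return "".join(key)

    # A key like this could index the mesh and texture payloads for one tile.
    print(quadtree_key(37.42, -122.08, depth=8))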
  • The polygon mesh can include a plurality of polygons (e.g. triangles) interconnected by vertices and edges to model the geometry of the geographic area. The polygon mesh can be represented in any suitable format, such as a depth map, height field, closed mesh, signed distance field, or any other suitable type of representation. The polygon mesh can be a stereo reconstruction generated from aerial or satellite imagery of the geographic area. The imagery can be taken by overhead cameras, such as from an aircraft, at various oblique or nadir perspectives. Features in the imagery are detected and correlated with one another. The correlated points can be used to determine a stereo mesh from the imagery, such that a three-dimensional model can be determined from two-dimensional imagery.
  • The geographic data can also include a plurality of textures that can be mapped to the polygon mesh. The textures can be generated from aerial or satellite imagery of the geographic area. According to aspects of the present disclosure, the geographic data can include a base texture for the geographic area and one or more view-dependent textures for reference viewpoints (e.g. canonical viewpoints) of the geographic area. In one implementation, there is a view-dependent texture for some, but not all, viewing angles of the model. For instance, in a particular implementation, the geographic data can include a base texture and five different view-dependent textures, one for each of a north 45° oblique viewpoint, a south 45° oblique viewpoint, an east 45° oblique viewpoint, a west 45° oblique viewpoint, and a nadir viewpoint. The textures can be stored in any suitable format, such as using texture atlases. As used herein, a canonical viewpoint can refer to a standard and/or a predominate view of a geographic area, such as a north view, a south view, an east view, or a west view. Other suitable canonical viewpoints can include a northeast view, a northwest view, a southeast view, or a southwest view. The canonical views can be standard or default views of the three-dimensional model in the geographic information system.
  • The base texture can be optimized based on a plurality of differing viewpoints of the three-dimensional model. An exemplary base texture is optimized for providing a direct (or near direct) and/or a non-occluded (or near non-occluded) view of various portions of the three-dimensional model. In one implementation, the base texture can be generated by selecting source images for texture mapping to polygon faces in the polygon mesh using selection criteria that favors the selection of source images that have a non-occluded and/or direct or near direct view of the polygon face. The source images used for the base texture can be associated with a variety of different view perspectives.
  • The view-dependent textures can be optimized for viewing the three-dimensional model from a single view direction (i.e. the reference direction) associated with the reference viewpoint. In particular, a view-dependent texture can be generated from source images that are more closely aligned with the reference direction. In one implementation, the view-dependent texture can be generated by creating a texture atlas mapping texture to the polygon mesh providing a representation of the geometry of the geographic area. The texture for each portion of the polygon mesh can be selected using a texture selection algorithm that favors source imagery more closely aligned with the single view direction associated with the reference direction.
  • In a particular implementation, a view-dependent texture can be generated by determining a score for each source image that views a polygon face of the polygon mesh. The score can favor the selection of a source image for texturing a polygon face that is aligned more closely with the reference direction. In one particular implementation, the score computed for each source image can include a base component and a view dependent component. The base component can be determined to favor source images that directly point to a surface normal associated with the polygon face. The base component can also take into account other factors, such as occlusion of the polygon face in the source image. The view dependent component can be based on the relationship between the camera view direction associated with the source image (e.g. the position and orientation of the camera that captured the source image) and the reference direction associated with the view-dependent texture. The view dependent component can dominate the computation of the score for each source image. A graph cut algorithm can also be used in the generation of the base texture and the view-dependent texture to avoid choosing images that cause large color discontinuities when textured onto the polygon mesh.
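  • For concreteness, the per-image scoring described above can be sketched as a small Python function. This is only an illustrative sketch under stated assumptions: the weighting constant w, the occlusion term, the sign conventions for the direction vectors, and the function name are hypothetical, and the graph cut step is omitted entirely.

    import numpy as np

    def image_score(face_normal, camera_dir, reference_dir, occluded_fraction, w=10.0):
        # Hypothetical score for choosing a source image to texture one polygon face.
        # camera_dir and reference_dir point from the viewpoint toward the scene.
        n = np.asarray(face_normal, float) / np.linalg.norm(face_normal)
        c = np.asarray(camera_dir, float) / np.linalg.norm(camera_dir)
        v = np.asarray(reference_dir, float) / np.linalg.norm(reference_dir)
        # Base component: favor images that view the face head-on and unoccluded.
        base = max(0.0, float(np.dot(-c, n))) * (1.0 - occluded_fraction)
        # View-dependent component: favor images aligned with the reference direction;
        # the weight w lets this component dominate, as described above.
        view_dependent = max(0.0, float(np.dot(c, v)))
        return base + w * view_dependent

    # The best-scoring candidate image would then texture the face, e.g.:
    # best = max(candidates, key=lambda img: image_score(n, img.view_dir, ref_dir, img.occlusion))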
  • The user device 130 can implement a user interface 134 that allows a user 132 to interact with the geographic information system 120 hosted by the server 110. The user interface 134 can be a browser or other suitable client application that can render a graphical representation of a three-dimensional model of the geographic area on a display associated with the user device 130. The user 132 can interact with the user interface 134 to navigate a virtual camera to view the three-dimensional model from a variety of different virtual camera viewpoints. For instance, the user interface 134 can present a variety of different control tools to allow the user to pan, tilt, zoom, search, or otherwise navigate the virtual camera to view different portions of the three-dimensional model of the geographic area from different perspectives.
  • In response to the user interactions with the user interface 134, requests for geographic data can be provided from the user device 130 over the network 140 to the server 110. The server 110 can provide geographic data, such as a polygon mesh and one or more textures, to the user device 130. The user device 130 can then render one or more of the textures in conjunction with the polygon mesh from a viewpoint associated with the virtual camera to present a graphical representation of the three-dimensional model of the geographic area to the user.
  • When the user navigates the virtual camera to a virtual camera viewpoint that is not associated with a reference direction, the base texture can be rendered in conjunction with the three-dimensional model. When the user navigates the virtual camera to a virtual camera viewpoint that is associated with a reference direction, a view-dependent texture for the reference direction can be rendered in conjunction with at least portions of the three-dimensional model. As will be discussed in detail below, the user device 130 can render a base texture in the same field of view as the view-dependent texture for portions of the three-dimensional model that are viewed from a view direction that is different than the reference direction.
  • Exemplary Method for Rendering a View-Dependent Texture
  • FIG. 3 depicts a flow diagram of an exemplary method (200) for rendering a view-dependent texture in conjunction with a three-dimensional model of a geographic area according to an exemplary embodiment of the present disclosure. The method (200) can be implemented using any suitable computing system, such as the user device 130 depicted in FIG. 2. In addition, FIG. 3 depicts steps performed in a particular order for purposes of illustration and discussion. One of ordinary skill in the art, using the disclosures provided herein, will understand that the steps of any of the methods discussed herein can be omitted, adapted, rearranged, or expanded in various ways without deviating from the scope of the present disclosure.
  • At (202), the method includes receiving a user input requesting a view of a three-dimensional model of a geographic area from a virtual camera viewpoint. For instance, a user can navigate a virtual camera using a suitable user interface to view the three-dimensional model from a perspective of the virtual camera viewpoint. The virtual camera viewpoint can be associated with a position and orientation of the virtual camera relative to the three-dimensional model. The virtual camera can define the field of view of the three-dimensional model. The virtual camera can be non-orthographic such that certain portions of the three-dimensional model are viewed from a different direction than other portions of the three-dimensional model. For instance, portions of the three-dimensional model proximate the edges of the field of view can be viewed from different directions than portions of the three-dimensional model near the center of the field of view.
  • At (204), a polygon mesh modeling geometry of the geographic area and a base texture for the three-dimensional model can be obtained, for instance, over a network or from memory. For example, if data associated with the polygon mesh and the base texture have previously been fetched from a remote server, the polygon mesh and the base texture can be accessed from a local memory. If the data associated with the polygon mesh and the base texture are not available in a local memory, a request can be made to fetch the data from a remote server over a network, such as the Internet.
  • At (206), it can be determined whether to render a view-dependent texture in conjunction with the three-dimensional model. For instance, it can be determined whether to render a view-dependent texture based on the difference between a virtual camera viewpoint associated with the virtual camera and the reference direction associated with a view-dependent texture. The decision to render a view-dependent texture can also be based on the available bandwidth/memory for rendering the view-dependent texture. If it is determined not to render a view-dependent texture in conjunction with the three-dimensional model, the method can render the polygon mesh and the base texture to provide a graphical representation of the three-dimensional model (208). If it is determined to render a view-dependent texture, the method can obtain the view-dependent texture associated with the reference direction (210). For instance, the view-dependent texture can be accessed from a local memory and/or fetched from a remote computing device over a network.
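  • A minimal sketch of the gating decision at (206) is shown below, assuming the choice is made from the angle between the virtual camera's view direction and the reference direction together with a resource check. The 15° threshold, the resource flag, and the function name are assumptions, not values from the disclosure.

    import math
    import numpy as np

    def should_render_view_dependent(camera_dir, reference_dir,
                                     max_angle_deg=15.0, resources_ok=True):
        # Both directions point from the viewpoint toward the scene.
        c = np.asarray(camera_dir, float) / np.linalg.norm(camera_dir)
        v = np.asarray(reference_dir, float) / np.linalg.norm(reference_dir)
        angle = math.degrees(math.acos(float(np.clip(np.dot(c, v), -1.0, 1.0))))
        # Fetch and render the view-dependent texture only when the camera is
        # near the reference viewpoint and bandwidth/memory permit it.
        return resources_ok and angle <= max_angle_deg

    # Example: a camera tilted about 10 degrees off nadir still qualifies.
    print(should_render_view_dependent([0.17, 0.0, -0.985], [0.0, 0.0, -1.0]))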
  • At (212), the method includes determining a stretching factor for each fragment of the polygon mesh. The stretching factor can be indicative of how much the view of the fragment from the perspective of the virtual camera differs from the reference direction associated with the view-dependent texture. More particularly, the stretching factor can be determined based on a viewpoint direction associated with the fragment and the reference direction. A fragment associated with a viewpoint direction that is closely aligned with the reference direction can have a stretching factor favoring selection of a view-dependent texture for rendering at the fragment. A fragment associated with a viewpoint direction that differs sufficiently from the reference direction can have a stretching factor favoring the selection of the base texture for rendering at the fragment. The stretching factor can be determined based on other factors as well, such as a surface normal associated with the fragment. Exemplary techniques for determining a stretching factor according to aspects of the present disclosure will be discussed in detail below with reference to FIGS. 7-9.
  • Referring back to FIG. 3 at (214), a texture is selected for rendering at the fragment based on the stretching factor. In particular, the view-dependent texture or the base texture can be selected for rendering at the fragment based on the stretching factor. For instance, in one embodiment, the base texture can be selected for rendering at the fragment if the stretching factor is greater than a threshold. The view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than the threshold. In another exemplary embodiment, a blended texture can be selected for rendering at the fragment. The blended texture can be a blend between the color defined by the base texture for the fragment and the color defined by the view-dependent texture for the fragment.
  • FIG. 4 depicts a flow diagram of one exemplary method for selecting a texture for rendering at a fragment according to an exemplary embodiment of the present disclosure. At (222), the stretching factor for the fragment is accessed, for instance, from a memory. At (224), it can be determined whether the stretching factor is less than a first threshold. The first threshold can be set to any suitable value depending on desired performance. If the stretching factor is less than the first threshold, the view-dependent texture can be selected for rendering at the fragment (226).
  • Otherwise, the method proceeds to (228) where it is determined whether the stretching factor exceeds a second threshold. The second threshold can be set to any suitable value depending on desired performance. If the stretching factor exceeds the second threshold, the base texture can be selected for rendering at the fragment (230).
  • If the stretching factor does not exceed the second threshold, a blended texture can be selected for rendering at the fragment (232). The blended texture can be a blend between the view-dependent texture and the base texture. At (234), the amount of the blended texture attributable to the view-dependent texture and the amount of the blended texture attributable to the base texture can be determined based on the stretching factor. For instance, alpha values associated with the base texture and the view-dependent texture can be controlled based on the stretching factor.
  • FIG. 5 depicts a graphical representation of selecting a texture for rendering at the fragment using the exemplary method (220) shown in FIG. 4. As demonstrated in FIG. 5, the view-dependent texture can be selected for rendering at the fragment when the stretching factor is less than a first threshold ST1. When the stretching factor is greater than a second threshold ST2, the base texture can be selected for rendering at the fragment. When the stretching factor is in a blend range between ST1 and ST2, a blended texture can be selected for rendering at the fragment. The ratio of the blend of the blended texture can be determined based on the stretching factor. For instance, as the stretching factor approaches the second threshold ST2, more and more of the blended texture can be attributable to the base texture while less and less of the blended texture is attributable to the view-dependent texture. The blend can vary linearly with the stretching factor as shown in FIG. 5. However, other suitable relationships can be used to determine the blend ratio of the blended texture without deviating from the scope of the present disclosure.
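  • The threshold-and-blend selection of FIGS. 4 and 5 can be written compactly as a per-fragment shading sketch. The Python below is a hypothetical illustration: the threshold values ST1 and ST2 and the linear alpha ramp are assumptions chosen to mirror the figure, not values prescribed by the disclosure.

    import numpy as np

    def base_weight(stretch, st1=1.5, st2=3.0):
        # Fraction of the final color taken from the base texture, per FIG. 5:
        # 0 below ST1 (pure view-dependent), 1 above ST2 (pure base),
        # and a linear ramp across the blend range in between.
        return float(np.clip((stretch - st1) / (st2 - st1), 0.0, 1.0))

    def shade_fragment(stretch, view_dependent_rgb, base_rgb, st1=1.5, st2=3.0):
        a = base_weight(stretch, st1, st2)
        return (1.0 - a) * np.asarray(view_dependent_rgb, float) + a * np.asarray(base_rgb, float)

    # A fragment in the blend range mixes the two textures:
    print(shade_fragment(2.25, view_dependent_rgb=[0.8, 0.7, 0.6], base_rgb=[0.4, 0.5, 0.6]))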
  • Referring back to FIG. 3 at (216), the selected texture can be rendered at the fragment on the display of the computing device in conjunction with the three-dimensional model. In this manner, the method (200) can render the view-dependent texture at fragments where the viewpoint direction is closely aligned with the reference direction and can render the base texture at fragments where the viewpoint direction differs from the reference direction.
  • The exemplary method (200) of rendering view-dependent textures in conjunction with a three-dimensional model of a geographic area can be more readily understood with reference to FIG. 6. FIG. 6 depicts the exemplary determination of a stretching factor for different fragments of a polygon mesh according to an exemplary embodiment of the present disclosure. In particular, FIG. 6 depicts a polygon mesh 300 having a plurality of polygon faces. First and second fragments 320 and 330 of the polygon mesh are singled out for analysis for purposes of illustration and discussion. Each of the first and second fragments 320 and 330 can be associated with a different pixel in a graphical representation of the polygon mesh 300 presented on a display device. A view-dependent texture can be identified for rendering in conjunction with the polygon mesh 300. The view-dependent texture can have a reference direction 310.
  • A user can request a view of the polygon mesh 300 from the perspective associated with a virtual camera 340. Stretching factors can be determined for the first and second fragments 320 and 330. In particular, the stretching factor for the first fragment 320 can be determined based on the viewpoint direction 322 associated with the first fragment 320 and the reference direction 310. The viewpoint direction 322 associated with the first fragment 320 extends from the virtual camera 340 to the first fragment 320. In particular implementations, the stretching factor can also be determined based at least in part on the surface normal 324 associated with the first fragment 320.
  • The stretching factor for the second fragment 330 can be determined based on the viewpoint direction 332 associated with the second fragment 330 and the reference direction 310. The viewpoint direction 332 associated with the second fragment extends from the virtual camera 340 to the second fragment 330. In particular implementations, the stretching factor can also be determined based at least in part on the surface normal 334 associated with the second fragment 330.
  • As demonstrated in FIG. 6, the viewpoint direction 332 associated with the second fragment 330 is more closely aligned with the reference direction 310 than the viewpoint direction 322 associated with the first fragment 320. As a result, the stretching factor associated with the first fragment 320 can be greater than the stretching factor associated with the second fragment 330. Accordingly, a base texture can be selected for rendering at the first fragment 320 and a view-dependent texture can be selected for rendering at the second fragment 330.
  • Exemplary Methods for Determining a Stretching Factor
  • With reference now to FIGS. 7-9, exemplary techniques for determining a stretching factor for a fragment of a polygon mesh will be set forth. These exemplary techniques are presented for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that other suitable techniques can be used for determining the stretching factor without deviating from the scope of the present disclosure.
  • FIG. 7 depicts an exemplary method (400) for determining a stretching factor for a fragment according to one exemplary embodiment of the present disclosure. The method (400) determines a stretching factor using a mathematical model that projects a circular element in a two-dimensional space associated with the view-dependent texture as an ellipse onto the polygon mesh.
  • More particularly, FIG. 8 depicts a circular element 350 in a texture space associated with the view-dependent texture. When the circular element 350 is projected onto different fragments of the polygon mesh, the circular element will stretch to form an ellipse 360. The size and shape of the ellipse 360 will vary depending on the fragment to which the circular element is projected. As shown, the ellipse 360 includes a minor axis r and a major axis s. There is no stretch in the minor axis direction. The stretch is along the major axis s. Accordingly, the stretching factor can be determined based at least in part on the major axis of the ellipse.
  • More particularly, a reference vector associated with the reference direction of the view-dependent texture can be obtained at (402). The reference vector can be a unit vector that points in the reference direction associated with the view-dependent texture. Referring to FIG. 6, the reference vector can extend in the reference direction 310.
  • Referring back to FIG. 7, a viewpoint vector can be obtained for the fragment at (404). The viewpoint vector can also be a unit vector and can point along the viewpoint direction associated with the fragment. As discussed above, the viewpoint direction extends from the virtual camera to the fragment. Referring to the example of FIG. 6, the first fragment 320 can have a viewpoint vector that points along the viewpoint direction 322. The second fragment 330 can have a viewpoint vector that points along the viewpoint direction 332.
  • Referring still to FIG. 7, a surface normal can also be obtained for the fragment (406). The surface normal can be obtained by either determining the surface normal for the fragment or accessing a previously determined surface normal for the fragment stored in a memory. Many different techniques are known for determining the surface normal of a fragment of a polygon mesh. Any suitable technique can be used without deviating from the scope of the present disclosure. Referring to the example of FIG. 6, the first fragment 320 has a surface normal 324. The second fragment 330 has a surface normal 334.
  • Referring to FIG. 7 at (408), a mathematical model projecting a circular element as an ellipse on the polygon mesh can be accessed. The mathematical model can specify the minor axis of the ellipse based at least in part on the relationship between the surface normal at the fragment and the reference direction. For instance, the mathematical model can specify the minor axis of the ellipse as the cross product of the reference vector and the surface normal as follows:

  • r=cross (v, n)
  • where r is the minor axis of the ellipse, v is the reference vector, and n is the surface normal.
  • The mathematical model can specify the direction of the major axis of the ellipse based on the relationship between the minor axis of the ellipse and the surface normal. For instance, the mathematical model can specify that the direction of major axis extends in the direction defined by the cross product of the minor axis and the surface normal as follows:

  • Direction of s=cross (r, n)
  • where s is the major axis of the ellipse, r is the minor axis of the ellipse, and n is the surface normal.
  • The mathematical model can further specify that the magnitude of the major axis is determined based on the relationship between the reference direction and the surface normal. For instance, the mathematical model can specify that the magnitude of the major axis is determined based on the dot product of the reference vector and the surface normal as follows:

  • Magnitude of s=|1/dot(v, n)|
  • where s is the major axis of the ellipse, v is the reference vector, and n is the surface normal.
  • At (410), the major axis and the minor axis of the ellipse are determined from the mathematical model. In particular, the reference vector and the surface normal obtained for the fragment can be used to solve for minor axis and major axis of the ellipse using the mathematical model. At (412), the stretching factor is determined from the major axis of the ellipse. For instance, the stretching factor can be determined based on the relationship between the major axis of the ellipse and the viewpoint direction associated with the fragment. In one particular implementation, the stretching factor can be determined as follows:

  • Stretchf=|∥s∥*(1-dot(s/∥s∥, e))|
  • where Stretchf is the stretching factor associated with the fragment, s is the major axis of the ellipse, and e is the viewpoint vector associated with the fragment. The stretching factor can be used to select a texture for rendering at the fragment.
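  • A direct numpy transcription of the ellipse-based computation of FIGS. 7 and 8 is sketched below, using the formulas given above (including the final expression as reconstructed here). The handling of the degenerate cases, the normalization of the inputs, and the example vectors are assumptions made for illustration.

    import numpy as np

    def stretching_factor_ellipse(v, n, e, eps=1e-6):
        # v: reference vector, n: surface normal at the fragment,
        # e: viewpoint vector (virtual camera toward the fragment); all normalized here.
        unit = lambda x: np.asarray(x, float) / np.linalg.norm(x)
        v, n, e = unit(v), unit(n), unit(e)
        r = np.cross(v, n)                     # minor axis: no stretch in this direction
        if np.linalg.norm(r) < eps:
            return 0.0                         # normal parallel to reference: the circle stays a circle
        if abs(np.dot(v, n)) < eps:
            return float("inf")                # face parallel to the reference direction: unbounded stretch
        s_dir = np.cross(r, n)                 # direction of the major axis
        s_hat = s_dir / np.linalg.norm(s_dir)
        s_len = abs(1.0 / np.dot(v, n))        # magnitude of the major axis
        return abs(s_len * (1.0 - np.dot(s_hat, e)))

    # Example: nadir reference direction, a tilted roof fragment, camera near the FOV edge.
    print(stretching_factor_ellipse(v=[0, 0, -1], n=[0, 0.31, 0.95], e=[0.3, 0, -0.95]))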
  • FIG. 9 depicts a flow diagram of another exemplary method (500) for determining a stretching factor for a fragment according to an exemplary embodiment of the present disclosure. The method (500) determines a stretching factor having an inverse texture stretch component for the fragment and a view stretch component for the fragment. The inverse texture stretch component can be determined based on the relationship between the reference direction and a surface normal associated with the fragment. The view stretch component can be determined based on the relationship between the viewpoint direction and the surface normal associated with the fragment.
  • More particularly, a reference vector and surface normal can be obtained for the fragment at (502). The reference vector can be a unit vector that points in the reference direction associated with the view-dependent texture. The surface normal for the fragment can be accessed from memory or determined using any suitable surface normal determination algorithm.
  • At (504), the inverse texture stretch component is determined based on the reference vector and the surface normal. The inverse texture stretch component can have a value ranging from 0 to 1. For instance, the inverse texture stretch component can be computed as the dot product of the reference vector and the surface normal as follows:

  • s0=dot (v, n)
  • where s0 is the inverse texture stretch component, v is the reference direction, and n is the surface normal.
  • In one example where the view-dependent texture is for a nadir perspective (i.e. is associated with a top-down reference direction), the inverse texture stretch component can be 0 for fragments associated with vertical walls and other geometry that has a surface normal perpendicular to the reference direction. The inverse texture stretch component can be 1 for fragments associated with roofs and other geometry that has a surface normal parallel to the reference direction.
  • At (506), a viewpoint vector can be obtained for the fragment. The viewpoint vector can be a unit vector and can point along the viewpoint direction associated with the fragment.
  • The view stretch component is determined based on the viewpoint vector at (508). In particular, the view stretch component can be determined based on the relationship between the viewpoint vector and the surface normal. For instance, the view stretch component can be 1 when squarely viewing the fragment and can be 0 when looking at the polygon face associated with the fragment edge-on. In a particular implementation, the view stretch component can be determined based on the dot product of the viewpoint vector and the surface normal as follows:

  • s1=dot (n, e)
  • where s1 is the view stretch component, e is the viewpoint vector, and n is the surface normal.
  • A texture can be selected for rendering at the fragment based on both the inverse texture stretch component associated with the fragment and the view stretch component associated with the fragment. For example, the view-dependent texture can be selected for rendering at the fragment when s0≧s1−α. A base texture can be selected for rendering at the fragment when s0≦s1−β. The constants α and β can be controlled based on the desired amount of stretching to be allowed for the view-dependent texture.
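  • The two-component test of FIG. 9 can likewise be sketched in a few lines of Python. The comparisons follow the inequalities given above; the tolerance values for α and β, the use of absolute values on the dot products (to stay independent of sign conventions), and the fall-back to a blended texture between the two conditions are assumptions made for illustration.

    import numpy as np

    def select_texture_two_component(v, n, e, alpha=0.05, beta=0.25):
        # v: reference vector, n: surface normal, e: viewpoint vector.
        unit = lambda x: np.asarray(x, float) / np.linalg.norm(x)
        v, n, e = unit(v), unit(n), unit(e)
        s0 = abs(np.dot(v, n))   # inverse texture stretch component
        s1 = abs(np.dot(n, e))   # view stretch component
        if s0 >= s1 - alpha:
            return "view-dependent"
        if s0 <= s1 - beta:
            return "base"
        return "blend"           # between the two conditions, blend the textures

    # A vertical wall viewed obliquely under a nadir reference direction falls back
    # to the base texture:
    print(select_texture_two_component(v=[0, 0, -1], n=[1, 0, 0], e=[0.5, 0, -0.87]))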
  • Exemplary Computing Environment for Rendering A View-Dependent Texture in Conjunction with a Three-Dimensional Model
  • FIG. 10 depicts an exemplary computing system 600 that can be used to implement the methods and systems for generating and rendering view-dependent textures according to aspects of the present disclosure. The system 600 is implemented using a client-server architecture that includes a server 610 that communicates with one or more client devices 630 over a network 640. The system 600 can be implemented using other suitable architectures, such as a single computing device.
  • The system 600 includes a server 610, such as a web server used to host a geographic information system. The server 610 can be implemented using any suitable computing device(s). The server 610 can have a processor(s) 612 and a memory 614. The server 610 can also include a network interface used to communicate with one or more client computing devices 630 over a network 640. The network interface can include any suitable components for interfacing with one or more networks, including for example, transmitters, receivers, ports, controllers, antennas, or other suitable components.
  • The processor(s) 612 can be any suitable processing device, such as a microprocessor, microcontroller, integrated circuit, or other suitable processing device. The memory 614 can include any suitable computer-readable medium or media, including, but not limited to, non-transitory computer-readable media, RAM, ROM, hard drives, flash drives, or other memory devices. The memory 614 can store information accessible by processor(s) 612, including instructions 616 that can be executed by processor(s) 612. The instructions 616 can be any set of instructions that when executed by the processor(s) 612, cause the processor(s) 612 to provide desired functionality. For instance, the instructions 616 can be executed by the processor(s) 612 to implement a geographic information system module 620. The geographic information system module 620 can be configured to perform functionality associated with hosting a geographic information system, such as responding to requests for geographic data used to render a three-dimensional model of a geographic area.
  • It will be appreciated that the term “module” refers to computer logic utilized to provide desired functionality. Thus, a module can be implemented in hardware, application specific circuits, firmware and/or software controlling a general purpose processor. In one embodiment, the modules are program code files stored on the storage device, loaded into memory and executed by a processor or can be provided from computer program products, for example computer executable instructions, that are stored in a tangible computer-readable storage medium such as RAM, hard disk or optical or magnetic media.
  • Memory 614 can also include data 618 that can be retrieved, manipulated, created, or stored by processor(s) 612. The data can include geographic data to be served as part of the geographic information system, such as polygon meshes, base textures, view-dependent textures, and other geographic data. The geographic data can be stored in a hierarchical tree data structure, such as a quadtree or octree data structure, that spatially partitions the geographic data according to geospatial coordinates. The data 618 can be stored in one or more databases. The one or more databases can be connected to the server 610 by a high bandwidth LAN or WAN, or can also be connected to server 610 through network 640. The one or more databases can be split up so that they are located in multiple locales.
  • The server 610 can exchange data with one or more client devices 630 over the network 640. Although two client devices 630 are illustrated in FIG. 10, any number of client devices 630 can be connected to the server 610 over the network 640. The client devices 630 can be any suitable type of computing device, such as a general purpose computer, special purpose computer, laptop, desktop, mobile device, smartphone, tablet, wearable computing device, or other suitable computing device.
  • Similar to the computing device 610, a client device 630 can include a processor(s) 632 and a memory 634. The processor(s) 632 can include one or more central processing units, graphics processing units dedicated to efficiently rendering images, etc. The memory 634 can store information accessible by processor(s) 632, including instructions 636 that can be executed by processor(s) 632. For instance, the memory 634 can store instructions 636 for implementing an application that provides a user interface (e.g. a browser) for interacting with the geographic information system.
  • The memory 634 can also store instructions 636 for implementing a rendering module and a stretching factor module. The rendering module can be configured to render a textured polygon mesh to provide a graphical representation of a three-dimensional model of a geographic area. The stretching factor module can be configured to determine a stretching factor for each fragment of the polygon mesh to be presented to a user. The rendering module can select a view-dependent texture or a base texture for rendering at each fragment based at least in part on the stretching factor.
  • The memory 634 can also store data 638, such as polygon meshes, base textures, view-dependent textures, and other geographic data received by the client device 630 from the server 610 over the network. The geographic data can be stored in a hierarchical tree data structure that spatially partitions the geographic data according to geospatial coordinates associated with the data.
  • The client device 630 can include various input/output devices for providing and receiving information from a user, such as a touch screen, touch pad, data entry keys, speakers, and/or a microphone suitable for voice recognition. For instance, the computing device 630 can have a display 635 for rendering the graphical representation of the three-dimensional model.
  • The client device 630 can also include a network interface used to communicate with one or more remote computing devices (e.g. server 610) over the network 640. The network interface can include any suitable components for interfacing with one or more networks, including for example, transmitters, receivers, ports, controllers, antennas, or other suitable components.
  • The network 640 can be any type of communications network, such as a local area network (e.g. intranet), wide area network (e.g. Internet), or some combination thereof. The network 640 can also include a direct connection between a client device 630 and the server 610. In general, communication between the server 610 and a client device 630 can be carried via network interface using any type of wired and/or wireless connection, using a variety of communication protocols (e.g. TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g. HTML, XML), and/or protection schemes (e.g. VPN, secure HTTP, SSL).
  • While the present subject matter has been described in detail with respect to specific exemplary embodiments and methods thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.

Claims (20)

What is claimed is:
1. A computer-implemented method of providing a three-dimensional model of a geographic area, comprising:
identifying, by one or more computing devices, a perspective of a virtual camera for viewing a polygon mesh, the polygon mesh modeling geometry of a geographic area;
identifying, by the one or more computing devices, a view-dependent texture associated with a reference direction, the view-dependent texture generated for viewing the three-dimensional model from a reference viewpoint associated with the reference direction;
identifying, by the one or more computing devices, a base texture generated for viewing the three-dimensional model from a plurality of different viewpoints;
determining, by the one or more computing devices, a viewpoint direction associated with a fragment of the polygon mesh, the viewpoint direction extending from the virtual camera towards the fragment; and
determining, by the one or more computing devices, a texture for display at the fragment based at least in part on an amount that a texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction;
wherein the texture determined for display at the fragment comprises one or more elements of the base texture or the view-dependent texture.
2. The computer-implemented method of claim 1, wherein the method comprises providing for display, by the one or more computing devices, the texture determined for display at the fragment.
3. The computer-implemented method of claim 1, wherein the texture is determined for display at the fragment based at least in part on a stretching factor indicative of the amount the texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction.
4. The computer-implemented method of claim 3, wherein the view-dependent texture is selected for display at the fragment when the stretching factor is less than a threshold.
5. The computer-implemented method of claim 3, wherein the base texture is selected for display at the fragment when the stretching factor is greater than a threshold.
6. The computer-implemented method of claim 3, wherein a blended texture is selected for display at the fragment based at least in part on the stretching factor, the blended texture comprising a blend between the base texture and the view-dependent texture.
7. The computer-implemented method of claim 3, wherein the stretching factor for the fragment is determined based at least in part on a surface normal associated with the fragment.
8. The computer-implemented method of claim 3, wherein the stretching factor is determined based at least in part by:
accessing, by the one or more computing devices, a mathematical model projecting a circular element in a two-dimensional space associated with the view-dependent texture as an ellipse on the polygon mesh, the ellipse having a minor axis and a major axis; and
determining, by the one or more computing devices, the stretching factor based at least in part on the major axis of the ellipse.
9. The computer-implemented method of claim 8, wherein the mathematical model specifies the minor axis of the ellipse based at least in part on the relationship between a surface normal associated with the fragment and the reference direction, the mathematical model further specifying a direction of the major axis based on the relationship between the minor axis of the ellipse and the surface normal, the mathematical model further specifying a magnitude of the major axis based on the relationship between the reference direction and the surface normal.
10. The computer-implemented method of claim 9, wherein the stretching factor is determined based at least in part on the relationship between the major axis of the ellipse and the viewpoint direction associated with the fragment.
11. The computer-implemented method of claim 3, wherein the stretching factor has an inverse texture stretch component for the fragment and a view stretch component for the fragment.
12. The computer implemented method of claim 11, wherein the inverse texture stretch component for the fragment is determined based on the relationship between the reference direction and a surface normal associated with the fragment.
13. The computer-implemented method of claim 12, wherein the view stretch component for the fragment is determined based on the relationship between the viewpoint direction associated with the fragment and the surface normal associated with the fragment.
14. The computer-implemented method of claim 1, wherein the reference viewpoint is a canonical viewpoint.
15. A computing system for rendering a three-dimensional model of a geographic area, the system comprising:
a display;
one or more processors;
one or more computer-readable media, the computer-readable media storing instructions that when executed by the one or more processors cause the processors to perform operations, the operations comprising:
identifying a perspective of a virtual camera for viewing a polygon mesh, the polygon mesh modeling geometry of a geographic area;
identifying a view-dependent texture associated with a reference direction, the view-dependent texture generated for viewing the three-dimensional model from a reference viewpoint associated with the reference direction;
identifying a base texture generated for viewing the three-dimensional model from a plurality of different viewpoints;
determining a viewpoint direction associated with a fragment of the polygon mesh, the viewpoint direction extending from the virtual camera towards the fragment; and
determining a texture for display at the fragment based at least in part on an amount that a texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction;
wherein the texture determined for display at the fragment comprises one or more elements of the base texture or the view-dependent texture.
16. The computing system of claim 15, wherein the texture is determined for display at the fragment based at least in part on a stretching factor indicative of the amount the texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction.
17. The computing system of claim 16, wherein the view-dependent texture is selected for display at the fragment when the stretching factor is less than a threshold.
18. The computing system of claim 16, wherein the base texture is selected for display at the fragment when the stretching factor is greater than a threshold.
19. A tangible non-transitory computer-readable medium comprising computer-readable instructions that when executed by one or more processors, cause the one or more processors to perform operations, the operations comprising:
identifying a perspective of a virtual camera for viewing a polygon mesh, the polygon mesh modeling geometry of a geographic area;
identifying a view-dependent texture associated with a reference direction, the view-dependent texture generated for viewing the three-dimensional model from a reference viewpoint associated with the reference direction;
identifying a base texture generated for viewing the three-dimensional model from a plurality of different viewpoints;
determining a viewpoint direction associated with a fragment of the polygon mesh, the viewpoint direction extending from the virtual camera towards the fragment; and
determining a texture for display at the fragment based at least in part on an amount that a texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction;
wherein the texture determined for display at the fragment comprises one or more elements of the base texture or the view-dependent texture.
20. The tangible non-transitory computer-readable medium of claim 19, wherein the texture is determined for display at the fragment based at least in part on a stretching factor indicative of the amount a texture mapped image is stretched when mapped to the fragment when viewed from the viewpoint direction, wherein the view-dependent texture is selected for display at the fragment when the stretching factor is less than a first threshold;
wherein the base texture is selected for display at the fragment when the stretching factor is greater than a second threshold; a blended texture is selected for display at the fragment based at least in part on the stretching factor when the stretching factor is between the first threshold and the second threshold, the blended texture comprising a blend between the base texture and the view-dependent texture.
US15/621,345 2013-06-19 2017-06-13 Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System Abandoned US20170278294A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/621,345 US20170278294A1 (en) 2013-06-19 2017-06-13 Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/921,631 US9165397B2 (en) 2013-06-19 2013-06-19 Texture blending between view-dependent texture and base texture in a geographic information system
US15/621,345 US20170278294A1 (en) 2013-06-19 2017-06-13 Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/921,631 Continuation US9165397B2 (en) 2013-06-19 2013-06-19 Texture blending between view-dependent texture and base texture in a geographic information system

Publications (1)

Publication Number Publication Date
US20170278294A1 (en) 2017-09-28

Family

ID=52110516

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/921,631 Active 2034-05-02 US9165397B2 (en) 2013-06-19 2013-06-19 Texture blending between view-dependent texture and base texture in a geographic information system
US14/875,886 Expired - Fee Related US9704282B1 (en) 2013-06-19 2015-10-06 Texture blending between view-dependent texture and base texture in a geographic information system
US15/621,345 Abandoned US20170278294A1 (en) 2013-06-19 2017-06-13 Texture Blending Between View-Dependent Texture and Base Texture in a Geographic Information System

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/921,631 Active 2034-05-02 US9165397B2 (en) 2013-06-19 2013-06-19 Texture blending between view-dependent texture and base texture in a geographic information system
US14/875,886 Expired - Fee Related US9704282B1 (en) 2013-06-19 2015-10-06 Texture blending between view-dependent texture and base texture in a geographic information system

Country Status (1)

Country Link
US (3) US9165397B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10325402B1 (en) * 2015-07-17 2019-06-18 A9.Com, Inc. View-dependent texture blending in 3-D rendering
US10586379B2 (en) 2017-03-08 2020-03-10 Ebay Inc. Integration of 3D models
GB2569546B (en) * 2017-12-19 2020-10-14 Sony Interactive Entertainment Inc Determining pixel values using reference images
US11727656B2 (en) 2018-06-12 2023-08-15 Ebay Inc. Reconstruction of 3D model with immersive experience
US10762690B1 (en) * 2019-02-11 2020-09-01 Apple Inc. Simulated overhead perspective images with removal of obstructions
JP7193728B2 (en) * 2019-03-15 2022-12-21 富士通株式会社 Information processing device and stored image selection method
CN112288873B (en) * 2020-11-19 2024-04-09 网易(杭州)网络有限公司 Rendering method and device, computer readable storage medium and electronic equipment
CN112862968B (en) * 2021-03-15 2024-01-19 网易(杭州)网络有限公司 Rendering display method, device and equipment of target vegetation model and storage medium
CN116433821B (en) * 2023-04-17 2024-01-23 上海臻图信息技术有限公司 Three-dimensional model rendering method, medium and device for pre-generating view point index

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6525731B1 (en) 1999-11-09 2003-02-25 Ibm Corporation Dynamic view-dependent texture mapping
US7116841B2 (en) * 2001-08-30 2006-10-03 Micron Technology, Inc. Apparatus, method, and product for downscaling an image
JP3855053B2 (en) * 2003-01-30 2006-12-06 国立大学法人 東京大学 Image processing apparatus, image processing method, and image processing program
US7348989B2 (en) 2003-03-07 2008-03-25 Arch Vision, Inc. Preparing digital images for display utilizing view-dependent texturing
US7221371B2 (en) * 2004-03-30 2007-05-22 Nvidia Corporation Shorter footprints for anisotropic texture filtering
GB2415344B (en) * 2004-06-14 2010-10-06 Canon Europa Nv Texture data compression and rendering in 3D computer graphics
US7369136B1 (en) * 2004-12-17 2008-05-06 Nvidia Corporation Computing anisotropic texture mapping parameters
US7747556B2 (en) 2005-02-28 2010-06-29 Microsoft Corporation Query-based notification architecture
US7925649B2 (en) 2005-12-30 2011-04-12 Google Inc. Method, system, and graphical user interface for alerting a computer user to new results for a prior search
US8314811B2 (en) * 2006-03-28 2012-11-20 Siemens Medical Solutions Usa, Inc. MIP-map for rendering of an anisotropic dataset
US8049750B2 (en) * 2007-11-16 2011-11-01 Sportvision, Inc. Fading techniques for virtual viewpoint animations
US8155880B2 (en) 2008-05-09 2012-04-10 Locomatix Inc. Location tracking optimizations
WO2010134502A1 (en) * 2009-05-18 2010-11-25 小平アソシエイツ株式会社 Image information output method
US20120311585A1 (en) 2011-06-03 2012-12-06 Apple Inc. Organizing task items that represent tasks to perform
US8595049B2 (en) 2009-09-08 2013-11-26 York Eggleston Method and system for monitoring internet information for group notification, marketing, purchasing and/or sales
IL202460A (en) * 2009-12-01 2013-08-29 Rafael Advanced Defense Sys Method and system of generating a three-dimensional view of a real scene
US8669976B1 (en) 2010-10-12 2014-03-11 Google Inc. Selecting and verifying textures in image-based three-dimensional modeling, and applications thereof
WO2012071445A2 (en) * 2010-11-24 2012-05-31 Google Inc. Guided navigation through geo-located panoramas
US8686852B2 (en) 2011-05-30 2014-04-01 Microsoft Corporation Location-based notification services
US10109255B2 (en) * 2012-06-05 2018-10-23 Apple Inc. Method, system and apparatus for dynamically generating map textures
US9064337B2 (en) * 2012-06-05 2015-06-23 Apple Inc. Method, system and apparatus for rendering a map with adaptive textures for map features
US9269178B2 (en) * 2012-06-05 2016-02-23 Apple Inc. Virtual camera for 3D maps
US8983778B2 (en) * 2012-06-05 2015-03-17 Apple Inc. Generation of intersection information by a mapping service

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040012603A1 (en) * 2002-07-19 2004-01-22 Hanspeter Pfister Object space EWA splatting of point-based 3D models
US7558400B1 (en) * 2005-12-08 2009-07-07 Nvidia Corporation Anisotropic texture filtering optimization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Krecklau et al., "View-Dependent Realtime Rendering of Procedural Facades with High Geometric Detail," EUROGRAPHICS 2013, Volume 32 (2013), Number 2, May 2013, 10 pages. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269304A (en) * 2017-12-22 2018-07-10 中国科学院电子学研究所苏州研究院 A scene fusion visualization method for multiple geographic information platforms

Also Published As

Publication number Publication date
US9165397B2 (en) 2015-10-20
US9704282B1 (en) 2017-07-11
US20140375633A1 (en) 2014-12-25

Similar Documents

Publication Publication Date Title
US9704282B1 (en) Texture blending between view-dependent texture and base texture in a geographic information system
US9626790B1 (en) View-dependent textures for interactive geographic information system
US10984582B2 (en) Smooth draping layer for rendering vector data on complex three dimensional objects
EP3655928B1 (en) Soft-occlusion for computer graphics rendering
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
WO2018188479A1 (en) Augmented-reality-based navigation method and apparatus
US10950040B2 (en) Labeling for three-dimensional occluded shapes
US10733786B2 (en) Rendering 360 depth content
US9965893B2 (en) Curvature-driven normal interpolation for shading applications
CN112288873B (en) Rendering method and device, computer readable storage medium and electronic equipment
US9547921B1 (en) Texture fading for smooth level of detail transitions in a graphics application
US10652514B2 (en) Rendering 360 depth content
US10275939B2 (en) Determining two-dimensional images using three-dimensional models
US10453247B1 (en) Vertex shift for rendering 360 stereoscopic content
WO2023224627A1 (en) Face-oriented geometry streaming
CN116310041A (en) Rendering method and device of internal structure effect, electronic equipment and storage medium
JP2023529787A (en) Method, Apparatus and Program for Constructing 3D Geometry
CN116668661A (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DILLARD, SCOTT E.;ALLEN, BRETT A.;GOLOVINSKIY, ALEKSEY;REEL/FRAME:042693/0689

Effective date: 20130619

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044567/0001

Effective date: 20170929

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION