US20150310596A1 - Automatically Generating Panorama Tours - Google Patents
- Publication number
- US20150310596A1 (Application No. US 14/260,862)
- Authority
- US
- United States
- Prior art keywords
- image
- images
- tour
- transition
- panoramic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T19/003—Navigation within 3D models or images
- G06F16/29—Geographical information databases
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
- G06F16/587—Retrieval using geographical or spatial information, e.g. location
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0485—Scrolling or panning
- G06Q10/00—Administration; Management
- G06Q10/101—Collaborative creation, e.g. joint development of products or services
- G06T11/60—Editing figures and text; Combining figures or text
- G06T3/60—Rotation of whole images or parts thereof
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V20/10—Terrestrial scenes
- G06V30/422—Technical drawings; Geographical maps
- G06T2207/20221—Image fusion; Image merging
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
Definitions
- panoramic images may include an image or collection of images having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater.
- panoramics may provide a 360-degree view of a location.
- Some systems may allow users to view images in sequences, such as in time or space. In some examples, these systems can provide a navigation experience in a remote or interesting location. Some systems allow users to feel as if they are rotating within a virtual world by clicking on the edges of a displayed portion of a panorama and having the panorama appear to “rotate” in the direction of the clicked edge.
- aspects of the disclosure provide a computer-implemented method for generating automated tours using images.
- the method includes receiving a request, by one or more computing devices, to generate an automated tour based on a set of images.
- Each particular image in the set of images is associated with geographic location information and linking information linking the particular image with one or more other images in the set of images.
- the one or more computing devices identify a starting image of the set of images, determine a second image of the set of images based at least in part on the starting image and the linking information associated with the starting and second images, determine a first transition of a first transition mode between the starting image and the second image based at least in part on the linking information associated with the starting image and the second image, determine additional images from the set of images for the tour based at least in part on the linking information associated with the additional images and whether a minimum image quantity constraint has been met, and determine a second transition of a second transition mode, different from the first transition mode, for between ones of the additional images.
- the one or more computing devices add to the tour an identifier for the first image, the first transition, a second identifier for the second image, the second transition, and the additional images according to an order of the tour.
- identifying the starting image is based on an image-type identifier associated with the starting image.
- the second transition of the second transition mode is not added to the tour according to the order of the tour until a second image constraint has been met for the tour, and the second image constraint includes a fixed number of images that have been added with transitions of the first transition mode.
- the second transition of a second transition mode is not added to the tour according to the order of the tour until a second image constraint has been met for the tour, and the second image constraint includes adding a given image of the set of images having linking information indicating that the given image is linked to at least three other images of the set of images.
- the second transition is determined based on a set of parameters that includes a transition turn percent parameter that identifies a percentage of transitions in the automated tour that will include at least some degree of rotation. In another example, the second transition is determined based on a set of parameters that includes a skip rotate percentage parameter that identifies a percentage of transitions in the automated tour that will not be followed by a panning rotation. In another example, the second transition is determined based on a set of parameters that includes a minimum degree of rotation parameter that identifies a minimum required degree of rotation for transitions between images. In another example, the second transition is determined based on a set of parameters that includes an overlap threshold parameter that identifies a minimum degree of overlap in the fields of view between two images at a particular orientation. In another example, determining the additional images from the set of images for the tour is further based on whether the automated tour includes a particular image of the set of images.
- the system includes one or more computing devices.
- the one or more computing devices are configured to receive a request to generate an automated tour based on a set of images.
- Each particular image in the set of images is associated with geographic location information and linking information linking the particular image with one or more other images in the set of images.
- the one or more computing devices are also configured to identify a starting image of the set of images; determine a second image of the set of images based at least in part on the starting image and the linking information associated with the starting and second images; determine a first transition of a first transition mode between the starting image and the second image based at least in part on the linking information associated with the starting image and the second image; determine additional images from the set of images for the tour based at least in part on the linking information associated with the additional images and whether a minimum image quantity constraint has been met; determine a second transition of a second transition mode, different from the first transition mode, for between ones of the additional images; and add to the tour: an identifier for the first image, the first transition, a second identifier for the second image, the second transition, and the additional images according to an order of the tour.
- the one or more computing devices are configured to determine the starting image based on an image-type identifier associated with the starting image. In another example, the one or more computing devices are configured to add the second transition of a second transition mode only when a second image constraint has been met for the tour, and the second image constraint includes that a fixed number of images have been added with transitions of the first transition mode. In another example, the one or more computing devices are configured to add the second transition of a second transition mode only when a second image constraint has been met for the tour, and the second image constraint includes adding a given image of the set of images having linking information indicating that the given image is linked to at least three other images of the set of images.
- the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a transition turn percent parameter that identifies a percentage of transitions in the automated tour that will include at least some degree of rotation. In another example, the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a skip rotate percentage parameter that identifies a percentage of transitions in the automated tour that will not be followed by a panning rotation. In another example, the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a minimum degree of rotation parameter that identifies a minimum required degree of rotation for transitions between images.
- a further aspect of the disclosure provides a non-transitory computer-readable storage medium on which computer readable instructions of a program are stored.
- the instructions, when executed by one or more processors, cause the processors to perform a method of generating automated tours using images.
- the method includes identifying a set of images, each particular image in the set of images being associated with geographic location information and linking information linking that image with one or more other images; identifying a starting image of the set of images; and generating an automated tour based on at least the starting image and a second image, the linking information associated with the set of images, and a set of requirements.
- the set of requirements includes a first requirement that the automated tour begin by displaying the starting image, a second requirement that the automated tour include a first transition of a first transition mode between the starting image and the second image of the set based at least in part on the linking information associated with the starting image and the second image, a third requirement that additional images of the set of images are added to the automated tour based on the linking information associated with the additional images until a minimum image quantity constraint has been met, and a fourth requirement that the automated tour include a second transition of a second transition mode, different from the first transition mode, between two of the additional images, and that the second transition mode is determined according to a set of parameters.
- identifying the starting image is based on an image-type identifier associated with the starting image.
- the method further comprises identifying the particular image based on a second image-type identifier associated with the starting image, the second image-type identifier being different from the image-type identifier.
- the third requirement further includes that additional images of the set of images are added to the automated tour based on the linking information associated with the additional images when at least one of (a) the minimum image quantity constraint has been met or (b) the automated tour includes a particular image of the set of images.
- FIG. 1 is a functional diagram of an example system in accordance with aspects of the disclosure.
- FIG. 2 is a pictorial diagram of the example system of FIG. 1 .
- FIG. 3 is an example diagram of panoramic image information in accordance with aspects of the disclosure.
- FIG. 4 is an example diagram of pre-linked panoramic images in accordance with aspects of the disclosure.
- FIG. 5 is an example diagram of pre-linked panoramic images and identifiers in accordance with aspects of the disclosure.
- FIG. 6 is an example diagram of transitions between panoramic images in accordance with aspects of the disclosure.
- FIG. 7 is an example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 8 is another example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 9 is another example diagram of a transition between panoramic images in accordance with aspects of the disclosure.
- FIG. 10 is an example diagram for determining a percentage of overlap between panoramic images in accordance with aspects of the disclosure.
- FIG. 11 is a further example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 12 is an example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 13 is an example diagram of pre-linked panoramic images and an automated tour in accordance with aspects of the disclosure.
- FIG. 14 is a flow diagram in accordance with aspects of the disclosure.
- the technology relates to automatically generating automated tours using panoramas or panoramic images.
- a user of a computing device may view or create an automated tour by selecting or indicating a starting panoramic image from a set of pre-linked panoramic images as well as one or more other panoramic images of the set.
- an automated tour may be generated by stringing together the selected panoramic image, the one or more other panoramic images, and, in some examples, additional panoramic images of the set.
- When the automated tour is displayed, it may appear as a virtual tour of a location captured in the panoramic images of the tour and may provide a user with a feeling that he or she is moving (e.g., walking or running, and in some cases turning) through that location.
- panoramic images may be arranged in sets. Each panoramic image in a set may be linked to one or more other panoramic images in the set.
- panoramic images may be linked to one another manually or automatically based on location and orientation information and subsequently confirmed by a user.
- a first image may be associated with linking information indicating an orientation of the first image that aligns with a second orientation of a second image to indicate a physical path between the images. This relationship may be displayed, for example, in a “constellation” or a series of points representing the location of panoramic images of the set with lines linking them together.
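As a sketch of how such a pre-linked set (a "constellation") might be represented, each panorama could carry its location and a map of links, where each link records the heading at which the image aligns with a neighbor. The class and field names here are illustrative assumptions, not the patent's actual data model.

```python
# Hypothetical sketch of a pre-linked panorama set ("constellation").
# Each link stores the heading (in degrees) at which this image aligns
# with a neighbor, indicating a physical path between the two images.

class Panorama:
    def __init__(self, pano_id, lat, lng):
        self.pano_id = pano_id
        self.lat = lat
        self.lng = lng
        self.links = {}  # neighbor pano_id -> aligned heading in degrees

    def link_to(self, other, heading_out, heading_back):
        """Link two panoramas bidirectionally with aligned headings."""
        self.links[other.pano_id] = heading_out % 360
        other.links[self.pano_id] = heading_back % 360

# Build a tiny constellation: exterior -> lobby -> hallway
exterior = Panorama("pano_exterior", 37.4220, -122.0840)
lobby = Panorama("pano_lobby", 37.4221, -122.0840)
hallway = Panorama("pano_hallway", 37.4222, -122.0840)
exterior.link_to(lobby, heading_out=0, heading_back=180)
lobby.link_to(hallway, heading_out=90, heading_back=270)
```

Drawing each panorama as a point and each bidirectional link as a line between points yields the "constellation" display described above.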
- a pre-linked set of panoramic images and a starting image for the tour are identified. This may be done manually, for example, by a user specifically selecting a particular panoramic image of a pre-linked set.
- a user may have pre-designated a particular image with an identifier indicating that the particular image is a starting image or has some type of relationship to the location (e.g., exterior of a building, interior of a building, hotel lobby, includes an exterior business sign, etc.).
- the system may automatically identify the pre-linked set for that location as well as the starting image for the set using the identifier.
- a second panoramic image of the pre-linked set may also be identified.
- the second panoramic image may be selected manually or based on an identifier (e.g., interior of a building, best interior image of a building, includes an interior business sign, etc.).
- An automated tour may then be generated using a number of predetermined requirements.
- an automated tour may be generated for a location corresponding to a business where a starting image is an exterior panoramic image and a second panoramic image is an interior panoramic image.
- a requirement may include that the automated tour begin by displaying the starting image and transitioning between images in a first transition mode (e.g., where the direction of motion is consistent, such as without rotation) according to the constellation until a predetermined number of images is displayed (e.g., 3 or 4) or a branch (in the constellation of the pre-linked set) is reached.
- a requirement may include that the tour continue to display images until the second panoramic image is reached, transitioning between images in a second transition mode.
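The first-mode walk described above can be sketched as a simple traversal of the constellation that stops at a fixed image count or at a branch. This is a minimal illustration under assumed conventions (a branch being a panorama linked to three or more images; links given as an adjacency map), not the patent's actual implementation.

```python
# Hypothetical sketch of the first tour phase: walk the constellation
# from the starting image in a consistent direction ("first mode")
# until a predetermined image count is reached or a branch
# (a panorama linked to 3+ other images) is encountered.

def first_phase(links, start, max_images=4):
    """links: dict mapping pano_id -> list of neighbor pano_ids."""
    tour = [start]
    current, previous = start, None
    while len(tour) < max_images:
        if len(links[current]) >= 3:   # branch in the constellation
            break
        nxt = [n for n in links[current] if n != previous]
        if not nxt:                    # dead end
            break
        previous, current = current, nxt[0]
        tour.append(current)
    return tour
```

On a simple chain this walks forward without backtracking; on reaching a branch it stops, after which second-mode transitions would take over.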
- the second transition mode may include various parameters that enable different automated tours to be less predictable and more varied.
- transitions between images may include a panning rotation (rotating within a panoramic image) in a first panoramic image before displaying the second panoramic image as well as a panning rotation after the transition is complete.
- a transition turn percent parameter may be used to determine a percentage of transitions that will include at least some degree of rotation (e.g., 35%).
- a skip rotate percentage parameter may be used to determine a percentage of transitions which will not be followed immediately by a panning rotation (e.g., 35%).
- Another parameter may include a minimum rotation degree (e.g., 130 degrees) for transitions.
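The three parameters above can be illustrated with a small sampling routine. The constant names, the decision logic, and the example values (35%, 35%, 130 degrees, taken from the examples above) are illustrative assumptions; the patent does not specify this implementation.

```python
# Hypothetical sketch of choosing second-mode transition behavior
# from the parameters described above. Values mirror the examples in
# the text; names and structure are illustrative, not the patent's.

import random

TRANSITION_TURN_PERCENT = 35   # % of transitions including some rotation
SKIP_ROTATE_PERCENT = 35       # % of transitions not followed by panning
MIN_ROTATION_DEGREES = 130     # minimum rotation when a turn occurs

def plan_transition(rng):
    """Randomly decide whether a transition turns and pans afterward."""
    turn = rng.random() < TRANSITION_TURN_PERCENT / 100
    pan_after = rng.random() >= SKIP_ROTATE_PERCENT / 100
    rotation = MIN_ROTATION_DEGREES if turn else 0
    return {"rotation_degrees": rotation, "pan_after": pan_after}
```

Randomizing these choices per transition is what makes different automated tours of the same set less predictable and more varied.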
- the transitions between images need not always be from the point of view of a user moving forward through space, but may also be to the side, backwards, etc.
- an overlap threshold parameter may be used.
- the fields of view of two panoramic images may be determined based on the parameters of the first or second transition mode. These fields of view may be projected onto an imaginary wall at a fixed distance. If the overlap of (a) the cone that is projected from the first image with (b) the cone that is projected from the second panoramic image does not satisfy the overlap threshold parameter (e.g., a threshold of 50% overlap), then the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied. If the rotation required to meet the overlap is less than the minimum rotation degree, the minimum rotation degree may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
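The wall-projection check can be sketched as follows: each image's field of view is projected onto a wall at a fixed distance, and the panning rotation is increased until the projected spans overlap by at least the threshold. The geometry, distance, and step size here are assumptions for illustration (and valid only for headings well below 90 degrees), not the patent's stated method.

```python
# Hypothetical sketch of the overlap check: project each field of view
# onto an imaginary wall at a fixed distance, then increase the panning
# rotation of the first image until the projected spans overlap by at
# least the threshold (e.g., 50%). Names and values are illustrative.

import math

def wall_span(heading_deg, fov_deg, distance=10.0):
    """Project a field of view centered on a heading onto a flat wall."""
    half = math.radians(fov_deg) / 2
    center = distance * math.tan(math.radians(heading_deg))
    half_width = distance * math.tan(half)
    return center - half_width, center + half_width

def overlap_fraction(h1, h2, fov_deg=90.0):
    """Fraction of the first image's projected span covered by the second."""
    a0, a1 = wall_span(h1, fov_deg)
    b0, b1 = wall_span(h2, fov_deg)
    inter = max(0.0, min(a1, b1) - max(a0, b0))
    return inter / (a1 - a0)

def rotate_to_meet_threshold(h1, h2, threshold=0.5, step=1.0):
    """Pan the first image toward the second until overlap suffices."""
    rotation = 0.0
    while (overlap_fraction(h1 + rotation, h2) < threshold
           and abs(rotation) < 80):  # guard against tan() blow-up
        rotation += step if h2 > h1 else -step
    return rotation
```

If the rotation found this way falls below the minimum rotation degree, the minimum could then be relaxed or waived, as the text describes.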
- Another requirement may include that once the second panoramic image is reached, if at least a minimum number of panoramic images have been displayed, then the automated tour may end. If not, the automated tour may continue to include other panoramic images in the set, for example, by using the constellation to reach the farthest panoramic image from the second panoramic image until the minimum number of panoramic images has been reached. In some examples, the last panoramic image may be rotated so that the last view in an automated tour is oriented towards the second panoramic image.
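Reaching the panorama farthest from the second image can be sketched as a breadth-first search over the constellation, appending the path to the farthest panorama until the minimum image count is met. The traversal choice (BFS) and names are assumptions for illustration.

```python
# Hypothetical sketch of extending a tour past the second image: a
# breadth-first search over the constellation finds the farthest
# panorama from the current image, and the path toward it is appended
# until the minimum image count is met. Names are illustrative.

from collections import deque

def path_to_farthest(links, source):
    """BFS returning the path from source to the farthest panorama."""
    parent = {source: None}
    queue = deque([source])
    last = source
    while queue:
        node = queue.popleft()
        last = node  # final dequeued node is at maximum BFS distance
        for nbr in links[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    path = []
    while last is not None:
        path.append(last)
        last = parent[last]
    return path[::-1]  # source ... farthest

def extend_tour(tour, links, min_images):
    """Append panoramas along the path to the farthest image."""
    extension = path_to_farthest(links, tour[-1])[1:]
    for pano in extension:
        if len(tour) >= min_images:
            break
        tour.append(pano)
    return tour
```

After extension, the final panorama's view could be rotated back toward the second image, per the last requirement above.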
- Other parameters may also be used to generate the automated tour, for example, the amount of time that it takes to complete a transition between images, the percentage of time spent rotating during a transition, the minimum amount of time for a rotation, the speed of a rotation (e.g., degrees per second), and whether that speed is adjusted during a rotation (e.g., faster, then slower, then faster).
- the automated tours may be pre-determined using the above features and then stored until requested by a user to be displayed.
- the stored automated tour may include, for example, a set of parameters including identifiers for panoramas to be displayed, order, rotations, timing, and transition information.
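A stored tour of this kind might look like the following ordered structure; every field name and value here is a hypothetical illustration, not the patent's actual storage schema.

```python
# Hypothetical sketch of a stored automated tour: an ordered list of
# steps naming each panorama, its timing, and the transition into the
# next image. Field names and values are illustrative only.

tour = {
    "tour_id": "tour-001",
    "steps": [
        {"pano_id": "pano_exterior", "dwell_seconds": 2.0,
         "transition": {"mode": "first", "rotation_degrees": 0}},
        {"pano_id": "pano_lobby", "dwell_seconds": 2.0,
         "transition": {"mode": "second", "rotation_degrees": 130,
                        "pan_after": True}},
        {"pano_id": "pano_hallway", "dwell_seconds": 3.0,
         "transition": None},  # final image: no outgoing transition
    ],
}
```

A player would walk this list in order, displaying each panorama and animating the recorded transition, whether the tour was pre-computed and stored or generated on demand.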
- the automated tours may be generated “on demand” in response to a specific request for an automated tour.
- an automated tour may provide a user with a comprehensive view of a location without needing to “click” multiple times in order to maneuver around the location.
- FIGS. 1 and 2 include an example system 100 in which the features described above may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein.
- system 100 can include computing devices 110 , 120 , 130 , and 140 as well as storage system 150 .
- Computing device 110 can contain one or more processors 112 , memory 114 and other components typically present in general purpose computing devices.
- Memory 114 of computing device 110 can store information accessible by processor 112 , including instructions 116 that can be executed by the processor 112 .
- Memory can also include data 118 that can be retrieved, manipulated or stored by the processor.
- the memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
- the instructions 116 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the processor.
- the terms “instructions,” “application,” “steps” and “programs” can be used interchangeably herein.
- the instructions can be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
- Data 118 can be retrieved, stored or modified by processor 112 in accordance with the instructions 116 .
- the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents.
- the data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode.
- the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.
- the one or more processors 112 can include any conventional processors, such as a commercially available CPU. Alternatively, the processor can be a dedicated component such as an ASIC or other hardware-based processor. Although not necessary, computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc. faster or more efficiently.
- Although FIG. 1 functionally illustrates the processor, memory, and other elements of computing device 110 as being within the same block, the processor, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing.
- the memory can be a hard drive or other storage media located in a housing different from that of computing devices 110 .
- references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel.
- the computing devices 110 may include server computing devices operating as a load-balanced server farm.
- Although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 160.
- the computing devices 110 can be at various nodes of a network 160 and capable of directly and indirectly communicating with other nodes of network 160 . Although only a few computing devices are depicted in FIGS. 1-2 , it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device being at a different node of the network 160 .
- the network 160 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks.
- the network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing.
- computing devices 110 may include one or more web servers that are capable of communicating with storage system 150 as well as computing devices 120 , 130 , and 140 via the network.
- server computing devices 110 may use network 160 to transmit and present information to a user, such as user 220 , 230 , or 240 , on a display, such as displays 122 , 132 , or 142 of computing devices 120 , 130 , or 140 .
- computing devices 120 , 130 , and 140 may be considered client computing devices and may perform all or some of the features described below.
- Each of the client computing devices may be configured similarly to the server computing devices 110 , with one or more processors, memory and instructions as described above.
- Each client computing device 120 , 130 or 140 may be a personal computing device intended for use by a user 220 , 230 , or 240 , and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display such as displays 122 , 132 , or 142 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and user input device 125 (e.g., a mouse, keyboard, touch-screen or microphone).
- the client computing device may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another.
- Although client computing devices 120 , 130 and 140 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet.
- client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet.
- client computing device 130 may be a head-mounted computing system.
- the user may input information using a small keyboard, a keypad, microphone, using visual signals with a camera, or a touch screen.
- Storage system 150 may store various images, such as panoramic images including a single image or a collection of images as described above having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater.
- the example panoramic images described herein provide a 360-degree view of a location, though other types of images, such as those having a view of less than 360 degrees as well as combinations of images with different viewing areas, may also be used.
- each panoramic image may be associated with an image identifier that may be used to retrieve the panoramic image, geographic location information indicating the location and orientation at which the panoramic image was captured (e.g., a latitude longitude pair as well as an indication of which portion of the panoramic image faces a given direction such as North), as well as timestamp information indicating the date and time at which the panoramic image was captured.
- FIG. 3 is an example 300 of a map 302 of an area including a set of buildings 304 , 306 , and 308 located near a road 310 .
- Map 302 also includes the geographic locations of a plurality of panoramic images 1 - 19 . The panoramic images are depicted relative to the set of buildings 304 , 306 , and 308 .
- the storage system may store a set of panoramic images that have been pre-linked together or pre-linked sets of panoramic images.
- a pre-linked set of panoramic images may include information such as the panoramic images themselves or the image identifiers for the panoramic images in the sets.
- each of the panoramic images of a set may be associated with one or more links to other panoramic images.
- a panoramic image may be linked to one or more other panoramic images in the set in a particular arrangement as described in more detail below.
- the panoramic images may be linked to one another manually or automatically based on location and orientation information and subsequently confirmed by a user.
- the links may describe physical relationships or virtual paths between the panoramic images that may be used to provide a navigation experience. These virtual paths may also be thought of as a relationship in three dimensions between images.
- a link when viewing a particular panoramic image, a link may describe a first orientation of the particular panoramic image and a second orientation of another panoramic image in the set. Moving from the first orientation in the first panoramic image to the second orientation in the second panoramic image would create the feeling of moving straight ahead through a virtual space between the two panoramic images such as walking down a hallway, etc. Using the links in reverse would create the feeling of moving backwards through the same virtual space.
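The linking structure described above can be sketched as a small data type. This is an illustrative sketch, not the reference implementation; the class name, field names, and the compass-degree heading convention are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    """A virtual path between two panoramic images.

    Traversing the link displays image_a at heading_a, then image_b at
    heading_b, creating the feeling of moving straight ahead through
    the virtual space between the two images.
    """
    image_a: str      # identifier of the first panoramic image
    heading_a: float  # first orientation, in compass degrees
    image_b: str      # identifier of the second panoramic image
    heading_b: float  # second orientation, in compass degrees

    def reversed(self) -> "Link":
        # Using the link in reverse creates the feeling of moving
        # backwards through the same virtual space: swap the endpoints
        # and turn each heading 180 degrees.
        return Link(self.image_b, (self.heading_b + 180) % 360,
                    self.image_a, (self.heading_a + 180) % 360)

print(Link("5", 90.0, "6", 90.0).reversed())
```

The `reversed` method encodes the observation that following a link backwards swaps its endpoints and rotates each orientation by 180 degrees.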
- a given set of pre-linked panoramic images may be used to provide a “static” tour of those images such that a user may click on and maneuver manually through the tour using the links.
- FIG. 4 is an example 400 of panoramic images 1 - 19 that are included in a particular set of pre-linked panoramic images 402 .
- Linking lines 404 represent the links or relationships between the panoramic images in the set. These links and the panoramic images are arranged in a “constellation” of panoramic images.
- the link between panoramic images 5 and 6 may describe the first and second orientations that can be used to represent a virtual path between the images.
- the linking information may be used to transition the view of panoramic image 5 to the first orientation of the link and subsequently transition to the second orientation in panoramic image 6 .
- a plurality of panoramic images and links may provide a pleasing navigation experience.
- each panoramic image is generally linked to one or more other panoramic images in the set of panoramic images 402 (panoramic images 1 - 19 ).
- the panoramic images in a set need not be linked to all of the other panoramic images in the set.
- Storage system 150 may also store automated tour information.
- This automated tour information may include information that can be used to generate an automated tour.
- an automated tour may include an ordered set of panoramic images (or image identifiers), rotations, timing, and transition information.
- storage system 150 can be of any type of computerized storage capable of storing information accessible by server 110 , such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
- storage system 150 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 150 may be connected to the computing devices via the network 160 as shown in FIG. 1 and/or may be directly connected to any of the computing devices 110 - 140 (not shown).
- Automated tours may be generated in advance (e.g., “off-line” and stored in order to be served to a user later), or in real time in response to a specific request for an automated tour.
- a user may use a client computing device, such as client computing device 120 , to request an automated tour.
- a user may enter a query into a web, image or map search engine for a particular location or subject.
- the request may be sent to one or more server computing devices, such as server computing device 110 , that in response send an option for an automated tour to the client computing device.
- the option may be shown as a button on a web, image or map search results page or as an icon or other overlay on the page.
- the option may appear when the user has selected a panoramic image that is part of an automated tour (if one has already been generated and stored) or if the image is included in a set of pre-linked panoramic images (if an automated tour has not yet been generated).
- a pre-linked set of panoramic images and a starting image for the tour are identified. This may be done manually, for example, by a user specifically selecting a particular panoramic image of a pre-linked set using a client computing device.
- a user may enter a query into an image search engine for a particular location or subject and receive a set of images in response.
- the user may select a panoramic image from the set of images that is included in a pre-linked set of panoramic images. This selection may thus identify a starting image for an automated tour as well as the set of images in which the selected panoramic image appears.
- a user may request to view a constellation (for example, as shown in FIG. 4 ) of a pre-linked set of panoramic images, such as panoramic images 1 - 19 .
- the user may select a starting image from the constellation using a mouse pointer and corresponding user input device (e.g., a mouse or touch pad).
- the starting image may then be sent to the one or more server computing devices 110 by client computing device 120 .
- a user may have pre-designated a particular image in the set as a starting image.
- an identifier indicating that the particular image is a starting image or has some type of relationship to the location (e.g., exterior of a building, interior of a building, hotel lobby, includes an exterior business sign etc.) may be assigned to that particular panoramic image by the user.
- the client computing device may send the request to the one or more server computing devices 110 .
- the one or more server computing devices 110 may automatically identify the pre-linked set for that location as well as the starting image for the set using the identifier.
- FIG. 5 is an example 500 of the pre-linked set of panoramic images 402 where two of the panoramic images include identifiers.
- panoramic images 1 and 7 each include a respective identifier 502 and 504 .
- identifier 502 may identify panoramic image 1 as a starting image or as having some type of relationship to the location of the set of pre-linked panoramic images. As discussed above, identifier 502 may be used by the one or more server computing devices 110 to identify panoramic image 1 as a starting image for an automated tour.
- a target panoramic image of the pre-linked set may also be identified.
- the target panoramic image may be selected manually using any of the examples above. Again, if selected manually, the target panoramic image may then be sent to the one or more server computing devices 110 by client computing device 120 .
- the target panoramic image may be selected automatically based on an identifier (e.g., an identifier that indicates interior of a building, best interior image of a building, includes an interior business sign, etc.).
- the one or more server computing devices 110 may also automatically identify the target panoramic image using the identifier.
- identifier 504 may identify panoramic image 7 as a target image or as having some type of relationship to the location of the set of pre-linked panoramic images.
- identifier 504 may be used by the one or more server computing devices 110 to identify panoramic image 7 as a target panoramic image for an automated tour.
- an automated tour may be generated.
- an automated tour may be generated for a location corresponding to a business within building 304 , where a starting image is an exterior panoramic image and a target panoramic image is an interior panoramic image as shown in the example of FIG. 5 .
- the automated tour may include an ordered list of panoramic images (or image identifiers) as well as other information including for example, how to transition between panoramic images in the automated tour (e.g., timing, orientations, etc.).
- the automated tour may begin by displaying the starting image and transitioning to other panoramic images in the tour.
- panoramic images are added to an automated tour starting with the starting panoramic image. This may continue until the target panoramic image is reached.
- the linking information may be used to determine a shortest route between the starting image and the target panoramic image along the linking lines between the panoramic images of a set of pre-linked panoramic images.
- Dijkstra's algorithm may be used.
- panoramic images may be added to the tour starting with panoramic image 1 (the starting image) until panoramic image 7 is added (the target panoramic image).
- the shortest route determination using the linking information would require that the panoramic images be added in the order of panoramic images 1 , 2 , 3 , 4 , 5 , 6 , and subsequently 7 .
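As a sketch of the shortest-route step, Dijkstra's algorithm can be run over the linking lines with unit edge weights. The adjacency list below is hypothetical, chosen only to mirror the constellation of FIG. 4 in which panoramic images 1 through 6 form a chain and image 6 branches to images 7 and 8.

```python
import heapq

def shortest_route(links, start, target):
    """Shortest route along linking lines, via Dijkstra with unit weights."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    # Priority queue entries: (cost so far, current image, route taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, image, route = heapq.heappop(queue)
        if image == target:
            return route
        if image in visited:
            continue
        visited.add(image)
        for neighbor in graph.get(image, ()):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + 1, neighbor, route + [neighbor]))
    return None

# Hypothetical links mirroring part of the constellation in FIG. 4.
links = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (6, 8), (8, 9)]
print(shortest_route(links, 1, 7))  # [1, 2, 3, 4, 5, 6, 7]
```

With these links, the route from the starting image 1 to the target image 7 adds the images in exactly the order described above.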
- the automated tour may be complete (e.g., no additional panoramic images are added) or additional panoramic images of the set of pre-linked panoramic images may be added to the tour as discussed in detail below.
- the automated tour may include transitions between the panoramic images.
- a first transition between the starting panoramic image and the second panoramic image added to the automated tour may be of a first transition type where the direction of motion is consistent (e.g., without rotation).
- FIG. 6 is an example 600 of two different types of transitions without rotation, though various other types of consistent transitions may be used.
- panoramic images A and B are represented by circles 602 and 604 , respectively. These panoramic images are linked according to line 606 .
- Panoramic image A is shown with an arrow 612 indicating an orientation at which panoramic image A is displayed to a user relative to the circle 602 .
- panoramic image B is shown with an arrow 614 indicating an orientation at which panoramic image B is displayed to a user relative to the circle 604 after the transition.
- the relative orientation between the images does not change, and there is no rotation as part of the transition.
- panoramic images C and D are represented by circles 622 and 624 , respectively. These panoramic images are linked according to line 626 .
- Panoramic image C is shown with an arrow indicating an orientation at which panoramic image C is displayed to a user relative to the circle 622 . If the orientation of the view of the panoramic images remains constant, a transition from panoramic image C to panoramic image D would appear as if the user were moving straight ahead along line 626 .
- panoramic image D is shown with an arrow indicating an orientation at which panoramic image D is displayed to a user relative to the circle 624 after the transition.
- the relative orientation between the images does not change, and there is no rotation as part of the transition.
- Additional images of the set of pre-linked panoramic images may be added to the automated tour in order according to the linking information as noted above.
- the transitions between these images may all be of the first transition mode, for example, consistent transitions without rotation as described above. This may continue until a predetermined threshold condition has been met.
- a predetermined threshold condition may be that a number of images has been added to the tour, an image that is linked to more than two images has been added to the automated tour, or a combination of these (e.g., the threshold is met when at least one of these conditions is true).
- Example 700 of FIG. 7 is an example of a partially-generated automated tour 702 .
- the partially-generated automated tour 702 is shown as a thicker line over the linking lines.
- the tour includes 4 images.
- each of the transitions, between panoramic images 1 and 2 , panoramic images 2 and 3 , and panoramic images 3 and 4 , may all be of the first transition mode or a consistent transition as described above.
- if the predetermined threshold is 3 images, the transition between panoramic images 3 and 4 , as well as between any additional panoramic images of the set of pre-linked panoramic images added to the automated tour, may be of a second transition mode, different from the first transition mode.
- the second transition mode is described in more detail below.
- Example 800 of FIG. 8 is an example of a partially-generated automated tour 802 .
- the partially-generated automated tour 802 includes panoramic image 6 .
- Panoramic image 6 is at a branch because it connects to both panoramic image 7 as well as panoramic image 8 according to the linking lines.
- panoramic image 6 connects to three other panoramic images in the set of pre-linked panoramic images: panoramic images 5 , 7 , and 8 . This is the first branch, or panoramic image linked to more than two images, in the set of pre-linked panoramic images 402 that has been added to the automated tour.
- each of the transitions, between panoramic images 1 and 2 , panoramic images 2 and 3 , panoramic images 3 and 4 , panoramic images 4 and 5 , and panoramic images 5 and 6 , may all be of the first transition mode or a consistent transition as described above.
- if the predetermined threshold is that an image that is linked to more than two images is added to the automated tour, then the transition between panoramic image 6 and the next panoramic image added to the automated tour (as well as between any additional panoramic images of the set of pre-linked panoramic images added to the automated tour) may be of the second transition mode, different from the first transition mode.
- the threshold may include both a number of images have been added to the tour and an image that is linked to more than two images is added to the automated tour such that if either condition is met, the second transition is used between additional images added to the automated tour.
- if the number of images of the predetermined threshold is 4, this threshold will be met before panoramic image 6 (the branch) is reached, and the transitions between panoramic images 4 and 5 , panoramic images 5 and 6 , as well as any additional panoramic images added to the automated tour, will be of the second transition mode.
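The threshold logic discussed above can be sketched as follows. The function name, the default count of 4, and the rule that either condition alone triggers the switch are assumptions consistent with the examples.

```python
def transition_mode(images_added, links_per_image, count_threshold=4):
    """Pick the transition mode for the next image added to a tour.

    Returns 1 (first mode: consistent, without rotation) until a
    predetermined threshold condition is met, then 2 (second mode:
    panning rotations). Either condition triggers the switch: enough
    images have been added, or an added image is a branch (linked to
    more than two other images).
    """
    if len(images_added) >= count_threshold:
        return 2
    if any(links_per_image[image] > 2 for image in images_added):
        return 2
    return 1

# Hypothetical link counts: image 6 is a branch (linked to 5, 7, and 8).
links_per_image = {1: 1, 2: 2, 3: 2, 4: 2, 5: 2, 6: 3, 7: 2}
print(transition_mode([1, 2, 3], links_per_image))     # 1: neither met
print(transition_mode([1, 2, 3, 4], links_per_image))  # 2: count met
print(transition_mode([1, 2, 6], links_per_image))     # 2: branch added
```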
- the second transition mode may include various panning and rotation movements to give a user the feeling that he or she is turning in a virtual space.
- transitions between images may include a panning rotation (rotating within a panoramic image) in a first panoramic image before displaying the second panoramic image as well as a panning rotation after the transition is complete.
- Panning rotations may provide a user with the feeling that he or she is turning in place “within” a panoramic image.
- FIG. 9 is an example 900 of a transition including panning rotations.
- panoramic images E and F are represented by circles 902 and 904 , respectively. These panoramic images are linked according to line 906 .
- Panoramic image E is shown with an arrow 912 indicating an orientation at which panoramic image E is displayed to a user relative to the circle 902 . If image E is displayed at the orientation of arrow 912 , before displaying image F, panoramic image E may be panned or rotated an angle of θ1 until panoramic image E is displayed at the orientation of arrow 914 relative to circle 902 . At this point, a transition from panoramic image E to panoramic image F would appear as if the user were moving along line 906 facing in the direction of arrow 914 until panoramic image F is displayed at the orientation of arrow 922 relative to circle 904 .
- a second panning may be performed such that panoramic image F may be panned or rotated an angle of θ2 until panoramic image F is displayed at the orientation of arrow 924 relative to circle 904 .
- the second transition mode as applied between two panoramic images, may include a first panning in the first panoramic image, a transition between the first panoramic image and the second panoramic image, and a second panning in the second panoramic image.
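A minimal sketch of the second transition mode, assuming headings are compass degrees: pan the first image to face along the link (θ1), move along the link, then pan the second image to its final orientation (θ2). The helper names are illustrative.

```python
def signed_delta(a, b):
    """Smallest signed rotation, in degrees, from heading a to heading b."""
    return (b - a + 180) % 360 - 180

def panning_transition(view_heading, link_heading, final_heading):
    """Plan a second-mode transition between two panoramic images.

    theta1 pans the first image from its current view heading to the
    link heading, the move travels along the link, and theta2 pans the
    second image from the link heading to its final heading.
    """
    theta1 = signed_delta(view_heading, link_heading)
    theta2 = signed_delta(link_heading, final_heading)
    return [("pan", theta1), ("move", link_heading), ("pan", theta2)]

print(panning_transition(10.0, 90.0, 45.0))
# [('pan', 80.0), ('move', 90.0), ('pan', -45.0)]
```

The signed delta keeps each pan within plus or minus 180 degrees so the rotation always takes the shorter way around.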
- a transition turn percent parameter may be used to determine a percentage of transitions that will include at least some degree of rotation (e.g., 35%).
- a skip rotate percentage parameter may be used to determine a percentage of transitions which will not be followed immediately by a panning rotation (e.g., 35%). For example, returning to FIG. 9 , the panning rotation in panoramic image F may be dropped from the transition.
- Another parameter may include a minimum rotation degree (e.g., 130 degrees) for panning rotations before or after transitions. In this regard, the transitions between images need not always be from the point of view of a user moving forward through space, but also to the side, backwards, etc. as described above.
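A sketch of how these tuning parameters might be applied to a single transition. The defaults mirror the example values above (35%, 35%, 130 degrees); passing the uniform samples in explicitly, rather than drawing them internally, is an assumption made here for determinism.

```python
def plan_rotation(turn_sample, skip_sample, required_deg,
                  transition_turn_percent=0.35,
                  skip_rotate_percent=0.35,
                  min_rotation_deg=130.0):
    """Decide whether a transition rotates and whether a trailing pan runs.

    turn_sample and skip_sample are uniform draws in [0, 1);
    required_deg is the rotation the geometry of the link calls for.
    """
    rotates = turn_sample < transition_turn_percent
    # When a rotation is used, it honors the minimum rotation degree.
    rotation_deg = max(required_deg, min_rotation_deg) if rotates else 0.0
    trailing_pan = rotates and not (skip_sample < skip_rotate_percent)
    return rotation_deg, trailing_pan

print(plan_rotation(0.10, 0.90, required_deg=45.0))  # (130.0, True)
print(plan_rotation(0.80, 0.10, required_deg=45.0))  # (0.0, False)
```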
- an overlap threshold parameter may be used.
- the fields of view of two panoramic images when the automated tour switches between the two panoramic images may be projected onto an imaginary wall at a fixed distance.
- the threshold parameter may be defined as a percentage of overlap between these projections, such as 25%, 50%, 75%, or more or less.
- the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied. If the panning rotation required to meet the overlap is less than the minimum rotation degree parameter, the minimum rotation degree parameter may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
- the fields of view of two panoramic images G and H when displayed at the same relative orientation may be determined.
- the orientation of arrows 1002 and 1004 may be determined based on the transition mode between these two images in the tour. For instance, the orientation may be determined based on whether the transition includes a panning rotation in panoramic image G and the orientation of panoramic image G prior to the start of the transition.
- Panoramic images G and H are separated by a distance d along linking line 1006 .
- the field of view of panoramic image G is defined between lines 1010 and 1012 , where arrow 1002 is centered between lines 1010 and 1012 .
- the field of view of panoramic image H is defined between lines 1014 and 1016 , where arrow 1004 is centered between lines 1014 and 1016 .
- the viewing angle (e.g., the angular distance between lines 1010 and 1012 or lines 1014 and 1016 ) of each field of view will depend upon the size of the images, such as the length, width, number of pixels, etc. that will be displayed in the automated tour.
- the fields of view may be projected onto an imaginary wall at a fixed distance w from each of the panoramic images G and H.
- w may be an arbitrary distance such as 2 meters or more or less.
- the projections may be in three dimensions on the wall and appear in the shape of a cone or rectangle.
- the projection from panoramic image G covers the area of K on the wall
- the projection from panoramic image H covers the area of L on the wall.
- the overlap between these projections is the area of J.
- the percentage of overlap may be defined by the ratio of J to K (or J to L). This percentage of overlap may be compared to the overlap threshold parameter.
- the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied, so that J/K is at least 50%. If the rotation required to meet the overlap is less than the minimum rotation degree parameter, the minimum rotation degree parameter discussed above may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
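The overlap test can be sketched in one dimension: each field of view projects an interval onto the wall at distance w, and the percentage of overlap is the ratio of the intersection J to one projection K. Treating both projections as equal-width intervals on a wall perpendicular to the viewing direction is a simplifying assumption.

```python
import math

def overlap_percent(fov_deg, w, d):
    """Fraction J/K of overlap between two equal fields of view.

    Both views are projected onto a wall at distance w; the cameras are
    separated laterally by d. Each projection is an interval of
    half-width w * tan(fov / 2), so K is the interval length and J is
    the length of the intersection of the two intervals.
    """
    half_width = w * math.tan(math.radians(fov_deg) / 2)
    k = 2 * half_width                 # area (length) of one projection, K
    j = max(0.0, 2 * half_width - d)   # intersection of the intervals, J
    return j / k

# 90-degree fields of view, wall 2 m away, cameras 1 m apart.
print(round(overlap_percent(90.0, 2.0, 1.0), 2))  # 0.75
```

If the computed fraction falls below the overlap threshold parameter, the panning rotation before the transition would be increased until the fraction satisfies the threshold.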
- example 1100 includes a completed automated tour 1102 .
- the automated tour 1102 includes seven panoramic images, as well as the transitions between them. This includes panoramic image 7 , the target panoramic image.
- the minimum number of panoramic images may have been 7 or less such that the automated tour is complete when panoramic image 7 has been added.
- additional panoramic images of the set of pre-linked images may be added to the tour until the minimum number of panoramic images has been met.
- These images may be selected, for example, using the linking information to identify a route along the linking lines to the panoramic image of the set of pre-linked panoramic images that is farthest in distance (not necessarily along the linking lines) from the target panoramic image and not already included in the automated tour.
- the route may be determined using any path planning algorithm such as one that determines the route to that farthest panoramic image that passes through as many panoramic images as possible along the linking lines. In some cases, the route may also be determined such that it does not cross the same panoramic image twice and/or passes through the fewest number of panoramic images already added to the automated tour.
- Panoramic images along the route may then be added to the automated tour with transitions as discussed above until the minimum number of panoramic images has been met. If the farthest panoramic image has been added to the panoramic tour, for instance by adding panoramic images along the route as noted above, but the minimum number of panoramic images has not been met, another farthest panoramic image from that farthest image may be identified and used to add additional panoramic images as described above and so on until the minimum number of panoramic images has been met.
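The first step of the extension above can be sketched as: find the panoramic image farthest in straight-line distance from the target that is not already in the tour. The coordinates below are hypothetical.

```python
import math

def farthest_unvisited(coords, tour, target):
    """Image farthest in straight-line distance from target, not in tour.

    Distance is measured directly between geographic positions, not
    along the linking lines.
    """
    candidates = [image for image in coords if image not in tour]
    if not candidates:
        return None
    tx, ty = coords[target]
    return max(candidates,
               key=lambda i: math.hypot(coords[i][0] - tx, coords[i][1] - ty))

# Hypothetical positions: image 1 is physically farthest from image 7,
# but it is already in the tour, so image 14 is chosen instead.
coords = {1: (0, 0), 7: (60, 0), 12: (55, 10), 14: (30, 40)}
tour = [1, 7]
print(farthest_unvisited(coords, tour, 7))  # 14
```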
- FIG. 12 is an example 1200 depicting the identification of the farthest panoramic image from the target panoramic image.
- panoramic image 14 is farthest in distance from panoramic image 7 .
- the dashed line 1204 representing this distance does not follow the linking lines.
- although panoramic image 1 is physically farthest from panoramic image 7 , it has already been added to the partially-completed automated tour 1202 .
- a route between panoramic image 7 and panoramic image 14 may be determined.
- the route may be determined such that it passes through as many panoramic images as possible along the linking lines without crossing the same panoramic image twice.
- the determined route may include transitioning from panoramic image 7 to panoramic image 8 to panoramic image 9 to panoramic image 10 to panoramic image 11 to panoramic image 19 to panoramic image 18 to panoramic image 17 to panoramic image 16 to panoramic image 15 and finally, to panoramic image 14 along the linking lines.
- Panoramic images along this route may then be added to the automated tour with transitions as described above until the minimum number of panoramic images has been met.
- the automated tour may be complete when panoramic image 19 has been added (e.g., before panoramic image 14 is added).
- the automated tour may be complete when panoramic image 17 is added as shown in the completed panoramic tour 1302 of FIG. 13 .
- the last panoramic image in an automated tour may be rotated so that the last view in an automated tour is oriented towards the target panoramic image.
- a panning rotation may be added to the end of the tour so that the panoramic image 19 is rotated into the orientation of arrow 1304 such that the user has the feeling that he or she is facing the location of the target panoramic image 7 .
- panoramic image 14 is the last panoramic image in the automated tour
- a panning rotation may be added to the end of the tour so that the panoramic image 14 is rotated into the orientation of arrow 1306 such that the user has the feeling that he or she is facing the location of the target panoramic image 7 .
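The closing rotation can be sketched by computing the compass bearing from the last panoramic image's location to the target's location and panning to face it. The coordinate convention (x east, y north, bearing 0 = north) and the positions are assumptions.

```python
import math

def bearing_to(src, dst):
    """Compass bearing in degrees from src toward dst (0 = north, 90 = east)."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return math.degrees(math.atan2(dx, dy)) % 360

# Pan the last image in the tour to face the target image's location.
last_image_pos = (30.0, 40.0)  # hypothetical position of panoramic image 14
target_pos = (60.0, 0.0)       # hypothetical position of panoramic image 7
print(round(bearing_to(last_image_pos, target_pos), 1))  # 143.1
```

The quadrant-aware `atan2` keeps the bearing correct in all directions, and the final modulo maps it into [0, 360).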
- Other parameters may also be used to tune the transitions of the first or second transition mode, as applicable. These include, for example, the amount of time that it takes to complete a transition between images in either transition mode, the percentage of time spent rotating for a transition in the second transition mode, the minimum amount of time for a rotation in the second transition mode, the speed of a rotation (e.g., degrees per second of rotation) in the second transition mode, and whether that speed is adjusted during a rotation (e.g., faster, slower, faster) in the second transition mode.
- the completed automated tours may be served immediately to a user (e.g., in the case of an automated tour generated in response to a specific request) or stored for later service to users.
- Flow diagram 1400 of FIG. 14 is an example of some of the features described above which may be performed by one or more computing devices such as one or more computing devices 110 .
- a request to generate an automated tour based on a set of panoramic images is received by the one or more computing devices at block 1402 .
- Each particular panoramic image in the set of panoramic images is associated with geographic location information and linking information linking the particular panoramic image with one or more other panoramic images in the set of panoramic images.
- the one or more computing devices identify a starting panoramic image of the set of panoramic images at block 1404 .
- a second panoramic image of the set of panoramic images is determined based at least in part on the starting panoramic image and the linking information associated with each of the starting and second panoramic images at block 1406 .
- a first transition of a first transition mode between the starting panoramic image and the second panoramic image is determined based at least in part on the linking information associated with each of the starting panoramic image and the second panoramic image at block 1408 .
- the one or more computing devices also determine additional panoramic images from the set of panoramic images for the tour based at least in part on the linking information associated with the additional panoramic images and whether a minimum image quantity constraint has been met at block 1410 .
- a second transition of a second transition mode, different from the first transition mode, is determined for between ones of the additional panoramic images at block 1412 .
- an identifier for the first panoramic image, the first transition, a second identifier for the second panoramic image, the second transition, and the additional panoramic images are added to the tour according to an order of the tour.
Description
- Various systems may provide users with images of different locations. Some systems provide users with panoramic images. For example, panoramic images may include an image or collection of images having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. Some panoramas may provide a 360-degree view of a location.
- Some systems may allow users to view images in sequences, such as in time or space. In some examples, these systems can provide a navigation experience in a remote or interesting location. Some systems allow users to feel as if they are rotating within a virtual world by clicking on the edges of a displayed portion of a panorama and having the panorama appear to “rotate” in the direction of the clicked edge.
- Aspects of the disclosure provide a computer-implemented method for generating automated tours using images. The method includes receiving a request, by one or more computing devices, to generate an automated tour based on a set of images. Each particular image in the set of images is associated with geographic location information and linking information linking the particular image with one or more other images in the set of images. The one or more computing devices identify a starting image of the set of images, determine a second image of the set of images based at least in part on the starting image and the linking information associated with the starting and second images, determine a first transition of a first transition mode between the starting image and the second image based at least in part on the linking information associated with the starting image and the second image, determine additional images from the set of images for the tour based at least in part on the linking information associated with the additional images and whether a minimum image quantity constraint has been met, and determine a second transition of a second transition mode, different from the first transition mode, for between ones of the additional images. The one or more computing devices add to the tour an identifier for the first image, the first transition, a second identifier for the second image, the second transition, and the additional images according to an order of the tour.
- In one example, identifying the starting image is based on an image-type identifier associated with the starting image. In another example, the second transition of the second transition mode is not added to the tour according to the order of the tour until a second image constraint has been met for the tour, and the second image constraint includes a fixed number of images that have been added with transitions of the first transition mode. In another example, the second transition of a second transition mode is not added to the tour according to the order of the tour until a second image constraint has been met for the tour, and the second image constraint includes adding a given image of the set of images having linking information indicating that the given image is linked to at least three other images of the set of images. In another example, the second transition is determined based on a set of parameters that includes a transition turn percent parameter that identifies a percentage of transitions in the automated tour that will include at least some degree of rotation. In another example, the second transition is determined based on a set of parameters that includes a skip rotate percentage parameter that identifies a percentage of transitions in the automated tour that will not be followed by a panning rotation. In another example, the second transition is determined based on a set of parameters that includes a minimum degree of rotation parameter that identifies a minimum required degree of rotation for transitions between images. In another example, the second transition is determined based on a set of parameters that includes an overlap threshold parameter that identifies a minimum degree of overlap in the fields of view between two images at a particular orientation. In another example, determining the additional images from the set of images for the tour is further based on whether the automated tour includes a particular image of the set of images.
- Another aspect of the disclosure provides a system for generating automated tours using images. The system includes one or more computing devices. The one or more computing devices are configured to receive a request to generate an automated tour based on a set of images. Each particular image in the set of images is associated with geographic location information and linking information linking the particular image with one or more other images in the set of images. The one or more computing devices are also configured to identify a starting image of the set of images; determine a second image of the set of images based at least in part on the starting image and the linking information associated with the starting and second images; determine a first transition of a first transition mode between the starting image and the second image based at least in part on the linking information associated with the starting image and the second image; determine additional images from the set of images for the tour based at least in part on the linking information associated with the additional images and whether a minimum image quantity constraint has been met; determine a second transition of a second transition mode, different from the first transition mode, for between ones of the additional images; and add to the tour: an identifier for the first image, the first transition, a second identifier for the second image, the second transition, and the additional images according to an order of the tour.
- In one example, the one or more computing devices are configured to determine the starting image based on an image-type identifier associated with the starting image. In another example, the one or more computing devices are configured to add the second transition of a second transition mode only when a second image constraint has been met for the tour, and the second image constraint includes that a fixed number of images have been added with transitions of the first transition mode. In another example, the one or more computing devices are configured to add the second transition of a second transition mode only when a second image constraint has been met for the tour, and the second image constraint includes adding a given image of the set of images having linking information indicating that the given image is linked to at least three other images of the set of images. In another example, the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a transition turn percent parameter that identifies a percentage of transitions in the automated tour that will include at least some degree of rotation. In another example, the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a skip rotate percentage parameter that identifies a percentage of transitions in the automated tour that will not be followed by a panning rotation. In another example, the one or more computing devices are configured to determine the second transition based on a set of parameters that includes a minimum degree of rotation parameter that identifies a minimum required degree of rotation for transitions between images.
- A further aspect of the disclosure provides a non-transitory computer-readable storage medium on which computer readable instructions of a program are stored. The instructions, when executed by one or more processors, cause the processors to perform a method of generating automated tours using images. The method includes identifying a set of images, each particular image in the set of images being associated with geographic location information and linking information linking that image with one or more other images; identifying a starting image of the set of images; and generating an automated tour based on at least the starting image and a second image, the linking information associated with the set of images, and a set of requirements. The set of requirements includes a first requirement that the automated tour begin by displaying the starting image, a second requirement that the automated tour include a first transition of a first transition mode between the starting image and the second image of the set based at least in part on the linking information associated with the starting image and the second image, a third requirement that additional images of the set of images are added to the automated tour based on the linking information associated with the additional images until a minimum image quantity constraint has been met, and a fourth requirement that the automated tour include a second transition of a second transition mode, different from the first transition mode, between two of the additional images, and that the second transition mode is determined according to a set of parameters.
- In one example, identifying the starting image is based on an image-type identifier associated with the starting image. In another example, the method further comprises identifying the particular image based on a second image-type identifier associated with the starting image, the second image-type identifier being different from the image-type identifier. In another example, the third requirement further includes that additional images of the set of images are added to the automated tour based on the linking information associated with the additional images when at least one of (a) the minimum image quantity constraint has been met or (b) the automated tour includes a particular image of the set of images.
- FIG. 1 is a functional diagram of an example system in accordance with aspects of the disclosure.
- FIG. 2 is a pictorial diagram of the example system of FIG. 1 .
- FIG. 3 is an example diagram of panoramic image information in accordance with aspects of the disclosure.
- FIG. 4 is an example diagram of pre-linked panoramic images in accordance with aspects of the disclosure.
- FIG. 5 is an example diagram of pre-linked panoramic images and identifiers in accordance with aspects of the disclosure.
- FIG. 6 is an example diagram of transitions between panoramic images in accordance with aspects of the disclosure.
- FIG. 7 is an example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 8 is another example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 9 is another example diagram of a transition between panoramic images in accordance with aspects of the disclosure.
- FIG. 10 is an example diagram for determining a percentage of overlap between panoramic images in accordance with aspects of the disclosure.
- FIG. 11 is a further example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 12 is an example diagram of pre-linked panoramic images and a portion of an automated tour in accordance with aspects of the disclosure.
- FIG. 13 is an example diagram of pre-linked panoramic images and an automated tour in accordance with aspects of the disclosure.
- FIG. 14 is a flow diagram in accordance with aspects of the disclosure.
- The technology relates to automatically generating tours using panoramas or panoramic images. As an example, a user of a computing device may view or create an automated tour by selecting or indicating a starting panoramic image from a set of pre-linked panoramic images as well as one or more other panoramic images of the set. In response, an automated tour may be generated by stringing together the selected panoramic image, the one or more other panoramic images, and, in some examples, additional panoramic images of the set. When the automated tour is displayed, it may appear as a virtual tour of a location captured in the panoramic images of the tour and may provide a user with a feeling that he or she is moving (e.g., walking or running, and in some cases turning) through that location.
- As noted above, panoramic images may be arranged in sets. Each panoramic image in a set may be linked to one or more other panoramic images in the set. For example, panoramic images may be linked to one another manually or automatically based on location and orientation information and subsequently confirmed by a user. In this regard, a first image may be associated with linking information indicating an orientation of the first image that aligns with a second orientation of a second image to indicate a physical path between the images. This relationship may be displayed, for example, in a “constellation” or a series of points representing the location of panoramic images of the set with lines linking them together.
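The linking information described above might be sketched as follows. This is an illustrative structure only (the identifiers, headings, and field layout are assumptions, not the patent's actual format): each link pairs an orientation in the source panorama with an orientation in the target panorama, defining a virtual path between them.

```python
# Illustrative linking information for a pre-linked set: each entry pairs
# an orientation (heading in degrees) in the source panorama with an
# orientation in the target panorama, defining a virtual path.
links = {
    # image_id: list of (target_id, heading_in_source, heading_in_target)
    "pano_5": [("pano_6", 90.0, 90.0)],
    "pano_6": [("pano_5", 270.0, 270.0)],  # same path, traversed in reverse
}

def path_headings(links, source, target):
    """Return the headings to face in the source and target panoramas in
    order to move along the link between them, or None if unlinked."""
    for target_id, src_heading, dst_heading in links[source]:
        if target_id == target:
            return src_heading, dst_heading
    return None

# Moving from pano_5 to pano_6: face 90 degrees in both images, which
# creates the feeling of walking straight ahead (e.g., down a hallway).
print(path_headings(links, "pano_5", "pano_6"))
```

Using the link in reverse (the `pano_6` entry) creates the feeling of moving backwards through the same virtual space.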
- In order to generate an automated tour, a pre-linked set of panoramic images and a starting image for the tour are identified. This may be done manually, for example, by a user specifically selecting a particular panoramic image of a pre-linked set. Alternatively, a user may have pre-designated a particular image with an identifier indicating that the particular image is a starting image or has some type of relationship to the location (e.g., exterior of a building, interior of a building, hotel lobby, includes an exterior business sign, etc.). In this example, when a user requests to view an automated tour of a location, the system may automatically identify the pre-linked set for that location as well as the starting image for the set using the identifier.
- In addition, a second panoramic image of the pre-linked set may also be identified. As with the starting image, the second panoramic image may be selected manually or based on an identifier (e.g., interior of a building, best interior image of a building, includes an interior business sign, etc.).
- An automated tour may then be generated using a number of predetermined requirements. As an example, an automated tour may be generated for a location corresponding to a business where a starting image is an exterior panoramic image and a second panoramic image is an interior panoramic image. A requirement may include that the automated tour begin by displaying the starting image and transitioning between images in a first transition mode (e.g., the direction of motion is consistent, such as without rotation) according to the constellation until a predetermined number of images is displayed (e.g., 3 or 4) or a branch (in the constellation of the pre-linked set) is reached.
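The first leg of the tour can be sketched as a simple walk over the constellation. This is a minimal sketch under assumed data structures (an adjacency list keyed by image id), treating a panorama linked to three or more others as a branch:

```python
def first_leg(panos, start_id, max_images=4):
    """Follow links in a straight line (first transition mode) from the
    starting image until max_images have been displayed or a branch
    (a panorama linked to 3+ others) is reached."""
    tour = [start_id]
    prev = None
    current = start_id
    while len(tour) < max_images:
        if len(panos[current]) >= 3:  # branch in the constellation
            break
        neighbors = [n for n in panos[current] if n != prev]
        if not neighbors:             # dead end
            break
        prev, current = current, neighbors[0]
        tour.append(current)
    return tour

# Hypothetical constellation: a chain 1-2-3-4 with a branch at image 4.
constellation = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5, 6], 5: [4], 6: [4]}
print(first_leg(constellation, 1))
```

With the example limit of 4 images, the walk stops at image 4, which is also where the chain branches.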
- At this point, a requirement may include that the tour continue to display images until the second panoramic image is reached, transitioning between images in a second transition mode. The second transition mode may include various parameters that enable different automated tours to be less predictable and more varied. For example, transitions between images may include a panning rotation (rotating within a panoramic image) in a first panoramic image before displaying the second panoramic image as well as a panning rotation after the transition is complete. A transition turn percent parameter may be used to determine a percentage of transitions that will include at least some degree of rotation (e.g., 35%). A skip rotate percentage parameter may be used to determine a percentage of transitions which will not be followed immediately by a panning rotation (e.g., 35%). Another parameter may include a minimum rotation degree (e.g., 130 degrees) for transitions. In this regard, the transitions between images need not always be from the point of view of a user moving forward through space; they may also be to the side, backwards, etc.
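One way these parameters might drive transition variety is sketched below, using the example percentages from the text. The function name and the returned fields are illustrative assumptions, not the patent's implementation:

```python
import random

def choose_transition(rng, turn_percent=35, skip_rotate_percent=35,
                      min_rotation_deg=130):
    """Decide whether a transition includes a rotation (of at least the
    minimum degree) and whether a panning rotation follows it."""
    transition = {"pre_rotation_deg": 0.0, "pan_after": True}
    if rng.random() * 100 < turn_percent:
        # This transition includes at least the minimum degree of rotation.
        transition["pre_rotation_deg"] = rng.uniform(min_rotation_deg, 180.0)
    if rng.random() * 100 < skip_rotate_percent:
        # This transition is not followed immediately by a panning rotation.
        transition["pan_after"] = False
    return transition

rng = random.Random(42)  # seeded so a generated tour is repeatable
transitions = [choose_transition(rng) for _ in range(100)]
turning = sum(1 for t in transitions if t["pre_rotation_deg"] > 0)
# Roughly 35 of 100 transitions should include a rotation.
```

Seeding the random source means the same tour can be regenerated deterministically while still varying between locations.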
- In order to maintain overlapping views between transitions, an overlap threshold parameter may be used. For example, the fields of view of two panoramic images may be determined based on the parameters of the first or second transition mode. These fields of view may be projected onto an imaginary wall at a fixed distance. If the overlap of (a) the cone that is projected from the first image with (b) the cone that is projected from the second panoramic image does not satisfy the overlap threshold parameter (e.g., a threshold of 50% overlap), then the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied. If the rotation required to meet the overlap is less than the minimum rotation degree, the minimum rotation degree may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
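The overlap test above can be sketched with simple geometry: project each field of view onto a wall at a fixed distance and compare the resulting intervals. The distance, field-of-view values, and the definition of "percent overlap" here are assumptions for illustration:

```python
import math

def wall_interval(heading_deg, fov_deg, offset=0.0, distance=10.0):
    """Project a camera's field of view onto a wall `distance` units ahead;
    returns the [left, right] interval the viewing cone covers. `offset`
    is the camera's lateral position relative to the wall's center."""
    center = offset + distance * math.tan(math.radians(heading_deg))
    spread = distance * math.tan(math.radians(fov_deg / 2.0))
    return center - spread, center + spread

def overlap_fraction(a, b):
    """Fraction of the narrower interval covered by the intersection."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    narrower = min(a[1] - a[0], b[1] - b[0])
    return inter / narrower if narrower > 0 else 0.0

# Two views aimed straight ahead (heading 0) with 90-degree fields of
# view from nearby positions overlap almost completely.
a = wall_interval(0.0, 90.0, offset=0.0)
b = wall_interval(0.0, 90.0, offset=2.0)
assert overlap_fraction(a, b) >= 0.5  # satisfies a 50% overlap threshold
```

If the fraction fell below the threshold, the panning rotation of the first image would be increased (re-running the projection) until the threshold is satisfied.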
- Another requirement may include that once the second panoramic image is reached, if at least a minimum number of panoramic images have been displayed, then the automated tour may end. If not, the automated tour may continue to include other panoramic images in the set, for example, by using the constellation to reach the farthest panoramic image from the second panoramic image until the minimum number of panoramic images has been reached. In some examples, the last panoramic image may be rotated so that the last view in an automated tour is oriented towards the second panoramic image.
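Finding the farthest panorama in the constellation is a shortest-path question over the links; a breadth-first search is one straightforward way to sketch it (the adjacency-list form of the constellation is an assumption):

```python
from collections import deque

def farthest_from(links, source):
    """Breadth-first search over the constellation to find the panorama
    farthest (in link hops) from `source`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in links[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return max(dist, key=dist.get)

# Hypothetical constellation: image 1 at one end, images 4 and 5
# hanging off image 3 at the other end.
links = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3], 5: [3]}
print(farthest_from(links, 1))
```

The tour would then continue along the path toward that farthest panorama until the minimum image count is reached.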
- Other parameters may also be used to generate the automated tour. For example, the amount of time that it takes to complete a transition between images, the percentage of time spent rotating for a transition, the minimum amount of time for a rotation, the speed of a rotation (e.g., degrees per second of rotation), whether that speed is adjusted during a rotation (e.g., faster, slower, faster), etc.
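A rotation-speed profile of the kind mentioned above might be sketched as an easing function. The smoothstep profile here (speed ramps up, peaks mid-rotation, then ramps down) is one assumed example of adjusting speed during a rotation, not the patent's specific profile:

```python
def rotation_angle(t, total_deg, duration_s, ease=True):
    """Angle swept after t seconds of a rotation lasting duration_s.
    With ease=True, speed ramps up, peaks mid-rotation, and ramps down
    (smoothstep); with ease=False, speed is constant."""
    p = min(max(t / duration_s, 0.0), 1.0)   # progress in [0, 1]
    if ease:
        p = p * p * (3.0 - 2.0 * p)          # smoothstep easing
    return total_deg * p

# A 130-degree rotation over 2 seconds: at the halfway point the eased
# profile has covered exactly half the angle, but is moving fastest.
assert rotation_angle(1.0, 130.0, 2.0) == 65.0
assert rotation_angle(2.0, 130.0, 2.0) == 130.0
```

At the quarter mark the eased rotation has covered well under a quarter of the angle, which is what gives the motion its gentle start.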
- The automated tours may be pre-determined using the above features and then stored until requested by a user to be displayed. The stored automated tour may include, for example, a set of parameters including identifiers for panoramas to be displayed, order, rotations, timing, and transition information. Alternatively, the automated tours may be generated “on demand” in response to a specific request for an automated tour.
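A stored tour of this kind might look like the following serializable parameter set. All field names here are illustrative assumptions; the text only specifies that identifiers, order, rotations, timing, and transition information are stored:

```python
import json

# A stored automated tour: an ordered set of panorama identifiers with
# per-step rotation, timing, and transition information (names assumed).
tour = {
    "start": "pano_1",
    "steps": [
        {"pano_id": "pano_2", "transition": "walk", "duration_s": 1.5,
         "pre_rotation_deg": 0, "pan_after_deg": 0},
        {"pano_id": "pano_3", "transition": "walk", "duration_s": 1.5,
         "pre_rotation_deg": 130, "pan_after_deg": 45},
    ],
}

serialized = json.dumps(tour)      # stored until a user requests the tour
restored = json.loads(serialized)  # or generated "on demand" and served
assert restored == tour
```

Precomputing and storing this structure means serving a tour is a lookup; generating it on demand trades storage for computation at request time.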
- The features described above may provide a user with a smooth and visually appealing view of a location. In addition, given a pre-linked set that includes a sufficient number of panoramic images, an automated tour may provide a user with a comprehensive view of a location without needing to “click” multiple times in order to maneuver around the location.
-
FIGS. 1 and 2 include an example system 100 in which the features described above may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, system 100 can include computing devices 110, 120, 130, and 140 as well as storage system 150. Computing device 110 can contain one or more processors 112, memory 114 and other components typically present in general purpose computing devices. Memory 114 of computing device 110 can store information accessible by processor 112, including instructions 116 that can be executed by the processor 112. - Memory can also include
data 118 that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. - The
instructions 116 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the processor. In that regard, the terms “instructions,” “application,” “steps” and “programs” can be used interchangeably herein. The instructions can be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below. -
Data 118 can be retrieved, stored or modified by processor 112 in accordance with the instructions 116. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data. - The one or
more processors 112 can include any conventional processors, such as a commercially available CPU. Alternatively, the processor can be a dedicated component such as an ASIC or other hardware-based processor. Although not necessary, computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc. faster or more efficiently. - Although
FIG. 1 functionally illustrates the processor, memory, and other elements of computing device 110 as being within the same block, the processor, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For example, the memory can be a hard drive or other storage media located in a housing different from that of computing devices 110. Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load-balanced server farm. Yet further, although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 160. - The
computing devices 110 can be at various nodes of a network 160 and capable of directly and indirectly communicating with other nodes of network 160. Although only a few computing devices are depicted in FIGS. 1-2 , it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device being at a different node of the network 160. The network 160 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information. - As an example,
computing devices 110 may include one or more web servers that are capable of communicating with storage system 150 as well as computing devices 120, 130, and 140 via network 160. For example, server computing devices 110 may use network 160 to transmit and present information to a user, such as user 220, 230, or 240, on a display of client computing devices 120, 130, or 140. - Each of the client computing devices may be configured similarly to the
server computing devices 110, with one or more processors, memory and instructions as described above. Each client computing device may be a personal computing device intended for use by a user 220, 230, or 240, and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, and a display. - Although the
client computing devices may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices. For example, client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 130 may be a head-mounted computing system. As an example, the user may input information using a small keyboard, a keypad, a microphone, visual signals with a camera, or a touch screen. -
Storage system 150 may store various images, such as panoramic images including a single image or a collection of images as described above having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. The example panoramic images described herein provide a 360-degree view of a location, though other types of images, such as those having a view of less than 360 degrees as well as combinations of images with different viewing areas, may also be used. In addition, each panoramic image may be associated with an image identifier that may be used to retrieve the panoramic image, geographic location information indicating the location and orientation at which the panoramic image was captured (e.g., a latitude longitude pair as well as an indication of which portion of the panoramic image faces a given direction such as North), as well as timestamp information indicating the date and time at which the panoramic image was captured. -
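The per-image metadata described above might be sketched as follows. The field names, pixel width, and the idea of storing the North-facing column are illustrative assumptions about how a 360-degree equirectangular panorama could record its orientation:

```python
from datetime import datetime, timezone

# Illustrative metadata the storage system might keep per panoramic image:
# an identifier, capture location, which pixel column of the 360-degree
# image faces North, and a capture timestamp.
pano_meta = {
    "image_id": "pano_7",
    "lat": 40.7484, "lng": -73.9857,
    "north_px": 1024,   # column of the equirectangular image facing North
    "captured": datetime(2014, 4, 24, tzinfo=timezone.utc),
}

def column_for_heading(heading_deg, width_px=4096, north_px=1024):
    """Map a compass heading to a pixel column in a 360-degree
    equirectangular panorama whose North-facing column is known."""
    return int(north_px + heading_deg / 360.0 * width_px) % width_px

# North maps to the stored North column; East is a quarter-turn away.
assert column_for_heading(0) == 1024
assert column_for_heading(90) == 2048
```

Knowing which portion of the image faces a given direction is what lets linking information express orientations as compass headings.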
FIG. 3 is an example 300 of a map 302 of an area including a set of buildings and a road 310. Map 302 also includes the geographic locations of a plurality of panoramic images 1-19. The panoramic images are depicted relative to the set of buildings. - In some examples, the storage system may store a set of panoramic images that have been pre-linked together or pre-linked sets of panoramic images. A pre-linked set of panoramic images may include information such as the panoramic images themselves or the image identifiers for the panoramic images in the sets. In addition, each of the panoramic images of a set may be associated with one or more links to other panoramic images. In this regard, a panoramic image may be linked to one or more other panoramic images in the set in a particular arrangement as described in more detail below. The panoramic images may be linked to one another manually or automatically based on location and orientation information and subsequently confirmed by a user.
- As an example, the links may describe physical relationships or virtual paths between the panoramic images that may be used to provide a navigation experience. These virtual paths may also be thought of as a relationship in three dimensions between images. In this regard, when viewing a particular panoramic image, a link may describe a first orientation of the particular panoramic image and a second orientation of another panoramic image in the set. Moving from the first orientation in the first panoramic image to the second orientation in the second panoramic image would create the feeling of moving straight ahead through a virtual space between the two panoramic images such as walking down a hallway, etc. Using the links in reverse would create the feeling of moving backwards through the same virtual space. Thus, a given set of pre-linked panoramic images may be used to provide a “static” tour of those images such that a user may click on and maneuver manually through the tour using the links.
-
FIG. 4 is an example 400 of panoramic images 1-19 that are included in a particular set of pre-linked panoramic images 402. Linking lines 404 represent the links or relationships between the panoramic images in the set. These links and the panoramic images are arranged in a “constellation” of panoramic images. For example, the link between panoramic images 5 and 6 may identify a first orientation in panoramic image 5 and a second orientation in panoramic image 6. If a user were viewing panoramic image 5 and wanted to transition to panoramic image 6, the linking information may be used to transition the view of panoramic image 5 to the first orientation of the link and subsequently transition to the second orientation in panoramic image 6. In this regard, a plurality of panoramic images and links may provide a pleasing navigation experience. - As shown in
FIG. 4 , each panoramic image is generally linked to one or more other panoramic images in the set of panoramic images 402 (panoramic images 1-19). However, in order to provide a useful navigation experience (e.g., where the links do not pass through walls or other obstructions), the panoramic images in a set are not linked to all of the other panoramic images in the set. -
Storage system 150 may also store automated tour information. This automated tour information may include information that can be used to generate an automated tour. For example, an automated tour may include an ordered set of panoramic images (or image identifiers), rotations, timing, and transition information. - As with
memory 114, storage system 150 can be of any type of computerized storage capable of storing information accessible by server 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 150 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 150 may be connected to the computing devices via the network 160 as shown in FIG. 1 and/or may be directly connected to any of the computing devices 110-140 (not shown). - Automated tours may be generated in advance (e.g., “off-line” and stored in order to be served to a user later), or in real time in response to a specific request for an automated tour. A user may use a client computing device, such as
client computing device 120, to request an automated tour. As an example, a user may enter a query into a web, image or map search engine for a particular location or subject. The request may be sent to one or more server computing devices, such as server computing device 110, that in response send an option for an automated tour to the client computing device. The option may be shown as a button on a web, image or map search results page or as an icon or other overlay on the page. As another example, the option may appear when the user has selected a panoramic image that is part of an automated tour (if one has already been generated and stored) or if the image is included in a set of pre-linked panoramic images (if an automated tour has not been generated). - In order to generate an automated tour, a pre-linked set of panoramic images and a starting image for the tour are identified. This may be done manually, for example, by a user specifically selecting a particular panoramic image of a pre-linked set using a client computing device. As an example, a user may enter a query into an image search engine for a particular location or subject and receive a set of images in response. In some cases, the user may select a panoramic image from the set of images that is included in a pre-linked set of panoramic images. This selection may thus identify a starting image for an automated tour as well as the set of images in which the selected panoramic image appears. Alternatively, a user may request to view a constellation (for example, as shown in
FIG. 4 ) using client computing device 120, and select one of panoramic images 1-19 as a starting image using, for example, a mouse pointer and corresponding user input device (e.g., a mouse or touch pad) or a finger or stylus in the case of a touch screen. The starting image may then be sent to the one or more server computing devices 110 by client computing device 120. - As another example, a user may have pre-designated a particular image in the set as a starting image. For example, an identifier indicating that the particular image is a starting image or has some type of relationship to the location (e.g., exterior of a building, interior of a building, hotel lobby, includes an exterior business sign, etc.) may be assigned to that particular panoramic image by the user. In this example, when the same or a different user requests to view or generate an automated tour of a location, the client computing device may send the request to the one or more
server computing devices 110. In response, the one or more server computing devices 110 may automatically identify the pre-linked set for that location as well as the starting image for the set using the identifier.
-
FIG. 5 is an example 500 of the pre-linked set of panoramic images 402 where two of the panoramic images include identifiers. In this example, panoramic images 1 and 7 include respective identifiers 502 and 504. For instance, identifier 502 may identify panoramic image 1 as a starting image or as having some type of relationship to the location of the set of pre-linked panoramic images. As discussed above, identifier 502 may be used by the one or more server computing devices 110 to identify panoramic image 1 as a starting image for an automated tour.
- In some examples, a target panoramic image of the pre-linked set may also be identified. As with the starting image, the target panoramic image may be selected manually using any of the examples above. Again, if selected manually, the target panoramic image may then be sent to the one or more
server computing devices 110 by client computing device 120.
- Alternatively, the target panoramic image may be selected automatically based on an identifier (e.g., an identifier that indicates the interior of a building, the best interior image of a building, includes an interior business sign, etc.). In this example, in response to a request to view or generate an automated tour, the one or more
server computing devices 110 may also automatically identify the target panoramic image using the identifier. Returning to FIG. 5, identifier 504 may identify panoramic image 7 as a target panoramic image or as having some type of relationship to the location of the set of pre-linked panoramic images. As discussed above, identifier 504 may be used by the one or more server computing devices 110 to identify panoramic image 7 as a target panoramic image for an automated tour.
- Once the starting image, target panoramic image, and set of pre-linked panoramic images have been identified, an automated tour may be generated. As an example, an automated tour may be generated for a location corresponding to a business within building 304, where a starting image is an exterior panoramic image and a target panoramic image is an interior panoramic image as shown in the example of
FIG. 5. As noted above, the automated tour may include an ordered list of panoramic images (or image identifiers) as well as other information including, for example, how to transition between panoramic images in the automated tour (e.g., timing, orientations, etc.).
- As an example, the automated tour may begin by displaying the starting image and transitioning to other panoramic images in the tour. Thus, panoramic images are added to an automated tour starting with the starting panoramic image. This may continue until the target panoramic image is reached. For example, the linking information may be used to determine a shortest route between the starting image and the target panoramic image along the linking lines between the panoramic images of a set of pre-linked panoramic images. As an example, Dijkstra's algorithm may be used. Returning to
FIG. 5, panoramic images may be added to the tour starting with panoramic image 1 (the starting image) until panoramic image 7 is added (the target panoramic image). In this regard, the shortest route determination using the linking information would require that the panoramic images be added in order from panoramic image 1 to panoramic image 7.
- As noted above, the automated tour may include transitions between the panoramic images. For example, a first transition between the starting panoramic image and the second panoramic image added to the automated tour may be of a first transition type where the direction of motion is consistent (e.g., without rotation). For example,
FIG. 6 is an example 600 of two different types of transitions without rotation, though various other types of consistent transitions may be used. In the example 600, panoramic images A and B are represented by circles 602 and 604, connected by a line 606. Panoramic image A is shown with an arrow 612 indicating an orientation at which panoramic image A is displayed to a user relative to the circle 602. If the orientation of the view of the panoramic images remains constant, a transition from panoramic image A to panoramic image B would appear as if the user were moving sideways along line 606. The actual transition may be displayed using any type of still image transitioning effect such as the panning and zooming of the Ken Burns Effect. Panoramic image B is shown with an arrow 614 indicating an orientation at which panoramic image B is displayed to a user relative to the circle 604 after the transition. Thus, the relative orientation between the images does not change, and there is no rotation as part of the transition.
- Similarly, in example 600, panoramic images C and D are represented by
circles connected by a line 626. Panoramic image C is shown with an arrow 624 indicating an orientation at which panoramic image C is displayed to a user relative to the circle 622. If the orientation of the view of the panoramic images remains constant, a transition from panoramic image C to panoramic image D would appear as if the user were moving straight ahead along line 626. In this regard, panoramic image D is shown with an arrow indicating an orientation at which panoramic image D is displayed to a user after the transition. Thus, again, the relative orientation between the images does not change, and there is no rotation as part of the transition.
- Additional images of the set of pre-linked panoramic images may be added to the automated tour in order according to the linking information as noted above. The transitions between these images may all be of the first transition mode, for example, consistent transitions without rotation as described above. This may continue until a predetermined threshold condition has been met. As an example, a predetermined threshold condition may be that a certain number of images has been added to the tour, that an image linked to more than two images has been added to the automated tour, or a combination of these (e.g., the threshold is met when at least one of these conditions is true).
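The route-finding and mode-switching logic described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the names (`build_tour`, `links`, `max_first_mode`) are assumptions, and a breadth-first search stands in for Dijkstra's algorithm, to which it is equivalent when every link has equal cost.

```python
from collections import deque

def build_tour(links, start, target, max_first_mode=4):
    """Add panoramas along a shortest linked route from start to target,
    tagging each transition with a mode (an illustrative sketch).

    links: dict mapping image id -> set of linked image ids.
    """
    # Breadth-first search finds a shortest route over the linking lines.
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            break
        for nbr in links[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)

    # Reconstruct the ordered list of images, start -> target.
    route = []
    node = target
    while node is not None:
        route.append(node)
        node = prev[node]
    route.reverse()

    # Use the first (rotation-free) transition mode until a threshold
    # condition is met: either a fixed number of images has been added,
    # or an image linked to more than two images (a branch) was added.
    tour = [(route[0], None)]
    threshold_met = False
    for i, image in enumerate(route[1:], start=2):
        if i > max_first_mode or len(links[route[i - 2]]) > 2:
            threshold_met = True
        tour.append((image, "second" if threshold_met else "first"))
    return tour
```

With a chain of images 1 through 7 and a branch at image 6 (as in the figures), a threshold of 4 images switches the tour to the second transition mode before the branch is reached, mirroring the combination-threshold example in the text.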
- Example 700 of
FIG. 7 is an example of a partially-generated automated tour 702. In this example, the partially-generated automated tour 702 is shown as a thicker line over the linking lines. In this example, the tour includes 4 images. As noted above, because the predetermined threshold has not been reached, each of the transitions between the panoramic images of the tour may be of the first transition mode.
- Example 800 of
FIG. 8 is an example of a partially-generated automated tour 802. In this example, the partially-generated automated tour 802 includes panoramic image 6. Panoramic image 6 is at a branch because it connects to both panoramic image 7 as well as panoramic image 18 according to the linking lines. In other words, panoramic image 6 connects to three other panoramic images in the set of pre-linked panoramic images: panoramic images 7 and 18, as well as the panoramic image of the set of pre-linked panoramic images 402 that was added to the automated tour immediately before it. As noted above, until the predetermined threshold is reached, each of the transitions between the panoramic images added to the automated tour may be of the first transition mode. Once the threshold has been met, the transition between panoramic image 6 and the next panoramic image added to the automated tour (as well as between any additional panoramic images of the set of pre-linked panoramic images added to the automated tour) may be of the second transition mode, different from the first transition mode.
- Referring to the example of a combination predetermined threshold, the threshold may include both a number of images having been added to the tour and an image that is linked to more than two images being added to the automated tour, such that if either condition is met, the second transition mode is used between additional images added to the automated tour. For example, as shown in
FIG. 8, assuming that the number of images of the predetermined threshold is 4, because this threshold will be met before panoramic image 6 (the branch) is reached, the transitions between panoramic images added after the threshold is met may be of the second transition mode.
- The second transition mode may include various panning and rotation movements to give a user the feeling that he or she is turning in a virtual space. For example, transitions between images may include a panning rotation (rotating within a panoramic image) in a first panoramic image before displaying the second panoramic image as well as a panning rotation after the transition is complete. Panning rotations may provide a user with the feeling that he or she is turning in place “within” a panoramic image. For example,
FIG. 9 is an example 900 of a transition including panning rotations. In the example 900, panoramic images E and F are represented by circles 902 and 904, connected by a line 906. Panoramic image E is shown with an arrow 912 indicating an orientation at which panoramic image E is displayed to a user relative to the circle 902. If image E is displayed at the orientation of arrow 912, before displaying image F, panoramic image E may be panned or rotated an angle of α1 until panoramic image E is displayed at the orientation of arrow 914 relative to circle 902. At this point, a transition from panoramic image E to panoramic image F would appear as if the user were moving along line 906 but facing in the direction of arrow 922 until panoramic image F is displayed in the direction of arrow 912 relative to circle 904. In addition, a second panning may be performed such that panoramic image F may be panned or rotated an angle of α2 until panoramic image F is displayed at the orientation of arrow 924 relative to circle 904. In this example, the second transition mode, as applied between two panoramic images, may include a first panning in the first panoramic image, a transition between the first panoramic image and the second panoramic image, and a second panning in the second panoramic image.
- These rotations may be tuned using various parameters that enable different automated tours to be less predictable and more varied. As one example, a transition turn percent parameter may be used to determine a percentage of transitions that will include at least some degree of rotation (e.g., 35%). A skip rotate percentage parameter may be used to determine a percentage of transitions which will not be followed immediately by a panning rotation (e.g., 35%). For example, returning to
FIG. 9, the panning rotation in panoramic image F may be dropped from the transition. Another parameter may include a minimum rotation degree (e.g., 130 degrees) for panning rotations before or after transitions. In this regard, the transitions between images need not always be from the point of view of a user moving forward through space, but may also be to the side, backwards, etc. as described above.
- In order to maintain overlapping views between transitions, an overlap threshold parameter may be used. As an example, the fields of view of two panoramic images when the automated tour switches between the two panoramic images may be projected onto an imaginary wall at a fixed distance. As an example, the threshold parameter may be defined as a percentage of overlap between these projections, such as 25%, 50%, 75%, or more or less. In that regard, if the overlap of (a) the cone that is projected from the first panoramic image with (b) the cone that is projected from the second panoramic image does not satisfy the overlap threshold parameter, then the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied. If the panning rotation required to meet the overlap is less than the minimum rotation degree parameter, the minimum rotation degree parameter may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
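As a loose sketch of how such tuning parameters might drive a single transition, the following uses the example values from the text (a 35% transition turn percent, a 35% skip rotate percentage, and a minimum rotation degree). The function name, return fields, and the 180-degree upper bound are illustrative assumptions, not the patent's API.

```python
import random

def plan_rotation(rng, turn_percent=0.35, skip_rotate_percent=0.35,
                  min_rotation_deg=130.0):
    """Tune one second-mode transition (an illustrative sketch)."""
    plan = {"pre_pan_deg": 0.0, "post_pan": True}
    # Transition turn percent: this fraction of transitions includes
    # at least some degree of rotation.
    if rng.random() < turn_percent:
        # Any rotation is at least the minimum rotation degree.
        plan["pre_pan_deg"] = rng.uniform(min_rotation_deg, 180.0)
    # Skip rotate percentage: this fraction of transitions is not
    # followed immediately by a panning rotation.
    if rng.random() < skip_rotate_percent:
        plan["post_pan"] = False
    return plan
```

Drawing the decisions from a seeded random source keeps different automated tours varied while remaining reproducible for a given tour.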
- As shown in the example 1000 of
FIG. 10, the fields of view of two panoramic images G and H are shown when the images are displayed at the same relative orientation (e.g., in the orientation of arrows 1002 and 1004, respectively). The two panoramic images are connected by a line 1006. The field of view of panoramic image G is defined between lines such that arrow 1002 is centered between them, and the field of view of panoramic image H is defined between lines such that arrow 1004 is centered between them. The angle between the lines (e.g., lines 1014 and 1016) of each field of view will depend upon the size of the images, such as the length, width, number of pixels, etc. that will be displayed in the automated tour.
- The fields of view may be projected onto an imaginary wall at a fixed distance w from each of the panoramic images G and H. As an example, w may be an arbitrary distance such as 2 meters or more or less. Although the example 1000 is depicted in two dimensions, the projections may be in three dimensions on the wall and appear in the shape of a cone or rectangle. The projection from panoramic image G covers the area of K on the wall, and the projection from panoramic image H covers the area of L on the wall. The overlap between these projections is the area of J. The percentage of overlap may be defined by the ratio of J to K (or J to L). This percentage of overlap may be compared to the overlap threshold parameter. If the percentage of overlap (J/K) does not meet the overlap threshold parameter (for example, 50%), then the amount of panning rotation in the transition from the first panoramic image may be increased in order to ensure that the overlap threshold parameter is satisfied, so that J/K is at least 50%. If the rotation required to meet the overlap is less than the minimum rotation degree parameter, the minimum rotation degree parameter discussed above may be adjusted, for example, by decreasing the value or simply foregoing the requirement.
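In two dimensions, the overlap test above reduces to intersecting two intervals projected onto the wall. The following sketch assumes both views face the wall at the same orientation with a symmetric field of view; the function name and parameter values are illustrative.

```python
import math

def overlap_percent(x_g, x_h, fov_deg=90.0, w=2.0):
    """Ratio of J (overlap) to K (projection of image G) when the fields
    of view of images G and H, facing the same direction, are projected
    onto an imaginary wall at a fixed distance w (a 2-D sketch of the
    three-dimensional cone projection)."""
    # Half-width of each projection on the wall.
    half = w * math.tan(math.radians(fov_deg) / 2.0)
    k = (x_g - half, x_g + half)  # area K, projected from image G
    l = (x_h - half, x_h + half)  # area L, projected from image H
    j = max(0.0, min(k[1], l[1]) - max(k[0], l[0]))  # overlap area J
    return j / (k[1] - k[0])
```

If the returned ratio falls below the overlap threshold parameter (e.g., 0.5), the amount of panning rotation in the transition from the first panoramic image would be increased until the threshold is satisfied.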
- Once the second panoramic image is added to the automated tour, as noted above, if at least a threshold number of panoramic images of the set of pre-linked panoramic images (e.g., a minimum number of panoramic images) have been added to the automated tour, then the automated tour may end. As shown in
FIG. 11, example 1100 includes a completed automated tour 1102. In this example, the automated tour 1102 includes seven panoramic images, as well as the transitions between them. This includes panoramic image 7, the target panoramic image. Here, the minimum number of panoramic images may have been 7 or less such that the automated tour is complete when panoramic image 7 has been added.
- If the minimum number of panoramic images has not been met, additional panoramic images of the set of pre-linked images may be added to the tour until the minimum number of panoramic images has been met. These images may be selected, for example, by using the linking information to identify a route along the linking lines to the panoramic image of the set of pre-linked panoramic images that is farthest in distance (not necessarily along the linking lines) from the target panoramic image and not already included in the automated tour. The route may be determined using any path planning algorithm, such as one that determines the route to that farthest panoramic image that passes through as many panoramic images as possible along the linking lines. In some cases, the route may also be determined such that it does not cross the same panoramic image twice and/or passes through the fewest number of panoramic images already added to the automated tour.
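The extension heuristics above can be sketched as follows, under stated assumptions: all names are illustrative, `start_from` plays the role of the target panoramic image, and an exhaustive depth-first search stands in for "any path planning algorithm" (acceptable for sets of this size, though exponential in general).

```python
import math

def extend_tour(positions, links, tour, start_from, min_images):
    """Extend a tour that has not met the minimum image count
    (an illustrative sketch of the heuristics described above).

    positions: image id -> (x, y); links: image id -> linked image ids.
    """
    def dist(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by)

    current = start_from
    while len(tour) < min_images:
        # The image farthest in straight-line distance (not necessarily
        # along the linking lines) that is not already in the tour.
        candidates = [p for p in positions if p not in tour]
        if not candidates:
            break
        farthest = max(candidates, key=lambda p: dist(current, p))

        # Search for the route to the farthest image that passes through
        # as many images as possible along the linking lines without
        # crossing the same image twice.
        best = []
        def dfs(node, path):
            nonlocal best
            if node == farthest:
                if len(path) > len(best):
                    best = path[:]
                return
            for nbr in links[node]:
                if nbr not in path:
                    path.append(nbr)
                    dfs(nbr, path)
                    path.pop()
        dfs(current, [current])
        if not best:
            break  # farthest image unreachable along the linking lines

        # Add images along the route until the minimum is met.
        for image in best[1:]:
            if image not in tour:
                tour.append(image)
            if len(tour) >= min_images:
                break
        current = farthest
    return tour
```

The outer loop repeats the farthest-image step, matching the text's "another farthest panoramic image from that farthest image" fallback when one route does not supply enough images.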
- Panoramic images along the route may then be added to the automated tour with transitions as discussed above until the minimum number of panoramic images has been met. If the farthest panoramic image has been added to the panoramic tour, for instance by adding panoramic images along the route as noted above, but the minimum number of panoramic images has not been met, another farthest panoramic image from that farthest image may be identified and used to add additional panoramic images as described above and so on until the minimum number of panoramic images has been met.
-
FIG. 12 is an example 1200 depicting the identification of the farthest panoramic image from the target panoramic image. Here, panoramic image 14 is farthest in distance from panoramic image 7. As can be seen, dashed line 1204 does not follow the linking lines. In addition, while panoramic image 1 is physically farthest from panoramic image 7, it has already been added to the partially-completed automated tour 1202.
- Once
panoramic image 14 has been identified as farthest from panoramic image 7, a route between panoramic image 7 and panoramic image 14 may be determined. As noted above, the route may be determined such that it passes through as many panoramic images as possible along the linking lines without crossing the same panoramic image twice. For example, referring to example 1300 of FIG. 13, the determined route may include transitioning from panoramic image 7 to panoramic image 8 to panoramic image 9 to panoramic image 10 to panoramic image 11 to panoramic image 19 to panoramic image 18 to panoramic image 17 to panoramic image 16 to panoramic image 15 and, finally, to panoramic image 14 along the linking lines.
- Panoramic images along this route may then be added to the automated tour with transitions as described above until the minimum number of panoramic images has been met. Thus, if the minimum number of panoramic images is 12, the automated tour may be complete when
panoramic image 19 has been added (e.g., before panoramic image 14 is added). However, if the minimum number of panoramic images is 17, the automated tour may be complete when panoramic image 14 is added, as shown in the completed panoramic tour 1302 of FIG. 13.
- Other effects may be used to make the automated tour more interesting. For example, the last panoramic image in an automated tour may be rotated so that the last view in the automated tour is oriented towards the target panoramic image. For example, as shown in
FIG. 13, if panoramic image 19 is the last panoramic image in the automated tour, a panning rotation may be added to the end of the tour so that panoramic image 19 is rotated into the orientation of arrow 1304 such that the user has the feeling that he or she is facing the location of the target panoramic image 7. Similarly, if panoramic image 14 is the last panoramic image in the automated tour, a panning rotation may be added to the end of the tour so that panoramic image 14 is rotated into the orientation of arrow 1306 such that the user has the feeling that he or she is facing the location of the target panoramic image 7.
- Other parameters may also be used to tune the transitions, of the first or second transition mode as applicable. These may include, for example, the amount of time that it takes to complete a transition between images in either transition mode, the percentage of time spent rotating for a transition in the second transition mode, the minimum amount of time for a rotation in the second transition mode, the speed of a rotation (e.g., degrees per second of rotation) in the second transition mode, whether that speed is adjusted during a rotation (e.g., faster, slower, faster) in the second transition mode, etc.
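These timing parameters interact, and one plausible way to combine them (a sketch with illustrative names and default values, not the patent's implementation) is to derive the rotation's share of a transition from the stated fraction, floored by the minimum rotation time and by the time implied by the rotation speed:

```python
def plan_timing(rotation_deg, transition_s=2.0, rotate_fraction=0.4,
                min_rotation_s=0.5, speed_deg_per_s=90.0):
    """Split one second-mode transition into rotation and travel time.

    The rotation receives a fixed fraction of the transition time, but
    never less than the minimum rotation time, and never less than the
    time the rotation needs at the maximum degrees-per-second speed.
    """
    rotate_s = max(transition_s * rotate_fraction,
                   min_rotation_s,
                   rotation_deg / speed_deg_per_s)
    travel_s = max(transition_s - rotate_s, 0.0)
    return rotate_s, travel_s
```

A larger rotation thus stretches its own duration rather than spinning faster than the configured speed, while small rotations still last long enough to be perceptible.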
- As noted above, the completed automated tours may be served immediately to a user (e.g., in the case of an automated tour generated in response to a specific request) or stored for later service to users.
- Flow diagram 1400 of
FIG. 14 is an example of some of the features described above which may be performed by one or more computing devices, such as one or more computing devices 110. In this regard, a request to generate an automated tour based on a set of panoramic images is received by the one or more computing devices at block 1402. Each particular panoramic image in the set of panoramic images is associated with geographic location information and linking information linking the particular panoramic image with one or more other panoramic images in the set of panoramic images. The one or more computing devices identify a starting panoramic image of the set of panoramic images at block 1404. A second panoramic image of the set of panoramic images is determined based at least in part on the starting panoramic image and the linking information associated with each of the starting and second panoramic images at block 1406. A first transition of a first transition mode between the starting panoramic image and the second panoramic image is determined based at least in part on the linking information associated with each of the starting panoramic image and the second panoramic image at block 1408. The one or more computing devices also determine additional panoramic images from the set of panoramic images for the tour based at least in part on the linking information associated with the additional panoramic images and whether a minimum image quantity constraint has been met at block 1410. A second transition of a second transition mode, different from the first transition mode, is determined between ones of the additional panoramic images at block 1412. In addition, an identifier for the first panoramic image, the first transition, a second identifier for the second panoramic image, the second transition, and the additional panoramic images are added to the tour according to an order of the tour.
- Unless stated otherwise, the foregoing alternative examples are not mutually exclusive.
They may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. As an example, the preceding operations do not have to be performed in the precise order described above. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
Claims (20)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/260,862 US9189839B1 (en) | 2014-04-24 | 2014-04-24 | Automatically generating panorama tours |
US14/850,334 US9342911B1 (en) | 2014-04-24 | 2015-09-10 | Automatically generating panorama tours |
US15/134,762 US9830745B1 (en) | 2014-04-24 | 2016-04-21 | Automatically generating panorama tours |
US15/790,315 US10643385B1 (en) | 2014-04-24 | 2017-10-23 | Automatically generating panorama tours |
US16/834,074 US11481977B1 (en) | 2014-04-24 | 2020-03-30 | Automatically generating panorama tours |
US17/943,282 US12002163B1 (en) | 2014-04-24 | 2022-09-13 | Automatically generating panorama tours |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/260,862 US9189839B1 (en) | 2014-04-24 | 2014-04-24 | Automatically generating panorama tours |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/850,334 Continuation US9342911B1 (en) | 2014-04-24 | 2015-09-10 | Automatically generating panorama tours |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150310596A1 true US20150310596A1 (en) | 2015-10-29 |
US9189839B1 US9189839B1 (en) | 2015-11-17 |
Family
ID=54335242
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/260,862 Active US9189839B1 (en) | 2014-04-24 | 2014-04-24 | Automatically generating panorama tours |
US14/850,334 Active US9342911B1 (en) | 2014-04-24 | 2015-09-10 | Automatically generating panorama tours |
US15/134,762 Active 2034-05-30 US9830745B1 (en) | 2014-04-24 | 2016-04-21 | Automatically generating panorama tours |
US15/790,315 Active 2034-11-30 US10643385B1 (en) | 2014-04-24 | 2017-10-23 | Automatically generating panorama tours |
US16/834,074 Active 2034-11-03 US11481977B1 (en) | 2014-04-24 | 2020-03-30 | Automatically generating panorama tours |
US17/943,282 Active 2034-07-13 US12002163B1 (en) | 2014-04-24 | 2022-09-13 | Automatically generating panorama tours |
Family Applications After (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/850,334 Active US9342911B1 (en) | 2014-04-24 | 2015-09-10 | Automatically generating panorama tours |
US15/134,762 Active 2034-05-30 US9830745B1 (en) | 2014-04-24 | 2016-04-21 | Automatically generating panorama tours |
US15/790,315 Active 2034-11-30 US10643385B1 (en) | 2014-04-24 | 2017-10-23 | Automatically generating panorama tours |
US16/834,074 Active 2034-11-03 US11481977B1 (en) | 2014-04-24 | 2020-03-30 | Automatically generating panorama tours |
US17/943,282 Active 2034-07-13 US12002163B1 (en) | 2014-04-24 | 2022-09-13 | Automatically generating panorama tours |
Country Status (1)
Country | Link |
---|---|
US (6) | US9189839B1 (en) |
Family Cites Families (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020049832A1 (en) | 1996-03-08 | 2002-04-25 | Craig Ullman | Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments |
EP0895599B1 (en) | 1996-04-25 | 2002-08-07 | Sirf Technology, Inc. | Spread spectrum receiver with multi-bit correlator |
JP3099734B2 (en) | 1996-05-09 | 2000-10-16 | 住友電気工業株式会社 | Multi-path providing device |
JPH10126731A (en) | 1996-10-17 | 1998-05-15 | Toppan Printing Co Ltd | Tour album generating system |
JP3906938B2 (en) | 1997-02-18 | 2007-04-18 | 富士フイルム株式会社 | Image reproduction method and image data management method |
US6552744B2 (en) * | 1997-09-26 | 2003-04-22 | Roxio, Inc. | Virtual reality camera |
US6199014B1 (en) | 1997-12-23 | 2001-03-06 | Walker Digital, Llc | System for providing driving directions with visual cues |
US7810037B1 (en) | 2000-02-11 | 2010-10-05 | Sony Corporation | Online story collaboration |
DE60141931D1 (en) | 2000-02-21 | 2010-06-10 | Hewlett Packard Co | Magnification of image data sets |
DE10044935B4 (en) | 2000-09-12 | 2010-12-16 | Robert Bosch Gmbh | navigation device |
US7865306B2 (en) | 2000-09-28 | 2011-01-04 | Michael Mays | Devices, methods, and systems for managing route-related information |
US6351710B1 (en) | 2000-09-28 | 2002-02-26 | Michael F. Mays | Method and system for visual addressing |
JP2002258740A (en) | 2001-03-02 | 2002-09-11 | Mixed Reality Systems Laboratory Inc | Method and device for recording picture and method and device for reproducing picture |
US20020122073A1 (en) | 2001-03-02 | 2002-09-05 | Abrams David Hardin | Visual navigation history |
US7096428B2 (en) | 2001-09-28 | 2006-08-22 | Fuji Xerox Co., Ltd. | Systems and methods for providing a spatially indexed panoramic video |
US20030128389A1 (en) | 2001-12-26 | 2003-07-10 | Eastman Kodak Company | Method for creating and using affective information in a digital imaging system cross reference to related applications |
WO2003074973A2 (en) | 2002-03-01 | 2003-09-12 | Networks In Motion, Inc. | Method and apparatus for sending, retrieving, and planning location relevant information |
CN100407782C (en) | 2002-09-27 | 2008-07-30 | 富士胶片株式会社 | Manufacturing method of photo album and its device and program |
US7082572B2 (en) | 2002-12-30 | 2006-07-25 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive map-based analysis of digital video content |
US7158878B2 (en) | 2004-03-23 | 2007-01-02 | Google Inc. | Digital mapping system |
US7746376B2 (en) | 2004-06-16 | 2010-06-29 | Felipe Mendoza | Method and apparatus for accessing multi-dimensional mapping and information |
US8751156B2 (en) | 2004-06-30 | 2014-06-10 | HERE North America LLC | Method of operating a navigation system using images |
US8207964B1 (en) | 2008-02-22 | 2012-06-26 | Meadow William D | Methods and apparatus for generating three-dimensional image data models |
JP2006084184A (en) | 2004-09-14 | 2006-03-30 | Ntt Docomo Inc | Three-dimensional gis navigation method, three-dimensional gis navigation server and three-dimensional gis navigation system |
JP4512998B2 (en) | 2004-09-29 | 2010-07-28 | アイシン・エィ・ダブリュ株式会社 | Navigation device |
US7272498B2 (en) | 2004-09-30 | 2007-09-18 | Scenera Technologies, Llc | Method for incorporating images with a user perspective in navigation |
JP4719500B2 (en) | 2004-11-04 | 2011-07-06 | アルパイン株式会社 | In-vehicle device |
US8626440B2 (en) | 2005-04-18 | 2014-01-07 | Navteq B.V. | Data-driven 3D traffic views with the view based on user-selected start and end geographical locations |
WO2006127660A2 (en) | 2005-05-23 | 2006-11-30 | Picateers, Inc. | System and method for collaborative image selection |
US7746343B1 (en) | 2005-06-27 | 2010-06-29 | Google Inc. | Streaming and interactive visualization of filled polygon data in a geographic information system |
US7617246B2 (en) | 2006-02-21 | 2009-11-10 | Geopeg, Inc. | System and method for geo-coding user generated content |
US8793579B2 (en) | 2006-04-20 | 2014-07-29 | Google Inc. | Graphical user interfaces for supporting collaborative generation of life stories |
US8571580B2 (en) | 2006-06-01 | 2013-10-29 | Loopt Llc. | Displaying the location of individuals on an interactive map display on a mobile communication device |
US20090240426A1 (en) | 2006-06-12 | 2009-09-24 | Takashi Akita | Navigation device and navigation method |
US20080033641A1 (en) | 2006-07-25 | 2008-02-07 | Medalia Michael J | Method of generating a three-dimensional interactive tour of a geographic location |
US20100023254A1 (en) | 2006-11-10 | 2010-01-28 | Hiroshi Machino | Navigation system |
US8498497B2 (en) | 2006-11-17 | 2013-07-30 | Microsoft Corporation | Swarm imaging |
US20080215964A1 (en) | 2007-02-23 | 2008-09-04 | Tabblo, Inc. | Method and system for online creation and publication of user-generated stories |
JP4661838B2 (en) | 2007-07-18 | 2011-03-30 | トヨタ自動車株式会社 | Route planning apparatus and method, cost evaluation apparatus, and moving body |
US20090055087A1 (en) | 2007-08-07 | 2009-02-26 | Brandon Graham Beacher | Methods and systems for displaying and automatic dynamic re-displaying of points of interest with graphic image |
JP5044817B2 (en) | 2007-11-22 | 2012-10-10 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Image processing method and apparatus for constructing virtual space |
US8131118B1 (en) | 2008-01-31 | 2012-03-06 | Google Inc. | Inferring locations from an image |
US20090202102A1 (en) | 2008-02-08 | 2009-08-13 | Hermelo Miranda | Method and system for acquisition and display of images |
US20090210277A1 (en) | 2008-02-14 | 2009-08-20 | Hardin H Wesley | System and method for managing a geographically-expansive construction project |
US8428873B2 (en) | 2008-03-24 | 2013-04-23 | Google Inc. | Panoramic images within driving directions |
US7948502B2 (en) | 2008-05-13 | 2011-05-24 | Mitac International Corp. | Method of displaying picture having location data and apparatus thereof |
US8805110B2 (en) | 2008-08-19 | 2014-08-12 | Digimarc Corporation | Methods and systems for content processing |
JP5434018B2 (en) | 2008-09-03 | 2014-03-05 | 株式会社ニコン | Image display device and image display program |
US20110211040A1 (en) | 2008-11-05 | 2011-09-01 | Pierre-Alain Lindemann | System and method for creating interactive panoramic walk-through applications |
US8493408B2 (en) | 2008-11-19 | 2013-07-23 | Apple Inc. | Techniques for manipulating panoramas |
CN101782394B (en) | 2009-01-21 | 2013-03-20 | 佛山市顺德区顺达电脑厂有限公司 | Method for judging turning of mobile object and navigation device using same |
US9454847B2 (en) | 2009-02-24 | 2016-09-27 | Google Inc. | System and method of indicating transition between street level images |
US20120141023A1 (en) | 2009-03-18 | 2012-06-07 | Wang Wiley H | Smart photo story creation |
US20100257477A1 (en) | 2009-04-03 | 2010-10-07 | Certusview Technologies, Llc | Methods, apparatus, and systems for documenting and reporting events via geo-referenced electronic drawings |
JP5521374B2 (en) | 2009-04-08 | 2014-06-11 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US9298345B2 (en) | 2009-06-23 | 2016-03-29 | Microsoft Technology Licensing, Llc | Block view for geographic navigation |
US8015172B1 (en) | 2009-07-03 | 2011-09-06 | eBridge, Inc. | Method of conducting searches on the internet to obtain selected information on local entities and provide for searching the data in a way that lists local businesses at the top of the results |
JP5464955B2 (en) | 2009-09-29 | 2014-04-09 | 株式会社ソニー・コンピュータエンタテインメント | Panorama image display device |
KR101631497B1 (en) | 2009-11-13 | 2016-06-27 | 삼성전자주식회사 | Display apparatus, User terminal, and methods thereof |
US9766089B2 (en) | 2009-12-14 | 2017-09-19 | Nokia Technologies Oy | Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image |
JP2011137638A (en) | 2009-12-25 | 2011-07-14 | Toshiba Corp | Navigation system, sightseeing spot detecting device, navigation device, sightseeing spot detecting method, navigation method, sightseeing spot detecting program, and navigation program |
US20110231745A1 (en) * | 2010-03-15 | 2011-09-22 | TripAdvisor LLC. | Slideshow creator |
US20120082401A1 (en) | 2010-05-13 | 2012-04-05 | Kelly Berger | System and method for automatic discovering and creating photo stories |
US20110283210A1 (en) | 2010-05-13 | 2011-11-17 | Kelly Berger | Graphical user interface and method for creating and managing photo stories |
US8655111B2 (en) | 2010-05-13 | 2014-02-18 | Shutterfly, Inc. | System and method for creating and sharing photo stories |
US20120254804A1 (en) | 2010-05-21 | 2012-10-04 | Sheha Michael A | Personal wireless navigation system |
US20120066573A1 (en) | 2010-09-15 | 2012-03-15 | Kelly Berger | System and method for creating photo story books |
US20120092266A1 (en) | 2010-10-14 | 2012-04-19 | Motorola Mobility, Inc. | Method and Apparatus for Providing a Navigation Path on a Touch Display of a Portable Device |
EP2643822B1 (en) | 2010-11-24 | 2017-03-22 | Google, Inc. | Guided navigation through geo-located panoramas |
US20120246562A1 (en) | 2011-03-25 | 2012-09-27 | Leslie Gable Maness | Building a customized story |
US10373375B2 (en) * | 2011-04-08 | 2019-08-06 | Koninklijke Philips N.V. | Image processing system and method using device rotation |
JP2013003048A (en) | 2011-06-20 | 2013-01-07 | Sony Corp | Route search apparatus, route search method, and program |
US8706397B2 (en) | 2011-07-11 | 2014-04-22 | Harman International Industries, Incorporated | System and method for determining an optimal route using aggregated route information |
US9116011B2 (en) | 2011-10-21 | 2015-08-25 | Here Global B.V. | Three dimensional routing |
KR101316176B1 (en) | 2011-12-14 | 2013-10-08 | 현대자동차주식회사 | A system providing a junction of a load and method thereof |
US8930141B2 (en) | 2011-12-30 | 2015-01-06 | Nokia Corporation | Apparatus, method and computer program for displaying points of interest |
JP2013161416A (en) | 2012-02-08 | 2013-08-19 | Sony Corp | Server, client terminal, system and program |
US10176633B2 (en) | 2012-06-05 | 2019-01-08 | Apple Inc. | Integrated mapping and navigation application |
EP2859535A4 (en) | 2012-06-06 | 2016-01-20 | Google Inc | System and method for providing content for a point of interest |
US9116596B2 (en) | 2012-06-10 | 2015-08-25 | Apple Inc. | Sharing images and comments across different devices |
US9488489B2 (en) | 2012-09-28 | 2016-11-08 | Google Inc. | Personalized mapping with photo tours |
CN103971399B (en) | 2013-01-30 | 2018-07-24 | 深圳市腾讯计算机系统有限公司 | Street view image transition method and device |
US20140330814A1 (en) | 2013-05-03 | 2014-11-06 | Tencent Technology (Shenzhen) Company Limited | Method, client of retrieving information and computer storage medium |
- 2014-04-24 US US14/260,862 patent/US9189839B1/en active Active
- 2015-09-10 US US14/850,334 patent/US9342911B1/en active Active
- 2016-04-21 US US15/134,762 patent/US9830745B1/en active Active
- 2017-10-23 US US15/790,315 patent/US10643385B1/en active Active
- 2020-03-30 US US16/834,074 patent/US11481977B1/en active Active
- 2022-09-13 US US17/943,282 patent/US12002163B1/en active Active
Cited By (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USD1006046S1 (en) | 2014-04-22 | 2023-11-28 | Google Llc | Display screen with graphical user interface or portion thereof |
USD868093S1 (en) | 2014-04-22 | 2019-11-26 | Google Llc | Display screen with graphical user interface or portion thereof |
US11163813B2 (en) | 2014-04-22 | 2021-11-02 | Google Llc | Providing a thumbnail image that follows a main image |
US20180261000A1 (en) * | 2014-04-22 | 2018-09-13 | Google Llc | Selecting time-distributed panoramic images for display |
US11860923B2 (en) | 2014-04-22 | 2024-01-02 | Google Llc | Providing a thumbnail image that follows a main image |
USD934281S1 (en) | 2014-04-22 | 2021-10-26 | Google Llc | Display screen with graphical user interface or portion thereof |
USD1008302S1 (en) | 2014-04-22 | 2023-12-19 | Google Llc | Display screen with graphical user interface or portion thereof |
USD877765S1 (en) | 2014-04-22 | 2020-03-10 | Google Llc | Display screen with graphical user interface or portion thereof |
USD994696S1 (en) | 2014-04-22 | 2023-08-08 | Google Llc | Display screen with graphical user interface or portion thereof |
USD868092S1 (en) | 2014-04-22 | 2019-11-26 | Google Llc | Display screen with graphical user interface or portion thereof |
USD933691S1 (en) | 2014-04-22 | 2021-10-19 | Google Llc | Display screen with graphical user interface or portion thereof |
US10540804B2 (en) * | 2014-04-22 | 2020-01-21 | Google Llc | Selecting time-distributed panoramic images for display |
US20160260253A1 (en) * | 2015-03-04 | 2016-09-08 | Samsung Electronics Co., Ltd. | Method for navigation in an interactive virtual tour of a property |
US10629001B2 (en) * | 2015-03-04 | 2020-04-21 | Samsung Electronics Co., Ltd. | Method for navigation in an interactive virtual tour of a property |
US10824320B2 (en) * | 2016-03-07 | 2020-11-03 | Facebook, Inc. | Systems and methods for presenting content |
US20170255372A1 (en) * | 2016-03-07 | 2017-09-07 | Facebook, Inc. | Systems and methods for presenting content |
US11153355B2 (en) * | 2016-07-29 | 2021-10-19 | Smarter Systems, Inc. | Systems and methods for providing individual and/or synchronized virtual tours through a realm for a group of users |
US12003554B2 (en) | 2016-07-29 | 2024-06-04 | Smarter Systems, Inc. | Systems and methods for providing individual and/or synchronized virtual tours through a realm for a group of users |
US11575722B2 (en) | 2016-07-29 | 2023-02-07 | Smarter Systems, Inc. | Systems and methods for providing individual and/or synchronized virtual tours through a realm for a group of users |
US20180034865A1 (en) * | 2016-07-29 | 2018-02-01 | Everyscape, Inc. | Systems and Methods for Providing Individual and/or Synchronized Virtual Tours through a Realm for a Group of Users |
CN109792561A (en) * | 2016-08-12 | 2019-05-21 | 三星电子株式会社 | Image display and its operating method |
US10375306B2 (en) * | 2017-07-13 | 2019-08-06 | Zillow Group, Inc. | Capture and use of building interior data from mobile devices |
US11057561B2 (en) * | 2017-07-13 | 2021-07-06 | Zillow, Inc. | Capture, analysis and use of building data from mobile devices |
US10834317B2 (en) | 2017-07-13 | 2020-11-10 | Zillow Group, Inc. | Connecting and using building data acquired from mobile devices |
WO2019014620A1 (en) * | 2017-07-13 | 2019-01-17 | Zillow Group, Inc. | Capturing, connecting and using building interior data from mobile devices |
US20190020817A1 (en) * | 2017-07-13 | 2019-01-17 | Zillow Group, Inc. | Connecting and using building interior data acquired from mobile devices |
US11632516B2 (en) | 2017-07-13 | 2023-04-18 | MFIB Holdco, Inc. | Capture, analysis and use of building data from mobile devices |
US11165959B2 (en) | 2017-07-13 | 2021-11-02 | Zillow, Inc. | Connecting and using building data acquired from mobile devices |
US10530997B2 (en) * | 2017-07-13 | 2020-01-07 | Zillow Group, Inc. | Connecting and using building interior data acquired from mobile devices |
WO2019060985A1 (en) * | 2017-09-29 | 2019-04-04 | Eyexpo Technology Corp. | A cloud-based system and method for creating a virtual tour |
US11217019B2 (en) | 2018-04-11 | 2022-01-04 | Zillow, Inc. | Presenting image transition sequences between viewing locations |
US10643386B2 (en) | 2018-04-11 | 2020-05-05 | Zillow Group, Inc. | Presenting image transition sequences between viewing locations |
US11295526B2 (en) * | 2018-10-02 | 2022-04-05 | Nodalview | Method for creating an interactive virtual tour of a place |
US11284006B2 (en) | 2018-10-11 | 2022-03-22 | Zillow, Inc. | Automated control of image acquisition via acquisition location determination |
US11627387B2 (en) | 2018-10-11 | 2023-04-11 | MFTB Holdco, Inc. | Automated control of image acquisition via use of mobile device interface |
US10708507B1 (en) | 2018-10-11 | 2020-07-07 | Zillow Group, Inc. | Automated control of image acquisition via use of acquisition device sensors |
US11405558B2 (en) | 2018-10-11 | 2022-08-02 | Zillow, Inc. | Automated control of image acquisition via use of hardware sensors and camera content |
US11408738B2 (en) | 2018-10-11 | 2022-08-09 | Zillow, Inc. | Automated mapping information generation from inter-connected images |
US10809066B2 (en) | 2018-10-11 | 2020-10-20 | Zillow Group, Inc. | Automated mapping information generation from inter-connected images |
US11480433B2 (en) | 2018-10-11 | 2022-10-25 | Zillow, Inc. | Use of automated mapping information from inter-connected images |
US11638069B2 (en) | 2018-10-11 | 2023-04-25 | MFTB Holdco, Inc. | Automated control of image acquisition via use of mobile device user interface |
US11243656B2 (en) | 2019-08-28 | 2022-02-08 | Zillow, Inc. | Automated tools for generating mapping information for buildings |
US12014120B2 (en) | 2019-08-28 | 2024-06-18 | MFTB Holdco, Inc. | Automated tools for generating mapping information for buildings |
US11823325B2 (en) | 2019-10-07 | 2023-11-21 | MFTB Holdco, Inc. | Providing simulated lighting information for building models |
US11164368B2 (en) | 2019-10-07 | 2021-11-02 | Zillow, Inc. | Providing simulated lighting information for three-dimensional building models |
US11494973B2 (en) | 2019-10-28 | 2022-11-08 | Zillow, Inc. | Generating floor maps for buildings from automated analysis of visual data of the buildings' interiors |
US11164361B2 (en) | 2019-10-28 | 2021-11-02 | Zillow, Inc. | Generating floor maps for buildings from automated analysis of visual data of the buildings' interiors |
US11238652B2 (en) * | 2019-11-12 | 2022-02-01 | Zillow, Inc. | Presenting integrated building information using building models |
US20220189122A1 (en) * | 2019-11-12 | 2022-06-16 | Zillow, Inc. | Presenting Building Information Using Building Models |
US11676344B2 (en) * | 2019-11-12 | 2023-06-13 | MFTB Holdco, Inc. | Presenting building information using building models |
US20230316660A1 (en) * | 2019-11-12 | 2023-10-05 | MFTB Holdco, Inc. | Presenting Building Information Using Building Models |
US11935196B2 (en) * | 2019-11-12 | 2024-03-19 | MFTB Holdco, Inc. | Presenting building information using building models |
US10825247B1 (en) * | 2019-11-12 | 2020-11-03 | Zillow Group, Inc. | Presenting integrated building information using three-dimensional building models |
US11405549B2 (en) | 2020-06-05 | 2022-08-02 | Zillow, Inc. | Automated generation on mobile devices of panorama images for building locations and subsequent use |
CN112954369A (en) * | 2020-08-21 | 2021-06-11 | 深圳市明源云客电子商务有限公司 | House type preview method, device, equipment and computer readable storage medium |
US11514674B2 (en) | 2020-09-04 | 2022-11-29 | Zillow, Inc. | Automated analysis of image contents to determine the acquisition location of the image |
US11592969B2 (en) | 2020-10-13 | 2023-02-28 | MFTB Holdco, Inc. | Automated tools for generating building mapping information |
US11797159B2 (en) | 2020-10-13 | 2023-10-24 | MFTB Holdco, Inc. | Automated tools for generating building mapping information |
US11645781B2 (en) | 2020-11-23 | 2023-05-09 | MFTB Holdco, Inc. | Automated determination of acquisition locations of acquired building images based on determined surrounding room data |
US11481925B1 (en) | 2020-11-23 | 2022-10-25 | Zillow, Inc. | Automated determination of image acquisition locations in building interiors using determined room shapes |
US11252329B1 (en) | 2021-01-08 | 2022-02-15 | Zillow, Inc. | Automated determination of image acquisition locations in building interiors using multiple data capture devices |
US11632602B2 (en) | 2021-01-08 | 2023-04-18 | MFIB Holdco, Inc. | Automated determination of image acquisition locations in building interiors using multiple data capture devices |
US11836973B2 (en) | 2021-02-25 | 2023-12-05 | MFTB Holdco, Inc. | Automated direction of capturing in-room information for use in usability assessment of buildings |
US11790648B2 (en) | 2021-02-25 | 2023-10-17 | MFTB Holdco, Inc. | Automated usability assessment of buildings using visual data of captured in-room images |
US11501492B1 (en) | 2021-07-27 | 2022-11-15 | Zillow, Inc. | Automated room shape determination using visual data of multiple captured in-room images |
US12056900B2 (en) | 2021-08-27 | 2024-08-06 | MFTB Holdco, Inc. | Automated mapping information generation from analysis of building photos |
US11842464B2 (en) | 2021-09-22 | 2023-12-12 | MFTB Holdco, Inc. | Automated exchange and use of attribute information between building images of multiple types |
US12045951B2 (en) | 2021-12-28 | 2024-07-23 | MFTB Holdco, Inc. | Automated building information determination using inter-image analysis of multiple building images |
US11830135B1 (en) | 2022-07-13 | 2023-11-28 | MFTB Holdco, Inc. | Automated building identification using floor plans and acquired building images |
Also Published As
Publication number | Publication date |
---|---|
US10643385B1 (en) | 2020-05-05 |
US9342911B1 (en) | 2016-05-17 |
US12002163B1 (en) | 2024-06-04 |
US9189839B1 (en) | 2015-11-17 |
US9830745B1 (en) | 2017-11-28 |
US11481977B1 (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12002163B1 (en) | Automatically generating panorama tours | |
US10540804B2 (en) | Selecting time-distributed panoramic images for display | |
US11860923B2 (en) | Providing a thumbnail image that follows a main image | |
US9658744B1 (en) | Navigation paths for panorama | |
EP3175331B1 (en) | Presenting hierarchies of map data at different zoom levels | |
US9046996B2 (en) | Techniques for navigation among multiple images | |
US9842268B1 (en) | Determining regions of interest based on user interaction | |
US9529803B2 (en) | Image modification | |
US11568551B2 (en) | Method and system for obtaining pair-wise epipolar constraints and solving for panorama pose on a mobile device | |
US20150052475A1 (en) | Projections to fix pose of panoramic photos | |
KR20170132134A (en) | Cluster-based photo navigation | |
US20150109328A1 (en) | Techniques for navigation among multiple images | |
US20150242992A1 (en) | Blending map data with additional imagery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: GOOGLE INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHERIDAN, ALAN;DONSBACH, AARON MICHAEL;FILIP, DANIEL JOSEPH;SIGNING DATES FROM 20140425 TO 20140428;REEL/FRAME:032788/0939 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| CC | Certificate of correction | |
| AS | Assignment | Owner name: GOOGLE LLC, CALIFORNIA. Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044334/0466. Effective date: 20170929 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |