US20080086468A1 - Identifying sight for a location - Google Patents
- Publication number
- US20080086468A1 (U.S. application Ser. No. 11/548,253)
- Authority
- US
- United States
- Prior art keywords
- sight
- location
- sights
- candidate
- name
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
Description
- This application is related to U.S. patent application Ser. No. ______, entitled “USER INTERFACE FOR DISPLAYING IMAGES OF SIGHTS” filed concurrently herewith and identified by attorney docket number 418268385US, the disclosure of which is incorporated by reference herein in its entirety.
- The web (i.e., World Wide Web) is increasingly being used by people to plan their travels. A person planning a trip has many web-based resources available including web page search engine services, image search engine services, photographic forums, travel-related web sites, online travel booking services, and so on. Web page search engine services, such as Google and Overture, provide for searching for information on web pages that may be of interest to users. After a user submits a search request (also referred to as a “query”) that includes search terms, the search engine service identifies web pages that may be related to those search terms. For example, a user planning a trip to Cairo, Egypt may enter the query “Cairo Egypt tourism.” To quickly identify related web pages, the search engine services may maintain a mapping of keywords to web pages. This mapping may be generated by “crawling” the web to identify the keywords of each web page. To crawl the web, a search engine service may use a list of base web pages to identify all web pages that are accessible through those base web pages. The keywords of any particular web page can be identified using various well-known information retrieval techniques, such as identifying the words of a headline, the words supplied in the metadata of the web page, the words that are highlighted, and so on. The search engine service may generate a relevance score to indicate how related the information of the web page may be to the search request. The search engine service then displays to the user links to those web pages in an order that is based on their relevance.
- Several search engine services also provide for searching for images that are available on the Internet. These image search engines typically generate a mapping of keywords to images by crawling the web in much the same way as described above for mapping keywords to web pages. An image search engine service can identify keywords based on text of the web pages that contain the images. An image search engine may also gather keywords from metadata associated with images of web-based image forums, which are an increasingly popular mechanism for people to publish their photographs and other images. An image forum allows users to upload their photographs and requires the users to provide associated metadata such as title, camera setting, category, and description. The image forums typically allow reviewers to rate each of the uploaded images and thus have ratings on the quality of the images. Regardless of how the mappings are generated, an image search engine service inputs an image query and uses the mapping to find images that are related to the image query. An image search engine service may identify thousands of images that are related to an image query and present thumbnails of the related images. To help a user view the images, an image search engine service may order the thumbnails based on relevance of the images to the image query. An image search engine service may also limit the number of images that are provided to a few hundred of the most relevant images so as not to overwhelm the viewer.
- It can be very tedious for a person to plan a trip using the currently available web-based resources. For example, a person planning an automobile trip from Los Angeles, Calif. to Washington, D.C. would need to identify various routes, identify the locations (e.g., cities or counties) along the routes, identify sights associated with each location (e.g., Gateway Arch in St. Louis, Mo.), evaluate the routes and sights, and select a preferred route and sights to visit along that route. Although it may be easy for a person to identify various routes, it can be difficult to identify the sights that may be of interest along each route. If the person knows the name of a sight, the person can submit web page or image search requests. The person can then attempt to evaluate the search results to decide whether to visit that sight. If the person, however, does not know all the possible sights for a route, the person may need to consult various travel resources (e.g., books and visitor bureau web sites) that may describe the sights for the various locations along a route. Not only is the process tedious, the person may not even identify the most desirable sights or may overlook a desirable sight that is identified, because the quality of images and information available varies greatly from sight to sight and resource to resource.
- A method and system for identifying sights associated with a location is provided. A tour system identifies sights associated with a location by submitting a search request formed using the location to an image search service. The search results identify images relating to the location and provide metadata associated with each image. The tour system then identifies names of candidate sights from the metadata of the search results. The tour system may consider the salient phrases of the metadata to be the candidate sight names. Since the candidate sight names may include phrases that do not represent sights that can be visited, the tour system determines which of the candidate sight names represent actual sights. The tour system then discards those candidate sight names which do not represent actual sights and uses the remaining candidate sight names as the names of sights associated with the location. The tour system may then generate a mapping of location to sight names so that the sights associated with a location can be identified quickly.
- The tour system allows a user to search for sights of interest that are associated with specified locations. When a user specifies the location, the tour system uses the mapping of locations to sights to identify the sights of interest associated with that location. The tour system may display the names of the associated sights of interest to the user. Alternatively, the tour system may use the names of the sights of interest to identify images associated with each sight of interest. The tour system may submit the name of each sight of interest to an image search service to identify images associated with that sight. The tour system may consider the images of the search result for a sight to represent a cluster of images associated with that sight. The tour system may then display a representative image of each sight. The tour system may also simultaneously display a map encompassing and identifying the location and the sights.
- The tour system may automatically identify travel locations of interest between a start location and an end location of a trip. The tour system then identifies sights associated with each travel location along a route or travel path between the start location and the end location. The tour system identifies images for each sight and allows the user to browse through the images. The tour system may automatically identify a route between the start location and the end location. Alternatively, the tour system may allow the user to designate the route between the start location and the end location by tracing the route on a displayed map. The tour system may identify travel locations based on their distance from the route. When a travel location is selected, the tour system may display representative images of each sight associated with that location. When a representative image of a sight is selected, the tour system may display multiple images associated with that sight. In addition, the tour system may present a slideshow of the images of the sights associated with the travel locations.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- FIG. 1 is a flow diagram illustrating high-level processing of a generate location/sight mapping component of the tour system in one embodiment.
- FIG. 2 is a block diagram that illustrates a data structure for representing the location/sight mapping in one embodiment.
- FIG. 3 is a display page that illustrates a user interface of the tour system in one embodiment.
- FIG. 4 is a display page that illustrates a search by path user interface of the tour system in one embodiment.
- FIG. 5 is a block diagram illustrating components of the tour system in one embodiment.
- FIG. 6 is a flow diagram that illustrates the processing of the identify sights of locations component of the tour system in one embodiment.
- FIG. 7 is a flow diagram that illustrates the processing of the identify sights component of the tour system in one embodiment.
- FIG. 8 is a flow diagram that illustrates the processing of the identify candidate sight names component of the tour system in one embodiment.
- FIG. 9 is a flow diagram that illustrates processing of the generate feature vectors component of the tour system in one embodiment.
- FIG. 10 is a flow diagram that illustrates the processing of the score candidate sight names component of the tour system in one embodiment.
- FIG. 11 is a flow diagram that represents the processing of the identify geographic names component of the tour system in one embodiment.
- FIG. 12 is a flow diagram that illustrates the processing of the search by location component of the tour system in one embodiment.
- FIG. 13 is a flow diagram that illustrates the processing of the search by path component of the tour system in one embodiment.
- FIG. 14 is a flow diagram illustrating the processing of the retrieve images for locations component of the tour system in one embodiment.
- FIG. 15 is a flow diagram that illustrates the processing of the search by map component of the tour system in one embodiment.
- FIG. 16 is a flow diagram that illustrates the processing of the select image component of the tour system in one embodiment.
- FIG. 17 is a flow diagram that illustrates processing of the show slideshow by path component of the tour system in one embodiment.
- A method and system for identifying sights associated with a location is provided. In one embodiment, a tour system identifies sights associated with a location by submitting a search request formed using the location to an image search service. For example, the image search service may be a search provided by a web-based image forum. The location may be St. Louis, and the search request may be "St. Louis Missouri." The search results identify images relating to the location and provide metadata (e.g., image title and quality rating) associated with each image. For example, the search results for the search request "St. Louis Missouri" may include an image with the title "Building the Gateway Arch." The tour system then identifies names of candidate sights from the metadata of the search results. The tour system may consider the salient phrases (e.g., Gateway Arch or great sunset) of the metadata to be the candidate sight names. Since the candidate sight names may include phrases that do not represent sights that can be visited (e.g., great sunset), the tour system determines which of the candidate sight names represent geographic names. The tour system then discards those candidate sight names which do not represent geographic names and uses the remaining candidate sight names as the names of sights associated with the location. For example, the tour system may submit each candidate sight name (alone or in combination with the location) to a geographic name service to determine whether that candidate sight name corresponds to a geographic name. For example, if the candidate sight names include "Gateway Arch" and "great sunset," then the tour system will discard "great sunset" because it does not correspond to a geographic name. The tour system then generates a mapping of location to sight names so that the sights associated with a location can be identified quickly. For example, the tour system may map the location St. Louis to the sights of Gateway Arch, Missouri Botanical Garden, St. Louis Art Museum, and so on. In this way, the tour system can quickly identify sights that may be of interest to a person planning to travel to a location.
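- To make the filtering step above concrete, the following is a minimal sketch (Python is used only for illustration). The callable `geographic_name_service` is an assumed stand-in for the geographic name service mentioned in the text; it is not an actual API of the described system.

```python
def filter_candidate_sight_names(candidate_names, location, geographic_name_service):
    """Keep only candidate sight names that correspond to geographic names.

    `geographic_name_service(query)` is assumed to return True when the query
    resolves to a geographic name. With candidates {"Gateway Arch", "great
    sunset"} and location "St. Louis", only "Gateway Arch" would be kept,
    matching the example in the text.
    """
    sights = []
    for name in candidate_names:
        # Submit the candidate alone or in combination with the location,
        # as the description suggests.
        if geographic_name_service(f"{name}, {location}"):
            sights.append(name)
    return sights
```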
- In one embodiment, the tour system allows a user to search for sights of interest that are associated with specified locations. When a user specifies the location, the tour system uses the mapping of locations to sights to identify the sights of interest associated with that location. The tour system may display the names of the associated sights of interest to the user. For example, if the user specifies the location St. Louis, the tour system may identify and display the names of the Gateway Arch, Missouri Botanical Garden, and St. Louis Art Museum. Alternatively, the tour system may use the names of the sights of interest to identify images associated with each sight of interest. The tour system may submit the name of each sight of interest to an image search service to identify images associated with that sight. The tour system may consider the images of the search result for a sight to represent a cluster of images associated with that sight. The tour system may then display a representative image of each sight. For example, if the search results for “Gateway Arch” return 50 images and the search results for Missouri Botanical Garden return 10 images, the tour system may select a representative image for each sight based on relevance of the metadata to the query and quality rating of the images. The tour system may also simultaneously display a map encompassing and identifying the location and the sights. When a user selects an image representing a sight, the tour system may display additional images of the sight. The tour system may allow the user to scroll through the images that are displayed.
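- The selection of a representative image for each sight could be sketched as follows. The weighted combination of metadata relevance and quality rating is an assumption; the text says only that the selection is based on both signals, not how they are combined.

```python
def representative_image(sight_name, image_cluster, relevance, weight=0.5):
    """Pick one image to represent a sight's cluster of search-result images.

    `image_cluster` is the list of images returned for the sight; each image
    is assumed to be a dict with "metadata" (title, description) and a forum
    "quality" rating. `relevance(metadata, query)` is a stand-in scoring
    function. The weighted sum is only one plausible combination.
    """
    def score(image):
        return (weight * relevance(image["metadata"], sight_name)
                + (1.0 - weight) * image["quality"])

    return max(image_cluster, key=score)
```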
- In one embodiment, the tour system automatically identifies travel locations of interest between a start location and an end location of a trip. The tour system automatically identifies sights associated with each travel location along a route or travel path between the start location and the end location. The tour system may allow entry of additional attributes of the route such as names of various locations along the route. The tour system then identifies images for each sight and allows the user to browse through the images. For example, a user may specify the start location of Los Angeles and the end location of Washington, D.C. for a trip. The tour system may then identify St. Louis as a travel location along the travel path. The start and end locations may also be considered to be travel locations, and the start location and the end location may be the same location. The tour system may automatically identify a route between the start location and the end location. Alternatively, the tour system may allow the user to designate the route between the start location and the end location by tracing the route on a displayed map. The tour system may identify travel locations based on their distance from the route. For example, if the route is 300 miles, then the tour system may identify travel locations that are within 30 miles of the route. The tour system may display a representative image for each travel location and highlight the travel location on a map. When a travel location is selected (e.g., by selecting a representative image or selecting the location on the map), the tour system may display representative images of each sight associated with that location. When a representative image of a sight is selected, the tour system may display multiple images associated with that sight. In addition, the tour system may present a slideshow of the images of the sights associated with the travel locations. The tour system may order the images of the slideshow based on the order in which they would be encountered when traveling from the start location to the end location.
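- One way to realize the distance-based selection of travel locations is sketched below. The planar distance computation and the 10% threshold (which mirrors the 300-mile/30-mile example above) are illustrative assumptions; a real implementation would use a mapping service's road network rather than straight-line geometry.

```python
from math import hypot

def _point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (planar coordinates, illustrative)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def travel_locations(route, candidate_locations, fraction=0.1):
    """Keep candidate locations whose distance to the route is within a
    fraction of the route length (10% mirrors the example in the text).

    `route` is a list of points from start to end; `candidate_locations`
    maps a location name to a point.
    """
    segments = list(zip(route, route[1:]))
    length = sum(hypot(b[0] - a[0], b[1] - a[1]) for a, b in segments)
    threshold = fraction * length
    return [name for name, p in candidate_locations.items()
            if min(_point_segment_distance(p, a, b) for a, b in segments) <= threshold]
```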
- In the following, an embodiment of the tour system is described in detail with reference to the figures.
- FIG. 1 is a flow diagram illustrating high-level processing of a generate location/sight mapping component of the tour system in one embodiment. The component 100 is provided with a location and generates a mapping of that location to associated sights. In block 101, the component submits the location as a search request to an image search service. The search results identify images that are relevant to that location and their related metadata including an image title, an image description, and a quality of image rating. In block 102, the component identifies candidate sight names from the metadata of the search results. The candidate sight names may be salient phrases extracted from the text of the metadata. In block 103, the component identifies the candidate sight names that are geographic names. For example, the component may submit each candidate sight name to a geographic name service to determine whether the candidate sight name is a geographic name. The component selects the candidate sight names that are geographic names as the actual sight names associated with the location. In block 104, the component creates a mapping of the location to the sight names and completes. The location/sight mapping can be used by a user interface component of the tour system to facilitate identifying sights associated with locations.
- FIG. 2 is a block diagram that illustrates a data structure for representing the location/sight mapping in one embodiment. The location/sight mapping 200 includes a location table 201 and sight tables 202. The location table includes an entry for each location along with a pointer to a sight table for that location. Each sight table contains an entry containing the name of the sight for each sight associated with a location. For example, the location table may contain an entry for St. Louis that contains the name "St. Louis" and a reference to a sight table. The entries of that sight table may include the sight names of "Gateway Arch," "Missouri Botanical Garden," "St. Louis Art Museum," and so on.
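- The location table and sight tables of FIG. 2 might be represented as follows; the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SightTable:
    """One entry per sight associated with a location (element 202)."""
    sight_names: List[str] = field(default_factory=list)

@dataclass
class LocationSightMapping:
    """Location table (element 201): each entry holds a location name and a
    reference to that location's sight table."""
    locations: Dict[str, SightTable] = field(default_factory=dict)

    def add(self, location: str, sights: List[str]) -> None:
        self.locations[location] = SightTable(list(sights))

    def sights_for(self, location: str) -> List[str]:
        table = self.locations.get(location)
        return table.sight_names if table else []

# Example based on the text:
mapping = LocationSightMapping()
mapping.add("St. Louis", ["Gateway Arch", "Missouri Botanical Garden", "St. Louis Art Museum"])
```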
- FIG. 3 is a display page that illustrates a user interface of the tour system in one embodiment. The display page 300 includes a search by location area 301, a search by path area 302, an image panel 303, and a map panel 304. A user can search for sights by location, by path, or by map. To search for sights by location, a user enters the name of a location (e.g., St. Louis) into the search by location area 301 and selects the search arrows. The tour system uses the location/sight mapping to identify sights associated with that location, searches for images associated with each sight, and displays an image representing each sight within the image panel. The tour system may order the sights based on popularity of the sights and order the images for each sight based on a score that is a combination of relevance of the metadata of the image to the sight name and quality of the image. The popularity of a sight may be determined in various ways such as based on the number and quality of images of that sight, based on the number of clicks on images for the sights that are collected over time, and so on. The tour system may order displayed images so that the highest scored image of each sight is displayed before the second highest scored image of any sights and the highest scored images are ordered based on popularity of the sights. Thus, the tour system may display the highest scored image of the most popular sight first, followed by the highest scored image of the second most popular sight, and so on until the highest scored image of each sight is displayed. The tour system may then display the second highest scored image of the most popular sight followed by the second highest scored image of the second most popular sight, and so on. The tour system may also simultaneously display in the map panel a map that encompasses all the sights of the location. When a user selects an image from the image panel, the tour system then highlights the sight associated with that image within the map panel and displays the images of that sight within the image panel. A user can select an image to see an enlarged view of the image along with its metadata or hover a pointer over an image to see some of the metadata displayed near the image.
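- The interleaved display order described above (the best image of every sight before any sight's second-best image, sights taken in popularity order) can be sketched as a round-robin; the input structure is an assumption.

```python
def order_images_for_display(sights):
    """Return images in the display order described above.

    `sights` is assumed to be a list of (popularity, images) pairs where each
    image list is already sorted by descending score.
    """
    ordered = []
    by_popularity = [images for _, images in
                     sorted(sights, key=lambda s: s[0], reverse=True)]
    rank = 0
    # Take the rank-th image of each sight, most popular sight first.
    while any(rank < len(images) for images in by_popularity):
        for images in by_popularity:
            if rank < len(images):
                ordered.append(images[rank])
        rank += 1
    return ordered
```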
- FIG. 4 is a display page that illustrates a search by path user interface of the tour system in one embodiment. The display page 400 includes a search by location area 401, a search by path area 402, an image panel 403, and a map panel 404. A user has entered a start location (e.g., location A) and an end location (e.g., location Z) into the search by path area. The tour system has automatically identified and highlighted a travel path between the start location and the end location. The tour system then identified additional travel locations (e.g., location X and location Y) along the travel path. The tour system identified the sights of each travel location using the location/sight mapping and searched for images for each of the travel locations. The tour system initially displayed an image representing each travel location in the image panel. When the user selected an image, the tour system displayed an image representing each of the sights associated with the location of the selected image. The user then selected to view a slideshow of the sights associated with the travel path. The tour system displays the slides of the slideshow in the image panel along with the name of the location, the name of the sight, and the title of the image. The tour system also displays a moving automobile in the map area indicating the location whose sights are currently being displayed in the slideshow. The tour system may display the images of the sights of a location in the slideshow based on sight popularity and image score. The tour system may also allow the user to select the length of the slideshow (e.g., five minutes) or may automatically select the length to be a fixed amount (e.g., five minutes) or a variable length (e.g., based on number of locations and sights along the travel path).
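- A possible sketch of the path slideshow assembly follows; the even split of slides across locations is an assumption, since the text says only that the slideshow length may be fixed or derived from the number of locations and sights.

```python
def slideshow_slides(locations_in_path_order, total_slides, ordered_images):
    """Allocate a slide budget across travel locations (ordered from start to
    end) and take each location's top images.

    `ordered_images(location)` is assumed to return that location's sight
    images already ordered by sight popularity and image score (for example,
    via order_images_for_display above).
    """
    per_location = max(1, total_slides // max(1, len(locations_in_path_order)))
    slides = []
    for location in locations_in_path_order:
        for image in ordered_images(location)[:per_location]:
            slides.append((location, image))
    return slides
```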
- FIG. 5 is a block diagram illustrating components of the tour system in one embodiment. The tour system 500 is connected to various computing systems 501-504 via communications link 520. The computing systems may include one or more search service servers 501 (e.g., MSN Search), one or more image forum servers 502 (e.g., www.photosig.com), one or more geographic name servers 503 (e.g., Microsoft's VirtualEarth), and one or more client computing devices 504. The users of the tour system use the client computing devices to plan their travels using a web-based user interface provided by the tour system. The tour system includes offline components 530 and online components 540. The offline components include an identify sights of locations component 531, an identify sights component 532, an identify candidate sight names component 533, a generate feature vectors component 534, a score candidate sight names component 535, and an identify geographic names component 536. The tour system also includes a location/sight store 537 that is written to by the offline components and read by the online components. The online components include a search by location component 541, a search by path component 542, a search by map component 543, a retrieve images for locations component 544, and a show slideshow by path component 545. These components of the tour system are explained in detail below with reference to flow diagrams.
- The computing devices on which the tour system may be implemented may include a central processing unit, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), and storage devices (e.g., disk drives). The memory and storage devices are computer-readable media that may contain instructions that implement the tour system. In addition, the instructions, data structures, and message structures may be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links may be used, such as the Internet, a local area network, a wide area network, or a point-to-point dial-up connection.
- The tour system may be implemented on various computing systems or devices including personal computers, server computers, multiprocessor systems, microprocessor-based systems, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The tour system may be used by various computing systems such as personal computers, cell phones, personal digital assistants, consumer electronics, home automation devices, and so on.
- The tour system may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. For example, the offline components and online components may be implemented on different computing systems. Also, some of the offline components may be implemented as online components and vice versa.
- FIG. 6 is a flow diagram that illustrates the processing of the identify sights of locations component of the tour system in one embodiment. The component 600 is provided a list of locations and identifies sights associated with each of the locations. In block 601, the component selects the next location. In decision block 602, if all the locations have already been selected, then the component completes, else the component continues at block 603. In block 603, the component invokes the identify sights component to identify sights for the selected location. In block 604, the component generates a mapping of the selected location to the identified sights and then loops to block 601 to select the next location.
FIG. 7 is a flow diagram that illustrates the processing of the identify sights component of the tour system in one embodiment. The component 700 is passed a location and identifies the sights associated with that location. In block 701, the component submits the location as a query to a search service, which may be an image search service, and receives images and associated metadata as the search results. In block 702, the component invokes the identify candidate sight names component to identify candidate sight names from the search results. For example, the candidate sight names may be salient phrases within the metadata of the search results. In block 703, the component invokes the generate feature vectors component to generate feature vectors representing the candidate sight names. In block 704, the component invokes the score candidate sight names component to generate a score for the candidate sight names based on the generated feature vectors. In block 705, the component invokes the identify geographic names component to identify the candidate sight names that are geographic names. The component then returns as the sights the candidate sight names that are geographic names that satisfy a scoring threshold.
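A minimal sketch of this pipeline, assuming hypothetical search_images, identify_candidate_sight_names, generate_feature_vectors, score_candidate_sight_names, and is_geographic_name helpers (illustrative counterparts of the components of FIGS. 8-11) and an arbitrary example threshold:

```python
from typing import Dict, List

SCORE_THRESHOLD = 0.5  # illustrative value; the description only requires "a scoring threshold"

def identify_sights(location: str) -> List[str]:
    # Block 701: submit the location to a search service (here assumed to be an image
    # search service); search_images is a hypothetical client returning result dicts.
    results: List[Dict[str, str]] = search_images(query=location)
    # Block 702: extract candidate sight names (salient phrases) from the result metadata.
    candidates = identify_candidate_sight_names(results)
    # Blocks 703-704: build feature vectors over the result text and score each candidate.
    texts = [f"{r.get('title', '')} {r.get('description', '')}" for r in results]
    scores = score_candidate_sight_names(generate_feature_vectors(candidates, texts))
    # Block 705: keep candidates that are geographic names and meet the threshold.
    return [
        name
        for name in candidates
        if scores[name] >= SCORE_THRESHOLD and is_geographic_name(location, name)
    ]
```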
FIG. 8 is a flow diagram that illustrates the processing of the identify candidate sight names component of the tour system in one embodiment. The component 800 is passed search results and identifies candidate sight names from the search results. The tour system may identify candidate sight names using various techniques for identifying key or salient phrases. For example, the tour system may extract all phrases of certain lengths (e.g., n-grams) from the text of the search results and identify the various properties of the phrases such as phrase frequency, document frequency, phrase length, and so on. The tour system may identify the key phrases based on the properties of the phrases. The tour system may also filter the key phrases based on those that are unlikely to represent meaningful sight names (e.g., too many noise words), likely to represent the same sight (e.g., "Golden Gate Bridge" and "GoldenGate Bridge"), and so on. The tour system may also define a cluster of images for each key phrase as those images that contain the key phrase. The tour system may discard key phrases for clusters that have too many or too few images or that have too many images in common with other clusters. Techniques for identifying phrases from search results are described in Zeng, H., He, Q., Chen, Z., Ma, W., and Ma, J., "Learning to Cluster Web Search Results," SIGIR, Jul. 25-29, 2004, Sheffield, South Yorkshire, U.K., and U.S. patent application Ser. No. 10/889,841, entitled "Query-Based Snippet Clustering for Search Result Grouping" and filed on Jul. 13, 2004, both of which are hereby incorporated by reference. One technique described in these references trains a linear regression model to learn the scores of phrases from feature vectors of the phrases. The tour system may use the linear regression model to score the likelihood that a phrase is a key or salient phrase and thus a candidate sight name. In block 801, the component selects the next search result. In decision block 802, if all the search results have already been selected, then the component returns the candidate sight names, else the component continues at block 803. In block 803, the component identifies the phrases from the title of the selected search result. In block 804, the component identifies phrases from other text (e.g., description) of the selected search result. In block 805, the component applies stemming, such as Porter's stemming, to the phrases. In block 806, the component adds the identified phrases as candidate sight names and then loops to block 801 to select the next search result.
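As an illustration only, a simplified extraction of stemmed n-grams from result titles and descriptions might look like the sketch below; it uses a naive suffix-stripping stemmer in place of Porter's stemmer and omits the frequency-based filtering and clustering described above:

```python
import re
from typing import Dict, Iterable, List, Set

def naive_stem(word: str) -> str:
    """Rough stand-in for Porter stemming: strip a few common suffixes."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def phrases_from_text(text: str, max_len: int = 3) -> Set[str]:
    """Extract stemmed n-grams of up to max_len words as candidate phrases."""
    words = [naive_stem(w.lower()) for w in re.findall(r"[A-Za-z]+", text)]
    phrases: Set[str] = set()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            phrases.add(" ".join(words[i : i + n]))
    return phrases

def identify_candidate_sight_names(results: Iterable[Dict[str, str]]) -> List[str]:
    """Blocks 801-806: collect candidate sight names from titles and descriptions."""
    candidates: Set[str] = set()
    for result in results:
        candidates |= phrases_from_text(result.get("title", ""))        # block 803
        candidates |= phrases_from_text(result.get("description", ""))  # block 804
    return sorted(candidates)
```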
FIG. 9 is a flow diagram that illustrates processing of the generate feature vectors component of the tour system in one embodiment. The component 900 is passed candidate sight names and generates a feature vector for each candidate sight name that represents various features of the candidate sight name. In block 901, the component selects the next candidate sight name. In decision block 902, if all the candidate sight names have already been selected, then the component returns the feature vectors for the candidate sight names, else the component continues at block 903. In block 903, the component calculates a feature for the feature vector based on term frequency by inverse document frequency ("tf*idf"). In block 904, the component calculates a feature for the feature vector based on the length of the selected candidate sight name. In block 905, the component calculates a feature for the feature vector based on intra-cluster similarity. In block 906, the component calculates a feature for the feature vector to represent the entropy of the cluster. In block 907, the component calculates a feature for the feature vector that indicates the independence of the selected candidate sight name. The component then loops to block 901 to select the next candidate sight name. These features are described in Zeng, H., He, Q., Chen, Z., Ma, W., and Ma, J., "Learning to Cluster Web Search Results," SIGIR, Jul. 25-29, 2004, Sheffield, South Yorkshire, U.K. The component may also factor into the score the quality of images associated with a candidate sight name. The component may normalize or transform to a standard scale the quality ratings of the image forums as described in U.S. patent application Ser. No. 11/339,328, entitled "User Interface for Viewing Images," which is hereby incorporated by reference.
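Two of the simpler features (tf*idf and phrase length) can be computed as sketched below; the intra-cluster similarity, cluster entropy, and phrase independence features of Zeng et al., as well as the image-quality factor, are omitted for brevity, and the document texts are assumed to come from the search results for the location:

```python
import math
from typing import Dict, List

def generate_feature_vectors(
    candidates: List[str],
    documents: List[str],
) -> Dict[str, List[float]]:
    """Compute a small feature vector per candidate: [tf*idf, phrase length]."""
    num_docs = len(documents)
    vectors: Dict[str, List[float]] = {}
    for name in candidates:
        term = name.lower()
        term_freq = sum(doc.lower().count(term) for doc in documents)
        doc_freq = sum(1 for doc in documents if term in doc.lower())
        idf = math.log(num_docs / doc_freq) if doc_freq else 0.0
        vectors[name] = [term_freq * idf, float(len(name.split()))]
    return vectors
```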
FIG. 10 is a flow diagram that illustrates the processing of the score candidate sight names component of the tour system in one embodiment. The component 1000 generates a score for each candidate sight name that indicates the likelihood that a candidate sight name is a salient phrase of the search results. In block 1001, the component selects the next candidate sight name. In decision block 1002, if all the candidate sight names have already been selected, then the component returns, else the component continues at block 1003. In block 1003, the component initializes the score for the selected candidate sight name to zero. In blocks 1004-1006, the component loops accumulating a score for each feature i of the feature vector of the selected candidate sight name. In block 1004, the component selects the next feature of the feature vector for the selected candidate sight name. In decision block 1005, if all the features of the selected candidate sight name have already been selected, then the component continues at block 1007, else the component continues at block 1006. In block 1006, the component multiplies the value xi of the feature by a weighting factor bi and adds it to the score for the selected candidate sight name. The component then loops to block 1004 to select the next feature. In block 1007, the component sets a score of the selected candidate sight name and then loops to block 1001 to select the next candidate sight name.
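The score is thus a weighted sum of the feature values. A sketch with illustrative (not learned) weights:

```python
from typing import Dict, List

# Illustrative weights only; in the described approach the weights bi would be
# learned, for example by the linear regression model of Zeng et al.
WEIGHTS = [0.7, 0.3]  # one weight bi per feature xi

def score_candidate_sight_names(vectors: Dict[str, List[float]]) -> Dict[str, float]:
    """Blocks 1001-1007: score = sum over i of bi * xi for each candidate sight name."""
    scores: Dict[str, float] = {}
    for name, features in vectors.items():
        score = 0.0                          # block 1003: initialize the score
        for b, x in zip(WEIGHTS, features):  # blocks 1004-1006: accumulate bi * xi
            score += b * x
        scores[name] = score                 # block 1007: set the candidate's score
    return scores
```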
FIG. 11 is a flow diagram that represents the processing of the identify geographic names component of the tour system in one embodiment. The component 1100 is passed a location and candidate sight names for that location. The component identifies which of the candidate sight names represent geographic names and marks those candidate sight names as sight names. In block 1101, the component selects the next candidate sight name. In decision block 1102, if all the candidate sight names have already been selected, then the component returns the candidate sight names that are marked as sight names, else the component continues at block 1103. In block 1103, the component submits the location and the selected candidate sight name to a geographic name service. In decision block 1104, if the results provided by the geographic name service indicate that the selected candidate sight name corresponds to a geographic name, then the component continues at block 1105, else the component loops to block 1101 to select the next candidate sight name. In block 1105, the component marks the selected candidate sight name as a sight name and then loops to block 1101 to select the next candidate sight name.
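A sketch of the geographic-name check, assuming a hypothetical HTTP geocoding endpoint; the URL and response shape below are illustrative and do not correspond to any particular geographic name service:

```python
import json
import urllib.parse
import urllib.request
from typing import List

GEO_NAME_SERVICE = "https://example.com/geocode"  # hypothetical endpoint

def is_geographic_name(location: str, candidate: str) -> bool:
    """Blocks 1103-1104: ask the service whether the candidate names a place near the location."""
    query = urllib.parse.urlencode({"q": f"{candidate}, {location}"})
    with urllib.request.urlopen(f"{GEO_NAME_SERVICE}?{query}") as response:
        result = json.load(response)
    return bool(result.get("matches"))  # assumed response field

def identify_geographic_names(location: str, candidates: List[str]) -> List[str]:
    """Block 1105: keep (mark) the candidate sight names recognized as geographic names."""
    return [name for name in candidates if is_geographic_name(location, name)]
```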
FIG. 12 is a flow diagram that illustrates the processing of the search by location component of the tour system in one embodiment. The component 1200 is passed a location and displays images of sights associated with that location. In block 1201, the component retrieves the sight names for the passed location using the location/sight mapping. In decision block 1202, if sight names are found for the passed location, then the component continues at block 1203, else the component continues at block 1209. In blocks 1203-1205, the component loops submitting each of the retrieved sight names to an image search service to identify images corresponding to these sight names. In block 1203, the component selects the next retrieved sight name. In decision block 1204, if all the sight names have already been selected, then the component continues at block 1206, else the component continues at block 1205. In block 1205, the component submits the location and sight name as a query to an image search service. The search results identify the images and associated metadata for the selected sight name. The component then loops to block 1203 to select the next sight name. In block 1206, the component orders the images for display. In block 1207, the component displays the images in the image panel. In block 1208, the component displays a map in the map panel that encompasses the sights of the passed location. In block 1209, if no sight names for the passed location were found, then the component submits the location as a query to the image search service. The component then displays the images of the search result in the image panel and a map centered at that location in the map panel. The component then completes.
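A sketch of this flow, assuming hypothetical sights_for (a read of the location/sight store), search_images, display_images, display_map, and region_around helpers, and image results represented as dicts carrying a relevance field:

```python
from typing import List

def search_by_location(location: str) -> None:
    sight_names: List[str] = sights_for(location)             # block 1201
    if sight_names:                                            # block 1202
        images = []
        for sight in sight_names:                              # blocks 1203-1205
            images.extend(search_images(query=f"{location} {sight}"))
        images.sort(key=lambda img: img.get("relevance", 0.0), reverse=True)  # block 1206
        display_images(images)                                 # block 1207: image panel
        display_map(region=region_around(location, sight_names))  # block 1208: map panel
    else:                                                      # block 1209: no known sights
        display_images(search_images(query=location))
        display_map(region=region_around(location, []))
```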
FIG. 13 is a flow diagram that illustrates the processing of the search by path component of the tour system in one embodiment. The component 1300 is passed a start location and an end location. The component identifies locations along the travel path from the start location to the end location and displays images of sights associated with those locations. In block 1301, the component identifies locations on or near the path from the start location to the end location. In block 1302, the component invokes the retrieve images for locations component to retrieve images of sights of the identified locations. In block 1303, the component orders the images of the sights based on distance from the start location. In block 1304, the component displays the images in the image panel. In block 1305, the component displays a map in the map panel that illustrates the travel path from the start location to the end location and then completes.
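A sketch of the search-by-path flow, assuming hypothetical locations_near_path and distance_from routing helpers, the display helpers used above, and the retrieve_images_for_locations sketch given with FIG. 14 below, which tags each image with its location:

```python
from typing import List

def search_by_path(start: str, end: str) -> None:
    # Block 1301: identify locations on or near the path from the start to the end location.
    locations: List[str] = locations_near_path(start, end)
    # Block 1302: retrieve images of the sights of those locations (see FIG. 14).
    images = retrieve_images_for_locations(locations)
    # Block 1303: order the images by the distance of their location from the start.
    images.sort(key=lambda img: distance_from(start, img["location"]))
    # Blocks 1304-1305: display the images in the image panel and the path in the map panel.
    display_images(images)
    display_map(path=(start, end))
```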
FIG. 14 is a flow diagram illustrating the processing of the retrieve images for locations component of the tour system in one embodiment. The component 1400 is passed a list of locations and retrieves images of the sights associated with those locations. In block 1401, the component selects the next location. In decision block 1402, if all the locations have already been selected, then the component continues at block 1406, else the component continues at block 1403. In block 1403, the component selects the next sight for the selected location. In decision block 1404, if all the sights for the selected location have already been selected, then the component loops to block 1401 to select the next location, else the component continues at block 1405. In block 1405, the component submits a query that includes the selected location and selected sight to an image search service. The search results indicate the images associated with the selected sight. The component then loops to block 1403 to select the next sight. In block 1406, the component discards any locations with insignificant sights and then returns.
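A sketch consistent with the previous examples, again assuming hypothetical sights_for and search_images helpers; the threshold for discarding locations with insignificant sights is illustrative:

```python
from typing import List

MIN_IMAGES_PER_LOCATION = 3  # illustrative threshold for "insignificant sights"

def retrieve_images_for_locations(locations: List[str]) -> List[dict]:
    images: List[dict] = []
    for location in locations:                                 # blocks 1401-1402
        location_images: List[dict] = []
        for sight in sights_for(location):                     # blocks 1403-1404
            results = search_images(query=f"{location} {sight}")  # block 1405
            for result in results:
                result["location"] = location                  # tag each image with its
                result["sight"] = sight                        # location and sight
            location_images.extend(results)
        if len(location_images) >= MIN_IMAGES_PER_LOCATION:    # block 1406: discard locations
            images.extend(location_images)                     # with insignificant sights
    return images
```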
FIG. 15 is a flow diagram that illustrates the processing of the search by map component of the tour system in one embodiment. The component 1500 identifies locations that are visible on the currently displayed map and displays images for the sights of those locations. In block 1501, the component identifies the locations visible on the map displayed in the map panel. In block 1502, the component invokes the retrieve images for locations component to retrieve images associated with the sights of the identified locations. In block 1503, the component orders the sights of each location by the number of images (or sight popularity). In block 1504, the component displays images of the sights in the image panel. In block 1505, the component displays a map in the map panel with the locations highlighted. The component then completes.
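A sketch of the search-by-map flow, assuming a hypothetical locations_in_region helper that maps the visible map region to locations, the retrieve_images_for_locations sketch of FIG. 14 above, and the display helpers used earlier:

```python
def search_by_map(visible_region) -> None:
    # Block 1501: identify the locations visible in the currently displayed map region.
    locations = locations_in_region(visible_region)
    # Block 1502: retrieve images of the sights of those locations (see FIG. 14).
    images = retrieve_images_for_locations(locations)
    # Block 1503: group images by sight and order sights by their number of images (popularity).
    by_sight = {}
    for image in images:
        by_sight.setdefault((image["location"], image["sight"]), []).append(image)
    ordered_groups = sorted(by_sight.values(), key=len, reverse=True)
    # Blocks 1504-1505: display the images and the map with the locations highlighted.
    display_images([img for group in ordered_groups for img in group])
    display_map(region=visible_region, highlight=locations)
```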
FIG. 16 is a flow diagram that illustrates the processing of the select image component of the tour system in one embodiment. The component 1600 is passed a sight and selects images for the sight. In block 1601, the component selects images for the passed sight. In block 1602, the component displays the selected images in the image panel. In block 1603, the component centers the map on the passed sight. In block 1604, the component displays the map in the map panel and then completes.
FIG. 17 is a flow diagram that illustrates processing of the show slideshow by path component of the tour system in one embodiment. The component 1700 is passed a start location and an end location and displays a slideshow of the sights encountered along a path from the start location to the end location. The tour system may also allow slideshows of sights of a single location. In block 1701, the component sets a time for the slideshow. The time may be specified by the user, may be a fixed time, or may vary depending on the length of the path, the number of travel locations, the number of sights, and so on. In block 1702, the component selects the next location along the travel path. In decision block 1703, if all the locations have already been selected, then the component returns, else the component continues at block 1704. In block 1704, the component calculates the number of slides to be presented for the selected location. The number of slides may be a fixed number or may be dynamically generated based on the time of the slideshow, the number of sights for each location, the number of locations, and so on. In block 1705, the component increments the number of slides that have been displayed. In decision block 1706, if the number of slides that have been displayed for the selected location is greater than the number of slides to be displayed for this location, then the component loops to block 1702 to select the next location, else the component continues at block 1707. In block 1707, the component selects the next sight of the selected location. In block 1708, the component selects the next unseen image for the selected sight. In block 1709, the component displays the image in the image panel. The component then loops to block 1705 to select the next sight. After the component selects all the sights, it may then start over by selecting the first sight of the selected location until the calculated number of slides is presented.
- Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. For example, the tour system may submit candidate sight names to a service that identifies the names as being sights of interest. The tour system may identify candidate sight names using a web search service or other search service, rather than an image search service. In the case of a web search service, the tour system may extract salient phrases from titles and snippets of the search results. Accordingly, the invention is not limited except as by the appended claims.
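Referring back to the FIG. 17 description, a minimal sketch of the slide allocation and round-robin display, assuming images have already been retrieved per sight and a hypothetical display_image UI helper; the per-slide pacing is an illustrative constant:

```python
import time
from typing import Dict, List

SECONDS_PER_SLIDE = 5  # illustrative pacing; the slideshow time is configurable

def show_slideshow_by_path(
    path_locations: List[str],
    images_by_sight: Dict[str, Dict[str, List[dict]]],  # location -> sight -> unseen images
    total_seconds: int,
) -> None:
    total_slides = max(1, total_seconds // SECONDS_PER_SLIDE)           # block 1701
    per_location = max(1, total_slides // max(1, len(path_locations)))  # block 1704
    for location in path_locations:                                     # blocks 1702-1703
        sights = images_by_sight.get(location, {})
        sight_names = list(sights)
        shown, index = 0, 0
        # Blocks 1705-1709: cycle through the location's sights, showing one unseen image
        # per sight, starting over at the first sight until the allocation is used up.
        while shown < per_location and any(sights[s] for s in sight_names):
            sight = sight_names[index % len(sight_names)]
            index += 1
            if not sights[sight]:
                continue
            display_image(sights[sight].pop(0))
            time.sleep(SECONDS_PER_SLIDE)
            shown += 1
```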
Claims (20)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/548,253 US7707208B2 (en) | 2006-10-10 | 2006-10-10 | Identifying sight for a location |
PCT/US2007/079983 WO2008045704A1 (en) | 2006-10-10 | 2007-09-28 | Identifying sight for a location |
CA002661532A CA2661532A1 (en) | 2006-10-10 | 2007-09-28 | Identifying sight for a location |
EP07843547A EP2089845A4 (en) | 2006-10-10 | 2007-09-28 | Identifying sight for a location |
CNA2007800373819A CN101523432A (en) | 2006-10-10 | 2007-09-28 | Identifying sight for a location |
JP2009532498A JP2010506335A (en) | 2006-10-10 | 2007-09-28 | Site identification for location |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/548,253 US7707208B2 (en) | 2006-10-10 | 2006-10-10 | Identifying sight for a location |
Publications (2)
Publication Number | Publication Date |
---|---|
US20080086468A1 true US20080086468A1 (en) | 2008-04-10 |
US7707208B2 US7707208B2 (en) | 2010-04-27 |
Family
ID=39275765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/548,253 Expired - Fee Related US7707208B2 (en) | 2006-10-10 | 2006-10-10 | Identifying sight for a location |
Country Status (6)
Country | Link |
---|---|
US (1) | US7707208B2 (en) |
EP (1) | EP2089845A4 (en) |
JP (1) | JP2010506335A (en) |
CN (1) | CN101523432A (en) |
CA (1) | CA2661532A1 (en) |
WO (1) | WO2008045704A1 (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070174790A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | User interface for viewing clusters of images |
US20070174872A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Ranking content based on relevance and quality |
US20070174269A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | Generating clusters of images for search results |
US20110078139A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue locating mining for travel suggestion |
US20110077848A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue-based travel route planning |
US20110078575A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue-based contextual map generation |
WO2012058036A1 (en) * | 2010-10-28 | 2012-05-03 | Eastman Kodak Company | Organizing nearby picture hotspots |
US20120229503A1 (en) * | 2011-03-08 | 2012-09-13 | Sony Corporation | Reproduction processing apparatus, imaging apparatus, reproduction processing method, and program |
US20130069990A1 (en) * | 2011-09-21 | 2013-03-21 | Horsetooth Ventures, LLC | Interactive Image Display and Selection System |
CN103064924A (en) * | 2012-12-17 | 2013-04-24 | 浙江鸿程计算机系统有限公司 | Travel destination situation recommendation method based on geotagged photo excavation |
US8484222B1 (en) * | 2006-12-01 | 2013-07-09 | Google Inc. | Method and apparatus for identifying a standalone location |
US8572076B2 (en) | 2010-04-22 | 2013-10-29 | Microsoft Corporation | Location context mining |
US20140073361A1 (en) * | 2010-10-28 | 2014-03-13 | Intellectual Ventures Fund 83 Llc | Method of locating nearby picture hotspots |
US8676807B2 (en) | 2010-04-22 | 2014-03-18 | Microsoft Corporation | Identifying location names within document text |
CN103678429A (en) * | 2012-09-26 | 2014-03-26 | 阿里巴巴集团控股有限公司 | Recommendation method and device of tour routes |
US20140313140A1 (en) * | 2012-01-10 | 2014-10-23 | Canon Kabushiki Kaisha | Operation reception device and method for receiving operation on page image, storage medium, and image forming apparatus for use with operation reception device |
WO2015108761A1 (en) * | 2014-01-16 | 2015-07-23 | Microsoft Technology Licensing, Llc | Discovery of viewsheds and vantage points by mining geo-tagged data |
US9092409B2 (en) * | 2007-10-11 | 2015-07-28 | Google Inc. | Smart scoring and filtering of user-annotated geocoded datasets |
CN105117400A (en) * | 2015-07-08 | 2015-12-02 | 百度在线网络技术(北京)有限公司 | Information searching method and system |
US9460160B1 (en) | 2011-11-29 | 2016-10-04 | Google Inc. | System and method for selecting user generated content related to a point of interest |
US9471695B1 (en) * | 2014-12-02 | 2016-10-18 | Google Inc. | Semantic image navigation experiences |
US10614366B1 (en) | 2006-01-31 | 2020-04-07 | The Research Foundation for the State University o | System and method for multimedia ranking and multi-modal image retrieval using probabilistic semantic models and expectation-maximization (EM) learning |
US11068532B2 (en) * | 2011-09-21 | 2021-07-20 | Horsetooth Ventures, LLC | Interactive image display and selection system |
US20220092253A1 (en) * | 2020-09-18 | 2022-03-24 | Fujifilm Business Innovation Corp. | Information processing apparatus and non-transitory computer readable medium |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070209025A1 (en) * | 2006-01-25 | 2007-09-06 | Microsoft Corporation | User interface for viewing images |
US8676001B2 (en) | 2008-05-12 | 2014-03-18 | Google Inc. | Automatic discovery of popular landmarks |
US8086048B2 (en) * | 2008-05-23 | 2011-12-27 | Yahoo! Inc. | System to compile landmark image search results |
US8285719B1 (en) | 2008-08-08 | 2012-10-09 | The Research Foundation Of State University Of New York | System and method for probabilistic relational clustering |
US8396287B2 (en) | 2009-05-15 | 2013-03-12 | Google Inc. | Landmarks from digital photo collections |
JP2011248832A (en) * | 2010-05-31 | 2011-12-08 | Denso It Laboratory Inc | Image collection system, portable terminal, image collection device, and image collection method |
US8489641B1 (en) | 2010-07-08 | 2013-07-16 | Google Inc. | Displaying layers of search results on a map |
US9870429B2 (en) * | 2011-11-30 | 2018-01-16 | Nokia Technologies Oy | Method and apparatus for web-based augmented reality application viewer |
CN103377202B (en) * | 2012-04-17 | 2017-07-07 | 阿里巴巴集团控股有限公司 | The display methods and device of map label point |
US9202307B2 (en) * | 2012-08-08 | 2015-12-01 | Google Inc. | Browsing images of a point of interest within an image graph |
JP5673631B2 (en) * | 2012-09-06 | 2015-02-18 | トヨタ自動車株式会社 | Information display device and portable terminal device |
US10229415B2 (en) | 2013-03-05 | 2019-03-12 | Google Llc | Computing devices and methods for identifying geographic areas that satisfy a set of multiple different criteria |
TWI511069B (en) * | 2013-11-28 | 2015-12-01 | Inst Information Industry | Travel plan apparatus, method and storage media |
CN108287893A (en) * | 2018-01-18 | 2018-07-17 | 安徽大学 | Self-help tour guide system and method based on digital image recognition auxiliary positioning |
CN108427743A (en) * | 2018-03-07 | 2018-08-21 | 浪潮软件集团有限公司 | Scenic spot retrieval and reordering method based on geographic position |
CN110646005A (en) * | 2018-12-29 | 2020-01-03 | 北京奇虎科技有限公司 | Method and device for displaying map area features based on map interface |
Citations (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4888646A (en) * | 1988-01-29 | 1989-12-19 | Nec Corporation | Optimal image-quality selection apparatus |
US5301018A (en) * | 1991-02-13 | 1994-04-05 | Ampex Systems Corporation | Method and apparatus for shuffling image data into statistically averaged data groups and for deshuffling the data |
US5579471A (en) * | 1992-11-09 | 1996-11-26 | International Business Machines Corporation | Image query system and method |
US5642433A (en) * | 1995-07-31 | 1997-06-24 | Neopath, Inc. | Method and apparatus for image contrast quality evaluation |
US5802361A (en) * | 1994-09-30 | 1998-09-01 | Apple Computer, Inc. | Method and system for searching graphic images and videos |
US5870740A (en) * | 1996-09-30 | 1999-02-09 | Apple Computer, Inc. | System and method for improving the ranking of information retrieval results for short queries |
US5875446A (en) * | 1997-02-24 | 1999-02-23 | International Business Machines Corporation | System and method for hierarchically grouping and ranking a set of objects in a query context based on one or more relationships |
US5937422A (en) * | 1997-04-15 | 1999-08-10 | The United States Of America As Represented By The National Security Agency | Automatically generating a topic description for text and searching and sorting text by topic using the same |
US5987456A (en) * | 1997-10-28 | 1999-11-16 | University Of Massachusetts | Image retrieval by syntactic characterization of appearance |
US6006218A (en) * | 1997-02-28 | 1999-12-21 | Microsoft | Methods and apparatus for retrieving and/or processing retrieved information as a function of a user's estimated knowledge |
US6041323A (en) * | 1996-04-17 | 2000-03-21 | International Business Machines Corporation | Information search method, information search device, and storage medium for storing an information search program |
US6097389A (en) * | 1997-10-24 | 2000-08-01 | Pictra, Inc. | Methods and apparatuses for presenting a collection of digital media in a media container |
US6115717A (en) * | 1997-01-23 | 2000-09-05 | Eastman Kodak Company | System and method for open space metadata-based storage and retrieval of images in an image database |
US6128613A (en) * | 1997-06-26 | 2000-10-03 | The Chinese University Of Hong Kong | Method and apparatus for establishing topic word classes based on an entropy cost function to retrieve documents represented by the topic words |
US6134541A (en) * | 1997-10-31 | 2000-10-17 | International Business Machines Corporation | Searching multidimensional indexes using associated clustering and dimension reduction information |
US6167397A (en) * | 1997-09-23 | 2000-12-26 | At&T Corporation | Method of clustering electronic documents in response to a search query |
US6240378B1 (en) * | 1994-11-18 | 2001-05-29 | Matsushita Electric Industrial Co., Ltd. | Weighting method for use in information extraction and abstracting, based on the frequency of occurrence of keywords and similarity calculations |
US6256623B1 (en) * | 1998-06-22 | 2001-07-03 | Microsoft Corporation | Network search access construct for accessing web-based search services |
US20010020238A1 (en) * | 2000-02-04 | 2001-09-06 | Hiroshi Tsuda | Document searching apparatus, method thereof, and record medium thereof |
US6321226B1 (en) * | 1998-06-30 | 2001-11-20 | Microsoft Corporation | Flexible keyboard searching |
US20020035573A1 (en) * | 2000-08-01 | 2002-03-21 | Black Peter M. | Metatag-based datamining |
US6363373B1 (en) * | 1998-10-01 | 2002-03-26 | Microsoft Corporation | Method and apparatus for concept searching using a Boolean or keyword search engine |
US6370527B1 (en) * | 1998-12-29 | 2002-04-09 | At&T Corp. | Method and apparatus for searching distributed networks using a plurality of search devices |
US20020042793A1 (en) * | 2000-08-23 | 2002-04-11 | Jun-Hyeog Choi | Method of order-ranking document clusters using entropy data and bayesian self-organizing feature maps |
US20020052894A1 (en) * | 2000-08-18 | 2002-05-02 | Francois Bourdoncle | Searching tool and process for unified search using categories and keywords |
US20020055936A1 (en) * | 2000-08-21 | 2002-05-09 | Kent Ridge Digital Labs | Knowledge discovery system |
US6445834B1 (en) * | 1998-10-19 | 2002-09-03 | Sony Corporation | Modular image query system |
US6470307B1 (en) * | 1997-06-23 | 2002-10-22 | National Research Council Of Canada | Method and apparatus for automatically identifying keywords within a document |
US6473753B1 (en) * | 1998-10-09 | 2002-10-29 | Microsoft Corporation | Method and system for calculating term-document importance |
US20030023600A1 (en) * | 2001-07-30 | 2003-01-30 | Kabushiki Kaisha | Knowledge analysis system, knowledge analysis method, and knowledge analysis program product |
US6523021B1 (en) * | 2000-07-31 | 2003-02-18 | Microsoft Corporation | Business directory search engine |
US6522782B2 (en) * | 2000-12-15 | 2003-02-18 | America Online, Inc. | Image and text searching techniques |
US20030061211A1 (en) * | 2000-06-30 | 2003-03-27 | Shultz Troy L. | GIS based search engine |
US20030063131A1 (en) * | 2001-10-03 | 2003-04-03 | Tiger Color Inc. | Picture moving and positioning method in image processing software |
US6549897B1 (en) * | 1998-10-09 | 2003-04-15 | Microsoft Corporation | Method and system for calculating phrase-document importance |
US6556710B2 (en) * | 2000-12-15 | 2003-04-29 | America Online, Inc. | Image searching techniques |
US6567936B1 (en) * | 2000-02-08 | 2003-05-20 | Microsoft Corporation | Data clustering using error-tolerant frequent item sets |
US6578032B1 (en) * | 2000-06-28 | 2003-06-10 | Microsoft Corporation | Method and system for performing phrase/word clustering and cluster merging |
US20030126235A1 (en) * | 2002-01-03 | 2003-07-03 | Microsoft Corporation | System and method for performing a search and a browse on a query |
US6590586B1 (en) * | 1999-10-28 | 2003-07-08 | Xerox Corporation | User interface for a browser based image storage and processing system |
US20030140033A1 (en) * | 2002-01-23 | 2003-07-24 | Matsushita Electric Industrial Co., Ltd. | Information analysis display device and information analysis display program |
US20030144994A1 (en) * | 2001-10-12 | 2003-07-31 | Ji-Rong Wen | Clustering web queries |
US20030142123A1 (en) * | 1993-10-25 | 2003-07-31 | Microsoft Corporation | Information pointers |
US6606659B1 (en) * | 2000-01-28 | 2003-08-12 | Websense, Inc. | System and method for controlling access to internet sites |
US20030191672A1 (en) * | 2001-12-21 | 2003-10-09 | Kendall Errol O. | System for appraising life insurance and annuities |
US6643641B1 (en) * | 2000-04-27 | 2003-11-04 | Russell Snyder | Web search engine with graphic snapshots |
US20040015461A1 (en) * | 2002-07-13 | 2004-01-22 | Lo James Ting-Ho | Risk-averting method of training neural networks and estimating regression models |
US20040044469A1 (en) * | 2002-09-03 | 2004-03-04 | Thorsten Bender | Displaying road maps |
US6704729B1 (en) * | 2000-05-19 | 2004-03-09 | Microsoft Corporation | Retrieval of relevant information categories |
US6728752B1 (en) * | 1999-01-26 | 2004-04-27 | Xerox Corporation | System and method for information browsing using multi-modal features |
US6748398B2 (en) * | 2001-03-30 | 2004-06-08 | Microsoft Corporation | Relevance maximizing, iteration minimizing, relevance-feedback, content-based image retrieval (CBIR) |
US20040111438A1 (en) * | 2002-12-04 | 2004-06-10 | Chitrapura Krishna Prasad | Method and apparatus for populating a predefined concept hierarchy or other hierarchical set of classified data items by minimizing system entrophy |
US6766320B1 (en) * | 2000-08-24 | 2004-07-20 | Microsoft Corporation | Search engine with natural language-based robust parsing for user query and relevance feedback learning |
US6775666B1 (en) * | 2001-05-29 | 2004-08-10 | Microsoft Corporation | Method and system for searching index databases |
US6816850B2 (en) * | 1997-08-01 | 2004-11-09 | Ask Jeeves, Inc. | Personalized search methods including combining index entries for catagories of personal data |
US20040225667A1 (en) * | 2003-03-12 | 2004-11-11 | Canon Kabushiki Kaisha | Apparatus for and method of summarising text |
US6823335B2 (en) * | 1998-10-05 | 2004-11-23 | Canon Kabushiki Kaisha | Information search apparatus and method, and storage medium |
US6847733B2 (en) * | 2001-05-23 | 2005-01-25 | Eastman Kodak Company | Retrieval and browsing of database images based on image emphasis and appeal |
US20050022106A1 (en) * | 2003-07-25 | 2005-01-27 | Kenji Kawai | System and method for performing efficient document scoring and clustering |
US20050097475A1 (en) * | 2003-09-12 | 2005-05-05 | Fuji Photo Film Co., Ltd. | Image comparative display method, image comparative display apparatus, and computer-readable medium |
US6895552B1 (en) * | 2000-05-31 | 2005-05-17 | Ricoh Co., Ltd. | Method and an apparatus for visual summarization of documents |
US20050108200A1 (en) * | 2001-07-04 | 2005-05-19 | Frank Meik | Category based, extensible and interactive system for document retrieval |
US6901411B2 (en) * | 2002-02-11 | 2005-05-31 | Microsoft Corporation | Statistical bigram correlation model for image retrieval |
US20050144158A1 (en) * | 2003-11-18 | 2005-06-30 | Capper Liesl J. | Computer network search engine |
US20050165841A1 (en) * | 2004-01-23 | 2005-07-28 | Microsoft Corporation | System and method for automatically grouping items |
US20050188326A1 (en) * | 2004-02-25 | 2005-08-25 | Triworks Corp. | Image assortment supporting device |
US6944612B2 (en) * | 2002-11-13 | 2005-09-13 | Xerox Corporation | Structured contextual clustering method and system in a federated search engine |
US20060004752A1 (en) * | 2004-06-30 | 2006-01-05 | International Business Machines Corporation | Method and system for determining the focus of a document |
US20060026152A1 (en) * | 2004-07-13 | 2006-02-02 | Microsoft Corporation | Query-based snippet clustering for search result grouping |
US7010751B2 (en) * | 2000-02-18 | 2006-03-07 | University Of Maryland, College Park | Methods for the electronic annotation, retrieval, and use of electronic images |
US7017114B2 (en) * | 2000-09-20 | 2006-03-21 | International Business Machines Corporation | Automatic correlation method for generating summaries for text documents |
US7047482B1 (en) * | 2001-02-28 | 2006-05-16 | Gary Odom | Automatic directory supplementation |
US7051019B1 (en) * | 1999-08-17 | 2006-05-23 | Corbis Corporation | Method and system for obtaining images from a database having images that are relevant to indicated text |
US20060117003A1 (en) * | 1998-07-15 | 2006-06-01 | Ortega Ruben E | Search query processing to identify related search terms and to correct misspellings of search terms |
US7065520B2 (en) * | 2000-10-03 | 2006-06-20 | Ronald Neville Langford | Method of locating web-pages by utilising visual images |
US7099860B1 (en) * | 2000-10-30 | 2006-08-29 | Microsoft Corporation | Image retrieval systems and methods with semantic and feature based relevance feedback |
US20060242178A1 (en) * | 2005-04-21 | 2006-10-26 | Yahoo! Inc. | Media object metadata association and ranking |
US7158878B2 (en) * | 2004-03-23 | 2007-01-02 | Google Inc. | Digital mapping system |
US7162468B2 (en) * | 1998-07-31 | 2007-01-09 | Schwartz Richard M | Information retrieval system |
US20070073748A1 (en) * | 2005-09-27 | 2007-03-29 | Barney Jonathan A | Method and system for probabilistically quantifying and visualizing relevance between two or more citationally or contextually related data objects |
US20070118800A1 (en) * | 2005-09-07 | 2007-05-24 | Moore Michael R | Systems and methods for dynamically integrated capture, collection, authoring, presentation and production of digital content |
US20070115373A1 (en) * | 2005-11-22 | 2007-05-24 | Eastman Kodak Company | Location based image classification with map segmentation |
US20070174269A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | Generating clusters of images for search results |
US20070174872A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Ranking content based on relevance and quality |
US20070174865A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Normalizing content ratings of content forums |
US20070185866A1 (en) * | 2004-02-13 | 2007-08-09 | Evans Lynne M | System and method for arranging clusters in a display by theme |
US20070198182A1 (en) * | 2004-09-30 | 2007-08-23 | Mona Singh | Method for incorporating images with a user perspective in navigation |
US20070209025A1 (en) * | 2006-01-25 | 2007-09-06 | Microsoft Corporation | User interface for viewing images |
US20070244925A1 (en) * | 2006-04-12 | 2007-10-18 | Jean-Francois Albouze | Intelligent image searching |
US7287012B2 (en) * | 2004-01-09 | 2007-10-23 | Microsoft Corporation | Machine-learned approach to determining document relevance for search over large electronic collections of documents |
US7349899B2 (en) * | 2001-07-17 | 2008-03-25 | Fujitsu Limited | Document clustering device, document searching system, and FAQ preparing system |
US20080086686A1 (en) * | 2006-10-10 | 2008-04-10 | Microsoft Corporation | User interface for displaying images of sights |
US20080189253A1 (en) * | 2000-11-27 | 2008-08-07 | Jonathan James Oliver | System And Method for Adaptive Text Recommendation |
US7492921B2 (en) * | 2005-01-10 | 2009-02-17 | Fuji Xerox Co., Ltd. | System and method for detecting and ranking images in order of usefulness based on vignette score |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7251637B1 (en) | 1993-09-20 | 2007-07-31 | Fair Isaac Corporation | Context vector generation and retrieval |
WO2001031502A1 (en) | 1999-10-27 | 2001-05-03 | Fujitsu Limited | Multimedia information classifying/arranging device and method |
KR20010105051A (en) * | 2000-05-18 | 2001-11-28 | 신민정 | Sight-seeing guide system of ON/OFF line connection type and service operating method for the same |
US20010049700A1 (en) | 2000-05-26 | 2001-12-06 | Shinobu Ichikura | Information processing apparatus, information processing method and storage medium |
KR20020000680A (en) * | 2000-06-28 | 2002-01-05 | 김영윤 | advertisement method of local shops using game in internet map |
US20020194166A1 (en) | 2001-05-01 | 2002-12-19 | Fowler Abraham Michael | Mechanism to sift through search results using keywords from the results |
US6920448B2 (en) | 2001-05-09 | 2005-07-19 | Agilent Technologies, Inc. | Domain specific knowledge-based metasearch system and methods of using |
US6978275B2 (en) | 2001-08-31 | 2005-12-20 | Hewlett-Packard Development Company, L.P. | Method and system for mining a document containing dirty text |
KR20030023950A (en) * | 2001-09-14 | 2003-03-26 | 김광삼 | The tour service method and system through network |
KR20060034249A (en) * | 2003-06-30 | 2006-04-21 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Enhanced organization and retrieval of digital images |
US7155336B2 (en) * | 2004-03-24 | 2006-12-26 | A9.Com, Inc. | System and method for automatically collecting images of objects at geographic locations and displaying same in online directories |
US7644373B2 (en) | 2006-01-23 | 2010-01-05 | Microsoft Corporation | User interface for viewing clusters of images |
- 2006
  - 2006-10-10 US US11/548,253 patent/US7707208B2/en not_active Expired - Fee Related
- 2007
  - 2007-09-28 JP JP2009532498A patent/JP2010506335A/en not_active Withdrawn
  - 2007-09-28 EP EP07843547A patent/EP2089845A4/en not_active Ceased
  - 2007-09-28 CA CA002661532A patent/CA2661532A1/en not_active Abandoned
  - 2007-09-28 CN CNA2007800373819A patent/CN101523432A/en active Pending
  - 2007-09-28 WO PCT/US2007/079983 patent/WO2008045704A1/en active Application Filing
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4888646A (en) * | 1988-01-29 | 1989-12-19 | Nec Corporation | Optimal image-quality selection apparatus |
US5301018A (en) * | 1991-02-13 | 1994-04-05 | Ampex Systems Corporation | Method and apparatus for shuffling image data into statistically averaged data groups and for deshuffling the data |
US5579471A (en) * | 1992-11-09 | 1996-11-26 | International Business Machines Corporation | Image query system and method |
US20030142123A1 (en) * | 1993-10-25 | 2003-07-31 | Microsoft Corporation | Information pointers |
US5802361A (en) * | 1994-09-30 | 1998-09-01 | Apple Computer, Inc. | Method and system for searching graphic images and videos |
US6240378B1 (en) * | 1994-11-18 | 2001-05-29 | Matsushita Electric Industrial Co., Ltd. | Weighting method for use in information extraction and abstracting, based on the frequency of occurrence of keywords and similarity calculations |
US5642433A (en) * | 1995-07-31 | 1997-06-24 | Neopath, Inc. | Method and apparatus for image contrast quality evaluation |
US6041323A (en) * | 1996-04-17 | 2000-03-21 | International Business Machines Corporation | Information search method, information search device, and storage medium for storing an information search program |
US5870740A (en) * | 1996-09-30 | 1999-02-09 | Apple Computer, Inc. | System and method for improving the ranking of information retrieval results for short queries |
US6115717A (en) * | 1997-01-23 | 2000-09-05 | Eastman Kodak Company | System and method for open space metadata-based storage and retrieval of images in an image database |
US5875446A (en) * | 1997-02-24 | 1999-02-23 | International Business Machines Corporation | System and method for hierarchically grouping and ranking a set of objects in a query context based on one or more relationships |
US6006218A (en) * | 1997-02-28 | 1999-12-21 | Microsoft | Methods and apparatus for retrieving and/or processing retrieved information as a function of a user's estimated knowledge |
US5937422A (en) * | 1997-04-15 | 1999-08-10 | The United States Of America As Represented By The National Security Agency | Automatically generating a topic description for text and searching and sorting text by topic using the same |
US6470307B1 (en) * | 1997-06-23 | 2002-10-22 | National Research Council Of Canada | Method and apparatus for automatically identifying keywords within a document |
US6128613A (en) * | 1997-06-26 | 2000-10-03 | The Chinese University Of Hong Kong | Method and apparatus for establishing topic word classes based on an entropy cost function to retrieve documents represented by the topic words |
US6816850B2 (en) * | 1997-08-01 | 2004-11-09 | Ask Jeeves, Inc. | Personalized search methods including combining index entries for catagories of personal data |
US6167397A (en) * | 1997-09-23 | 2000-12-26 | At&T Corporation | Method of clustering electronic documents in response to a search query |
US6097389A (en) * | 1997-10-24 | 2000-08-01 | Pictra, Inc. | Methods and apparatuses for presenting a collection of digital media in a media container |
US5987456A (en) * | 1997-10-28 | 1999-11-16 | University Of Massachusetts | Image retrieval by syntactic characterization of appearance |
US6134541A (en) * | 1997-10-31 | 2000-10-17 | International Business Machines Corporation | Searching multidimensional indexes using associated clustering and dimension reduction information |
US6256623B1 (en) * | 1998-06-22 | 2001-07-03 | Microsoft Corporation | Network search access construct for accessing web-based search services |
US6748387B2 (en) * | 1998-06-30 | 2004-06-08 | Microsoft Corporation | Flexible keyword searching |
US6321226B1 (en) * | 1998-06-30 | 2001-11-20 | Microsoft Corporation | Flexible keyboard searching |
US20060117003A1 (en) * | 1998-07-15 | 2006-06-01 | Ortega Ruben E | Search query processing to identify related search terms and to correct misspellings of search terms |
US7162468B2 (en) * | 1998-07-31 | 2007-01-09 | Schwartz Richard M | Information retrieval system |
US6363373B1 (en) * | 1998-10-01 | 2002-03-26 | Microsoft Corporation | Method and apparatus for concept searching using a Boolean or keyword search engine |
US6823335B2 (en) * | 1998-10-05 | 2004-11-23 | Canon Kabushiki Kaisha | Information search apparatus and method, and storage medium |
US6549897B1 (en) * | 1998-10-09 | 2003-04-15 | Microsoft Corporation | Method and system for calculating phrase-document importance |
US6473753B1 (en) * | 1998-10-09 | 2002-10-29 | Microsoft Corporation | Method and system for calculating term-document importance |
US6445834B1 (en) * | 1998-10-19 | 2002-09-03 | Sony Corporation | Modular image query system |
US6370527B1 (en) * | 1998-12-29 | 2002-04-09 | At&T Corp. | Method and apparatus for searching distributed networks using a plurality of search devices |
US6728752B1 (en) * | 1999-01-26 | 2004-04-27 | Xerox Corporation | System and method for information browsing using multi-modal features |
US7051019B1 (en) * | 1999-08-17 | 2006-05-23 | Corbis Corporation | Method and system for obtaining images from a database having images that are relevant to indicated text |
US6590586B1 (en) * | 1999-10-28 | 2003-07-08 | Xerox Corporation | User interface for a browser based image storage and processing system |
US6606659B1 (en) * | 2000-01-28 | 2003-08-12 | Websense, Inc. | System and method for controlling access to internet sites |
US20010020238A1 (en) * | 2000-02-04 | 2001-09-06 | Hiroshi Tsuda | Document searching apparatus, method thereof, and record medium thereof |
US6567936B1 (en) * | 2000-02-08 | 2003-05-20 | Microsoft Corporation | Data clustering using error-tolerant frequent item sets |
US7010751B2 (en) * | 2000-02-18 | 2006-03-07 | University Of Maryland, College Park | Methods for the electronic annotation, retrieval, and use of electronic images |
US6643641B1 (en) * | 2000-04-27 | 2003-11-04 | Russell Snyder | Web search engine with graphic snapshots |
US6704729B1 (en) * | 2000-05-19 | 2004-03-09 | Microsoft Corporation | Retrieval of relevant information categories |
US6895552B1 (en) * | 2000-05-31 | 2005-05-17 | Ricoh Co., Ltd. | Method and an apparatus for visual summarization of documents |
US6578032B1 (en) * | 2000-06-28 | 2003-06-10 | Microsoft Corporation | Method and system for performing phrase/word clustering and cluster merging |
US20030061211A1 (en) * | 2000-06-30 | 2003-03-27 | Shultz Troy L. | GIS based search engine |
US6523021B1 (en) * | 2000-07-31 | 2003-02-18 | Microsoft Corporation | Business directory search engine |
US20020035573A1 (en) * | 2000-08-01 | 2002-03-21 | Black Peter M. | Metatag-based datamining |
US20020052894A1 (en) * | 2000-08-18 | 2002-05-02 | Francois Bourdoncle | Searching tool and process for unified search using categories and keywords |
US20020055936A1 (en) * | 2000-08-21 | 2002-05-09 | Kent Ridge Digital Labs | Knowledge discovery system |
US20020042793A1 (en) * | 2000-08-23 | 2002-04-11 | Jun-Hyeog Choi | Method of order-ranking document clusters using entropy data and bayesian self-organizing feature maps |
US6766320B1 (en) * | 2000-08-24 | 2004-07-20 | Microsoft Corporation | Search engine with natural language-based robust parsing for user query and relevance feedback learning |
US7017114B2 (en) * | 2000-09-20 | 2006-03-21 | International Business Machines Corporation | Automatic correlation method for generating summaries for text documents |
US7065520B2 (en) * | 2000-10-03 | 2006-06-20 | Ronald Neville Langford | Method of locating web-pages by utilising visual images |
US7499916B2 (en) * | 2000-10-30 | 2009-03-03 | Microsoft Corporation | Image retrieval systems and methods with semantic and feature based relevance feedback |
US7099860B1 (en) * | 2000-10-30 | 2006-08-29 | Microsoft Corporation | Image retrieval systems and methods with semantic and feature based relevance feedback |
US20080189253A1 (en) * | 2000-11-27 | 2008-08-07 | Jonathan James Oliver | System And Method for Adaptive Text Recommendation |
US6556710B2 (en) * | 2000-12-15 | 2003-04-29 | America Online, Inc. | Image searching techniques |
US6522782B2 (en) * | 2000-12-15 | 2003-02-18 | America Online, Inc. | Image and text searching techniques |
US7047482B1 (en) * | 2001-02-28 | 2006-05-16 | Gary Odom | Automatic directory supplementation |
US7111002B2 (en) * | 2001-03-30 | 2006-09-19 | Microsoft Corporation | Relevance maximizing, iteration minimizing, relevance-feedback, content-based image retrieval (CBIR) |
US7113944B2 (en) * | 2001-03-30 | 2006-09-26 | Microsoft Corporation | Relevance maximizing, iteration minimizing, relevance-feedback, content-based image retrieval (CBIR). |
US6748398B2 (en) * | 2001-03-30 | 2004-06-08 | Microsoft Corporation | Relevance maximizing, iteration minimizing, relevance-feedback, content-based image retrieval (CBIR) |
US6847733B2 (en) * | 2001-05-23 | 2005-01-25 | Eastman Kodak Company | Retrieval and browsing of database images based on image emphasis and appeal |
US6775666B1 (en) * | 2001-05-29 | 2004-08-10 | Microsoft Corporation | Method and system for searching index databases |
US20050108200A1 (en) * | 2001-07-04 | 2005-05-19 | Frank Meik | Category based, extensible and interactive system for document retrieval |
US7349899B2 (en) * | 2001-07-17 | 2008-03-25 | Fujitsu Limited | Document clustering device, document searching system, and FAQ preparing system |
US20030023600A1 (en) * | 2001-07-30 | 2003-01-30 | Kabushiki Kaisha | Knowledge analysis system, knowledge analysis method, and knowledge analysis program product |
US20030063131A1 (en) * | 2001-10-03 | 2003-04-03 | Tiger Color Inc. | Picture moving and positioning method in image processing software |
US20030144994A1 (en) * | 2001-10-12 | 2003-07-31 | Ji-Rong Wen | Clustering web queries |
US20030191672A1 (en) * | 2001-12-21 | 2003-10-09 | Kendall Errol O. | System for appraising life insurance and annuities |
US20030126235A1 (en) * | 2002-01-03 | 2003-07-03 | Microsoft Corporation | System and method for performing a search and a browse on a query |
US20030140033A1 (en) * | 2002-01-23 | 2003-07-24 | Matsushita Electric Industrial Co., Ltd. | Information analysis display device and information analysis display program |
US7430566B2 (en) * | 2002-02-11 | 2008-09-30 | Microsoft Corporation | Statistical bigram correlation model for image retrieval |
US6901411B2 (en) * | 2002-02-11 | 2005-05-31 | Microsoft Corporation | Statistical bigram correlation model for image retrieval |
US20040015461A1 (en) * | 2002-07-13 | 2004-01-22 | Lo James Ting-Ho | Risk-averting method of training neural networks and estimating regression models |
US20040044469A1 (en) * | 2002-09-03 | 2004-03-04 | Thorsten Bender | Displaying road maps |
US6944612B2 (en) * | 2002-11-13 | 2005-09-13 | Xerox Corporation | Structured contextual clustering method and system in a federated search engine |
US20040111438A1 (en) * | 2002-12-04 | 2004-06-10 | Chitrapura Krishna Prasad | Method and apparatus for populating a predefined concept hierarchy or other hierarchical set of classified data items by minimizing system entrophy |
US20040225667A1 (en) * | 2003-03-12 | 2004-11-11 | Canon Kabushiki Kaisha | Apparatus for and method of summarising text |
US20050022106A1 (en) * | 2003-07-25 | 2005-01-27 | Kenji Kawai | System and method for performing efficient document scoring and clustering |
US20050097475A1 (en) * | 2003-09-12 | 2005-05-05 | Fuji Photo Film Co., Ltd. | Image comparative display method, image comparative display apparatus, and computer-readable medium |
US20050144158A1 (en) * | 2003-11-18 | 2005-06-30 | Capper Liesl J. | Computer network search engine |
US7287012B2 (en) * | 2004-01-09 | 2007-10-23 | Microsoft Corporation | Machine-learned approach to determining document relevance for search over large electronic collections of documents |
US20050165841A1 (en) * | 2004-01-23 | 2005-07-28 | Microsoft Corporation | System and method for automatically grouping items |
US20070185866A1 (en) * | 2004-02-13 | 2007-08-09 | Evans Lynne M | System and method for arranging clusters in a display by theme |
US20050188326A1 (en) * | 2004-02-25 | 2005-08-25 | Triworks Corp. | Image assortment supporting device |
US7158878B2 (en) * | 2004-03-23 | 2007-01-02 | Google Inc. | Digital mapping system |
US20060004752A1 (en) * | 2004-06-30 | 2006-01-05 | International Business Machines Corporation | Method and system for determining the focus of a document |
US20060026152A1 (en) * | 2004-07-13 | 2006-02-02 | Microsoft Corporation | Query-based snippet clustering for search result grouping |
US20070198182A1 (en) * | 2004-09-30 | 2007-08-23 | Mona Singh | Method for incorporating images with a user perspective in navigation |
US7492921B2 (en) * | 2005-01-10 | 2009-02-17 | Fuji Xerox Co., Ltd. | System and method for detecting and ranking images in order of usefulness based on vignette score |
US20060242178A1 (en) * | 2005-04-21 | 2006-10-26 | Yahoo! Inc. | Media object metadata association and ranking |
US20070118800A1 (en) * | 2005-09-07 | 2007-05-24 | Moore Michael R | Systems and methods for dynamically integrated capture, collection, authoring, presentation and production of digital content |
US20070073748A1 (en) * | 2005-09-27 | 2007-03-29 | Barney Jonathan A | Method and system for probabilistically quantifying and visualizing relevance between two or more citationally or contextually related data objects |
US20070115373A1 (en) * | 2005-11-22 | 2007-05-24 | Eastman Kodak Company | Location based image classification with map segmentation |
US20070174269A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | Generating clusters of images for search results |
US20070174865A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Normalizing content ratings of content forums |
US20070209025A1 (en) * | 2006-01-25 | 2007-09-06 | Microsoft Corporation | User interface for viewing images |
US20070174872A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Ranking content based on relevance and quality |
US20070244925A1 (en) * | 2006-04-12 | 2007-10-18 | Jean-Francois Albouze | Intelligent image searching |
US20080086686A1 (en) * | 2006-10-10 | 2008-04-10 | Microsoft Corporation | User interface for displaying images of sights |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10120883B2 (en) | 2006-01-23 | 2018-11-06 | Microsoft Technology Licensing, Llc | User interface for viewing clusters of images |
US20070174269A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | Generating clusters of images for search results |
US7644373B2 (en) | 2006-01-23 | 2010-01-05 | Microsoft Corporation | User interface for viewing clusters of images |
US7725451B2 (en) | 2006-01-23 | 2010-05-25 | Microsoft Corporation | Generating clusters of images for search results |
US9396214B2 (en) | 2006-01-23 | 2016-07-19 | Microsoft Technology Licensing, Llc | User interface for viewing clusters of images |
US20070174790A1 (en) * | 2006-01-23 | 2007-07-26 | Microsoft Corporation | User interface for viewing clusters of images |
US20070174872A1 (en) * | 2006-01-25 | 2007-07-26 | Microsoft Corporation | Ranking content based on relevance and quality |
US7836050B2 (en) | 2006-01-25 | 2010-11-16 | Microsoft Corporation | Ranking content based on relevance and quality |
US10614366B1 (en) | 2006-01-31 | 2020-04-07 | The Research Foundation for the State University o | System and method for multimedia ranking and multi-modal image retrieval using probabilistic semantic models and expectation-maximization (EM) learning |
US8484222B1 (en) * | 2006-12-01 | 2013-07-09 | Google Inc. | Method and apparatus for identifying a standalone location |
US9092409B2 (en) * | 2007-10-11 | 2015-07-28 | Google Inc. | Smart scoring and filtering of user-annotated geocoded datasets |
US20110078575A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue-based contextual map generation |
US8281246B2 (en) | 2009-09-29 | 2012-10-02 | Microsoft Corporation | Travelogue-based contextual map generation |
US8275546B2 (en) | 2009-09-29 | 2012-09-25 | Microsoft Corporation | Travelogue-based travel route planning |
US8977632B2 (en) | 2009-09-29 | 2015-03-10 | Microsoft Technology Licensing, Llc | Travelogue locating mining for travel suggestion |
US20110077848A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue-based travel route planning |
US20110078139A1 (en) * | 2009-09-29 | 2011-03-31 | Microsoft Corporation | Travelogue locating mining for travel suggestion |
US8572076B2 (en) | 2010-04-22 | 2013-10-29 | Microsoft Corporation | Location context mining |
US8676807B2 (en) | 2010-04-22 | 2014-03-18 | Microsoft Corporation | Identifying location names within document text |
WO2012058036A1 (en) * | 2010-10-28 | 2012-05-03 | Eastman Kodak Company | Organizing nearby picture hotspots |
US9317532B2 (en) | 2010-10-28 | 2016-04-19 | Intellectual Ventures Fund 83 Llc | Organizing nearby picture hotspots |
US20140073361A1 (en) * | 2010-10-28 | 2014-03-13 | Intellectual Ventures Fund 83 Llc | Method of locating nearby picture hotspots |
US9100791B2 (en) * | 2010-10-28 | 2015-08-04 | Intellectual Ventures Fund 83 Llc | Method of locating nearby picture hotspots |
US9214193B2 (en) * | 2011-03-08 | 2015-12-15 | Sony Corporation | Processing apparatus and method for determining and reproducing a number of images based on input path information |
US20120229503A1 (en) * | 2011-03-08 | 2012-09-13 | Sony Corporation | Reproduction processing apparatus, imaging apparatus, reproduction processing method, and program |
US9734167B2 (en) * | 2011-09-21 | 2017-08-15 | Horsetooth Ventures, LLC | Interactive image display and selection system |
US20170286453A1 (en) * | 2011-09-21 | 2017-10-05 | Horsetooth Ventures, LLC | Interactive Image Display and Selection System |
US11068532B2 (en) * | 2011-09-21 | 2021-07-20 | Horsetooth Ventures, LLC | Interactive image display and selection system |
US10459967B2 (en) * | 2011-09-21 | 2019-10-29 | Horsetooth Ventures, LLC | Interactive image display and selection system |
US20130069990A1 (en) * | 2011-09-21 | 2013-03-21 | Horsetooth Ventures, LLC | Interactive Image Display and Selection System |
US9460160B1 (en) | 2011-11-29 | 2016-10-04 | Google Inc. | System and method for selecting user generated content related to a point of interest |
US20140313140A1 (en) * | 2012-01-10 | 2014-10-23 | Canon Kabushiki Kaisha | Operation reception device and method for receiving operation on page image, storage medium, and image forming apparatus for use with operation reception device |
CN103678429A (en) * | 2012-09-26 | 2014-03-26 | 阿里巴巴集团控股有限公司 | Recommendation method and device of tour routes |
US9594445B2 (en) * | 2012-10-01 | 2017-03-14 | Canon Kabushiki Kaisha | Operation reception device and method for receiving operation on page image, storage medium, and image forming apparatus for use with operation reception device |
CN103064924A (en) * | 2012-12-17 | 2013-04-24 | 浙江鸿程计算机系统有限公司 | Travel destination situation recommendation method based on geotagged photo excavation |
WO2015108761A1 (en) * | 2014-01-16 | 2015-07-23 | Microsoft Technology Licensing, Llc | Discovery of viewsheds and vantage points by mining geo-tagged data |
US9471695B1 (en) * | 2014-12-02 | 2016-10-18 | Google Inc. | Semantic image navigation experiences |
CN105117400A (en) * | 2015-07-08 | 2015-12-02 | 百度在线网络技术(北京)有限公司 | Information searching method and system |
US20220092253A1 (en) * | 2020-09-18 | 2022-03-24 | Fujifilm Business Innovation Corp. | Information processing apparatus and non-transitory computer readable medium |
Also Published As
Publication number | Publication date |
---|---|
EP2089845A1 (en) | 2009-08-19 |
CA2661532A1 (en) | 2008-04-17 |
WO2008045704A1 (en) | 2008-04-17 |
CN101523432A (en) | 2009-09-02 |
EP2089845A4 (en) | 2012-08-15 |
US7707208B2 (en) | 2010-04-27 |
JP2010506335A (en) | 2010-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7707208B2 (en) | | Identifying sight for a location |
US7657504B2 (en) | | User interface for displaying images of sights |
US11977554B2 (en) | | Methods of and systems for searching by incorporating user-entered information |
US9940398B1 (en) | | Customization of search results for search queries received from third party sites |
US7725451B2 (en) | | Generating clusters of images for search results |
US9002894B2 (en) | | Objective and subjective ranking of comments |
US7783644B1 (en) | | Query-independent entity importance in books |
US20180004850A1 (en) | | Method for inputting and processing feature word of file content |
JP2017041284A (en) | | Device, method, program, and system for providing purpose-oriented application on result page of search engine |
KR20080091821A (en) | | Automated tool for human assisted mining and capturing of precise results |
KR20070039072A (en) | | Results based personalization of advertisements in a search engine |
US20090006324A1 (en) | | Multiple monitor/multiple party searches |
CN109952571B (en) | | Context-based image search results |
JP5313295B2 (en) | | Document search service providing method and system |
KR101734970B1 (en) | | System and method of providing search result according to search intention of user |
JP2009205588A (en) | | Page search system and program |
Bhardwaj et al. | | Structure and Functions of Metasearch Engines: An Evaluative Study. |
KR101180371B1 (en) | | Folksonomy-based personalized web search method and system for performing the method |
Zerr et al. | | GuideMe! The World of sights in your pocket |
Adams et al. | | Creating Your Own Web-Deployed Street Map Using Open Source Software and Free Data |
Nakanishi et al. | | Dynamic cross-domain link creation for interconnection of heterogeneous knowledge bases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JING, FENG;ZHANG, LEI;MA, WEI-YING;REEL/FRAME:019315/0903; Effective date: 20070316 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034542/0001; Effective date: 20141014 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); Year of fee payment: 8 |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20220427 |