WO2019053441A1 - Location pinpointing - Google Patents

Location pinpointing

Info

Publication number: WO2019053441A1
Authority: WO (WIPO, PCT)
Prior art keywords: location, database, user, image, street
Application number: PCT/GB2018/052611
Other languages: French (fr)
Inventors: Rafi Bhatti, Sabih Chaudhry, Andrew Mann
Original assignee: Geovistrack Ltd
Application filed by Geovistrack Ltd
Publication of WO2019053441A1

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
                        • G06F 16/29: Geographical information databases
                    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
            • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q 10/00: Administration; Management
                    • G06Q 10/08: Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
                        • G06Q 10/083: Shipping
                            • G06Q 10/0835: Relationships between shipper or supplier and carriers
                                • G06Q 10/08355: Routing methods
        • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
            • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
                • G09B 29/00: Maps; Plans; Charts; Diagrams, e.g. route diagram
                    • G09B 29/10: Map spot or coordinate position indicators; Map reading aids
                        • G09B 29/106: Map spot or coordinate position indicators; Map reading aids using electronic means

Definitions

  • In the illustrated embodiment, the graphical user interface comprises a second submit button 48 which, when pressed, copies the image displayed in the second display area 46 into a third display area 50, which is essentially a cache for onward transmission, as shall be explained hereinbelow.
  • Where no street-level photography corresponding to the inputted location exists in the street-level photography database 24, the system backend processor 16 will generate a code, which is transmitted, via the network 28, to the terminal computer 14, and an error message is displayed in the second display area 46.
  • The error message prompts a user (not shown) of the terminal computer 14 to begin an upload procedure, whereby the user can upload their own photography of the location, if desired.
  • A third input button 52 is provided within the graphical user interface 38, which, when clicked, opens a dialogue box via which a user (not shown) can select a photograph stored either on the terminal computer, or elsewhere, and upload it to the system 10. Upon uploading, the image is copied into the third display area 50, ready for onward transmission.
  • The user can then press a further submit button 54, which transmits a copy of the image or images contained in the third display area 50 to the system backend 12, again via the network.
  • The received images can optionally be stored in a further database 56 of the system backend, or they can be sent on to a third party (not shown) via the network 28.
  • The graphical user interface 38 displayed on the terminal computer 14 has a first input field 40 into which a user (not shown) can type, or otherwise input, a target location 60.
  • Upon pressing the submit button 42, the target location 60 is transmitted to the system backend, as previously described.
  • Once the system backend has obtained a corresponding location match from the location database 18, cross-referenced it, using the correlation database 20, to a map location in the cartography database 22, and obtained corresponding street-level photography from the street-level photography database 24, the corresponding map is displayed in the first display area 44, and the corresponding street-level photography is displayed alongside it in the second display area 46, as shown in Figure 3 of the drawings.
  • If the location shown in the second display area 46 is, in fact, the location that the user (not shown) intended, then the user can simply accept this. However, in many cases, a user will wish to pinpoint a particular location within that image exactly, and a procedure for that is shown in Figure 4 of the drawings.
  • A user (not shown) is able to touch or click anywhere within the second display area 46 to place a marker 62, thereby pinpointing an exact location within the street-level photography image displayed in the second display area.
  • When the user (not shown) is satisfied, as shown in Figure 5, they can press or otherwise click on the copy/send button 48 on the graphical user interface 38, whereupon the image containing the marker 62 is copied into the third display area 50.
  • The system backend 12 comprises a further database 56, which stores the submitted image along with the inputted location data 60.
  • Thus, if the same location data 60 is inputted again at a later time, the system backend processor 16 may be able to return automatically that same previously-pinpointed location in the first instance, thereby obviating the need for the user to repeat this process subsequently.
  • In some cases, the map location shown in the first display area 44 is somewhat inaccurate. This can occur where, for example, only a postcode has been entered, which may cover a fairly large area of ground; in that case, the map displayed in the first display area 44 may not be accurately centred on the intended inputted location 60, and some user intervention may be required.
  • As previously described, the back-end processor transmits an excerpt from the cartography database for display in the first display area 44 of the graphical user interface 38.
  • However, what is actually displayed in the graphical user interface 38 is, in fact, a slightly cropped-down version of the excerpt obtained by the back-end processor.
  • The user is able to drag the map appearing in the first display area 44 and, upon so doing, a "crosshair/cursor" 66 is automatically superimposed over the first display area 44 of the graphical user interface 38.
  • The user is therefore able to pan the map until the crosshair 66 lies on the location of interest.
  • The terminal computer 14 then automatically sends back to the system backend 12 an updated location, based on the position of the crosshair 66 relative to the map displayed in the first display area 44.
  • The updated location 68 can then be displayed in the location input field 40 of the graphical user interface, and the accompanying street-level photography appearing in the second display area 46 can be updated to correspond.
  • As before, a user (not shown) is able to add a pinpoint marker 62 to the street-level photograph appearing in the second display area and then save that in the third display area 50 by clicking on the submit button 48. Again, the image from the third display area 50 can be transmitted to the system backend 12 by pressing on the send button 54.
  • Where no suitable street-level photography is available (the error condition depicted in Figure 7), the graphical user interface will display, in the second display area 46, an error message prompting the user (not shown) to upload their own imagery.
  • An upload button 52 is provided for this purpose, which, when clicked, opens a dialogue box 70 via which a user can upload an image of their own to the system, which is displayed in the third display area 50 and which can then be transmitted to the system backend by clicking on the send button 54, as previously described.
  • One possible advantage of a user being able to upload their own imagery is that if the street-level photography is incomplete, or cannot be manoeuvred to the correct position, as may happen if, say, a house is located down a narrow passageway off a main street, then the user is able to upload their own imagery so that a particular location can nevertheless be accurately pinpointed.
  • Referring to Figure 8 of the drawings, a similar system to that described previously is shown, albeit now the system backend has an API 100 to which various third-party service providers' web servers 102 can connect, for example, via the internet 28.
  • The GUI 38 now forms part of, or is incorporated into, the overall GUI 138 of a third-party service provider's web interface.
  • The overall GUI 138 comprises a "shopping cart" area 104, which displays, for example, a list of items to be purchased; the GUI 38 previously described, which is now part of the "delivery options" section of the overall GUI 138; and a payment area 106, which is where a user (not shown) enters their contact/payment details.
  • Where the user has an account with the third-party service provider, logging in could cause various parts of the overall GUI 138 to auto-populate, for example, by loading default payment settings in the payment area 106 and/or a default delivery address in the location input area 40 of the GUI 38.
  • The location data, including the captured images in the third display area 50, can be sent to the third-party provider and then, if desired, passed on to a logistics provider 110.
  • The logistics provider 110 supplies its delivery agents (not shown) with tablet-type computers, smartphones or the like 114, upon which, for each delivery, the delivery location data is displayed in text form (e.g. the address), along with copies of the images from the third display area 50 of the user's terminal device 14.
  • The delivery agents' devices 114 are GPS-enabled, such that the address data is automatically transposed into the GPS system, enabling the delivery agent to navigate, via GPS, to the final delivery radius in the usual way. Now, however, because the delivery agent's device 114 also displays copies of the images from the third display area 50, the delivery operative will, hopefully, be able to effect deliveries more efficiently, knowing the exact delivery point, as opposed to having to search the final delivery radius manually as they had to previously.
  • In the system of Figure 9, all of the "client" devices, namely the third-party service provider's web server 102, the terminal computer 14, the logistics provider 110 and the delivery agent's device 114, communicate with the system backend 12 via, for example, the internet 28 and the API layer 100.
  • The third-party provider's web server 102 serves content to its customers, which is displayed in the display area 138 of the terminal device 14.
  • The customer's shopping cart 104 and payment input field 106 are provided by the third-party web server 102 in the usual way.
  • The delivery address field 38, however, is provided by the system backend 12 of the invention.
  • The API layer 100 enables the third-party web server to export a particular customer's delivery address data, which is then searched by the system backend, in the manner previously described, to yield a predicted delivery location; this is then sent, via the API, and used to populate the location input field 40 on the terminal device 14.
  • The user of the terminal device 14 can then interact with the system 10 as previously described to pinpoint a particular location precisely, and that committed location (that is to say, the location in place when button 54 is clicked) is passed back to the system backend 12 via the API layer 100.
  • The third-party web server 102 can capture a copy of that data if required, or not, as the case may be.
  • The logistics provider 110 can query the system backend 12 via the API layer 100 for the delivery location corresponding to a particular consignment.
  • The user-inputted data, for example the specified delivery location in field 40 and/or the imagery uploaded to or otherwise stored in the third display area 50, can then be retrieved from the system backend, via the API, and passed on to the delivery agent's terminal device 114 (a sketch of these API interactions is given after this list).
  • One of the advantages of the system shown in Figure 9, versus that shown in Figure 8, is the centralisation of data, namely using the system backend 12 to handle all of the address/location data for all of the parties concerned.
  • This particular configuration avoids the need for the third-party web server 102, the logistics provider 110, the user terminal 14, or indeed the delivery agent's terminal 114 to store or process any location data at all.
  • A further possible advantage arises in relation to data protection, where, for example, the user-imported imagery shown in the third display area 50 may comprise personally identifiable information, such as car number plates, people in the image, etc.; centralising that imagery in the system backend 12 means that the other parties need not hold it themselves.
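By way of illustration only, the API interactions described in the bullets above (address export by the third-party web server, commitment of the pinpointed location from the terminal, and later retrieval by the logistics provider) might be sketched as follows. Every endpoint path, field name and identifier here is an assumption for the sake of the example, not something specified in the patent.

```typescript
// Illustrative sketch of the Figure 9 interactions through the API layer 100.
// Endpoint paths, field names and identifiers are assumptions, not the patent's.
interface CommittedPinpoint {
  consignmentId: string;
  location: string;              // the committed contents of field 40
  imageUrls: string[];           // the imagery from the third display area 50
}

// 1. The third-party web server exports a customer's stored delivery address so
//    the backend can pre-populate the location input field on the terminal.
async function exportCustomerAddress(apiBase: string, consignmentId: string, address: string): Promise<void> {
  await fetch(`${apiBase}/v1/consignments/${consignmentId}/address`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ address }),
  });
}

// 2. When the user clicks button 54, the terminal commits the pinpointed
//    location and imagery back to the system backend.
async function commitPinpoint(apiBase: string, pinpoint: CommittedPinpoint): Promise<void> {
  await fetch(`${apiBase}/v1/consignments/${pinpoint.consignmentId}/pinpoint`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pinpoint),
  });
}

// 3. The logistics provider later retrieves the committed data for a particular
//    consignment and passes it on to the delivery agent's device 114.
async function fetchPinpointForDelivery(apiBase: string, consignmentId: string): Promise<CommittedPinpoint> {
  const response = await fetch(`${apiBase}/v1/consignments/${consignmentId}/pinpoint`);
  if (!response.ok) {
    throw new Error(`No pinpoint stored for consignment ${consignmentId}`);
  }
  return (await response.json()) as CommittedPinpoint;
}
```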

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Databases & Information Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Quality & Reliability (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Mathematical Physics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Remote Sensing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A location pinpointing system (10) is disclosed, which assists in identifying a precise location using an overlay on cartography or other data. A user interacts with a back end system (12) via a user terminal (14) with a GUI (38), and a location can be obtained from the back end system (12). However, where mapping or other data are inaccurate, or missing certain elements, such as the actual door in a shared building, or a particular delivery point on a site, the user is able to supplement the system back end (12) with, for example, a description and/or a photograph and/or a GPS coordinate of the exact location. The user is also able, via the user terminal (14), to add a pinpoint (62) to the displayed data, and to share that pinpoint (62) with the system back end (12). The pinpoint (62) can be shared with other user terminals (114), such as those used by delivery agents.

Description

LOCATION PINPOINTING
This invention relates to location pinpointing methods and systems.
It is commonplace nowadays, when placing an order for a product or service online, to enter a delivery address. So that the goods/services are delivered to the correct location, a user is typically prompted to enter a delivery location, such as a post code and building number, a full address or other location data. Where the user has an account with a particular vendor, their delivery addresses are often stored in the vendor's database, which offers the additional convenience of permitting the user to select an address from a drop-down list, or a grid selection user interface, of previously-inputted delivery locations.
Similarly, when a user wishes to go to a physical location, which is advertised online, a user is often presented with an address on the web site in question and this is often accompanied by a map showing that location. For convenience, the map is often auto-generated using a third-party database/software (such as Google® Maps), whereby the web site owner simply configures an inline frame of their web site to display third-party map imagery centred on the address specified by the web site owner. Usually, the address is highlighted by a "pin" graphic overlaid onto the map image, such that visitors to the web site can pinpoint the address in question.
In both cases, there is a link between address data - typically post code and postal unit (building number, building name) - and corresponding metadata associated with the map data. Thus, a named street in the map data can be cross-referenced to a specified street name. In addition, in most cases, the map metadata contains other postal unit identification information, such as "odd numbers on the left from 1 to 59; even numbers on the right from 2 to 58", such that the location of a specific building in that street can be estimated.
Even though the resolution and metadata content of map data is continually improving, in many areas of the world, and in particular outside of major cities and conurbations, the ability of an exact postal unit to be accurately identified and pinpointed in an online map using a combination of 'street name + postal unit' or 'post code + postal unit' is somewhat limited. This is particularly the case in rural locations, where a single post code may cover several square kilometres, or where there is no systematic building numbering convention (e.g. farms identified by a combination of 'village + farm name', rather than by 'street name + house number'). Even in suburban areas, straightforward interpolation of map metadata to pinpoint an exact address can be problematic, for example along curved roads where there may be more odd-numbered houses on one side of the street than even-numbered houses on the opposite side of the street. Thus, for example, approximating that house number 25 is half-way along a road numbered from 1 to 51 may not always be correct, especially where, say, there is a break in a row of houses to accommodate a road junction, a park, a bridge, a shop, a school, etc. somewhere along the street in question.
Inaccuracies in map data, such as those outlined above, can make finding specific locations quite difficult. Thus, a user can follow map data (especially where it is transposed into, say, a GPS system) up to a certain point, but invariably finding the exact address/location (to the "front-door" level) usually requires some degree of human intervention.
Therefore, even though mapping (and GPS) systems can be used (in combination) to facilitate finding specific locations, there is almost always a need for some level of human intervention within a "final delivery radius" of the target location. Depending on the resolution and accuracy of map metadata, this final delivery radius can be as small as a few metres, but in other cases, it can be quite large, say up to a kilometre.
A need therefore exists for a system and/or method which reduces the size of the final delivery radius, and in particular, a system and/or method which reduces the size of the final delivery radius to a level that reduces, significantly reduces, or avoids the need for human intervention in identifying an exact location. This invention aims to provide such a solution and/or to address one or more of the problems identified above.
Known systems are described in the following published patent applications:
US2006/0271287 [GOLD]; US2007/0273758 [MENDOZA]; and US2015/0170615 [SIEGEL]. Various aspects of the invention are set forth in the appended independent claims. Various optional or preferred features of the invention are set forth in the appended dependent claims.
In summary, the invention provides a method and system for a user to be able to input a target location, and for that location to be displayed simultaneously in two formats within the user's GUI, namely, a "map view" and a "street-level photography view". From this, the user is able to move either view so as to centre the displayed street-level photography view on the exact target location. Then, by pressing a button, that centred, street-level photography view is captured and displayed in a third display area. Now, the user is able to send to the backend not only the inputted target address, but also a street-level photography view of the exact location. Thus, if a second person wishes to go to the specified target location, they can obtain the address, and a street-level photography view of the exact target location, from the system backend. This removes some of the guesswork that may otherwise be needed by the second person because the second person can compare what they see with the street-level photography view submitted by the user, to confirm the exact target location.
In other cases, where there is no street-level photography view corresponding to an inputted target location, the system notifies the user and prompts them to upload their own image of the exact target location. Now, if a second person wishes to go to the specified target location, they can obtain the address, and a user-submitted street-level photography view of the exact target location from the system backend. Again, this removes some of the guesswork that may otherwise be needed by the second person because the second person can compare what they see with the street-level photography view submitted by the user, to confirm the exact target location. The ability for a user to be able to upload their own photography, and optionally for the system backend to be able to store that user-uploaded imagery, not only improves the system backend by providing data where data previously did not exist, but it also assists the user going forward because once the user has uploaded their own street-level photography/imagery once, if the same/similar address is re-inputted at a later point in time, the system backend may be able to output the user-uploaded imagery where conventional street-level photography from the street-level photography database does not exist. Suitably, the GUI displays in the first image area, overlaid on the cropped image of the map, a marker corresponding to the best match location. The marker could be a pin-type image or a crosshair overlaid on the map, which assists the user in checking that the system backend has returned an accurate result.
In addition, the GUI may display in the second image area, overlaid on the cropped photographic image, a marker corresponding to the best match location. Ideally, the default position of the marker in the second display area of the GUI corresponds to the user's intended target location. However, the GUI may also allow the user to move the marker overlaid on the second display area, and/or to drag and update the image displayed in the second display area so that the overlaid marker does, indeed, correspond to the intended target location. This feature may be particularly advantageous when attempting to pinpoint a particular entrance of a multi-entrance property: for example, the user may be able to move the overlaid marker so that it points to a specific location, e.g. a back door, delivery drop-box, secure location, etc., within the displayed image. Furthermore, because much of the currently-available street-level photography is discontinuous, that is to say, comprises a series of images taken at, say 10m or 20m intervals along a street, if one of the images does not exactly correspond with a particular target location, or if the image is obscured by traffic, a hedge, etc., then it can be difficult to pinpoint an exact target location using this data. The invention thus permits a user of the system to upload/add their own image/photograph of the exact target location to address this issue.
The human interface device (HID) of the user terminal suitably permits the cropped image of the map from the cartography database in the first image area to be dragged. Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor may be adapted to automatically fill and display regions initially lying outside the first image area with portions of the cropped image of the map from the cartography database that were not initially displayed. This can provide a smoother user interface experience because rather than having to download new map imagery and refresh the displayed image, the imagery is already there, albeit cropped to the boundary of the first image area. Now, the system simply displays a different crop boundary within the already-downloaded map imagery, thus improving the user interface experience.
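By way of illustration only, the following TypeScript sketch shows one way a terminal could implement this drag behaviour: the downloaded map image is larger than the first image area, so dragging simply moves the crop window and no new imagery needs to be fetched. The names (`MapViewport`, the example dimensions) are assumptions, not taken from the patent.

```typescript
// Illustrative sketch: the terminal keeps a map image larger than the visible
// first image area, so dragging only moves the crop window (no re-download).
interface Size { width: number; height: number; }

class MapViewport {
  private offsetX = 0;   // top-left of the visible crop, in image pixels
  private offsetY = 0;

  constructor(private image: Size, private view: Size) {
    // Start with the crop centred on the downloaded image.
    this.offsetX = (image.width - view.width) / 2;
    this.offsetY = (image.height - view.height) / 2;
  }

  // Called while the user drags; dx/dy are the drag deltas in pixels.
  pan(dx: number, dy: number): void {
    this.offsetX = Math.min(Math.max(this.offsetX - dx, 0), this.image.width - this.view.width);
    this.offsetY = Math.min(Math.max(this.offsetY - dy, 0), this.image.height - this.view.height);
  }

  // The crop rectangle to display; regions initially outside the first image
  // area are simply revealed from the already-downloaded image.
  cropRect(): { x: number; y: number; width: number; height: number } {
    return { x: this.offsetX, y: this.offsetY, ...this.view };
  }
}

// Example: a 1200x1200 px download shown through a 600x400 px first image area.
const viewport = new MapViewport({ width: 1200, height: 1200 }, { width: 600, height: 400 });
viewport.pan(-150, 80);
console.log(viewport.cropRect());
```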
Suitably, the cropped image of a map from the cartography database that is transmitted from the system backend to the user terminal further comprises metadata corresponding to the cropped portion of the map. Thus, the metadata corresponding to the displayed map image is already downloaded and ready for use, which can reduce latency where otherwise that data may need to be fetched in a separate step.
Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is ideally adapted to display, at a position substantially centred on the first image area, a temporary marker overlaid on the first image area. This may provide the user of the terminal with a "rough starting point" for subsequent dragging or editing procedures.
Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is suitably adapted to identify a new location, the new location corresponding to the position of the temporary marker relative to the metadata corresponding to the cropped portion of the map, to update the location input field to contain the new location, to automatically transmit the new location to the system backend and to force a requery. Thus, when the user interacts with the terminal, the data is updated, which can be convenient.
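Again purely as a sketch, and assuming that the metadata sent with the cropped map includes its geographic bounds and that a simple linear mapping is adequate over a small crop, the conversion from the centred temporary marker to a new location, followed by a requery, might look like the following. The `/pinpoint` endpoint and field names are illustrative assumptions.

```typescript
// Illustrative sketch: convert the temporary centre marker's pixel position
// into a latitude/longitude using bounds metadata sent with the cropped map,
// then requery the backend. Names and the /pinpoint endpoint are assumptions.
interface CropMetadata {
  north: number; south: number; east: number; west: number; // geographic bounds
  widthPx: number; heightPx: number;                         // cropped image size
}

function pixelToLatLng(xPx: number, yPx: number, meta: CropMetadata): { lat: number; lng: number } {
  const lng = meta.west + (xPx / meta.widthPx) * (meta.east - meta.west);
  const lat = meta.north - (yPx / meta.heightPx) * (meta.north - meta.south);
  return { lat, lng };
}

async function onDragEnd(meta: CropMetadata, locationField: { value: string }): Promise<void> {
  // The temporary marker sits at the centre of the first image area.
  const { lat, lng } = pixelToLatLng(meta.widthPx / 2, meta.heightPx / 2, meta);
  locationField.value = `${lat.toFixed(6)},${lng.toFixed(6)}`;   // update the location input field

  // Force a requery so the map and street-level views follow the new location.
  await fetch("/pinpoint", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ location: locationField.value }),
  });
}
```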
The GUI permits a user, using the HID, to overlay a user-inputted marker on the second display area. The user-inputted marker may be draggable, by the user interacting with the HID, relative to the second display area.
The GUI may further comprise a third display area. The third display area may be used for any purpose, but it may in particular be used for displaying user-uploaded or user-imported data, such as comments, instructions or user-generated or user-captured imagery.
The GUI may further comprise a second input button, which when pressed by a user interacting with the HID, opens an input form in the GUI via which, a user can import a user image, which user image is displayed in the third display area of the GUI. This conveniently facilitates the interaction of the user with the system with regard to uploading user content to the system. The input form may, for example, comprise a terminal directory/file browser or a search facility.
The GUI may further comprise a third input button, which when pressed by a user interacting with the HID, an image of the second display area is captured and displayed in the third display area. When the third input button is pressed by a user interacting with the HID, an image of the second display area incorporating the overlaid user-inputted marker can be captured and displayed in the third display area.
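A hedged sketch of that capture step in browser terms: the street-level image and the overlaid user-inputted marker are drawn onto a canvas, and the result is placed into the third display area. The element ids and marker styling are assumptions introduced only for illustration.

```typescript
// Illustrative browser-side sketch: when the third input button is pressed,
// draw the street-level image plus the user-placed marker onto a canvas and
// copy the result into the third display area. Element ids are assumptions.
function captureSecondDisplayArea(markerX: number, markerY: number): void {
  const streetImg = document.getElementById("second-display-area") as HTMLImageElement;
  const canvas = document.createElement("canvas");
  canvas.width = streetImg.naturalWidth;
  canvas.height = streetImg.naturalHeight;

  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(streetImg, 0, 0);

  // Draw the overlaid user-inputted marker at its current position.
  ctx.fillStyle = "red";
  ctx.beginPath();
  ctx.arc(markerX, markerY, 8, 0, 2 * Math.PI);
  ctx.fill();

  // Place the captured image into the third display area for onward transmission.
  const captured = new Image();
  captured.src = canvas.toDataURL("image/png");
  document.getElementById("third-display-area")?.appendChild(captured);
}
```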
The GUI may further comprise a fourth input button, which when pressed by a user interacting with the HID, the location currently entered in the location input field may be stored in a memory of the user terminal. When the fourth input button is pressed by a user interacting with the HID, for example, all images currently displayed in the third display area can be stored in a memory of the user terminal.
Additionally or alternatively, when the fourth input button is pressed by a user interacting with the HID, the terminal processor can be adapted to transmit, via the terminal data communications interface, the location currently entered in the location input field to the system backend.
Additionally or alternatively, when the fourth input button is pressed by a user interacting with the HID, the terminal processor can be adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to the system backend.
In a preferred embodiment of the invention, upon receipt of the location and/or the images sent to it by the terminal device, the system backend can be adapted to retransmit the location and/or the images to an external server. This can occur, for example, when the fourth input button is pressed by a user interacting with the HID, the terminal processor being adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to an external server.
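The transmission triggered by the fourth input button might, for example, be a simple multipart POST from the terminal to the system backend (which may then forward it to an external server). This is only a sketch; the endpoint name below is an assumption.

```typescript
// Illustrative sketch of the fourth button's transmit step: send the current
// location and the images held in the third display area to the system backend,
// which may in turn forward them to an external server. The endpoint is assumed.
async function sendToBackend(location: string, capturedImages: Blob[]): Promise<void> {
  const form = new FormData();
  form.append("location", location);
  capturedImages.forEach((img, i) => form.append("images", img, `capture-${i}.png`));

  const response = await fetch("/api/submissions", { method: "POST", body: form });
  if (!response.ok) {
    throw new Error(`Backend rejected the submission: ${response.status}`);
  }
}
```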
The system backend may comprise one or any number of servers, which may be real, i.e. hardware servers with physical processors, storage devices etc., or they can be virtual servers, i.e. implemented by virtualisation technology in a cloud-based system.
Any one or more of the cartography database, the street-level photography database, the location database and the correlation database may be located on physically or virtually separate, but operatively-interconnected servers.
The metadata points corresponding to the maps of the cartography database may comprise, for example, any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
The metadata points corresponding to the photographic imagery contained in the street-level photography database may comprise, for example, any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
The photographic imagery contained in the street-level photography database may comprise, for example, any one or more of the group comprising: photographs of buildings, photographs of streets, photographs of landmarks, photographs of points of interest, and photographs of geographic features.
The location data contained in the location database may comprise, for example, any one or more of the group comprising: street names, post codes, a post code-building number lookup table, a post code-building name lookup table, a street-building number lookup table, a street-building name lookup table, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, landmark names, and nicknames for any of the foregoing.
Suitably, the correlation database comprises a matched table of metadata values from any one of the databases, where there is an identical or similar corresponding metadata value in any one of the other databases. This enables the system to readily identify data in one database where there is similar or identical data in another database. This cross-referencing of one data set with another enables the system to operate dynamically, and to display simultaneously, data presented in different ways (e.g. street-level photography, cartography imagery, and address data), which all relate to the same, or substantially the same reference point in physical space.
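As a purely illustrative sketch of this idea, the correlation database can be thought of as a matched table keyed on a normalised metadata value, with pointers into the other databases; the field names below are assumptions rather than anything specified in the patent.

```typescript
// Illustrative sketch of the correlation database: a matched table linking a
// metadata value (e.g. a street name) to the records that carry the same or a
// similar value in the cartography and street-level photography databases.
interface CorrelationRow {
  metadataValue: string;     // e.g. "high street, anytown" (normalised)
  locationId: string;        // key into the location database
  mapTileId: string;         // key into the cartography database
  streetPhotoIds: string[];  // keys into the street-level photography database
}

const correlationTable: CorrelationRow[] = [
  { metadataValue: "high street, anytown", locationId: "loc-101", mapTileId: "map-55", streetPhotoIds: ["ph-9", "ph-10"] },
];

// Resolve a matched location into the corresponding map and photography records,
// so all three representations of the same point in space can be shown together.
function correlate(metadataValue: string): CorrelationRow | undefined {
  const needle = metadataValue.trim().toLowerCase();
  return correlationTable.find(row => row.metadataValue === needle);
}

console.log(correlate("High Street, Anytown"));
```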
Ideally, the system backend and terminal data communications interfaces are operatively interconnected via the internet. This effectively enables the user terminals to be any internet-connected device with a browser, and so proprietary software is not necessarily needed at the terminal, and it also facilitates implementing the invention in third-party web sites.
In one possible implementation of the invention, the system backend is provided with an Application Programming Interface (API), which is accessible via a network, such as the internet. By providing an API, third-party service providers are able to conveniently access the benefits of the invention in their own web sites or user interfaces. The API provides access to the various databases in the system backend, via input commands from the third-party service providers. These input commands form queries for the respective databases, and the system backend's outputs are standardised in some way by the API such that the third-party service provider is able to parse those outputs and display a similar GUI on its own web site to that which would be displayed in the GUI of a directly-connected terminal computer. Thus, it will be appreciated that by providing an API, some degree of centralisation can be achieved, which could reduce data duplication, and which also means that each third-party service provider is somewhat relieved of the need to handle/retain potentially sensitive customer data and large amounts of map, cartography and metadata. The use of an API also means that the third-party service providers' GUIs effectively operate as "thin clients", thereby reducing system overhead requirements at the third-party service provider's web site and/or on the third-party service providers' customers' terminal devices.
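A hedged sketch of what a standardised API output and a third-party "thin client" call might look like follows; the endpoint path and response fields are assumptions rather than anything the patent specifies.

```typescript
// Illustrative sketch of how a third-party service provider might call the
// backend API and parse its standardised output to render a similar GUI on its
// own site. The endpoint path and response fields are assumptions.
interface PinpointResponse {
  bestMatchLocation: string;          // the backend's best match for the query
  mapImageUrl: string;                // cropped cartography image
  mapBounds: { north: number; south: number; east: number; west: number };
  streetImageUrl?: string;            // absent when no street-level photography exists
}

async function queryPinpointApi(apiBase: string, location: string): Promise<PinpointResponse> {
  const response = await fetch(`${apiBase}/v1/pinpoint?location=${encodeURIComponent(location)}`);
  if (!response.ok) {
    throw new Error(`Pinpoint API error: ${response.status}`);
  }
  return (await response.json()) as PinpointResponse;
}

// A third-party site acting as a "thin client" only needs to fetch and display:
// queryPinpointApi("https://backend.example.com/api", "25 High Street, Anytown")
//   .then(r => console.log(r.bestMatchLocation, r.mapImageUrl, r.streetImageUrl));
```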
The human interface device (HID) can comprise any device that enables a user to interact with the system. Typically, the HID can be any one or more of the group comprising: a pointing device, a computer mouse, a computer touchpad, a trackpad, a keyboard, a virtual keyboard displayed on a display screen, and a touch screen. The means for displaying the graphical user interface suitably comprises a display panel or screen, such as that found in most modern smartphones, tablet PCs, computers, laptops and TV displays.
In a preferred embodiment of the invention, the location input field comprises a drop-down box linked to the location database. This enables a user, for example, who has previously logged-in, to select from a few pre-stored options, if desired, rather than having to key-in address/location data each time the system is used. Additionally or alternatively, the location input field may comprise a text input field, which can be used, for example, where pre-stored address/location data is not already in the system for a particular user.
The terminal processor or backend processor can be adapted to parse text inputted into the text input field and, using heuristics, to identify a best match between the text inputted into the text input field and a value in the location database. This enables the system to ignore spelling mistakes, or differences in the ways that the same location may be inputted (e.g. "Liverpool University" vs. "University of Liverpool" vs. "Liverpool Uni", etc.).
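One possible (and deliberately simple) heuristic is to normalise both strings and score their token overlap, as sketched below; the scoring rule and threshold are assumptions, and a production system might well use a fuzzier matcher.

```typescript
// Illustrative sketch of the heuristic matching step: normalise the inputted
// text and pick the location-database value with the highest token overlap, so
// that "Liverpool Uni" and "University of Liverpool" resolve to the same entry.
function tokens(text: string): Set<string> {
  return new Set(
    text
      .toLowerCase()
      .replace(/[^a-z0-9\s]/g, " ")
      .split(/\s+/)
      .filter(t => t.length > 0)
  );
}

function overlapScore(a: string, b: string): number {
  const ta = tokens(a);
  const tb = tokens(b);
  let shared = 0;
  ta.forEach(t => { if (tb.has(t)) shared++; });
  return shared / Math.max(ta.size, tb.size);
}

function bestMatch(input: string, locationDatabase: string[]): string | undefined {
  let best: string | undefined;
  let bestScore = 0;
  for (const candidate of locationDatabase) {
    const score = overlapScore(input, candidate);
    if (score > bestScore) { bestScore = score; best = candidate; }
  }
  return bestScore > 0.3 ? best : undefined;   // threshold chosen arbitrarily
}

console.log(bestMatch("Liverpool Uni", ["University of Liverpool", "Liverpool Lime Street Station"]));
```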
Embodiments of the invention shall now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a schematic system view of a system in accordance with the invention;
Figures 2 to 5 are schematic representations of a graphic user interface embodying the invention;
Figure 6 is a schematic view of a graphical user interface with re-searching functionalities;
Figure 7 is a schematic view of a graphical user interface depicting a certain error condition;
Figure 8 shows a variation of the system shown in Figure 1, but with an Application Programming Interface (API) layer; and
Figure 9 shows a variation of the system shown in Figure 8, in which all of the client devices communicate with the API layer.
Referring to Figure 1 of the drawings, a pinpointing system 10 in accordance with the invention comprises a backend 12 and a terminal computer 14. The backend is typically implemented on a server, or a group of interconnected servers, and it comprises a processor 16, which interacts with a number of databases 18, 20, 22, 24. The system backend 12 has a communications interface 26, which communicates with the terminal computer 14 via a network 28, such as the internet.
The terminal computer is typically embodied by a smartphone or tablet-type computing device, or indeed a desktop computer, which has a terminal processor 30 and a terminal communications interface 32, which communicates with the system backend 12 via the network 28, in the illustrated embodiment, via the internet.
The terminal computer, in the illustrated embodiment, has a main body 34, which houses the terminal processor and terminal communications interface 32 internally. The main body 34 has an external touch screen interface 36, upon which a graphical user interface 38 is displayable and via which a user (not shown) can interact with the graphical user interface 38. The touchscreen 36 therefore serves as both a display output device and a human input device (HID) and so a user (not shown) is able to interact with the system 10 in the manner set out hereinbelow.
The graphical user interface 38 displayed on the display screen 36 of the terminal computer 14 has a first data input area 40, into which a user can type, or otherwise input, a location. In most cases, a user will simply type in an address or postcode and then tap on a first submit button 42, which causes the terminal processor 30 to transmit the inputted address/location data, via the terminal communications interface 32, the network 28 and the communications interface 26 of the system backend 12, to the processor 16 of the system backend. The backend processor 16 parses the location information received and interrogates a location database 18 for a suitable match. Once a corresponding location has been identified in the location database 18, the back-end processor 16 uses a correlation database 20 to identify a match for that location in a cartography database 22. The cartography database 22 contains a set of maps, and the back-end processor takes an excerpt of a map from the cartography database and crops it to form an image. The metadata contained in the cartography database 22 corresponding to that excerpt of the map is compiled, by the back-end processor 16, into a single data file, or several data files, which is sent back, via the network 28, to the terminal device. The cropped map is then displayed in the first display area 44 in the graphical user interface 38 on the user's terminal computer 14.
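Purely as an illustrative sketch of the lookup chain just described, the following Python fragment models the four databases as in-memory mappings; the record structures, identifiers and the example address are assumptions made for the sketch, not limitations of the invention.

```python
# Hypothetical sketch of the lookup chain: location database 18 -> correlation
# database 20 -> cartography database 22 / street-level photography database 24.
LOCATION_DB = {"12 acacia avenue, l1 8ja": "LOC-0001"}
CORRELATION_DB = {"LOC-0001": {"map_id": "MAP-17", "photo_id": "PH-42"}}
CARTOGRAPHY_DB = {"MAP-17": {"file": "tiles/map17.png",
                             "bbox": (-2.970, 53.399, -2.960, 53.403)}}  # lon/lat
PHOTO_DB = {"PH-42": {"file": "photos/ph42.jpg"}}

def resolve(user_location):
    """Follow the chain and return the map excerpt, photograph and/or an error code."""
    key = " ".join(user_location.lower().split())
    loc_id = LOCATION_DB.get(key)
    if loc_id is None:
        return {"code": "E_NO_LOCATION"}
    links = CORRELATION_DB[loc_id]
    photo = PHOTO_DB.get(links.get("photo_id"))
    return {"map": CARTOGRAPHY_DB[links["map_id"]],
            "photo": photo,
            "code": None if photo else "E_NO_PHOTO"}

print(resolve("12 Acacia Avenue, L1 8JA"))
```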
When the system backend processor has identified a map excerpt from the cartography database, it uses the metadata associated with that excerpt to identify a corresponding set of street-level photography from a street-level photography database 24 within the system backend. Similarly, the back-end processor 16 excerpts a photograph from the street-level photography database 24, crops it and transmits it, via the network 28, to the terminal computer 14 in the same manner as previously described. The street-level photography corresponding to the map excerpt is thus displayed in a second display area 46 within the graphical user interface 38 of the terminal computer 14.
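The cropping of an excerpt, whether of a map or of a photograph, could be performed along the following lines. This is a minimal sketch assuming the Pillow imaging library and a pixel centre already derived from the stored metadata; neither the library nor the excerpt size is mandated by this disclosure.

```python
# Hypothetical sketch of the cropping step using the Pillow imaging library.
from PIL import Image

def crop_centred(image_path, centre_xy, size=(512, 512)):
    """Cut a size-pixel excerpt centred on centre_xy, clamped to the image bounds."""
    img = Image.open(image_path)
    cx, cy = centre_xy
    left = max(0, min(cx - size[0] // 2, img.width - size[0]))
    top = max(0, min(cy - size[1] // 2, img.height - size[1]))
    return img.crop((left, top, left + size[0], top + size[1]))

# The pixel centre would, in practice, be derived from the database metadata.
# excerpt = crop_centred("tiles/map17.png", centre_xy=(1480, 920))
# excerpt.save("excerpt.png")
```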
The graphical user interface comprises a second submit button 48, which, when pressed, copies the image displayed in the second display area 46 into a third display area 50, which is essentially a cache for onward transmission, as shall be explained hereinbelow.
If no corresponding street-level photography is available, the system backend processor 16 will generate a code, which is transmitted, via the network 28, to the terminal computer 14 and an error message is displayed in the second display area 46. The error message prompts a user (not shown) of the terminal computer 14 to begin an upload procedure, whereby a user can upload their own photography of a location, if desired.
To achieve this, a third input button 52 is provided within the graphical user interface 38, which, when clicked, opens a dialogue box via which a user (not shown) can select a photograph stored either on the terminal computer, or elsewhere, and upload it to the system 10. Upon uploading, the image is copied into the third display area 50, ready for onward transmission.
Once the user is satisfied that the location shown in the second display area 46 or the third display area 50 accurately reflects a desired location, the user (not shown) can press on a further submit button 54, which transmits a copy of the image or images contained in the third display area 50 to the system backend 12, again via the network.
The received images can optionally be stored in a further database 56 of the system backend, or they can be sent on to a third party (not shown) via the network 28.
The aforesaid overall description is illustrated, in greater detail, with reference to Figures 2 to 5 of the accompanying drawings.
Referring to Figure 2, the graphical user interface 38 displayed on the terminal computer 14 has a first input field 40 into which a user (not shown) can type, or otherwise input, a target location 60. Upon pressing the submit button 42, the target location 60 is transmitted to the system backend, as previously described.
Once the system backend has obtained a corresponding location match from the location database 18, cross-referenced it, using the correlation database 20, to a map location in the cartography database 22, and obtained corresponding street-level photography from the photography database 24, the corresponding map is displayed in the first display area 44, and the corresponding street-level photography is displayed alongside it in the second display area 46, as shown in Figure 3 of the drawings.
If the location shown in the second display area 46 is, in fact, the location that the user (not shown) intended, then the user can accept this. However, in many cases, a user will wish to exactly pinpoint a particular location within that image, and a procedure for that is shown in Figure 4 of the drawings.
In Figure 4, a user (not shown) is able to touch or click anywhere within the second display area 46 to place a marker 62, thereby pinpointing an exact location within the street-level photography image displayed in the second display area. Once the user (not shown) is satisfied, as shown in Figure 5, they can press or otherwise click on the copy/send button 48 on the graphical user interface 38, whereupon the image, containing the marker 62, is copied into the third display area 50.
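Purely as an illustration of how the marker 62 might be burnt into the copied image, the following sketch draws a simple circular marker at the clicked co-ordinates; a reasonably recent version of the Pillow library is assumed, and the marker style and co-ordinate handling are arbitrary choices made for the example.

```python
# Hypothetical sketch: overlay a marker on the street-level photograph at the
# position clicked/touched by the user, then save the annotated copy.
from PIL import Image, ImageDraw

def overlay_marker(photo_path, click_xy, out_path, radius=12):
    img = Image.open(photo_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = click_xy
    draw.ellipse((x - radius, y - radius, x + radius, y + radius),
                 outline=(255, 0, 0), width=4)   # red ring around the point
    draw.line((x, y + radius, x, y + 3 * radius), fill=(255, 0, 0), width=3)
    img.save(out_path)
    return out_path

# overlay_marker("photos/ph42.jpg", click_xy=(640, 415), out_path="pinned.jpg")
```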
Once the user (not shown) is completely satisfied, they can press on a further submit button 54 to upload the pinpointed location to the system backend.
In a preferred embodiment of the invention, the system backend 12 comprises a further database 56, which stores the submitted image, along with the inputted location data 60. Thereby, if the same, or a different, user subsequently enters the same location data, the system backend processor 16 may be able to return that same previously-pinpointed location automatically in the first instance, thereby obviating the need for the user to repeat this process.
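A minimal sketch of that re-use behaviour is given below, with the further database 56 modelled as a simple keyed store; the normalisation step and the stored fields are assumptions made for the example only.

```python
# Hypothetical sketch: previously pinpointed results keyed by a normalised
# form of the inputted location data.
PINPOINT_CACHE = {}

def normalise(location_text):
    return " ".join(location_text.lower().split())

def store_pinpoint(location_text, image_bytes, marker_xy):
    PINPOINT_CACHE[normalise(location_text)] = {"image": image_bytes,
                                                "marker": marker_xy}

def lookup_pinpoint(location_text):
    """Return a previously pinpointed result for the same inputted location, if any."""
    return PINPOINT_CACHE.get(normalise(location_text))
```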
In certain cases, as shown in Figure 6, the map location shown in the first display area 44 is somewhat inaccurate. This can occur where, for example, only a postcode has been entered, which may cover a fairly large area of ground, in which case the map displayed in the first display area 44 may not be accurately centred on the intended inputted location 60, and some user intervention may be required.
The back-end processor, as previously described, transmits an excerpt from the cartography database for display in the first display area 44 of the graphical user interface 38. However, what is actually displayed in the graphical user interface 38 is, in fact, a slightly cropped-down version of the excerpt obtained by the back-end processor.
As such, and as illustrated schematically in Figure 7, the user is able to drag the map appearing in the first display area 44 and, upon so doing, a "crosshair/cursor" 66 is automatically superimposed over the first display area 44 of the graphical user interface 38. The user is therefore able to pan the map until the crosshair 66 lies on the location of interest. Because the map displayed in the first display area is associated with metadata from the cartography database 22 of the system backend 12, the terminal computer 14 automatically sends back to the system backend 12 an updated location, based on the position of the crosshair 66 relative to the displayed map. The updated location 68 can then be displayed in the first input field 40 of the graphical user interface, and the accompanying street-level photography appearing in the second display area 46 can be updated to comport.
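Assuming, purely for illustration, that the metadata accompanying the map excerpt includes its geographic bounding box, the updated location under the crosshair 66 can be recovered by simple linear interpolation, for example as sketched below; the field names and bounding-box format are assumptions made for the sketch.

```python
def crosshair_to_coords(crosshair_px, image_size_px, bbox):
    """Convert a pixel position in the displayed map excerpt to (lon, lat).

    bbox is assumed to be (min_lon, min_lat, max_lon, max_lat) for the excerpt;
    linear interpolation is adequate for small, street-scale excerpts.
    """
    x, y = crosshair_px
    width, height = image_size_px
    min_lon, min_lat, max_lon, max_lat = bbox
    lon = min_lon + (x / width) * (max_lon - min_lon)
    lat = max_lat - (y / height) * (max_lat - min_lat)  # pixel y increases downwards
    return lon, lat

# Crosshair at the centre of a 512 x 512 excerpt covering the example bounding box:
print(crosshair_to_coords((256, 256), (512, 512), (-2.970, 53.399, -2.960, 53.403)))
```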
In a manner similar to that described previously, a user (not shown) is able to add a pinpoint marker 62 to the street-level photograph appearing in the second display area and then save that in the third display area 50 by clicking on the submit button 48. Again, the image from the third display area 50 can be transmitted to the system backend 12 by pressing on the send button 54.
In certain cases, there may be no corresponding street-level photography associated with a particular map location, and therefore the graphical user interface will display, in the second display area 46, an error message prompting the user (not shown) to upload their own imagery. An upload button 52 is provided for this purpose, which, when clicked, opens a dialogue box 70 via which a user can upload an image of their own to the system, which is displayed in the third display area 50, and which can then be transmitted to the system backend by clicking on the send button 54 as previously described.
One possible advantage of a user being able to upload their own imagery is that if the street-level photography is incomplete, or cannot be manoeuvred to the correct position, as may happen if, say, a house is located down a narrow passageway off a main street, then the user is able to upload their own imagery so that a particular location can nevertheless be accurately pinpointed.
Referring now to Figure 8 of the drawings, a similar system to that described previously is shown, albeit now the system backend has an API 100 to which various third-party service providers' web servers 102 can connect, for example, via the internet 28. In this case, rather than the terminal computer 14 being directly connected to the system backend 12, the GUI 38 now forms part of, or is incorporated into, the overall GUI 138 of a third-party service provider's web interface.
Here, the overall GUI 138 comprises a "shopping cart" area 104, which displays, for example, a list of items to be purchased; the GUI 38 previously described, which is now part of the "delivery options" section of the overall GUI 138; and a payment area 106, which is where a user (not shown) enters their contact/payment details. If the user logs into the overall GUI, various parts of the overall GUI 138 could auto-populate, for example by loading default payment settings in the payment area 106 and/or a default delivery address in the location input area 40 of the GUI 38.
Now, when a user (not shown) clicks on a "complete checkout" button 108, the location data, including the captured images in the third display area 50, can be sent to the third-party provider and then, if desired, passed on to a logistics provider 110. The logistics provider 110 supplies its delivery agents (not shown) with tablet-type computers, smart phones or the like 114 upon which, for each delivery, the delivery location data is displayed in text form (e.g. the address), along with copies of the images from the third display area 50 of the user's terminal device 14.
Typically, the delivery agents' devices 114 are GPS enabled, such that the address data is automatically transposed into the GPS system, enabling the delivery agent to navigate, via GPS, to the final delivery radius in the usual way. Now, however, by virtue of the fact that the delivery agents' device 114 also displays copies of the images from the third display area 50, the delivery operative will, hopefully, now be able to effect deliveries more efficiently - knowing the exact delivery point, as opposed to having to search the final delivery radius manually as s/he had to do previously.
It will be appreciated that the invention as described herein could greatly reduce the number of delivery failures, as well as greatly speeding up the final stage of the logistics/delivery process, because the delivery agent, or user seeking to find a specific location, has more data, that is to say, street-level photography and/or user imagery, to help locate the exact delivery point down to the "front door level". Other benefits and advantages of the invention will be apparent to the skilled reader, such as data rationalisation and centralisation, improvement in data protection, reduction of data duplication, reduction of system overhead, as well as physical benefits, such as reduced fuel consumption by delivery operatives through less mileage wasted attempting to find particular delivery locations and/or on repeated/failed deliveries.
Finally, referring to Figure 9 of the drawings, a variation of the scheme described above in relation to Figure 8 is shown. In this embodiment of the invention, all of the "client" devices, namely the third-party service provider's webserver 102, the terminal computer 14, the logistics provider 110 and the delivery agent's device 114, communicate with the system backend 12, via, for example, the internet 28 and the API layer 100.
In this embodiment of the invention, the third-party provider's webserver 102 serves content to its customers, which is displayed on the display area 138 of the terminal device 14. Here, the customer's shopping cart 104 and payment input field 106 are provided by the third-party webserver 102 in the usual way. However, the delivery address field 38 is provided by the system backend 12 of the invention. The API layer 100 enables the third-party webserver to export a particular customer's delivery address data, which is then searched by the system back end, in the manner previously described, to yield a predicted delivery location, which is then sent, via the API, and used to populate the location input field 40 on the terminal device 14. The user of the terminal device 14 can interact with the system 10 as previously described to pinpoint a particular location precisely, and that committed location, that is to say the location submitted when button 54 is clicked, is passed back to the system backend 12 via the API layer 100. At this point, the third-party webserver 102 can capture a copy of that data if required, or not, as the case may be.
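By way of a hedged illustration of that exchange, a third-party webserver might interact with the API layer roughly as follows (Python with the requests library); the base URL, routes, payload fields and authentication scheme are assumptions made for the sketch only and do not represent a prescribed interface.

```python
# Hypothetical sketch of the third-party webserver's side of the API exchange.
import requests

API_BASE = "https://backend.example.com/api/v1"   # illustrative endpoint only

def predict_delivery_location(customer_address, api_key):
    """Export the customer's stored address; receive the predicted location."""
    resp = requests.post(f"{API_BASE}/predict-location",
                         json={"address": customer_address},
                         headers={"Authorization": f"Bearer {api_key}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

def commit_pinpoint(location_id, marker_xy, api_key):
    """Pass the user-committed pinpoint back to the system backend."""
    resp = requests.post(f"{API_BASE}/commit-pinpoint",
                         json={"location_id": location_id, "marker": marker_xy},
                         headers={"Authorization": f"Bearer {api_key}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()
```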
Meanwhile, the logistics provider 110 can query the system backend 12, via the API layer 100, for a delivery corresponding to a particular consignment. The user-inputted data, for example the specified delivery location in field 40 and/or the imagery either uploaded to or otherwise stored in the third display area 50, can be retrieved from the system backend, via the API, and then passed on to the delivery agent's terminal device 114.
One of the advantages of the system shown in Figure 9, versus that shown in Figure 8, is the centralisation of data, namely using the system backend 12 to handle all of the address/location data for all of the parties concerned. This particular configuration avoids the need for the third-party webserver 102, the logistics provider 110, the user terminal 14, or indeed the delivery agent's terminal 114 to store or process any location data at all. A further possible advantage arises from data protection issues, which may be present where, for example, the user-imported imagery, as shown in the third display area 50, comprises personally identifiable information, such as car number plates, people in the image, etc. That being the case, because all of the user-inputted data is stored in the system backend 12, rather than on the third-party webserver 102 or the logistics provider server 110, the problem of data storage and retention is handled essentially by the system backend. Thus, in the event that any of the components of the system 10 other than the system backend are "hacked", the location data, and any potentially personally-identifiable data, is inaccessible via an attack on the third-party webserver 102, the user's terminal device 14, the logistics provider's system 110 or, indeed, the delivery agent's terminal 114.
Ensuring that all of the components of the system 10 hold only transient copies of the data, which is stored securely on the system backend 12, increases the security of the system considerably, avoids unnecessary data duplication and shifts all of the system overhead, that is to say the requirement for data processing, data storage and analysis, onto the system backend, rather than onto any of the terminal servers or devices.
Various features or aspects of the invention have been described herein by way of certain exemplary embodiments. Nothing in this disclosure prohibits one or more features/functions described in relation to one exemplary embodiment hereinabove from being implemented in or incorporated into another exemplary embodiment. For example, the features of the embodiment shown in Figures 1 to 7 of the drawings could be incorporated into the embodiment shown in Figure 8, for example, or vice-versa.

Claims

1. A location pinpointing system comprising a system backend and a user terminal; the system backend comprising: a backend processor; a backend data communications interface operatively connected to the user terminal; a cartography database containing maps and metadata corresponding to the maps; a street-level photography database containing photographic imagery of locations at street-level and metadata corresponding to the photographic imagery; a location database containing location data; and a correlation database linking: location data points in the location database to corresponding metadata points in the cartography database; and location data points in the location database to corresponding metadata points in the street-level photography database; the user terminal comprising: a terminal processor; a terminal data communications interface operatively connected to the system backend; a human interface device (HID); and means for displaying a graphical user interface (GUI), the GUI comprising: one or more location input fields; a first input button; a first image area for displaying an excerpt of a map from the cartography database; and a second image area for displaying a street-level photograph from the street-level photography database, the location pinpointing system being configured, in use, such that when a user of the user terminal, using the HID, inputs a location into at least one of the location input fields and presses the first input button, the inputted location is transmitted from the user terminal to the system backend, whereupon the backend processor: parses the received user-inputted location against entries in the location database to identify a best match location in the location database that corresponds substantially to the user-inputted location; correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the cartography database and renders a cropped image of a map from the cartography database, the cropped image of the map being substantially centred on the identified best match location; correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the street-level photography database and where a corresponding matched metadata point in the street-level photography database is available: renders a cropped image of a photographic image from the street-level photography database, the cropped image of the photographic image being substantially centred on the identified best match location, or where a corresponding matched metadata point in the street-level photography database is not available: renders a code; transmits, via the backend data communications interface, the cropped image of the map from the cartography database and the cropped image of the photographic image from the street-level photography database to the user terminal or the code, and wherein upon receipt of the cropped image of the map from the cartography database and the cropped image of the photographic image from the street-level photography database or the code, via the terminal data communications interface, the terminal processor updates the GUI to display: at least a portion of the cropped image of the map from the cartography database in the first image area; and either at least a portion of the cropped image of the photographic image from the street-level photography database in the second image area; or an image or message generated by the terminal processor based on the code in the second image area; and wherein the GUI permits a user, using the HID, to overlay a user-inputted marker on the second display area.
2. The system of claim 1, wherein the user-inputted marker is draggable, by the user interacting with the HID, relative to the second display area.
3. The system of claim 1 or claim 2, wherein the GUI further comprises a third display area and a second input button, which when pressed by a user interacting with the HID, opens an input form in the GUI via which, a user can import a user image, which user image is displayed in the third display area of the GUI.
4. The system of any preceding claim, wherein the GUI further comprises a third input button, wherein when the third input button is pressed by a user interacting with the HID, an image of the second display area is captured and displayed in the third display area.
5. The system of claim 4, wherein when the third input button is pressed by a user interacting with the HID, an image of the second display area incorporating the overlaid user-inputted marker is captured and displayed in the third display area.
6. The system of any preceding claim, wherein the GUI further comprises a fourth input button, and wherein when the fourth input button is pressed by a user interacting with the HID, the location currently entered in the location input field is stored in a memory of the user terminal.
7. The system of claim 6, wherein when the fourth input button is pressed by a user interacting with the HID, all images currently displayed in the third display area are stored in a memory of the user terminal.
8. The system of claim 6 or claim 7, wherein when the fourth input button is pressed by a user interacting with the HID, the terminal processor is adapted to transmit, via the terminal data communications interface, the location currently entered in the location input field to the system backend.
9. The system of any of claims 6, 7 or 8, wherein when the fourth input button is pressed by a user interacting with the HID, the terminal processor is adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to the system backend.
10. The system of claim 8 or claim 9, wherein upon receipt of the location and/or the images, the system backend is adapted to retransmit the location and/or the images to an external server.
11. The system of any preceding claim, wherein the GUI displays in the first image area, overlaid on the cropped image of the map, a marker corresponding to the best match location.
12. The system of any preceding claim, wherein the GUI displays in the second image area, overlaid on the cropped photographic image, a marker corresponding to the best match location.
13. The system of any preceding claim, wherein the HID of the user terminal permits the cropped image of the map from the cartography database in the first image area to be dragged.
14. The system of claim 13, wherein, upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to automatically fill and display regions initially lying outside the first image area with portions of the cropped image of the map from the cartography database that were not initially displayed.
15. The system of any preceding claim, wherein the cropped image of a map from the cartography database that is transmitted from the system backend to the user terminal further comprises metadata corresponding to the cropped portion of the map.
16. The system of claim 15, when dependent on claim 4 or claim 5, wherein upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to display, at a position substantially centred on the first image area, a temporary marker overlaid on the first image area.
17. The system of claim 16, wherein upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to identify a new location, the new location corresponding to the position of the temporary marker relative to the metadata corresponding to the cropped portion of the map, to update the location input field to contain the new location, to automatically transmit the new location to the system backend and to force a re-query.
18. The system of any of claims 6 to 17, wherein when the fourth input button is pressed by a user interacting with the HID, the terminal processor is adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to an external server.
19. The system of any preceding claim, wherein the system backend comprises one or more servers.
20. The system of any preceding claim, wherein the cartography database, the street-level photography database, the location database and the correlation database are located on physically separate, but operatively-interconnected servers.
21. The system of any preceding claim, wherein the metadata points corresponding to the maps of the cartography database comprise any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
22. The system of any preceding claim, wherein the metadata points corresponding to the photographic imagery contained in the street-level photography database comprise any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
23. The system of any preceding claim, wherein the photographic imagery contained in the street-level photography database comprises any one or more of the group comprising: photographs of buildings, photographs of streets, photographs of landmarks, photographs of points of interest, and photographs of geographic features.
24. The system of any preceding claim, wherein the location data contained in the location database comprises any one or more of the group comprising: street names, post codes, a post code-building number lookup table, a post code-building name lookup table, a street-building number lookup table, a street-building name lookup table, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, landmark names, and nicknames for any of the foregoing.
25. The system of any preceding claim, wherein the correlation database comprises a matched table of metadata values from any one of the databases, where there is an identical or similar corresponding metadata value in any one of the other databases.
26. The system of any preceding claim, wherein the system backend and terminal data communications interfaces are operatively interconnected via the internet.
27. The system of any preceding claim, wherein the human interface device comprises any one or more of the group comprising: a pointing device, a computer mouse, a computer touchpad, a trackpad, a keyboard, a virtual keyboard displayed on a display screen, and a touch screen.
28. The system of any preceding claim, wherein the means for displaying the graphical user interface comprises a screen.
29. The system of any preceding claim, wherein the location input field comprises a drop-down box linked to the location database.
30. The system of any preceding claim, wherein the location input field comprises a lookup interface linked to the location database.
31. The system of any preceding claim, wherein the location input field comprises a text input field.
32. The system of any preceding claim, wherein the terminal processor or backend processor is adapted to parse text inputted into the text input field, and using heuristics, to identify a best match between the text inputted into the text input field and a value in the location database.
33. The system of any preceding claim, further comprising an application programming interface interposed between the system backend and the user terminal.
34. The system of claim 33, wherein the application programming interface provides an interface for any one or more of the group comprising: a third-party service provider; an end user; a logistics provider; and a delivery agent's terminal.
PCT/GB2018/052611 2017-09-14 2018-09-13 Location pinpointing WO2019053441A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1714786.9A GB2568858A (en) 2017-09-14 2017-09-14 Location pinpointing
GB1714786.9 2017-09-14

Publications (1)

Publication Number Publication Date
WO2019053441A1 true WO2019053441A1 (en) 2019-03-21

Family

ID=60159361

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2018/052611 WO2019053441A1 (en) 2017-09-14 2018-09-13 Location pinpointing

Country Status (2)

Country Link
GB (1) GB2568858A (en)
WO (1) WO2019053441A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7746376B2 (en) * 2004-06-16 2010-06-29 Felipe Mendoza Method and apparatus for accessing multi-dimensional mapping and information
US9171527B2 (en) * 2012-11-20 2015-10-27 Google Inc. System and method for displaying geographic imagery

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060271287A1 (en) * 2004-03-24 2006-11-30 Gold Jonathan A Displaying images in a network or visual mapping system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KHON CAN HUA: "How to upload photo to google maps", 12 November 2016 (2016-11-12), pages 1, XP054978885, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=i1F7L0wuyVs> [retrieved on 20181121] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046307A (en) * 2019-04-11 2019-07-23 秦德玉 It is a kind of it is real-time understand target and based on the ground generation movable promotion method
US20230063784A1 (en) * 2021-09-02 2023-03-02 Shopify Inc. Systems and methods for e-commerce checkout with delay loading of checkout options
US11853981B2 (en) * 2021-09-02 2023-12-26 Shopify Inc. Systems and methods for e-commerce checkout with delay loading of checkout options

Also Published As

Publication number Publication date
GB201714786D0 (en) 2017-11-01
GB2568858A (en) 2019-06-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18785701; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18785701; Country of ref document: EP; Kind code of ref document: A1)