GB2568858A - Location pinpointing - Google Patents
Location pinpointing
- Publication number
- GB2568858A GB1714786.9A GB201714786A
- Authority
- GB
- United Kingdom
- Prior art keywords
- location
- database
- user
- image
- backend
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/083—Shipping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/083—Shipping
- G06Q10/0835—Relationships between shipper or supplier and carriers
- G06Q10/08355—Routing methods
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/10—Map spot or coordinate position indicators; Map reading aids
- G09B29/106—Map spot or coordinate position indicators; Map reading aids using electronic means
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Economics (AREA)
- Databases & Information Systems (AREA)
- Human Resources & Organizations (AREA)
- Operations Research (AREA)
- General Engineering & Computer Science (AREA)
- General Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Development Economics (AREA)
- Data Mining & Analysis (AREA)
- Entrepreneurship & Innovation (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- Quality & Reliability (AREA)
- Mathematical Physics (AREA)
- Remote Sensing (AREA)
- Educational Technology (AREA)
- Educational Administration (AREA)
- Information Transfer Between Computers (AREA)
Abstract
A location pinpointing system 10 assists in identifying a precise location. A user interacts with a backend system 12 via a user terminal 14. The backend comprises a correlation database 20 linking location data points in a location database 18 to metadata points in cartography and street-level photography databases 22, 24. The terminal comprises a human interface device, and a GUI 38 having a location input field 40, an input button 42, and display fields 44, 46. The backend takes terminal input to find a best match in the location database, and uses the correlation database to obtain cropped images from the cartography and photography databases to be at least partially displayed in the GUI display fields. When no corresponding metadata point in the photography database is found, a code is returned and an image or message is generated at the terminal based on that code. When mapping or other data are inaccurate, the user may supplement the system backend with a description, photograph or GPS coordinate of the exact location. The user may add a pinpoint (62, fig. 4) to the displayed data to be shared with the backend, which enables e.g. a delivery agent terminal to determine a correct door in a shared building.
Description
LOCATION PINPOINTING
This invention relates to location pinpointing methods and systems.
It is commonplace nowadays, when placing an order for a product or service online, to enter a delivery address. So that the goods/services are delivered to the correct location, a user is typically prompted to enter a delivery location, such as a post code and building number, a full address or other location data. Where the user has an account with a particular vendor, their delivery addresses are often stored in the vendor's database, which offers the additional convenience of permitting the user to select an address from a drop-down list, or a grid selection user interface, of previously-inputted delivery locations.
Similarly, when a user wishes to go to a physical location that is advertised online, the user is often presented with an address on the web site in question and this is often accompanied by a map showing that location. For convenience, the map is often auto-generated using a third-party database/software (such as Google® Maps), whereby the web site owner simply configures an inline frame of their web site to display third-party map imagery centred on the address specified by the web site owner. Usually, the address is highlighted by a pin graphic overlaid onto the map image, such that visitors to the web site can pinpoint the address in question.
In both cases, there is a link between address data - typically post code and postal unit (building number, building name) - and corresponding metadata associated with the map data. Thus, a named street in the map data can be cross-referenced to a specified street name. In addition, in most cases, the map metadata contains other postal unit identification information, such as 'odd numbers on the left from 1 to 59; even numbers on the right from 2 to 58', such that the location of a specific building in that street can be estimated.
Even though the resolution and metadata content of map data is continually improving, in many areas of the world, and in particular outside of major cities and conurbations, the ability of an exact postal unit to be accurately identified and pinpointed in an online map using a combination of 'street name + postal unit' or 'post code + postal unit' is somewhat limited. This is particularly the case in rural locations, where a single post code may cover several square kilometres, or where there is no systematic building numbering convention (e.g. farms identified by a combination of 'village + farm name', rather than by 'street name + house number'). Even in suburban areas, straightforward interpolation of map metadata to pinpoint an exact address can be problematic, for example along curved roads where there may be more odd-numbered houses on one side of the street than even-numbered houses on the opposite side of the street. Thus, for example, approximating that house number 25 is half-way along a road from 1-51 may not always be correct, especially where, say, there is a break in a row of houses to accommodate a road junction, a park, a bridge, a shop, a school, etc. somewhere along the street in question.
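By way of illustration only, the following TypeScript sketch shows the kind of naive linear interpolation just described; the coordinates and house numbers are invented for the example and form no part of the invention.

```typescript
// Illustrative only: naive linear interpolation of a house number along a street
// segment, of the kind that metadata such as "odd numbers 1-59 on the left"
// implicitly invites. The coordinates below are made up.

interface Point { lat: number; lon: number; }

// Estimate the position of `houseNumber` assuming numbers are spread evenly
// between the two ends of the street segment.
function interpolateHouse(start: Point, end: Point, firstNo: number, lastNo: number, houseNumber: number): Point {
  const t = (houseNumber - firstNo) / (lastNo - firstNo);
  return {
    lat: start.lat + t * (end.lat - start.lat),
    lon: start.lon + t * (end.lon - start.lon),
  };
}

// House 25 on a street numbered 1-51 is assumed to sit roughly half-way along,
// which fails if a junction, park or school interrupts the row of houses.
const estimate = interpolateHouse({ lat: 53.40, lon: -2.98 }, { lat: 53.41, lon: -2.97 }, 1, 51, 25);
console.log(estimate);
```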
Inaccuracies in map data, such as those outlined above, can make finding specific locations quite difficult. Thus, a user can follow map data (especially where it is transposed into, say, a GPS system) up to a certain point, but invariably finding the exact address/location (to the front-door level) usually requires some degree of human intervention.
Therefore, even though mapping (and GPS) systems can be used (in combination) to facilitate finding specific locations, there is almost always a need for some level of human intervention within a final delivery radius of the target location. Depending on the resolution and accuracy of map metadata, this final delivery radius can be as small as a few metres, but in other cases, it can be quite large, say up to a kilometre.
A need therefore exists for a system and/or method which reduces the size of the final delivery radius, and in particular a system and/or method which reduces the size of the final delivery radius to a level that reduces, significantly reduces, or avoids the need for human intervention in identifying an exact location. This invention aims to provide such a solution and/or to address one or more of the problems identified above.
Various aspects of the invention are set forth in the appended independent claims. Various optional or preferred features of the invention are set forth in the appended dependent claims.
According to one aspect of the invention, there is provided a location pinpointing system comprising a system backend and a user terminal; the system backend comprising: a backend processor; a backend data communications interface operatively connected to the user terminal; a cartography database containing maps and metadata corresponding to the maps; a street-level photography database containing photographic imagery of locations at street-level and metadata corresponding to the photographic imagery; a location database containing location data; and a correlation database linking: location data points in the location database to corresponding metadata points in the cartography database; and location data points in the location database to corresponding metadata points in the street-level photography database; the user terminal comprising: a terminal processor; a terminal data communications interface operatively connected to the system backend; a human interface device (HUD); and means for displaying a graphical user interface (GUI), the GUI comprising: one or more location input fields; a first input button; a first image area for displaying an excerpt of a map from the cartography database; and a second image area for displaying a street-level photograph from the street-level photography database, the location pinpointing system being configured, in use, such that when a user of the user terminal, using the HUD, inputs a location into at least one of the location input fields and presses the first input button, the inputted location is transmitted from the user terminal to the system backend, whereupon the backend processor: parses the received user-inputted location against entries in the location database to identify a best match location in the location database that corresponds substantially to the user-inputted location; correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the cartography database and renders a cropped image of a map from the cartography database, the cropped image of the map being substantially centred on the identified best match location; correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the street-level photography database and where a corresponding matched metadata point in the street-level photography database is available: renders a cropped image of a photographic image from the street-level photography database, the cropped image of the photographic image being substantially centred on the identified best match location; or where a corresponding matched metadata point in the street-level photography database is not available: renders a code; transmits, via the backend data communications interface, to the user terminal the cropped image of the map from the cartography database and either the cropped image of the photographic image from the street-level photography database or the code, and wherein upon receipt, via the terminal data communications interface, of the cropped image of the map from the cartography database and either the cropped image of the photographic image from the street-level photography database or the code, the terminal processor updates the GUI to display: at least a portion of the cropped image of the map from the cartography database in the first image area; and either at least a portion of the cropped image of the photographic image from the street-level photography database in the second image area; or
an image or message generated by the terminal processor based on the code in the second image area.
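By way of illustration only, the following sketch outlines the backend sequence set out in the preceding aspect (parse, match, correlate, crop, or return a code); the type and function names are assumptions introduced purely for the example and do not form part of the claimed system.

```typescript
// A minimal sketch of the backend flow described above. All names
// (LocationRecord, findBestMatch, mapCrop, etc.) are assumptions made for
// illustration; the patent does not prescribe any particular implementation.

interface LocationRecord { id: string; label: string; lat: number; lon: number; }
interface CroppedImage { png: Uint8Array; centre: { lat: number; lon: number }; widthM: number; }

type PinpointResponse =
  | { map: CroppedImage; photo: CroppedImage }        // street-level imagery available
  | { map: CroppedImage; code: "NO_STREET_IMAGERY" }; // terminal renders a message instead

function handlePinpointRequest(
  userInput: string,
  findBestMatch: (input: string) => LocationRecord,
  mapCrop: (loc: LocationRecord) => CroppedImage,
  photoCrop: (loc: LocationRecord) => CroppedImage | undefined,
): PinpointResponse {
  const best = findBestMatch(userInput);          // parse against the location database
  const map = mapCrop(best);                      // via the correlation + cartography databases
  const photo = photoCrop(best);                  // via the correlation + photography databases
  return photo ? { map, photo } : { map, code: "NO_STREET_IMAGERY" };
}
```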
In summary, the invention provides a method and system for a user to be able to input a target location, and for that location to be displayed simultaneously in two formats within the user's GUI, namely a map view and a street-level photography view. From this, the user is able to move either view so as to centre the displayed street-level photography view on the exact target location. Then, by pressing a button, that centred, street-level photography view is captured and displayed in the third display area. Now, the user is able to send to the backend not only the inputted target address, but also a street-level photography view of the exact location. Thus, if a second person wishes to go to the specified target location, they can obtain the address, and a street-level photography view of the exact target location, from the system backend. This removes some of the guesswork that may otherwise be needed by the second person because the second person can compare what they see with the street-level photography view submitted by the user, to confirm the exact target location.
In other cases, where there is no street-level photography view corresponding to an inputted target location, the system notifies the user and prompts them to upload their own image of the exact target location. Now, if a second person wishes to go to the specified target location, they can obtain the address, and a user-submitted street-level photography view of the exact target location, from the system backend. Again, this removes some of the guesswork that may otherwise be needed by the second person because the second person can compare what they see with the street-level photography view submitted by the user, to confirm the exact target location. The ability for a user to upload their own photography, and optionally for the system backend to store that user-uploaded imagery, not only improves the system backend by providing data where data previously did not exist, but also assists the user going forward: once the user has uploaded their own street-level photography/imagery, if the same or a similar address is re-inputted at a later point in time, the system backend may be able to output the user-uploaded imagery where conventional street-level photography from the street-level photography database does not exist.
Suitably, the GUI displays in the first image area, overlaid on the cropped image of the map, a marker corresponding to the best match location. The marker could be a pin-type image or a crosshair overlaid on the map, which assists the user in checking that the system backend has returned an accurate result.
In addition, the GUI may display in the second image area, overlaid on the cropped photographic image, a marker corresponding to the best match location. Ideally, the default position of the marker in the second display area of the GUI corresponds to the user's intended target location. However, the GUI may also allow the user to move the marker overlaid on the second display area, and/or to drag and update the image displayed in the second display area so that the overlaid marker does, indeed, correspond to the intended target location. This feature may be particularly advantageous when attempting to pinpoint a particular entrance of a multi-entrance property: for example, the user may be able to move the overlaid marker so that it points to a specific location, e.g. a back door, delivery drop-box, secure location, etc., within the displayed image. Furthermore, because much of the currently-available street-level photography is discontinuous, that is to say, comprises a series of images taken at, say 10m or 20m intervals along a street, if one of the images does not exactly correspond with a particular target location, or if the image is obscured by traffic, a hedge, etc., then it can be difficult to pinpoint an exact target location using this data. The invention thus permits a user of the system to upload/add their own image/photograph of the exact target location to address this issue.
The human interface device (HUD) of the user terminal suitably permits the cropped image of the map from the cartography database in the first image area to be dragged. Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor may be adapted to automatically fill and display regions initially lying outside the first image area with portions of the cropped image of the map from the cartography database that were not initially displayed. This can provide a smoother user interface experience because rather than having to download new map imagery and refresh the displayed image, the imagery is already there, albeit cropped to the boundary of the first image area. Now, the system simply displays a different crop boundary within the already-downloaded map imagery, thus improving the user interface experience.
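By way of illustration only, one way the 'already-downloaded' map dragging described above might be realised is sketched below; the pixel sizes and field names are assumptions made for the example.

```typescript
// Sketch of the "already-downloaded, larger-than-displayed" map idea: the terminal
// holds a bitmap bigger than the first image area and panning merely changes which
// window of it is drawn. Field names and sizes are illustrative assumptions.

interface MapTile { widthPx: number; heightPx: number; }            // full downloaded crop
interface Viewport { x: number; y: number; w: number; h: number; }  // window into the tile

// Shift the viewport by a drag delta, clamped so it never leaves the downloaded tile.
function panViewport(tile: MapTile, view: Viewport, dx: number, dy: number): Viewport {
  const clamp = (v: number, lo: number, hi: number) => Math.min(Math.max(v, lo), hi);
  return {
    ...view,
    x: clamp(view.x - dx, 0, tile.widthPx - view.w),
    y: clamp(view.y - dy, 0, tile.heightPx - view.h),
  };
}

// Dragging 40px right / 25px down just selects a different crop boundary;
// no new imagery has to be fetched from the backend.
const next = panViewport({ widthPx: 1200, heightPx: 1200 }, { x: 300, y: 300, w: 600, h: 600 }, 40, 25);
console.log(next); // { x: 260, y: 275, w: 600, h: 600 }
```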
Suitably, the cropped image of a map from the cartography database that is transmitted from the system backend to the user terminal further comprises metadata corresponding to the cropped portion of the map. Thus, the metadata corresponding to the displayed map image is already downloaded and ready for use, which can reduce latency where otherwise that data may need to be fetched in a separate step.
Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is ideally adapted to display, at a position substantially centred on the first image area, a temporary marker overlaid on the first image area. This may provide the user of the terminal with a rough starting point for subsequent dragging or editing procedures.
Upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is suitably adapted to identify a new location, the new location corresponding to the position of the temporary marker relative to the metadata corresponding to the cropped portion of the map, to update the location input field to contain the new location, to automatically transmit the new location to the system backend and to force a requery. Thus, when the user interacts with the terminal, the data is updated, which can be convenient.
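By way of illustration only, the sketch below shows how the position of the temporary marker might be converted back into a location using metadata assumed to accompany the cropped map (here simplified to a north-up bounding box); the re-query call is hypothetical.

```typescript
// A sketch of turning the temporary marker (centred on the first image area) back
// into geographic coordinates using metadata assumed to accompany the cropped map,
// here simplified to a north-up bounding box. The requery callback is hypothetical.

interface MapMeta { north: number; south: number; east: number; west: number; widthPx: number; heightPx: number; }

// Convert a pixel position within the cropped map into lat/lon (linear approximation,
// adequate at street scale).
function pixelToLatLon(meta: MapMeta, xPx: number, yPx: number): { lat: number; lon: number } {
  return {
    lat: meta.north - (yPx / meta.heightPx) * (meta.north - meta.south),
    lon: meta.west + (xPx / meta.widthPx) * (meta.east - meta.west),
  };
}

// After a drag, the marker sits at the centre of the image area; the terminal could
// then update the location field and ask the backend to re-run the search.
function onDragEnd(meta: MapMeta, requery: (lat: number, lon: number) => void): void {
  const { lat, lon } = pixelToLatLon(meta, meta.widthPx / 2, meta.heightPx / 2);
  requery(lat, lon); // e.g. POST the new location back to the system backend
}
```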
The GUI may permit a user, using the HUD, to overlay a user-inputted marker on the second display area. The user-inputted marker may be draggable, by the user interacting with the HUD, relative to the second display area.
The GUI may further comprise a third display area. The third display area may be used for any purpose, but it may in particular be used for displaying user-uploaded or user-imported data, such as comments, instructions or user-generated or user-captured imagery.
The GUI may further comprise a second input button which, when pressed by a user interacting with the HUD, opens an input form in the GUI via which a user can import a user image, which user image is displayed in the third display area of the GUI. This conveniently facilitates the interaction of the user with the system with regard to uploading user content to the system. The input form may, for example, comprise a terminal directory/file browser or a search facility.
The GUI may further comprise a third input button which, when pressed by a user interacting with the HUD, causes an image of the second display area to be captured and displayed in the third display area. When the third input button is pressed by a user interacting with the HUD, an image of the second display area incorporating the overlaid user-inputted marker can be captured and displayed in the third display area.
The GUI may further comprise a fourth input button which, when pressed by a user interacting with the HUD, causes the location currently entered in the location input field to be stored in a memory of the user terminal. When the fourth input button is pressed by a user interacting with the HUD, for example, all images currently displayed in the third display area can be stored in a memory of the user terminal.
Additionally or alternatively, when the fourth input button is pressed by a user interacting with the HUD, the terminal processor can be adapted to transmit, via the terminal data communications interface, the location currently entered in the location input field to the system backend.
Additionally or alternatively, when the fourth input button is pressed by a user interacting with the HUD, the terminal processor can be adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to the system backend.
In a preferred embodiment of the invention, upon receipt of the location and/or the images sent to it by the terminal device, the system backend can be adapted to retransmit the location and/or the images to an external server. This can occur, for example, where, when the fourth input button is pressed by a user interacting with the HUD, the terminal processor is adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to an external server.
The system backend may comprise one or any number of servers, which may be real, i.e. hardware servers with physical processors, storage devices etc., or they can be virtual servers, i.e. implemented by virtualisation technology in a cloud-based system.
Any one or more of the cartography database, the street-level photography database, the location database and the correlation database may be located on physically or virtually separate, but operatively-interconnected servers.
The metadata points corresponding to the maps of the cartography database may comprise, for example, any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
The metadata points corresponding to the photographic imagery contained in the street-level photography database may comprise, for example, any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
The photographic imagery contained in the street-level photography database may comprise, for example, any one or more of the group comprising: photographs of buildings, photographs of streets, photographs of landmarks, photographs of points of interest, and photographs of geographic features.
The location data contained in the location database may comprise, for example, any one or more of the group comprising: street names, post codes, a post code-building number lookup table, a post code-building name lookup table, a street-building number lookup table, a street-building name lookup table, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, landmark names, and nicknames for any of the foregoing.
Suitably, the correlation database comprises a matched table of metadata values from any one of the databases, where there is an identical or similar corresponding metadata value in any one of the other databases. This enables the system to readily identify data in one database where there is similar or identical data in another database. This cross-referencing of one data set with another enables the system to operate dynamically, and to display simultaneously, data presented in different ways (e.g. street-level photography, cartography imagery, and address data), which all relate to the same, or substantially the same reference point in physical space.
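By way of illustration only, the correlation database could, in one possible implementation, be realised as a matched table of the following kind; the schema and keys shown are assumptions made for the example.

```typescript
// One way (an assumption, not the patent's prescription) to realise the correlation
// database: a table of rows pairing a location-database key with the matching
// metadata keys in the cartography and street-level photography databases.

interface CorrelationRow {
  locationId: string;        // key into the location database
  cartographyKey: string;    // matching metadata point in the cartography database
  photographyKey?: string;   // matching metadata point in the photography database, if any
}

// Given a best-match location, return the keys needed to pull the map excerpt and,
// where available, the street-level photograph.
function correlate(rows: CorrelationRow[], locationId: string): CorrelationRow | undefined {
  return rows.find((r) => r.locationId === locationId);
}

const table: CorrelationRow[] = [
  { locationId: "L-0001", cartographyKey: "tile/531/402", photographyKey: "pano/88213" },
  { locationId: "L-0002", cartographyKey: "tile/531/403" }, // no street-level imagery here
];
console.log(correlate(table, "L-0002")); // photographyKey is undefined, so a code is rendered instead
```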
Ideally, the system backend and terminal data communications interfaces are operatively interconnected via the internet. This effectively enables the user terminals to be any internet-connected device with a browser, and so proprietary software is not necessarily needed at the terminal, and it also facilitates implementing the invention into third-party web sites.
In one possible implementation of the invention, the system backend is provided with an Application Programming Interface (API), which is accessible via a network, such as the internet. By providing an API, third-party service providers are able to conveniently access the benefits of the invention in their own web sites or user interfaces. The API provides access to the various databases in the system backend, via input commands from the third-party service providers. These input commands form queries for the respective databases, and the system backend's outputs are standardised in some way by the API such that the third-party service provider is able to parse those outputs and display a similar GUI on its own web site to that which would be displayed in the GUI of a directly-connected terminal computer. Thus, it will be appreciated that by providing an API, some degree of centralisation can be achieved, which could reduce data duplication, and which also means that each third-party service provider is somewhat relieved of the need to handle/retain potentially sensitive customer data and large amounts of map, cartography and metadata. The use of an API also means that the third-party service providers' GUIs effectively operate as thin clients, thereby reducing system overhead requirements at the third-party service provider's web site and/or on the third-party service providers' customers' terminal devices.
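By way of illustration only, a third-party call to such an API might take the following form; the endpoint path, host and field names are invented for the example and are not prescribed by the invention.

```typescript
// Hypothetical shape of a third-party call to the backend's API. The endpoint,
// field names and host are invented for illustration only; the patent does not
// define a wire format.

interface PinpointApiResponse {
  matchedLocation: string;
  mapImageUrl: string;
  streetImageUrl?: string;   // absent when the backend returns a "no imagery" code
  code?: string;
}

async function queryPinpointApi(apiKey: string, location: string): Promise<PinpointApiResponse> {
  const res = await fetch("https://backend.example.com/api/v1/pinpoint", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ location }),
  });
  if (!res.ok) throw new Error(`pinpoint query failed: ${res.status}`);
  return (await res.json()) as PinpointApiResponse;
}
```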
The human interface device (HID) can comprise any device that enables a user to interact with the system. Typically, the HID can be any one or more of the group comprising: a pointing device, a computer mouse, a computer touchpad, a trackpad, a keyboard, a virtual keyboard displayed on a display screen, and a touch screen. The means for displaying the graphical user interface suitably comprises a display panel or screen, such as that found in most modern smartphones, tablet PCs, computers, laptops and TV displays.
In a preferred embodiment of the invention, the location input field comprises a drop-down box linked to the location database. This enables a user, for example, who has previously logged-in, to select from a few pre-stored options, if desired, rather than having to key-in address/location data each time the system is used. Additionally or alternatively, the location input field may comprise a text input field, which can be used, for example, where pre-stored address/location data is not already in the system for a particular user.
The terminal processor or backend processor can be adapted to parse text inputted into the text input field, and using heuristics, to identify a best match between the text inputted into the text input field and a value in the location database. This enables the system to ignore spelling mistakes, or differences in the ways that the same location may be inputted (e.g. Liverpool University vs. University of Liverpool vs. Liverpool Uni, etc.).
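By way of illustration only, one common family of heuristics (token normalisation combined with an edit-distance measure) is sketched below; the invention is not limited to any particular heuristic.

```typescript
// The patent leaves the matching heuristics open; this sketch uses one common
// approach, normalised tokens plus Levenshtein edit distance, so that
// "Liverpool Uni" and "University of Liverpool" land on the same database entry.

function levenshtein(a: string, b: string): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1));
  return d[a.length][b.length];
}

// Lower-case, strip punctuation, drop filler words and sort tokens so word order is ignored.
const normalise = (s: string) =>
  s.toLowerCase().replace(/[^a-z0-9 ]/g, "").split(/\s+/).filter((t) => !["of", "the"].includes(t)).sort().join(" ");

// Pick the database entry whose normalised form is closest to the normalised input.
function bestMatch(input: string, entries: string[]): string {
  const q = normalise(input);
  return entries.reduce((best, e) => (levenshtein(q, normalise(e)) < levenshtein(q, normalise(best)) ? e : best));
}

console.log(bestMatch("Liverpool Uni", ["University of Liverpool", "Liverpool Lime Street", "Liver Building"]));
```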
Embodiments of the invention shall now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a schematic system view of a system in accordance with the invention;
Figures 2 to 5 are schematic representations of a graphical user interface embodying the invention;
Figure 6 is a schematic view of a graphical user interface with re-searching functionalities;
Figure 7 is a schematic view of a graphical user interface depicting a certain error condition;
Figure 8 shows a variation of the system shown in Figure 1, but with an Application Programming Interface (API) layer; and
Figure 9 shows a variation of the system shown in Figure 8, in which all of the client devices communicate with the API layer.
Referring to Figure 1 of the drawings, a pinpointing system 10 in accordance with the invention comprises a backend 12 and a terminal computer 14. The backend is typically implemented on a server, or a group of interconnected servers, and it comprises a processor 16, which interacts with a number of databases 18, 20, 22, 24. The system backend 12 has a communications interface 26, which communicates with the terminal computer 14 via a network 28, such as the internet.
The terminal computer is typically embodied by a smartphone or tablet-type computing device, or indeed a desktop computer, which has a terminal processor 30 and a terminal communications interface 32, which communicates with the system backend 12 via the network 28, in the illustrated embodiment, via the internet.
The terminal computer in the illustrated embodiment has a main body 34, which houses the terminal processor and terminal communications interface 32 internally. The main body 34 has an external, touch screen interface 36, upon which a graphical user interface 38 is displayable and via which a user (not shown) can interact with the graphical user interface 38. The touchscreen 36 therefore serves as both a display output device and a human input device (HID), and so a user (not shown) is able to interact with the system 10 in the manner set out hereinbelow.
The graphical user interface 38 displayed on the display screen 36 of the terminal computer 14 has a first data input area 40, into which a user can type, or otherwise input, a location. In most cases, a user will simply type in an address or postcode and then tap on a first submit button 42, which causes the terminal processor 30 to transmit the inputted address/location data, via the terminal communications interface 32, the network 28 and the communications interface 26 of the system backend 12, to the processor 16 of the system backend. The backend processor 16 parses the location information received and interrogates a location database 18 for a suitable match. Once a corresponding location has been identified in the location database 18, the back-end processor 16 uses a correlation database 20 to identify a match for that location in a cartography database 22. The cartography database 22 contains a set of maps, and the back-end processor thereby takes an excerpt from a map from the cartography database and crops it to form an image. The metadata contained in the cartography database 22 corresponding to that excerpt of the map is compiled, by the back-end processor 16, into one or more data files, which are sent back, via the network 28, to the terminal device. The cropped map is then displayed in the first display area 44 in the graphical user interface 38 on the user's terminal computer 14.
When the system backend processor has identified a map excerpt from the cartography database, it uses the metadata associated with that excerpt to identify a corresponding set of street-level photography from a street-level photography database 24 within the system backend. Similarly, the back-end processor 16 excerpts a photograph from the street-level photography database 24, crops it and transmits it, via the network 28, to the terminal computer 14 in the same manner as previously described. The street-level photography corresponding to the map excerpt is thus displayed in a second display area 46 within the graphical user interface 38 of the terminal computer 14.
The graphical user interface comprises a second submit button 48, which, when pressed, copies the image displayed in the second display area 46 into a third display area 50, which is essentially a cache for onward transmission, as shall be explained hereinbelow.
If no corresponding street-level photography is available, the system backend processor 16 will generate a code, which is transmitted, via the network 28, to the terminal computer 14 and an error message is displayed in the second display area 46. The error message prompts a user (not shown) of the terminal computer 14 to begin an upload procedure, whereby a user can upload their own photography of a location, if desired.
To achieve this, a third input button 52 is provided within a graphical user interface 38, which, when clicked, opens a dialogue box via which a user (not shown) can select a photograph stored either on the terminal computer, or elsewhere, and upload it to the system 10. Upon uploading, the image is copied into the third display area 50, ready for onward transmission.
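By way of illustration only, the terminal-side handling of the code described above could be sketched as follows; the state shape and message text are assumptions made for the example.

```typescript
// A sketch of the terminal-side branch described above: when the backend returns a
// code instead of street-level imagery, the GUI shows a message in the second
// display area and offers the upload route instead. The GuiState shape is assumed.

interface GuiState {
  secondArea: { kind: "photo"; url: string } | { kind: "message"; text: string };
  uploadEnabled: boolean;
}

type BackendPayload = { photoUrl: string } | { code: "NO_STREET_IMAGERY" };

function applyPayload(payload: BackendPayload): GuiState {
  if ("photoUrl" in payload) {
    return { secondArea: { kind: "photo", url: payload.photoUrl }, uploadEnabled: true };
  }
  // No corresponding street-level photography: prompt the user to upload their own image.
  return {
    secondArea: { kind: "message", text: "No street-level imagery found; please upload a photograph of the location." },
    uploadEnabled: true,
  };
}
```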
Once the user is satisfied that the location shown in the second display area 46 or the third display area 50 accurately reflects a desired location, the user (not shown) can press on a further submit button 54, which transmits a copy of the image or images contained in the third display area 50 to the system backend 12, again via the network.
The received images can optionally be stored in a further database 56 of the system backend, or they can be sent on to a third party (not shown) via the network 28.
The aforesaid overall description is illustrated, in greater detail, with reference to Figures 2 to 5 of the accompanying drawings.
Referring to Figure 2, the graphical user interface 38 displayed on the terminal computer 14 has a first input field 40 into which a user (not shown) can type, or otherwise input, a target location 60. Upon pressing the submit button 42, the target location 60 is transmitted to the system backend, as previously described.
Once the system backend has obtained a corresponding location match from the location database 18 and cross-referenced it, using the correlation database 20, to a map location in the cartography database 22, and obtained corresponding street-level photography from the photography database 24, the corresponding map is displayed in the first display area 44, and the corresponding street-level photography is displayed alongside it in the second display area 46, as shown in Figure 3 of the drawings.
If the location shown in the second display area 46 is, in fact, the location that the user (not shown) intended, then the user can accept this. However, in many cases, a user will wish to exactly pinpoint a particular location within that image, and a procedure for that is shown in Figure 4 of the drawings.
In Figure 4, a user (not shown) is able to touch or click anywhere within the second display area 46 to place a marker 62 thereby pinpointing an exact location within the street-level photography image displayed in the second display area. Once the user (not shown) is satisfied, as shown in Figure 5, they can press or otherwise click on the copy/send button 48 on the graphical user interface 38, whereupon the image, containing the marker 62 is copied into the third display area 50.
Once the user (not shown) is completely satisfied, they can press on a further submit button 54 to upload the pinpointed location to the system backend.
In a preferred embodiment of the invention, the system backend 12 comprises a further database 56, which stores the submitted image, along with the inputted location data 60. Thereby, if the same, or a different, user subsequently enters the same location data, then the system backend processor 16 may be able to return automatically that same previously-pinpointed location corresponding to that input in the first instance, thereby obviating the need for the user to repeat this process subsequently.
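By way of illustration only, the further database 56 could, in one possible implementation, behave as a simple keyed store of the following kind; the normalisation and storage details are assumptions made for the example.

```typescript
// A sketch of the further database 56 acting as a simple keyed store: once a user
// has submitted a pinpointed image for a location, later queries for the same
// location string can be answered from it first. The normalisation and storage
// details are assumptions.

interface PinpointedSubmission { location: string; imagePng: Uint8Array; submittedAt: Date; }

class PinpointStore {
  private byLocation = new Map<string, PinpointedSubmission>();

  private key(location: string): string {
    return location.trim().toLowerCase().replace(/\s+/g, " ");
  }

  save(sub: PinpointedSubmission): void {
    this.byLocation.set(this.key(sub.location), sub);
  }

  // Returns the previously pinpointed image, if this location has been seen before.
  lookup(location: string): PinpointedSubmission | undefined {
    return this.byLocation.get(this.key(location));
  }
}

const store = new PinpointStore();
store.save({ location: "1 High Street, Anytown", imagePng: new Uint8Array(), submittedAt: new Date() });
console.log(store.lookup("1 high street,  anytown") !== undefined); // true
```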
In certain cases, as shown in Figure 6, the map location shown in the first display area 44 is somewhat inaccurate. This can occur where, for example, only a postcode has been entered, which may cover a fairly large area of ground. In that case, the map displayed in the first display area 44 may not be accurately centred on the intended inputted location 60, and some user intervention may be required.
The back-end processor, as previously described, transmits an excerpt from the cartography database for display in the first display area 44 of the graphical user interface 38. However, what is actually displayed in the graphical user interface 38, is, in fact, a slightly cropped-down version of the excerpt obtained by the back-end processor.
As such, and as illustrated schematically in Figure 7, the user is able to drag the map appearing in the first display area 44 and, upon so doing, a crosshair/cursor 66 is automatically superimposed over the first display area 44 of the graphical user interface 38. The user is therefore able to pan the map until the crosshair 66 lies on the location of interest. Because the map displayed in the first display area is associated with metadata from the cartography database 22 of the system backend 12, the terminal computer 14 automatically sends back to the system backend 12 an updated location, based on the location of the crosshair 66 relative to the displayed map. The updated location 68 can then be displayed in the location input field 40 of the graphical user interface, and the accompanying street-level photography appearing in the second display area 46 can be updated to match.
In a manner similar to that described previously, a user (not shown) is able to add a pinpoint marker 62 to the street-level photograph appearing in the second display area and then save that in the third display area 50 by clicking on the submit button 48. Again, the image from the third display area 50 can be transmitted to the system backend 12 by pressing on the send button 54.
In certain cases, there may be no corresponding street-level photography associated with a particular map location, and therefore the graphical user interface will display, in the second display area 46, an error message prompting the user (not shown) to upload their own imagery. An upload button 52 is provided for this purpose, which, when clicked, opens a dialogue box 70 via which a user can upload an image of their own to the system, which is displayed in the third display area 50, and which can then be transmitted to the system backend by clicking on the send button 54 as previously described.
One possible advantage of a user being able to upload their own imagery is that if the street-level photography is incomplete, or cannot be manoeuvred to the correct position, as may happen if, say, a house is located down a narrow passageway off a main street, then the user is able to upload their own imagery so that a particular location can nevertheless be accurately pinpointed.
Referring now to Figure 8 of the drawings, a similar system to that described previously is shown, albeit now the system backend has an API 100 to which various third-party service providers' web servers 102 can connect, for example, via the internet 28. In this case, rather than the terminal computer 14 being directly connected to the system backend 12, the GUI 38 now forms part of, or is incorporated into, the overall GUI 138 of a third-party service provider's web interface.
Here, the overall GUI 138 comprises a shopping cart area 104, which displays, for example, a list of items to be purchased; the GUI 38 previously described, which is now part of the delivery options section of the overall GUI 138; and a payment area 106, which is where a user (not shown) enters their contact/payment details. If the user logs into the overall GUI, this could cause various parts of the overall GUI 138 to auto-populate, for example, by loading default payment settings in the payment area 106 and/or a default delivery address in the location input field 40 of the GUI 38.
Now, when a user (not shown) clicks on a complete checkout button 108, the location data, including the captured images in the third display area 50, can be sent to the third-party provider and then, if desired, passed on to a logistics provider 110. The logistics provider 110 supplies its delivery agents (not shown) with tablet-type computers, smart phones or the like 114 upon which, for each delivery, the delivery location data in text form (e.g. the address) is displayed, as well as copies of the images from the third display area 50 of the user's terminal device 14.
Typically, the delivery agents' devices 114 are GPS enabled, such that the address data is automatically transposed into the GPS system, enabling the delivery agent to navigate, via GPS, to the final delivery radius in the usual way. Now, however, by virtue of the fact that the delivery agents' device 114 also displays copies of the images from the third display area 50, the delivery operative will, hopefully, now be able to effect deliveries more efficiently - knowing the exact delivery point, as opposed to having to search the final delivery radius manually as s/he had to do previously.
It will be appreciated that the invention as described herein could greatly reduce the number of delivery failures, as well as greatly speed up the final stage of the logistics/delivery process, because the delivery agent, or user seeking to find a specific location, has more data, that is to say, street-level photography and/or user imagery, to help locate the exact delivery point down to the front door level. Other benefits and advantages of the invention will be apparent to the skilled reader, such as data rationalisation and centralisation, improvement in data protection, reduction of data duplication, and reduction of system overhead, as well as physical benefits, such as reduced fuel consumption by delivery operatives through less wasted mileage when attempting to find particular delivery locations and/or fewer repeated/failed deliveries.
Finally, referring to Figure 9 of the drawings, a variation of the scheme described above in relation to Figure 8 is shown. In this embodiment of the invention, all of the client devices, namely the third-party service provider's webserver 102, the terminal computer 14, the logistics provider 110 and the delivery agent's device 114, communicate with the system backend 12, via, for example, the internet 28 and the API layer 100.
In this embodiment of the invention, the third-party provider's webserver 102 serves content to its customers, which is displayed on the display area 138 of the terminal device 14. Here, the customer's shopping cart 104 and payment input field 106 are provided by the third-party webserver 102 in the usual way. However, the delivery address field 38 is provided by the system backend 12 of the invention. The API layer 100 enables the third-party webserver to export a particular customer's delivery address data, which is then searched by the system backend, in the manner previously described, to yield a predicted delivery location, which is then sent, via the API, and used to populate the location input field 40 on the terminal device 14. The user of the terminal device 14 can interact with the system 10 as previously described to precisely pinpoint a particular location, and the committed location (that is to say, the location submitted when button 54 is clicked) is passed back to the system backend 12 via the API layer 100. At this point, the third-party webserver 102 can capture, if required, a copy of that data, or not as the case may be.
Meanwhile, the logistics provider 110 can query the system backend 12 via the API layer 100 for a delivery corresponding to a particular consignment. The user-inputted data, for example the specified delivery location in field 40 and/or the imagery either uploaded to or otherwise stored in the third display area 50, can be retrieved from the system backend, via the API, and then passed on to the delivery agent's terminal device 114.
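By way of illustration only, the logistics provider's query to the API layer might take the following form; the endpoint, host and field names are invented for the example.

```typescript
// Hypothetical sketch of the logistics provider's side of Figure 9: retrieving the
// committed delivery location and pinpoint imagery for a consignment through the
// API layer. Endpoint, host and field names are invented for illustration.

interface ConsignmentDelivery {
  consignmentId: string;
  deliveryAddress: string;     // the text committed in the location input field 40
  pinpointImageUrls: string[]; // images stored from the third display area 50
}

async function fetchConsignmentDelivery(apiKey: string, consignmentId: string): Promise<ConsignmentDelivery> {
  const res = await fetch(`https://backend.example.com/api/v1/consignments/${encodeURIComponent(consignmentId)}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`consignment lookup failed: ${res.status}`);
  return (await res.json()) as ConsignmentDelivery;
}

// The result can then be pushed to the delivery agent's terminal device 114.
```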
One of the advantages of the system shown in Figure 9, vs that shown in Figure 8, is the centralisation of data, namely using the system backend 12 to handle all of the address/location data for all of the parties concerned. This particular configuration avoids the need for the third-party webserver 102, the logistics provider 110, the user terminal 14, or indeed the delivery agent's terminal 114 to store or process any location data at all. A further possible advantage relates to data protection: the user-imported imagery, as shown in the third display area 50, may comprise personally identifiable information, such as car number plates, people in the image, etc. That being the case, because all of the user-inputted data is stored in the system backend 12, rather than on the third-party webserver 102 or the logistics provider server 110, the problem of data storage and retention is handled essentially by the system backend. Thus, in the event that any component of the system 10 other than the system backend is hacked, the location data, and any potentially personally-identifiable data, is inaccessible via an attack on the third-party webserver 102, the user's terminal device 14, the logistics provider's system 110 or, indeed, the delivery agent's terminal 114.
Ensuring that all of the components of the system 10 hold only transient copies of the data, which is stored securely on the system backend 12, increases the security of the system considerably, avoids unnecessary data duplication, and shifts all of the system overhead, that is to say the requirement for data processing, data storage and analysis, onto the system backend, rather than onto any of the terminal servers or devices.
Various features or aspects of the invention have been described herein by way of certain exemplary embodiments. Nothing in this disclosure prohibits one or more features/functions described in relation to one exemplary embodiment hereinabove from being implemented in or incorporated into another exemplary embodiment. For example, the features of the embodiment shown in Figures 1 to 7 of the drawings could be incorporated into the embodiment shown in Figure 8, for example, or vice-versa.
Claims (38)
1. A location pinpointing system comprising a system backend and a user terminal;
the system backend comprising:
a backend processor;
a backend data communications interface operatively connected to the user terminal;
a cartography database containing maps and metadata corresponding to the maps;
a street-level photography database containing photographic imagery of locations at street-level and metadata corresponding to the photographic imagery;
a location database containing location data; and a correlation database linking:
location data points in the location database to corresponding metadata points in the cartography database; and location data points in the location database to corresponding metadata points in the street-level photography database;
the user terminal comprising:
a terminal processor;
a terminal data communications interface operatively connected to the system backend;
a human interface device (HUD); and means for displaying a graphical user interface (GUI), the GUI comprising:
one or more location input fields;
a first input button;
a first image area for displaying an excerpt of a map from the cartography database; and a second image area for displaying a street-level photograph from the street-level photography database, the location pinpointing system being configured, in use, such that when a user of the user terminal, using the HUD, inputs a location into at least one of the location input fields and presses the first input button, the inputted location is transmitted from the user terminal to the system backend, whereupon the backend processor:
parses the received user-inputted location against entries in the location database to identify a best match location in the location database that corresponds substantially to the user-inputted location;
correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the cartography database and renders a cropped image of a map from the cartography database, the cropped image of the map being substantially centred on the identified best match location;
correlates the identified best match location, using the correlation database, to a corresponding matched metadata point in the street-level photography database and where a corresponding matched metadata point in the street-level photography database is available: renders a cropped image of a photographic image from the street-level photography database, the cropped image of the photographic image being substantially centred on the identified best match location, or where a corresponding matched metadata point in the street-level photography database is not available: renders a code;
transmits, via the backend data communications interface, to the user terminal the cropped image of the map from the cartography database and either the cropped image of the photographic image from the street-level photography database or the code, and wherein upon receipt, via the terminal data communications interface, of the cropped image of the map from the cartography database and either the cropped image of the photographic image from the street-level photography database or the code, the terminal processor updates the GUI to display:
at least a portion of the cropped image of the map from the cartography database in the first image area; and either at least a portion of the cropped image of the photographic image from the street-level photography database in the second image area; or an image or message generated by the terminal processor based on the code in the second image area.
2. The system of claim 1, wherein the GUI displays in the first image area, overlaid on the cropped image of the map, a marker corresponding to the best match location.
3. The system of claim 1 or claim 2, wherein the GUI displays in the second image area, overlaid on the cropped photographic image, a marker corresponding to the best match location.
4. The system of any preceding claim, wherein the HUD of the user terminal permits the cropped image of the map from the cartography database in the first image area to be dragged.
5. The system of claim 4, wherein, upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to automatically fill and display regions initially lying outside the first image area with portions of the cropped image of the map from the cartography database that were not initially displayed.
6. The system of any preceding claim, wherein the cropped image of a map from the cartography database that is transmitted from the system backend to the user terminal further comprises metadata corresponding to the cropped portion of the map.
7. The system of claim 6, when dependent on claim 4 or claim 5, wherein upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to display, at a position substantially centred on the first image area, a temporary marker overlaid on the first image area.
8. The system of claim 7, wherein upon dragging the cropped image of the map from the cartography database in the first image area, the terminal processor is adapted to identify a new location, the new location corresponding to the position of the temporary marker relative to the metadata corresponding to the cropped portion of the map, to update the location input field to contain the new location, to automatically transmit the new location to the system backend and to force a re-query.
9. The system of any preceding claim, wherein the GUI permits a user, using the HUD, to overlay a user-inputted marker on the second display area.
10. The system of claim 9, wherein the user-inputted marker is draggable, by the user interacting with the HUD, relative to the second display area.
11. The system of any preceding claim, wherein the GUI further comprises a third display area.
12. The system of claim 11, wherein the GUI further comprises a second input button which, when pressed by a user interacting with the HUD, opens an input form in the GUI via which a user can import a user image, which user image is displayed in the third display area of the GUI.
13. The system of any preceding claim, wherein the GUI further comprises a third input button.
14. The system of claim 13, wherein when the third input button is pressed by a user interacting with the HUD, an image of the second display area is captured and displayed in the third display area.
15. The system of claim 13 or claim 14, when dependent on claim 9 or claim 10, wherein when the third input button is pressed by a user interacting with the HUD, an image of the second display area incorporating the overlaid user-inputted marker is captured and displayed in the third display area.
16. The system of any preceding claim, wherein the GUI further comprises a fourth input button.
17. The system of claim 16, wherein when the fourth input button is pressed by a user interacting with the HUD, the location currently entered in the location input field is stored in a memory of the user terminal.
18. The system of claim 16 or claim 17, wherein when the fourth input button is pressed by a user interacting with the HUD, all images currently displayed in the third display area are stored in a memory of the user terminal.
19. The system of claim 16, 17 or 18, wherein when the fourth input button is pressed by a user interacting with the HUD, the terminal processor is adapted to transmit, via the terminal data communications interface, the location currently entered in the location input field to the system backend.
20. The system of any of claims 16 to 19, wherein when the fourth input button is pressed by a user interacting with the HUD, the terminal processor is adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to the system backend.
21. The system of claim 19 or claim 20, wherein upon receipt of the location and/or the images, the system backend is adapted to retransmit the location and/or the images to an external server.
22. The system of any of claims 16 to 21, wherein when the fourth input button is pressed by a user interacting with the HUD, the terminal processor is adapted to transmit, via the terminal data communications interface, all images currently displayed in the third display area to an external server.
23. The system of any preceding claim, wherein the system backend comprises one or more servers.
24. The system of any preceding claim, wherein the cartography database, the street-level photography database, the location database and the correlation database are located on physically separate, but operatively-interconnected servers.
25. The system of any preceding claim, wherein the metadata points corresponding to the maps of the cartography database comprise any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
26. The system of any preceding claim, wherein the metadata points corresponding to the photographic imagery contained in the street-level photography database comprise any one or more of the group comprising: street names, post code boundaries, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, and landmark names.
27. The system of any preceding claim, wherein the photographic imagery contained in the street-level photography database comprises any one or more of the group comprising: photographs of buildings, photographs of streets, photographs of landmarks, photographs of points of interest, and photographs of geographic features.
28. The system of any preceding claim, wherein the location data contained in the location database comprises any one or more of the group comprising: street names, post codes, a post code-building number lookup table, a post code-building name lookup table, a street-building number lookup table, a street-building name lookup table, building names, building numbers, town names, county names, country names, named points of interest, road intersection numbers, road intersection names, landmark names, and nicknames for any of the foregoing.
29. The system of any preceding claim, wherein the correlation database comprises a matched table of metadata values from any one of the databases, where there is an identical or similar corresponding metadata value in any one of the other databases (an illustrative data-structure sketch follows the claims).
30. The system of any preceding claim, wherein the system backend and terminal data communications interfaces are operatively interconnected via the internet.
31. The system of any preceding claim, wherein the human interface device comprises any one or more of the group comprising: a pointing device, a computer mouse, a computer touchpad, a trackpad, a keyboard, a virtual keyboard displayed on a display screen, and a touch screen.
32. The system of any preceding claim, wherein the means for displaying the graphical user interface comprises a screen.
33. The system of any preceding claim, wherein the location input field comprises a drop-down box linked to the location database.
34. The system of any preceding claim, wherein the location input field comprises a lookup interface linked to the location database.
35. The system of any preceding claim, wherein the location input field comprises a text input field.
36. The system of any preceding claim, wherein the terminal processor or backend processor is adapted to parse text inputted into the text input field and, using heuristics, to identify a best match between the inputted text and a value in the location database (an illustrative matching sketch follows the claims).
37. The system of any preceding claim, further comprising an application programming interface interposed between the system backend and the user terminal.
38. The system of claim 37, wherein the application programming interface provides an interface for any one or more of the group comprising: a third-party service provider; an end user; a logistics provider; and a delivery agent's terminal (an illustrative sketch of such an interface follows the claims).
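By way of illustration only, the store-and-transmit behaviour recited in claims 17 to 22 (pressing the fourth input button stores the entered location and the displayed images and forwards them to the system backend) might be sketched in Python as follows. The backend URL, the payload field names and the UserTerminal class are assumptions made for the sketch and are not disclosed in the application.

```python
import json
import urllib.request

BACKEND_URL = "https://backend.example.com/locations"  # assumed endpoint, for illustration only


class UserTerminal:
    """Minimal sketch of the user-terminal behaviour of claims 17 to 22."""

    def __init__(self) -> None:
        self.saved_locations: list[str] = []   # stands in for the terminal memory (claim 17)
        self.saved_images: list[bytes] = []    # stands in for the terminal memory (claim 18)

    def on_fourth_button(self, entered_location: str, displayed_images: list[bytes]) -> None:
        # Claims 17 and 18: store the currently entered location and currently displayed images.
        self.saved_locations.append(entered_location)
        self.saved_images.extend(displayed_images)

        # Claims 19 and 20: transmit the same data to the system backend, which may in turn
        # retransmit it to an external server (claim 21).
        payload = {
            "location": entered_location,
            "images": [img.hex() for img in displayed_images],  # hex-encode image bytes for JSON
        }
        request = urllib.request.Request(
            BACKEND_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            response.read()
```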
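The lookup tables of claim 28 and the matched metadata table of claim 29 can be pictured with the following illustrative data structures; the table names and sample values are assumptions rather than content disclosed in the application.

```python
# Claim 28: the location database may contain simple lookup tables,
# e.g. post code -> building numbers (sample values are invented).
postcode_to_building_numbers = {
    "AB1 2CD": ["1", "3", "5", "7"],
    "EF3 4GH": ["10", "12"],
}

# Claim 29: the correlation database matches metadata values that are identical or
# similar across the cartography, street-level photography and location databases.
correlation_table = [
    # (cartography value, street-level photography value, location database value)
    ("High Street", "High St", "High Street"),
    ("Town Hall", "The Town Hall", "Town Hall, 1 High Street"),
]


def correlated_values(value: str) -> list[tuple[str, str, str]]:
    """Return the correlation rows in which any column matches the given metadata value."""
    needle = value.casefold()
    return [row for row in correlation_table if any(needle == cell.casefold() for cell in row)]
```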
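Claim 36 does not specify which heuristics are used; the following sketch uses Python's difflib as one possible way of finding a best match between free text typed into the location input field and values held in the location database. The sample values and the 0.4 cutoff are assumptions.

```python
import difflib

# Invented sample of values held in the location database.
location_database_values = [
    "1 High Street, Exampletown",
    "Town Hall, Exampletown",
    "Example Business Park, Unit 4",
]


def best_location_match(user_text: str) -> str | None:
    """Parse the inputted text and return the closest location database value, if any."""
    cleaned = " ".join(user_text.split()).casefold()  # trivial parse: normalise whitespace and case
    candidates = {value.casefold(): value for value in location_database_values}
    matches = difflib.get_close_matches(cleaned, list(candidates), n=1, cutoff=0.4)
    return candidates[matches[0]] if matches else None


print(best_location_match("town hall exampletown"))  # "Town Hall, Exampletown"
```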
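Claims 37 and 38 interpose an application programming interface between the system backend and the user terminal or third parties. A minimal sketch of such an interface is given below; the PinpointApi and PinpointedLocation names, the method signatures and the backend methods they call are all assumptions made for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class PinpointedLocation:
    address: str
    images: list[bytes] = field(default_factory=list)


class PinpointApi:
    """Interface that a third-party service provider, end user, logistics provider or
    delivery agent's terminal might call (claim 38)."""

    def __init__(self, backend) -> None:
        self._backend = backend  # the system backend of the preceding claims

    def submit_location(self, address: str, images: list[bytes]) -> None:
        """Forward a pinpointed location (and any images) to the system backend."""
        self._backend.store(PinpointedLocation(address, list(images)))

    def lookup_location(self, address: str) -> PinpointedLocation | None:
        """Ask the system backend for a previously stored pinpointed location."""
        return self._backend.find(address)
```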
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1714786.9A GB2568858A (en) | 2017-09-14 | 2017-09-14 | Location pinpointing |
PCT/GB2018/052611 WO2019053441A1 (en) | 2017-09-14 | 2018-09-13 | Location pinpointing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1714786.9A GB2568858A (en) | 2017-09-14 | 2017-09-14 | Location pinpointing |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201714786D0 GB201714786D0 (en) | 2017-11-01 |
GB2568858A true GB2568858A (en) | 2019-06-05 |
Family
ID=60159361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1714786.9A Withdrawn GB2568858A (en) | 2017-09-14 | 2017-09-14 | Location pinpointing |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2568858A (en) |
WO (1) | WO2019053441A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110046307A (en) * | 2019-04-11 | 2019-07-23 | 秦德玉 | A promotion method for understanding targets in real time and generating activities based on location |
US11853981B2 (en) * | 2021-09-02 | 2023-12-26 | Shopify Inc. | Systems and methods for e-commerce checkout with delay loading of checkout options |
2017
- 2017-09-14: GB application GB1714786.9A (GB2568858A), status: not active (withdrawn)
2018
- 2018-09-13: WO application PCT/GB2018/052611 (WO2019053441A1), status: active (application filing)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060271287A1 (en) * | 2004-03-24 | 2006-11-30 | Gold Jonathan A | Displaying images in a network or visual mapping system |
US20070273758A1 (en) * | 2004-06-16 | 2007-11-29 | Felipe Mendoza | Method and apparatus for accessing multi-dimensional mapping and information |
US20150170615A1 (en) * | 2012-11-20 | 2015-06-18 | Google Inc. | System and Method for Displaying Geographic Imagery |
Also Published As
Publication number | Publication date |
---|---|
WO2019053441A1 (en) | 2019-03-21 |
GB201714786D0 (en) | 2017-11-01 |
Similar Documents
Publication | Title |
---|---|
US20150206218A1 (en) | Augmented Reality Based Mobile App for Home Buyers |
US9451050B2 (en) | Domain name spinning from geographic location data |
CN101755282B (en) | Method and system for providing inter-area communication of map |
US8489746B2 (en) | Systems for suggesting domain names from a geographic location data |
US8265871B1 (en) | Mobile record information entry and geotagging |
US20150062114A1 (en) | Displaying textual information related to geolocated images |
US20080172244A1 (en) | Systems And Methods For Displaying Current Prices, Including Hotel Room Rental Rates, With Markers Simultaneously On A Map |
JP7032277B2 (en) | Systems and methods for disambiguating item selection |
US20140218400A1 (en) | Method for Providing Real Estate Data on an Interactive Map |
US20240289842A1 (en) | Informational and advertiser links for use in web mapping services |
CN101772780A (en) | Inter-domain communication |
CN110442813B (en) | Travel commemorative information processing system and method based on AR |
US20120272172A1 (en) | Geographic domain name suggestion tools |
US20190095536A1 (en) | Method and device for content recommendation and computer readable storage medium |
JP2018049624A (en) | Method and system for remote management of location-based spatial objects |
US20150026012A1 (en) | Systems and methods for online presentation of storefront images |
TW201810170A (en) | A method applied for a real estate transaction information providing system |
WO2019053441A1 (en) | Location pinpointing |
KR101367769B1 (en) | Real estate appraisal system with touch screen based on networkme |
KR101928456B1 (en) | Field support system for providing electronic document |
US9311734B1 (en) | Adjusting a digital image characteristic of an object in a digital image |
TW201802759A (en) | A method applied for a real estate transaction medium system |
US10191896B2 (en) | Populating user data |
KR101608244B1 (en) | Method and system for providing extension data according to request of user using wire or wireless network |
US20150087339A1 (en) | Wireless Location Information Request |
Legal Events
Code | Title |
---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |