US20200234062A1 - Persistent vehicle location service using ground truth image rendering instead of GPS - Google Patents

Persistent vehicle location service using ground truth image rendering instead of GPS

Info

Publication number
US20200234062A1
Authority
US
United States
Prior art keywords
rendering
determining
vehicle
location
image
Prior art date
Legal status
Abandoned
Application number
US16/712,883
Inventor
Aaron Matthew Rogan
Current Assignee
Uber Technologies Inc
Original Assignee
Uber Technologies Inc
Priority date
Filing date
Publication date
Application filed by Uber Technologies Inc filed Critical Uber Technologies Inc
Priority to US16/712,883
Assigned to Uber Technologies, Inc. (Assignor: Rogan, Aaron Matthew)
Publication of US20200234062A1
Status: Abandoned


Classifications

    • G06K9/00791
    • G06K9/6201; G06K9/6215
    • G06Q10/02 Reservations, e.g. for tickets, services or events
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G06Q30/0209 Incentive being awarded or redeemed in connection with the playing of a video game
    • G06Q30/0235 Discounts or incentives constrained by time limit or expiration date
    • G06Q50/40
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/3438 Rendez-vous, i.e. searching a destination where several users can meet; Ride sharing, i.e. searching a route such that at least two users can share a vehicle for at least part of the route
    • G01C21/3602 Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • G01C21/3644 Landmark guidance, e.g. using POIs or conspicuous other objects
    • G01C21/3647 Guidance involving output of stored or live camera images or video streams
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations using electromagnetic waves other than radio waves
    • G06F16/5866 Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/587 Retrieval characterised by using metadata, using geographical or spatial information, e.g. location
    • G06F16/7837 Retrieval of video data using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/24 Indexing scheme for image data processing involving graphical user interfaces [GUIs]
    • G06T2219/024 Multi-user, collaborative environment
    • G06V20/20 Scene-specific elements in augmented reality scenes
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H04B1/3827 Portable transceivers
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • H04L67/52 Network services specially adapted for the location of the user terminal
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H04W4/024 Guidance services
    • H04W4/029 Location-based management or tracking services
    • H04W4/185 Information format or content conversion by embedding added-value information into content, e.g. geo-tagging
    • H04W4/42 Services for vehicles, for mass transport vehicles, e.g. buses, trains or aircraft
    • H04W4/44 Services for vehicles, for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Definitions

  • the present disclosure relates to location determination with limited or no reliance on global positioning system (GPS) signals, and in particular to determining a location estimate for a vehicle based on imagery captured by a mobile device mounted within a vehicle.
  • a person carrying a client device may wish to arrange for transportation from his or her present location to another location, and may execute an application (e.g., a transportation service and/or ridesharing application) on the client device to obtain transportation.
  • GPS traces are not always accurate. For example, in areas subject to GPS interference or reception problems, such as an urban canyon with tall buildings that distort satellite signals, the GPS traces of the driver's client device may be inaccurate. This causes a practical inconvenience: a driver who is sub-optimally located may be matched to a rider, when a look at the true coordinates of candidate drivers would have produced a match to a different driver. Moreover, this may cause frustration in the rider, as the rider may be viewing an indicated position of a driver that does not match the driver's true location.
  • a service (e.g., one that connects a rider with a driver in the context of a ridesharing application) initializes a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle.
  • the service initializes the vehicle's location using a GPS sensor of a client device of the driver that is executing the application.
  • the service is implemented within the client device.
  • the service receives, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session (e.g., an image captured automatically by the driver's client device after the vehicle had traveled for ten seconds, or had traveled for one hundred meters past the point where the initial GPS trace was determined).
  • the service determines a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle (e.g., a vicinity within which the vehicle is likely to be).
  • the service compares the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area.
  • some or all data of the database is stored at the client device. By limiting the comparison to entries corresponding to locations within the geographical area, processing is efficiently performed, as only a small subset of entries that are likely to correspond to the vehicle's current location are referenced.
  • the service determines from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings, and when a match is determined, the service determines that the location of the vehicle at the time is the respective associated location included in the respective entry.
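  • A minimal sketch of this match-and-locate step, in Python; the Entry record, the match_fn scoring callback, and the threshold value are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Entry:
    """One database entry: a stored rendering and its known location."""
    rendering: bytes
    lat: float
    lon: float

def locate_vehicle(received: bytes,
                   candidates: list[Entry],
                   match_fn: Callable[[bytes, bytes], float],
                   threshold: float = 0.8) -> Optional[tuple[float, float]]:
    """Compare a received rendering against entries whose associated
    locations fall within the determined geographical area; return the
    matched entry's location, or None when no entry matches."""
    best, best_score = None, 0.0
    for entry in candidates:
        score = match_fn(received, entry.rendering)
        if score > best_score:
            best, best_score = entry, score
    if best is not None and best_score >= threshold:
        return (best.lat, best.lon)
    return None  # no match: fall back to sensors or flag an unknown region
```

  • Restricting candidates to the geographical area is what keeps this loop small; later sketches show ways that subset can be chosen.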
  • FIG. 1 is a block diagram illustrating a location estimation system according to one embodiment.
  • System 100 includes vehicle 101 , which includes or is carrying client device 110 .
  • the functionality of client device 110 is described in further detail with respect to FIG. 6 below.
  • client device 110 is integrated into vehicle 101 as a component of vehicle 101.
  • Client device 110 executes an application, such as a transportation service and/or ridesharing application where a rider may request a ride from the rider's current location to a desired destination, and where the rider may be connected to a driver who also uses the ridesharing application, where the driver will provide the ride.
  • an application such as a transportation service and/or ridesharing application where a rider may request a ride from the rider's current location to a desired destination, and where the rider may be connected to a driver who also uses the ridesharing application, where the driver will provide the ride.
  • a map may be viewed by the driver or the rider via the application (e.g., on a client device of the driver or rider) where an indicator of the driver's position is displayed.
  • client device 110 is mounted on a dashboard of vehicle 101 and has a forward-facing camera that faces the road. While this exemplary embodiment is referred to throughout, in some embodiments, the application instead commands images to be captured from a stand-alone camera (e.g., embedded in a device that is affixed to a dashboard or windshield of vehicle 101 ).
  • client device 110 automatically captures one or more images based on commands received from the application. For example, client device 110 captures an image upon a driver of vehicle 101 accepting a ride request from a rider, or upon a certain condition being satisfied (e.g., a certain distance has been traveled, or a certain amount of time has passed, from a reference point). Times at which images are automatically captured by client device 110 will be described in further detail below with reference to FIGS. 3-5 . Automatic capturing of one or more images may be an opt-in feature, where the application by default does not automatically capture images, and where the application has a setting that, if selected by a user of client device 110 (e.g., a driver of vehicle 101 ), enables the application to automatically capture the images. While accurate pinpointing of a driver's location using the systems and methods described herein may rely on opting in, the location of the driver may be determined based on GPS traces of client device 110 (even if inaccurate) should a driver of vehicle 101 not opt in.
  • client device 110 transmits the image(s) to location determination service 130 over network 120 , where location determination service 130 receives the image(s) and compares them to known images, stored at image rendering database 132 , to determine the location of client device 110 .
  • in some embodiments, location determination service 130 and/or image rendering database 132 is located within client device 110, and thus need not be accessed over network 120 as depicted.
  • Functionality of location determination service 130 may be integrated as a module of the application (e.g., the ridesharing application).
  • Image rendering database 132 may be accessed by location determination service 130 directly, or over network 120 .
  • Location determination service 130 may be a module of an application, such as a ridesharing application, or may be a component of a transportation service generally, such as a ridesharing service. In some embodiments where location determination service 130 is a module of the application, some or all of the contents of image rendering database 132 are transmitted to the client device for performing localization at the client device. The functionality of location determination service 130 will be described in further detail below with respect to FIGS. 2-5 .
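  • A hypothetical client-side sketch of this capture-and-transmit flow over network 120; the endpoint URL, payload fields, and JSON wire format are invented for illustration, as the patent does not specify a transport format:

```python
import base64
import json
from urllib.request import Request, urlopen

def send_rendering(image_bytes: bytes, session_id: str,
                   service_url: str = "https://example.invalid/locate"):
    """POST a rendering of a captured image to the location
    determination service and return its decoded location estimate."""
    payload = json.dumps({
        "session_id": session_id,
        "rendering": base64.b64encode(image_bytes).decode("ascii"),
    }).encode("utf-8")
    request = Request(service_url, data=payload,
                      headers={"Content-Type": "application/json"})
    with urlopen(request) as response:
        return json.load(response)
```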
  • FIG. 2 is an illustration of GPS traces in a region where GPS signals are inaccurate according to one embodiment.
  • Region 200 includes GPS traces 202 of a client device (e.g., client device 110 ) as derived from a GPS sensor of client device 110 .
  • the GPS traces 202 were derived from client device 110 while vehicle 101 was on a road. Because of the existence of tall buildings within region 200 , the GPS signals used to derive the GPS traces are distorted and provide inaccurate GPS traces. This is evidenced by the GPS traces being at locations that are not on a road.
  • Region 200 is exemplary of a location known to location determination service 130 to have or cause erroneous GPS data.
  • the identification of various regions, like region 200, which are associated with erroneous GPS data may be performed automatically by location determination service 130, or may be made based on manual feedback (e.g., performed in advance of executing process 500).
  • location determination service 130 may detect that users of a ridesharing application in a given location set a pickup pin at a location different from their GPS traces at a frequency that exceeds an implementer-defined threshold, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous.
  • location determination service 130 may detect that GPS traces of users (e.g., drivers) of a ridesharing application are, at a frequency above a threshold, in areas inaccessible to drivers, such as within buildings or parks that do not have road access, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous. As yet another example, location determination service 130 may receive feedback from users that their client devices are determining erroneous locations based on GPS sensors of those client devices, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous.
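  • The pickup-pin heuristic above might look like the following sketch; the 50-meter offset and 30% frequency threshold are illustrative, implementer-defined values:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def region_gps_suspect(events, offset_m=50.0, min_frequency=0.3):
    """events: (pin_lat, pin_lon, trace_lat, trace_lon) tuples observed
    within one region. Flag the region as having erroneous GPS when
    pickup pins are set far from the GPS trace more often than an
    implementer-defined frequency threshold."""
    if not events:
        return False
    mismatches = sum(
        1 for pin_lat, pin_lon, trace_lat, trace_lon in events
        if haversine_m(pin_lat, pin_lon, trace_lat, trace_lon) > offset_m
    )
    return mismatches / len(events) > min_frequency
```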
  • FIG. 3 is an illustration of a manner in which a vehicle location is initialized and then updated, according to one embodiment.
  • Environment 300 includes vehicle 301, which begins at position 398 and is subsequently at position 399.
  • Vehicle 301 is an example of vehicle 101 described in FIG. 1.
  • Position 398 indicates the beginning of a session.
  • the beginning of a session refers to a time (referenced in FIG. 3 as time "T1") at which a location of vehicle 301 is initialized based on data acquired from or by client device 110, which is inside vehicle 301.
  • the application may detect the beginning of a session at the occurrence of any predefined point in time.
  • a non-exhaustive and illustrative set of examples of when a session begins includes the launch of the application, the application detecting that the vehicle is moving (e.g., based on an accelerometer of client device 110 ), the application detecting that the driver has accepted a ride request, and the like.
  • the application retrieves GPS data acquired using a GPS sensor of client device 110 , and determines an initial location of vehicle 301 based on the GPS data. Taking at least an initial GPS reading (even in a region like region 200 ), before determining location based on image renderings, enables location determination service 130 to determine the location of vehicle 301 by referencing far fewer entries of image rendering database 132 than would be necessary without knowledge of a general vicinity within which vehicle 301 is located.
  • location determination service 130 is able to determine a vicinity within which position 399 is contained, and thus efficiently reference only entries within image rendering database 132 that correspond to that vicinity.
  • Position 399 represents a position of vehicle 301 at a time subsequent to the beginning of the session (referenced in FIG. 3 as time "T2").
  • the application determines that vehicle 301 has reached position 399 upon the occurrence of a condition (or any condition of a set of predefined conditions).
  • the condition may be detecting that a predefined amount of time has passed (e.g., five or ten seconds since time T1).
  • the condition may be detecting that vehicle 301 has traveled a predefined distance, such as distance 315 , since the beginning of the session.
  • the application may detect that vehicle 301 has traveled the predefined distance based on data from one or more of an accelerometer, a GPS sensor, or other sensors of client device 110 .
  • the condition may be detecting that vehicle 301 has entered a region where GPS signals are known to be inaccurate, such as region 200 .
  • the application commands client device 110 to capture an image.
  • the captured image will be used by location determination service 130 to determine the location of vehicle 301 at position 399 without further use of a GPS sensor of client device 110.
  • the application causes client device 110 to transmit to location determination service 130 a rendering of the captured image.
  • Location determination service 130 identifies a subset of entries of image rendering database 132 that correspond to a location determined from the initial GPS reading at position 398 (e.g., including an offset corresponding to direction and distance detected using sensors of client device 110 between times T1 and T2). Location determination service 130 then compares the rendering to renderings of each entry of the subset, and in response to finding a matching rendering, determines the location of vehicle 301 at position 399 to be a location indicated in the entry that includes the matching rendering.
  • While FIG. 3 indicates only two positions, following determining the location of vehicle 301 at position 399, a similar process may be used by location determination service 130 to determine the position of vehicle 301 at subsequent positions. For example, each subsequent time the application detects a condition, the application may cause client device 110 to capture another image and transmit that image to location determination service 130, which may isolate a subset of entries based on the last known location of vehicle 301 (e.g., the location determined for position 399), and use that subset to find a match for the newly captured rendering.
  • the subset of entries may be determined by adding an offset corresponding to distance and direction detected using sensors of client device 110 between time T2 and a subsequent time, similar to the initial determination described in the prior paragraph. In this manner, following initialization of the location of vehicle 301, the GPS sensor of client device 110 need not be used again to locate the position of vehicle 301 for the remainder of the session.
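  • A sketch of that dead-reckoning offset, using the standard destination-point formula on a spherical Earth; the distance and heading inputs are assumed to come from the accelerometer and directional sensors of client device 110:

```python
from math import asin, atan2, cos, degrees, radians, sin

def offset_position(lat, lon, distance_m, heading_deg):
    """Dead-reckon a new lat/lon from a last known fix, a distance
    traveled, and a compass heading; the result centers the next
    lookup into the image rendering database."""
    r = 6371000.0  # mean Earth radius in meters
    d, bearing = distance_m / r, radians(heading_deg)
    lat1, lon1 = radians(lat), radians(lon)
    lat2 = asin(sin(lat1) * cos(d) + cos(lat1) * sin(d) * cos(bearing))
    lon2 = lon1 + atan2(sin(bearing) * sin(d) * cos(lat1),
                        cos(d) - sin(lat1) * sin(lat2))
    return degrees(lat2), degrees(lon2)
```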
  • position 398 may be determined (and thus the initial location of vehicle 301 at the start of a session may be determined) based on a captured image, instead of GPS data.
  • location determination service 130 may receive a captured image (in the manner described above) and search image rendering database 132 for a matching image.
  • location determination service 130 may compare the rendering to all entries of image rendering database 132 .
  • location determination service 130 may, during the initialization, identify a subset of entries for comparison based on last-known location during a prior session (rather than GPS data) to preserve computational resources.
  • FIG. 4 is an illustration of a manner in which to identify locations for which renderings do not exist in an image rendering database, according to one embodiment.
  • in an environment where image localization is being used to determine driver location (as opposed to GPS sensor data from client device 110), such as environment 300, location determination service 130 may be unable to identify a matching rendering in any entry investigated in image rendering database 132.
  • Environment 400 includes route 450 .
  • the shaded portions of route 450 depict locations at which image localization was successful (in the manner described above with reference to FIG. 3 , and which will be further detailed below with respect to FIG. 5 ).
  • a vehicle traveling along route 450 from point A to point B may reach location 460, where location determination service 130 may fail to find, in image rendering database 132, a rendering that matches an image captured at location 460 by client device 110 within the vehicle.
  • location determination service 130 may continue to attempt to determine the location of the vehicle, and, based on an image captured at location 470 , location determination service 130 may find a matching rendering and successfully localize the vehicle and may continue to do so for the remainder of route 450 based on further matching renderings.
  • location determination service 130 may use GPS sensor data of client device 110 to find a subset of entries of image rendering database 132 that correspond to location 470.
  • location determination service 130 may use sensor data (e.g., a directional sensor in combination with an accelerometer) to determine distance and direction traveled from location 460 in order to find a subset of locations to which location 470 is likely to correspond. These embodiments may be combined (e.g., by first using sensor data in combination with location 460 to identify a subset, and if no rendering matches, going on to use GPS data).
  • location determination service 130 may anchor an unknown region, such as the region of route 450 including question marks, with a last known location before the unknown region was entered (e.g., location 460 ), and with a last known location after the unknown region was exited (e.g., location 470 ).
  • Location determination service 130 may alert an administrator of the unknown region's existence, and the anchors. The administrator may then take action to update image rendering database 132 with renderings corresponding to the unknown region. For example, the administrator may send employees to capture images of the unknown region, or may incentivize riders to take pictures of portions of the unknown region.
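  • A small sketch of how such anchors might be derived from an ordered sequence of per-image localization outcomes; the data shapes are assumptions for illustration:

```python
def unknown_segments(results):
    """results: ordered per-image localization outcomes along a route,
    each a (lat, lon) tuple or None when no rendering matched.
    Yields (anchor_before, anchor_after) pairs bounding each gap,
    suitable for alerting an administrator to unmapped stretches."""
    last_known, in_gap = None, False
    for fix in results:
        if fix is None:
            in_gap = True
            continue
        if in_gap and last_known is not None:
            yield (last_known, fix)  # gap anchored on both ends
        last_known, in_gap = fix, False
```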
  • FIG. 5 is an illustrative flowchart of a process for estimating vehicle location based on image renderings, in accordance with some embodiments of the disclosure.
  • Process 500 begins by location determination service 130 initializing 502 a location of a vehicle at a start of a session based on GPS data of a client device within the vehicle. The initialization process is described above with respect to FIG. 3, and those details apply fully hereto.
  • the location determination service receives 504 , from the client device (e.g., client device 110 within vehicle 101 ), a rendering of an image that was captured by the client device at a time subsequent to the start of the session.
  • the rendering may be an image itself, or a transformation of the image.
  • the client device may generate the rendering, or a module of location determination service 130 may generate the rendering upon receiving the image.
  • client device 110 or location determination service 130 may generate a three-dimensional model of the captured image, and may register the three-dimensional model to three-dimensional content stored at image rendering database 132 .
  • the rendering is generated as part of a localization process (e.g., 2D-3D or 3D-3D localization).
  • client device 110 or location determination service 130 extracts 2D image features, e.g., using the scale-invariant feature transform ("SIFT"), oriented FAST and rotated BRIEF ("ORB"), speeded-up robust features ("SURF"), or the like.
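  • As one concrete possibility, OpenCV's ORB implementation can extract and match such 2D features; the match-count heuristic and Hamming-distance cutoff below are illustrative rather than the patent's method:

```python
import cv2

def count_orb_matches(query_path: str, candidate_path: str) -> int:
    """Extract ORB keypoints from a captured image and a candidate
    rendering, then count close descriptor matches; a higher count
    suggests the two views show the same place."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    if query is None or candidate is None:
        raise FileNotFoundError("could not read one of the images")
    orb = cv2.ORB_create(nfeatures=1000)
    _, query_des = orb.detectAndCompute(query, None)
    _, cand_des = orb.detectAndCompute(candidate, None)
    if query_des is None or cand_des is None:
        return 0  # no features found in at least one image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_des, cand_des)
    return sum(1 for m in matches if m.distance < 40)  # keep close matches
```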
  • location determination service 130 or client device 110 builds a three-dimensional model from the captured image using a machine learned model.
  • in some embodiments, as part of receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session, location determination service 130 retrieves a plurality of progress benchmarks. Location determination service 130 determines that, at a given time (e.g., time T2 depicted in FIG. 3), a progress benchmark of the plurality of progress benchmarks has been reached (e.g., a predetermined time lapse is equal to the time difference between times T2 and T1 depicted in FIG. 3, or distance 315 equals a predefined distance). In response to determining that the progress benchmark has been reached, location determination service 130 commands the client device to capture the image (e.g., as depicted at position 399 of FIG. 3).
  • the plurality of progress benchmarks may be any of, or a combination of, a threshold period of time from either initialization or from a last capture of an image and a threshold distance from a location where either initialization or a last capture of an image occurred.
  • a progress benchmark may also be a threshold change in direction relative to the direction in which the vehicle was traveling at either initialization or at a last capture of an image (e.g., a right-angle turn was made, which may trigger a need to capture a new image in the new direction the vehicle is facing).
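  • A compact sketch of such a progress-benchmark check; the specific threshold values are illustrative examples drawn from the ranges mentioned above:

```python
def benchmark_reached(elapsed_s: float, traveled_m: float,
                      heading_change_deg: float,
                      max_s: float = 10.0, max_m: float = 100.0,
                      max_turn_deg: float = 60.0) -> bool:
    """Decide whether to command the client device to capture a new
    image: a threshold time lapse, a threshold distance, or a threshold
    change of direction since initialization or the last capture."""
    return (elapsed_s >= max_s
            or traveled_m >= max_m
            or abs(heading_change_deg) >= max_turn_deg)
```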
  • location determination service 130 determines 506 a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle.
  • location determination service 130 determines, from the data, a distance and direction in which the vehicle has traveled since the start of the session, and determines a scope of the geographical area based on the distance and the direction. For example, as described above, location determination service 130 may determine a last known location (e.g., position 398 of FIG. 3), and may determine a distance and direction traveled using sensor data of client device 110 to determine a likely location of vehicle 101 at a time a next image was captured.
  • the geographical area may be defined to be a predefined radius (e.g., ten meters, a quarter mile, etc.) surrounding the likely location of vehicle 101 at the time the next image was captured. Further details about this are described above with respect to FIGS. 3-4 .
  • Location determination service 130 may cause a user interface of a driver or rider client device to display the likely location of the vehicle.
  • Location determination service 130 goes on to compare 508 the received rendering to entries in a database, each respective entry including a rendering and a respective associated location that is within the geographical area. For example, keypoints of the received rendering may be extracted and compared to keypoints of candidate renderings to determine whether a threshold amount of keypoints match (to be described in connection with 510 below). In some embodiments, to improve computational efficiency, the location determination service 130 compares the received rendering to the entries by extracting geolocation data from the captured rendering (e.g., data corresponding to position 399 ). For example, even if GPS data obtained by client device 110 is erroneous, it is likely to be within a threshold distance from the actual location of client device 110 .
  • Location determination service 130 determines a subset of the entries corresponding to the geolocation data. For example, location determination service 130 determines a radius of actual GPS coordinates that are within a threshold distance of a location indicated by the geolocation data. Location determination service 130 limits the comparing of the received rendering to the subset of the entries, thus ensuring a savings of processing time and power, as only entries that are within a threshold radius of a given location will be searched, as opposed to all entries of image rendering database 132 .
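  • Continuing the earlier sketches (and reusing the hypothetical Entry record and haversine_m helper defined above), the radius-limited subset might be selected as follows; the 250-meter radius is an illustrative value:

```python
def entries_within_radius(entries, center_lat, center_lon, radius_m=250.0):
    """Limit the comparison to database entries whose associated
    locations fall within a threshold radius of the likely position,
    so only a small subset of renderings is ever referenced."""
    return [entry for entry in entries
            if haversine_m(entry.lat, entry.lon,
                           center_lat, center_lon) <= radius_m]
```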
  • Location determination service 130 determines 510 whether the received rendering matches a respective rendering included in a respective entry of the database of renderings. In some embodiments, in order to perform this determination, location determination service 130 determines that the received rendering does not completely match any entry of the entries. For example, when comparing two-dimensional renderings, location determination service 130 may determine that not all keypoints of the received rendering match any candidate rendering. When comparing three-dimensional renderings, location determination service 130 may determine that the keypoints of the image do not match all keypoints of any perspective of any candidate rendering.
  • Matching can be performed coarsely (e.g., as a first part of a process) by leveraging GPS to reduce the search space (e.g., to reduce the amount of database entries to be referenced, as discussed above and below).
  • the application isolates candidate renderings (e.g., images or 3D sections of the scene to match against).
  • the application performs further filtering by using the heading direction of the query image or 3D scene coordinates to align them to the base map (e.g., a 2D or 3D model of known renderings) more readily. Additional techniques like vocabulary trees, bag-of-words models, or machine learning can be used to quickly retrieve a matching set of images or 3D content.
  • Alignment refers to aligning a captured image either to stored isolated renderings that have known corresponding locations, or to a portion of a "base map" that stitches together known renderings into a model of the world, where each portion of the base map corresponds to a different location and is built from captured images of those locations.
  • Location determination service 130 may perform 3D-3D alignment in a variety of ways. In some embodiments, location determination service 130 executes an iterative closest point (ICP) module to determine the 3D-3D alignment.
  • Location determination service 130 may seed the 3D-3D alignment using machine-learned models that generate a segmentation by semantically segmenting the 3D scene of the base map. With that segmentation, location determination service 130 may determine a coarse alignment between similar semantic structures, such as car-to-car alignments, light post-to-light post alignments, and the like. With that coarse alignment, location determination service 130 may then revert to traditional ICP to perform the final precision alignment in an accelerated fashion.
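  • For reference, a minimal point-to-point ICP iteration of the kind such a final precision alignment could use, built on NumPy and SciPy; this is a textbook sketch, not the patent's module:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source: np.ndarray, target: np.ndarray, iterations: int = 20):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest target point, then solve the best-fit rigid transform via
    SVD (the Kabsch algorithm). source and target are (N, 3) and (M, 3)
    arrays; returns rotation R and translation t mapping source onto target."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)          # nearest-neighbor correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t               # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

  • Seeding src with the coarse semantic alignment described above would simply replace the identity initialization, letting ICP converge in fewer iterations.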
  • location determination service 130 determines that a percentage of characteristics of the received rendering match characteristics of the given entry, and determines whether the percentage exceeds a threshold. In response to determining that the percentage exceeds the threshold, location determination service 130 determines that the received rendering matches the given entry based on the partial match. Likewise, in response to determining that the percentage does not exceed the threshold, location determination service 130 determines that the received rendering does not match the given entry notwithstanding the partial match.
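  • The partial-match decision then reduces to a small predicate; the 60% threshold is an illustrative assumption, not a value given in the disclosure:

```python
def is_match(matched_characteristics: int, total_characteristics: int,
             threshold: float = 0.6) -> bool:
    """Accept a partial match only when the fraction of matching
    characteristics (e.g., keypoints) exceeds the threshold."""
    if total_characteristics == 0:
        return False
    return matched_characteristics / total_characteristics > threshold
```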
  • when a match is determined, location determination service 130 determines 512 that the location of the vehicle (e.g., vehicle 101) is the location associated with the matching rendering. For example, location determination service 130 retrieves a location indicated by the entry that includes the matching rendering, and determines that the location indicated by this entry is the location of client device 110. Location determination service 130 optionally transmits the location to a rider's client device, and causes the location to be generated for display on the rider's client device.
  • in response to determining that the received rendering does not match the given entry notwithstanding the partial match, location determination service 130 transmits 514 a prompt to an administrator to add an entry corresponding to the location of the client device (e.g., in a scenario like the unknown region of route 450 in FIG. 4, as described above).
  • location determination service 130 may command image rendering database 132 to mark the rendering as stale and requiring an update.
  • the above description relates to localization in a rider/driver environment using images.
  • the techniques described herein may be used to localize anyone with a client device, such as a smart phone (e.g., as they are moving through an urban canyon where GPS is insufficient).
  • all examples disclosed herein describing identifying driver and/or rider location may apply to any user with a client device, regardless of whether they are engaged in the above-described rider/driver scenarios.
  • FIG. 6 is a block diagram that illustrates a computer system 600 for acting as a client 110 or location determination service 130 , according to one embodiment. Illustrated are at least one processor 602 coupled to a chipset 604 . Also coupled to the chipset 604 are a memory 606 , a storage device 608 , a keyboard 610 , a graphics adapter 612 , a pointing device 614 , and a network adapter 616 . A display 618 is coupled to the graphics adapter 612 . In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622 . In another embodiment, the memory 606 is coupled directly to the processor 602 instead of the chipset 604 .
  • the storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device.
  • the memory 606 holds instructions and data used by the processor 602 .
  • the pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer system 600 .
  • the graphics adapter 612 displays images and other information on the display 618 .
  • the network adapter 616 couples the computer system 600 to the network 120 .
  • a computer 600 can have different and/or other components than those shown in FIG. 6 .
  • the computer 600 can lack certain illustrated components.
  • the computer acting as the location determination service 130 can be formed of multiple blade servers linked together into one or more distributed systems and lack components such as keyboards and displays.
  • the storage device 608 can be local and/or remote from the computer 600 (such as embodied within a storage area network (SAN)).

Abstract

Systems and methods are disclosed herein for determining a location of a vehicle. The systems and methods initialize the location of a vehicle based on GPS data of a client device within the vehicle, and receive, from the client device, a rendering of an image captured subsequent to initialization. The systems and methods determine a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle, and compare the received rendering to entries that each include a rendering and a respective associated location that is within the geographical area. The systems and methods determine from the comparing whether the received rendering matches a respective rendering included in a respective entry, and if so, responsively determine that the location of the vehicle is the respective associated location included in the respective entry.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/801,010, filed Feb. 4, 2019, U.S. Provisional Application No. 62/795,988, filed Jan. 23, 2019, U.S. Provisional Application No. 62/812,101, filed Feb. 28, 2019, U.S. Provisional Application No. 62/812,098, filed Feb. 28, 2019, U.S. Provisional Application No. 62/802,145, filed Feb. 6, 2019, and U.S. Provisional Application No. 62/812,107, filed Feb. 28, 2019, all of which are incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to location determination with limited or no reliance on global positioning system (GPS) signals, and in particular to determining a location estimate for a vehicle based on imagery captured by a mobile device mounted within a vehicle.
  • BACKGROUND
  • Many systems use global positioning system (GPS) coordinates to estimate the position of vehicles, or to estimate the position of a client device, where the client device's position acts as a proxy for a location of a vehicle in which the client device is located. For example, a person (interchangeably referred to as a "rider" herein) carrying a client device may wish to arrange for transportation from his or her present location to another location, and may execute an application (e.g., a transportation service and/or ridesharing application) on his or her client device to obtain transportation. Existing systems in this scenario match the person with a driver based on an estimated location of various candidate drivers (derived from GPS traces of the drivers' client devices), and show the matched driver's location to the user using indicia on a map that matches the location of the matched driver's client device. However, GPS traces are not always accurate. For example, in areas subject to GPS interference or reception problems, such as an urban canyon with tall buildings that distort satellite signals, the GPS traces of the driver's client device may be inaccurate. This causes a practical inconvenience: a driver who is sub-optimally located may be matched to a rider, when a look at the true coordinates of candidate drivers would have produced a match to a different driver. Moreover, this may cause frustration in the rider, as the rider may be viewing an indicated position of a driver that does not match the driver's true location.
  • Existing systems seek to solve the technical problem of improving driver location estimates by determining when a GPS trace is not located on a road, and snapping the GPS trace to a nearest road. However, these snapping technologies suffer the same source of inaccuracy as the GPS traces themselves, as an urban canyon may cause the GPS trace to be far from the actual road on which the vehicle is driving. Moreover, in a location where roads exist in a high density (e.g., in a city where roads are so close as to be within a GPS sensor's margin of error), the existing systems have no tiebreaker mechanism to determine which road within the margin of error to snap the GPS trace to. The technical problem of how to derive exact driver location without the need for GPS signals where GPS signals are distorted or unavailable is not addressed by existing systems.
  • SUMMARY
  • Systems and methods are disclosed herein for determining a location of a vehicle (e.g., where effectiveness of a GPS sensor within the vehicle is limited). To this end, a service (e.g., that connects a rider with a driver in the context of a ridesharing application) initializes a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle. For example, when a driver of the vehicle first activates an application for accepting rides, or when the driver of the vehicle accepts a ride request, the service initializes the vehicle's location using a GPS sensor of a client device of the driver that is executing the application. As discussed below, in some embodiments, the service is implemented within the client device.
  • The service receives, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session (e.g., an image captured automatically by the driver's client device after the vehicle had traveled for ten seconds, or had traveled for one hundred meters past the point where the initial GPS trace was determined). The service then determines a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle (e.g., a vicinity within which the vehicle is likely to be).
  • The service compares the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area. In some embodiments, as described below, some or all data of the database is stored at the client device. By limiting the comparison to entries corresponding to locations within the geographical area, processing is efficiently performed, as only a small subset of entries that are likely to correspond to the vehicle's current location are referenced. The service determines from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings, and when a match is determined, the service determines that the location of the vehicle at the time is the respective associated location included in the respective entry.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a location estimation system according to one embodiment.
FIG. 2 is an illustration of GPS traces in a region where GPS signals are inaccurate according to one embodiment.
FIG. 3 is an illustration of a manner in which a vehicle location is initialized and then updated, according to one embodiment.
FIG. 4 is an illustration of a manner in which to identify locations for which renderings do not exist in an image rendering database, according to one embodiment.
FIG. 5 is an illustrative flowchart of a process for estimating vehicle location based on image renderings, according to one embodiment.
FIG. 6 is a block diagram that illustrates a computer system, according to one embodiment.
The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DETAILED DESCRIPTION
System Environment
FIG. 1 is a block diagram illustrating a location estimation system according to one embodiment. System 100 includes vehicle 101, which includes or is carrying client device 110. The functionality of client device 110 is described in further detail with respect to FIG. 6 below. In some embodiments, client device 110 is integrated into vehicle 101 as a component of vehicle 101. Client device 110 executes an application, such as a transportation service and/or ridesharing application where a rider may request a ride from the rider's current location to a desired destination, and where the rider may be connected to a driver who also uses the ridesharing application, where the driver will provide the ride. While the driver travels toward a rider, and while the driver is transporting a rider to a destination, a map may be viewed by the driver or the rider via the application (e.g., on a client device of the driver or rider) where an indicator of the driver's position is displayed. In an exemplary embodiment, client device 110 is mounted on a dashboard of vehicle 101 and has a forward-facing camera that faces the road. While this exemplary embodiment is referred to throughout, in some embodiments, the application instead commands images to be captured from a stand-alone camera (e.g., embedded in a device that is affixed to a dashboard or windshield of vehicle 101).
In an embodiment, client device 110 automatically captures one or more images based on commands received from the application. For example, client device 110 captures an image upon a driver of vehicle 101 accepting a ride request from a rider, or upon a certain condition being satisfied (e.g., a certain distance has been traveled, or a certain amount of time has passed, from a reference point). Times at which images are automatically captured by client device 110 will be described in further detail below with reference to FIGS. 3-5. Automatic capturing of one or more images may be an opt-in feature, where the application by default does not automatically capture images, and where the application has a setting that, if selected by a user of client device 110 (e.g., a driver of vehicle 101), enables the application to automatically capture the images. While accurate pinpointing of a driver's location using the systems and methods described herein may rely on opting in, the location of the driver may be determined based on GPS traces of client device 110 (even if inaccurate) should a driver of vehicle 101 not opt in.
In some embodiments, client device 110 transmits the image(s) to location determination service 130 over network 120, where location determination service 130 receives the image(s) and compares them to known images, stored at image rendering database 132, to determine the location of client device 110. In some embodiments, the functionality of location determination service 130 and/or image rendering database 132 is located within client device 110, and thus need not be accessed over network 120 as depicted. Image rendering database 132 may be accessed by location determination service 130 directly, or over network 120. Location determination service 130 may be integrated as a module of the application (e.g., the ridesharing application), or may be a component of a transportation service generally, such as a ridesharing service. In some embodiments where location determination service 130 is a module of the application, some or all of the contents of image rendering database 132 are transmitted to the client device for performing localization at the client device. The functionality of location determination service 130 will be described in further detail below with respect to FIGS. 2-5.
Identifying Regions Prone to Erroneous GPS Readings
FIG. 2 is an illustration of GPS traces in a region where GPS signals are inaccurate according to one embodiment. Region 200 includes GPS traces 202 of a client device (e.g., client device 110) as derived from a GPS sensor of client device 110. As an illustrative example, the GPS traces 202 were derived from client device 110 while vehicle 101 was on a road. Because of the existence of tall buildings within region 200, the GPS signals used to derive the GPS traces are distorted and provide inaccurate GPS traces. This is evidenced by the GPS traces being at locations that are not on a road.
Region 200 is exemplary of a location known to location determination service 130 to have or cause erroneous GPS data. The identification of various regions, like region 200, which are associated with erroneous GPS data may be performed automatically by location determination service 130, or may be made based on manual feedback (e.g., performed in advance of executing process 500). For example, location determination service 130 may detect that users of a ridesharing application in a given location set a pickup pin at a location different from their GPS traces at a frequency that exceeds an implementer-defined threshold, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous. As another example, location determination service 130 may detect that GPS traces of users (e.g., drivers) of a ridesharing application are, at a frequency above a threshold, in areas inaccessible to drivers, such as within buildings or parks that do not have road access, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous. As yet another example, location determination service 130 may receive feedback from users that their client devices are determining erroneous locations based on GPS sensors of those client devices, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous.
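As an illustrative, non-limiting sketch (added for exposition and not part of the original disclosure), the pin-movement heuristic described above might be implemented as follows; the `haversine_m` helper, the 50-meter pin-move threshold, and the 30% frequency threshold are assumptions for illustration only:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def region_is_gps_erroneous(sessions, pin_move_threshold_m=50.0, frequency_threshold=0.3):
    """sessions: list of ((gps_lat, gps_lon), (pin_lat, pin_lon)) pairs from one region.

    Flags the region when riders moved their pickup pin away from their GPS
    trace more often than an implementer-defined frequency threshold.
    """
    if not sessions:
        return False
    moved = sum(
        1
        for (glat, glon), (plat, plon) in sessions
        if haversine_m(glat, glon, plat, plon) > pin_move_threshold_m
    )
    return moved / len(sessions) > frequency_threshold
```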
Exemplary Initialization Using GPS and Transition to Image-Based Localization
FIG. 3 is an illustration of a manner in which a vehicle location is initialized and then updated, according to one embodiment. Environment 300 includes vehicle 301, which begins at position 398 and is subsequently at position 399. Vehicle 301 is an example of vehicle 101 described in FIG. 1. Position 398 indicates the beginning of a session. As used herein, the beginning of a session refers to a time (referenced in FIG. 3 as time "T1") at which a location of vehicle 301 is initialized based on data acquired from or by client device 110, which is inside vehicle 301. The application may detect the beginning of a session at the occurrence of any predefined point in time. A non-exhaustive and illustrative set of examples of when a session begins includes the launch of the application, the application detecting that the vehicle is moving (e.g., based on an accelerometer of client device 110), the application detecting that the driver has accepted a ride request, and the like.
In response to detecting the beginning of the session, the application retrieves GPS data acquired using a GPS sensor of client device 110, and determines an initial location of vehicle 301 based on the GPS data. Taking at least an initial GPS reading (even in a region like region 200), before determining location based on image renderings, enables location determination service 130 to determine the location of vehicle 301 by referencing far fewer entries of image rendering database 132 than would be necessary without knowledge of a general vicinity within which vehicle 301 is located. For example, with knowledge that vehicle 301 is within a margin of error of position 398, as determined using a GPS sensor of client device 110 (e.g., within 50 meters of position 398, which may be a known margin of error in region 200), location determination service 130 is able to determine a vicinity within which position 399 is contained, and thus efficiently reference only entries within image rendering database 132 that correspond to that vicinity.
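A minimal sketch of this vicinity filtering, assuming database entries expose a latitude, a longitude, and a rendering, and reusing the hypothetical `haversine_m` helper from the sketch above:

```python
def candidate_entries(db_entries, init_lat, init_lon, margin_m=50.0):
    """Return only entries whose associated location lies within the GPS
    sensor's known margin of error around the initial fix (e.g., 50 meters
    around position 398), so far fewer renderings must be compared."""
    return [
        entry
        for entry in db_entries
        if haversine_m(init_lat, init_lon, entry["lat"], entry["lon"]) <= margin_m
    ]
```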
After taking the initial GPS reading, the application monitors data relating to vehicle 301 to detect a scenario where vehicle 301 reaches position 399. Position 399 represents a position of vehicle 301 at a time subsequent to the beginning of the session (referenced in FIG. 3 as time "T2"). The application determines that vehicle 301 has reached position 399 upon the occurrence of a condition (or any condition of a set of predefined conditions). In some embodiments, the condition may be detecting that a predefined amount of time has passed (e.g., five or ten seconds since time T1). In some embodiments, the condition may be detecting that vehicle 301 has traveled a predefined distance, such as distance 315, since the beginning of the session. The application may detect that vehicle 301 has traveled the predefined distance based on data from one or more of an accelerometer, a GPS sensor, or other sensors of client device 110. In some embodiments, the condition may be detecting that vehicle 301 has entered a region where GPS signals are known to be inaccurate, such as region 200.
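The condition check might be sketched as follows (illustrative only; the thresholds and the `in_known_bad_region` predicate are assumptions, with region 200 standing in for a known-bad region):

```python
KNOWN_BAD_REGIONS = []  # hypothetical list of (south, west, north, east) bounding boxes

def in_known_bad_region(lat, lon):
    """Placeholder: report whether (lat, lon) falls inside any region known
    to distort GPS signals (e.g., an urban canyon like region 200)."""
    return any(s <= lat <= n and w <= lon <= e for (s, w, n, e) in KNOWN_BAD_REGIONS)

def should_capture_image(elapsed_s, traveled_m, lat, lon,
                         time_threshold_s=10.0, dist_threshold_m=100.0):
    """True when any capture condition is met: elapsed time, distance
    traveled since the session start, or entry into a known-bad region."""
    return (elapsed_s >= time_threshold_s
            or traveled_m >= dist_threshold_m
            or in_known_bad_region(lat, lon))
```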
Regardless of which condition, or which combination of conditions, is used to determine that vehicle 301 has reached position 399, in response to determining that vehicle 301 has reached position 399, the application commands client device 110 to capture an image. As will be discussed below with respect to FIG. 5, the captured image will be used by location determination service 130 to determine the location of vehicle 301 at position 399 without further use of a GPS sensor of client device 110. As an illustrative example of what will be discussed below with respect to FIG. 5, the application causes client device 110 to transmit to location determination service 130 a rendering of the captured image. Location determination service 130 identifies a subset of entries of image rendering database 132 that correspond to a location determined from the initial GPS reading at position 398 (e.g., including an offset corresponding to direction and distance detected using sensors of client device 110 between times T1 and T2). Location determination service 130 then compares the rendering to renderings of each entry of the subset, and in response to finding a matching rendering, determines the location of vehicle 301 at position 399 to be a location indicated in the entry that includes the matching rendering.
While FIG. 3 only indicates two positions, following determining the location of vehicle 301 at position 399, a similar process may be used by location determination service 130 to determine the position of vehicle 301 at subsequent positions. For example, each subsequent time the application detects a condition, the application may cause client device 110 to capture another image and transmit that image to location determination service 130, which may isolate a subset of entries based on the last known location of vehicle 301 (e.g., the location determined for position 399), and use that subset to find a match for the newly captured rendering. The subset of entries may be determined by adding an offset corresponding to distance and direction detected using sensors of client device 110 between time T2 and a subsequent time, similar to the initial determination described in the prior paragraph. In this manner, following initialization of the location of vehicle 301, the GPS sensor of client device 110 need not be used again to locate the position of vehicle 301 for the remainder of the session.
In some embodiments, position 398 may be determined (and thus the initial location of vehicle 301 at the start of a session may be determined) based on a captured image, instead of GPS data. To this end, location determination service 130 may receive a captured image (in the manner described above) and search image rendering database 132 for a matching image. In some embodiments, rather than using a subset of entries, location determination service 130 may compare the rendering to all entries of image rendering database 132. In some embodiments, location determination service 130 may, during the initialization, identify a subset of entries for comparison based on last-known location during a prior session (rather than GPS data) to preserve computational resources.
Anchoring Unknown Road Segments for Future Database Population
FIG. 4 is an illustration of a manner in which to identify locations for which renderings do not exist in an image rendering database, according to one embodiment. In an embodiment, such as environment 300, where image localization is used to determine driver location rather than GPS sensor data from client device 110, there may be times when location determination service 130 is unable to determine a location of vehicle 101. For example, location determination service 130 may be unable to identify a matching rendering in any entry investigated in image rendering database 132. Environment 400 includes route 450. The shaded portions of route 450, as illustrated in FIG. 4, depict locations at which image localization was successful (in the manner described above with reference to FIG. 3, and as will be further detailed below with respect to FIG. 5). A vehicle traveling along route 450 from point A to point B may reach location 460, where location determination service 130 may fail to find a matching rendering in image rendering database 132 for an image captured at location 460 by client device 110 within the vehicle.
As the vehicle progresses along route 450, location determination service 130 may continue to attempt to determine the location of the vehicle, and, based on an image captured at location 470, location determination service 130 may find a matching rendering and successfully localize the vehicle, and may continue to do so for the remainder of route 450 based on further matching renderings. In some embodiments, location determination service 130 may use GPS sensor data of client device 110 to find a subset of entries of image rendering database 132 that correspond to location 470. In other embodiments, location determination service 130 may use sensor data (e.g., a directional sensor in combination with an accelerometer) to determine distance and direction traveled from location 460 in order to find a subset of locations to which location 470 is likely to correspond. These embodiments may be combined (e.g., by first using sensor data in combination with location 460 to identify a subset, and if no rendering matches, going on to use GPS data).
In some embodiments, location determination service 130 may anchor an unknown region, such as the region of route 450 including question marks, with a last known location before the unknown region was entered (e.g., location 460), and with a first known location after the unknown region was exited (e.g., location 470). Location determination service 130 may alert an administrator of the unknown region's existence, and the anchors. The administrator may then take action to update image rendering database 132 with renderings corresponding to the unknown region. For example, the administrator may send employees to capture images of the unknown region, or may incentivize riders to take pictures of portions of the unknown region.
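One way to record such anchors, sketched under the assumption that locations are (lat, lon) tuples and that an out-of-band alerting mechanism consumes the pending list:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

LatLon = Tuple[float, float]

@dataclass
class UnknownRegion:
    """Anchors for a stretch of road with no matching renderings."""
    entered_after: LatLon            # last matched location (e.g., location 460)
    exited_before: Optional[LatLon]  # first match after the gap (e.g., location 470)

def anchor_unknown_region(last_match: LatLon, first_match: Optional[LatLon],
                          pending_alerts: list) -> UnknownRegion:
    region = UnknownRegion(entered_after=last_match, exited_before=first_match)
    pending_alerts.append(region)    # an administrator is later notified
    return region
```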
Location Determination Service Functionality
FIG. 5 is an illustrative flowchart of a process for estimating vehicle location based on image renderings, according to one embodiment. Process 500 begins with location determination service 130 initializing 502 a location of a vehicle at a start of a session based on GPS data of a client device within the vehicle. The details of the initialization process are described above with respect to FIG. 3 and apply fully hereto. The location determination service receives 504, from the client device (e.g., client device 110 within vehicle 101), a rendering of an image that was captured by the client device at a time subsequent to the start of the session. The rendering may be the image itself, or a transformation of the image. Where the rendering is a transformation of the image, the client device may generate the rendering, or a module of location determination service 130 may generate the rendering upon receiving the image. To generate such a rendering, client device 110 or location determination service 130 may generate a three-dimensional model of the captured image and register the three-dimensional model to three-dimensional content stored at image rendering database 132.
In some embodiments, the rendering is generated as part of a localization process (e.g., 2D-3D or 3D-3D localization). For example, client device 110 or location determination service 130 extracts 2D image features using, e.g., scale-invariant feature transform ("SIFT"), oriented FAST and rotated BRIEF ("ORB"), speeded-up robust features ("SURF"), or the like. In some embodiments, location determination service 130 or client device 110 builds a three-dimensional model from the captured image using a machine-learned model.
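As a concrete (and purely illustrative) example of 2D feature extraction and matching, the following uses OpenCV's ORB implementation; the file names are placeholders, and SIFT or SURF could be substituted where available:

```python
import cv2

query = cv2.imread("captured_frame.jpg", cv2.IMREAD_GRAYSCALE)    # from client device 110
candidate = cv2.imread("db_rendering.jpg", cv2.IMREAD_GRAYSCALE)  # from image rendering database 132

orb = cv2.ORB_create(nfeatures=1000)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_c, des_c = orb.detectAndCompute(candidate, None)

# Brute-force Hamming matcher with cross-checking for one-to-one matches,
# sorted so the strongest (lowest-distance) matches come first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_c), key=lambda m: m.distance)
```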
In some embodiments, when receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session, location determination service 130 retrieves a plurality of progress benchmarks and determines that, at a given time (e.g., time T2 depicted in FIG. 3), a progress benchmark of the plurality of progress benchmarks has been reached (e.g., a predetermined time lapse equals the difference between times T2 and T1 depicted in FIG. 3, or distance 315 equals a predefined distance). In response to determining that the progress benchmark has been reached, location determination service 130 commands the client device to capture the image (e.g., as depicted at position 399 of FIG. 3). As discussed above with respect to the conditions, the plurality of progress benchmarks may be any of, or a combination of, a threshold period of time from either initialization or from a last capture of an image, and a threshold distance from a location where either initialization or a last capture of an image occurred. A progress benchmark may also be a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image (e.g., a right-angle turn was made, which may trigger capture of a new image in the new direction the vehicle is heading).
Process 500 continues with location determination service 130 determining 506 a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle. In some embodiments, when determining the geographical area, location determination service 130 determines, from the data, a distance and direction in which the vehicle has traveled since the start of the session, and determines a scope of the geographical area based on the distance and the direction. For example, as described above, location determination service 130 may determine a last known location (e.g., position 398 of FIG. 3), and may determine a distance and direction traveled using sensor data of client device 110 to determine a likely location of vehicle 101 at the time a next image was captured. The geographical area may be defined as a predefined radius (e.g., ten meters, a quarter mile, etc.) surrounding the likely location of vehicle 101 at the time the next image was captured. Further details are described above with respect to FIGS. 3-4. Location determination service 130 may cause a user interface of a driver or rider client device to display the likely location of the vehicle.
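A sketch of this dead-reckoning step follows; flat-earth approximations are assumed (reasonable over the short distances involved), and the coordinates and ten-meter radius are illustrative values only:

```python
from math import cos, radians, sin

METERS_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude

def predict_position(last_lat, last_lon, traveled_m, heading_deg):
    """Offset the last known fix by the distance and compass heading
    integrated from the client device's sensors."""
    north = traveled_m * cos(radians(heading_deg))
    east = traveled_m * sin(radians(heading_deg))
    lat = last_lat + north / METERS_PER_DEG_LAT
    lon = last_lon + east / (METERS_PER_DEG_LAT * cos(radians(last_lat)))
    return lat, lon

center = predict_position(40.7128, -74.0060, traveled_m=120.0, heading_deg=90.0)
search_radius_m = 10.0  # the geographical area is a radius around `center`
```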
Location determination service 130 goes on to compare 508 the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area. For example, keypoints of the received rendering may be extracted and compared to keypoints of candidate renderings to determine whether a threshold number of keypoints match (described in connection with 510 below). In some embodiments, to improve computational efficiency, location determination service 130 compares the received rendering to the entries by extracting geolocation data from the captured rendering (e.g., data corresponding to position 399). For example, even if GPS data obtained by client device 110 is erroneous, it is likely to be within a threshold distance of the actual location of client device 110. Location determination service 130 then determines a subset of the entries corresponding to the geolocation data. For example, location determination service 130 determines a radius of actual GPS coordinates that are within a threshold distance of a location indicated by the geolocation data. Location determination service 130 limits the comparing of the received rendering to the subset of the entries, thus saving processing time and power, as only entries within a threshold radius of a given location are searched, as opposed to all entries of image rendering database 132.
Location determination service 130 determines 510 whether the received rendering matches a respective rendering included in a respective entry of the database of renderings. In some embodiments, in order to perform this determination, location determination service 130 determines that the received rendering does not completely match any entry of the entries. For example, when comparing two-dimensional renderings, location determination service 130 may determine that not all keypoints of the received rendering match any candidate rendering. When comparing three-dimensional renderings, location determination service 130 may determine that the keypoints of the image do not match all keypoints of any perspective of any candidate rendering.
Matching can be performed coarsely (e.g., as a first part of a process) by leveraging GPS to reduce the search space (e.g., to reduce the number of database entries to be referenced, as discussed above and below). By using a large radius around a query image's GPS position, the application isolates candidate renderings (e.g., images or 3D sections of the scene to match against). In some embodiments, the application performs further filtering by using the heading direction of the query image or 3D scene coordinates to align them to the base map (e.g., a 2D or 3D model of known renderings) more readily. Additional techniques such as vocabulary trees, bag-of-words retrieval, or machine learning can be used to quickly retrieve a matching set of images or 3D content.
The process of determining whether a received rendering matches a candidate rendering is also referred to herein as "alignment." Alignment refers to aligning a captured image either to stored isolated renderings that have known corresponding locations, or to a portion of a "base map" that stitches known renderings together into a model of the world, where each portion of the base map corresponds to a different location and is built from captured images of all locations that inform the base map. Location determination service 130 may perform 3D-3D alignment in a variety of ways. In some embodiments, location determination service 130 executes an iterative closest point (ICP) module to determine the 3D-3D alignment. Location determination service 130 may seed the 3D-3D alignment using machine-learned models that semantically segment the 3D scene of the base map. With that segmentation, location determination service 130 may determine a coarse alignment between similar semantic structures, such as car-to-car alignments, light post-to-light post alignments, and the like. With that coarse alignment, location determination service 130 may then apply traditional ICP to perform the final precision alignment in an accelerated fashion.
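A compact point-to-point ICP sketch using only NumPy and SciPy is given below for illustration; it assumes the semantic coarse alignment described above has already brought the captured scene `src` near the base-map section `dst`:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping point set A onto B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - cA).T @ (B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp(src, dst, iterations=30, tolerance=1e-6):
    """Iteratively align `src` (captured scene) to `dst` (base-map section)."""
    tree = cKDTree(dst)
    prev_error = np.inf
    for _ in range(iterations):
        _, idx = tree.query(src)              # closest-point correspondences
        R, t = best_rigid_transform(src, dst[idx])
        src = src @ R.T + t
        error = np.linalg.norm(src - dst[idx], axis=1).mean()
        if abs(prev_error - error) < tolerance:
            break
        prev_error = error
    return src
```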
In response to determining that the received rendering does not completely match any entry of the entries, location determination service 130 determines what percentage of characteristics of the received rendering match characteristics of a given entry, and determines whether the percentage exceeds a threshold. In response to determining that the percentage exceeds the threshold, location determination service 130 determines that the received rendering matches the given entry based on the partial match. Likewise, in response to determining that the percentage does not exceed the threshold, location determination service 130 determines that the received rendering does not match the given entry notwithstanding the partial match.
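Sketched as a predicate (the 60% threshold is an illustrative assumption, not a value from this disclosure):

```python
def is_partial_match(matched_keypoints: int, total_keypoints: int,
                     threshold: float = 0.6) -> bool:
    """Accept a candidate rendering when the fraction of matched
    characteristics (here, keypoints) exceeds the threshold."""
    if total_keypoints == 0:
        return False
    return matched_keypoints / total_keypoints > threshold
```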
In response to determining that the received rendering matches a rendering of an entry, location determination service 130 determines 512 that the location of the vehicle (e.g., vehicle 101) is the location associated with the matching rendering. For example, location determination service 130 retrieves a location indicated by the entry that includes the matching rendering, and determines that this location is the location of client device 110. Location determination service 130 optionally transmits the location to a rider's client device, and causes the location to be generated for display on the rider's client device. In some embodiments, in response to determining that the received rendering does not match the given entry notwithstanding the partial match, location determination service 130 transmits 514 a prompt to an administrator to add an entry corresponding to the location of the client device (e.g., in a scenario like the unknown region of route 450 in FIG. 4, as described above). In a scenario where location determination service 130 has a rendering at a location matching the location of the captured image, location determination service 130 may command image rendering database 132 to mark the rendering as stale and requiring an update.
The above description relates to localization in a rider/driver environment using images. However, the techniques described herein may be used to localize anyone with a client device, such as a smartphone (e.g., as they move through an urban canyon where GPS is insufficient). Thus, all examples disclosed herein describing identifying driver and/or rider location may apply to any user with a client device, regardless of whether they are engaged in the above-described rider/driver scenarios.
Computing Hardware
The entities shown in FIG. 1 are implemented using one or more computers. FIG. 6 is a block diagram that illustrates a computer system 600 for acting as client device 110 or location determination service 130, according to one embodiment. Illustrated is at least one processor 602 coupled to a chipset 604. Also coupled to the chipset 604 are a memory 606, a storage device 608, a keyboard 610, a graphics adapter 612, a pointing device 614, and a network adapter 616. A display 618 is coupled to the graphics adapter 612. In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622. In another embodiment, the memory 606 is coupled directly to the processor 602 instead of the chipset 604.
The storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 606 holds instructions and data used by the processor 602. The pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer system 600. The graphics adapter 612 displays images and other information on the display 618. The network adapter 616 couples the computer system 600 to the network 120.
As is known in the art, a computer 600 can have different and/or other components than those shown in FIG. 6. In addition, the computer 600 can lack certain illustrated components. For example, the computer acting as the location determination service 130 can be formed of multiple blade servers linked together into one or more distributed systems and lack components such as keyboards and displays. Moreover, the storage device 608 can be local to and/or remote from the computer 600 (such as embodied within a storage area network (SAN)).
Additional Considerations
The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims (20)

What is claimed is:
1. A computer-implemented method for determining a location of a vehicle, the method comprising:
initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.
2. The computer-implemented method of claim 1, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:
retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.
3. The computer-implemented method of claim 2, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.
4. The computer-implemented method of claim 1, wherein determining the geographical area comprises:
determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.
5. The computer-implemented method of claim 4, further comprising:
causing a user interface of a rider client device to display the location of the vehicle.
6. The computer-implemented method of claim 1, further comprising:
receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.
7. The computer-implemented method of claim 6, further comprising pin-pointing a subsection of the respective geographical area as having the stale information by:
determining a most recent known location of the vehicle prior to when the respective updated rendering was received;
determining an oldest known location of the vehicle subsequent to when the respective updated rendering was received; and
determining the subsection to include an area between the most recent known location and the oldest known location.
8. A non-transitory computer-readable storage medium storing computer program instructions executable by a processor to perform operations for determining a location of a vehicle, the operations comprising:
initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.
9. The non-transitory computer-readable storage medium of claim 8, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:
retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.
10. The non-transitory computer-readable storage medium of claim 9, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.
11. The non-transitory computer-readable storage medium of claim 8, wherein determining the geographical area comprises:
determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.
12. The non-transitory computer-readable storage medium of claim 11, the operations further comprising:
causing a user interface of a rider client device to display the location of the vehicle.
13. The non-transitory computer-readable storage medium of claim 8, the operations further comprising:
receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.
14. The non-transitory computer-readable storage medium of claim 13, the operations further comprising pin-pointing a subsection of the respective geographical area as having the stale information by:
determining a most recent known location of the vehicle prior to when the respective updated rendering was received;
determining an oldest known location of the vehicle subsequent to when the respective updated rendering was received; and
determining the subsection to include an area between the most recent known location and the oldest known location.
15. A system for determining a location of a vehicle, comprising:
a processor for executing computer program instructions; and
a non-transitory computer-readable storage medium storing computer program instructions executable by the processor to perform operations for estimating a location of a client device, the operations comprising:
initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.
16. The system of claim 15, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:
retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.
17. The system of claim 16, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.
18. The system of claim 15, wherein determining the geographical area comprises:
determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.
19. The system of claim 18, the operations further comprising:
causing a user interface of a rider client device to display the location of the vehicle.
20. The system of claim 15, the operations further comprising:
receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.
US16/712,883 2019-01-23 2019-12-12 Persistent vehicle location service using ground truth image rendering instead of gps Abandoned US20200234062A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/712,883 US20200234062A1 (en) 2019-01-23 2019-12-12 Persistent vehicle location service using ground truth image rendering instead of gps

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201962795988P 2019-01-23 2019-01-23
US201962801010P 2019-02-04 2019-02-04
US201962801012P 2019-02-04 2019-02-04
US201962802145P 2019-02-06 2019-02-06
US201962812101P 2019-02-28 2019-02-28
US201962812107P 2019-02-28 2019-02-28
US201962812098P 2019-02-28 2019-02-28
US16/712,883 US20200234062A1 (en) 2019-01-23 2019-12-12 Persistent vehicle location service using ground truth image rendering instead of gps

Publications (1)

Publication Number Publication Date
US20200234062A1 true US20200234062A1 (en) 2020-07-23

Family

ID=71608322

Family Applications (6)

Application Number Title Priority Date Filing Date
US16/712,883 Abandoned US20200234062A1 (en) 2019-01-23 2019-12-12 Persistent vehicle location service using ground truth image rendering instead of gps
US16/712,821 Active 2041-01-23 US11501524B2 (en) 2019-01-23 2019-12-12 Generating augmented reality images for display on a mobile device based on ground truth image rendering
US16/712,835 Active 2041-06-25 US11527060B2 (en) 2019-01-23 2019-12-12 Location determination service based on user-sourced image updates
US16/712,829 Active 2040-03-25 US11308322B2 (en) 2019-01-23 2019-12-12 Locating a client device using ground truth image rendering
US16/712,902 Active 2040-01-01 US11151376B2 (en) 2019-01-23 2019-12-12 Rider-driver localization for determining placement of AR content for passenger
US16/712,824 Pending US20200232803A1 (en) 2019-01-23 2019-12-12 Generating composite images for display on a mobile device based on ground truth image rendering

Family Applications After (5)

Application Number Title Priority Date Filing Date
US16/712,821 Active 2041-01-23 US11501524B2 (en) 2019-01-23 2019-12-12 Generating augmented reality images for display on a mobile device based on ground truth image rendering
US16/712,835 Active 2041-06-25 US11527060B2 (en) 2019-01-23 2019-12-12 Location determination service based on user-sourced image updates
US16/712,829 Active 2040-03-25 US11308322B2 (en) 2019-01-23 2019-12-12 Locating a client device using ground truth image rendering
US16/712,902 Active 2040-01-01 US11151376B2 (en) 2019-01-23 2019-12-12 Rider-driver localization for determining placement of AR content for passenger
US16/712,824 Pending US20200232803A1 (en) 2019-01-23 2019-12-12 Generating composite images for display on a mobile device based on ground truth image rendering

Country Status (1)

Country Link
US (6) US20200234062A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11087492B2 (en) * 2018-03-21 2021-08-10 ISVision America Methods for identifying location of automated guided vehicles on a mapped substrate
US11556580B1 (en) * 2019-02-21 2023-01-17 Meta Platforms, Inc. Indexing key frames for localization
US11741151B1 (en) 2019-02-21 2023-08-29 Meta Platforms, Inc. Indexing key frames for localization
US11609344B2 (en) * 2020-04-07 2023-03-21 Verizon Patent And Licensing Inc. Systems and methods for utilizing a machine learning model to determine a determined location of a vehicle based on a combination of a geographical location and a visual positioning system location
CN111984806A (en) * 2020-08-13 2020-11-24 浙江每日互动网络科技股份有限公司 Method, device and storage medium for determining association degree of vehicle and terminal

Also Published As

Publication number Publication date
US20200232803A1 (en) 2020-07-23
US11501524B2 (en) 2022-11-15
US20200234042A1 (en) 2020-07-23
US20200232809A1 (en) 2020-07-23
US11308322B2 (en) 2022-04-19
US11151376B2 (en) 2021-10-19
US20200232804A1 (en) 2020-07-23
US11527060B2 (en) 2022-12-13
US20200234048A1 (en) 2020-07-23

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: UBER TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROGAN, AARON MATTHEW;REEL/FRAME:052072/0743

Effective date: 20200309

STPP Information on status: patent application and granting procedure in general

Free format text: PRE-INTERVIEW COMMUNICATION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION