US20190287398A1 - Dynamic natural guidance - Google Patents

Dynamic natural guidance

Info

Publication number
US20190287398A1
US20190287398A1 (application US16/430,032)
Authority
US
United States
Prior art keywords
movable objects
mobile device
road segment
movable
guidance command
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/430,032
Inventor
William Gale
Joseph Mays
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Here Global BV
Original Assignee
Here Global BV
Application filed by Here Global BV
Priority to US16/430,032
Publication of US20190287398A1
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096716Systems involving transmission of highway information, e.g. weather, speed limits where the received information does not generate an automatic action on the vehicle control
    • G08G1/096733Systems involving transmission of highway information, e.g. weather, speed limits where a selection of the information might take place
    • G08G1/096741Systems involving transmission of highway information, e.g. weather, speed limits where the source of the transmitted information selects which information to transmit to each vehicle
    • G08G1/096766Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
    • G08G1/096775Systems involving transmission of highway information, e.g. weather, speed limits where the origin of the information is a central station
    • B60W2550/20
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects

Definitions

  • the following disclosure relates to operating a navigation system, and more particularly to providing route information using dynamic natural guidance along a route.
  • Navigation systems provide end users with various navigation-related functions and features. For example, some navigation systems are able to determine an optimum route to travel along a road network from an origin location to a destination location in a geographic region. Using input from the end user, the navigation system examines potential routes between the origin location and the destination location to determine the optimum route. The navigation system may then provide the end user with information about the optimum route in the form of guidance that identifies the maneuvers required to be taken by the end user to travel from the origin to the destination location. Some navigation systems are able to show detailed maps on displays outlining the route, the types of maneuvers to be taken at various locations along the route, locations of certain types of features, and so on.
  • the geographic data may be in the form of one or more geographic databases that include data representing physical features in the geographic region.
  • the represented geographic features may include one-way streets, position of the roads, speed limits along portions of roads, address ranges along the road portions, turn restrictions at intersections of roads, direction restrictions, such as one-way streets, and so on.
  • the geographic data may include points of interests, such as businesses, facilities, restaurants, hotels, airports, gas stations, stadiums, police stations, and so on.
  • the geographic data used in conventional navigation systems is static.
  • the geographic data is stored ahead of time in the database representing physical features that do not generally change over time.
  • the optimal landmarks or features in the geographic region to provide the navigation-related functions and features may not be included in the conventional geographic database.
  • dynamic natural guidance is generated for a route between an origin and a destination.
  • a controller receives data indicative of a location of the mobile device and data indicative of at least one movable object detected in a vicinity of the mobile device.
  • the data indicative of at least one movable object may be collected by a camera and analyzed.
  • the analysis may include one or more of image processing techniques, temporal measurement, and tracking of movable objects.
  • the controller generates a guidance command based on the location of the mobile device.
  • the guidance command references the at least one movable object detected in the vicinity of the mobile device.
  • FIG. 1 illustrates an exemplary geographic region.
  • FIG. 2 illustrates an exemplary navigation system.
  • FIG. 3 illustrates an exemplary mobile device of the navigation system of FIG. 2.
  • FIG. 4 illustrates an exemplary server of the navigation system of FIG. 2.
  • FIG. 5 illustrates one embodiment of an image acquisition system in a geographic region.
  • FIG. 6 illustrates another embodiment of an image acquisition system in a geographic region.
  • FIG. 7 illustrates an exemplary augmented reality system.
  • FIG. 8 illustrates a first example flowchart for dynamic natural guidance.
  • FIG. 9 illustrates a second example flowchart for dynamic natural guidance.
  • the disclosed embodiments relate to presenting dynamic natural guidance.
  • guidance refers to a set of navigation instructions that reference map elements such as roads and distances (e.g., “turn right on Elm Street in 500 feet”).
  • Natural guidance allows navigation systems to expand into unconventional areas, such as malls and office buildings.
  • Natural guidance refers to other elements outside of the map elements but in the vicinity of the user (e.g., “turn right at the fountain” and “turn left past the coffee shop”).
  • Natural guidance may be defined as a turn-by-turn experience that draws on multiple attributes and relations describing the user's environment and context, e.g., landmarks, to provide more natural, environmental, and intuitive triggers.
  • Guidance messages formed using natural guidance may provide details of contextual elements surrounding decision points, such as landmarks, points of interest, cartographic features, and traffic signals and/or stop signs.
  • the term dynamic natural guidance refers to a set of navigation instructions that reference elements in the vicinity of the user that are movable.
  • An element that is movable is any object whose geographic position may change.
  • Movable objects include but are not limited to people, vehicles, animals, mobile homes, temporary or semi-permanent structures, temporary road signs, and other objects.
  • movable objects may be defined to include objects whose appearance changes, such as a rotating or otherwise changing billboard or sign.
  • a natural guidance system identifies the optimal landmarks or features in the geographic region, which may be movable objects or static objects, to provide navigation-related functions and features to the user.
  • FIG. 1 illustrates an exemplary geographic region including two roads.
  • Road 102 is the current path in the route and road 104 is designated as the next path in the route. Traveling along road 102 are various vehicles 103 and an object vehicle 105.
  • road 102 may be referred to as Main Street and road 104 may be referred to as Elm Street.
  • a navigation system in vehicle 105 conveys to a user that the next turn in the route is onto Elm Street.
  • the street sign 110 for Elm Street is not visible to the occupants of vehicle 105 because a parked truck 101 is obstructing the view of street sign 110. Accordingly, rather than referring to Elm Street, the navigation system conveys to the user to make the first left turn past the parked truck.
  • the location of the parked truck 101 is determined from collected data.
  • the collected data may be obtained by a camera.
  • the camera may be a security camera from a nearby building, a traffic camera, a satellite camera, an aerial camera, a camera coupled to the navigation system and/or the vehicle 105 , or another type of camera.
  • the phrase “coupled with” is defined to mean directly connected to or indirectly connected through one or more intermediate components. Such intermediate components may include both hardware and software based components.
  • the cameras may also include indoor cameras that capture data relating to indoor environments.
  • For example, retail cameras track individual shoppers in a store to analyze dwell time or detect shoplifting, amusement park cameras track people to manipulate crowd flow, and airport cameras track travelers to identify suspicious activity. Each of these examples also yields data related to movable objects with the potential to provide reference points for guidance maneuvers.
  • FIG. 2 illustrates an exemplary navigation system 120 .
  • the navigation system 120 includes a map developer system 121, a mobile device 122, an image acquisition device 130, and a network 127.
  • the developer system 121 includes a server 125 and a database 123.
  • the developer system 121 may include computer systems and networks of a system operator (e.g., NAVTEQ or Nokia Corp.).
  • the mobile device 122 is a navigation apparatus configured to present dynamic natural guidance to the user.
  • the mobile device 122 is a smart phone, a mobile phone, a personal digital assistant (“PDA”), a tablet computer, a notebook computer, a personal navigation device (“PND”), a portable navigation device, and/or any other known or later developed portable or mobile device.
  • the mobile device 122 includes one or more detectors or sensors as a positioning system built or embedded into or within the interior of the mobile device 122.
  • alternatively, the mobile device 122 may operate without a purpose-based position sensor, with the positioning system relying on other techniques (e.g., cellular triangulation).
  • the mobile device 122 receives location data from the positioning system.
  • the mobile device may be referred to as a probe.
  • the image acquisition device 130 may include a video camera, a still camera, a thermographic camera, an infrared camera, a light detection and ranging (LIDAR) device, electric field sensor, ultrasound sensor, or another type of imaging device.
  • the image acquisition device 130 generates camera data that represents the objects in a scene.
  • the image acquisition device 130 may be coupled to any one of the map developer 121 or the mobile device 122 directly by way of a wired or wireless connection 128 or through the network 127.
  • the mobile device 122 receives data indicative of at least one movable object in a vicinity of the mobile device 122 from the image acquisition device 130, which may or may not be processed by the server 125.
  • the movable object may be any identifiable object that can be referenced in a guidance command and identified by the user.
  • Example movable objects include parked cars, pedestrians, or billboards.
  • the server 125 analyzes the data from the image acquisition device 130.
  • the server 125 identifies movable objects in the data.
  • the server 125 may be configured to perform an image processing algorithm on the data collected by the image acquisition device 130.
  • the image processing algorithm may incorporate one or more of edge detection, object recognition, facial recognition, optical character recognition, or feature extraction.
  • Edge detection identifies changes in brightness, which corresponds to discontinuities in depth, materials, or surfaces in the image.
  • Object recognition identifies an object in an image using a set of templates for possible objects. The template accounts for variations in the same object based on lighting, viewing direction, and/or size.
  • Facial recognition extracts features of a face of a person in an image in order to identify particular attributes of the person.
  • the attributes may include gender, age, race, or a particular identity of the person.
  • Optical character recognition may identify one or more alphanumeric characters on the movable object.
  • Feature extraction reduces an image into a set of feature vectors in order to identify an object in the image.
  • the server 125 may utilize computer vision techniques.
  • the computer vision techniques may include image acquisition, motion analysis, feature extraction, and detection. Computer vision approximates the abilities of human vision. Computer vision techniques may involve the analysis of a three dimensional space from a set of two dimensional images. Alternatively, two-dimensional image processing is used.
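  • As a concrete illustration of the object identification described above, the following is a minimal sketch in which OpenCV's built-in HOG pedestrian detector stands in for the object recognition step. The detector choice, file name, and helper name are assumptions for illustration, not the specific algorithm claimed by the disclosure.

```python
import cv2

# Off-the-shelf pedestrian detector used here as a stand-in for object recognition.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return bounding boxes (x, y, w, h) of pedestrian candidates in a BGR frame."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return [tuple(map(int, box)) for box in boxes]

frame = cv2.imread("street_scene.jpg")  # e.g. one frame from a traffic or vehicle camera
if frame is not None:
    print(f"{len(detect_people(frame))} movable-object candidates found")
```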
  • the server 125 identifies one or more objects from the data collected by the image acquisition device 130.
  • the collected and/or camera data that has been analyzed to identify movable objects may be referred to as movable object data.
  • the identification of an object may involve a suitability characteristic that determines whether the object is suitable to be used as a landmark to be referenced in a guidance command.
  • objects suitable to be referenced for guidance may include people and vehicles.
  • objects suitable to be referenced for guidance may include people standing in line, uniformed people (e.g., police officer, security guard, traffic controller, or traffic guard), secured bicycles, or parked vehicles. Color, text, type, or other identifiable differences of one object to other local objects may indicate that the object is suitable.
  • the suitability of the movable object is determined by the server 125 based on the identity of the object.
  • suitable objects are further analyzed over time. For example, after the movable objects have been identified, the server 125 tracks the position of the movable objects. The server 125 may record the location of the movable objects in order to determine how long the movable objects have been stationary. The server 125 may deem a movable object that has been recently stationary for more than a predetermined time period to be suitable to be referenced for guidance. Example predetermined time periods include but are not limited to 10 seconds, 1 minute, 10 minutes, or 1 hour. In another embodiment, the server 125 may omit the identification of the movable objects and rely on the tracking of the movable objects.
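  • A minimal sketch of the stationary-time test described above: record where each tracked object was last seen and treat it as a usable reference once it has stayed put longer than a threshold. The object IDs, the 2-meter jitter tolerance, and the one-minute threshold are illustrative assumptions.

```python
import math
import time

STATIONARY_THRESHOLD_S = 60   # e.g. the one-minute example period
MOVEMENT_TOLERANCE_M = 2.0    # movement below this counts as "still stationary"

class StationaryTracker:
    def __init__(self):
        self._last_pos = {}       # object_id -> (x, y) in meters
        self._still_since = {}    # object_id -> timestamp when it stopped moving

    def update(self, object_id, x, y, now=None):
        now = now if now is not None else time.time()
        prev = self._last_pos.get(object_id)
        if prev is None or math.dist(prev, (x, y)) > MOVEMENT_TOLERANCE_M:
            self._still_since[object_id] = now   # object moved: restart the clock
        self._last_pos[object_id] = (x, y)

    def is_suitable(self, object_id, now=None):
        now = now if now is not None else time.time()
        start = self._still_since.get(object_id)
        return start is not None and (now - start) >= STATIONARY_THRESHOLD_S
```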
  • the suitability of the movable object is determined by server 125 based on the location of the movable object. For example, movable objects near the destination location of the route may be used to reference the final maneuver of a route.
  • the server 125 may be configured to calculate a distance between the movable object and a location along the route and compare the distance to a threshold distance.
  • Example threshold distances include 10 feet, 10 meters, and 100 meters.
  • movable objects within the threshold distance may be selected to be referenced in a guidance command.
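  • A sketch of the distance test above: compute the great-circle distance between a movable object and a point along the route and compare it with a threshold. The 100-meter value mirrors one of the example thresholds; coordinates are assumed to be WGS84 latitude/longitude.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def within_threshold(obj_pos, route_point, threshold_m=100.0):
    return haversine_m(*obj_pos, *route_point) <= threshold_m

# e.g. a parked truck roughly 40 m from the next maneuver point
print(within_threshold((41.8781, -87.6298), (41.8784, -87.6299)))
```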
  • the mobile device 122 displays a guidance command based on the location of the mobile device 122.
  • the guidance command references the at least one movable object in the vicinity of the mobile device 122.
  • the guidance command may include voice or text that states “turn left past the parked truck ahead,” “in the next intersection with traffic guards, turn right,” or “your destination is a few meters ahead next to the rack of secured bicycles.”
  • the database 123 of the navigation system 120 may be a geographic database.
  • the locations of the movable objects may be incorporated into the geographic database 123.
  • the geographic database 123 includes information about one or more geographic regions. Located in the geographic region are physical geographic features, such as roads, points of interest (including businesses, municipal facilities, etc.), lakes, rivers, railroads, municipalities, etc.
  • a road network includes, among other things, roads and intersections located in the geographic region.
  • Each road in the geographic region is composed of one or more road segments.
  • a road segment represents a portion of the road.
  • Each road segment is associated with two nodes (e.g., one node represents the point at one end of the road segment and the other node represents the point at the other end of the road segment).
  • the node at either end of a road segment may correspond to a location at which the road meets another road, i.e., an intersection, or where the road dead-ends.
  • the road segments may include sidewalks and crosswalks for travel by pedestrians.
  • the road segment data includes a segment ID by which the data record can be identified in the geographic database 123.
  • Each road segment data record has associated with it information (such as “attributes”, “fields”, etc.) that describes features of the represented road segment.
  • the road segment data record may include data that indicate a speed limit or speed category (i.e., the maximum permitted vehicular speed of travel) on the represented road segment.
  • the road segment data record may also include data that indicate a classification such as a rank of a road segment that may correspond to its functional class.
  • the road segment data may include data identifying what turn restrictions exist at each of the nodes which correspond to intersections at the ends of the road portion represented by the road segment, the name or names by which the represented road segment is known, the length of the road segment, the grade of the road segment, the street address ranges along the represented road segment, the permitted direction of vehicular travel on the represented road segment, whether the represented road segment is part of a controlled access road (such as an expressway), a ramp to a controlled access road, a bridge, a tunnel, a toll road, a ferry, and so on.
  • the road segment data and the camera data or movable object data may be fused together seamlessly such that routing algorithms make no distinction between the types of landmarks.
  • the collected data, camera data, or movable object data may be included in a separate database, for example, internal to the server 125 and/or the mobile device 122, or at an external location.
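  • The sketch below illustrates one way the fusion mentioned above could look in code: static road-segment landmarks and movable-object records share a common base type, so downstream guidance logic does not distinguish between them. The field names are assumptions, not the schema of the geographic database 123.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Landmark:
    name: str      # text used when the landmark is referenced in guidance
    lat: float
    lon: float

@dataclass
class RoadSegmentLandmark(Landmark):
    segment_id: str
    speed_limit_kph: Optional[int] = None

@dataclass
class MovableObjectLandmark(Landmark):
    object_type: str                          # "parked truck", "person", "billboard", ...
    stationary_since: Optional[float] = None  # epoch seconds, if known

def describe(lm: Landmark) -> str:
    # Routing/guidance code can treat either landmark type uniformly.
    return f"{lm.name} at ({lm.lat:.5f}, {lm.lon:.5f})"
```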
  • FIG. 3 illustrates an exemplary mobile device 122 of the navigation system of FIG. 2.
  • the mobile device 122 may be referred to as a navigation device.
  • the mobile device 122 includes a controller 200, a memory 201, an input device 203, a communication interface 205, position circuitry 207, a camera 209, and a display 211. Additional, different, or fewer components may be provided.
  • the mobile device 122 may receive the guidance command from the server 125, which has analyzed one or more scenes including movable objects acquired by the image acquisition device(s) 130. Alternatively, the analysis of the movable objects may occur at the mobile device 122.
  • the memory 201 includes computer program code executed by the controller 200 to cause the mobile device 122 to generate dynamic natural guidance.
  • the controller 200 is configured to determine identities of a plurality of movable objects in the vicinity of the mobile device 122.
  • the identities of the movable objects are determined using image data collected by the image acquisition device 130, which may include camera 209.
  • the identities of the movable objects may be determined using any of the image processing techniques discussed above.
  • the image data may be periodic images of a scene (e.g., a video stream).
  • the controller 200 is configured to track movements of the movable objects over time in the periodic images of the scene.
  • the controller 200 analyzes the identities of the plurality of movable objects and/or the movements of the movable objects in order to select a landmark as one of the movable objects suitable to serve as a guidance reference point.
  • the controller 200 generates a guidance command based on the location of the navigation device and a location of the landmark.
  • Example guidance commands include “the entrance is behind the woman with the red hat,” “turn right just past the orange construction barrels,” and “turn down the alley in front of the Volkswagen Beetle.” In alternative embodiments, tracking is not used.
  • the movable object is identified regardless of whether it has recently been moving or not.
  • the controller 200 is configured to determine whether one of the movable objects should be used as part of a guidance command.
  • the controller may check the locations of the movable objects to estimate whether any of the movable objects block line of sight of a subsequent road segment, or a sign corresponding to a subsequent road segment. If the subsequent road segment is not visible to the user, the controller 200 may substitute a guidance command that references the movable object.
  • the reference to the movable object may be provided in addition to road segment information.
  • the controller 200 is configured to compare the available road segments and other map elements with respect to available movable objects to select the best option as a reference point in the route.
  • the comparison may depend on the distance of the map element and/or movable object to the intersection of the next turn-by-turn direction.
  • the comparison may depend on the suitability characteristics of the movable object.
  • the comparison may depend on a user input for a variable setting defining a preference (e.g., disable movable object references, favor movable object references, or a neutral preference for movable object references).
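  • A small sketch of the substitution logic above: when the sign for the next road segment is judged to be occluded, reference a movable object instead, subject to the user's preference setting. The enum values follow the three preferences listed; the occlusion flag is assumed to come from the line-of-sight check described above.

```python
from enum import Enum

class MovablePreference(Enum):
    DISABLE = 0
    NEUTRAL = 1
    FAVOR = 2

def choose_reference(street_name, sign_occluded, movable_landmark, pref):
    if movable_landmark is None or pref is MovablePreference.DISABLE:
        return street_name
    if pref is MovablePreference.FAVOR or sign_occluded:
        return movable_landmark            # e.g. "the parked truck"
    return street_name

print(choose_reference("Elm Street", True, "the parked truck",
                       MovablePreference.NEUTRAL))   # -> "the parked truck"
```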
  • the controller 200 is configured to compare several (e.g., all known) movable objects to determine the best movable object to be incorporated into the guidance command.
  • the comparison may be based on one or more suitability characteristics.
  • Suitability characteristics include the identity of the movable object, the time elapsed that the movable object has been stationary, the location of the movable object, a degree to which the movable object stands out from the surroundings, or other features. Different features may be combined into one suitability value for a given object.
  • Each suitability characteristic of each movable object may be associated with a suitability characteristic value, which may be a ranking, a percentile, or another number.
  • the identity of the movable object may be any of a predetermined set of movable object identities stored in the memory 201 .
  • Example objects in the predetermined set of movable objects include people, cars, trucks, bicycles, industrial trucks, construction signage, and other objects.
  • the controller 200 may be configured to identify the identity of the movable object using computer vision or another computer recognition algorithm.
  • the time elapsed that the movable object has been stationary may be calculated by comparing subsequent video images over time. For some types of objects, a prolonged time period of remaining stationary may predict that the object is expected to remain stationary in the near future. Examples include parked cars and temporary signs. For other types of objects, the amount of time an object has been stationary may not predict whether the object will remain stationary.
  • the controller 200 may be configured to compare the amount of time a movable object has been stationary to a predetermined threshold. The predetermined threshold may be dependent on the identity of the object.
  • the location of the movable object affects whether the movable object should be used in a guidance command. The farther the movable object is from the intersection of the turn-by-turn instruction, the harder the instruction is to follow. Likewise, the farther the movable object is from the user, the more difficulty the user may have in locating the movable object.
  • the controller 200 may be configured to compare the distance of the movable objects to a location of the user to a predetermined user distance threshold and to compare the distance of the movable object to a location of the subsequent route intersection point to a predetermined intersection distance threshold.
  • Selection of the guidance reference point may involve a ranking of possible landmarks according to the comparison of distances, the amount of time the movable objects have been stationary, or the identity of the movable objects. A combination of any of these factors may be used.
  • the controller 200 may be configured to apply a weighting system. For example, a weight for distance, a weight for time, and a weight for identity may each be selected as a fraction value (e.g., between 0 and 1), such that the three weights add up to 1.0.
  • the weights may be selected by the user or an administrator. Alternatively, the weights may be variable and change over time according to a learning algorithm based on the performance of the navigation system (e.g., the number of reroutes or wrong turns).
  • the controller 200 may be configured to select the highest ranking movable object or a set of highest ranking movable objects to be a landmark in the route. Alternatively, the controller 200 may compare the suitability characteristic values of two movable objects and select the higher suitability characteristic value.
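  • A sketch of the weighting scheme described above: three fractional weights summing to 1.0 combine distance, stationary time, and identity into a single suitability value, and the highest-scoring object is selected. The particular weights, normalization constants, and identity scores are illustrative assumptions.

```python
WEIGHTS = {"distance": 0.4, "time": 0.3, "identity": 0.3}   # must sum to 1.0
IDENTITY_SCORE = {"parked truck": 1.0, "person": 0.6, "bicycle": 0.5}

def suitability(obj):
    distance_score = max(0.0, 1.0 - obj["distance_m"] / 200.0)  # closer scores higher
    time_score = min(1.0, obj["stationary_s"] / 600.0)          # saturate at 10 minutes
    identity_score = IDENTITY_SCORE.get(obj["type"], 0.2)
    return (WEIGHTS["distance"] * distance_score
            + WEIGHTS["time"] * time_score
            + WEIGHTS["identity"] * identity_score)

def best_landmark(objects):
    return max(objects, key=suitability) if objects else None

candidates = [
    {"type": "parked truck", "distance_m": 30, "stationary_s": 900},
    {"type": "person", "distance_m": 10, "stationary_s": 20},
]
print(best_landmark(candidates)["type"])   # -> "parked truck"
```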
  • Additional suitability characteristics may include the type of recent movements of the movable object or an intended path of the at least one movable object.
  • the types of recent movements of the movable object may identify actions that are likely to indicate that the movable object will be relatively stationary for an amount of time sufficient for use as the guidance command.
  • Example recent movements could be a person joining a line on the sidewalk (e.g., for a restaurant, event, etc.), a car coming to a stop, a car that a person just exited, or other recent movements.
  • the suitability of the movable object may be determined by a probabilistic algorithm. The algorithm may base future decisions on the results of past decisions.
  • the controller 200 may be configured to consider the intended path of the movable object. For example, the controller 200 estimates future locations of the movable object based on past velocity and/or acceleration. The future location of the movable object at the time the mobile device 122 reaches the intersection in the route may be used in the route guidance command. Examples include “turn to follow the red car that just passed you,” “you are going the right direction if a person wearing a yellow raincoat just crossed your path,” and “follow the big haul truck ahead of you.”
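  • A sketch of the intended-path estimate mentioned above: extrapolate the object's position to the time the mobile device is expected to reach the next intersection, using the object's recent velocity and, optionally, acceleration. The constant-acceleration kinematic model is an assumption made for simplicity.

```python
def predict_position(pos, velocity, eta_s, acceleration=(0.0, 0.0)):
    """pos, velocity, acceleration are (x, y) tuples in meters, m/s, and m/s^2."""
    x = pos[0] + velocity[0] * eta_s + 0.5 * acceleration[0] * eta_s ** 2
    y = pos[1] + velocity[1] * eta_s + 0.5 * acceleration[1] * eta_s ** 2
    return (x, y)

# e.g. a car 50 m ahead moving at 5 m/s, reached by the user in about 8 seconds
print(predict_position((0.0, 50.0), (0.0, 5.0), eta_s=8.0))   # -> (0.0, 90.0)
```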
  • the camera 209 is an image acquisition device configured to collect data indicative of the plurality of movable objects.
  • the camera 209 may be integrated in the mobile device 122 as shown by FIG. 3, or the camera 209 may be externally mounted to a vehicle.
  • the camera 209 may be an optical camera, light detection and ranging (LIDAR) device, or other type of camera.
  • the communication interface 205 is configured to receive the collected data from the image acquisition device 130.
  • the communication interface 205 is configured to receive movable object data.
  • the movable object data may include an identity field and a location field for each of the movable objects.
  • the positioning circuitry 207 may include a Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), or a cellular or similar position sensor for providing location data.
  • the positioning system may utilize GPS-type technology, a dead reckoning-type system, cellular location, or combinations of these or other systems.
  • the positioning circuitry 207 may include suitable sensing devices that measure the traveling distance, speed, direction, and so on, of the mobile device 122.
  • the positioning system may also include a receiver and correlation chip to obtain a GPS signal.
  • the one or more detectors or sensors may include an accelerometer built or embedded into or within the interior of the mobile device 122.
  • the accelerometer is operable to detect, recognize, or measure the rate of change of translational and/or rotational movement of the mobile device 122.
  • the mobile device 122 receives location data from the positioning system.
  • the location data indicates the location of the mobile device 122.
  • the input device 203 may be one or more buttons, keypad, keyboard, mouse, stylus pen, trackball, rocker switch, touch pad, voice recognition circuit, or other device or component for inputting data to the mobile device 122.
  • the input device 203 and the display 211 may be combined as a touch screen, which may be capacitive or resistive.
  • the display 211 may be a liquid crystal display (LCD) panel, light emitting diode (LED) screen, thin film transistor screen, or another type of display.
  • FIG. 4 illustrates an exemplary server 125 of the navigation system of FIG. 2.
  • the server 125 includes a processor 300, a communication interface 305, and a memory 301.
  • the server 125 may be coupled to a database 123.
  • the database 123 may be a geographic database as discussed above. Additional, different, or fewer components may be provided.
  • the processor 300, through the communication interface 305, is configured to receive data indicative of a current location of the mobile device 122 and data indicative of a movable object in a vicinity of the current location of the mobile device 122.
  • the data indicative of the current location of the mobile device 122 is generated by the position circuitry 207 of the mobile device 122.
  • the data indicative of the movable object in the vicinity of the current location of the mobile device 122 may be raw image data as collected by the image acquisition device 130 or data processed by the processor 300.
  • the processor 300 is also configured to compare any of the suitability characteristic values above.
  • the comparison may involve one or more movable objects and one or more nonmovable objects.
  • the nonmovable objects may be map elements such as road segments, buildings, and natural features.
  • the processor 300 is configured to select the movable object for the guidance command based on the suitability characteristic values.
  • the guidance command is also selected based on the current location of the mobile device 122.
  • the guidance command may include a visible aspect description of the movable object so that the user can easily locate the movable object.
  • the conspicuousness of the visible aspect may be measured by a prominence value (e.g., a scale from 0 to 10).
  • the visible aspect may be a color, a size, an accessory or any other descriptor that can identify the movable object.
  • Example guidance commands that reference a movable object and a visible aspect include “turn right just past the yellow car,” “follow the tall woman with the red dress,” or “turn left by the people with the stroller.”
  • the processor may also analyze the visible aspects of the movable objects in determining whether the movable objects are suitable to be referenced in a guidance command.
  • the suitability characteristics values above may include a value that indicates the existence of a visible aspect or the effectiveness of a visible aspect.
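  • One possible way to turn the selected object, its most prominent visible aspect, and the maneuver into a guidance phrase is sketched below. The 0-to-10 prominence scale follows the description above; the phrase templates themselves are assumptions for illustration.

```python
def most_prominent_aspect(aspects):
    """aspects: list of (description, prominence 0-10); returns best text or ''."""
    if not aspects:
        return ""
    description, _prominence = max(aspects, key=lambda a: a[1])
    return description + " "

def guidance_phrase(maneuver, obj_type, aspects):
    return f"{maneuver} just past the {most_prominent_aspect(aspects)}{obj_type}"

print(guidance_phrase("turn right", "car", [("yellow", 8), ("two-door", 3)]))
# -> "turn right just past the yellow car"
```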
  • the communication interface 305 is configured to receive data indicative of the location of the mobile device 122 from the mobile device 122.
  • the communication interface 305 is configured to receive data indicative of the location of the landmark from the image acquisition device 130.
  • the controller 200 and/or processor 300 may include a general processor, digital signal processor, an application specific integrated circuit (ASIC), field programmable gate array (FPGA), analog circuit, digital circuit, combinations thereof, or other now known or later developed processor.
  • the controller 200 and/or processor 300 may be a single device or combinations of devices, such as associated with a network, distributed processing, or cloud computing.
  • the memory 201 and/or memory 301 may be a volatile memory or a non-volatile memory.
  • the memory 201 and/or memory 301 may include one or more of a read only memory (ROM), random access memory (RAM), a flash memory, an electronic erasable program read only memory (EEPROM), or other type of memory.
  • the memory 201 and/or memory 301 may be removable from the mobile device 122, such as a secure digital (SD) memory card.
  • the communication interface 205 and/or communication interface 305 may include any operable connection.
  • An operable connection may be one in which signals, physical communications, and/or logical communications may be sent and/or received.
  • An operable connection may include a physical interface, an electrical interface, and/or a data interface.
  • the communication interface 205 and/or communication interface 305 provides for wireless and/or wired communications in any now known or later developed format.
  • FIGS. 5 and 6 illustrate possible implementations of the image acquisition system 130 of FIG. 2 .
  • security cameras 501 view the geographic region shown in FIG. 1.
  • the cameras 501 may be traffic cameras, security cameras, or cameras specifically tailored to track movable objects in the geographic region.
  • the cameras 501 collect camera data and transmit the camera data to the server 125.
  • the cameras 501 are at known geographical locations for relating to the route.
  • the angle of view of the cameras 501 may be fixed and known or may be sensed, indicating at least a general location of viewed objects.
  • a camera 601 is mounted on the vehicle 105 or incorporated with the mobile device 122.
  • Data from the camera 601 is transmitted to the mobile device 122 or the server 125 for image processing, as discussed above.
  • the camera 601 may have a viewing angle that includes movable objects in the vicinity of the vehicle 105.
  • several vehicles are equipped with cameras.
  • Data collected by a camera associated with another vehicle, such as vehicles 103, is transmitted to the server 125 and used in the analysis of movable objects soon to be in the vicinity of vehicle 105.
  • the mobile devices act as proxies collecting camera data that is used for the dynamic natural guidance of other mobile devices.
  • the location of the camera 601 is known by the position sensor.
  • a compass or other directional sensor may indicate direction of view.
  • FIG. 7 illustrates an exemplary augmented reality system including a mobile device 701.
  • the mobile device 701 executes an augmented reality application stored in memory.
  • the augmented reality application enhances a user's view of the real world with virtual content.
  • the virtual content is displayed in a layer above the real world content, which is captured by the camera.
  • video of the road and cars 703 is real world content captured by the camera, over which the virtual content 705 is displayed.
  • the mobile device 701 or server 125 is configured to augment the image of the movable object displayed on the mobile device 701 by adding the virtual content 705.
  • the virtual content 705 may be a star, another shape, a discoloration, or a highlight.
  • the virtual content may highlight a movable object on the display to provide a guidance command.
  • the guidance command may be “follow the highlighted car” or “turn left with the car marked with a star.”
  • Additional virtual content may include hyperlinks to additional information regarding the business.
  • graphics are added to a two-dimensional map or displayed route for highlighting a relative position, shape, and/or other characteristic of a movable object.
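  • The sketch below shows one way the highlight described above could be drawn over the selected movable object before the frame is shown on the display. OpenCV drawing calls are used only for illustration; the disclosure does not prescribe a particular graphics API.

```python
import cv2

def highlight_object(frame, box, label="follow the highlighted car"):
    """box is (x, y, w, h) of the selected movable object in pixel coordinates."""
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), thickness=3)
    cv2.drawMarker(frame, (x + w // 2, max(y - 10, 0)), (0, 255, 255),
                   markerType=cv2.MARKER_STAR, markerSize=20, thickness=2)
    cv2.putText(frame, label, (x, y + h + 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
    return frame
```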
  • FIG. 8 illustrates a flowchart for dynamic natural guidance.
  • the acts of the flowchart may be performed by any combination of the server 125 and the mobile device 122 , and the term controller may refer to the processor of either of the devices. Additional, different, or fewer acts may be provided.
  • the controller receives data indicative of a location of the mobile device 122.
  • the location data may be determined based on GPS or cellular triangulation.
  • the location data may be received at predetermined intervals or when the user accesses the guidance application.
  • the controller receives camera data indicative of one or more movable objects in the vicinity of the mobile device.
  • a movable object may be defined as anything that is not a fixture or not permanently static.
  • the controller may access the camera data based on the location data.
  • the camera data includes images, and the controller analyzes the images to determine the identity and/or characteristics of the one or more movable objects.
  • the camera data includes a list of identities and characteristics for the one or more movable objects paired with locations of the movable objects.
  • the controller generates a guidance command based on the location of the mobile device.
  • the guidance command references the one or more movable objects in the vicinity of the mobile device.
  • the guidance command may state “turn in front of the yellow truck,” “follow the man in the blue suede shoes,” or “head toward the billboard with the sports car.”
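  • A minimal end-to-end sketch of the flow of FIG. 8: receive the device location, obtain movable-object records for its vicinity, and emit a guidance command that references one of them, falling back to map elements when nothing suitable is nearby. The helper names and the 100-meter cut-off are assumptions, not the claimed implementation.

```python
def dynamic_natural_guidance(device_location, get_movable_objects, maneuver):
    lat, lon = device_location                       # data indicative of device location
    objects = get_movable_objects(lat, lon)          # camera / movable-object data
    nearby = [o for o in objects if o["distance_m"] <= 100]
    if not nearby:
        return f"{maneuver} onto the next street"    # fall back to map elements
    landmark = min(nearby, key=lambda o: o["distance_m"])
    return f"{maneuver} just past the {landmark['type']}"

fake_feed = lambda lat, lon: [{"type": "parked truck", "distance_m": 40}]
print(dynamic_natural_guidance((41.8781, -87.6298), fake_feed, "turn left"))
```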
  • FIG. 9 illustrates another flowchart for dynamic natural guidance.
  • the acts of the flowchart may be performed by any combination of the server 125 and the mobile device 122 , and the term controller may refer to the processor of either of the devices. Additional, different, or fewer acts may be provided.
  • the controller determines the identities of movable objects.
  • the controller is configured to execute an image processing application or a computer vision application.
  • the controller may use feature extraction or other image recognition techniques to determine the types of objects in an image or video.
  • the controller tracks movements of the movable objects over time. For example, the controller may measure an amount of time that each of the movable objects is stationary.
  • the controller analyzes the identities of the movable objects and the movements of the movable objects.
  • the controller selects a landmark from the movable objects.
  • the selection is based on the identities of the movable objects and the movements of the movable objects. For example, the controller may access a lookup table stored in memory that pairs a threshold time for the various types of movable objects.
  • a first type of movable object, such as a truck, may be paired with a first threshold time, such as one hour.
  • a second type of movable object, such as a person, may be paired with a second threshold time, such as a minute.
  • the controller may be configured to select cars as movable objects for the guidance command when the cars have been stationary for more than the first threshold time and select people as movable objects for the guidance command when the people have been stationary for more than the second threshold time.
  • the movable object may be moving.
  • the controller may be configured to rank the possible movable objects based on their appearance. For example, bright colors or significantly oversized or undersized movable objects may be selected.
  • the controller generates a guidance command that references the landmark.
  • the guidance command is based on the location of the navigation device and the location of the landmark. For example, in addition to the identities of the movable objects and the recent movement of the movable objects, the controller may compare the locations of the navigation device and the movable objects. The controller may calculate a distance between the navigation device and each of a set of potential movable objects. If the distance is less than a threshold distance (e.g., 10 meters, 50 meters), the controller selects the movable object for possible inclusion in the navigation command.
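  • A sketch of the lookup-table selection of FIG. 9: each movable-object type is paired with its own stationary-time threshold, and only objects that also satisfy the distance test are kept, with the closest one chosen as the landmark. The one-hour and one-minute thresholds mirror the examples above; the remaining values are assumptions.

```python
STATIONARY_THRESHOLD_S = {"truck": 3600, "car": 3600, "person": 60}
MAX_DISTANCE_M = 50

def select_landmark(movable_objects):
    suitable = [
        o for o in movable_objects
        if o["stationary_s"] >= STATIONARY_THRESHOLD_S.get(o["type"], 600)
        and o["distance_m"] <= MAX_DISTANCE_M
    ]
    # Prefer the closest suitable object as the guidance reference point.
    return min(suitable, key=lambda o: o["distance_m"], default=None)
```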
  • the network 127 may include wired networks, wireless networks, or combinations thereof.
  • the wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, or WiMax network.
  • the network 127 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols.
  • While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions.
  • the term “computer-readable medium” shall also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein.
  • the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tapes or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
  • dedicated hardware implementations such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein.
  • Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems.
  • One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
  • the methods described herein may be implemented by software programs executable by a computer system.
  • implementations can include distributed processing, component/object distributed processing, and parallel processing.
  • virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • circuitry refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) to combinations of circuits and software (and/or firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry applies to all uses of this term in this application, including in any claims.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in server, a cellular network device, or other network device.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • embodiments of the subject matter described in this specification can be implemented on a device having a display, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • one or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept.
  • although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown.
  • This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

In one embodiment, dynamic natural guidance is generated for a route between an origin and a destination. A controller receives data indicative of a location of the mobile device and data indicative of at least one movable object detected in a vicinity of the mobile device. The data indicative of at least one movable object may be collected by a camera and analyzed. The analysis may include one or more of image processing techniques, temporal measurement, and tracking of movable objects. The controller generates a guidance command based on the location of the mobile device. The guidance command references the at least one movable object detected in the vicinity of the mobile device.

Description

    CROSS REFERENCE TO OTHER APPLICATIONS
  • This application is a continuation under 35 U.S.C. § 120 and 37 C.F.R. § 1.53(b) of U.S. patent application Ser. No. 13/538,227 filed Jun. 29, 2012, which is hereby incorporated by reference in its entirety.
  • TECHNICAL FIELD
  • The following disclosure relates to operating a navigation system, and more particularly to providing route information using dynamic natural guidance along a route.
  • BACKGROUND
  • Navigation systems provide end users with various navigation-related functions and features. For example, some navigation systems are able to determine an optimum route to travel along a road network from an origin location to a destination location in a geographic region. Using input from the end user, the navigation system examines potential routes between the origin location and the destination location to determine the optimum route. The navigation system may then provide the end user with information about the optimum route in the form of guidance that identifies the maneuvers required to be taken by the end user to travel from the origin to the destination location. Some navigation systems are able to show detailed maps on displays outlining the route, the types of maneuvers to be taken at various locations along the route, locations of certain types of features, and so on.
  • In order to provide these and other navigation-related functions and features, navigation systems use geographic data. The geographic data may be in the form of one or more geographic databases that include data representing physical features in the geographic region. The represented geographic features may include one-way streets, position of the roads, speed limits along portions of roads, address ranges along the road portions, turn restrictions at intersections of roads, direction restrictions, such as one-way streets, and so on. Additionally, the geographic data may include points of interests, such as businesses, facilities, restaurants, hotels, airports, gas stations, stadiums, police stations, and so on.
  • Although navigation systems provide many important features, there continues to be room for new features and improvements. One area in which there is room for improvement relates to providing guidance to follow a route. The geographic data used in conventional navigation systems is static. The geographic data is stored ahead of time in the database representing physical features that do not generally change over time. The optimal landmarks or features in the geographic region to provide the navigation-related functions and features may not be included in the conventional geographic database.
  • SUMMARY
  • In one embodiment, dynamic natural guidance is generated for a route between an origin and a destination. A controller receives data indicative of a location of a mobile device and data indicative of at least one movable object detected in a vicinity of the mobile device. The data indicative of at least one movable object may be collected by a camera and analyzed. The analysis may include one or more of image processing techniques, temporal measurement, and tracking of movable objects. The controller generates a guidance command based on the location of the mobile device. The guidance command references the at least one movable object detected in the vicinity of the mobile device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Exemplary embodiments of the present invention are described herein with reference to the following drawings.
  • FIG. 1 illustrates an exemplary geographic region.
  • FIG. 2 illustrates an exemplary navigation system.
  • FIG. 3 illustrates an exemplary mobile device of the navigation system of FIG. 2.
  • FIG. 4 illustrates an exemplary server of the navigation system of FIG. 2.
  • FIG. 5 illustrates one embodiment of an image acquisition system in a geographic region.
  • FIG. 6 illustrates another embodiment of an image acquisition system in a geographic region.
  • FIG. 7 illustrates an exemplary augmented reality system.
  • FIG. 8 illustrates a first example flowchart for dynamic natural guidance.
  • FIG. 9 illustrates a second example flowchart for dynamic natural guidance.
  • DETAILED DESCRIPTION
  • The disclosed embodiments relate to presenting dynamic natural guidance. The term guidance refers to a set of navigation instructions that reference map elements such as roads and distances (e.g., "turn right on Elm Street in 500 feet"). Natural guidance refers to other elements outside of the map elements but in the vicinity of the user (e.g., "turn right at the fountain" and "turn left past the coffee shop"), and allows navigation systems to expand into unconventional areas, such as malls and office buildings. Natural guidance may be defined as a turn-by-turn experience encompassing multiple attributes and relations that describe the user's environment and context (e.g., landmarks) through more natural, environmental, and intuitive triggers. Guidance messages formed using natural guidance may provide details of contextual elements, such as landmarks, surrounding decision points such as points of interest, cartographic features, and traffic signals and/or stop signs.
  • The term dynamic natural guidance refers to a set of navigation instructions that reference elements in the vicinity of the user that are movable. An element that is movable is any object whose geographic position may change. Movable objects include but are not limited to people, vehicles, animals, mobile homes, temporary or semi-permanent structures, temporary road signs, and other objects. In addition, movable objects may be defined to include objects whose appearance changes, such as a rotating or otherwise changing billboard or sign. A natural guidance system identifies the optimal landmarks or features in the geographic region, which may be movable objects or static objects, to provide navigation-related functions and features to the user.
  • FIG. 1 illustrates an exemplary geographic region including two roads. Road 102 is the current path in the route and road 104 is designated as the next path in the route. Traveling along road 102 are various vehicles 103 and an object vehicle 105. As an example, road 102 may be referred to as Main Street and road 104 may be referred to as Elm Street. A navigation system in vehicle 105 conveys to a user that the next turn in the route is onto Elm Street. However, the street sign 110 for Elm Street is not visible to the occupants of vehicle 105 because a parked truck 101 is obstructing the view of street sign 110. Accordingly, rather than referring to Elm Street, the navigation system conveys to the user to make the first left turn past the parked truck.
  • The location of the parked truck 101 is determined from collected data. The collected data may be obtained by a camera. The camera may be a security camera from a nearby building, a traffic camera, a satellite camera, an aerial camera, a camera coupled to the navigation system and/or the vehicle 105, or another type of camera. The phrase "coupled with" is defined to mean directly connected to or indirectly connected through one or more intermediate components. Such intermediate components may include both hardware and software based components.
  • Data from several types of cameras may be combined into a continuous or semi-continuous database. The cameras may also include indoor cameras that capture data relating to indoor environments. For example, retail cameras track individual shoppers in a store to analyze dwell time or detect shoplifting, amusement park cameras track people to manipulate crowd flow, and airport cameras track travelers to identify suspicious activity. Each of these examples also includes data related to movable objects with the potential to provide reference points for guidance maneuvers.
  • FIG. 2 illustrates an exemplary navigation system 120. The navigation system 120 includes a map developer system 121, a mobile device 122, an image acquisition device 130 and a network 127. The developer system 121 includes a server 125 and a database 123. The developer system 121 may include computer systems and networks of a system operator (e.g., NAVTEQ or Nokia Corp.). The mobile device 122 is a navigation apparatus configured to present dynamic natural guidance to the user. The mobile device 122 is a smart phone, a mobile phone, a personal digital assistant ("PDA"), a tablet computer, a notebook computer, a personal navigation device ("PND"), a portable navigation device, and/or any other known or later developed portable or mobile device. The mobile device 122 includes one or more detectors or sensors as a positioning system built or embedded into or within the interior of the mobile device 122. Alternatively, the operation of the mobile device 122 without a purpose-based position sensor is used by the positioning system (e.g., cellular triangulation). The mobile device 122 receives location data from the positioning system. The mobile device may be referred to as a probe.
  • The image acquisition device 130 may include a video camera, a still camera, a thermographic camera, an infrared camera, a light detection and ranging (LIDAR) device, electric field sensor, ultrasound sensor, or another type of imaging device. The image acquisition device 130 generates camera data that represents the objects in a scene. The image acquisition device 130 may be coupled to any one of the map developer 121 or the mobile device 122 directly by way of a wired or wireless connection 128 or through the network 127. The mobile device 122 receives data indicative of at least one movable object in a vicinity of the mobile device 122 from the image acquisition device 130, which may or may not be processed by the server 125. The movable object may be any identifiable object that can be referenced in a guidance command and identified by the user. Example movable objects include parked cars, pedestrians, or billboards.
  • In one embodiment, the server 125 analyzes the data from the image acquisition device 130. The server 125 identifies movable objects in the data. For example, the server 125 may be configured to perform an image processing algorithm on the data collected by the image acquisition device 130. The image processing algorithm may incorporate one or more of edge detection, object recognition, facial recognition, optical character recognition, or feature extraction. Edge detection identifies changes in brightness, which corresponds to discontinuities in depth, materials, or surfaces in the image. Object recognition identifies an object in an image using a set of templates for possible objects. The template accounts for variations in the same object based on lighting, viewing direction, and/or size. Facial recognition extracts features of a face of a person in an image in order to identify particular attributes of the person. The attributes may include gender, age, race, or a particular identity of the person. Optical character recognition may identify one or more alphanumeric characters on the movable object. Feature extraction reduces an image into a set of feature vectors in order to identify an object in the image.
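  • By way of a non-limiting illustration, the following Python sketch shows one way the edge detection and template-based object recognition described above might be chained using the OpenCV library; the file names, template set, and match threshold are assumptions for the example only.

        # Illustrative sketch of the image analysis described above: edge detection
        # followed by template-based object recognition. File names are placeholders.
        import cv2

        def detect_movable_objects(frame_path, templates, match_threshold=0.8):
            """Return (label, score, top_left) tuples for template matches in a frame."""
            frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
            edges = cv2.Canny(frame, 100, 200)  # brightness discontinuities (edges)
            detections = []
            for label, template_path in templates.items():
                template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
                result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
                _, score, _, top_left = cv2.minMaxLoc(result)  # best match location
                if score >= match_threshold:  # assumed threshold for a usable match
                    detections.append((label, score, top_left))
            return edges, detections

        # Example usage with hypothetical inputs:
        # edges, objects = detect_movable_objects(
        #     "traffic_cam_0001.png", {"parked_truck": "truck_template.png"})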
  • The server 125 may utilize computer vision techniques. The computer vision techniques may include image acquisition, motion analysis, feature extraction, and detection. Computer vision approximates the abilities of human vision. Computer vision techniques may involve the analysis of a three dimensional space from a set of two dimensional images. Alternatively, two-dimensional image processing is used.
  • From one or more of the analysis techniques above, the server 125 identifies one or more objects from the data collected by the image acquisition device 130. The collected and/or camera data, once analyzed to identify movable objects, may be referred to as movable object data. The identification of an object may involve a suitability characteristic that determines whether the object is suitable to be used as a landmark to be referenced in a guidance command. In one example, objects suitable to be referenced for guidance may include people and vehicles. In another example, objects suitable to be referenced for guidance may include people standing in line, uniformed people (e.g., police officer, security guard, traffic controller, or traffic guard), secured bicycles, or parked vehicles. Color, text, type, or other identifiable differences of one object to other local objects may indicate that the object is suitable.
  • In one embodiment, the suitability of the movable object is determined by the server 125 based on the identity of the object. In another embodiment, suitable objects are further analyzed over time. For example, after the movable objects have been identified, the server 125 tracks the position of the movable objects. The server 125 may record the location of the movable objects in order to determine how long the movable objects have been stationary. The server 125 may deem a movable object that has been recently stationary for more than a predetermined time period to be suitable to be referenced for guidance. Example predetermined time periods include but are not limited to 10 seconds, 1 minute, 10 minutes, or 1 hour. In another embodiment, the server 125 may omit the identification of the movable objects and rely on the tracking of the movable objects.
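  • As a non-limiting illustration, the tracking of stationary duration described above might be sketched in Python as follows; the identifiers, movement tolerance, and time period are assumptions chosen for the example.

        # Illustrative sketch: record how long each movable object has held its
        # position and treat it as suitable once it exceeds a time threshold.
        import math
        import time

        class StationaryTracker:
            def __init__(self, stationary_seconds=60.0, tolerance_m=1.0):
                self.stationary_seconds = stationary_seconds  # e.g. 10 s, 1 min, 10 min, 1 h
                self.tolerance_m = tolerance_m                # small motion treated as noise
                self._anchor = {}                             # object_id -> ((x, y), start_time)

            def update(self, object_id, position, now=None):
                """Return True when the object has been stationary long enough."""
                now = time.time() if now is None else now
                anchor = self._anchor.get(object_id)
                if anchor is None or math.dist(anchor[0], position) > self.tolerance_m:
                    self._anchor[object_id] = (position, now)  # moved: restart the clock
                    return False
                return (now - anchor[1]) >= self.stationary_seconds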
  • In one embodiment, the suitability of the movable object is determined by server 125 based on the location of the movable object. For example, movable objects near the destination location of the route may be used to reference the final maneuver of a route. The server 125 may be configured to calculate a distance between the movable object and a location along the route and compare the distance to a threshold distance. Example threshold distances include 10 feet, 10 meters, and 100 meters. In one embodiment, movable objects within the threshold distance may be selected to be referenced in a guidance command.
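  • A minimal sketch of the distance comparison described above, assuming object positions and route locations are expressed in a local metric frame, might look as follows; the threshold value is illustrative.

        # Illustrative sketch: keep only movable objects within a threshold
        # distance (e.g. 10 meters) of a route location such as the destination.
        import math

        def objects_within_threshold(objects, route_point, threshold_m=10.0):
            """objects: iterable of (object_id, (x, y)); route_point: (x, y)."""
            return [object_id for object_id, position in objects
                    if math.dist(position, route_point) <= threshold_m]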
  • The mobile device 122 displays a guidance command based on the location of the mobile device 122. The guidance command references the at least one movable object in the vicinity of the mobile device 122. The guidance command may include voice or text that states “turn left past the parked truck ahead,” “in the next intersection with traffic guards, turn right,” or “your destination is a few meters ahead next to the rack of secured bicycles.”
  • The database 123 of the navigation system 120 may be a geographic database. The locations of the movable objects may be incorporated into the geographic database 123. In addition, the geographic database 123 includes information about one or more geographic regions. Located in the geographic region are physical geographic features, such as roads, points of interest (including businesses, municipal facilities, etc.), lakes, rivers, railroads, municipalities, etc. A road network includes, among other things, roads and intersections located in the geographic region. Each road in the geographic region is composed of one or more road segments. A road segment represents a portion of the road. Each road segment is associated with two nodes (e.g., one node represents the point at one end of the road segment and the other node represents the point at the other end of the road segment). The node at either end of a road segment may correspond to a location at which the road meets another road, i.e., an intersection, or where the road dead-ends. The road segments may include sidewalks and crosswalks for travel by pedestrians.
  • The road segment data includes a segment ID by which the data record can be identified in the geographic database 123. Each road segment data record has associated with it information (such as “attributes”, “fields”, etc.) that describes features of the represented road segment. The road segment data record may include data that indicate a speed limit or speed category (i.e., the maximum permitted vehicular speed of travel) on the represented road segment. The road segment data record may also include data that indicate a classification such as a rank of a road segment that may correspond to its functional class.
  • The road segment data may include data identifying what turn restrictions exist at each of the nodes which correspond to intersections at the ends of the road portion represented by the road segment, the name or names by which the represented road segment is known, the length of the road segment, the grade of the road segment, the street address ranges along the represented road segment, the permitted direction of vehicular travel on the represented road segment, whether the represented road segment is part of a controlled access road (such as an expressway), a ramp to a controlled access road, a bridge, a tunnel, a toll road, a ferry, and so on.
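  • By way of a non-limiting illustration, a road segment data record carrying the attributes described above could be represented as in the following Python sketch; the field names are assumptions and do not reflect the actual database schema.

        # Illustrative record structure for a road segment; field names are hypothetical.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class RoadSegmentRecord:
            segment_id: str
            start_node_id: str
            end_node_id: str
            names: List[str] = field(default_factory=list)   # e.g. ["Elm Street"]
            speed_limit_kph: Optional[int] = None             # speed limit or category
            functional_class: Optional[int] = None            # rank / classification
            length_m: Optional[float] = None
            grade_pct: Optional[float] = None
            one_way: bool = False
            controlled_access: bool = False                    # expressway, ramp, etc.
            toll: bool = False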
  • The road segment data and the camera data or movable object data may be fused together seamlessly such that routing algorithms make no distinction between the types of landmarks. Alternatively, the collected data, camera data, or movable object data may be included in a separate database, for example, internal to the server 125 and/or the mobile device 122, or at an external location.
  • FIG. 3 illustrates an exemplary mobile device 122 of the navigation system of FIG. 2. The mobile device 122 may be referred to as a navigation device. The mobile device 122 includes a controller 200, a memory 201, an input device 203, a communication interface 205, position circuitry 207, a camera 209, and a display 211. Additional, different, or fewer components may be provided.
  • The mobile device 122 may receive the guidance command from the server 125, which has analyzed one or more scenes including movable objects acquired by the image acquisition device(s) 130. Alternatively, the analysis of the movable objects may occur at the mobile device 122. The memory 201 includes computer program code executed by the controller 200 to cause the mobile device 122 to generate dynamic natural guidance.
  • The controller 200 is configured to determine identities of a plurality of movable objects in the vicinity of the mobile device 122. The identities of the movable objects are determined using image data collected by the image acquisition device 130, which may include camera 209. The identities of the movable objects may be determined using any of the image processing techniques discussed above.
  • The image data may be periodic images of a scene (e.g., a video stream). The controller 200 is configured to track movements of the movable objects over time in the periodic images of the scene. The controller 200 analyzes the identities of the plurality of movable objects and/or the movements of the movable objects in order to select a landmark as one of the movable objects suitable to serve as a guidance reference point. The controller 200 generates a guidance command based on the location of the navigation device and a location of the landmark. Example guidance commands include "the entrance is behind the woman with the red hat," "turn right just past the orange construction barrels," and "turn down the alley in front of the Volkswagen Beetle." In alternative embodiments, tracking is not used. The movable object is identified regardless of whether it has recently been moving.
  • The controller 200 is configured to determine whether one of the movable objects should be used as part of a guidance command. The controller may check the locations of the movable objects to estimate whether any of the movable objects block line of sight of a subsequent road segment, or a sign corresponding to a subsequent road segment. If the subsequent road segment is not visible to the user, the controller 200 may substitute a guidance command that references the movable object. The reference to the movable object may be provided in addition to road segment information.
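  • A minimal two-dimensional sketch of the line-of-sight estimate described above, assuming planar coordinates in meters and an approximate object radius, might look as follows.

        # Illustrative sketch: treat a movable object as blocking the view of a sign
        # (or subsequent road segment) if it lies close to the straight line between
        # the device and the sign. All coordinates are assumed planar (x, y) in meters.
        import math

        def blocks_line_of_sight(device_xy, sign_xy, object_xy, object_radius_m=2.5):
            dx, dy = sign_xy[0] - device_xy[0], sign_xy[1] - device_xy[1]
            length_sq = dx * dx + dy * dy
            if length_sq == 0.0:
                return False
            # Project the object onto the device-to-sign segment, clamped to [0, 1].
            t = ((object_xy[0] - device_xy[0]) * dx +
                 (object_xy[1] - device_xy[1]) * dy) / length_sq
            t = max(0.0, min(1.0, t))
            closest = (device_xy[0] + t * dx, device_xy[1] + t * dy)
            return math.dist(closest, object_xy) <= object_radius_m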
  • In one embodiment, the controller 200 is configured to compare the available road segments and other map elements with respect to available movable objects to select the best option as a reference point in the route. The comparison may depend on the distance of the map element and/or movable object to the intersection of the next turn-by-turn direction. The comparison may depend on the suitability characteristics of the movable object. The comparison may depend on a user input or a variable setting defining a preference (e.g., disable movable object references, favor movable object references, or neutral preference for movable object references).
  • In another embodiment, the controller 200 is configured to compare several (e.g., all known) movable objects to determine the best movable object to be incorporated into the guidance command. The comparison may be based on one or more suitability characteristics. Suitability characteristics include the identity of the movable object, the time elapsed that the movable object has been stationary, the location of the movable object, a degree to which the movable object stands out from the surroundings, or other features. Different features may be combined into one suitability value for a given object. Each suitability characteristic of each movable object may be associated with a suitability characteristic value, which may be a ranking, a percentile, or another number.
  • The identity of the movable object may be any of a predetermined set of movable object identities stored in the memory 201. Example objects in the predetermined set of movable objects include people, cars, trucks, bicycles, industrial trucks, construction signage, and other objects. The controller 200 may be configured to identify the identity of the movable object using computer vision or another computer recognition algorithm.
  • The time elapsed that the movable object has been stationary may be calculated by comparing subsequent video images over time. For some types of objects, a prolonged time period of remaining stationary may predict that the object is expected to remain stationary in the near future. Examples include parked cars and temporary signs. For other types of objects, the amount of time an object has been stationary may not predict whether the object will remain stationary. The controller 200 may be configured to compare the amount of time a movable object has been stationary to a predetermined threshold. The predetermined threshold may be dependent on the identity of the object.
  • The location of the movable object affects whether the movable object should be used in a guidance command. The farther the movable object is from the intersection of the turn-by-turn instruction, the harder the instruction is to follow. Likewise, the farther the movable object is from the user, the more difficulty the user may have in locating the movable object. The controller 200 may be configured to compare the distance of the movable objects to a location of the user to a predetermined user distance threshold and to compare the distance of the movable object to a location of the subsequent route intersection point to a predetermined intersection distance threshold.
  • Selection of the guidance reference point may involve a ranking of possible landmarks according to the comparison of distances, the amount of time the movable objects have been stationary, or the identity of the movable objects. A combination of any of these factors may be used. The controller 200 may be configured to apply a weighting system. For example, a weight for distance, a weight for time, and a weight for identity may each be selected as a fraction value (e.g., between 0 and 1), such that the three weights add up to 1.0. The weights may be selected by the user or an administrator. Alternatively, the weights may be variable and change over time according to a learning algorithm based on the performance of the navigation system (e.g., the number of reroutes or wrong turns). The controller 200 may be configured to select the highest ranking movable object or a set of highest ranking movable objects to be a landmark in the route. Alternatively, the controller 200 may compare the suitability characteristic values of two movable objects and select the higher suitability characteristic value.
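  • The weighting scheme described above may be sketched as follows; the weight values and the assumption that each factor has already been normalized to the range 0 to 1 are illustrative only.

        # Illustrative sketch of the weighted ranking: fractional weights for
        # distance, time, and identity sum to 1.0, and the highest score wins.
        def rank_landmarks(candidates, w_distance=0.5, w_time=0.3, w_identity=0.2):
            """candidates: dicts with 'id', 'distance_score', 'time_score',
            'identity_score', each already normalized to the range 0..1."""
            assert abs(w_distance + w_time + w_identity - 1.0) < 1e-9
            scored = [(w_distance * c["distance_score"]
                       + w_time * c["time_score"]
                       + w_identity * c["identity_score"], c["id"])
                      for c in candidates]
            scored.sort(reverse=True)
            return scored  # highest-ranking movable object(s) first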
  • Additional suitability characteristics may include the type of recent movements of the movable object or an intended path of the at least one movable object. The types of recent movements of the movable object may identify actions that are likely to indicate that the movable object will be relatively stationary for an amount of time sufficient for use as the guidance command. Example recent movements could be a person joining a line on the sidewalk (e.g., for a restaurant, event, etc.), a car coming to a stop, a car that a person just exited, or other recent movements. In one example, the suitability of the movable object may be determined by a probabilistic algorithm. The algorithm may base future decisions on the results of past decisions.
  • The controller 200 may be configured to consider the intended path of the movable object. For example, the controller 200 estimates future locations of the movable object based on past velocity and/or acceleration. The future location of the movable object at the time the mobile device 122 reaches the intersection in the route may be used in the route guidance command. Examples include “turn to follow the red car that just passed you,” “you are going the right direction if a person wearing a yellow raincoat just crossed your path,” and “follow the big haul truck ahead of you.”
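  • A minimal sketch of the intended-path estimate described above, assuming a constant-acceleration model over planar coordinates, might look as follows.

        # Illustrative sketch: extrapolate where a movable object will be when the
        # mobile device reaches the maneuver, from its recent velocity and acceleration.
        def predict_position(position, velocity, acceleration, seconds_ahead):
            """position, velocity, acceleration are (x, y) tuples; constant-acceleration model."""
            t = seconds_ahead
            return (position[0] + velocity[0] * t + 0.5 * acceleration[0] * t * t,
                    position[1] + velocity[1] * t + 0.5 * acceleration[1] * t * t)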
  • The camera 209 is an image acquisition device configured to collect data indicative of the plurality of movable objects. The camera 209 may be integrated in the mobile device 122 as shown by FIG. 3, or the camera 209 may be externally mounted to a vehicle. The camera 209 may be an optical camera, light detection and ranging (LIDAR) device, or other type of camera.
  • In embodiments in which the mobile device 122 analyzes the collected data, the communication interface 205 is configured to receive the collected data from the image acquisition device 130. In embodiments in which the server 125 analyzes the collected data, the communication interface 205 is configured to receive movable object data. The movable object data may include an identity field and a location field for each of the movable objects.
  • The positioning circuitry 207 may include a Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), or a cellular or similar position sensor for providing location data. The positioning system may utilize GPS-type technology, a dead reckoning-type system, cellular location, or combinations of these or other systems. The positioning circuitry 207 may include suitable sensing devices that measure the traveling distance, speed, direction, and so on, of the mobile device 122. The positioning system may also include a receiver and correlation chip to obtain a GPS signal. Alternatively or additionally, the one or more detectors or sensors may include an accelerometer built or embedded into or within the interior of the mobile device 122. The accelerometer is operable to detect, recognize, or measure the rate of change of translational and/or rotational movement of the mobile device 122. The mobile device 122 receives location data from the positioning system. The location data indicates the location of the mobile device 122.
  • The input device 203 may be one or more buttons, keypad, keyboard, mouse, stylus pen, trackball, rocker switch, touch pad, voice recognition circuit, or other device or component for inputting data to the mobile device 122. The input device 203 and the display 211 may be combined as a touch screen, which may be capacitive or resistive. The display 211 may be a liquid crystal display (LCD) panel, light emitting diode (LED) screen, thin film transistor screen, or another type of display.
  • FIG. 4 illustrates an exemplary server 125 of the navigation system of FIG. 2. The server 125 includes a processor 300, a communication interface 305, and a memory 301. The server 125 may be coupled to a database 123. The database 123 may be a geographic database as discussed above. Additional, different, or fewer components may be provided.
  • The processor 300, through the communication interface 305, is configured to receive data indicative of a current location of the mobile device 122 and data indicative of a movable object in a vicinity of the current location of the mobile device 122. The data indicative of the current location of the mobile device 122 is generated by the position circuitry 207 of the mobile device 122. The data indicative of the movable object in the vicinity of the current location of the mobile device 122 may be raw image data as collected by the image acquisition device 130 or data processed by the processor 300.
  • The processor 300 is also configured to compare any of the suitability characteristic values above. The comparison may involve one or more movable objects and one or more nonmovable objects. The nonmovable objects may be map elements such as road segments, buildings, and natural features. When the movable object is more suitable to be referenced in the guidance command, the processor 300 is configured to select the movable object for the guidance command based on the suitability characteristic values. The guidance command is also selected based on the current location of the mobile device 122.
  • The guidance command may include a visible aspect description of the movable object so that the user can easily locate the movable object. The conspicuousness of the visible aspect may be measured by a prominence value (e.g., a scale from 0 to 10). The visible aspect may be a color, a size, an accessory or any other descriptor that can identify the movable object. Example guidance commands that reference a movable object and a visible aspect include “turn right just past the yellow car,” “follow the tall woman with the red dress,” or “turn left by the people with the stroller.”
  • The processor may also analyze the visible aspects of the movable objects in determining whether the movable objects are suitable to be referenced in a guidance command. The suitability characteristics values above may include a value that indicates the existence of a visible aspect or the effectiveness of a visible aspect.
  • The communication interface 305 is configured to receive data indicative of the location of the mobile device 122 from the mobile device 122. The communication interface 305 is configured to receive data indicative of the location of the landmark from the image acquisition device 130.
  • The controller 200 and/or processor 300 may include a general processor, digital signal processor, an application specific integrated circuit (ASIC), field programmable gate array (FPGA), analog circuit, digital circuit, combinations thereof, or other now known or later developed processor. The controller 200 and/or processor 300 may be a single device or combinations of devices, such as associated with a network, distributed processing, or cloud computing.
  • The memory 201 and/or memory 301 may be a volatile memory or a non-volatile memory. The memory 201 and/or memory 301 may include one or more of a read only memory (ROM), random access memory (RAM), a flash memory, an electronic erasable program read only memory (EEPROM), or other type of memory. The memory 201 and/or memory 301 may be removable from the mobile device 122, such as a secure digital (SD) memory card.
  • The communication interface 205 and/or communication interface 305 may include any operable connection. An operable connection may be one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a physical interface, an electrical interface, and/or a data interface. The communication interface 205 and/or communication interface 305 provides for wireless and/or wired communications in any now known or later developed format.
  • FIGS. 5 and 6 illustrate possible implementations of the image acquisition device 130 of FIG. 2. In FIG. 5, security cameras 501 view the geographic region shown in FIG. 1. The cameras 501 may be traffic cameras, security cameras, or cameras specifically tailored to track movable objects in the geographic region. The security cameras 501 collect camera data and transmit the camera data to the server 125. The cameras 501 are at known geographical locations for relating to the route. The angle of view of the cameras 501 may be fixed and known or may be sensed, indicating at least a general location of viewed objects.
  • In FIG. 6, a camera 601 is mounted on the vehicle 105 or incorporated with the mobile device 122. Data from the camera 601 is transmitted to the mobile device 122 or the server 125 for image processing, as discussed above. The camera 601 may have a viewing angle that includes movable objects in the vicinity of the vehicle 105. In another embodiment, several vehicles are equipped with cameras. Data collected in a camera associated with another vehicle, such as vehicles 103, is transmitted to the server 125 and used in the analysis of movable objects soon to be in the vicinity of vehicle 105. In other words, the mobile devices act as proxies collecting camera data that is used for the dynamic natural guidance of other mobile devices. The location of the camera 601 is known by the position sensor. A compass or other directional sensor may indicate direction of view.
  • FIG. 7 illustrates an exemplary augmented reality system including a mobile device 701. The mobile device 701 executes an augmented reality application stored in memory. The augmented reality application enhances a user's view of the real world with virtual content. The virtual content is displayed in a layer above the real world content, which is captured by the camera. As shown in FIG. 7, the video of the road and cars 703 is real world content captured by the camera, over which the virtual content 705 is displayed. The mobile device 701 or server 125 is configured to augment the image of the movable object displayed on the mobile device 701 by adding the virtual content 705. The virtual content 705 may be a star, another shape, a discoloration, or a highlight.
  • The virtual content may highlight a movable object on the display to provide a guidance command. For example, the guidance command may be “follow the highlighted car” or “turn left with the car marked with a star.” Additional virtual content may include hyperlinks to additional information regarding the business. In other embodiments, graphics are added to a two-dimensional map or displayed route for highlighting a relative position, shape, and/or other characteristic of a movable object.
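  • As a non-limiting illustration, overlaying such virtual content on a camera frame might be sketched with OpenCV as follows; the bounding box, color, and label are assumptions for the example.

        # Illustrative sketch: draw a highlight box and label over the selected
        # movable object in the camera frame to form the augmented reality overlay.
        import cv2

        def highlight_object(frame, bbox, label="follow this car"):
            """bbox: (x, y, width, height) in pixel coordinates of the selected object."""
            x, y, w, h = bbox
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 3)
            cv2.putText(frame, label, (x, max(0, y - 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)
            return frame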
  • FIG. 8 illustrates a flowchart for dynamic natural guidance. The acts of the flowchart may be performed by any combination of the server 125 and the mobile device 122, and the term controller may refer to the processor of either of the devices. Additional, different, or fewer acts may be provided.
  • At act S101, the controller receives data indicative of a location of the mobile device 122. The location data may be determined based on GPS or cellular triangulation. The location data may be received at predetermined intervals or when the user accesses the guidance application.
  • At act S103, the controller receives camera data indicative of one or more movable objects in the vicinity of the mobile device. A movable object may be defined as anything that is not a fixture or not permanently static. The controller may access the camera data based on the location data. In one example, the camera data includes images, and the controller analyzes the images to determine the identity and/or characteristics of the one or more movable objects. In another example, the camera data includes a list of identities and characteristics for the one or more movable objects paired with locations of the movable objects.
  • At act S105, the controller generates a guidance command based on the location of the mobile device. The guidance command references the one or more movable objects in the vicinity of the mobile device. For example, the guidance command may state “turn in front of the yellow truck,” “follow the man in the blue suede shoes,” or “head toward the billboard with the sports car.”
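  • One way to phrase such commands is from templates keyed on the maneuver and filled with the selected object's identity and visible aspect, as in the following sketch; the template strings are assumptions for illustration.

        # Illustrative sketch: build a guidance command from the maneuver type and
        # the selected movable object's identity and visible aspect.
        TEMPLATES = {
            "turn_left":  "turn left just past the {aspect} {identity}",
            "turn_right": "turn right in front of the {aspect} {identity}",
            "arrive":     "your destination is next to the {aspect} {identity}",
        }

        def build_guidance_command(maneuver, identity, aspect):
            return TEMPLATES[maneuver].format(aspect=aspect, identity=identity)

        # build_guidance_command("turn_right", "truck", "yellow")
        # -> "turn right in front of the yellow truck"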
  • FIG. 9 illustrates another flowchart for dynamic natural guidance. The acts of the flowchart may be performed by any combination of the server 125 and the mobile device 122, and the term controller may refer to the processor of either of the devices. Additional, different, or fewer acts may be provided.
  • At act S201, the controller determines the identities of movable objects. The controller is configured to execute an image processing application or a computer vision application. For example, the controller may use feature extraction or other image recognition techniques to determine the types of objects in an image or video.
  • At act S203, the controller tracks movements of the movable objects over time. For example, the controller may measure an amount of time that each of the movable objects is stationary. At act S205, the controller analyzes the identities of the movable objects and the movements of the movable objects.
  • At act S207, the controller selects a landmark from the movable objects. The selection is based on the identities of the movable objects and the movements of the movable objects. For example, the controller may access a lookup table stored in memory that pairs a threshold time for the various types of movable objects. A first type of movable object, such as a truck, may be paired with a first threshold time, such as one hour. A second type of movable object, such as a person, may be paired with a second threshold time, such as a minute. The controller may be configured to select cars as movable objects for the guidance command when the cars have been stationary for more than the first threshold time and select people as movable objects for the guidance command when the people have been stationary for more than the second threshold time.
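  • The lookup table described above might be sketched as follows; the object types and threshold values reflect the illustrative pairings given in the text.

        # Illustrative sketch: pair each movable object type with its own
        # stationary-time threshold and test whether an object qualifies.
        STATIONARY_THRESHOLDS_S = {
            "truck": 3600,   # first threshold time, e.g. one hour
            "person": 60,    # second threshold time, e.g. one minute
        }

        def usable_for_guidance(object_type, stationary_seconds):
            threshold = STATIONARY_THRESHOLDS_S.get(object_type)
            return threshold is not None and stationary_seconds >= threshold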
  • Alternatively, the movable object may be moving. The controller may be configured to rank the possible movable objects based on their appearance. For example, bright colors or significantly oversized or undersized movable objects may be selected.
  • At act S209, the controller generates a guidance command that references the landmark. The guidance command is based on the location of the navigation device and the location of the landmark. For example, in addition to the identities of the movable objects and the recent movement of the movable objects, the controller may compare the locations of the navigation device and the movable objects. The controller may calculate a distance between the navigation device and each of a set of potential movable objects. If the distance is less than a threshold distance (e.g., 10 meters, 50 meters), the controller selects the movable object for possible inclusion in the navigation command.
  • The network 127 may include wired networks, wireless networks, or combinations thereof. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, or WiMax network. Further, the network 127 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols.
  • While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein.
  • In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tapes or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
  • In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
  • In accordance with various embodiments of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
  • Although the present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP, HTTPS) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.
  • A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • As used in this application, the term ‘circuitry’ or ‘circuit’ refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) to combinations of circuits and software (and/or firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • This definition of ‘circuitry’ applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term “circuitry” would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in server, a cellular network device, or other network device.
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a device having a display, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
  • While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
  • Similarly, while operations are depicted in the drawings and described herein in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
  • The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
  • It is intended that the foregoing detailed description be regarded as illustrative rather than limiting and that it is understood that the following claims including all equivalents are intended to define the scope of the invention. The claims should not be read as limited to the described order or elements unless stated to that effect. Therefore, all embodiments that come within the scope and spirit of the following claims and equivalents thereto are claimed as the invention.

Claims (20)

We claim:
1. An apparatus comprising:
an image acquisition device configured to capture a series of image data describing one or more movable objects in a geographic area in a vicinity of a mobile device;
position circuitry configured to calculate a location of the mobile device associated with a road segment; and
a controller configured to track movement of the one or more movable objects in the series of image data and generate a guidance command based on the movement of the one or more movable objects and the road segment associated with the mobile device.
2. The apparatus of claim 1, wherein the controller is configured to determine whether one of the one or more movable objects is usable for the guidance command.
3. The apparatus of claim 2, wherein the controller is configured to determine visibility from the road segment to a subsequent road segment or a sign corresponding to a subsequent road segment.
4. The apparatus of claim 3, wherein the controller generates the guidance command in response to the visibility from the road segment to the subsequent road segment or the sign corresponding to the subsequent road segment.
5. The apparatus of claim 1, wherein the one or more movable objects include a first object and a second object, wherein the controller is configured to compare distances from the first object and the second object to a turn-by-turn direction included in the guidance command.
6. The apparatus of claim 1, wherein the controller is configured to select the one or more movable objects based on a suitability characteristic.
7. The apparatus of claim 6, wherein the suitability characteristic includes an identity of the one or more movable objects, a time elapsed that the movable object has been stationary, or a degree to which the one or more movable objects stand out from the surroundings.
8. The apparatus of claim 1, wherein the controller identifies a past velocity or acceleration of the one or more movable objects and selects the one or more movable objects for the guidance command based on the past velocity or acceleration.
9. The apparatus of claim 1, wherein the image acquisition device is a camera or a ranging device mounted to a vehicle.
10. A method comprising:
identifying a series of data describing one or more movable objects in a geographic area in a vicinity of a mobile device;
determining a location of the mobile device;
identifying a route from the location of the mobile device to a destination;
tracking movement of the one or more movable objects in the series of data; and
generating a guidance command based on the movement of the one or more movable objects and a reference point in the route from the location of the mobile device to the destination.
11. The method of claim 10, further comprising:
determining whether one of the one or more movable objects is usable for the guidance command.
12. The method of claim 11, wherein the location of the mobile device corresponds to a road segment, the method further comprising:
determining visibility from the road segment to a subsequent road segment or a sign corresponding to a subsequent road segment.
13. The method of claim 12, further comprising:
generating the guidance command in response to the visibility from the road segment to the subsequent road segment or the sign corresponding to the subsequent road segment.
14. The method of claim 10, further comprising:
comparing distances to a first object and a second object to a turn-by-turn direction included in the guidance command; and
selecting the first object based on distance to the turn-by-turn direction.
15. The method of claim 10, further comprising:
selecting the one or more movable objects based on a suitability characteristic including an identity of the movable object, a time elapsed that the movable object has been stationary, or a degree to which the one or more movable objects stand out from the surroundings.
16. The method of claim 10, further comprising:
identifying a past velocity or acceleration of the one or more movable objects; and
selecting the one or more movable objects for the guidance command based on the past velocity or acceleration.
17. A non-transitory computer readable medium including instructions that when executed are operable to:
identify a series of data describing one or more movable objects in a geographic area in a vicinity of a mobile device;
determine a location of the mobile device;
track movement of the one or more movable objects in the series of data; and
generate a guidance command based on the movement of the one or more movable objects and the location of the mobile device.
18. The non-transitory computer readable medium of claim 17, the instructions when executed are operable to:
identify a road segment associated with the location of the mobile device, wherein the guidance command is based on visibility from the road segment.
19. The non-transitory computer readable medium of claim 17, the instructions when executed are operable to:
compare distances to a first object and a second object to a turn-by-turn direction included in the guidance command; and
select the first object based on distance to the turn-by-turn direction.
20. The non-transitory computer readable medium of claim 17, the instructions when executed are operable to:
select the one or more movable objects based on a suitability characteristic including an identity of the one or more movable objects, a time elapsed that the one or more movable objects have been stationary, or a degree to which the one or more movable objects stand out from associated surroundings.
US16/430,032 2012-06-29 2019-06-03 Dynamic natural guidance Abandoned US20190287398A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/430,032 US20190287398A1 (en) 2012-06-29 2019-06-03 Dynamic natural guidance

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/538,227 US10325489B2 (en) 2012-06-29 2012-06-29 Dynamic natural guidance
US16/430,032 US20190287398A1 (en) 2012-06-29 2019-06-03 Dynamic natural guidance

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/538,227 Continuation US10325489B2 (en) 2012-06-29 2012-06-29 Dynamic natural guidance

Publications (1)

Publication Number Publication Date
US20190287398A1 (en) 2019-09-19

Family

ID=49778959

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/538,227 Active 2035-11-14 US10325489B2 (en) 2012-06-29 2012-06-29 Dynamic natural guidance
US16/430,032 Abandoned US20190287398A1 (en) 2012-06-29 2019-06-03 Dynamic natural guidance

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/538,227 Active 2035-11-14 US10325489B2 (en) 2012-06-29 2012-06-29 Dynamic natural guidance

Country Status (1)

Country Link
US (2) US10325489B2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2892228A1 (en) 2011-08-05 2015-07-08 Fox Sports Productions, Inc. Selective capture and presentation of native image portions
US11039109B2 (en) 2011-08-05 2021-06-15 Fox Sports Productions, Llc System and method for adjusting an image for a vehicle mounted camera
US9043140B2 (en) 2012-06-29 2015-05-26 Here Global B.V. Predictive natural guidance
SG11201508085QA (en) * 2013-03-29 2015-11-27 Nec Corp Target object identifying device, target object identifying method and target object identifying program
US9792050B2 (en) * 2014-08-13 2017-10-17 PernixData, Inc. Distributed caching systems and methods
US11758238B2 (en) 2014-12-13 2023-09-12 Fox Sports Productions, Llc Systems and methods for displaying wind characteristics and effects within a broadcast
US11159854B2 (en) 2014-12-13 2021-10-26 Fox Sports Productions, Llc Systems and methods for tracking and tagging objects within a broadcast
US10636308B2 (en) * 2016-05-18 2020-04-28 The Boeing Company Systems and methods for collision avoidance
WO2017214168A1 (en) * 2016-06-07 2017-12-14 Bounce Exchange, Inc. Systems and methods of dynamically providing information at detection of scrolling operations
US10552680B2 (en) 2017-08-08 2020-02-04 Here Global B.V. Method, apparatus and computer program product for disambiguation of points of-interest in a field of view
US20200216064A1 (en) * 2019-01-08 2020-07-09 Aptiv Technologies Limited Classifying perceived objects based on activity

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6529831B1 (en) 2000-06-21 2003-03-04 International Business Machines Corporation Emergency vehicle locator and proximity warning system
US6633232B2 (en) * 2001-05-14 2003-10-14 Koninklijke Philips Electronics N.V. Method and apparatus for routing persons through one or more destinations based on a least-cost criterion
GB0324800D0 (en) 2003-10-24 2003-11-26 Trafficmaster Plc Route guidance system
JP4792248B2 (en) 2005-06-30 2011-10-12 日立オートモティブシステムズ株式会社 Travel control device, travel control system, and navigation information recording medium storing information used for the travel control
US7912637B2 (en) 2007-06-25 2011-03-22 Microsoft Corporation Landmark-based routing
US8606316B2 (en) * 2009-10-21 2013-12-10 Xerox Corporation Portable blind aid device
US8417409B2 (en) 2009-11-11 2013-04-09 Google Inc. Transit routing system for public transportation trip planning
US20120194551A1 (en) * 2010-02-28 2012-08-02 Osterhout Group, Inc. Ar glasses with user-action based command and control of external devices
US8781169B2 (en) * 2010-11-03 2014-07-15 Endeavoring, Llc Vehicle tracking and locating system
US9909878B2 (en) * 2012-03-05 2018-03-06 Here Global B.V. Method and apparatus for triggering conveyance of guidance information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003123190A (en) * 2001-10-10 2003-04-25 Sumitomo Electric Ind Ltd Image transmission system, picture transmitter and picture obtaining device
US20110054772A1 (en) * 2009-08-28 2011-03-03 Rossio Sara B Method of Operating a Navigation System to Provide Route Guidance
US20120062357A1 (en) * 2010-08-27 2012-03-15 Echo-Sense Inc. Remote guidance system

Also Published As

Publication number Publication date
US20140005929A1 (en) 2014-01-02
US10325489B2 (en) 2019-06-18

Similar Documents

Publication Publication Date Title
US20190287398A1 (en) Dynamic natural guidance
US10168698B2 (en) Aerial image collection
AU2019201834B2 (en) Geometric fingerprinting for localization of a device
US9857192B2 (en) Predictive natural guidance
US9429435B2 (en) Interactive map
US9230367B2 (en) Augmented reality personalization
US8930141B2 (en) Apparatus, method and computer program for displaying points of interest
US20220065651A1 (en) Method, apparatus, and system for generating virtual markers for journey activities
US20140002440A1 (en) On Demand Image Overlay
CN109937343A (en) Appraisal framework for the prediction locus in automatic driving vehicle traffic forecast
US20170115749A1 (en) Systems And Methods For Presenting Map And Other Information Based On Pointing Direction
EP2972098B1 (en) Visual search results
WO2016103041A1 (en) Selecting feature geometries for localization of a device
EP2836796B1 (en) A method and system for changing geographic information displayed on a mobile device
EP3923247A1 (en) Method, apparatus, and system for projecting augmented reality navigation cues on user-selected surfaces
JP5093667B2 (en) Navigation system, navigation method, and navigation program
CN111051818B (en) Providing navigation directions
Pei et al. Sensor assisted 3D personal navigation on a smart phone in GPS degraded environments
US11037332B2 (en) Systems and methods for presenting map and other information based on pointing direction
Sato et al. Wayfinding
Bacchewar et al. Literature Survey: Indoor Navigation Using Augmented Reality
WO2024167482A1 (en) Providing augmented reality view based on geographical data
Pavlin Towards a cognitively-sound, landmark-based indoor navigation system

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: ADVISORY ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: FINAL REJECTION MAILED
STCB Information on status: application discontinuation Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION