DE102017007705A1 - Issuing a maneuvering instruction by means of a navigation device - Google Patents

Issuing a maneuvering instruction by means of a navigation device

Info

Publication number
DE102017007705A1
Authority
DE
Germany
Prior art keywords
maneuver
object
maneuvering
image
navigation device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
DE102017007705.3A
Other languages
German (de)
Inventor
Mathias Haberjahn
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Preh Car Connect GmbH
Original Assignee
Preh Car Connect GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Preh Car Connect GmbH filed Critical Preh Car Connect GmbH
Priority to DE102017007705.3A priority Critical patent/DE102017007705A1/en
Publication of DE102017007705A1 publication Critical patent/DE102017007705A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00
    • G01C 21/26 Navigation specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/36 Input/output arrangements for on-board computers
    • G01C 21/3602 Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • G01C 21/3626 Details of the output of route guidance instructions
    • G01C 21/3644 Landmark guidance, e.g. using POIs or conspicuous other objects
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Abstract

A method of outputting a maneuvering instruction by means of a navigation device (100, 200, 300) comprising a control unit (104), an image acquisition unit (106) and an output unit (108), with the steps: executing (401) a route guidance for a movable object that is controlled by a user of the navigation device, by means of the control unit (104); selecting (402) a maneuver object that is assigned to a maneuver of the movable object provided by the route guidance from a plurality of georeferenced maneuver objects, by means of the control unit (104); capturing (403) an environment image of the maneuver object from the user's perspective by means of the image capture unit (106); checking (404) whether the maneuver object is recognizable on the environment image, by means of the control unit (104); and, if the maneuver object is recognizable on the environment image, generating (405) a maneuvering instruction including the maneuver object by means of the control unit (104) and outputting (406) the maneuvering instruction to the user by means of the output unit (108).

Description

  • The invention relates to a method for outputting a maneuvering instruction by means of a navigation device and to a navigation device for carrying out the method.
  • A navigation device is able to calculate a route that leads from a current position of the navigation device to a destination position. The position of the navigation device is most often determined using a global navigation satellite system, for example NAVSTAR GPS, GLONASS, BEIDOU or GALILEO. The destination position is usually derived from a destination that a user of the navigation device specifies by means of an operating unit of the navigation device. Alternatively, the destination position may be estimated taking historical data into account.
  • A route may be provided by calculating it using a control unit of a navigation device. For the route calculation, map data is used, which is usually stored in a non-volatile memory unit of the navigation device, for example on a CD-ROM, on a hard disk or in a flash memory. The map data represents geographic objects and associated information such as roads, paths, intersections, squares, railways, waterways, buildings, bridges, terrain, national boundaries, parking areas, rest areas, localities, traffic regulations and speed limits.
  • Furthermore, a route can be displayed in a map display on a display unit of a navigation device, for example on a touch-sensitive screen. As a result, a user of the navigation device can develop a spatial idea of the course of the route and plan maneuvers in a forward-looking manner.
  • A maneuver can be understood to mean a purposeful movement that a movable object, for example a person, a vehicle, an animal or a robot, executes in order to follow a route. Such a maneuver can be calculated by a control unit of a navigation device taking into account a route planned for the object, a current position of the object and map data. The location where a maneuver is to be performed or executed may be referred to as a "maneuver point".
  • Furthermore, a navigation device can output a route guidance based on a route to a user of the navigation device. Guidance is a process that guides the user along the route. In the route guidance, maneuvering instructions are output to the user using an output unit of the navigation device.
  • Such a maneuvering instruction is an invitation to prepare or execute a particular maneuver. Usually, a maneuvering instruction is automatically generated from prefabricated building blocks containing approximate distances and directions. For example, a maneuvering instruction reads "Turn left into Ahornstrasse after three hundred meters".
  • However, a maneuvering instruction is often too unspecific for a user in a confined urban space. A road course that is difficult to see or a complex intersection geometry leaves a great deal of room for interpretation regarding the meaning of the maneuvering instruction. In some cases, problems with position detection also cause a maneuvering instruction to be issued at the wrong time. As a result, the user has difficulty transferring the maneuvering instruction to the perceived real traffic situation and acting accordingly. The consequence is insecurity on the part of the user, which can lead to an unsafe driving style or to a wrong maneuver.
  • It is therefore an object of the present invention to provide a solution for issuing a maneuvering instruction which optimally assists a user in the execution of a maneuver, so that a wrong maneuver is avoided. The object is achieved by a method having the features specified in claim 1 and by a navigation device with the features specified in claim 10.
  • The invention accordingly provides a method for outputting a maneuvering instruction by means of a navigation device which has a control unit, an image acquisition unit and an output unit, with the method steps:
    • Execution of a route guidance for a movable object, which is controlled by a user of the navigation device, by means of the control unit,
    • Selecting, by means of the control unit, a maneuver object that is assigned to a maneuver of the movable object provided by the route guidance (that is, a maneuver to be carried out according to the route guidance in order to follow the route on which the route guidance is based) from a plurality of georeferenced maneuver objects,
    • Detecting an environment image of the maneuver object from a perspective of the user by means of the image acquisition unit,
    • Checking whether the maneuver object can be recognized on the environmental image (that is to say within the environmental image) by means of the control unit and, if the maneuver object can be recognized on the environmental image,
    • Generating a maneuver instruction including the maneuver object by means of the control unit and
    • Issuing the maneuvering instruction to the user by means of the output unit.
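The claimed sequence of steps 401 to 406 can be pictured as one guidance cycle. The following is a minimal illustrative sketch, not the claimed implementation; the data layout, the `recognizable` predicate and all other names are assumptions introduced here:

```python
# Hypothetical sketch of method steps 401-406. The recognizability check
# (step 404) is reduced to a caller-supplied stub predicate.

def issue_maneuver_instruction(maneuver, maneuver_objects, capture, recognizable, emit):
    """Run one guidance cycle for the given maneuver.

    maneuver         -- dict describing the upcoming maneuver, e.g. {"id": 1, "action": "turn right"}
    maneuver_objects -- list of georeferenced maneuver-object dicts
    capture          -- callable returning an environment image (step 403)
    recognizable     -- predicate(obj, image) -> bool (step 404)
    emit             -- callable that outputs the instruction to the user (step 406)
    """
    # Step 402: select a maneuver object assigned to this maneuver.
    candidates = [o for o in maneuver_objects if o["maneuver_id"] == maneuver["id"]]
    if not candidates:
        return None
    obj = candidates[0]

    image = capture()                       # step 403: environment image
    if recognizable(obj, image):            # step 404: object visible on image?
        # Step 405: generate an instruction that names the maneuver object.
        text = f'Please {maneuver["action"]} after the {obj["name"]}'
        emit(text)                          # step 406: output to the user
        return text
    return None
```

If the object is not recognizable on the image, no object-based instruction is generated; a conventional distance-based instruction could be issued instead.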
  • The idea on which the invention is based is to assist the user of a navigation device in carrying out a maneuver by means of a maneuvering instruction relating to a specific, striking and/or clearly visible environmental feature (maneuver object).
  • A maneuver object can be understood to mean a georeferenced physical object that is suitable for describing a maneuver. Such a maneuver object is, for example, a traffic light, a traffic sign, a road sign, a street name plate, a stop sign, another signpost that marks a particular place, a brand logo, a billboard, a building, a well, a sculpture, a traffic island, a railroad crossing, a tree, a landmark, a shop or a point of interest (POI). A maneuver object used for the method according to the invention is advantageously distinguished by a concise appearance, particular conspicuousness, particular prominence and/or a high recognition value.
  • A "maneuver instruction including the maneuver object" is understood to mean a maneuvering instruction in which the maneuver object is explicitly designated. For this purpose, if the maneuvering instruction is in speech form, the maneuvering object can be called the maneuvering object in the maneuvering instruction. Such a maneuvering instruction may inter alia "Please turn right after the bus stop" or "Please turn left after the DEA petrol station". Further, when the maneuvering instruction is displayed, in the maneuvering instruction the maneuvering object may be visually highlighted, for example with a signal color, a flashing or a frame.
  • A maneuvering instruction including a maneuver object is easily comprehensible for a user even in a complex traffic situation and requires little of the user's cognitive capacity for orientation. As a result, more cognitive capacity is available for monitoring traffic, thereby increasing traffic safety.
  • By "capturing an environmental image from a user's perspective," the capture of the environmental image may be understood from the user's perspective, such as a spatial location where one or both eyes of the user are located, or from the vicinity of such location , This can be achieved, inter alia, with an image acquisition unit arranged on the user's head or in the vicinity of the user's head. As a result, the environmental image can correspond, for example, to a driver's view through a windshield of a vehicle.
  • Advantageous embodiments and developments of the invention will become apparent from the dependent claims and from the description with reference to the figures.
  • In one embodiment of the method according to the invention, a plurality of maneuver objects which are located in a vicinity of the movable object are extracted from the plurality of georeferenced maneuver objects and stored in a maneuver object list. By means of this preselection, the selection of the maneuver object is carried out more efficiently, more quickly and with less computational effort.
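The preselection by vicinity described above can be illustrated with a simple great-circle distance filter. The radius, the field names and the haversine formulation are illustrative assumptions, not taken from the disclosure:

```python
import math

def preselect_nearby(objects, position, radius_m=500.0):
    """Extract maneuver objects within radius_m of the movable object's
    current position (lat/lon in degrees) into a maneuver object list.
    Radius and field names are assumptions chosen for illustration."""
    lat0, lon0 = position

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres on a spherical Earth.
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    return [o for o in objects
            if haversine_m(lat0, lon0, o["lat"], o["lon"]) <= radius_m]
```

Only the objects in this preselected list need to be considered in the subsequent selection step, which reduces the data to be processed.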
  • In a further embodiment of the method according to the invention, those maneuver objects stored in the maneuver object list that are not directly related to the maneuver are removed from the maneuver object list.
  • A "maneuver object that is directly related to a maneuver" can be understood, inter alia, to mean a maneuver object which is arranged in a vicinity of a maneuver point at which the maneuver is to be carried out, wherein the maneuver object is arranged in the smaller subarea of two subareas in which a route underlying the maneuver divides the short range.
  • Removing the maneuver objects that are not directly related to the maneuver reduces the number of maneuver objects contained in the maneuver object list, so that less data has to be processed. This makes selecting the maneuver object even more efficient. In addition, removing these maneuver objects prevents an inappropriate maneuver object from being selected.
  • In a further embodiment of the method according to the invention, when checking whether the maneuver object is recognizable on the environment image, an image feature associated with the maneuver object and/or a text feature assigned to the maneuver object are taken into account. This increases the likelihood that the visibility of the maneuver object will be correctly assessed.
  • In a further embodiment of the method according to the invention, the image feature or the text feature is received by the navigation device from a data cloud. Because the image feature or text feature is thereby provided on an external storage unit, the capacity of a storage unit of the navigation device can be particularly low, so that the navigation device can be made particularly inexpensive. In an alternative embodiment of the method according to the invention, an image feature or text feature stored in the navigation device is regularly updated, the data updating the image feature or text feature being receivable from a data cloud.
  • In a further embodiment of the method according to the invention, the checking of whether the maneuver object can be recognized on the environment image is carried out by means of a data cloud service. As a result, the demands on the computing power of the navigation device can be reduced, which enables particularly cost-effective production of the navigation device. Furthermore, the data cloud service can have a higher computing power than the navigation device owing to its more powerful hardware. This can increase the quality of the checking and/or enable faster checking. In addition, the computing capacity of the navigation device can be used for another operation instead of for the checking.
  • In a further embodiment of the method according to the invention, when checking whether the maneuver object is recognizable on the environment image, a search area arranged on the environment image is defined based on a geoposition of the maneuver object, a geoposition of the image capture unit and an orientation of the image capture unit. It is then checked whether the maneuver object is recognizable in the search area. The search area may comprise one subarea or multiple subareas of the environment image. Furthermore, the search area can be determined automatically using digital image processing means, such as masks and layers, especially when a particular maneuver object is expected. Additionally or alternatively, a mask associated with the maneuver object may define a search area, for example in a geometry corresponding to the maneuver object. A search area limited to a part of the image can reduce the amount of computation required for the checking. Furthermore, the time required for an evaluation can thus be reduced, since the limited search area contains fewer image features than the entire environment image. The checking can be done more efficiently and faster.
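One possible geometric reading of this embodiment: the bearing from the image capture unit to the maneuver object, derived from the two geopositions, is compared with the unit's heading and mapped onto a column range of the image. This is a simplified flat-earth sketch under stated assumptions (linear angle-to-pixel mapping, assumed field of view and margin), not the patented procedure:

```python
import math

def search_area(obj_pos, cam_pos, cam_heading_deg,
                image_width=1280, hfov_deg=90.0, margin_px=80):
    """Define a horizontal search window on the environment image from the
    geopositions of the maneuver object and the image capture unit and the
    unit's orientation. All parameter values are illustrative assumptions.

    Returns (x_min, x_max) in pixel columns, or None if the object lies
    outside the camera's horizontal field of view.
    """
    # Bearing from camera to object (degrees, clockwise from north),
    # using a local flat-earth approximation.
    dlat = obj_pos[0] - cam_pos[0]
    dlon = (obj_pos[1] - cam_pos[1]) * math.cos(math.radians(cam_pos[0]))
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0

    # Angle of the object relative to the camera's optical axis, in (-180, 180].
    rel = (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > hfov_deg / 2:
        return None  # object not in view; no search area can be defined

    # Map the relative angle linearly onto pixel columns and add a margin.
    x = (rel / hfov_deg + 0.5) * image_width
    return (max(0, int(x - margin_px)), min(image_width, int(x + margin_px)))
```

Only this window then needs to be searched for the object's image or text feature, which reduces the computation compared with scanning the entire environment image.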
  • In a further embodiment of the method according to the invention, the maneuvering instruction is output acoustically and/or as a visual representation that has been adapted in perspective taking into account a viewing direction and/or a head position of the user. The user can take in the acoustic output, for example in the form of a maneuvering announcement, without taking his eyes off the traffic situation. This avoids the user overlooking safety-relevant details of the traffic situation. The visual representation enables a user-friendly output of the maneuvering instruction, which reduces the likelihood of a wrong maneuver.
  • In a further embodiment of the method according to the invention, the maneuvering instruction is issued visually using augmented reality. Using augmented reality allows the maneuvering instruction to be overlaid with a real traffic situation. Thus, the maneuvering instruction can be communicated in a particularly understandable manner to the user while the user is fully aware of the traffic situation. The overlaying can be associated with the highlighting of the maneuvering object, for example with the colored marking of the maneuvering object and / or the display of additional information about the maneuvering object.
  • The object is further achieved with a navigation device for outputting a maneuvering instruction. The navigation device has the following components:
    • a control unit that is set up for executing a route guidance for a movable object that is controlled by a user of the navigation device, selecting a maneuver object that is assigned to a maneuver of the movable object provided by the route guidance from a plurality of georeferenced maneuver objects, checking whether the maneuver object is recognizable on an environment image, and generating a maneuvering instruction including the maneuver object if the maneuver object is recognizable on the environment image,
    • an image capture unit adapted to capture the environmental image of the maneuvering object from a user's perspective, and
    • an output unit configured to output the maneuver instruction to the user.
  • In one embodiment of the navigation device, the image capture unit is designed as a surround camera. The surround camera is capable of capturing an environment image with a particularly large viewing angle, which increases the likelihood that a selected maneuver object will be present on the environment image and therefore recognizable. Additionally or alternatively, the image acquisition unit may comprise or be coupled to a radar device, a lidar device and/or a sodar device (sound detecting and ranging). This makes it possible to capture an environment image with depth information. The environment image is therefore not limited to two-dimensional information but may also contain spatial information.
  • In another embodiment of the navigation device, to check whether the maneuver object is recognizable on an environmental image, use is made of a digital map having georeferenced object features representing, for example, a category, class, appearance, texture, or position of the maneuvering object. Such an object feature may allow for improved recognition of the maneuver object. Furthermore, a reference from a maneuver object to another maneuver object may also be stored in the map data, for example a distance from one maneuver object to another maneuver object. The map data may be stored in a memory unit of the navigation device or in a data cloud.
  • In a further embodiment of the invention, the navigation device has a sensor unit for detecting the viewing direction and / or the head position of a user, which serves for generating a position-optimized visual maneuvering instruction. The sensor unit makes it possible to precisely adapt the maneuvering instruction to the current situation, for example by referring the maneuvering instruction to a succinct maneuvering object visible to the user. In addition, depending on the detected viewing direction and / or the detected head position of the user, the navigation device can output a position-optimized acoustic maneuver message to the user, in which the maneuver object is called. Additionally or alternatively, the navigation device can output a position-optimized audiovisual maneuvering instruction to the user depending on the detected viewing direction and / or the detected head position of the user.
  • In a further embodiment of the navigation device, the control unit of the navigation device is coupled to the sensor unit in addition to the image acquisition unit and the output unit. This gives the control unit direct access to the data and information on the basis of which the maneuvering instruction is generated.
  • In a further embodiment of the navigation device, the output unit is designed as a visual output unit, for example as a head-up display, as a screen and / or as an auditory output unit. A maneuvering instruction may include a visual highlighting of a maneuvering object and / or an audible output of a maneuvering announcement. The maneuvering announcement can be issued simultaneously with a visual highlighting of the maneuvering object. This allows a very precise output of the maneuvering instruction. In addition, the maneuvering instruction may be adaptable to individual user preferences.
  • The aforementioned embodiments of the invention can be combined with each other, if appropriate. Further possible embodiments, developments and implementations of the invention also include not explicitly mentioned combinations of features of the invention described above or below. In particular, the person skilled in the art will be able to add individual aspects as improvements or additions to the respective embodiment of the invention.
  • In the following, embodiments of the method according to the invention and of the navigation device according to the invention are explained in more detail with reference to the attached figures, in which:
    • 1 a block diagram of a navigation device according to an embodiment of the invention,
    • 2 a block diagram of a navigation device according to another embodiment of the invention,
    • 3 a block diagram of a navigation device according to yet another embodiment of the invention,
    • 4 a flowchart of a method for outputting a maneuvering instruction according to an embodiment of the invention,
    • 5 a flowchart of a method step of the method according to 4,
    • 6 a flowchart of a further method step of the method according to 4,
    • 7 a flowchart of an alternative variant of the method step according to 6,
    • 8 a flowchart of yet another method step of the method according to 4, and
    • 9 a sketch of a street scene with a maneuver object from the perspective of a user of a navigation device according to an embodiment of the invention.
  • 1 shows a block diagram of a navigation device 100 according to an embodiment of the invention. The navigation device 100 belongs to a movable object that is controlled by a user of the navigation device 100. The movable object is a motor vehicle, on board which the navigation device 100 is located. The motor vehicle may be a passenger car or a truck. Alternatively, the movable object may be a watercraft, an aircraft, a person, an animal or a robot. In particular, the movable object may be the user of the navigation device himself, who, for example, moves on foot and carries the navigation device with him.
  • The navigation device 100 is firmly installed in the motor vehicle. Alternatively, the navigation device may be a portable device that is carried in the motor vehicle, such as a mobile phone or a mobile navigation device.
  • The navigation device 100 has the following functional units: a storage unit 102, a control unit 104, an image capture unit 106 and an output unit 108. In addition to the functional units mentioned here, the navigation device 100 may have additional functional units, for example an interface unit that is set up for data exchange with another electronic device.
  • The storage unit 102 has a non-volatile memory, which is designed, for example, as an EEPROM (Electrically Erasable Programmable Read-Only Memory). Alternatively, the storage unit may have a different type of memory, for example a flash EEPROM or a hard disk. In particular, the storage unit 102 may have more than one of the mentioned memories.
  • The storage unit 102 is connected to the control unit 104 via a bidirectional data connection. Among other things, a map database containing a variety of map data is stored in the storage unit 102. The map data represents objects that are located in a specific geographical area. The objects include, for example, roads, paths, squares, railway lines, rivers, buildings, bridges, terrain, national borders, service areas, traffic regulations and localities. The map data also contains a plurality of georeferenced maneuver objects, as well as an associated image feature and/or an associated text feature for each of the maneuver objects. The storage unit 102 is also provided for storing a maneuver object list.
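A maneuver object record of the kind described, a geoposition plus an optional image feature and/or text feature, could be modelled as follows. The field names and types are illustrative assumptions derived from the description, not a disclosed data format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ManeuverObject:
    """Georeferenced maneuver object as it might be stored in the map
    database. All field names are illustrative assumptions."""
    name: str                              # e.g. "bus stop", "DEA petrol station"
    lat: float                             # geoposition: WGS84 latitude in degrees
    lon: float                             # geoposition: WGS84 longitude in degrees
    category: str = "poi"                  # e.g. "traffic_light", "building", "poi"
    image_feature: Optional[bytes] = None  # feature used for visual recognition
    text_feature: Optional[str] = None     # e.g. text on a street name plate

# The storage unit additionally holds a maneuver object list built from
# such records during preselection.
maneuver_object_list: List[ManeuverObject] = []
```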
  • The control unit 104 is the central control module of the navigation device 100. In addition to a processor (CPU, Central Processing Unit), it has a random access memory (RAM), which is used for volatile storage of variables and intermediate results. The processor and the random access memory are combined on one integrated circuit. Alternatively, the processor and the memory may be arranged separately from each other, for example each on a different integrated circuit.
  • The control unit 104 is set up, among other things, to calculate a route for moving a movable object from a current position to a destination position. For this purpose, the control unit 104 is able to convert a destination specified by a user of the navigation device 100 into a destination position. The control unit 104 is further adapted to execute a route guidance for the movable object, which is controlled by the user. Using the route guidance, the movable object can be guided along a route to a destination position. The control unit 104 is also able to select a maneuver object associated with a maneuver envisaged by the route guidance from a plurality of georeferenced maneuver objects, to check whether the maneuver object is recognizable on an environment image and, if so, to generate a maneuvering instruction including the maneuver object.
  • The image capture unit 106 is set up to capture an environment image of a maneuver object from a perspective of a user of the navigation device 100. The image capture unit 106 can be designed as a camera, for example a surround camera or a front camera. Preferably, the image capture unit 106 is set up to continuously capture environment images in the form of a video. Alternatively, the image capture unit is set up to capture an environment image only in the presence of a specific control signal, which comes, for example, from the control unit 104.
  • In one embodiment of the invention, the image capture unit 106 is coupled to a further electronic device of the movable object that provides additional information, for example to a radar device, a lidar device and/or a sodar device. Alternatively, the image acquisition unit itself may be a radar device, a lidar device and/or a sodar device.
  • In another embodiment of the invention, the image capture unit may be part of an external electronic device that is coupled to the navigation device, for example a component of a mobile phone or a component of a dashboard camera that is attached to a dashboard or to a windshield.
  • The output unit 108 is set up for outputting a maneuvering instruction, such as an acoustic maneuvering instruction, to a user. For this purpose, the output unit 108 has a voice output unit comprising an audio amplifier and one or more speakers. Such a speaker is arranged, for example, in the movable object. The output unit 108 may alternatively or additionally be set up for outputting a visual maneuvering instruction to a user. For this purpose, the output unit 108 has an image output unit. The image output unit may include, but is not limited to, a liquid crystal display (LCD), an e-paper display, an OLED (organic light-emitting diode) display or a projecting display (head-up display).
  • The output unit 108 may, for example, be formed as an HMD (head-mounted display) attached to a user's head. Alternatively or additionally, the output unit may be designed as a touch-sensitive screen. Furthermore, the output unit 108 may be configured to use a windshield of the movable object as a projection surface. In an alternative embodiment of the invention, the output unit is a component of an external electronic device coupled to the navigation device 100, for example a component of a mobile phone.
  • The image output unit may extend over the entire field of view of the user, for example over a windshield. For example, the visual output of the maneuvering instruction may be adapted to the user's line of sight or head position to reduce the likelihood that the user misses or misallocates the maneuvering instruction. Alternatively, the image output unit may be arranged in one subregion or several subregions of the user's field of vision, for example in a subregion of a windshield. This reduces the probability that a detail of the real situation relevant to the user is obscured by a visual representation of the maneuvering instruction.
  • Reference is now made to 2, which shows a block diagram of a navigation device 200 according to another embodiment of the invention. The navigation device 200 has the previously described functional units 102 to 108 of the navigation device 100.
  • Furthermore, the navigation device 200 has a sensor unit 210, which is set up to detect a viewing direction and/or a head position of a user. The sensor unit 210 may comprise a camera that monitors the interior of a movable object. The camera may be included as part of a portable device, for example as part of a mobile phone. In particular, the sensor unit may comprise video glasses worn by a user on the head.
  • Reference is now made to 3, which shows a block diagram of a navigation device 300 according to another embodiment of the invention. The navigation device 300 has the functional units 102 to 108 of the navigation device 100 explained above. In addition, the navigation device 300 may be equipped with the sensor unit 210 of the navigation device 200.
  • The navigation device 300 also has a transmitting and receiving unit 301 on which with the control unit 104 connected via a bidirectional data connection. With the transmitting and receiving unit 301 Data can be received and sent to the control unit 104 hand off. Furthermore, with the transmission and receiving unit 301 from the control unit 104 send transmitted data. In one embodiment of the invention, the transmitting and receiving unit is not integrated in the navigation device, but arranged separately from the navigation device, for example as an external electronic device or as part of an external electronic device.
  • By means of the transmitting and receiving unit 301, the navigation device 300 can connect to a data cloud service. The data cloud service may provide a plurality of georeferenced maneuver objects, each having an image feature and/or a text feature.
  • In yet another embodiment of the invention, the data cloud service is set up to check whether a maneuver object is recognizable in an environment image. For this purpose, the navigation device may have an auxiliary control unit which monitors the utilization of the control unit. If the utilization exceeds a predetermined threshold, the auxiliary control unit may initiate the checking by means of the data cloud service.
  • FIG. 4 shows a flowchart 400 of a method for outputting a maneuvering instruction according to an embodiment of the invention. The method is carried out using the navigation device 100, which has been described with reference to FIG. 1, and has the method steps explained below.
  • Before the first method step 401 is performed, a user of a movable object enters a desired starting point and a desired destination into the navigation device 100. In one embodiment of the invention, the current position of the navigation device 100 is instead automatically used as the starting point. From the starting point and the destination point, the control unit 104 calculates, taking into account a digital road network, an optimal route along which the movable object can travel from the starting point to the destination point. The user can specify additional criteria that are taken into account when calculating the route. As an alternative to the control unit 104, the route can be calculated by a data cloud service and subsequently transmitted to the navigation device 100.
  • The route is stored, for example, in the form of a list of successive maneuvers which the movable object has to carry out along the route. The list can be stored in the memory unit 102 or in a data cloud.
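The list of successive maneuvers described above can be sketched as a simple data structure; all names and fields are illustrative assumptions and not part of the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Maneuver:
    """One maneuver along the route (illustrative; the patent leaves the format open)."""
    kind: str                                 # e.g. "turn_right", "turn_left"
    lat: float                                # geoposition of the maneuver point
    lon: float
    maneuver_object_id: Optional[int] = None  # filled in later, in method step 402

@dataclass
class Route:
    maneuvers: List[Maneuver] = field(default_factory=list)

# A route with two successive maneuvers
route = Route(maneuvers=[
    Maneuver("turn_right", 51.05, 13.74),
    Maneuver("turn_left", 51.06, 13.75),
])
```

Such a list could equally be serialized and stored in a data cloud, as the text notes.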
  • In the first method step 401, route guidance based on the route is executed by means of the control unit 104 while the movable object moves along the route.
  • As soon as the movable object approaches a point on the route at which a maneuver is provided by the route guidance, it is checked in a second method step 402 whether a maneuver object associated with the maneuver can be selected from the plurality of georeferenced maneuver objects stored in the memory unit 102. If so, such a maneuver object is selected from the plurality of maneuver objects by means of the control unit 104. The maneuver object is, for example, a traffic light, a street sign, a landmark, or something similar. As an alternative to the memory unit 102, the plurality of georeferenced maneuver objects may be stored in a data cloud which the navigation device 100 accesses.
  • In a third method step 403, which is executed after the second method step 402, an environment image of the maneuver object is captured from the user's perspective by means of the image acquisition unit 106. The image acquisition unit 106 may be, inter alia, a front camera which can record a road section lying ahead of the movable object from the user's point of view.
  • In a fourth method step 404, which is executed after the third method step 403, it is checked by means of the control unit 104 whether the maneuver object is recognizable in the environment image. For example, a pattern recognition method can be used for this purpose.
  • If the maneuver object is recognizable in the environment image, in a fifth method step 405 a maneuvering instruction that includes the maneuver object is generated by means of the control unit 104. The maneuvering instruction may include, for example, the statement "turn right after the bus stop".
  • In a sixth method step 406, which is executed after the fifth method step 405, the maneuvering instruction is output to the user by means of the output unit 108. The maneuvering instruction can be output in a variety of ways, for example acoustically, visually, and/or using augmented reality.
  • After the sixth method step 406 has been performed, the method can be terminated or continued with method step 401. In particular, the method steps 401 to 406 can be executed several times while the movable object moves along the route.
  • If, contrary to the situation described, no maneuver object associated with the maneuver could be selected from the plurality of maneuver objects in the second method step 402, the method would not be continued with method step 403. Instead, the method could be terminated, or a different maneuvering instruction that does not include a maneuver object could be generated by means of the control unit 104 (method step 407). That other maneuvering instruction could be output in method step 406 by means of the output unit 108. Thereafter, the method could be terminated or continued with method step 401.
  • If, contrary to the situation described, it is determined in the fourth method step 404 that the maneuver object is not recognizable in the environment image, the method could be terminated. Alternatively, it could be checked in method step 402 whether another maneuver object associated with the maneuver can be selected from the plurality of maneuver objects. In this case, the other maneuver object could be selected from the plurality of maneuver objects and the method could continue with method step 403 for the other maneuver object. A new environment image would then be captured, and in the subsequent method step 404 it would be checked whether the other maneuver object is recognizable in the new environment image. Alternatively, in this case, capturing a new environment image in method step 403 could be omitted and the method could instead continue directly with method step 404; it would then be checked whether the other maneuver object is recognizable in the old environment image. Otherwise, if no other maneuver object assigned to the maneuver could be selected from the plurality of maneuver objects in method step 402, the method could be terminated or continued with method step 407, generating a maneuvering instruction that does not include a maneuver object.
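The branching among method steps 401 to 407 described above can be summarized in a small sketch. The four callables stand in for units of the navigation device and are assumptions for illustration, not part of the patent:

```python
def guidance_step(select_object, capture_image, recognizable, output):
    """One pass through method steps 402-407 for an upcoming maneuver.

    select_object() -> object or None   (step 402)
    capture_image() -> image            (step 403)
    recognizable(obj, img) -> bool      (step 404)
    output(text)                        (step 406)
    """
    obj = select_object()                           # step 402
    if obj is None:
        output("plain maneuvering instruction")     # step 407: no maneuver object
        return "plain"
    img = capture_image()                           # step 403
    if recognizable(obj, img):                      # step 404
        output(f"turn right after the {obj}")       # steps 405 and 406
        return "with_object"
    return "terminated"                             # or retry with another object
```

In practice the loop would be re-entered (step 401) for each subsequent maneuver along the route.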
  • FIG. 5 shows a flowchart of the above-mentioned method step 402, in which the maneuver object associated with the maneuver is selected from the plurality of georeferenced maneuver objects.
  • For this purpose, in a first sub-step 501, a plurality of maneuver objects located in the vicinity of the movable object are extracted from the plurality of georeferenced maneuver objects and stored in a maneuver object list. The maneuver object list may be stored in the memory unit 102 of the navigation device 100. Alternatively, the maneuver object list may be stored in a data cloud. This preselection of the maneuver objects is based on a simple criterion, for example a certain maximum distance from the geoposition of the navigation device. Thus, the preselection can be carried out quickly and with little computational effort.
  • In a second sub-step 502, which is executed after the first sub-step 501, those maneuver objects stored in the maneuver object list that are not directly related to the maneuver are removed from the maneuver object list. The removal of these maneuver objects from the maneuver object list can be carried out by means of the control unit 104 or by a data cloud service. Owing to the preselection in sub-step 501, sub-step 502 places particularly low demands on the computing capacity of the control unit 104 or the data cloud service, since an already reduced number of maneuver objects has to be processed.
  • In a third sub-step 503, which is executed after the second sub-step 502, one of the maneuver objects remaining in the maneuver object list is selected, that is, taken from the maneuver object list. For example, the most conspicuous, the best-known, or the largest of the remaining maneuver objects can be selected.
  • An alternative or additional selection criterion can be the number of times the recognizability of a maneuver object in an environment image, for example in an environment image captured by means of another image acquisition unit, has been evaluated as negative. If this was particularly often the case, a different maneuver object can be selected instead. The number of such cases may be stored in the memory unit 102 or in a data cloud, so that the navigation device 100 has access to it. The other image acquisition unit may, for example, belong to another navigation device. The evaluation of the other environment image may have been carried out automatically and/or by one or more persons.
  • Another alternative or additional selection criterion can be the number of times a maneuver object was not recognized by one or more persons after a maneuvering instruction including the maneuver object had been issued to the person or persons. This can be assumed, in particular, when a navigation device of such a person had to perform a route recalculation (rerouting) after the output. If this was particularly often the case, a different maneuver object can be selected instead. The number of such cases may be stored in the memory unit 102 or in a data cloud, so that the navigation device 100 has access to it.
  • Owing to the number of maneuver objects in the maneuver object list already reduced in the first sub-step 501 and the second sub-step 502, the available resources of the control unit 104 are used economically in sub-step 503, in which several selection criteria must be taken into account for each remaining maneuver object.
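The three sub-steps 501 to 503 can be illustrated roughly as follows. The proximity-based relatedness test in sub-step 502 and the "salience" score in sub-step 503 are illustrative assumptions, since the patent leaves the concrete criteria open:

```python
import math

def _dist_m(a, b):
    # Equirectangular approximation; adequate for the short distances involved
    x = math.radians(a[1] - b[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    y = math.radians(a[0] - b[0])
    return 6371000 * math.hypot(x, y)

def preselect(objects, vehicle_pos, radius_m):
    """Sub-step 501: keep only objects within a simple distance criterion."""
    return [o for o in objects if _dist_m((o["lat"], o["lon"]), vehicle_pos) <= radius_m]

def filter_related(candidates, maneuver_pos, max_m=50):
    """Sub-step 502: drop objects not directly related to the maneuver
    (approximated here by proximity to the maneuver point - an assumption)."""
    return [o for o in candidates if _dist_m((o["lat"], o["lon"]), maneuver_pos) <= max_m]

def select_best(candidates):
    """Sub-step 503: pick e.g. the most conspicuous object (here: a score)."""
    return max(candidates, key=lambda o: o.get("salience", 0), default=None)
```

Running the three functions in sequence mirrors the funnel described above: a cheap geographic preselection, a relatedness filter, then the comparatively expensive multi-criteria selection on the small remainder.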
  • FIG. 6 shows a flowchart of the above-mentioned method step 404, in which it is checked whether the maneuver object is recognizable in the environment image.
  • For this purpose, in a first sub-step 601, an image feature associated with the maneuver object and/or a text feature associated with the maneuver object is received from a data cloud. Alternatively, the image feature or the text feature may be stored in the memory unit 102. Receiving from the data cloud allows the memory unit 102 to have a very small storage capacity. On the other hand, storage in the memory unit 102 allows particularly fast and reliable access to the image feature or text feature.
  • In a second sub-step 602, which is executed after the first sub-step 601, it is checked whether the maneuver object is recognizable in the environment image, the image feature or the text feature being taken into account for the check. Taking the image feature into account may facilitate the recognition of a point of interest (e.g., a gas station or a landmark), a brand logo, or a traffic sign. Taking the text feature into account may facilitate recognizing a street name sign.
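A toy version of the feature-based check in sub-step 602 might look as follows. It assumes the environment image has already been reduced to OCR'd words and detected logo labels, which is an illustrative simplification; the patent only mentions pattern recognition and leaves the concrete method open:

```python
def object_recognizable(ocr_words, detected_logos, text_feature=None, image_feature=None):
    """Sub-step 602 (toy): check the maneuver object's features against the
    text and logo detections extracted from the environment image.

    ocr_words      - words an OCR stage found in the image (assumption)
    detected_logos - logo/sign labels a detector found in the image (assumption)
    """
    # Text feature: e.g. the name on a street name sign
    if text_feature is not None and text_feature.lower() in (w.lower() for w in ocr_words):
        return True
    # Image feature: e.g. a brand logo or traffic-sign class
    if image_feature is not None and image_feature in detected_logos:
        return True
    return False
```

A production system would instead match the stored image feature directly against the pixels, e.g. with template matching or a trained detector.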
  • FIG. 7 shows a flowchart of an alternative embodiment of the above-mentioned method step 404, in which it is checked whether the maneuver object is recognizable in the environment image.
  • In a first sub-step 701, a search area located in the environment image is determined from a geoposition of the maneuver object, a geoposition of the image acquisition unit 106, and an orientation of the image acquisition unit 106.
  • In a second sub-step 702, which is executed after the first sub-step 701, it is checked whether the maneuver object is recognizable in the search area. The restriction to the predefined search area allows particularly efficient and time-saving checking. The checking can be carried out by means of the control unit 104 or by a data cloud service.
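The determination of the search area in sub-step 701 can be sketched under the simplifying assumption of a pinhole camera with a known horizontal field of view; all parameter names and values are illustrative:

```python
import math

def search_area(obj_pos, cam_pos, cam_heading_deg, hfov_deg=90.0, img_width=1280, window=120):
    """Sub-step 701 (toy): map the bearing from the image acquisition unit to the
    maneuver object onto a horizontal pixel range of the environment image.

    obj_pos / cam_pos : (lat, lon) geopositions
    cam_heading_deg   : orientation of the image acquisition unit (0 deg = north)
    Returns (x_min, x_max) in pixels, or None if the object lies outside the image.
    """
    d_lon = math.radians(obj_pos[1] - cam_pos[1]) * math.cos(math.radians(cam_pos[0]))
    d_lat = math.radians(obj_pos[0] - cam_pos[0])
    bearing = math.degrees(math.atan2(d_lon, d_lat)) % 360      # bearing to the object
    rel = (bearing - cam_heading_deg + 180) % 360 - 180         # relative to the view axis
    if abs(rel) > hfov_deg / 2:
        return None                                             # object not in the field of view
    x = int((rel / hfov_deg + 0.5) * img_width)                 # simple linear projection
    return max(0, x - window // 2), min(img_width, x + window // 2)
```

The recognizability check of sub-step 702 would then run only within the returned pixel range, which is what makes the restriction efficient.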
  • FIG. 8 shows a flowchart of the above-mentioned method step 406, in which the maneuvering instruction is output to the user.
  • In a first sub-step 801, a visual representation of the maneuvering instruction is generated and adapted taking into account a viewing direction and/or a head position of the user. For this purpose, the navigation device 100 has a sensor unit which detects the viewing direction and/or the head position of the user.
  • In a second sub-step 802, which is executed after the first sub-step 801, the visual representation is output to the user. Such a representation of the maneuvering instruction, adapted to the current traffic situation, is particularly suitable for avoiding wrong maneuvers.
  • Reference is now made to FIG. 9, which shows a street scene from the perspective of a user of the navigation device 100.
  • In this street scene, the movable object is a motor vehicle 901, which is partially visible at the bottom of the field of view of the user who controls the motor vehicle 901.
  • The motor vehicle 901 moves along the route calculated by the navigation device 100 on a road 902. The route provides for a right turn at the next intersection in the course of the road.
  • On a walkway 903 adjacent to the road 902 there is a sign 904 marking a bus stop. The navigation device 100 has selected the sign 904 from a plurality of georeferenced maneuver objects by means of the control unit 104 and has determined that the sign 904 is recognizable in an environment image captured from the user's perspective by means of the image acquisition unit 106. Therefore, a maneuvering instruction including the sign 904 has been generated by means of the control unit 104.
  • An acoustic output of this maneuvering instruction includes the phrase "turn right after the bus stop". In the visual output of the maneuvering instruction by means of the output unit 108, the sign 904 is clearly highlighted with a frame 905.
  • The combination of the acoustic output with the corresponding visual output of the maneuvering instruction assists the user of the motor vehicle 901 so that the user can clearly identify the bus stop referred to in the acoustic output. The risk that the user misses or incorrectly initiates the turning maneuver provided by the route guidance is thus reduced.

Claims (10)

  1. Method for outputting a maneuvering instruction by means of a navigation device (100, 200, 300) having a control unit (104), an image acquisition unit (106) and an output unit (108), with the method steps: - executing (401), by means of the control unit (104), a route guidance for a movable object which is controlled by a user of the navigation device; - selecting (402), by means of the control unit (104), a maneuver object associated with a maneuver of the movable object provided by the route guidance from a plurality of georeferenced maneuver objects; - capturing (403) an environment image of the maneuver object from the user's perspective by means of the image acquisition unit (106); - checking (404), by means of the control unit (104), whether the maneuver object is recognizable in the environment image, and, if the maneuver object is recognizable in the environment image: - generating (405) a maneuvering instruction including the maneuver object by means of the control unit (104), and - outputting (406) the maneuvering instruction to the user by means of the output unit (108).
  2. Method according to Claim 1, in which, when selecting (402) the maneuver object, a plurality of maneuver objects located in a vicinity of the movable object are extracted from the plurality of georeferenced maneuver objects and placed in a maneuver object list (501).
  3. Method according to Claim 2, in which maneuver objects stored in the maneuver object list that are not directly related to the maneuver are removed (502) from the maneuver object list.
  4. Method according to one of the preceding claims, in which, when checking (404) whether the maneuver object is recognizable in the environment image, an image feature associated with the maneuver object and/or a text feature associated with the maneuver object is taken into account (602).
  5. Method according to Claim 4, in which the image feature or the text feature is received by the navigation device (300) from a data cloud (601).
  6. Method according to one of the preceding claims, in which the checking (404) of whether the maneuver object is recognizable in the environment image is carried out by a data cloud service.
  7. Method according to one of the preceding claims, in which, when checking (404) whether the maneuver object is recognizable in the environment image, a search area located in the environment image is defined (701) based on a geoposition of the maneuver object, a geoposition of the image acquisition unit and an orientation of the image acquisition unit, and it is checked (702) whether the maneuver object is recognizable in the search area.
  8. Method according to one of the preceding claims, in which the maneuvering instruction is output acoustically and/or as a visual representation (802) which has been adapted in perspective taking into account a viewing direction and/or a head position of the user (801).
  9. Method according to one of the preceding claims, in which the maneuvering instruction is output using augmented reality.
  10. Navigation device (100, 200, 300) for outputting a maneuvering instruction, comprising: - a control unit (104) set up to execute (401) a route guidance for a movable object controlled by a user of the navigation device, to select (402) a maneuver object associated with a maneuver of the movable object provided by the route guidance from a plurality of georeferenced maneuver objects, to check (404) whether the maneuver object is recognizable in an environment image, and to generate (405) a maneuvering instruction including the maneuver object when the maneuver object is recognizable in the environment image; - an image acquisition unit (106) set up to capture (403) the environment image of the maneuver object from the user's perspective; and - an output unit (108) set up to output (406) the maneuvering instruction to the user.
DE102017007705.3A 2017-08-14 2017-08-14 Issuing a maneuvering instruction by means of a navigation device Pending DE102017007705A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE102017007705.3A DE102017007705A1 (en) 2017-08-14 2017-08-14 Issuing a maneuvering instruction by means of a navigation device

Publications (1)

Publication Number Publication Date
DE102017007705A1 true DE102017007705A1 (en) 2019-02-14

Family

ID=65084207

Family Applications (1)

Application Number Title Priority Date Filing Date
DE102017007705.3A Pending DE102017007705A1 (en) 2017-08-14 2017-08-14 Issuing a maneuvering instruction by means of a navigation device

Country Status (1)

Country Link
DE (1) DE102017007705A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1378724B1 (en) * 2002-07-01 2006-03-29 Mazda Motor Corporation Route guidance system based on visual activity of the driver
EP1681538A1 (en) * 2005-01-18 2006-07-19 Harman Becker Automotive Systems (Becker Division) GmbH Junction view with 3-dimensional landmarks for a navigation system for a vehicle
US7912637B2 (en) * 2007-06-25 2011-03-22 Microsoft Corporation Landmark-based routing
US8862392B2 (en) * 2011-03-22 2014-10-14 Harman Becker Automotive Systems Gmbh Digital map landmarking system
DE102013010335A1 (en) * 2013-06-20 2014-12-24 Volkswagen Aktiengesellschaft Method and device for selecting an object for navigation instructions
DE102013011827A1 (en) * 2013-07-15 2015-01-15 Audi Ag Method for operating a navigation device, navigation device and motor vehicle
US9671243B2 (en) * 2013-06-13 2017-06-06 Mobileye Vision Technologies Ltd. Vision augmented navigation

Legal Events

Date Code Title Description
R012 Request for examination validly filed
R082 Change of representative
R016 Response to examination communication