WO2011158494A1 - Navigation device - Google Patents
- Publication number
- WO2011158494A1 (PCT/JP2011/003375)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- guidance
- unit
- information
- determination unit
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3644—Landmark guidance, e.g. using POIs or conspicuous other objects
Definitions
- The present invention relates to a navigation device that provides voice route guidance using targets that are easy for the driver to identify.
- A known device has an acquisition means for acquiring position data of features near a road, and a guidance timing control means for controlling, according to the position of a feature, the time at which voice route guidance using that feature starts.
- The prior art disclosed in Patent Document 1 uses a vehicle-mounted camera and a position data acquisition means that acquires the vehicle position and the positions of features near the road, so that a feature the driver can easily find can be used for guidance.
- However, Patent Document 1 is not designed to specifically convey the positional relationship between a feature and the guidance target intersection at which a right or left turn is guided. As a result, it is difficult for the driver to identify the intersection at which to turn.
- The present invention has been made to solve this problem. Its purpose is to provide voice guidance that uses, as a landmark, a target located before or after the guidance target intersection.
- The navigation device includes: a target candidate extraction unit that extracts target candidates around the guidance route from the map database based on the vehicle position, the guidance route, and the guidance target intersection information; a target determination unit that determines one target from the extracted candidates based on target determination knowledge; a guidance sentence generation unit that generates a guidance sentence using the target determined by the target determination unit; and a voice output unit that outputs voice guidance based on the guidance sentence generated by the guidance sentence generation unit.
- FIG. 1 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 1, FIG. 2 is a flowchart explaining the operation of Embodiment 1, FIGS. 3 and 4 are flowcharts explaining the operations of the target candidate extraction unit and the target determination unit, and FIG. 5 is a diagram showing an example of guidance according to Embodiment 1.
- FIG. 6 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 2, FIG. 7 is a flowchart explaining the operation of Embodiment 2, a further flowchart explains the operation of the structure determination unit, and FIG. 9 is a diagram showing an example of guidance according to Embodiment 2.
- FIG. 10 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 3, FIG. 11 is a flowchart explaining the operation of Embodiment 3, a further flowchart explains the operation of the characteristic road determination unit, and a diagram shows an example of guidance according to Embodiment 3.
- FIG. 14 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 4; flowcharts explain the operation of Embodiment 4 and of the driving action determination unit, and a diagram shows an example of guidance according to Embodiment 4.
- A block diagram showing the configuration of the main part of a navigation device according to Embodiment 5 and a flowchart explaining its operation are also shown.
- FIG. 1 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 1 of the present invention, which includes a map database 1, a target candidate extraction unit 2, a target determination unit 3, a guidance sentence generation unit 4, and a voice output unit 5.
- FIG. 2 is a flowchart for explaining the operation of the first embodiment.
- The guidance route searched by a route search unit (not shown), the guidance target intersection information (various information such as the intersection position) generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input to the target candidate extraction unit 2 (step ST200).
- The target candidate extraction unit 2 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, extracts target candidates from the acquired map data, and outputs them (step ST201).
- The target determination unit 3 determines and outputs one target from the input target candidates, the guidance route, the guidance target intersection information, and the vehicle position, based on the target determination knowledge 3a, which stores the conditions for narrowing the target candidates down to one (step ST202).
- The guidance sentence generation unit 4 generates and outputs a guidance sentence based on the target input from the target determination unit 3 and the guidance sentence generation conditions input from a guidance sentence generation condition generation unit (not shown) (step ST203).
- The voice output unit 5 generates voice data based on the guidance sentence generated by the guidance sentence generation unit 4 and outputs the voice guidance (step ST204), and the operation ends.
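The flow of steps ST200 to ST204 can be sketched as follows. This is a minimal illustration, not the patented implementation; all function names and the feature/intersection data layout are hypothetical, and step ST204 is represented only by printing the sentence that would be passed to a text-to-speech engine.

```python
# Minimal sketch of the ST200-ST204 flow; names and data layouts are
# hypothetical, chosen only to illustrate the pipeline.

def extract_candidates(map_data, vehicle_pos, intersection):
    """ST201: keep features of known target types up to the intersection."""
    target_types = {"signal", "railroad crossing", "tunnel", "highway entrance"}
    return [f for f in map_data
            if f["type"] in target_types
            and vehicle_pos <= f["pos"] <= intersection["pos"]]

def determine_target(candidates, intersection):
    """ST202: choose the candidate closest to the guidance target intersection."""
    if not candidates:
        return None
    return min(candidates, key=lambda f: abs(intersection["pos"] - f["pos"]))

def generate_sentence(target, intersection):
    """ST203: describe the intersection's position relative to the target."""
    if target is None:
        return f"Turn {intersection['turn']} ahead."
    return (f"After the {target['type']}, the next intersection "
            f"is on the {intersection['turn']}.")

# ST200: inputs; ST204 would hand the sentence to a text-to-speech engine.
map_data = [{"type": "signal", "pos": 300},
            {"type": "railroad crossing", "pos": 450}]
intersection = {"pos": 500, "turn": "left"}
candidates = extract_candidates(map_data, 0, intersection)
sentence = generate_sentence(determine_target(candidates, intersection), intersection)
print(sentence)  # After the railroad crossing, the next intersection is on the left.
```

The units are kept as separate functions so that, as in the block diagram of FIG. 1, each stage can be replaced independently.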
- FIG. 3 is a flowchart for explaining the operation of the target candidate extraction unit 2.
- The guidance route searched by a route search unit (not shown), the guidance target intersection information generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input, and the map data between the vehicle position and a point a predetermined distance (about 100 m) beyond the guidance target intersection is acquired from the map database 1 (step ST300).
- It is then determined from the acquired map data whether a feature of a target type such as "signal S, highway entrance, railroad crossing H, tunnel, national highway, etc." exists (step ST301).
- A "tunnel" may be treated as two different types: tunnel entrance and tunnel exit.
- For a tunnel entrance, voice guidance that makes the specific positional relationship easy to understand can be output, such as "After entering the tunnel, the first intersection in the tunnel is on the right."
- For a tunnel exit, voice guidance such as "After exiting the tunnel, the second intersection is on the right" likewise makes the specific positional relationship easy to grasp.
- The target candidates are not limited to "signal S, highway entrance, railroad crossing H, tunnel, national road, etc." For example, if the map database 1 stores information that can be used directly as targets, such as the elevated O, road signs (signboards installed beside the road), cranks K, road markings (markings drawn on the road surface), or specific commercial facilities, such information may also be determined as target candidates.
- If the determination result in step ST301 is YES, it is determined whether there is only one target candidate of each type between the vehicle position and the point in front of the guidance target intersection (for example, about 100 m) (step ST303). If so, the feature is set as a target candidate (step ST304). If the determination result is NO, the process moves to step ST302 and the feature is not set as a target candidate. This is because, if a target that appears multiple times before the guidance target intersection were used in the voice guidance, the driver could not identify the target, and it would be difficult to identify the guidance target intersection. For example, where there are multiple signalized intersections before the guidance target intersection, if the voice guidance "Turn right after passing the signal" were output, the driver could not tell which signal is meant.
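The uniqueness filtering of steps ST300 to ST304 can be sketched as follows. The data layout and the exact search window are illustrative assumptions (the window here simply runs from the vehicle to the intersection); the point is that a feature type is only usable as a target when exactly one instance of it precedes the guidance target intersection.

```python
from collections import Counter

def extract_unique_candidates(features, vehicle_pos, intersection_pos):
    """Sketch of steps ST300-ST304 (hypothetical data layout): a feature
    type is usable as a target only if exactly one feature of that type
    lies between the vehicle and the guidance target intersection."""
    target_types = {"signal", "highway entrance", "railroad crossing",
                    "tunnel", "national road"}
    in_window = [f for f in features                           # ST301
                 if f["type"] in target_types
                 and vehicle_pos <= f["pos"] <= intersection_pos]
    counts = Counter(f["type"] for f in in_window)             # ST303
    # ST304: unique types become candidates; repeated types are rejected
    # (ST302), since "turn right after the signal" is ambiguous when
    # several signals precede the intersection.
    return [f for f in in_window if counts[f["type"]] == 1]

features = [{"type": "signal", "pos": 200},
            {"type": "signal", "pos": 350},
            {"type": "railroad crossing", "pos": 400}]
print(extract_unique_candidates(features, 0, 500))
# -> [{'type': 'railroad crossing', 'pos': 400}]
```

Note how the two signals eliminate each other while the single railroad crossing survives, exactly the ambiguity argument made above.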
- The target determination unit 3 receives the guidance target intersection position generated by the guidance feature point generation unit (not shown), the vehicle position determined by the vehicle position determination unit (not shown), and the target candidates (signal S, highway entrance, railroad crossing H, tunnel, national highway, etc.) input from the target candidate extraction unit 2, and determines a single target in consideration of the positional relationship with the guidance target intersection, based on the built-in target determination knowledge 3a.
- FIG. 4 is a flowchart for explaining the operation of the target determining unit 3.
- Based on the guidance route searched by the route search unit (not shown) and the guidance target intersection information generated by the guidance feature point generation unit (not shown), it is determined whether there is a signal at the guidance target intersection (step ST400).
- If the determination result in step ST400 is YES, the signal is removed from the target candidates (step ST401).
- If the determination result in step ST400 is NO, it is determined whether the guidance target intersection is a highway entrance (step ST402). If the determination result in step ST402 is YES, the highway entrance is excluded from the target candidates (step ST403). This avoids duplication between the voice content describing the guidance target intersection and the voice content describing the target.
- Next, the target candidate closest to the guidance target intersection is extracted (step ST404). This is because, when the voice guidance is output, a target closer to the guidance target intersection makes the positional relationship between the intersection and the target easier to understand. Then, from the target candidates extracted in step ST404, those for which the number of intersections between the guidance target intersection and the target candidate is at most a predetermined number (for example, three) are extracted (step ST405). This is because, when there are many intersections between the target candidate and the guidance target intersection, it is difficult for the driver to understand the positional relationship between them.
- In step ST406, it is determined whether there are a plurality of extracted target candidates. If the determination result in step ST406 is NO, there is either no extracted target candidate or exactly one, so it is next determined whether there is one extracted target candidate (step ST407). If the determination result in step ST407 is NO, there is no target candidate, so it is determined that there is no target (step ST408), and the operation ends. If the determination result in step ST407 is YES, the target candidates have been narrowed down to one, so that candidate is selected as the target (step ST409), and the operation ends. If the determination result in step ST406 is YES, there are a plurality of target candidates, so the target priorities are referred to, the candidate with the highest priority is selected as the target (step ST410), and the operation ends.
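The decision flow of FIG. 4 (steps ST400 to ST410) can be sketched as follows. This is an illustrative reading of the flowchart, not the claimed implementation: the candidate dictionary fields, the priority list, and the tie-handling in step ST404 are assumptions.

```python
def determine_target(candidates, intersection, priority, max_between=3):
    """Sketch of the FIG. 4 flow (ST400-ST410); data layout is hypothetical.
    Each candidate: {"type": ..., "pos": ..., "intersections_between": ...}."""
    # ST400-ST403: drop a type that would duplicate the description of the
    # guidance target intersection itself.
    if intersection.get("has_signal"):
        candidates = [c for c in candidates if c["type"] != "signal"]
    elif intersection.get("is_highway_entrance"):
        candidates = [c for c in candidates if c["type"] != "highway entrance"]
    if not candidates:
        return None                                            # ST408
    # ST404: keep the candidate(s) closest to the guidance target intersection
    best = min(abs(intersection["pos"] - c["pos"]) for c in candidates)
    candidates = [c for c in candidates
                  if abs(intersection["pos"] - c["pos"]) == best]
    # ST405: require at most `max_between` intersections in between
    candidates = [c for c in candidates
                  if c["intersections_between"] <= max_between]
    if not candidates:
        return None                                            # ST407 NO -> ST408
    if len(candidates) == 1:
        return candidates[0]                                   # ST409
    # ST410: several remain -> pick the type with the highest priority
    return min(candidates, key=lambda c: priority.index(c["type"]))

candidates = [{"type": "signal", "pos": 400, "intersections_between": 1},
              {"type": "railroad crossing", "pos": 400, "intersections_between": 2}]
print(determine_target(candidates, {"pos": 500}, ["railroad crossing", "signal"]))
# -> {'type': 'railroad crossing', 'pos': 400, 'intersections_between': 2}
```

With two equally close candidates, the priority list (here placing the railroad crossing first) breaks the tie as in step ST410.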
- The target determination knowledge 3a is referred to in the target determination operation of the target determination unit 3 and stores the conditions for narrowing the target candidates down to one.
- The conditions used in the flowchart of FIG. 4 are as follows: the condition that the number of intersections between the guidance target intersection and the target candidate is at most a predetermined number; the predetermined number used in that determination; and the priority order of the target candidates.
- The target determination knowledge 3a may store other conditions for narrowing the target candidates down to one. For example, the condition "when there are candidates both before and beyond the guidance target intersection, give priority to the candidate before it" may be stored.
- A target beyond the guidance target intersection (for example, 100 m past it) is easier to understand when the vehicle is approaching the intersection, but the driver may be concentrating on making the right or left turn as the intersection approaches. Therefore, priority is given to the target before the guidance target intersection.
- Although this operation is not shown, first, the processing shown in the flowchart of FIG. 4 is performed only on the target candidates before the guidance target intersection.
- The target determination knowledge 3a may also store, as a condition, the types of target candidates to be extracted by the target candidate extraction unit 2. In that case, the target candidate extraction unit 2 refers to the target determination knowledge 3a when extracting target candidates.
- The guidance sentence generation unit 4 generates a guidance sentence based on the conditions for generating a guidance sentence input from a guidance sentence generation condition generation unit (not shown) (for example, the remaining distance to the guidance target intersection and the direction of the turn at the guidance target intersection) and the target determined by the target determination unit 3. The voice output unit 5 then outputs it as voice guidance.
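The sentence-generation step can be sketched as a simple template fill. The template wording and field names are hypothetical; the patent does not specify a template mechanism, only that the sentence combines the determined target with the generation conditions (remaining distance, turn direction).

```python
def ordinal(n):
    """Hypothetical helper: English ordinal words for small counts."""
    return {1: "first", 2: "second", 3: "third"}.get(n, f"{n}th")

def generate_guidance(target, conditions):
    """Sketch of the guidance sentence generation unit 4 (ST203);
    the template and field names are illustrative assumptions."""
    if target is None:
        # fall back to distance-based guidance when no target was determined
        return f"In {conditions['remaining_m']} m, turn {conditions['direction']}."
    return (f"After the {target['name']}, the {ordinal(target['count'])} "
            f"intersection is on the {conditions['direction']}.")

print(generate_guidance({"name": "railroad crossing", "count": 1},
                        {"direction": "left", "remaining_m": 300}))
# -> After the railroad crossing, the first intersection is on the left.
```

The fallback branch reflects step ST408 of FIG. 4: when no target survives the narrowing, ordinary distance-based guidance can still be produced.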
- FIG. 5 shows an example of guidance.
- In one example, the railroad crossing H is used as a target, and the voice guidance "After this railroad crossing H, the first intersection G is on the left" is output.
- In another example, the signal S is used as a target, and the voice guidance "After passing the signal, the second intersection is on the left" is output.
- As described above, a target before or near the guidance target intersection on the guidance route is used for guidance, and the positional relationship from the target to the guidance target intersection is described specifically. This makes the guidance target intersection easy to identify. Furthermore, since the series of situations up to the guidance target intersection is guided, the driver can more easily grasp the current traveling position.
- FIG. 6 is a block diagram showing the configuration of the main part of the navigation device according to Embodiment 2 of the present invention, in which a structure determination unit 6 is added and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
- In step ST701, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the structure determination unit 6.
- In step ST702, the structure determination unit 6 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines whether a structure exists, and, if one exists, outputs it to the target candidate extraction unit 2.
- In step ST703, the target candidate extraction unit 2 extracts and outputs target candidates from the map data acquired from the map database 1 and the structure input from the structure determination unit 6, based on the guidance route, the guidance target intersection information, and the vehicle position.
- Embodiment 2 is characterized in that the structure determination unit 6 determines whether a structure exists and outputs information on structures, such as the elevated O, that is not directly stored in the map database 1, so that it can be used as a target candidate. As a result, the driver can be guided using structure information that does not exist in the map database 1 and the positional relationship between the structure and the guidance target intersection.
- The guidance route searched by a route search unit (not shown), the guidance target intersection information (various information such as the intersection position) generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input, and the map data between the vehicle position and a point a predetermined distance (about 100 m) beyond the guidance target intersection is acquired from the map database 1 (step ST800).
- It is determined whether there is a road or railway that crosses the guidance route (step ST801).
- If the determination result in step ST801 is NO, the process moves to step ST802 and it is determined that there is no elevated O. If the determination result in step ST801 is YES, it is determined whether the map database 1 has height information for the extracted road or railway and for the road corresponding to the guidance route (step ST803). If the determination result is NO, the process proceeds to step ST804. This is because the map database 1 does not always have height information.
- In step ST804, it is determined, from the layers used for drawing the map in the map database 1, whether the road or railway is drawn above the road of the guidance route. This is because the layer (drawing order) of the detailed-scale map drawing data in the map database 1 indicates the height relationship in the map. If the determination result in step ST804 is NO, the process proceeds to step ST802, where it is determined that there is no elevated O. If the determination result in step ST804 is YES, the process proceeds to step ST805, where it is determined that an elevated O exists, and the operation ends.
- In step ST806, it is determined from the height information of the road or railway and of the guidance route road whether the road or railway is above the road of the guidance route. If the determination result is NO, the process proceeds to step ST802, where it is determined that there is no elevated O. If the determination result in step ST806 is YES, it is determined that an elevated O exists, and the operation ends. In step ST806, the determination may also be made based on whether the height difference is at least a predetermined value. This makes it possible to extract an easy-to-understand elevated structure as a target.
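The elevated-structure decision (steps ST801 to ST806) can be sketched as follows. The field names, the use of metres, and the 1.0 m minimum height difference are illustrative assumptions; the essential logic from the flowchart is the fallback from real height data to the map-drawing layer order when no height information is stored.

```python
def has_elevated(crossing, route_road):
    """Sketch of the ST801-ST806 decision flow; field names are
    hypothetical. `crossing` is a road or railway crossing the guidance
    route, or None; heights are in metres, drawing layers are integers."""
    if crossing is None:
        return False                                   # ST801 NO -> ST802
    ch, rh = crossing.get("height"), route_road.get("height")
    if ch is None or rh is None:                       # ST803: no height data
        # ST804: fall back to the map-drawing layer (drawing order), which
        # encodes the height relationship at detailed map scales
        return crossing["layer"] > route_road["layer"]
    # ST806: compare heights; requiring a minimum difference keeps the
    # elevated structure easy for the driver to recognise
    return ch - rh >= 1.0

print(has_elevated({"layer": 2, "height": None},
                   {"layer": 1, "height": None}))      # True (layer fallback)
print(has_elevated({"layer": 1, "height": 6.0},
                   {"layer": 1, "height": 0.0}))       # True (height data)
```

Returning `False` for a sub-threshold height difference corresponds to the remark that only a clearly recognisable elevated structure should be extracted as a target.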
- The target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the structure input from the structure determination unit 6, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them to the target determination unit 3.
- The target determination unit 3 determines and outputs one target from the input target candidates, the guidance route, the guidance target intersection information, and the vehicle position.
- The guidance sentence generation unit 4 generates and outputs a guidance sentence based on the target input from the target determination unit 3 and the guidance sentence generation conditions input from the guidance sentence generation condition generation unit.
- The voice output unit 5 generates voice data based on the guidance sentence generated by the guidance sentence generation unit 4 and outputs voice guidance.
- By outputting voice guidance including the positional relationship between the elevated O and the guidance target intersection, guidance such as "After passing the elevated O, the first intersection G is on the left," as shown in FIG. 9, can be output.
- In step ST701 in FIG. 7, the structure determination unit 6 need not acquire the guidance route, the guidance target intersection information, and the vehicle position from the target candidate extraction unit 2. Instead, the guidance route searched by a route search unit not shown in FIG. 6, the guidance target intersection information generated by a guidance feature point generation unit not shown in FIG. 6, and the vehicle position determined by a vehicle position determination unit not shown in FIG. 6 may be input directly to the structure determination unit 6.
- The structure determination unit 6 may also be configured to determine the presence or absence of a structure not by acquiring map data from the map database 1 but by acquiring data about structures from a communication unit (not shown). For example, when information such as road signs (signboards installed beside the road) or specific commercial facilities is not stored in the map database 1, information on road signs may be acquired from the communication unit (not shown), the presence or absence of a target determined, and the result output to the target candidate extraction unit 2. This makes it possible to provide guidance such as "After passing the blue signboard, the first intersection is on the right." Furthermore, the structure determination unit 6 may be configured to output the structure to the target determination unit 3 in step ST702.
- As described above, according to Embodiment 2, an elevated O crossing the guidance route, which is easy for the driver to use, serves as a target before or near the guidance target intersection. The situation up to the guidance target intersection can thus be conveyed specifically, which makes the intersection easier to identify. In addition, since the series of situations up to the guidance target intersection is guided, the driver can more easily grasp the current traveling position. Furthermore, since information that is not stored in the map database 1 can be used as a target, voice guidance suited to various road conditions is possible.
- FIG. 10 is a block diagram showing the configuration of the main part of the navigation device according to Embodiment 3 of the present invention, in which a characteristic road determination unit 7 is added and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
- In step ST1101, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the characteristic road determination unit 7.
- In step ST1102, the characteristic road determination unit 7 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines the presence or absence of a characteristic road, and outputs it to the target candidate extraction unit 2.
- In step ST1103, the target candidate extraction unit 2 extracts and outputs target candidates from the map data acquired from the map database 1 and the characteristic road input from the characteristic road determination unit 7, based on the guidance route, the guidance target intersection information, and the vehicle position.
- Embodiment 3 is characterized in that the characteristic road determination unit 7 determines whether a characteristic road exists and outputs characteristic road information, such as the crank K, that is not directly stored in the map database 1, so that it can be used as a target candidate. As a result, the driver can be guided using characteristic road information that does not exist in the map database 1 and the positional relationship between the characteristic road and the guidance target intersection.
- The guidance route searched by a route search unit (not shown), the guidance target intersection information (various information such as the intersection position) generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input, and the map data between the vehicle position and a point a predetermined distance (about 100 m) beyond the guidance target intersection is acquired from the map database 1 (step ST1200).
- Predetermined information (such as road link connection angles) is extracted from the acquired map data, and it is determined whether any road corresponds to a predetermined characteristic road (step ST1201).
- If the determination result in step ST1201 is NO, the process proceeds to step ST1202 and it is determined that there is no characteristic road. If the determination result in step ST1201 is YES, the process proceeds to step ST1203, where it is determined that a characteristic road exists, and the operation ends.
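The determination of step ST1201 using road link connection angles can be sketched for the crank K case. The 60-degree threshold and the definition of a crank as two successive sharp bends in opposite directions are illustrative assumptions; the patent only says that a predetermined characteristic road is recognised from information such as the link connection angle.

```python
import math

def is_crank(points, angle_threshold=60.0):
    """Sketch of ST1201: decide whether a polyline of road-link points
    contains two successive sharp bends in opposite directions (crank K).
    The threshold and crank definition here are illustrative assumptions."""
    turns = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.degrees(math.atan2(y1 - y0, x1 - x0))
        a2 = math.degrees(math.atan2(y2 - y1, x2 - x1))
        turn = (a2 - a1 + 180) % 360 - 180   # signed link connection angle
        turns.append(turn)
    # two consecutive sharp bends with opposite signs -> crank-like shape
    return any(abs(t1) >= angle_threshold and abs(t2) >= angle_threshold
               and t1 * t2 < 0
               for t1, t2 in zip(turns, turns[1:]))

# a right-angle jog: straight, bend left 90 degrees, bend right 90 degrees
crank = [(0, 0), (10, 0), (10, 5), (20, 5)]
print(is_crank(crank))                                # True
print(is_crank([(0, 0), (10, 0), (20, 0), (30, 0)]))  # False
```

Other characteristic road types mentioned later (curves, S-shaped curves, hills, width changes) would use analogous tests on curvature, gradient, or lane-count attributes of the links.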
- The target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the characteristic road input from the characteristic road determination unit 7, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them to the target determination unit 3.
- The target determination unit 3 determines and outputs one target from the input target candidates, the guidance route, the guidance target intersection information, and the vehicle position.
- The guidance sentence generation unit 4 generates and outputs a guidance sentence based on the target input from the target determination unit 3 and the guidance sentence generation conditions input from the guidance sentence generation condition generation unit.
- The voice output unit 5 generates voice data based on the guidance sentence generated by the guidance sentence generation unit 4 and outputs voice guidance.
- By outputting voice guidance including the positional relationship between the characteristic road and the guidance target intersection, guidance such as "After passing the crank K, the first intersection G is on the left" can be output, as shown in the figure.
- In step ST1101 of FIG. 11, the characteristic road determination unit 7 need not acquire the guidance route, the guidance target intersection information, and the vehicle position from the target candidate extraction unit 2. Instead, the guidance route searched by a route search unit not shown in FIG. 10, the guidance target intersection information generated by a guidance feature point generation unit not shown in FIG. 10, and the vehicle position determined by a vehicle position determination unit not shown in FIG. 10 may be input directly to the characteristic road determination unit 7.
- The characteristic road extracted by the characteristic road determination unit 7 in step ST1102 in FIG. 11 is not limited to the crank K. Any road that can be determined to be characteristic from information in the map database 1 may be used, such as a curve, an S-shaped curve, an uphill, a downhill, a road that narrows, a road that widens, a road where lanes increase, or a road where lanes decrease.
- The characteristic road determination unit 7 may also be configured to determine the presence or absence of a characteristic road not by acquiring map data from the map database 1 but by acquiring data about characteristic roads from a communication unit (not shown). For example, when information on road height differences is not stored in the map database 1, information on the road height difference may be acquired from the communication unit (not shown), the presence or absence of a characteristic road determined, and the result output to the target candidate extraction unit 2. This makes it possible to provide guidance such as "After the hill ends, the first signal is on the right." Furthermore, the characteristic road determination unit 7 may be configured to output the characteristic road to the target determination unit 3 in step ST1102.
- As described above, according to Embodiment 3, a characteristic road on the guidance route that is easy for the driver to understand serves as a target before or near the guidance target intersection. The situation up to the guidance target intersection can thus be conveyed specifically, making the intersection easy to identify. Since the series of situations up to the guidance target intersection is guided, the driver can more easily grasp the current traveling position. Furthermore, since information that is not stored in the map database 1 can be used as a target, voice guidance suited to various road conditions is possible.
- FIG. 14 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 4 of the present invention, in which a driving action determination unit 8 is added and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
- In step ST1501, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the driving action determination unit 8.
- In step ST1502, the driving action determination unit 8 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines the driving action that the driver will perform while driving the route ahead, and outputs it to the target candidate extraction unit 2.
- In step ST1503, the target candidate extraction unit 2 extracts and outputs target candidates from the map data acquired from the map database 1 and the driving action input from the driving action determination unit 8, based on the guidance route, the guidance target intersection information, and the vehicle position.
- Embodiment 4 is characterized in that the driving action determination unit 8 determines whether a driving action will occur and outputs driving actions that are not directly stored in the map database 1 (for example, an action such as the temporary stop I), so that they can be used as target candidates. As a result, the driver can be guided using driving actions that do not exist in the map database 1 and the positional relationship between the point where the driving action is performed and the guidance target intersection.
- To the driving action determination unit 8, a guidance route searched by a route search unit (not shown), guidance target intersection information (various information such as the intersection position) generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input, and map data between a point a predetermined distance (about 100 m) ahead of the guidance target intersection and the vehicle position is acquired from the map database 1 (step ST1600).
- It is then determined whether a predetermined driving action (for example, an action such as the temporary stop I) exists in the acquired map data.
- In step ST1601, it is determined whether a driving action exists. If the determination result in step ST1601 is NO, the process proceeds to step ST1602, where it is determined that there is no driving action, and the operation ends. If the determination result in step ST1601 is YES, the process proceeds to step ST1603, where it is determined that there is a driving action, and the operation ends.
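The flow of steps ST1600 to ST1603 can be sketched as follows. This is only a minimal illustration in Python; the list-of-tuples map-data representation and the `"temporary_stop"` marker are assumptions for the sketch, not part of the patent.

```python
# Sketch of the driving action determination unit 8 (steps ST1600-ST1603).
# Data formats are illustrative assumptions; the patent does not specify them.

LOOKAHEAD_M = 100  # predetermined distance ahead of the guidance target intersection

def determine_driving_action(map_features, intersection_dist_m, vehicle_dist_m=0):
    """Return the first predetermined driving action between the vehicle
    position and ~100 m past the guidance target intersection, or None.

    map_features: list of (distance_from_vehicle_m, kind) tuples.
    """
    # ST1600: restrict the map data to the segment of interest.
    limit = intersection_dist_m + LOOKAHEAD_M
    segment = [(d, kind) for d, kind in map_features
               if vehicle_dist_m <= d <= limit]
    # ST1601: determine whether a predetermined driving action exists.
    for d, kind in sorted(segment):
        if kind == "temporary_stop":       # e.g. the temporary stop I
            return (d, kind)               # ST1603: a driving action exists
    return None                            # ST1602: no driving action

features = [(40, "traffic_light"), (120, "temporary_stop"), (500, "tunnel")]
print(determine_driving_action(features, intersection_dist_m=300))
# -> (120, 'temporary_stop')
```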
- The target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the driving action input from the driving action determination unit 8, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them to the target determination unit 3. Based on the target determination knowledge 3a, the target determination unit 3 determines one target from the input target candidates, the guidance route, the guidance target intersection information, and the vehicle position, and outputs it.
- The guidance sentence generation unit 4 generates a guidance sentence based on the target input from the target determination unit 3 and the guidance sentence generation condition input from the guidance sentence generation condition generation unit, and outputs it.
- The voice output unit 5 generates voice data based on the guidance sentence generated by the guidance sentence generation unit 4 and outputs the voice guidance.
- By using the driving action as a target and including the positional relationship between the driving action and the guidance target intersection in the voice guidance, it becomes possible to output guidance such as "After the temporary stop I, the first intersection G is on the left," as shown in the figure.
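The chain from candidate extraction through voice output described above can be sketched as follows. The helper names, the nearest-candidate rule in the target determination step, and the sentence templates are assumptions made for illustration; the real unit 3 consults the target determination knowledge 3a.

```python
# Sketch of the Embodiment 4 guidance pipeline: target candidate extraction
# unit 2 -> target determination unit 3 -> guidance sentence generation unit 4.
# Names and templates are illustrative assumptions, not the patent's wording.

def extract_candidates(map_targets, driving_action):
    # Unit 2: merge map-derived candidates with the driving action
    # reported by the driving action determination unit 8.
    return map_targets + ([driving_action] if driving_action else [])

def determine_target(candidates):
    # Unit 3: pick one target; here simply the candidate closest to the
    # vehicle (the patent's unit applies the target determination knowledge 3a).
    return min(candidates, key=lambda c: c["dist_m"]) if candidates else None

def generate_guidance(target, intersection_name, turn):
    # Unit 4: include the positional relationship between the target and
    # the guidance target intersection in the guidance sentence.
    if target is None:
        return f"Turn {turn} at {intersection_name}."
    return (f"After the {target['name']}, "
            f"the first intersection {intersection_name} is on the {turn}.")

candidates = extract_candidates(
    [{"name": "signal S", "dist_m": 250}],
    {"name": "temporary stop I", "dist_m": 120},
)
sentence = generate_guidance(determine_target(candidates), "G", "left")
print(sentence)
# -> After the temporary stop I, the first intersection G is on the left.
```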
- Note that, in step ST1501 of FIG. 15, the driving action determination unit 8 need not acquire the guidance route, the guidance target intersection information, and the vehicle position from the target candidate extraction unit 2; instead, the guidance route searched by the route search unit not shown in FIG. 14, the guidance target intersection information generated by the guidance feature point generation unit not shown in FIG. 14, and the vehicle position determined by the vehicle position determination unit not shown in FIG. 14 may be input directly to the driving action determination unit 8.
- The driving action extracted by the driving action determination unit 8 is not limited to the temporary stop I; actions such as "lane change, acceleration, or slow driving (in view of the road speed limit)" may also be acquired. Any action that the driver will perform in the future may be used, as long as it can be determined from the guidance route, the guidance target intersection information, the vehicle position, the map data, and the like. This enables guidance such as "After you change lanes, the first intersection is on the right." Further, the target determination unit 3 and the target determination knowledge 3a in FIG. 14 may be configured to include up to the second guidance target intersection ahead as targets whose positional relationship is considered, with the driving action determination unit 8 extracting a "right/left turn" as the action. As a result, guidance such as "After turning, the second intersection is on the right" becomes possible.
- Alternatively, the driving action determination unit 8 may be configured not to acquire map data from the map database 1 to determine the driving action, but to acquire data related to driving actions from a communication unit not shown in FIG. 14 and extract the driving action from it. For example, traffic jam information may be acquired from the communication unit (not shown), and "passing through a traffic jam" may be extracted as the driving action and output to the target candidate extraction unit 2. This makes it possible to provide guidance such as "After you pass the traffic jam, the first intersection is on the right." Furthermore, the driving action determination unit 8 may be configured to output the driving action to the target determination unit 3 in step ST1502.
- Since the action performed by the driver when traveling on the guidance route is used for guidance, guidance that is easy for the driver to understand is possible.
- Since a series of situations up to the guidance target intersection is guided, there is an effect that it becomes easier for the driver to grasp the current traveling position.
- Since information that is not stored in the map database 1 can be used as a target, voice guidance suited to various road conditions is possible.
- FIG. 18 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 5 of the present invention, in which a visual information determination unit 9 is provided and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
- In step ST1901, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the visual information determination unit 9.
- In step ST1902, the visual information determination unit 9 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, acquires image information from an image acquisition unit such as a camera not shown in FIG. 18, determines whether predetermined visual information (for example, the presence of a pedestrian crossing R) exists in the image information, and outputs the result to the target candidate extraction unit 2.
- In step ST1903, the target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the predetermined visual information input from the visual information determination unit 9, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them.
- This embodiment is characterized in that the visual information determination unit 9 extracts visual information that is not directly stored in the map database 1 (for example, the presence of a pedestrian crossing R) and uses it as a target candidate. Thereby, the driver can be guided using visual information that does not exist in the map database 1 and the positional relationship between the visual information and the guidance target intersection.
- To the visual information determination unit 9, a guidance route searched by a route search unit (not shown), guidance target intersection information (various information such as the intersection position) generated by a guidance feature point generation unit (not shown), and the vehicle position determined by a vehicle position determination unit (not shown) are input; map data between a point a predetermined distance (about 100 m) ahead of the guidance target intersection and the vehicle position is acquired from the map database 1, and image information is acquired from an image acquisition unit such as a camera (not shown) (step ST2000).
- It is then determined from the acquired information whether predetermined visual information (for example, the presence of a pedestrian crossing R) is included in the image information (step ST2001).
- In step ST2001, for example, it may be determined that a pedestrian crossing exists when the image information contains a predetermined white-line pattern, by referring to the position information held by the image information and the intersection data acquired from the map database 1.
- The image acquisition unit is not limited to a camera; any sensor that can acquire image information, such as an infrared or millimeter-wave sensor, may be used.
- If the determination result in step ST2001 is NO, the process proceeds to step ST2002, where it is determined that there is no visual information, and the operation ends. If the determination result in step ST2001 is YES, the process proceeds to step ST2003, where it is determined that there is visual information, and the operation ends.
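The decision of steps ST2000 to ST2003 can be sketched as follows. The list-of-detections representation and the distance tolerance are assumptions for the sketch; in the patent, the image information comes from a camera (or infrared/millimeter-wave sensor) and is cross-checked against intersection data from the map database 1.

```python
# Sketch of the visual information determination unit 9 (steps ST2000-ST2003).
# Data formats and the tolerance value are illustrative assumptions.

def has_visual_info(detections, intersection_pos_m, tolerance_m=30):
    """ST2001: decide whether predetermined visual information (e.g. a
    pedestrian crossing) appears near the guidance target intersection.

    detections: list of (label, position_m) pairs from the image acquisition unit.
    """
    for label, pos in detections:
        # e.g. a white-line pattern recognized as a pedestrian crossing,
        # confirmed against the intersection position from the map data
        if label == "pedestrian_crossing" and abs(pos - intersection_pos_m) <= tolerance_m:
            return True   # ST2003: visual information exists
    return False          # ST2002: no visual information

detections = [("white_car", 80), ("pedestrian_crossing", 290)]
print(has_visual_info(detections, intersection_pos_m=300))  # -> True
```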
- The target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the visual information input from the visual information determination unit 9, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them to the target determination unit 3.
- Based on the target determination knowledge 3a, the target determination unit 3 determines one target from the input target candidates, the guidance route, the guidance target intersection information, and the vehicle position, and outputs it.
- The guidance sentence generation unit 4 generates a guidance sentence based on the target input from the target determination unit 3 and the guidance sentence generation condition input from the guidance sentence generation condition generation unit, and outputs it.
- The voice output unit 5 generates voice data based on the guidance sentence generated by the guidance sentence generation unit 4 and outputs the voice guidance.
- By using the visual information as a target and including the positional relationship between the visual information and the guidance target intersection in the voice guidance, it becomes possible to output guidance such as "After the pedestrian crossing R, the first intersection G is on the left," as shown in the figure.
- Note that, in step ST1901 of FIG. 19, the visual information determination unit 9 need not acquire the guidance route, the guidance target intersection information, and the vehicle position from the target candidate extraction unit 2; instead, the guidance route searched by the route search unit not shown in FIG. 18, the guidance target intersection information generated by the guidance feature point generation unit not shown in FIG. 18, and the vehicle position determined by the vehicle position determination unit not shown in FIG. 18 may be input directly to the visual information determination unit 9.
- The visual information extracted by the visual information determination unit 9 in step ST1902 of FIG. 19 is not limited to "there is a pedestrian crossing R"; any visual information that can be determined from the acquired guidance route, guidance target intersection information, vehicle position, map data, and image information, such as "a red car is turning" or "a white car is stopped," may be used. For example, the determination may be made by extracting a red car from the image information, identifying its direction, and referring to the position information of the image and the intersection data acquired from the map database 1 to determine that the red car is turning. As a result, guidance such as "After the red car turns, the first intersection is on the right" becomes possible.
- Alternatively, the visual information determination unit 9 may be configured not to extract the visual information from the map database 1 and an image information acquisition unit (not shown), but to acquire data related to visual information, or the visual information itself, from a communication unit (not shown) and output the visual information. Furthermore, the visual information determination unit 9 may be configured to output the visual information to the target determination unit 3 in step ST1902.
- Since visual information is used as a target for guidance when traveling on the guidance route, guidance that is easy for the driver to understand is possible.
- Since a series of situations up to the guidance target intersection is guided, there is an effect that it becomes easier for the driver to grasp the current traveling position.
- Since information that is not stored in the map database 1 can be used as a target, voice guidance suited to various road conditions is possible.
- As described above, the navigation device according to the present invention uses a target to specifically convey the situation from the target to the guidance target intersection, making the guidance target intersection easier to identify, and is therefore suitable for use in a navigation device or the like that performs voice route guidance.
- 1 map database, 2 target candidate extraction unit, 3 target determination unit, 4 guidance sentence generation unit, 5 voice output unit, 6 structure determination unit, 7 characteristic road determination unit, 8 driving action determination unit, 9 visual information determination unit.
Abstract
Description
For example, the prior art disclosed in Patent Document 1 uses an in-vehicle camera and position data acquisition means for acquiring the vehicle position and the position data of features near the road, thereby making it possible to use a feature that is easy for the driver to find as a guidance target.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 1 of the present invention, which includes a map database 1, a target candidate extraction unit 2, a target determination unit 3, a guidance sentence generation unit 4, and a voice output unit 5.
The target candidates are not limited to "traffic light S, expressway entrance, railroad crossing H, tunnel, national road, and the like"; when information that can be used directly as a target, such as an overpass O, a road sign (a signboard or the like installed at the roadside), a crank K, a road marking (markings painted on the road surface), or a specific commercial facility, is stored in the map database 1, such information may also be defined as usable target candidates.
The target determination knowledge 3a is referenced in the target determination operation of the target determination unit 3 and stores conditions for narrowing the target candidates down to one. The conditions used in the flowchart of FIG. 4 are as follows.
- The condition that no traffic light exists at the guidance target intersection
- The condition that the guidance target intersection is not an expressway entrance
- The condition that the distance from the guidance target intersection is the shortest
- The condition that the number of intersections between the guidance target intersection and the target candidate is less than a predetermined number
- The predetermined number used in judging the number of intersections between the guidance target intersection and the target candidate
- The priority order of the target candidates
Although this operation is not illustrated, the processing shown in the flowchart of FIG. 4 is first performed only on the target candidates before the guidance target intersection. When all of the preceding target candidates are judged as "no target" in step ST408, the operation of the flowchart of FIG. 4 can be performed again on the target candidates beyond the guidance target intersection.
The target determination knowledge 3a may also store, as conditions, the types of target candidates to be extracted by the target candidate extraction unit 2. In that case, the target candidate extraction unit 2 refers to the target determination knowledge 3a when extracting target candidates.
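The narrowing-down conditions stored in the target determination knowledge 3a can be sketched as an ordered filter over the candidates. This is only an illustration: the threshold, field names, and the exact ordering of conditions are assumptions, not the patent's actual implementation.

```python
# Sketch of the target determination unit 3 applying the target
# determination knowledge 3a: candidates are filtered by the stored
# conditions and the best surviving candidate is chosen.
# Thresholds and field names are illustrative assumptions.

MAX_INTERSECTIONS_BETWEEN = 2  # the "predetermined number" in the knowledge

def determine_target(candidates, intersection):
    # Conditions on the intersection itself: no traffic light exists there,
    # and it is not an expressway entrance; otherwise the intersection is
    # distinctive enough and no separate target is needed.
    if intersection["has_signal"] or intersection["is_expressway_entrance"]:
        return None
    # Condition: fewer than the predetermined number of intersections
    # between the candidate and the guidance target intersection.
    viable = [c for c in candidates
              if c["intersections_between"] < MAX_INTERSECTIONS_BETWEEN]
    if not viable:
        return None  # "no target" (cf. step ST408)
    # Conditions: priority order of candidate types, then shortest
    # distance from the guidance target intersection.
    return min(viable, key=lambda c: (c["priority"], c["dist_to_intersection_m"]))

intersection = {"has_signal": False, "is_expressway_entrance": False}
candidates = [
    {"name": "railroad crossing H", "priority": 1,
     "dist_to_intersection_m": 150, "intersections_between": 1},
    {"name": "tunnel", "priority": 2,
     "dist_to_intersection_m": 90, "intersections_between": 0},
]
print(determine_target(candidates, intersection)["name"])
# -> railroad crossing H
```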
FIG. 6 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 2 of the present invention, in which a structure determination unit 6 is provided and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
In Embodiment 2, in step ST701, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the structure determination unit 6. In step ST702, the structure determination unit 6 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines whether a structure exists, and, if a structure exists, outputs it to the target candidate extraction unit 2. In step ST703, the target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the structure input from the structure determination unit 6, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them.
In step ST806, the determination may instead be made based on whether the height difference is equal to or greater than a predetermined value. This makes it possible to extract an overpass that is easy to recognize as a target.
Furthermore, in step ST702, the structure determination unit 6 may be configured to output the structure to the target determination unit 3.
FIG. 10 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 3 of the present invention, in which a characteristic road determination unit 7 is provided and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
In Embodiment 3, in step ST1101, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the characteristic road determination unit 7. In step ST1102, the characteristic road determination unit 7 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines whether a characteristic road exists, and outputs the result to the target candidate extraction unit 2. In step ST1103, the target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the characteristic road input from the characteristic road determination unit 7, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them.
Furthermore, in step ST1102, the characteristic road determination unit 7 may be configured to output the characteristic road to the target determination unit 3.
FIG. 14 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 4 of the present invention, in which a driving action determination unit 8 is provided and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
In Embodiment 4, in step ST1501, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the driving action determination unit 8. In step ST1502, the driving action determination unit 8 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, determines the driving action that the driver will perform while driving the route, and outputs it to the target candidate extraction unit 2. In step ST1503, the target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the driving action input from the driving action determination unit 8, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them.
The target determination unit 3 and the target determination knowledge 3a in FIG. 14 may be configured to include up to the second guidance target intersection ahead as targets whose positional relationship is considered, with the driving action determination unit 8 extracting a "right/left turn" as the action. This enables guidance such as "After turning, the second intersection is on the right."
Furthermore, in step ST1502, the driving action determination unit 8 may be configured to output the driving action to the target determination unit 3.
FIG. 18 is a block diagram showing the configuration of the main part of a navigation device according to Embodiment 5 of the present invention, in which a visual information determination unit 9 is provided and connected to the map database 1 and the target candidate extraction unit 2. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so the same parts are denoted by the same reference numerals and redundant description is omitted.
In Embodiment 5, in step ST1901, the target candidate extraction unit 2 outputs the guidance route, the guidance target intersection information, and the vehicle position to the visual information determination unit 9. In step ST1902, the visual information determination unit 9 acquires map data from the map database 1 based on the input guidance route, guidance target intersection information, and vehicle position, acquires image information from an image acquisition unit such as a camera not shown in FIG. 18, determines whether predetermined visual information (for example, the presence of a pedestrian crossing R) exists in the image information, and outputs the result to the target candidate extraction unit 2. In step ST1903, the target candidate extraction unit 2 extracts target candidates from the map data acquired from the map database 1 and the predetermined visual information input from the visual information determination unit 9, based on the guidance route, the guidance target intersection information, and the vehicle position, and outputs them.
In step ST2001, for example, it may be determined that a pedestrian crossing exists when the image information contains a predetermined white-line pattern, by referring to the position information held by the image information and the intersection data acquired from the map database 1.
The image acquisition unit is not limited to a camera; any sensor that can acquire image information, such as an infrared or millimeter-wave sensor, may be used.
Furthermore, in step ST1902, the visual information determination unit 9 may be configured to output the visual information to the target determination unit 3.
Claims (13)
- A navigation device comprising: a target candidate extraction unit that extracts target candidates around a guidance route from a map database based on a vehicle position, the guidance route, and guidance target intersection information;
a target determination unit that determines a target, based on target determination knowledge, from among the target candidates extracted by the target candidate extraction unit;
a guidance sentence generation unit that generates a guidance sentence using the target determined by the target determination unit; and
a voice output unit that outputs voice guidance based on the guidance sentence generated by the guidance sentence generation unit. - The navigation device according to claim 1, wherein the target determination unit determines the target based on a positional relationship between a guidance target intersection and a target candidate.
- The navigation device according to claim 1, wherein the target determination knowledge stores knowledge based on the positional relationship between the guidance target intersection and the target candidates.
- The navigation device according to claim 1, wherein the target candidate extraction unit extracts target candidates around the guidance route within a predetermined distance ahead of or before the guidance target intersection from the vehicle position.
- The navigation device according to claim 1, wherein the target candidate extraction unit extracts, from the target candidates, those of which only one of each type exists.
- The navigation device according to claim 1, comprising a structure determination unit that acquires map data from the map database based on the vehicle position, the guidance route, and the guidance target intersection information, and determines whether the map data represents a predetermined structure,
wherein the target determination unit uses the predetermined structure as a target candidate. - The navigation device according to claim 1, comprising a characteristic road determination unit that acquires map data from the map database based on the vehicle position, the guidance route, and the guidance target intersection information, and determines whether the map data represents a predetermined characteristic road,
wherein the target determination unit uses the predetermined characteristic road as a target candidate. - The navigation device according to claim 1, comprising a driving action determination unit that acquires map data from the map database based on the vehicle position, the guidance route, and the guidance target intersection information, and determines whether the map data indicates a predetermined driving action,
wherein the target determination unit uses the predetermined driving action as a target candidate. - The navigation device according to claim 1, comprising a visual information determination unit that acquires map data from the map database based on the vehicle position, the guidance route, and the guidance target intersection information, and determines whether the map data represents predetermined visual information,
wherein the target determination unit uses the predetermined visual information as a target candidate. - The navigation device according to claim 6, comprising a communication unit that receives structure information from outside,
wherein the structure determination unit determines whether a structure is the predetermined structure based on the structure information received by the communication unit. - The navigation device according to claim 7, comprising a communication unit that receives characteristic road information from outside,
wherein the characteristic road determination unit determines whether a road is the predetermined characteristic road based on the characteristic road information received by the communication unit. - The navigation device according to claim 8, comprising a communication unit that receives driving action information from outside,
wherein the driving action determination unit determines whether an action is the predetermined driving action based on the driving action information received by the communication unit. - The navigation device according to claim 9, comprising a communication unit that receives visual information from outside,
wherein the visual information determination unit determines whether received information is the predetermined visual information based on the visual information received by the communication unit.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012520290A JP5398913B2 (ja) | 2010-06-14 | 2011-06-14 | Navigation device |
CN201180029248.5A CN102947677B (zh) | 2010-06-14 | 2011-06-14 | 导航装置 |
DE112011101988T DE112011101988T5 (de) | 2010-06-14 | 2011-06-14 | Navigationsvorrichtung |
US13/702,806 US8958982B2 (en) | 2010-06-14 | 2011-06-14 | Navigation device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-134998 | 2010-06-14 | ||
JP2010134998 | 2010-06-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011158494A1 true WO2011158494A1 (ja) | 2011-12-22 |
Family
ID=45347909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/003375 WO2011158494A1 (ja) | 2010-06-14 | 2011-06-14 | ナビゲーション装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8958982B2 (ja) |
JP (1) | JP5398913B2 (ja) |
CN (1) | CN102947677B (ja) |
DE (1) | DE112011101988T5 (ja) |
WO (1) | WO2011158494A1 (ja) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5533762B2 (ja) * | 2011-03-31 | 2014-06-25 | アイシン・エィ・ダブリュ株式会社 | 移動案内システム、移動案内装置、移動案内方法及びコンピュータプログラム |
US9189976B2 (en) * | 2013-07-10 | 2015-11-17 | Telenav Inc. | Navigation system with multi-layer road capability mechanism and method of operation thereof |
RU2580335C1 (ru) * | 2014-10-17 | 2016-04-10 | Общество С Ограниченной Ответственностью "Яндекс" | Способ обработки картографических данных |
KR20160112526A (ko) * | 2015-03-19 | 2016-09-28 | 현대자동차주식회사 | 차량 및 그 제어 방법 |
KR101826408B1 (ko) * | 2016-03-03 | 2018-03-22 | 엘지전자 주식회사 | 디스플레이 장치 및 이를 포함하는 차량 |
WO2017168543A1 (ja) * | 2016-03-29 | 2017-10-05 | 三菱電機株式会社 | 音声案内装置及び音声案内方法 |
KR101974871B1 (ko) * | 2017-12-08 | 2019-05-03 | 엘지전자 주식회사 | 차량에 구비된 차량 제어 장치 및 차량의 제어방법 |
FR3106922B1 (fr) * | 2020-02-05 | 2022-03-18 | Renault Sas | Procédé d’élaboration d’instructions de guidage routier |
JPWO2021192521A1 (ja) * | 2020-03-27 | 2021-09-30 | ||
US12104911B2 (en) | 2021-03-04 | 2024-10-01 | Nec Corporation Of America | Imperceptible road markings to support automated vehicular systems |
US12037757B2 (en) | 2021-03-04 | 2024-07-16 | Nec Corporation Of America | Infrared retroreflective spheres for enhanced road marks |
US11900695B2 (en) * | 2021-03-04 | 2024-02-13 | Nec Corporation Of America | Marking and detecting road marks |
US12002270B2 (en) | 2021-03-04 | 2024-06-04 | Nec Corporation Of America | Enhanced detection using special road coloring |
CN113264036B (zh) * | 2021-05-19 | 2022-10-14 | 广州小鹏汽车科技有限公司 | 一种基于自动驾驶中泊车功能的引导方法和装置 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11248477A (ja) * | 1998-02-27 | 1999-09-17 | Toyota Motor Corp | Voice guidance navigation device, voice guidance navigation method, and medium recording a voice guidance navigation program |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10239078A (ja) | 1997-02-28 | 1998-09-11 | Toshiba Corp | Route guidance device |
JP3526002B2 (ja) | 1998-04-24 | 2004-05-10 | 松下電器産業株式会社 | Car navigation device |
JP4098106B2 (ja) * | 2003-01-29 | 2008-06-11 | 三菱電機株式会社 | Vehicle navigation system |
JP2004286559A (ja) * | 2003-03-20 | 2004-10-14 | Mitsubishi Electric Corp | Vehicle navigation system and route guidance method |
JP4095590B2 (ja) * | 2004-07-15 | 2008-06-04 | 株式会社ナビタイムジャパン | Pedestrian navigation system, information distribution server, and program |
JP4719500B2 (ja) * | 2004-11-04 | 2011-07-06 | アルパイン株式会社 | In-vehicle device |
CN100523732C (zh) * | 2005-11-25 | 2009-08-05 | 环达电脑(上海)有限公司 | Method for automatically providing road turn prompts on a mobile device equipped with a GPS module |
KR100911954B1 (ko) * | 2007-03-29 | 2009-08-13 | 에스케이씨앤씨 주식회사 | Intersection route guidance method for car navigation |
US9222797B2 (en) * | 2007-04-17 | 2015-12-29 | Esther Abramovich Ettinger | Device, system and method of contact-based routing and guidance |
US8930135B2 (en) * | 2007-04-17 | 2015-01-06 | Esther Abramovich Ettinger | Device, system and method of landmark-based routing and guidance |
JP4561769B2 (ja) * | 2007-04-27 | 2010-10-13 | アイシン・エィ・ダブリュ株式会社 | Route guidance system and route guidance method |
-
2011
- 2011-06-14 CN CN201180029248.5A patent/CN102947677B/zh active Active
- 2011-06-14 JP JP2012520290A patent/JP5398913B2/ja not_active Expired - Fee Related
- 2011-06-14 DE DE112011101988T patent/DE112011101988T5/de not_active Ceased
- 2011-06-14 WO PCT/JP2011/003375 patent/WO2011158494A1/ja active Application Filing
- 2011-06-14 US US13/702,806 patent/US8958982B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
JPWO2011158494A1 (ja) | 2013-08-19 |
CN102947677B (zh) | 2016-03-30 |
US20130096822A1 (en) | 2013-04-18 |
CN102947677A (zh) | 2013-02-27 |
US8958982B2 (en) | 2015-02-17 |
JP5398913B2 (ja) | 2014-01-29 |
DE112011101988T5 (de) | 2013-05-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180029248.5 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11795402 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012520290 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13702806 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112011101988 Country of ref document: DE Ref document number: 1120111019882 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11795402 Country of ref document: EP Kind code of ref document: A1 |