WO2021070238A1 - Map display control device and map display control method - Google Patents

Map display control device and map display control method

Info

Publication number
WO2021070238A1
Authority
WO
WIPO (PCT)
Prior art keywords
specific feature
map
section
display control
user
Prior art date
Application number
PCT/JP2019/039593
Other languages
French (fr)
Japanese (ja)
Inventor
下谷 光生 (Mitsuo Shimotani)
敬介 井上 (Keisuke Inoue)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to JP2021550968A priority Critical patent/JP7275294B2/en
Priority to PCT/JP2019/039593 priority patent/WO2021070238A1/en
Publication of WO2021070238A1 publication Critical patent/WO2021070238A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00Maps; Plans; Charts; Diagrams, e.g. route diagram
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a map display control device that displays a map on a display device.
  • Map display devices that display a map on the screen of a display device are widely used in, for example, navigation devices and smartphones with map display applications. In recent years, map display devices capable of three-dimensional map display using the map data of a three-dimensional model have become widespread, driven by higher device performance and more accurate map data.
  • Patent Document 1 discloses a display device for a vehicle that can indicate the direction of a specific feature (landmark or the like) with reference to the position of the vehicle.
  • The user of a map display device can display a map around a specific feature on the screen of the display device, and can thereby recognize the positional relationship between the user's current position and the specific feature. However, even if the user looks around for the specific feature, it may not be visible depending on the surrounding terrain and the positions of buildings. Searching for a specific feature from a place where it cannot be seen is wasted effort. In particular, when the user is the driver of a vehicle, searching in vain for a specific feature increases the driving load.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide a map display control device capable of showing the user places where a specific feature can be visually recognized.
  • The map display control device includes: a specific feature storage unit that stores a specific feature designated by the user; a map data acquisition unit that acquires map data including information on the specific feature; a section classification unit that, based on the map data, classifies the roads included in the map data into specific feature visible sections, which are sections where the specific feature is visible, and specific feature invisible sections, which are sections where the specific feature is invisible; and a display control unit that causes a display device to display a map showing the specific feature visible sections and specific feature invisible sections of each road in different display modes.
  • With this configuration, the user can know in advance the places where a specific feature can be visually recognized. It is therefore possible to prevent the user from unnecessarily searching for the specific feature in a place where it cannot be seen.
  • A diagram showing the configuration of the map display control device according to Embodiment 1.
  • A diagram showing an example of a map based on the map data.
  • A diagram showing an example of a map displayed by the map display control device according to Embodiment 1.
  • A flowchart showing the operation of the map display control device according to Embodiment 1.
  • A diagram showing an example of a map displayed by the map display control device according to Embodiment 1.
  • A diagram showing the configuration of a navigation system provided with the map display control device according to Embodiment 2.
  • Diagrams showing examples of maps displayed by the map display control device according to Embodiment 2.
  • A flowchart showing the operation of the map display control device according to Embodiment 2.
  • Further diagrams showing examples of maps displayed by the map display control device according to Embodiment 2.
  • Diagrams showing examples of additional images displayed by the map display control device according to Embodiment 2.
  • A diagram showing the visual field range of a user who is the driver of a vehicle.
  • A diagram showing an example of a map displayed by the map display control device according to Embodiment 3.
  • A diagram showing the visual field range at each visibility level for a user who is the driver of a vehicle.
  • Diagrams showing examples of maps displayed by the map display control device according to Embodiment 3.
  • A diagram showing an example of a map based on the map data.
  • Diagrams showing examples of maps displayed by the map display control device according to Embodiment 4.
  • Diagrams showing hardware configuration examples of the map display control device.
  • FIG. 1 is a diagram showing a configuration of a map display control device 10 according to the first embodiment. As shown in FIG. 1, the map display control device 10 is connected to the map data storage device 21, the display device 22, and the operation input device 23.
  • The map data storage device 21 is a storage medium that stores map data.
  • The map data stored in the map data storage device 21 is three-dimensional map data that includes not only information on the planar positions of features such as roads and buildings, but also information such as the altitude of the land and the heights of the features.
  • The display device 22 is the means by which the map display control device 10 displays a map, and is composed of, for example, a liquid crystal display device.
  • The operation input device 23 is a user interface that accepts user operations on the map display control device 10. Using the operation input device 23, the user can, for example, set the range of the map to be displayed on the display device 22 (hereinafter, the "map display range") or designate a feature of interest as the specific feature. In the present embodiment, it is assumed that the user sets the map display range by specifying its center point and the map scale.
  • The map data storage device 21, the display device 22, and the operation input device 23 may be built into the map display control device 10. Further, the map data storage device 21 may be configured as a server that distributes map data to the map display control device 10 through a communication network such as the Internet.
  • The operation input device 23 may be hardware such as a keyboard, push buttons, an operation lever, or a touch pad, or may be software keys displayed on a screen.
  • When the display device 22 displays software keys serving as the operation input device 23, the operation input device 23 and the display device 22 may be configured as a single touch panel.
  • The map display control device 10 displays, on the display device 22, a map based on the map data stored in the map data storage device 21 according to the user's operation input to the operation input device 23.
  • The map display control device 10 includes a specific feature storage unit 11, a map data acquisition unit 12, a section classification unit 13, and a display control unit 14.
  • The specific feature storage unit 11 stores the specific feature designated by the user.
  • The map data acquisition unit 12 acquires, from the map data storage device 21, the map data of the map display range set by the user and its vicinity.
  • The map data acquired by the map data acquisition unit 12 includes the information on the specific feature.
  • Based on the map data acquired by the map data acquisition unit 12, the section classification unit 13 classifies each road included in the map data into specific feature visible sections, in which the specific feature can be visually recognized, and specific feature invisible sections, in which it cannot. Since the map data stored in the map data storage device 21 is three-dimensional map data including information on land altitude and feature heights, the section classification unit 13 can determine whether the specific feature can be seen from each point on a road based on the altitude of the road, the altitude, size, and height of the specific feature, and the heights of the land and features lying between the road and the specific feature.
  • Hereinafter, the process of classifying a road into specific feature visible sections and specific feature invisible sections is referred to as the "section classification process".
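The line-of-sight test at the core of the section classification process can be sketched as follows. This is a deliberately simplified flat-terrain illustration, not the patent's implementation: all names are hypothetical, heights are measured from a common datum, and a real system would evaluate the full 3D geometry of the map data.

```python
def feature_visible(eye_height, feature_top_height, obstacle_height_at):
    """True if the straight sight line from a road point (at eye_height) to
    the top of the specific feature clears every obstacle between them.

    obstacle_height_at(t): height of terrain/buildings at fraction t
    (0 = road point, 1 = feature) along the line between the two points.
    """
    samples = 100  # sampling resolution along the sight line
    for i in range(1, samples):
        t = i / samples
        # Height of the straight sight line at fraction t
        sight_h = eye_height + t * (feature_top_height - eye_height)
        if obstacle_height_at(t) > sight_h:  # an obstacle blocks the view
            return False
    return True
```

Running this test for each sampled point along a road, and grouping consecutive points with the same result, yields the visible and invisible sections.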
  • Note that "the specific feature is visible" does not necessarily mean that the entire specific feature can be seen. For example, it may be defined to include states where only part of the specific feature is visible, or where at least a certain percentage of it is visible. The definition of "the specific feature is visible" may also be changed for each specific feature or for each type of specific feature. For example, if the specific feature is a pagoda with a characteristic roof shape, it may be judged visible whenever the roof portion of the pagoda can be seen.
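The per-feature-type definition of "visible" described above could be expressed as a threshold on the visible fraction of the feature. The type names and threshold values below are purely illustrative assumptions:

```python
# Assumed default: a feature counts as visible when at least 30% is in view.
DEFAULT_VISIBLE_FRACTION = 0.3

def is_visible(visible_fraction, feature_type, thresholds=None):
    """Apply a per-feature-type definition of "the specific feature is visible".

    visible_fraction: fraction of the feature currently in view (0.0-1.0).
    thresholds: optional per-type overrides; e.g. a pagoda whose roof alone
    identifies it may need only a small visible fraction.
    """
    thresholds = thresholds if thresholds is not None else {"pagoda": 0.05}
    return visible_fraction >= thresholds.get(feature_type, DEFAULT_VISIBLE_FRACTION)
```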
  • The display control unit 14 displays a map corresponding to the map data acquired by the map data acquisition unit 12 on the screen of the display device 22. In doing so, based on the result of the section classification process by the section classification unit 13, the display control unit 14 displays the specific feature visible sections and the specific feature invisible sections of each road in different display modes so that the user can distinguish between them.
  • Any method may be used to differentiate the display modes of the specific feature visible sections and the specific feature invisible sections on the map. For example, the thickness or color of the line representing the road may differ between the two, or one of them may be displayed with animation (for example, blinking).
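As a sketch, the display-mode choice could be driven by a small style table keyed on the section kind. The widths, colors, and the blink flag below are illustrative assumptions, not values from the patent:

```python
# Illustrative style table: any visual channel (line width, color, blinking
# animation) can distinguish visible from invisible sections.
SECTION_STYLE = {
    "visible":   {"width": 6, "color": "orange", "blink": True},
    "invisible": {"width": 2, "color": "gray",   "blink": False},
}

def draw_commands(sections):
    """Turn classified sections into draw commands.

    sections: list of (start_label, end_label, kind) tuples,
    where kind is "visible" or "invisible".
    """
    return [{"from": a, "to": b, **SECTION_STYLE[kind]} for a, b, kind in sections]
```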
  • Assume that the map data acquisition unit 12 has acquired, as the map data of the map to be displayed on the display device 22, the map data of the map shown in FIG. 2.
  • The map of FIG. 2 includes roads RD1, RD2, and RD3, mountains M1 and M2, buildings B1, B2, B3, and B4, and a pagoda PG standing on the side of mountain M2. Road RD1 and road RD2 are connected at intersection X1, and road RD1 and road RD3 are connected at intersection X2. Road RD3 extends from intersection X2 to the pagoda PG.
  • Assume that the user of the map display control device 10 has designated the pagoda PG as the specific feature, and that information on the pagoda PG is stored in the specific feature storage unit 11.
  • The display area of the map includes the section between point P0 and point P6 on road RD1 (section P0-P6).
  • The section between point P0 and point P1 (section P0-P1) is a section where the pagoda PG cannot be visually recognized because it is blocked by the slope of mountain M2.
  • The section between point P1 and point P3 (section P1-P3) is a section where the pagoda PG can be visually recognized from the south side.
  • Point P2 in section P1-P3 is a point where the pagoda PG can be seen directly beside road RD1. That is, the angle formed by the direction in which the pagoda PG is seen from point P2 and the tangential direction of road RD1 at point P2 is 90°.
  • The section between point P3 and point P4 (section P3-P4) is a section where the pagoda PG cannot be visually recognized because it is blocked by buildings B1, B2, and B3.
  • The section between point P4 and point P5 (section P4-P5) is a section where the pagoda PG can be visually recognized from the east side.
  • The section between point P5 and point P6 (section P5-P6) is a section where the pagoda PG cannot be visually recognized because it is blocked by building B4.
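The 90° condition at a point such as P2 above can be checked with a dot product between the road's tangent direction and the direction to the feature. A minimal sketch (function name and 2D simplification are my own):

```python
import math

def view_angle_deg(road_tangent, to_feature):
    """Angle (degrees) between the road's tangential direction at a point
    and the direction in which the feature is seen, both as 2D vectors."""
    dot = road_tangent[0] * to_feature[0] + road_tangent[1] * to_feature[1]
    norm = math.hypot(*road_tangent) * math.hypot(*to_feature)
    return math.degrees(math.acos(dot / norm))
```

A result of 90° means the feature lies directly beside the road at that point.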
  • The map display area also includes the section between point Q0 and point Q2 on road RD2 (section Q0-Q2).
  • The section between point Q0 and point Q1 (section Q0-Q1) is a section where the pagoda PG can be visually recognized.
  • The section between point Q1 and point Q2 (section Q1-Q2) is a section where the pagoda PG cannot be visually recognized because it is blocked by mountain M2. The pagoda PG can be visually recognized from any point on road RD3.
  • In the section classification process, the section classification unit 13 classifies, for road RD1, sections P1-P3 and P4-P5 as specific feature visible sections in which the specific feature (pagoda PG) is visible, and sections P0-P1, P3-P4, and P5-P6 as specific feature invisible sections. For road RD2, it classifies section Q0-Q1 as a specific feature visible section and section Q1-Q2 as a specific feature invisible section. It further classifies the entire length of road RD3 as a specific feature visible section.
  • When the display control unit 14 displays the map on the display device 22, it differentiates, as shown for example in FIG. 3, the display mode of sections P1-P3 and P4-P5 of road RD1, section Q0-Q1 of road RD2, and all sections of road RD3, which are classified as specific feature visible sections, from the display mode of sections P0-P1, P3-P4, and P5-P6 of road RD1 and section Q1-Q2 of road RD2, which are classified as specific feature invisible sections.
  • In the example of FIG. 3, the road line of each specific feature visible section is drawn thicker than that of the specific feature invisible sections, and in a different color.
  • In this way, the user can know in advance the places where the pagoda PG, the specific feature, can be visually recognized. It is therefore possible to prevent the user from unnecessarily searching for the specific feature in a place where it cannot be seen.
  • First, when the user designates a specific feature, the specific feature storage unit 11 stores it (step ST101).
  • Next, the display control unit 14 sets the map display range (step ST102), and the map data acquisition unit 12 acquires the map data of the set display range and its vicinity (step ST103).
  • The section classification unit 13 then performs the section classification process, classifying the roads included in the map data into specific feature visible sections and specific feature invisible sections based on the map data acquired by the map data acquisition unit 12 (step ST104). Finally, the display control unit 14 displays on the display device 22 a map (corresponding to the acquired map data) showing the specific feature visible sections and invisible sections in different display modes (step ST105).
  • After that, the process returns to step ST102 and waits for a user operation. For example, when the display range of the map is changed by the user scrolling the map or changing its scale, steps ST102 to ST105 are executed again and the map displayed on the display device 22 is updated.
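The loop of steps ST102 to ST105 above can be sketched as a small pipeline. All helper names are illustrative stand-ins for the patent's functional units, and the bodies are stubs:

```python
# Hypothetical minimal pipeline mirroring steps ST102-ST105.
def set_display_range(event):              # ST102: center point + scale
    return {"center": event["center"], "scale": event["scale"]}

def acquire_map_data(display_range):       # ST103: fetch range + vicinity
    return {"range": display_range, "roads": ["RD1", "RD2", "RD3"]}

def classify_sections(map_data, feature):  # ST104: stubbed classification
    return {road: "visible" for road in map_data["roads"]}

def render_map(map_data, sections):        # ST105: what would be drawn
    return {"roads": map_data["roads"], "sections": sections}

def update_map(event, feature):
    """One pass of the ST102-ST105 loop, re-run on every user operation."""
    rng = set_display_range(event)
    data = acquire_map_data(rng)
    return render_map(data, classify_sections(data, feature))
```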
  • Any method may be used for the user to designate a specific feature. For example, the user may designate the specific feature by entering its location (address), or the display control unit 14 may display a map on the display device 22 in advance and let the user designate the specific feature on that map.
  • The method for the user to specify the center point of the map display range may be the same as the method for designating the specific feature. Further, when the map display control device 10 can acquire the user's current position, as in Embodiment 2 described later, the display control unit 14 may set the display range automatically based on the current position, for example by setting the current position (or a position a certain distance ahead of it) as the center point of the display range.
  • In the above description, the map data stored in the map data storage device 21 is three-dimensional map data, but two-dimensional map data that does not include height information of land or features may be used instead.
  • In that case, the section classification unit 13 can perform the section classification process by estimating the heights of the land and the features (for example, buildings and mountains) included in the map data. For example, the height of a building may be assumed to be several tens of meters and the height of a mountain several hundred meters, or the heights of buildings and mountains may be regarded as infinite. If height information is included as attribute information of a feature in the two-dimensional map data, that information may be used.
  • When two-dimensional map data is used, the accuracy of the section classification process decreases, but the storage capacity required for the map data storage device 21 can be reduced, which contributes to lower cost.
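The assumed-height fallback for two-dimensional map data could look like the following sketch. The concrete values (30 m for buildings, 300 m for mountains) and the dictionary representation are my own assumptions illustrating the patent's "several tens / several hundred meters, or infinite" examples:

```python
# Assumed fallback heights for 2D map data that lacks height information.
ASSUMED_HEIGHT = {"building": 30.0, "mountain": 300.0}

def obstacle_height(feature):
    """Use the real height if the 2D data carries it as an attribute,
    otherwise fall back to an assumed value (infinite for unknown types,
    i.e. the feature is always treated as blocking the view)."""
    if "height" in feature:
        return feature["height"]
    return ASSUMED_HEIGHT.get(feature["type"], float("inf"))
```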
  • Two-dimensional map data and three-dimensional map data may also be used selectively. For example, the in-vehicle map data storage device 21, whose storage capacity is limited, may store the relatively compact two-dimensional map data, while the map display control device 10 acquires the larger three-dimensional map data from a server outside the vehicle by communication as needed.
  • In the above example, the display mode of the road line itself is changed for each specific feature visible section; alternatively, the display mode of a specific feature visible section may be changed by drawing an additional marking along its road line.
  • FIG. 6 is a diagram showing a configuration of the navigation system 30 according to the second embodiment.
  • The navigation system 30 is mounted on a vehicle; hereinafter, the vehicle on which the navigation system 30 is mounted, that is, the vehicle in which the user of the navigation system 30 rides, is referred to as the "own vehicle". Note that the navigation system 30 may instead be for pedestrians.
  • The navigation system 30 includes the map display control device 10, the map data storage device 21, the display device 22, the operation input device 23, a positioning device 24, and a planned travel route setting unit 25. Since the map data storage device 21, the display device 22, and the operation input device 23 are the same as those shown in FIG. 1, their description is omitted.
  • The positioning device 24 calculates the current position of the own vehicle, that is, the current position of the user, using positioning signals received from GNSS (Global Navigation Satellite System) satellites such as GPS (Global Positioning System) satellites.
  • The positioning device 24 may also perform processing to improve the accuracy of the calculated position of the own vehicle, for example by autonomous navigation using cameras or sensors mounted on the own vehicle, or by map matching using the map data stored in the map data storage device 21.
  • The planned travel route setting unit 25 calculates a recommended route from the current position of the own vehicle calculated by the positioning device 24 to the destination set by the user, and sets the calculated route as the planned travel route of the own vehicle, that is, the route the user plans to take. Alternatively, the planned travel route setting unit 25 may estimate the route the own vehicle will take, for example from its travel history, and set the estimated route as the planned travel route.
  • The map display control device 10 displays, on the display device 22, a map based on the map data stored in the map data storage device 21 according to the user's operation input to the operation input device 23 or the current position of the own vehicle measured by the positioning device 24.
  • In addition to the specific feature storage unit 11, the map data acquisition unit 12, the section classification unit 13, and the display control unit 14 shown in Embodiment 1, the map display control device 10 of Embodiment 2 includes a position information acquisition unit 15 that acquires the current position of the own vehicle calculated by the positioning device 24, and a route information acquisition unit 16 that acquires the planned travel route of the own vehicle set by the planned travel route setting unit 25. Since the operations of the specific feature storage unit 11, the map data acquisition unit 12, and the section classification unit 13 are the same as in Embodiment 1, their description is omitted.
  • The display control unit 14 displays a map corresponding to the map data acquired by the map data acquisition unit 12 on the screen of the display device 22, displaying the specific feature visible sections and invisible sections of each road in different display modes. In Embodiment 2, the display control unit 14 further differentiates the display mode of the planned travel route (the thickness, color, and so on of its road line) from that of other roads so that the planned travel route of the own vehicle can be distinguished. The display control unit 14 also superimposes an icon indicating the current position of the own vehicle on the map displayed on the display device 22.
  • For example, the display control unit 14 indicates the current position of the own vehicle on the map with an icon SV and draws the road line of road RD1, the planned travel route of the own vehicle, thicker than the other roads. When no planned travel route is set, the map is displayed as shown in FIG. 8 (the road lines of roads RD1, RD2, and RD3 have the same thickness, except in the specific feature visible sections).
  • The display mode (color, size, shape, etc.) of the own-vehicle icon SV may be changed depending on whether the current position of the own vehicle is in a specific feature visible section or a specific feature invisible section.
  • The display control unit 14 sets the display range of the map based on the current position of the own vehicle. For example, the display control unit 14 may set the current position of the own vehicle, or a position a certain distance ahead of it, as the center point of the display range so that the current position of the own vehicle is included in the display range. Further, the display control unit 14 may scroll the map displayed on the display device 22 as the current position of the own vehicle changes, so that the current position always stays within the display range.
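Centering the display range a fixed distance ahead of the vehicle, as described above, is simple trigonometry on the vehicle's heading. The coordinate convention and the 100 m default are illustrative assumptions:

```python
import math

def map_center(current_pos, heading_deg, lead_m=100.0):
    """Center point of the map display range, lead_m ahead of the vehicle.

    Heading is measured clockwise from north; x is east, y is north.
    """
    rad = math.radians(heading_deg)
    return (current_pos[0] + lead_m * math.sin(rad),
            current_pos[1] + lead_m * math.cos(rad))
```

Called on every position update, this keeps the own vehicle inside the display range while showing more of the road ahead than behind.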
  • In this way, the map display control device 10 can display, on the map shown on the display device 22, not only the specific feature visible sections but also the current position of the own vehicle as the user's current position and the planned travel route of the own vehicle as the route the user will take. By displaying the user's current position on the map, the user can grasp whether that position is within a specific feature visible section, that is, whether the specific feature can be seen from there. By displaying the route the user will take, the user can grasp whether the specific feature can be seen from the route ahead. It is therefore possible to prevent the user from unnecessarily searching for the specific feature in a place where it cannot be seen.
  • First, the specific feature storage unit 11 stores the specific feature (step ST201).
  • The route information acquisition unit 16 acquires information on the planned travel route of the own vehicle set by the planned travel route setting unit 25 (step ST202).
  • The position information acquisition unit 15 acquires information on the current position of the own vehicle calculated by the positioning device 24 (step ST203).
  • The display control unit 14 sets the display range of the map based on the current position of the own vehicle (step ST204).
  • The map data acquisition unit 12 acquires the map data of the set display range and its vicinity (step ST205).
  • The section classification unit 13 performs the section classification process, classifying the roads included in the map data into specific feature visible sections and specific feature invisible sections based on the map data acquired by the map data acquisition unit 12 (step ST206). Then, the display control unit 14 displays on the display device 22 a map showing the specific feature visible sections and invisible sections in different display modes (step ST207). At this time, if a planned travel route of the own vehicle is set, the display control unit 14 differentiates the display mode of the planned travel route from that of the other roads so that it can be distinguished.
  • Finally, the display control unit 14 superimposes an icon indicating the current position of the own vehicle on the map displayed on the display device 22 (step ST208). After that, steps ST203 to ST208 are executed repeatedly, and the map displayed on the display device 22 is updated.
  • In the above description, the navigation system 30 is mounted on a vehicle in which the user rides, but the vehicle may be public transportation such as a bus. The vehicle in which the user rides is also not limited to an automobile and may be, for example, a railroad train.
  • In that case, the map display control device 10 classifies the track, which is the planned travel route of the train, into specific feature visible sections and specific feature invisible sections, and displays on the display device 22 a map showing those sections in different display modes.
  • the section classification unit 13 may target only the roads included in the planned passage route of the own vehicle among the roads included in the map data acquired by the map data acquisition unit 12 for the section classification process.
  • the display control unit 14 displays the specific feature visible sections and the specific feature invisible sections on the planned travel route of the own vehicle (road RD1) in different display modes, while displaying the other roads (roads RD2 and RD3) in a uniform display mode.
  • as shown in FIG. 12, a translucent pattern (hereinafter referred to as a "fan-shaped pattern") may be displayed in the substantially triangular or substantially fan-shaped area bounded by the two straight lines connecting the position of the specific feature to both ends of the specific feature visible section and by the road line of the specific feature visible section. Displaying such a fan-shaped pattern makes it easier to understand the positional relationship between the specific feature visible section and the specific feature. Further, as shown in FIGS. 13 and 14, the fan-shaped pattern may be displayed only for the specific feature visible section in which the own vehicle is located. This makes it easier to understand the positional relationships among the specific feature visible section, the specific feature, and the own vehicle.
  • when the user's current position is within the specific feature visible section, the display control unit 14 may display an additional image showing the direction in which the specific feature can be seen on a part of the screen of the display device 22 on which the map is displayed.
  • FIG. 15 is an example in which an additional image 41 of a three-dimensional map with the user's position as the viewpoint is displayed on a part of the map screen. If the user's current position is within the specific feature visible section, the three-dimensional map of the additional image 41 includes the image of the specific feature (pagoda PG), as shown in FIG. 15, so the additional image 41 can represent the direction in which the specific feature is visible.
  • an elliptical image 411 that emphasizes the position of a specific feature is superimposed on the three-dimensional map.
  • the additional image 41 may include characters indicating the distance and direction from the user's current position to the specific feature.
  • a live-action image obtained by shooting the direction of the specific feature with a camera mounted on the own vehicle may also be used as the additional image 41. In this case as well, the additional image 41 can represent the direction in which the specific feature can be seen, and an image emphasizing the position of the specific feature (corresponding to the elliptical image 411 in the additional image 41 of FIG. 15) may be superimposed on the live-action image.
  • FIG. 16 is an example in which an additional image 42 showing the direction of the specific feature from the vehicle (own vehicle) on which the user is boarding is displayed on a part of the map screen.
  • the additional image 42 of FIG. 16 shows the direction of the specific feature from the own vehicle by the bird's-eye view image 421 of the own vehicle and the figure 422 that imitates the light of the searchlight indicating the direction of the specific feature.
  • the additional image showing the direction in which the specific feature can be seen may be displayed on the screen that the user can see by the head-up display so as to be superimposed on the actual landscape seen by the user.
  • consider a case where the pagoda PG, which is the specific feature, can be seen through the windshield 60 as shown in FIG. 17.
  • in this case, the display control unit 14 may control a head-up display (not shown) mounted on the own vehicle to display, on the windshield 60 serving as a screen, an additional image 43 indicating the direction in which the pagoda PG can be seen, as shown in FIG. 18.
  • the additional image 43 may be displayed so as to overlap the position where the pagoda PG can be seen from the driver.
  • the sense of distance of the additional image may be changed according to the distance from the user to the specific feature. For example, when the distance from the user to the specific feature is long, the additional image may be made to appear far from the user, and as the distance becomes shorter, the additional image may be made to appear closer to the user.
  • the additional image may also be displayed before the user enters the specific feature visible section, for the purpose of notifying the user that the specific feature will soon come into view.
  • for example, the display control unit 14 may control the head-up display to display, on the windshield 60, an additional image 44 announcing that the specific feature (pagoda PG) will soon come into view, as shown in FIG. 19.
  • the additional image 44 may be displayed so as to overlap the position where the pagoda PG starts to be seen from the driver's point of view.
  • the additional images 41 and 42 shown in FIGS. 15 and 16 may also be displayed before the user can actually see the specific feature, for the purpose of notifying the user that the specific feature will soon come into view.
  • Most navigation systems have a route simulation function that moves an icon indicating the user's virtual position along the planned route on the map.
  • the additional image 41 (three-dimensional map with the user's position as a viewpoint) shown in FIG. 15 can also be applied to route simulation. That is, in the route simulation, when the icon indicating the virtual position of the user is in the specific feature visible section, the additional image 41 may be displayed on a part of the screen of the route simulation.
  • the result of the section classification process by the section classification unit 13 (that is, the information on the specific feature visible sections and the specific feature invisible sections) may be reflected in the route guidance and the automatic driving control of the own vehicle.
  • when the map data stored in the map data storage device 21 is high-precision map data including position information for each lane of the road, the section classification unit 13 may perform the section classification process for each lane of the road, and the map display control device 10 may provide the result (for example, information on the specific feature visible sections for each lane, or information on the lane that can lengthen the distance of the specific feature visible section) to the navigation system 30 of the own vehicle or to an automatic driving system (not shown).
  • the navigation system 30 that has acquired the result of the section classification process can guide the user to a lane in which the distance traveled by the own vehicle in the visible section of the specific feature becomes longer.
  • the automatic driving system of the own vehicle that has acquired the result of the section classification process may automatically change lanes so that the distance that the own vehicle travels in the visible section of the specific feature becomes longer.
  • when the navigation system 30 searches for a route from the current position to the destination, it may be able to search for a route in which the specific feature visible sections become long. As a result, the time during which the user can see the specific feature can be extended.
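As a rough illustration of such a route search preference (the function name and scoring rule below are assumptions for illustration, not part of the embodiment), a navigation system could prefer, among candidate routes, the one whose specific feature visible sections are longest:

```python
def best_sightseeing_route(candidates):
    """Pick a route that lengthens the specific feature visible sections.

    candidates: iterable of (route_name, visible_length_m, total_length_m).
    Hypothetical rule: prefer the longest visible length; break ties by
    shorter total length. Both the representation and the rule are
    illustrative assumptions.
    """
    return max(candidates, key=lambda c: (c[1], -c[2]))[0]
```

A real system would weigh visible length against travel time and distance rather than using it as the sole criterion.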
  • FIG. 20 shows an example of the user's visual field range when the user is the driver of the vehicle (own vehicle).
  • here, the straight-ahead direction of the own vehicle is defined as 0°, angles to the right of straight ahead are positive, and angles to the left are negative.
  • if the limit angle at which the driver can see to the right is θR (> 0°) and the limit angle at which the driver can see to the left is θL (< 0°), the user's visual field range θ is θL ≤ θ ≤ θR; that is, θR and θL are parameters representing the width of the visual field range to the right and to the left of straight ahead, respectively.
  • in the example of FIG. 20, θR = 90° and θL = -90°; that is, the user's visual field range θ is in the range -90° ≤ θ ≤ 90°.
  • the configuration and operation of the map display control device 10 of the third embodiment are basically the same as those of the second embodiment.
  • the section classification unit 13 performs the section classification process in consideration of the user's visual field range at each point on the road.
  • the user's field of view is set according to the orientation of the own vehicle when passing through each point on the road.
  • the section classification unit 13 targets only the roads included in the planned passage route of the own vehicle among the roads included in the map data acquired by the map data acquisition unit 12 for the section classification process.
  • the point P2 is a point where the pagoda PG, which is the specific feature, can be seen right beside the road RD1 (in the direction of -90°). Therefore, when the user's visual field range θ is in the range -90° ≤ θ ≤ 90°, the section classification unit 13 of the third embodiment classifies the section between points P2 and P3 (section P2-P3), in which the direction of the pagoda PG falls outside the user's visual field range, as a specific feature invisible section.
  • as a result, the section classification unit 13 classifies, for the road RD1 which is the planned travel route, the sections P1-P2 and P4-P5 as specific feature (pagoda PG) visible sections, and the sections P0-P1, P2-P4, and P5-P6 as specific feature invisible sections in which the specific feature cannot be seen.
  • consequently, the display mode of the sections P1-P2 and P4-P5, which are classified as specific feature visible sections, differs from the display mode of the sections P0-P1, P2-P4, and P5-P6, which are classified as specific feature invisible sections.
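Under the same assumptions, the grouping of classified route points into visible and invisible runs can be sketched as below. The point representation (an unobstructed-sight-line flag plus a bearing) and all names are hypothetical simplifications of the embodiment's section classification process:

```python
def classify_points(points, theta_l=-90.0, theta_r=90.0):
    """points: list of (sight_line_clear, bearing_deg) per sampled route
    point. A point is a specific-feature-visible point only when the line
    of sight is unobstructed AND the feature's bearing lies within the
    visual field range (the third-embodiment refinement)."""
    return [clear and theta_l <= b <= theta_r for clear, b in points]

def to_runs(labels):
    """Group consecutive equal labels into (label, start_idx, end_idx)
    runs, i.e. the visible/invisible sections along the route."""
    runs = []
    for i, lab in enumerate(labels):
        if runs and runs[-1][0] == lab:
            runs[-1] = (lab, runs[-1][1], i)
        else:
            runs.append((lab, i, i))
    return runs
```

For a route sampled through an invisible stretch, a visible stretch, and a stretch where the feature has fallen behind the driver, this yields the alternating section structure described above.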
  • the magnitudes (absolute values) of θR and θL may be arbitrary values, and the magnitude of θR may be set independently of the magnitude of θL. That is, the visual field range need not be symmetrical; for example, the visual field range on the driver's seat side may be slightly wider. Specifically, if the own vehicle has the driver's seat on the right side (a so-called "right-hand drive vehicle"), the driver's visual field range θ may be set to -60° ≤ θ ≤ 75°, and if the own vehicle has the driver's seat on the left side (a so-called "left-hand drive vehicle"), the driver's visual field range θ may be set to -75° ≤ θ ≤ 60°. Further, the user may change the sizes of θR and θL according to his or her preference.
  • the direction of 0° does not have to be the straight-ahead direction of the own vehicle, and may change according to, for example, the road shape ahead of the own vehicle.
  • the tangential direction of the road in front of the own vehicle may be the direction of 0 °.
  • the direction of the point on the road several tens of meters away from the own vehicle may be set to the direction of 0 °.
  • the visual field range of the driver is defined as a range that the driver can directly see.
  • however, since the driver can also indirectly see the rear of the vehicle with a rearview mirror (including an "electronic mirror" that realizes the rearview mirror electronically), the range behind the vehicle that the driver can see through the rearview mirror may also be included in the visual field range.
  • the section classification unit 13 may change the set values of θR and θL in consideration of the driving load applied to the driver. That is, the width of the user's visual field range at each point on the road may be set to a different value at each point.
  • the width of the user's field of view may be set according to the driving environment when the vehicle passes through the point or the road attribute of the point.
  • on a road with a higher speed limit, the traveling speed of the vehicle is higher, and the driving load of the driver is considered to be heavier. Therefore, the section classification unit 13 may set the user's visual field range narrower on roads with higher speed limits. Similarly, since the driving load of the driver is considered to be large on winding or narrow roads, the section classification unit 13 may set the user's visual field range narrow on roads with large curvature or narrow width.
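One way to realize such load-dependent narrowing is sketched below. The thresholds, decrements, and parameter names are purely illustrative assumptions and are not values taken from the embodiment:

```python
def visual_field_half_width(base_deg=90.0, speed_limit_kmh=50,
                            curvature_per_m=0.0, narrow_road=False):
    """Narrow the half-width of the driver's visual field as the driving
    load grows (higher speed limit, sharper curvature, narrow road).
    All constants here are illustrative only."""
    width = base_deg
    if speed_limit_kmh >= 80:       # e.g. expressway-class road
        width -= 30.0
    elif speed_limit_kmh >= 60:
        width -= 15.0
    if curvature_per_m > 0.01:      # winding road (curvature in 1/m)
        width -= 15.0
    if narrow_road:
        width -= 15.0
    return max(width, 30.0)         # keep some minimum field of view
```

The returned half-width would then be used as θR (and, negated, as θL) for the point in question.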
  • it is also considered that the driving load of the driver increases at intersections and near junctions, and that it changes depending on the type of road (expressway, motorway, general road, etc.), the amount of pedestrians, and the amount of traffic, so the user's visual field range may be changed accordingly.
  • for example, since the amount of traffic can be grasped from VICS (Vehicle Information and Communication System) information, the user's visual field range at each point may be changed based on such information.
  • the driving load of the driver also becomes large when visibility is poor due to bad weather. Therefore, the user's visual field range at each point may be changed based on weather information.
  • the width of the user's field of view may be set according to the automatic driving level when the vehicle passes the point.
  • here, the definition of the automation level (automatic driving level) of automobile automatic driving will be described. The automatic driving levels of an automatic driving system are defined as follows in, for example, SAE (Society of Automotive Engineers) J3016 (September 2016) and JASO TP1804 (December 2018).
  • Level 0 (no driving automation): the driver performs some or all of the dynamic driving tasks.
  • Level 1 (driving assistance): the system performs either the longitudinal or the lateral vehicle motion control subtask in a limited area.
  • Level 2 (partial driving automation): the system performs both the longitudinal and the lateral vehicle motion control subtasks in a limited area.
  • Level 3 (conditional driving automation): the system performs all dynamic driving tasks in a limited area.
  • Level 4 (high driving automation): the system performs all dynamic driving tasks and the response when it is difficult to continue driving, in a limited area.
  • Level 5 (full driving automation): the system performs, without limitation (that is, not within a limited area), all dynamic driving tasks and the response when it is difficult to continue driving. Here, the "dynamic driving task" means all of the operational and tactical functions (excluding strategic functions such as itinerary planning and waypoint selection) that must be performed in real time when operating a vehicle in road traffic.
  • a "limited area” is a specific condition (geographical constraint, road surface constraint, environmental constraint, traffic constraint, speed constraint, temporal constraint) designed to operate the system or its function. (Including restrictions, etc.).
  • for example, the user's visual field range may be set wide in a section where the automatic driving level is high, and set narrow in a section where the automatic driving level is low.
  • in a section where the driver is released from the driving operation, the user's visual field range may be omnidirectional (-180° ≤ θ ≤ 180°).
  • FIG. 22 shows an example in which the visual field range θ1 of visibility level 1 is defined as θL ≤ θ1 ≤ θR, and the visual field range θ2 of visibility level 2 is defined as omnidirectional, that is, -180° ≤ θ2 ≤ 180°.
  • the visibility level represents the degree of ease of visual recognition; the visual field range of visibility level 1 is a range that the user can visually recognize more easily than the visual field range of visibility level 2.
  • the section classification unit 13 performs section classification processing in consideration of a plurality of types of visual field ranges. That is, the section classification unit 13 obtains a specific feature visible section corresponding to each of a plurality of types of visual field ranges. Further, the display control unit 14 displays the specific feature visible sections corresponding to each of the plurality of types of visual field ranges on the map in different display modes.
  • here, the visual field range θ1 of visibility level 1 is defined as -90° ≤ θ1 ≤ 90°, and the visual field range θ2 of visibility level 2 is defined as -180° ≤ θ2 ≤ 180°.
  • the point P2 is a point where the pagoda PG, which is a specific feature, can be seen right beside the road RD1 (in the direction of ⁇ 90 °).
  • in this case, the sections P1-P2 and P4-P5, in which the pagoda PG can be seen within the visual field range of visibility level 1, are classified as level 1 specific feature visible sections; the section P2-P3, in which the pagoda PG is outside the visual field range of visibility level 1 but can be seen within the visual field range of visibility level 2, is classified as a level 2 specific feature visible section; and the sections P0-P1, P3-P4, and P5-P6, in which the pagoda PG cannot be seen, are classified as specific feature invisible sections.
  • as a result, the display mode of the sections P1-P2 and P4-P5 classified as level 1 specific feature visible sections, the display mode of the section P2-P3 classified as a level 2 specific feature visible section, and the display mode of the sections P0-P1, P3-P4, and P5-P6 classified as specific feature invisible sections are all different from one another.
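A minimal sketch of this two-level classification for one route point could look like the following; the point representation (sight-line flag plus bearing) and the function name are assumptions for illustration:

```python
def visibility_level(sight_line_clear, bearing_deg,
                     theta_l=-90.0, theta_r=90.0):
    """0: specific feature invisible from this point;
    1: visible within the visibility level 1 range (theta_l..theta_r);
    2: visible only within the omnidirectional visibility level 2 range."""
    if not sight_line_clear:
        return 0
    if theta_l <= bearing_deg <= theta_r:
        return 1
    return 2
```

Mapping each sampled route point through this function and grouping equal levels yields the level 1 visible, level 2 visible, and invisible sections shown in different display modes.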
  • as shown in FIG. 24, mutually different fan-shaped patterns may be attached to the sections P1-P2 and P4-P5 classified as level 1 specific feature visible sections and to the section P2-P3 classified as a level 2 specific feature visible section. At this time, as described with reference to FIGS. 13 and 14, the fan-shaped pattern may be displayed only for the section in which the own vehicle is located.
  • further, a range that is directly visible to the driver may be defined as the level 1 visual field range, and a range that is indirectly visible to the driver through a rearview mirror (including an electronic mirror) may be defined as the level 2 visual field range.
  • in the fourth embodiment, a plurality of features can be designated as specific features. For example, the user can designate not only one specific pagoda but also all the buildings belonging to the category of pagodas, or all the buildings belonging to the category of shrines and temples, as specific features.
  • the attribute of the features may be anything; for example, it may be a convenience store, a shop of a specific brand, or the like.
  • the configuration and operation of the map display control device 10 of the fourth embodiment are basically the same as those of the second embodiment.
  • the section classification unit 13 performs a section classification process for classifying the road into a specific feature visible section and a specific feature invisible section for each of the plurality of features.
  • the display control unit 14 changes the display mode of the specific feature visible section according to the number of specific features visible from the specific feature visible section.
  • the section classification unit 13 targets only the roads included in the planned passage route of the own vehicle among the roads included in the map data acquired by the map data acquisition unit 12 for the section classification process.
  • here, it is assumed that the map data acquisition unit 12 has acquired the map data of the map shown in FIG. 25 as the map data of the map to be displayed on the display device 22.
  • the map of FIG. 25 is almost the same as that of FIG. 2, but there is another pagoda PG2 on the hillside of the mountain M2 in addition to the pagoda PG.
  • the pagoda PG2 is visible from the road RD1 in the section between the points PP1 and PP2 (section PP1-PP2).
  • in this case, the section classification unit 13 classifies, for the road RD1 which is the planned travel route of the own vehicle, the sections P1-P3 and P4-P5 as specific feature visible sections of the pagoda PG, and the sections P0-P1, P3-P4, and P5-P6 as specific feature invisible sections of the pagoda PG. Further, the section classification unit 13 classifies the section PP1-PP2 as a specific feature visible section of the pagoda PG2, and the sections P0-PP1 and PP2-P6 as specific feature invisible sections of the pagoda PG2.
  • combining these results, the section classification unit 13 classifies the sections P0-PP1, P3-P4, and P5-P6 as specific feature invisible sections in which neither the pagoda PG nor the pagoda PG2 can be seen; the section PP1-P1 as a specific feature visible section in which only the pagoda PG2 can be seen; the section P1-PP2 as a specific feature visible section in which both the pagodas PG and PG2 can be seen; and the sections PP2-P3 and P4-P5 as specific feature visible sections in which only the pagoda PG can be seen.
  • then, as shown in FIG. 26, for example, the display control unit 14 displays the specific feature invisible sections in which neither the pagoda PG nor the pagoda PG2 can be seen (sections P0-PP1, P3-P4, and P5-P6), the specific feature visible sections in which only one of the pagodas PG and PG2 can be seen (sections PP1-P1, PP2-P3, and P4-P5), and the specific feature visible section in which both the pagodas PG and PG2 can be seen (section P1-PP2) in display modes different from one another. As a result, the user can recognize how many specific features (pagodas) can be seen from each section.
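The per-section count of visible specific features that drives this choice of display mode can be sketched as follows, using a hypothetical interval representation of the per-feature classification results:

```python
def count_visible_features(visible_ranges, position):
    """visible_ranges: {feature_name: [(start, end), ...]} giving the
    stretches along the route where each designated feature can be seen
    (an assumed representation of the per-feature classification result).
    Returns the number of features visible at the given route position."""
    return sum(
        any(start <= position <= end for start, end in ranges)
        for ranges in visible_ranges.values()
    )
```

Sections where this count is 0, 1, or 2 would then be given the three different display modes described above.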
  • alternatively, as shown in FIG. 27, the display control unit 14 may attach mutually different fan-shaped patterns to the specific feature visible sections of the pagoda PG (sections P1-P3 and P4-P5) and to the specific feature visible section of the pagoda PG2 (section PP1-PP2).
  • the section where the two fan-shaped patterns overlap is the section where the two specific features can be seen. Therefore, the user can recognize the number of specific features that can be seen from each section by the number of overlapping fan-shaped patterns.
  • the additional images 41 and 42 described with reference to FIGS. 15 and 16 may be applied to the fourth embodiment.
  • in this case, when the additional image 41 of the three-dimensional map with the user's position as the viewpoint is displayed on a part of the map screen, both the pagoda PG and the pagoda PG2 are displayed on the three-dimensional map in the specific feature visible section where both can be seen, as shown in FIG. 28.
  • further, when the additional image 42 showing the direction of a specific feature from the vehicle (own vehicle) on which the user is boarding is displayed on a part of the map screen, two figures 422 imitating the light of a searchlight, one indicating the direction of the pagoda PG and the other the direction of the pagoda PG2, are displayed as shown in FIG. 29.
  • the additional images 43 and 44 to be displayed on the head-up display described with reference to FIGS. 17, 18 and 19 may also be applied to the fourth embodiment.
  • <Hardware configuration example> FIGS. 30 and 31 are diagrams each showing an example of the hardware configuration of the map display control device 10.
  • each function of the components of the map display control device 10 shown in FIG. 1 or FIG. 6 is realized by, for example, the processing circuit 50 shown in FIG. 30. That is, the map display control device 10 includes the processing circuit 50 for: storing the specific feature designated by the user; acquiring map data including information on the specific feature; performing, based on the map data, the section classification process of classifying the roads included in the map data into specific feature visible sections, which are sections where the specific feature can be seen, and specific feature invisible sections, which are sections where the specific feature cannot be seen; and displaying, on a display device, a map showing the specific feature visible sections and the specific feature invisible sections of the roads in mutually different display modes, based on the map data and the result of the section classification process.
  • the processing circuit 50 may be dedicated hardware, or may be configured by using a processor (also called a CPU (Central Processing Unit), processing unit, arithmetic unit, microprocessor, microcomputer, or DSP (Digital Signal Processor)) that executes a program stored in a memory.
  • when the processing circuit 50 is dedicated hardware, it corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • the functions of the components of the map display control device 10 may be realized by individual processing circuits, or these functions may be collectively realized by one processing circuit.
  • FIG. 31 shows an example of the hardware configuration of the map display control device 10 when the processing circuit 50 is configured by using the processor 51 that executes the program.
  • in this case, the functions of the components of the map display control device 10 are realized by software or the like (software, firmware, or a combination of software and firmware).
  • the software or the like is described as a program and stored in the memory 52.
  • the processor 51 realizes the functions of each part by reading and executing the program stored in the memory 52.
  • in this case, the map display control device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: the process of storing the specific feature designated by the user; the process of acquiring map data including information on the specific feature; the process of classifying, based on the map data, the roads included in the map data into specific feature visible sections, which are sections where the specific feature can be seen, and specific feature invisible sections, which are sections where the specific feature cannot be seen; and the process of displaying, on a display device, a map showing the specific feature visible sections and the specific feature invisible sections of the roads in mutually different display modes, based on the map data and the result of the section classification process.
  • here, the memory 52 may be, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive); a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, or a DVD (Digital Versatile Disc) and its drive device; or any storage medium to be used in the future.
  • however, the present invention is not limited to this, and a configuration may be adopted in which some of the components of the map display control device 10 are realized by dedicated hardware and other components are realized by software or the like.
  • for example, for some components, the functions can be realized by the processing circuit 50 as dedicated hardware, while for other components, the functions can be realized by the processing circuit 50 as the processor 51 reading and executing a program stored in the memory 52.
  • as described above, the map display control device 10 can realize each of the above functions by hardware, software, or the like, or a combination thereof.
  • the embodiments can be freely combined, and each embodiment can be appropriately modified or omitted within the scope of the invention.
  • 10 map display control device, 11 specific feature storage unit, 12 map data acquisition unit, 13 section classification unit, 14 display control unit, 15 position information acquisition unit, 16 route information acquisition unit, 21 map data storage device, 22 display device, 23 operation input device, 24 positioning device, 25 planned travel route setting unit, 30 navigation system, 41-44 additional images, 50 processing circuit, 51 processor, 52 memory, 60 windshield.

Abstract

In this map display control device (10), a specific feature storage unit (11) stores a specific feature specified by a user, and a map data acquisition unit (12) acquires map data including information about the specific feature. A section classification unit (13) carries out section classification processing in which a road included in the map data acquired by the map data acquisition unit (12) is classified into a specific-feature-viewable section where the specific feature is viewable and a specific-feature-non-viewable section where the specific feature is not viewable. A display control unit (14) uses the map data and the result of the section classification processing to display, on a display device (22), a map in which the specific-feature-viewable section and the specific-feature-non-viewable section of the road are represented using different display styles.

Description

Map display control device and map display control method
The present invention relates to a map display control device that displays a map on a display device.
A map display device that displays a map on the screen of a display device is widely applied to, for example, navigation devices and smartphones having a map display application. In recent years, against the background of higher performance of map display devices and higher accuracy of map data, map display devices capable of three-dimensional map display using three-dimensional model map data have also become widespread.
Further, Patent Document 1 below discloses a vehicle display device capable of indicating the direction of a specific feature (a landmark or the like) with reference to the position of the vehicle.
Patent Document 1: JP-A-2017-118486
By designating a specific feature of interest (hereinafter referred to as a "specific feature"), the user of a map display device can display a map around that specific feature on the screen of the display device and thereby recognize the positional relationship between the user's current position and the specific feature. However, even if the user looks around and searches for the specific feature, it may not be visible depending on the surrounding terrain, the positions of buildings, and the like. Searching for a specific feature in a place where it cannot be seen is wasted effort. In particular, when the user is the driver of a vehicle, needlessly searching for a specific feature increases the driving load.
The present invention has been made to solve the above problems, and an object thereof is to provide a map display control device capable of showing a user the places where a specific feature can be visually recognized.
 A map display control device according to the present invention includes: a specific feature storage unit that stores a specific feature designated by the user; a map data acquisition unit that acquires map data including information on the specific feature; a section classification unit that performs, based on the map data, a section classification process of classifying the roads included in the map data into specific feature visible sections, which are sections from which the specific feature can be visually recognized, and specific feature invisible sections, which are sections from which it cannot; and a display control unit that, based on the map data and the result of the section classification process, displays on a display device a map in which the specific feature visible sections and the specific feature invisible sections of the roads are shown in mutually different display modes.
 According to the map display control device of the present invention, the user can know in advance the places from which the specific feature can be visually recognized. This prevents the user from wastefully searching for the specific feature from a place where it cannot be seen.
 The objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a diagram showing the configuration of the map display control device according to Embodiment 1.
FIG. 2 is a diagram showing an example of a map based on map data.
FIG. 3 is a diagram showing an example of a map displayed by the map display control device according to Embodiment 1.
FIG. 4 is a flowchart showing the operation of the map display control device according to Embodiment 1.
FIG. 5 is a diagram showing an example of a map displayed by the map display control device according to Embodiment 1.
FIG. 6 is a diagram showing the configuration of a navigation system including the map display control device according to Embodiment 2.
FIGS. 7 and 8 are diagrams each showing an example of a map displayed by the map display control device according to Embodiment 2.
FIG. 9 is a flowchart showing the operation of the map display control device according to Embodiment 2.
FIGS. 10 to 14 are diagrams each showing an example of a map displayed by the map display control device according to Embodiment 2.
FIGS. 15 to 19 are diagrams each showing an example of an additional image displayed by the map display control device according to Embodiment 2.
FIG. 20 is a diagram showing the visual field range of a user who is the driver of a vehicle.
FIG. 21 is a diagram showing an example of a map displayed by the map display control device according to Embodiment 3.
FIG. 22 is a diagram showing the visual field range for each visibility level of a user who is the driver of a vehicle.
FIGS. 23 and 24 are diagrams each showing an example of a map displayed by the map display control device according to Embodiment 3.
FIG. 25 is a diagram showing an example of a map based on map data.
FIGS. 26 to 29 are diagrams each showing an example of a map displayed by the map display control device according to Embodiment 4.
FIGS. 30 and 31 are diagrams each showing an example of the hardware configuration of the map display control device.
 <Embodiment 1>
 FIG. 1 is a diagram showing the configuration of the map display control device 10 according to Embodiment 1. As shown in FIG. 1, the map display control device 10 is connected to a map data storage device 21, a display device 22, and an operation input device 23.
 The map data storage device 21 is a storage medium storing map data. The map data stored in the map data storage device 21 is three-dimensional map data that includes not only the planar positions of features such as roads and buildings but also information such as the elevation of the land and the heights of the features.
 The display device 22 is the means by which the map display control device 10 displays a map, and is composed of, for example, a liquid crystal display. The operation input device 23 is a user interface that accepts the user's operations on the map display control device 10. Using the operation input device 23, the user can, for example, set the range of the map to be displayed on the display device 22 (hereinafter, the "map display range") or designate a feature of interest as a specific feature. In the present embodiment, the map display range is set by the user specifying the center point of the range and the scale of the map.
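 As a concrete illustration, the map display range derived from a center point and a scale can be sketched as a bounding-box computation. The screen size, the meters-per-pixel convention for the scale, and all names below are assumptions made for illustration, not details of the device described here.

```python
def display_range(center_x, center_y, meters_per_pixel,
                  screen_w_px=800, screen_h_px=480):
    """Return (min_x, min_y, max_x, max_y) of the ground area shown on
    screen, in the same planar coordinates as the map data.

    The scale is expressed here as meters of ground per screen pixel
    (an assumed convention); a smaller value zooms the map in.
    """
    half_w = screen_w_px * meters_per_pixel / 2.0
    half_h = screen_h_px * meters_per_pixel / 2.0
    return (center_x - half_w, center_y - half_h,
            center_x + half_w, center_y + half_h)
```

 For example, halving `meters_per_pixel` halves the width and height of the displayed area around the same center point, which corresponds to the user choosing a larger map scale.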
 Note that the map data storage device 21, the display device 22, and the operation input device 23 may be built into the map display control device 10. The map data storage device 21 may also be configured as a server that delivers map data to the map display control device 10 over a communication network such as the Internet. The operation input device 23 may be hardware such as a keyboard, push buttons, an operation lever, or a touch pad, or may be software keys displayed on a screen. For example, the display device 22 may display software keys serving as the operation input device 23, in which case the operation input device 23 and the display device 22 may be configured as a single touch panel.
 The map display control device 10 displays, on the display device 22, a map based on the map data stored in the map data storage device 21, in accordance with user operations input to the operation input device 23. As shown in FIG. 1, the map display control device 10 includes a specific feature storage unit 11, a map data acquisition unit 12, a section classification unit 13, and a display control unit 14.
 The specific feature storage unit 11 stores the specific feature designated by the user. The map data acquisition unit 12 acquires, from the map data storage device 21, the map data for the map display range set by the user and its vicinity. When the specific feature is located within or near the map display range, the map data acquired by the map data acquisition unit 12 includes the information on the specific feature.
 Based on the map data acquired by the map data acquisition unit 12, the section classification unit 13 classifies the roads included in the map data into specific feature visible sections, which are sections from which the specific feature can be visually recognized, and specific feature invisible sections, which are sections from which it cannot. Because the map data stored in the map data storage device 21 is three-dimensional map data including the elevation of the land and the heights of features, the section classification unit 13 can judge, from the elevation of the road, the elevation of the site of the specific feature, the size and height of the specific feature, the heights of the land and features lying between the road and the specific feature, and so on, whether the specific feature can be seen from each point on the road, that is, whether the line of sight between each point on the road and the specific feature is blocked by land or features. Hereinafter, the process of classifying roads into specific feature visible sections and specific feature invisible sections is referred to as the "section classification process".
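 The judgment described above can be sketched as a simple line-of-sight test: a road point sees the specific feature unless some sampled terrain or feature along the straight sight line rises above that line. This is only an illustrative sketch under assumed names and data structures, not the actual implementation of the section classification unit 13.

```python
def is_visible(road_pt, feature_pt, height_at, samples=100):
    """road_pt, feature_pt: (x, y, z) eye position and feature-top position.
    height_at(x, y): combined ground + feature height from the 3-D map data
    (an assumed interface)."""
    x0, y0, z0 = road_pt
    x1, y1, z1 = feature_pt
    for i in range(1, samples):
        t = i / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        sight_z = z0 + t * (z1 - z0)       # height of the sight line here
        if height_at(x, y) > sight_z:      # blocked by land or a feature
            return False
    return True

def classify_road(points, feature_pt, height_at):
    """Split an ordered list of road points into maximal runs of visible /
    invisible points (the 'section classification process' in miniature)."""
    sections = []
    for p in points:
        vis = is_visible(p, feature_pt, height_at)
        if sections and sections[-1][0] == vis:
            sections[-1][1].append(p)
        else:
            sections.append((vis, [p]))
    return sections  # list of (visible?, [points]) runs
```

 A real implementation would sample along the road geometry and use the actual elevation model; the run-length grouping is what turns per-point visibility into the visible/invisible sections drawn on the map.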
 In the section classification process, "the specific feature is visible" does not necessarily mean that the whole of the specific feature can be seen. For example, the state in which only a part of the specific feature is seen, or the state in which at least a certain proportion of it is seen, may also be defined as "the specific feature is visible". The definition may also be varied per specific feature or per type of specific feature. For example, if the specific feature is a pagoda with a distinctive roof shape, the specific feature may be judged visible when the roof portion of the pagoda can be seen.
 The display control unit 14 displays the map corresponding to the map data acquired by the map data acquisition unit 12 on the screen of the display device 22. In doing so, the display control unit 14 displays the specific feature visible sections and the specific feature invisible sections of the roads in mutually different display modes, based on the result of the section classification process by the section classification unit 13, so that the user can distinguish between them.
 Any method may be used to differentiate the display modes of the specific feature visible sections and the specific feature invisible sections on the map. For example, the thickness or color of the lines representing roads (road lines) may differ between the two kinds of sections, or one of the two kinds of sections may be animated (for example, displayed blinking).
 The operation of the map display control device 10 is described below with specific examples of the maps that the display control unit 14 displays on the display device 22. Here it is assumed that the map data acquisition unit 12 has acquired, as the map data for the map to be displayed on the display device 22, the map data for the map shown in FIG. 2. The map of FIG. 2 includes roads RD1, RD2, and RD3, mountains M1 and M2, buildings B1, B2, B3, and B4, and a pagoda PG standing partway up mountain M2. Road RD1 and road RD2 connect at intersection X1, and road RD1 and road RD3 connect at intersection X2. Road RD3 extends from intersection X2 to the pagoda PG.
 It is further assumed that the user of the map display control device 10 has designated the pagoda PG as the specific feature, and that the information on the pagoda PG is stored in the specific feature storage unit 11.
 Points P0 to P6 are defined on road RD1. Point P0 is the intersection of road RD1 with the lower edge of the map, and point P6 is its intersection with the upper edge; that is, the map display area includes the section of road RD1 between points P0 and P6 (section P0-P6). The section between points P0 and P1 (section P0-P1) is a section from which the pagoda PG cannot be seen because it is blocked by the slope of mountain M2. The section between points P1 and P3 (section P1-P3) is a section from which the pagoda PG can be seen from the south. Point P2 within section P1-P3 is the point where the pagoda PG appears directly beside road RD1; that is, the angle between the direction in which the pagoda PG is seen from point P2 and the tangential direction of road RD1 at point P2 is 90°. The section between points P3 and P4 (section P3-P4) is a section from which the pagoda PG cannot be seen because it is blocked by buildings B1, B2, and B3. The section between points P4 and P5 (section P4-P5) is a section from which the pagoda PG can be seen from the east. The section between points P5 and P6 (section P5-P6) is a section from which the pagoda PG cannot be seen because it is blocked by building B4.
 Points Q0 to Q2 are defined on road RD2. Point Q0 is the intersection of road RD2 with the lower edge of the map, and point Q2 is its intersection with the upper edge; that is, the map display area includes the section of road RD2 between points Q0 and Q2 (section Q0-Q2). The section between points Q0 and Q1 (section Q0-Q1) is a section from which the pagoda PG can be seen. The section between points Q1 and Q2 (section Q1-Q2) is a section from which the pagoda PG cannot be seen because it is blocked by mountain M2. The pagoda PG can be seen from everywhere on road RD3.
 In this case, for road RD1, the section classification unit 13 classifies sections P1-P3 and P4-P5 as specific feature visible sections, from which the specific feature (pagoda PG) can be seen, and classifies sections P0-P1, P3-P4, and P5-P6 as specific feature invisible sections, from which it cannot. For road RD2, the section classification unit 13 classifies section Q0-Q1 as a specific feature visible section and section Q1-Q2 as a specific feature invisible section. Further, the section classification unit 13 classifies the whole of road RD3 as a specific feature visible section.
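 The classification result of this worked example can be written out as data; the section labels below follow the figure, while the data structure itself is an assumed, illustrative representation.

```python
# Section classification result for the FIG. 2 example (illustrative form).
classification = {
    "RD1": [("P0-P1", False), ("P1-P3", True), ("P3-P4", False),
            ("P4-P5", True), ("P5-P6", False)],
    "RD2": [("Q0-Q1", True), ("Q1-Q2", False)],
    "RD3": [("all", True)],   # visible along its entire length
}

def visible_sections(road):
    """Return the labels of the specific feature visible sections."""
    return [name for name, vis in classification[road] if vis]
```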
 When displaying the map on the display device 22, the display control unit 14 makes the display mode of the sections classified as specific feature visible sections (sections P1-P3 and P4-P5 of road RD1, section Q0-Q1 of road RD2, and the whole of road RD3) differ from that of the sections classified as specific feature invisible sections (sections P0-P1, P3-P4, and P5-P6 of road RD1 and section Q1-Q2 of road RD2), for example as shown in FIG. 3. Here, the road lines of the specific feature visible sections are drawn thicker than those of the specific feature invisible sections, and in a different color. With the map of FIG. 3 displayed on the display device 22, the user can know in advance the places from which the pagoda PG, the specific feature, can be seen, and is thus prevented from wastefully searching for it from a place where it cannot be seen.
 Note that FIG. 3 shows reference signs such as "P1" and "Q1" for convenience of explanation; these signs need not appear on the map displayed on the display device 22. The same applies to the map display examples shown below.
 Next, the operation of the map display control device 10 according to Embodiment 1 is described with reference to the flowchart of FIG. 4. When the map display control device 10 starts and the user designates a specific feature using the operation input device 23, the specific feature storage unit 11 stores that specific feature (step ST101). When the user then specifies, using the operation input device 23, the center point and scale of the map to be displayed, the display control unit 14 sets the map display range (step ST102), and the map data acquisition unit 12 acquires the map data for the set display range and its vicinity (step ST103).
 Then, based on the map data acquired by the map data acquisition unit 12, the section classification unit 13 performs the section classification process of classifying the roads included in the map data into specific feature visible sections and specific feature invisible sections (step ST104). The display control unit 14 then displays, on the display device 22, a map (the map corresponding to the map data acquired by the map data acquisition unit 12) in which the specific feature visible sections and the specific feature invisible sections are shown in mutually different display modes (step ST105).
 The process then returns to step ST102 and waits for a user operation. For example, when the user changes the map display range by scrolling the map or changing its scale, steps ST102 to ST105 are executed again and the map displayed on the display device 22 is updated.
 Any method may be used for the user to designate the specific feature. For example, the user may designate it by entering its location (address), or the display control unit 14 may display a map on the display device 22 beforehand and the user may designate the specific feature on that map.
 The user may specify the center point of the map display range by the same kinds of methods as for designating the specific feature. Alternatively, when the map display control device 10 can acquire the user's current position, as in Embodiment 2 described later, the display control unit 14 may set the map display range automatically based on the user's current position, for example by taking the user's current position (or a position a certain distance ahead of it) as the center point of the map display range.
 In the above description, the map data stored in the map data storage device 21 is three-dimensional map data, but two-dimensional map data containing no height information for land and features may be used instead. When two-dimensional map data is used, the section classification unit 13 can perform the section classification process by estimating the heights of the land and features from their attribute information in the map data (for example, building, mountain, and so on). For example, the height of a building may be assumed to be several tens of meters and the height of a mountain several hundred meters, or the heights of buildings and mountains may be treated as infinite. Of course, if the attribute information of a feature in the two-dimensional map data includes its height, that information may be used. With two-dimensional map data the accuracy of the section classification process falls, but the storage capacity required of the map data storage device 21 can be reduced, contributing to lower cost.
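 The fallback height estimation for two-dimensional map data can be sketched as follows: use the feature's height attribute when present, otherwise assume a value from its category. The attribute names and the concrete numbers (30 m, 300 m) are assumptions that merely mirror the "tens of meters" and "hundreds of meters" examples in the text.

```python
# Assumed default heights per feature category (illustrative values).
ASSUMED_HEIGHTS_M = {"building": 30.0, "mountain": 300.0}

def estimated_height(attrs):
    """attrs: attribute dict of one feature in the 2-D map data
    (hypothetical schema)."""
    if "height_m" in attrs:              # explicit height attribute wins
        return attrs["height_m"]
    return ASSUMED_HEIGHTS_M.get(attrs.get("type"), 0.0)
```

 Treating buildings and mountains as infinitely tall, as the text also suggests, would correspond to returning `float("inf")` for those categories, trading accuracy for a conservative visibility judgment.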
 Two-dimensional and three-dimensional map data may also be used in combination. For example, when the map display control device 10 is mounted on a vehicle, the in-vehicle map data storage device 21, whose storage capacity is limited, may store the relatively compact two-dimensional map data, while the bulkier three-dimensional map data is acquired by the map display control device 10 as needed, by communication, from a server outside the vehicle.
 [Modification]
 In FIG. 3, the display mode of the road lines themselves is varied in the specific feature visible sections, but the display mode of a specific feature visible section may instead be varied by drawing, along its road line, an auxiliary line indicating that it is a specific feature visible section, for example as shown in FIG. 5. In that case, the auxiliary line is preferably drawn along the specific feature side of the road line, which lets the user grasp on which side of the road the specific feature is visible. This is particularly effective when the specific feature has moved outside the map display range, for example because the user has scrolled the map.
 <Embodiment 2>
 Embodiment 2 shows an example in which the map display control device 10 is applied to a navigation system. FIG. 6 is a diagram showing the configuration of the navigation system 30 according to Embodiment 2. In the present embodiment, the navigation system 30 is assumed to be mounted on a vehicle; below, the vehicle carrying the navigation system 30, that is, the vehicle in which the user of the navigation system 30 rides, is referred to as the "own vehicle". The navigation system 30 may, however, be one for pedestrians.
 As shown in FIG. 6, the navigation system 30 includes the map display control device 10, the map data storage device 21, the display device 22, the operation input device 23, a positioning device 24, and a planned travel route setting unit 25. The map data storage device 21, the display device 22, and the operation input device 23 are the same as those shown in FIG. 1, and their description is omitted.
 The positioning device 24 calculates the current position of the own vehicle, that is, the current position of the user, using positioning signals received from GNSS (Global Navigation Satellite System) satellites such as GPS (Global Positioning System) satellites. The positioning device 24 may also improve the accuracy of the calculated position by, for example, autonomous navigation using cameras and sensors mounted on the own vehicle, or map matching using the map data stored in the map data storage device 21.
 The planned travel route setting unit 25 calculates a recommended route from the current position of the own vehicle calculated by the positioning device 24 to a destination set by the user, and sets the calculated recommended route as the planned travel route of the own vehicle, that is, the route the user is expected to take. When the user has not set a destination, the planned travel route setting unit 25 may estimate the route the own vehicle will take, for example from the travel history of the own vehicle, and set the estimated route as the planned travel route.
 The map display control device 10 displays, on the display device 22, a map based on the map data stored in the map data storage device 21, in accordance with user operations input to the operation input device 23 or with the current position of the own vehicle measured by the positioning device 24.
 As shown in FIG. 6, the map display control device 10 of Embodiment 2 includes, in addition to the specific feature storage unit 11, the map data acquisition unit 12, the section classification unit 13, and the display control unit 14 shown in Embodiment 1, a position information acquisition unit 15 that acquires the current position of the own vehicle calculated by the positioning device 24, and a route information acquisition unit 16 that acquires the planned travel route of the own vehicle set by the planned travel route setting unit 25. The operations of the specific feature storage unit 11, the map data acquisition unit 12, and the section classification unit 13 are the same as in Embodiment 1, and their description is omitted.
The display control unit 14 displays a map corresponding to the map data acquired by the map data acquisition unit 12 on the screen of the display device 22. In doing so, the display control unit 14 displays the specific feature visible sections and the specific feature invisible sections of the roads in mutually different display modes. In the second embodiment, the display control unit 14 further renders the planned travel route in a display mode (road-line thickness, color, and so on) different from that of the other roads, so that the planned travel route of the own vehicle can be distinguished from them. The display control unit 14 also superimposes an icon indicating the current position of the own vehicle on the map displayed on the display device 22.
For example, when the current position of the own vehicle is on the road RD1 and the planned travel route of the own vehicle is the road RD1, the display control unit 14 displays a triangular icon SV indicating the current position of the own vehicle on the map, as shown in FIG. 7, and draws the road line of the road RD1, which is the planned travel route, thicker than the other roads. When no planned travel route has been set, the planned travel route is not highlighted, as shown in FIG. 8 (except in the specific feature visible sections, the road lines of the roads RD1, RD2, and RD3 have the same thickness). The display mode (color, size, shape, and so on) of the icon SV may also be changed depending on whether the current position of the own vehicle is in a specific feature visible section or in a specific feature invisible section.
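To make the display-mode selection concrete, here is a minimal sketch, not part of the publication, of how a renderer might pick a road-line style from the two attributes described above; the helper name and the style values are illustrative assumptions.

```python
# Hypothetical sketch: choosing a display style for one road segment.
# The width and color values are illustrative, not from the publication.

def segment_style(is_visible_section: bool, on_planned_route: bool) -> dict:
    """Return the line style for a road segment.

    Specific feature visible sections are drawn in a highlight color,
    and the planned travel route is drawn with a thicker line than
    other roads (cf. FIG. 7).
    """
    return {
        "width": 4 if on_planned_route else 2,
        "color": "orange" if is_visible_section else "gray",
    }
```

For example, a visible section on the planned route would combine both emphases (thick and colored), while an ordinary invisible-section road stays thin and gray.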
In the second embodiment, the display control unit 14 also sets the display range of the map based on the current position of the own vehicle. For example, the display control unit 14 may take the current position of the own vehicle, or a position a fixed distance ahead of it, as the center point of the display range, so that the current position of the own vehicle is included in the display range. Further, the display control unit 14 may scroll the map displayed on the display device 22 as the current position of the own vehicle changes, so that the current position is always within the display range of the map.
According to the map display control device 10 of the second embodiment, the map displayed on the display device 22 can show not only the specific feature visible sections but also the current position of the own vehicle as the user's current position and the planned travel route of the own vehicle as the route the user is expected to travel. Because the user's current position is shown on the map, the user can tell whether the current position is within a specific feature visible section, that is, whether the specific feature can be seen from the current position. Because the user's planned route is shown on the map, the user can also tell whether the specific feature will be visible from the route ahead. This prevents the user from searching in vain for the specific feature in places from which it cannot be seen.
Next, the operation of the map display control device 10 according to the second embodiment will be described with reference to the flowchart of FIG. 9. When the map display control device 10 starts up and the user designates a specific feature using the operation input device 23, the specific feature storage unit 11 stores that specific feature (step ST201). The route information acquisition unit 16 then acquires the information on the planned travel route of the own vehicle set by the planned travel route setting unit 25 (step ST202).
Further, the position information acquisition unit 15 acquires the information on the current position of the own vehicle calculated by the positioning device 24 (step ST203). The display control unit 14 sets the display range of the map based on the current position of the own vehicle (step ST204). The map data acquisition unit 12 acquires the map data for the set display range and its vicinity (step ST205).
After that, based on the map data acquired by the map data acquisition unit 12, the section classification unit 13 performs a section classification process that classifies the roads included in the map data into specific feature visible sections and specific feature invisible sections (step ST206). The display control unit 14 then displays on the display device 22 a map showing the specific feature visible sections and the specific feature invisible sections in mutually different display modes (step ST207). At this time, if a planned travel route of the own vehicle has been set, the display control unit 14 renders the planned travel route in a display mode different from that of the other roads, so that it can be distinguished from them.
Further, the display control unit 14 superimposes an icon indicating the current position of the own vehicle on the map displayed on the display device 22 (step ST208).
The process then returns to step ST203; whenever the current position of the own vehicle is updated, steps ST203 to ST208 are executed again and the map displayed on the display device 22 is updated.
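As a rough sketch of one pass of this update cycle (steps ST203 to ST208), assuming hypothetical callables standing in for the positioning, map data acquisition, and section classification units:

```python
# Hypothetical sketch of one pass of the update cycle of FIG. 9.
# The callables and the square view geometry are illustrative assumptions.

def center_view_on(pos, span=1000.0):
    """ST204: a square display range centered on the current position."""
    x, y = pos
    return (x - span / 2, y - span / 2, x + span / 2, y + span / 2)

def update_cycle(current_pos, fetch_map, classify, planned_route=None):
    """ST203-ST208: assemble everything the display step would draw."""
    view = center_view_on(current_pos)   # ST204: set the display range
    map_data = fetch_map(view)           # ST205: map data for range + vicinity
    sections = classify(map_data)        # ST206: visible/invisible sections
    return {                             # ST207-ST208: map, route, own icon
        "view": view,
        "map": map_data,
        "sections": sections,
        "route": planned_route,
        "icon_at": current_pos,
    }
```

Repeating `update_cycle` each time a new position arrives reproduces the loop back to step ST203.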
When the user scrolls the map, for example, the display range of the map may be set independently of the current position of the own vehicle, as in the first embodiment.
In the present embodiment, an example has been shown in which the navigation system 30 is mounted on a vehicle in which the user rides; however, when the navigation system 30 is realized on the user's smartphone, the vehicle may be public transportation such as a bus. The vehicle in which the user rides is also not limited to an automobile and may be, for example, a railway train. When the vehicle is a railway train, the map display control device 10 classifies the track, which is the planned travel route of the train, into specific feature visible sections and specific feature invisible sections, and displays on the display device 22 a map showing those sections in mutually different display modes.
[Modification examples]
For example, as shown in FIG. 10, the display mode of the specific feature visible sections on the planned travel route of the own vehicle (road RD1) and the display mode of the specific feature visible sections on the other roads (roads RD2 and RD3) may be made different from each other.
Further, the section classification unit 13 may perform the section classification process only on the roads included in the planned route of the own vehicle, among the roads included in the map data acquired by the map data acquisition unit 12. In that case, as shown in FIG. 11, the display control unit 14 displays the specific feature visible sections and the specific feature invisible sections on the planned travel route of the own vehicle (road RD1) in mutually different display modes, while displaying the other roads (roads RD2 and RD3) in a uniform display mode.
Further, as shown in FIG. 12, a semi-transparent pattern (hereinafter referred to as a "fan pattern") may be displayed in the roughly triangular or roughly fan-shaped area enclosed by the two straight lines connecting the position of the specific feature to both ends of a specific feature visible section and by the road line of that visible section. Displaying such a fan pattern makes the positional relationship between the specific feature visible section and the specific feature easier to understand. Further, as shown in FIGS. 13 and 14, the fan pattern may be displayed only for the specific feature visible section in which the own vehicle is located. This makes it easier to understand the positional relationship among the specific feature visible section, the specific feature, and the own vehicle.
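In the simplest case, the fan pattern of FIG. 12 can be approximated by the triangle joining the feature position to the two endpoints of the visible section. A small geometric sketch, with illustrative 2-D map coordinates and function names that are not from the publication:

```python
# Hypothetical sketch: approximate the "fan pattern" of FIG. 12 by the
# triangle (feature, section start, section end). Coordinates are plain
# 2-D map units for illustration.

def fan_polygon(feature, section_start, section_end):
    """Vertices of the triangle between the feature and the visible section."""
    return [feature, section_start, section_end]

def polygon_area(pts):
    """Shoelace formula; useful e.g. to skip drawing degenerate fans."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

A renderer could fill this polygon semi-transparently; a curved road line would simply add the intermediate shape points of the visible section between the two endpoints.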
Further, when the user's current position is within a specific feature visible section, the display control unit 14 may display, on part of the screen of the display device 22 on which the map is shown, a supplementary image indicating the direction in which the specific feature can be seen (hereinafter referred to as an "additional image"). For example, FIG. 15 shows an example in which an additional image 41, a three-dimensional map whose viewpoint is the user's position, is displayed on part of the map screen. If the user's current position is within the specific feature visible section, the three-dimensional map of the additional image 41 includes an image of the specific feature (the pagoda PG), as shown in FIG. 15, so the additional image 41 can indicate the direction in which the specific feature is visible. In the additional image 41 of FIG. 15, an elliptical image 411 emphasizing the position of the specific feature is superimposed on the three-dimensional map. The additional image 41 may also include text indicating the distance and direction from the user's current position to the specific feature. When the user is the driver of a vehicle, a three-dimensional map whose viewpoint is the user's position, that is, the position of the driver's seat, is also called a "cruising view" map.
Further, when the user is aboard a vehicle (the own vehicle), a live-action image of the direction of the specific feature captured by a camera mounted on the own vehicle may be used as the additional image 41 instead of the three-dimensional map. If the current position of the own vehicle is within the specific feature visible section, the specific feature appears in the live-action image serving as the additional image 41, so the additional image 41 can indicate the direction in which the specific feature is visible. An image emphasizing the position of the specific feature (corresponding to the elliptical image 411 in the additional image 41 of FIG. 15) may also be superimposed on the live-action image.
FIG. 16 shows an example in which an additional image 42 indicating the direction of the specific feature from the vehicle in which the user rides (the own vehicle) is displayed on part of the map screen. The additional image 42 of FIG. 16 represents the direction of the specific feature from the own vehicle by a bird's-eye view image 421 of the own vehicle and a figure 422 imitating the beam of a searchlight pointing toward the specific feature.
The additional image indicating the direction in which the specific feature can be seen may also be displayed by a head-up display on a screen the user looks through, so that it is superimposed on the actual scenery seen by the user. For example, while the vehicle in which the user rides (the own vehicle) is traveling in a specific feature invisible section, the specific feature (the pagoda PG) is not included in the scenery seen through the windshield 60 of the own vehicle, as shown in FIG. 17. However, when the own vehicle enters a specific feature visible section, the pagoda PG, which is the specific feature, becomes visible through the windshield 60, as shown in FIG. 18. At this time, the display control unit 14 may control a head-up display (not shown) mounted on the own vehicle to display, on the windshield 60 serving as the screen, an additional image 43 indicating the direction in which the pagoda PG can be seen, as shown in FIG. 18. The additional image 43 is preferably displayed so as to overlap the position at which the driver sees the pagoda PG.
When the head-up display has a function of controlling the apparent distance of the displayed image, that is, the sense of distance (depth) of the image as seen by the user, the sense of distance of the additional image may be changed according to the distance from the user to the specific feature. For example, when the distance from the user to the specific feature is long, the additional image may be made to appear far from the user, and as that distance becomes shorter, the additional image may be made to appear closer to the user.
The additional image may also be displayed before the user enters the specific feature visible section, for the purpose of notifying the user in advance that the specific feature is about to come into view. For example, at a timing before the state of FIG. 17 changes to the state of FIG. 18, the display control unit 14 may control the head-up display to display on the windshield 60 an additional image 44 announcing that the specific feature (the pagoda PG) is about to come into view, as shown in FIG. 19. The additional image 44 is preferably displayed so as to overlap the position at which the driver will first see the pagoda PG.
The additional images 41 and 42 shown in FIGS. 15 and 16 may likewise be displayed before the user can actually see the specific feature, for the purpose of notifying the user in advance that the specific feature is about to come into view.
Many navigation systems have a route simulation function that moves an icon indicating the user's virtual position along the planned route on the map. The additional image 41 shown in FIG. 15 (the three-dimensional map whose viewpoint is the user's position) is also applicable to route simulation. That is, during a route simulation, when the icon indicating the user's virtual position is within a specific feature visible section, the additional image 41 may be displayed on part of the route simulation screen.
When the map display control device 10 is mounted on a vehicle (the own vehicle) as in the second embodiment, the result of the section classification process by the section classification unit 13 (that is, the information on the specific feature visible sections and the specific feature invisible sections) may be reflected in the route guidance or automated driving control of the own vehicle. For example, when the map data stored in the map data storage device 21 is high-precision map data including position information for each lane of a road, the section classification unit 13 may perform the section classification process for each lane, and the map display control device 10 may provide the result (for example, information on the specific feature visible sections of each lane, or information on the lane that maximizes the length of the specific feature visible sections) to the navigation system 30 of the own vehicle or to an automated driving system (not shown).
For example, the navigation system 30 that has acquired the result of the section classification process can guide the user to a lane that lengthens the distance over which the own vehicle travels in a specific feature visible section. The automated driving system of the own vehicle that has acquired the result may likewise change lanes automatically so as to lengthen that distance. Further, when searching for a route from the current position to the destination, the navigation system 30 may be enabled to search for a route with longer specific feature visible sections. This lengthens the time during which the user can see the specific feature.
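Given per-lane totals from a lane-level section classification, the lane recommendation reduces to a maximization. A minimal sketch, with illustrative lane identifiers and data shapes that are assumptions, not from the publication:

```python
# Hypothetical sketch: pick the lane that keeps the specific feature
# in view the longest, given the total visible-section length per lane
# (in meters) produced by a lane-level section classification.

def best_lane(visible_length_by_lane: dict) -> str:
    """Return the lane id whose total visible-section length is greatest."""
    return max(visible_length_by_lane, key=visible_length_by_lane.get)
```

A route search biased toward visible sections could use the same quantity as a (negative) cost term when comparing candidate routes.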
<Embodiment 3>
If the user of the map display control device 10 is a pedestrian, the user can freely look around, so the user's visual field range is not restricted. However, if the user of the map display control device 10 is the driver of a vehicle, the user must face forward while driving, so the user's visual field range is restricted. The third embodiment proposes a map display control device 10 that performs the section classification process taking the user's visual field range into account.
FIG. 20 shows an example of the user's visual field range when the user is the driver of a vehicle (the own vehicle). In FIG. 20, straight ahead of the own vehicle is taken as 0°, angles to the right of straight ahead are positive, and angles to the left are negative. If the limit angle up to which the driver can see to the right is α_R (> 0°) and the limit angle up to which the driver can see to the left is α_L (< 0°), the user's visual field range Θ is expressed as α_L ≤ Θ ≤ α_R. That is, α_L is a parameter representing the width of the visual field to the left of straight ahead, and α_R is a parameter representing the width of the visual field to the right. In the third embodiment, it is assumed that α_R = 90° and α_L = -90°; that is, the user's visual field range Θ is the range -90° ≤ Θ ≤ 90°.
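The visibility test of FIG. 20 amounts to taking the bearing of the specific feature relative to the vehicle's forward direction and comparing it against the limits α_L ≤ Θ ≤ α_R. A minimal sketch, assuming a simple 2-D coordinate frame and a heading convention (0° ahead, right positive) chosen for illustration:

```python
import math

# Hypothetical sketch of the field-of-view test of FIG. 20.
# Positions are (x, y) map coordinates; heading_deg is the vehicle's
# forward direction measured clockwise from the +y axis (an assumption).

def relative_bearing(vehicle_pos, heading_deg, feature_pos):
    """Angle of the feature as seen from the vehicle, in (-180, 180]."""
    dx = feature_pos[0] - vehicle_pos[0]
    dy = feature_pos[1] - vehicle_pos[1]
    absolute = math.degrees(math.atan2(dx, dy))  # 0 deg = +y, right positive
    theta = (absolute - heading_deg + 180.0) % 360.0 - 180.0
    return 180.0 if theta == -180.0 else theta

def in_field_of_view(theta, alpha_l=-90.0, alpha_r=90.0):
    """True if the bearing lies within alpha_l <= theta <= alpha_r."""
    return alpha_l <= theta <= alpha_r
```

At the point P2 of FIG. 2, where the pagoda is exactly at -90°, the feature sits on the boundary of the default range; beyond P2 its bearing leaves the range and the point falls in an invisible section.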
The configuration and operation of the map display control device 10 of the third embodiment are basically the same as those of the second embodiment. In the third embodiment, however, the section classification unit 13 performs the section classification process taking into account the user's visual field range at each point on the road. The user's visual field range is set according to the orientation of the own vehicle when it passes each point on the road.
The operation of the map display control device 10 according to the third embodiment will be described below with specific examples of the map that the display control unit 14 displays on the display device 22. Here, the section classification unit 13 performs the section classification process only on the roads included in the planned route of the own vehicle, among the roads included in the map data acquired by the map data acquisition unit 12.
In the map illustrated in FIG. 2, the point P2 is the point at which the pagoda PG, the specific feature, is seen directly beside the road RD1 (in the -90° direction). Therefore, when the user's visual field range Θ is the range -90° ≤ Θ ≤ 90°, the section classification unit 13 of the third embodiment classifies the section between the points P2 and P3 (the section P2-P3), in which the direction of the pagoda PG falls outside the user's visual field range, as a specific feature invisible section. That is, for the road RD1, which is the planned travel route, the section classification unit 13 classifies the sections P1-P2 and P4-P5 as specific feature visible sections, in which the specific feature (the pagoda PG) is visible, and classifies the sections P0-P1, P2-P4, and P5-P6 as specific feature invisible sections, in which the specific feature is not visible.
Therefore, in the map that the display control unit 14 displays on the display device 22, the display mode of the sections P1-P2 and P4-P5, classified as specific feature visible sections, differs from that of the sections P0-P1, P2-P4, and P5-P6, classified as specific feature invisible sections, as shown in FIG. 21.
The third embodiment assumed α_R = 90° and α_L = -90°, but the magnitudes (absolute values) of α_R and α_L may be arbitrary values, and the magnitude of α_R may be set independently of the magnitude of α_L. That is, the visual field range need not be left-right symmetric; for example, the visible range on the driver's-seat side may be made somewhat wider. Specifically, if the own vehicle has its driver's seat on the right side (a so-called right-hand-drive vehicle), the driver's visual field range Θ may be set to -60° ≤ Θ ≤ 75°, and if the own vehicle has its driver's seat on the left side (a so-called left-hand-drive vehicle), it may be set to -75° ≤ Θ ≤ 60°. The magnitudes of α_R and α_L may also be made changeable by the user according to preference.
Further, the 0° direction need not be straight ahead of the own vehicle and may change according to, for example, the shape of the road ahead. For example, the tangential direction of the road ahead of the own vehicle may be taken as the 0° direction. Also, although it depends on the travel speed, a driver generally looks at the road several tens of meters ahead, so the direction of a point on the road several tens of meters ahead of the own vehicle may be taken as the 0° direction.
In FIG. 20, the driver's visual field range was defined as the range the driver can see directly. However, since the driver can also indirectly see behind the vehicle with rear-view mirrors (including "electronic mirrors", which realize the rear-view mirror electronically), the range behind the vehicle visible to the driver through the rear-view mirrors may also be included in the visual field range.
Since the width of the driver's visual field range varies with the driving load on the driver, the section classification unit 13 may change the set values of α_R and α_L in consideration of that load. That is, the width of the user's visual field range at each point on the road may be set to a different value at each point.
The width of the user's visual field range may also be set according to the driving environment when the vehicle passes a point, or according to the road attributes at that point. In general, the higher the speed limit of a road, the higher the travel speed of the vehicle tends to be and the greater the driving load on the driver is considered to be, so the section classification unit 13 may set the user's visual field range narrower for roads with higher speed limits. Similarly, since the driving load on the driver is considered to be greater on winding or narrow roads, the section classification unit 13 may set the user's visual field range narrower on roads with large curvature or small width.
The driving load on the driver is also considered to be greater near intersections and merging or branching points. It further varies with the type of road (expressway, motorway, ordinary road, and so on), the number of pedestrians, and the amount of traffic, so the user's visual field range may be varied according to those conditions. For example, from traffic information obtained from VICS (Vehicle Information and Communication System; "VICS" is a registered trademark) or the like, the congestion of pedestrians or vehicles and the average vehicle speed for each time period, day of the week, or season may be predicted, and the user's visual field range at each point varied accordingly. The driving load is also considered to be greater when visibility is poor in bad weather, so the user's visual field range at each point may also be varied based on weather information.
The width of the user's visual field range may also be set according to the automated driving level at which the vehicle passes a point. Here, the definition of the automation level of automated driving of an automobile (the automated driving level) will be explained. According to SAE (Society of Automotive Engineers) International J3016 (September 2016) and its Japanese reference translation JASO TP18004 (February 2018), the automated driving levels of an automated driving system are defined as follows.
Level 0 (No Driving Automation): the driver performs part or all of the dynamic driving task.
Level 1 (Driver Assistance): the system performs either the longitudinal or the lateral vehicle motion control subtask within a limited area.
Level 2 (Partial Driving Automation): the system performs both the longitudinal and the lateral vehicle motion control subtasks within a limited area.
Level 3 (Conditional Driving Automation): the system performs the entire dynamic driving task within a limited area; when continued operation is difficult, the driver responds appropriately to intervention requests and the like from the system.
Level 4 (High Driving Automation): the system performs the entire dynamic driving task, and the response when continued operation is difficult, within a limited area.
Level 5 (Full Driving Automation): the system performs the entire dynamic driving task, and the response when continued operation is difficult, without limitation (that is, not within a limited area).
Here, the "dynamic driving task" refers to all of the operational and tactical functions (excluding strategic functions such as trip planning and selection of waypoints) that must be performed in real time when operating a vehicle in road traffic. The "limited area" refers to the specific conditions (including geographic constraints, road-surface constraints, environmental constraints, traffic constraints, speed constraints, temporal constraints, and the like) under which the system or its functions are designed to operate.
 When the automated driving levels are defined as above, the higher the level, the lighter the driver's driving load. Accordingly, in the automated driving plan of the host vehicle, the user's visual field range may be set wide in sections where the automated driving level is raised, and narrow in sections where it is lowered. In particular, in sections where the automated driving level is 4 or higher, the driver is released from the driving operation, so the user's visual field range may be omnidirectional (−180° ≤ Θ ≤ 180°). Likewise, when the host vehicle is operated remotely by a remote driving service, the driver is released from the driving operation, so the user's visual field range may be omnidirectional.
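The correspondence just described can be sketched roughly as follows (this is not part of the embodiment; the function name and the concrete angle values for levels 0 to 3 are assumptions for illustration, with only the omnidirectional range for level 4 and above, and for remote driving, taken from the description):

```python
# Minimal sketch: derive the user's visual field range (in degrees, relative
# to the vehicle's front) from the automated driving level planned for a
# section. Angle values for levels 0-3 are illustrative assumptions.
def visual_field_range(automation_level: int, remote_driving: bool = False):
    """Return (left_limit, right_limit) of the user's visual field range."""
    if remote_driving or automation_level >= 4:
        # Driver is released from the driving operation: omnidirectional.
        return (-180.0, 180.0)
    # Lower levels: a wider range as the driving load decreases.
    half_width = {0: 30.0, 1: 45.0, 2: 60.0, 3: 90.0}[automation_level]
    return (-half_width, half_width)
```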
 [Modification]
 In the above description, only one type of visual field range of the user is set, but a plurality of visual field ranges of different widths may be set. For example, FIG. 22 shows an example in which the visual field range Θ1 of visibility level 1 is defined as αL ≤ Θ1 ≤ αR, and the visual field range Θ2 of visibility level 2 is defined as omnidirectional, that is, −180° ≤ Θ2 ≤ 180°. Here, the visibility level represents the degree of ease of visual recognition; the visual field range of visibility level 1 is a range that the user can view more easily than the visual field range of visibility level 2.
 In this case, the section classification unit 13 performs the section classification process taking the plurality of visual field ranges into account. That is, the section classification unit 13 obtains a specific feature visible section corresponding to each of the plurality of visual field ranges. The display control unit 14 then displays the specific feature visible sections corresponding to the respective visual field ranges on the map in mutually different display modes.
 For example, the visual field range Θ1 of visibility level 1 is defined as −90° ≤ Θ1 ≤ 90°, and the visual field range Θ2 of visibility level 2 as −180° ≤ Θ2 ≤ 180°. In the map illustrated in FIG. 2, the point P2 is a point from which the pagoda PG, the specific feature, is seen directly beside the road RD1 (in the direction of −90°). Accordingly, on the road RD1, which is the planned travel route, the section classification unit 13 classifies the sections P1-P2 and P4-P5, in which the pagoda PG is seen within the visual field range of visibility level 1, as level 1 specific feature visible sections; classifies the section P2-P3, in which the pagoda PG falls outside the visual field range of visibility level 1 but within the visual field range of visibility level 2, as a level 2 specific feature visible section; and classifies the sections P0-P1, P3-P4, and P5-P6, from which the pagoda PG is not visible, as specific feature invisible sections.
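The two-level classification of this example can be sketched as follows (illustrative only; the description does not specify an implementation, and the function names and the representation of occlusion by None are assumptions):

```python
# Minimal sketch of the two-level classification: each road point carries the
# bearing theta (degrees, relative to the vehicle's front) at which the
# specific feature appears, or None if the feature is not visible from there.
# Visibility level 1 uses -90 <= theta <= 90; level 2 is omnidirectional.
def classify_point(theta):
    if theta is None:
        return "invisible"          # specific feature invisible section
    if -90.0 <= theta <= 90.0:
        return "level1_visible"     # easily visible range (level 1)
    return "level2_visible"         # visible only in the omnidirectional range

def classify_route(bearings):
    """Classify every point of a planned route; bearings is a list of theta/None."""
    return [classify_point(t) for t in bearings]
```

Consecutive points with the same label would then form the sections (e.g. P1-P2, P2-P3) drawn on the map in different display modes.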
 Accordingly, in the map that the display control unit 14 causes the display device 22 to display, as shown in FIG. 23, the display mode of the sections P1-P2 and P4-P5 classified as level 1 specific feature visible sections, the display mode of the section P2-P3 classified as a level 2 specific feature visible section, and the display mode of the sections P0-P1, P3-P4, and P5-P6 classified as specific feature invisible sections are different from one another.
 Alternatively, as shown in FIG. 24, fan-shaped patterns of mutually different colors may be attached to the sections P1-P2 and P4-P5 classified as level 1 specific feature visible sections and to the section P2-P3 classified as a level 2 specific feature visible section. In this case, as described with reference to FIGS. 13 and 14, the fan-shaped pattern may be displayed only for the section in which the host vehicle is located.
 The method of defining the plurality of visual field ranges is not limited to that shown in FIG. 22. For example, the range directly visible to the driver may be defined as the level 1 visual field range, and the range indirectly visible to the driver through a rearview mirror (including an electronic mirror) may be defined as the level 2 visual field range.
 <Embodiment 4>
 In the above embodiments, it was assumed that a single feature is designated as the specific feature. In Embodiment 4, the user can designate, to the map display control device 10, a plurality of features having a common attribute as the specific feature. For example, the user can designate as the specific feature not only one particular pagoda but all buildings belonging to the category of pagodas, or all buildings belonging to shrines and temples. The attribute of the features may be anything, for example, convenience stores or shops of a specific brand.
 The configuration and operation of the map display control device 10 of Embodiment 4 are basically the same as those of Embodiment 2. In Embodiment 4, however, the section classification unit 13 performs, for each of the plurality of features, the section classification process of classifying the road into specific feature visible sections and specific feature invisible sections. The display control unit 14 then changes the display mode of a specific feature visible section according to the number of specific features visible from that section.
 Hereinafter, the operation of the map display control device 10 according to Embodiment 4 will be described with specific examples of the map that the display control unit 14 displays on the display device 22. Here, among the roads included in the map data acquired by the map data acquisition unit 12, the section classification unit 13 subjects only the road included in the planned passage route of the host vehicle to the section classification process.
 Here, it is assumed that the user has designated all buildings belonging to the category of pagodas as the specific features, and that the map data acquisition unit 12 has acquired, as the map data of the map to be displayed on the display device 22, the map data of a map such as that of FIG. 25. The map of FIG. 25 is almost the same as FIG. 2, but on the hillside of the mountain M2 there is another pagoda PG2 in addition to the pagoda PG. The pagoda PG2 is visible from the road RD1 in the section between the points PP1 and PP2 (section PP1-PP2).
 In this case, the section classification unit 13 classifies, on the road RD1 that is the planned travel route of the host vehicle, the sections P1-P3 and P4-P5 as specific feature visible sections for the pagoda PG, and the sections P0-P1, P3-P4, and P5-P6 as specific feature invisible sections for the pagoda PG. Further, the section classification unit 13 classifies the section PP1-PP2 as a specific feature visible section for the pagoda PG2, and the sections P0-PP1 and PP2-P6 as specific feature invisible sections for the pagoda PG2. In other words, the section classification unit 13 classifies the sections P0-PP1, P3-P4, and P5-P6 as specific feature invisible sections from which neither pagoda PG nor PG2 is visible; the section PP1-P1 as a specific feature visible section from which only the pagoda PG2 is visible; the section P1-PP2 as a specific feature visible section from which both pagodas PG and PG2 are visible; and the sections PP2-P3 and P4-P5 as specific feature visible sections from which only the pagoda PG is visible.
 When causing the display device 22 to display the map, the display control unit 14 gives mutually different display modes, for example as shown in FIG. 26, to the specific feature invisible sections from which neither pagoda PG nor PG2 is visible (sections P0-PP1, P3-P4, and P5-P6), the specific feature visible sections from which only one of the pagodas PG and PG2 is visible (sections PP1-P1, PP2-P3, and P4-P5), and the specific feature visible section from which both pagodas PG and PG2 are visible (section P1-PP2). This allows the user to recognize how many specific features (pagodas) are visible from each section.
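The selection of a display mode from the number of visible features can be sketched as follows (illustrative only; the section table reproduces the example of FIG. 25/26, while the function and the mode names are assumptions):

```python
# Minimal sketch: choose a display mode for each road section from the number
# of designated specific features visible there. The mode names are
# illustrative assumptions; the sections and visible pagodas follow the
# example of FIGS. 25 and 26.
def display_mode(visible_features):
    n = len(visible_features)
    if n == 0:
        return "invisible_style"
    return f"visible_style_{n}"     # e.g. a stronger highlight for more features

sections = {
    "P0-PP1": [], "PP1-P1": ["PG2"], "P1-PP2": ["PG", "PG2"],
    "PP2-P3": ["PG"], "P3-P4": [], "P4-P5": ["PG"], "P5-P6": [],
}
modes = {name: display_mode(feats) for name, feats in sections.items()}
```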
 Alternatively, as shown in FIG. 27, mutually different fan-shaped patterns may be attached to the specific feature visible sections of the pagoda PG (sections P1-P3 and P4-P5) and the specific feature visible section of the pagoda PG2 (section PP1-PP2). In this case, a section where the two fan-shaped patterns overlap is a section from which the two specific features are visible. The user can therefore recognize the number of specific features visible from each section by the number of overlapping fan-shaped patterns.
 The additional images 41 and 42 described with reference to FIGS. 15 and 16 may also be applied to Embodiment 4. For example, when the additional image 41, a three-dimensional map whose viewpoint is the user's position, is displayed on a part of the map screen, both the pagoda PG and the pagoda PG2 are shown on the three-dimensional map in the specific feature visible section from which both are visible, as shown in FIG. 28. Similarly, when the additional image 42 indicating the direction of a specific feature from the vehicle on which the user is riding (the host vehicle) is displayed on a part of the map screen, two figures 422 imitating the light of a searchlight indicating the directions of the specific features are displayed in that section, as shown in FIG. 29. Although not illustrated, the additional images 43 and 44 displayed on the head-up display, described with reference to FIGS. 17, 18, and 19, may also be applied to Embodiment 4.
 <Hardware Configuration Example>
 FIGS. 30 and 31 are diagrams each showing an example of the hardware configuration of the map display control device 10. The functions of the components of the map display control device 10 shown in FIG. 1 or FIG. 6 are realized by, for example, the processing circuit 50 shown in FIG. 30. That is, the map display control device 10 includes the processing circuit 50 for: storing a specific feature designated by the user; acquiring map data including information on the specific feature; performing, based on the map data, a section classification process of classifying a road included in the map data into a specific feature visible section, which is a section from which the specific feature is visible, and a specific feature invisible section, which is a section from which the specific feature is not visible; and displaying, on a display device, based on the map data and the result of the section classification process, a map showing the specific feature visible section and the specific feature invisible section of the road in mutually different display modes. The processing circuit 50 may be dedicated hardware, or may be configured using a processor that executes a program stored in a memory (also called a central processing unit (CPU), a processing unit, an arithmetic unit, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor)).
 When the processing circuit 50 is dedicated hardware, the processing circuit 50 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The functions of the components of the map display control device 10 may each be realized by an individual processing circuit, or may be realized collectively by a single processing circuit.
 FIG. 31 shows an example of the hardware configuration of the map display control device 10 when the processing circuit 50 is configured using a processor 51 that executes a program. In this case, the functions of the components of the map display control device 10 are realized by software or the like (software, firmware, or a combination of software and firmware). The software or the like is described as a program and stored in a memory 52. The processor 51 realizes the function of each unit by reading and executing the program stored in the memory 52. That is, the map display control device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: a process of storing a specific feature designated by the user; a process of acquiring map data including information on the specific feature; a process of performing, based on the map data, a section classification process of classifying a road included in the map data into a specific feature visible section, which is a section from which the specific feature is visible, and a specific feature invisible section, which is a section from which the specific feature is not visible; and a process of displaying, on a display device, based on the map data and the result of the section classification process, a map showing the specific feature visible section and the specific feature invisible section of the road in mutually different display modes. In other words, this program can also be said to cause a computer to execute the procedures and methods of operation of the components of the map display control device 10.
 Here, the memory 52 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive), a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, or a DVD (Digital Versatile Disc) and its drive device; or any storage medium to be used in the future.
 The configuration in which the functions of the components of the map display control device 10 are realized by either hardware or software or the like has been described above. However, the configuration is not limited to this; some components of the map display control device 10 may be realized by dedicated hardware and other components by software or the like. For example, the functions of some components can be realized by the processing circuit 50 as dedicated hardware, and the functions of other components can be realized by the processing circuit 50 as the processor 51 reading and executing a program stored in the memory 52.
 As described above, the map display control device 10 can realize each of the functions described above by hardware, software or the like, or a combination thereof.
 In the present invention, within the scope of the invention, the embodiments may be freely combined, and each embodiment may be modified or omitted as appropriate.
 Although the present invention has been described in detail, the above description is in all aspects illustrative, and the present invention is not limited thereto. It is understood that innumerable modifications not illustrated can be devised without departing from the scope of the present invention.
 10 map display control device, 11 specific feature storage unit, 12 map data acquisition unit, 13 section classification unit, 14 display control unit, 15 position information acquisition unit, 16 route information acquisition unit, 21 map data storage device, 22 display device, 23 operation input device, 24 positioning device, 25 planned travel route setting unit, 30 navigation system, 41-44 additional images, 51 processor, 52 memory, 60 windshield.

Claims (21)

  1.  A map display control device comprising:
     a specific feature storage unit that stores a specific feature designated by a user;
     a map data acquisition unit that acquires map data including information on the specific feature;
     a section classification unit that performs, based on the map data, a section classification process of classifying a road included in the map data into a specific feature visible section, which is a section from which the specific feature is visible, and a specific feature invisible section, which is a section from which the specific feature is not visible; and
     a display control unit that, based on the map data and a result of the section classification process, causes a display device to display a map showing the specific feature visible section and the specific feature invisible section of the road in mutually different display modes.
  2.  The map display control device according to claim 1, further comprising a position information acquisition unit that acquires information on a position of the user,
     wherein the display control unit displays an icon indicating the position of the user on the map.
  3.  The map display control device according to claim 1, wherein the section classification unit performs the section classification process in consideration of a visual field range of the user at each point on the road.
  4.  The map display control device according to claim 3, wherein the visual field range of the user at each point on the road is set according to an orientation of a vehicle on which the user is riding when the vehicle passes through that point.
  5.  The map display control device according to claim 4, wherein a width on a right side and a width on a left side of the front of the vehicle in the visual field range are set independently of each other.
  6.  The map display control device according to claim 3, wherein the width of the visual field range of the user at each point on the road can be set to a different value at each point.
  7.  The map display control device according to claim 4, wherein the width of the visual field range of the user at each point on the road is set according to a traveling environment when the vehicle passes through that point or a road attribute of that point.
  8.  The map display control device according to claim 4, wherein the width of the visual field range of the user at each point on the road is set according to an automated driving level at which the vehicle passes through that point.
  9.  The map display control device according to claim 1, configured so that the user can designate, as the specific feature, a single feature or a plurality of features having a common attribute.
  10.  The map display control device according to claim 1, wherein, when a plurality of the specific features are stored in the specific feature storage unit, the display mode of a specific feature visible section is changed according to the number of the specific features visible from that specific feature visible section.
  11.  The map display control device according to claim 2, wherein the display control unit displays, on a part of a display screen of the map, an additional image indicating a direction in which the specific feature is visible.
  12.  The map display control device according to claim 11, wherein the additional image is an image of a three-dimensional map whose viewpoint is the position of the user.
  13.  The map display control device according to claim 11, wherein the additional image is an image indicating the direction of the specific feature from a vehicle on which the user is riding.
  14.  The map display control device according to claim 11, wherein the additional image is a photographed image taken in the direction of the specific feature.
  15.  The map display control device according to claim 2, wherein the display control unit uses a head-up display to display, on a screen through which the user can see, an additional image indicating the direction in which the specific feature is visible.
  16.  The map display control device according to claim 2, further comprising a route information acquisition unit that acquires information on a planned passage route of the user,
     wherein the section classification unit subjects, among the roads included in the map data, the road included in the planned passage route to the section classification process.
  17.  The map display control device according to claim 16, wherein, in a route simulation in which an icon indicating a virtual position of the user is moved along the planned passage route on the map, when the icon is within the specific feature visible section, an image of a three-dimensional map that includes an image of the specific feature and whose viewpoint is the virtual position of the user is displayed on a part of a route simulation screen.
  18.  The map display control device according to claim 16, wherein the planned passage route is a planned travel route of a vehicle on which the user is riding,
     the section classification unit performs the section classification process for each lane of the road, and
     when the road of the planned travel route has a plurality of lanes, information on the specific feature visible section for each lane, or information on a lane that can lengthen the distance of the specific feature visible section, is provided to a navigation system or an automated driving system of the vehicle.
  19.  The map display control device according to claim 3, wherein the section classification unit sets a plurality of visual field ranges of different widths and performs the section classification process, and
     the display control unit displays the specific feature visible sections corresponding to the respective visual field ranges on the map in mutually different display modes.
  20.  The map display control device according to claim 3, wherein the visual field range of the user also includes a range visible to the user through a rearview mirror or an electronic mirror of a vehicle on which the user is riding.
  21.  A map display control method in which:
    a specific feature storage unit of a map display control device stores a specific feature designated by a user;
    a map data acquisition unit of the map display control device acquires map data including information on the specific feature;
    a section classification unit of the map display control device performs, based on the map data, a section classification process of classifying a road included in the map data into a specific feature visible section, which is a section from which the specific feature is visible, and a specific feature invisible section, which is a section from which the specific feature is not visible; and
    a display control unit of the map display control device displays, on a display device, based on the map data and the result of the section classification process, a map showing the specific feature visible section and the specific feature invisible section of the road in mutually different display modes.
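    The steps of the claimed method can be sketched end to end as follows. This is an illustrative toy only, with all names hypothetical and line-of-sight reduced to a one-dimensional obstacle check between a road point and the feature; the actual device would perform the test against real map geometry.

```python
# Hypothetical end-to-end sketch of the method: classify road points
# into visible/invisible sections for a stored feature, then emit a
# display mode per point.

def line_of_sight_clear(road_point, feature, obstacles):
    """True if no obstacle lies strictly between the road point and
    the feature (1-D stand-in for a real line-of-sight test)."""
    lo, hi = sorted((road_point, feature))
    return not any(lo < obst < hi for obst in obstacles)

def classify_road(road_points, feature, obstacles):
    """Section classification: pair each road point with whether the
    user-designated feature is visible from it."""
    return [(p, line_of_sight_clear(p, feature, obstacles)) for p in road_points]

def display_modes(classified):
    """Display control: visible sections drawn solid, invisible ones
    dashed (hypothetical display modes)."""
    return ["solid" if visible else "dashed" for _, visible in classified]
```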
PCT/JP2019/039593 2019-10-08 2019-10-08 Map display control device and map display control method WO2021070238A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021550968A JP7275294B2 (en) 2019-10-08 2019-10-08 MAP DISPLAY CONTROL DEVICE AND MAP DISPLAY CONTROL METHOD
PCT/JP2019/039593 WO2021070238A1 (en) 2019-10-08 2019-10-08 Map display control device and map display control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/039593 WO2021070238A1 (en) 2019-10-08 2019-10-08 Map display control device and map display control method

Publications (1)

Publication Number Publication Date
WO2021070238A1 2021-04-15

Family

ID=75437339

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/039593 WO2021070238A1 (en) 2019-10-08 2019-10-08 Map display control device and map display control method

Country Status (2)

Country Link
JP (1) JP7275294B2 (en)
WO (1) WO2021070238A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009204392A (en) * 2008-02-27 2009-09-10 Alpine Electronics Inc Navigation apparatus
JP2011107025A (en) * 2009-11-19 2011-06-02 Clarion Co Ltd Navigation system
JP2011203149A (en) * 2010-03-26 2011-10-13 Zenrin Co Ltd Guide system
JP2014037172A (en) * 2012-08-13 2014-02-27 Alpine Electronics Inc Display controller and display control method for head-up display
JP2014089140A (en) * 2012-10-31 2014-05-15 Aisin Aw Co Ltd Intersection guide system, method and program
JP2015102520A (en) * 2013-11-28 2015-06-04 アイシン・エィ・ダブリュ株式会社 On-vehicle apparatus control system, on-vehicle apparatus control method, and on-vehicle apparatus control program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5095321B2 (en) * 2007-09-11 2012-12-12 クラリオン株式会社 Sightseeing information display device, navigation device, and sightseeing information display system
JP5916690B2 (en) * 2013-11-21 2016-05-11 クラリオン株式会社 Map display device


Also Published As

Publication number Publication date
JPWO2021070238A1 (en) 2021-04-15
JP7275294B2 (en) 2023-05-17

Similar Documents

Publication Publication Date Title
US20210004013A1 (en) Lane-centric road network model for navigation
US11067404B2 (en) Vehicle usage-based pricing alerts
JP4921462B2 (en) Navigation device with camera information
EP3260817B1 (en) Method, apparatus and computer program product for a navigation user interface
US9874456B2 (en) Method, apparatus and computer program product for providing a destination preview
US7920966B2 (en) Navigation apparatuses, methods, and programs
US20080167811A1 (en) Navigation device and method for displaying navigation information
US7974781B2 (en) Navigation apparatuses, methods, and programs for generation of a 3D movie
US8515664B2 (en) Digital map signpost system
EP1681538A1 (en) Junction view with 3-dimensional landmarks for a navigation system for a vehicle
US6591190B2 (en) Navigation system
US11186293B2 (en) Method and system for providing assistance to a vehicle or driver thereof
CN110196056B (en) Method and navigation device for generating a road map for automatic driving vehicle navigation and decision-making
JP2009500765A (en) Method for determining traffic information and apparatus configured to perform the method
CN110874063A (en) Method, apparatus and computer program product for differentiated policy enforcement for roads
WO2021070238A1 (en) Map display control device and map display control method
US11479264B2 (en) Mobile entity interaction countdown and display
JP6542085B2 (en) INFORMATION PROCESSING APPARATUS, METHOD, AND PROGRAM
US20230331237A1 (en) Method and apparatus for generating personalized splines
US20230303111A1 (en) Autonomous vehicle navigation using non-connected map fragments
US20240125939A1 (en) Vehicle mirror assembly with a movable sensor
JP2011022152A (en) Navigation device
JP2007172016A (en) Map display device mounted inside vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948579

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021550968

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948579

Country of ref document: EP

Kind code of ref document: A1