WO2021095198A1 - Meetup assistance device and meetup assistance method - Google Patents

Meetup assistance device and meetup assistance method

Info

Publication number
WO2021095198A1
Authority
WO
WIPO (PCT)
Prior art keywords
merging
user
point
virtual
support device
Prior art date
Application number
PCT/JP2019/044677
Other languages
French (fr)
Japanese (ja)
Inventor
下谷 光生
直志 宮原
義典 上野
圭作 福田
家田 邦代
祐介 荒井
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to JP2021555718A (JP7282199B2)
Priority to PCT/JP2019/044677 (WO2021095198A1)
Publication of WO2021095198A1

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/09: Arrangements for giving variable traffic instructions
    • G08G 1/123: Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams

Definitions

  • The present invention relates to a merging support device that supports merging (meeting up) between a user and another person (a merging partner).
  • Patent Documents 1 and 2 below propose merging support devices that support merging by displaying a merging point (meeting place) on a map.
  • A system has also been proposed in which a service user can correctly identify a reserved vehicle that is to be the merging partner, by providing the service user's terminal device in advance with an authenticator for authenticating the reserved vehicle.
  • With these techniques, the user can confirm the position of the merging point on the map.
  • However, if the user is not good at reading maps, or if there are no landmarks near the merging point, it is difficult to match the map with the real world, and the user may not be able to easily reach the merging point.
  • The present invention has been made to solve the above problem, and an object of the present invention is to provide a merging support device capable of presenting to the user the location of a merging point in the real world.
  • The merging support device includes a merging point storage unit that stores a merging point between the user and another person, a virtual object storage unit that stores a virtual merging point object, which is an image indicating the position of the merging point, a position information acquisition unit that acquires the user's position, and a display control unit that displays the virtual merging point object at the position corresponding to the merging point, either in the real scene the user sees through a transparent screen or in an image of the real scene around the user displayed on a screen.
  • According to the present invention, it is possible to present to the user a virtual world in which the virtual merging point object exists at the position corresponding to the merging point in the real world. The user can therefore intuitively recognize the position of the merging point in the real world.
  • FIG. 1 is a diagram showing the configuration of the merging support device according to Embodiment 1.
  • FIG. 2 is a diagram showing an example of the real world, and FIG. 3 is a diagram showing an example of the virtual world provided by the merging support device according to Embodiment 1.
  • FIG. 4 is a diagram showing an example of the real scene and the virtual merging point object as seen by the user, and FIG. 5 is a flowchart showing the operation of the merging support device according to Embodiment 1.
  • FIGS. 6 and 7 are diagrams showing examples of the hardware configuration of the merging support device.
  • FIGS. 8 to 18 are diagrams (FIG. 14 being a flowchart) illustrating modifications of the merging support device according to Embodiment 1.
  • FIG. 19 is a diagram showing the configuration of the merging support device according to Embodiment 2, and a further flowchart shows the operation of the merging support device according to Embodiment 2.
  • FIG. 1 is a diagram showing a configuration of a merging support device 10 according to the first embodiment of the present invention.
  • The merging support device 10 of the first embodiment is mounted on a vehicle 100.
  • The merging support device 10 does not have to be permanently installed in the vehicle 100; it may be configured as a portable device that can be brought into the vehicle 100.
  • The user of the merging support device 10 is the driver of the vehicle 100.
  • The "user" in the following description therefore means the driver of the vehicle 100.
  • The merging support device 10 is connected to a head-up display (HUD) 21, a display device 22, an operation input device 23, a positioning device 24, a communication device 25, and a map information storage device 26 included in the vehicle 100.
  • The head-up display 21 and the display device 22 are means by which the merging support device 10 presents various information to the user (the driver of the vehicle 100) as images.
  • The head-up display 21 can display images superimposed on the real scene seen by the user, using the windshield of the vehicle 100 as a transparent screen.
  • The display device 22 is a general display installed on the center panel or instrument panel of the vehicle 100, and displays, for example, the operation screen of the merging support device 10.
  • The display device 22 is composed of, for example, a liquid crystal display or an organic EL (electroluminescence) display.
  • Since an image displayed by the head-up display 21 is superimposed on the real scene, it looks as if it exists in the real world.
  • Hereinafter, an image displayed by the head-up display 21 is referred to as a "virtual object", and the world in which virtual objects are regarded as existing is referred to as the "virtual world".
  • The operation input device 23 is a means by which the user inputs operations to the merging support device 10.
  • The operation input device 23 may be hardware keys such as push buttons, an operation lever, or a keyboard, or software keys displayed on a screen.
  • Software keys serving as the operation input device 23 may be displayed on the display device 22; in that case, the display device 22 and the operation input device 23 may be configured as a single touch panel.
  • The positioning device 24 calculates the position of the vehicle 100 based on positioning signals received from GNSS (Global Navigation Satellite System) satellites such as GPS (Global Positioning System) satellites.
  • The communication device 25 is a means for the merging support device 10 to communicate with the outside of the vehicle 100.
  • The map information storage device 26 is a storage medium that stores map information including location information on roads, POIs (Points of Interest), and the like.
  • The map information storage device 26 may be provided in the navigation system of the vehicle 100. Alternatively, the map information storage device 26 may be configured as a server installed outside the vehicle 100, in which case the merging support device 10 acquires the map information from the map information storage device 26 through the communication device 25.
  • The merging support device 10 includes a communication unit 11, a merging point storage unit 12, a virtual object storage unit 13, a position information acquisition unit 14, a map information acquisition unit 15, and a display control unit 16.
  • The communication unit 11 communicates with the outside of the vehicle 100 via the communication device 25.
  • The merging point storage unit 12 stores the merging point between the user and the other person who is the merging partner.
  • The merging partner may be a pedestrian or a vehicle.
  • By operating the operation input device 23, the user can communicate directly or indirectly with the merging partner via the communication device 25 and arrange a merging point in advance; the determined merging point is stored in the merging point storage unit 12.
  • Here, indirect communication means that the user and the merging partner communicate via another system such as a server (for example, an arbitration system for merging points), and direct communication means that they communicate without going through such a system.
  • The virtual object storage unit 13 stores the image of the virtual merging point object, which is a virtual object indicating the position of the merging point.
  • The position information acquisition unit 14 acquires the position of the vehicle 100 (the user's position) calculated by the positioning device 24. The position information acquisition unit 14 can also calculate the orientation of the vehicle 100 from the history of its position. Further, the position information acquisition unit 14 may perform processing that improves the accuracy of the position of the vehicle 100, such as map matching using the map information, or positioning by autonomous navigation (dead reckoning) using the speed sensor and direction sensor of the vehicle 100. The orientation of the vehicle 100 may also be measured using the direction sensor of the vehicle 100.
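  • As an editorial illustration of the dead-reckoning (autonomous navigation) processing mentioned above, a minimal Python sketch follows; the function name and the flat east/north coordinate frame are assumptions for illustration, not part of the patent.

        import math

        def dead_reckon(pos_xy, heading_deg, speed_mps, dt_s):
            # Advance the last known position using the vehicle's speed and
            # direction sensors, bridging the gaps between GNSS fixes.
            x, y = pos_xy                    # east, north, in metres
            h = math.radians(heading_deg)    # 0 degrees = north, clockwise positive
            return (x + speed_mps * dt_s * math.sin(h),
                    y + speed_mps * dt_s * math.cos(h))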
  • The map information acquisition unit 15 acquires, from the map information storage device 26, map information around the position of the vehicle 100 acquired by the position information acquisition unit 14.
  • Based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 calculates the position (relative position) of the merging point as seen by the user (the driver of the vehicle 100), and uses the head-up display 21 to display the virtual merging point object at the position corresponding to the merging point in the real scene seen by the user. More specifically, the display control unit 16 displays the virtual merging point object at the intersection of the windshield and the straight line connecting the user's eyes and the merging point, so that, as seen by the user, the virtual merging point object appears superimposed on the merging point.
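  • The "intersection of the windshield and the straight line connecting the user's eyes and the merging point" can be computed as a ray-plane intersection. The sketch below is a minimal illustration, assuming the windshield is approximated by a plane in the vehicle coordinate frame and the inputs are NumPy vectors; all names are illustrative.

        import numpy as np

        def hud_overlay_point(eye, merge_point, plane_point, plane_normal):
            # Ray from the driver's eye towards the merging point,
            # intersected with the plane approximating the windshield.
            direction = merge_point - eye
            denom = np.dot(plane_normal, direction)
            if abs(denom) < 1e-9:
                return None                    # ray parallel to the windshield
            t = np.dot(plane_normal, plane_point - eye) / denom
            if t <= 0:
                return None                    # merging point is behind the driver
            return eye + t * direction         # display position on the windshield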
  • In this way, the display control unit 16 provides the user with a virtual world in which the virtual merging point object appears to exist at the merging point in the real world.
  • For example, suppose that the point MP in the real world as shown in FIG. 2 has been determined as the merging point between the user and the merging partner, and that the vehicle 100 driven by the user has reached the vicinity of the merging point MP.
  • Information on the position of the merging point MP is stored in the merging point storage unit 12 of the merging support device 10.
  • In this case, the display control unit 16 uses the head-up display 21 to provide the user with a virtual world, as shown in FIG. 3, in which a virtual merging point object 200 exists at the merging point MP.
  • The user, who is the driver of the vehicle 100, sees the virtual merging point object 200 at the position corresponding to the merging point MP in the real scene seen through the windshield 30 of the vehicle 100.
  • In FIG. 3 and the subsequent figures, hatching is applied to virtual objects so that the real world and the virtual world can be easily distinguished; however, the virtual objects actually displayed by the head-up display 21 need not be hatched.
  • Here, an image simulating the sign pole installed at a public transportation stop (bus stop) is used as the virtual merging point object, but the virtual merging point object may be any image.
  • As described above, the merging support device 10 presents the location of the merging point to the user by presenting a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. The user can therefore intuitively recognize the location of the merging point in the real world, which makes it easier for the user to reach the merging point.
  • Next, the operation of the merging support device 10 will be described with reference to the flowchart of FIG. 5.
  • First, the merging point storage unit 12 stores the merging point (step S101).
  • Next, the position information acquisition unit 14 acquires the position of the vehicle 100 calculated by the positioning device 24, that is, the position of the user (step S102). At this time, the position information acquisition unit 14 also calculates the orientation of the vehicle 100 based on the history of its position.
  • The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S103).
  • Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S104).
  • The determination in step S104 could be made simply according to whether the straight-line distance from the user to the merging point is at most a predetermined threshold (for example, 100 m). However, even when the straight-line distance from the user to the merging point is short, it may take a long time for the user to reach the merging point, for example when the road on which the user is located and the road on which the merging point is located cross at a grade separation. Therefore, in the present embodiment, the route length (distance) from the user to the merging point is calculated using the map information, and the determination in step S104 is made according to whether that route length is at most the threshold.
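  • A minimal sketch of this decision logic is shown below; route_length_fn stands in for a route search over the map information and, like the other names and the threshold value, is an assumption for illustration.

        import math

        THRESHOLD_M = 100.0  # example threshold from the description above

        def is_near_merging_point(user_xy, merge_xy, route_length_fn):
            # The straight-line distance is a lower bound on the route length,
            # so a clearly distant merging point can be rejected cheaply.
            straight = math.hypot(merge_xy[0] - user_xy[0], merge_xy[1] - user_xy[1])
            if straight > THRESHOLD_M:
                return False
            # The deciding metric is the route length along the roads: a short
            # straight-line distance can still mean a long drive, e.g. when the
            # two roads cross at a grade separation.
            return route_length_fn(user_xy, merge_xy) <= THRESHOLD_M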
  • When the user's position is near the merging point (YES in step S104), the display control unit 16 calculates the position of the merging point as seen by the user, based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15 (step S105). It then checks whether the position of the merging point as seen by the user is within the display area of the head-up display 21 (step S106).
  • If the position of the merging point as seen by the user is within the display area of the head-up display 21 (YES in step S106), the display control unit 16 uses the head-up display 21 to display the virtual merging point object superimposed on the position of the merging point as seen by the user (step S107), and the process returns to step S102.
  • When the user's position is far from the merging point (NO in step S104), or when the position of the merging point as seen by the user is outside the display area of the head-up display 21 (NO in step S106), the display control unit 16 checks whether the virtual merging point object is being displayed by the head-up display 21, deletes it if it is displayed (step S108), and returns to step S102.
  • Regarding step S104, if the road on which the user is located and the road on which the merging point is located are grade-separated but there are stairs or a slope that allows walking between the two roads, it is also possible for the user to get off the vehicle 100 and walk to pick up the merging partner. The determination in step S104 may therefore be made based on the route length through the stairs or slope. If YES is determined in step S104 based on such a route, it is preferable to notify the user to that effect.
  • Otherwise, it is preferable that the determination in step S104 is performed without considering the stairs or slope.
  • Alternatively, the user may be notified when using the stairs or slope would result in a later arrival at the merging point than driving the vehicle 100 along the road.
  • FIGS. 6 and 7 are diagrams each showing an example of the hardware configuration of the merging support device 10.
  • Each function of the components of the merging support device 10 shown in FIG. 1 is realized by, for example, the processing circuit 50 shown in FIG. 6. That is, the merging support device 10 includes a processing circuit 50 for storing the merging point between the user and another person, storing the virtual merging point object, which is an image showing the position of the merging point, acquiring the user's position, and displaying the virtual merging point object, based on the user's position and the position of the merging point, at the position corresponding to the merging point in the real scene the user sees through the transparent screen.
  • The processing circuit 50 may be dedicated hardware, or may be configured using a processor that executes a program stored in a memory (a CPU (Central Processing Unit), also called a processing unit, arithmetic unit, microprocessor, microcomputer, or DSP (Digital Signal Processor)).
  • When the processing circuit 50 is dedicated hardware, the processing circuit 50 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • The functions of the components of the merging support device 10 may be realized by individual processing circuits, or may be realized collectively by one processing circuit.
  • FIG. 7 shows an example of the hardware configuration of the merging support device 10 when the processing circuit 50 is configured using a processor 51 that executes a program.
  • In this case, the functions of the components of the merging support device 10 are realized by software or the like (software, firmware, or a combination of software and firmware).
  • The software or the like is described as a program and stored in a memory 52.
  • The processor 51 realizes the functions of each unit by reading and executing the program stored in the memory 52.
  • That is, the merging support device 10 includes a memory 52 for storing a program that, when executed by the processor 51, results in the execution of a process of storing the merging point between the user and another person, a process of storing the virtual merging point object, which is an image showing the position of the merging point, a process of acquiring the user's position, and a process of displaying the virtual merging point object, based on the user's position and the position of the merging point, at the position corresponding to the merging point in the real scene the user sees through the transparent screen. In other words, this program causes a computer to execute the procedures and methods of operation of the components of the merging support device 10.
  • Here, the memory 52 may be, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), or EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive); a magnetic disk, flexible disk, optical disc, compact disc, mini disc, or DVD (Digital Versatile Disc) and its drive device; or any storage medium to be used in the future.
  • The present invention is not limited to this; some of the components of the merging support device 10 may be realized by dedicated hardware while the other components are realized by software or the like.
  • For example, the functions of some components can be realized by the processing circuit 50 as dedicated hardware, while the functions of the other components can be realized by the processing circuit 50 as the processor 51 reading and executing the program stored in the memory 52.
  • As described above, the merging support device 10 can realize each of the above functions by hardware, software, or the like, or a combination thereof.
  • The merging point may be determined by any method.
  • For example, an arbitration system for arranging the location of the merging point between the user of the merging support device 10 and the merging partner may be constructed on the Internet.
  • In that case, the merging point storage unit 12 of the merging support device 10 acquires the information on the determined merging point from the arbitration system through the communication unit 11 and stores it.
  • The image of the virtual merging point object may be selected from a plurality of images by the user operating the operation input device 23. The image of the virtual merging point object may also be an image designated by the merging partner, or it may be decided by consultation between the user of the merging support device 10 and the merging partner.
  • For example, an arbitration system for arranging the image of the virtual merging point object between the user of the merging support device 10 and the merging partner may be constructed on the Internet. In that case, the virtual object storage unit 13 of the merging support device 10 acquires the image of the determined virtual merging point object from the arbitration system through the communication unit 11 and stores it.
  • In this way, both the user of the merging support device 10 (the driver of the vehicle 100) and the merging partner can recognize what the image of the virtual merging point object looks like. For example, when the vehicle 100 is a taxi, the driver of the vehicle 100 and the merging partner may not be acquainted with each other; if both the driver of the vehicle 100 and the merging partner recognize the image of the virtual merging point object, they can authenticate each other by confirming each other's virtual merging point object images, which can contribute to improved security.
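  • One conceivable way to realize this mutual confirmation, sketched purely as an illustration (the patent does not prescribe a mechanism), is for each party to compare a digest of the agreed object image received from the arbitration system.

        import hashlib

        def object_fingerprint(image_bytes: bytes) -> str:
            # Both parties compute a digest of the agreed virtual merging point
            # object image; matching digests imply the images themselves match.
            return hashlib.sha256(image_bytes).hexdigest()

        def images_match(my_image: bytes, partner_digest: str) -> bool:
            return object_fingerprint(my_image) == partner_digest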
  • The display control unit 16 may provide the user with a virtual world that includes not only the virtual merging point object indicating the position of the merging point but also virtual objects indicating additional information for merging support.
  • For example, a virtual world may be provided to the user in which there exist a virtual object 201 simulating the user arriving at the merging point (here, the vehicle 100 driven by the user) and a virtual object 202 simulating the merging partner arriving at the merging point.
  • In this case, the display control unit 16 uses the head-up display 21 to display the virtual object 201 simulating the vehicle 100 on the road in front of the virtual merging point object 200, as shown in FIG. 9, and to display the virtual object 202 simulating the merging partner beside the virtual merging point object 200.
  • A virtual world in which a virtual object 203 indicating the stop position (parking space) of the vehicle 100 when it arrives at the merging point exists together with the virtual merging point object 200 may also be provided to the user.
  • In this case, the display control unit 16 uses the head-up display 21 to display the virtual object 203 indicating the stop position of the vehicle 100 on the road in front of the virtual merging point object 200.
  • Further, when the merging point MP is close to the vehicle 100 but is not at a position where the virtual merging point object 200 can be displayed, the display control unit 16 may display a virtual object 204 indicating that fact, as shown in FIG. 13, for example.
  • In FIG. 13, the virtual object 204 consists of an image of the virtual merging point object 200 in an inconspicuous display mode (size, color, brightness, and the like) and an image of an arrow indicating the direction in which the merging point MP is located.
  • The virtual object 204 may be only one of these two images. An animation effect, such as making the virtual object 204 blink, may also be used.
  • FIG. 14 is a flowchart showing the operation of the merging support device 10 when it displays the virtual object 204 indicating that the position of the merging point MP as seen by the user is outside the display area of the head-up display 21. The flowchart of FIG. 14 is obtained by adding, to the flowchart of FIG. 5, step S109, which is executed when the determination in step S106 is NO.
  • That is, when the position of the merging point as seen by the user is outside the display area of the head-up display 21 (NO in step S106), the display control unit 16 displays the virtual object 204 indicating that fact, using the head-up display 21 (step S109). The other steps are the same as in FIG. 5, so their description is omitted here.
  • Further, the communication unit 11 may acquire information on the position and planned route of the merging partner from the merging partner's mobile terminal or navigation device, and the display control unit 16 may display virtual objects indicating that information together with the virtual merging point object 200. Specifically, the display control unit 16 may display a virtual object indicating the position of the merging partner at the position corresponding to the position of the merging partner as seen by the user, and may display a virtual object indicating the planned route of the merging partner at the position corresponding to that route as seen by the user.
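  • A minimal sketch of this overlay construction follows; the data types and the to_screen mapping (for example, the HUD projection sketched earlier) are assumptions for illustration.

        from dataclasses import dataclass

        @dataclass
        class PartnerInfo:
            position: tuple       # (x, y) position of the merging partner
            planned_route: list   # sequence of (x, y) waypoints

        def build_overlays(partner: PartnerInfo, to_screen):
            # to_screen maps a world position to where it appears in the user's
            # view and returns None for points outside the display area.
            overlays = []
            marker = to_screen(partner.position)
            if marker is not None:
                overlays.append(("partner_marker", marker))    # cf. virtual object 205
            route = [p for p in map(to_screen, partner.planned_route) if p is not None]
            if route:
                overlays.append(("partner_route", route))      # cf. virtual object 206
            return overlays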
  • For example, suppose that the point MP in the real world as shown in FIG. 15 has been determined as the merging point between the user and a merging partner 101, and that the vehicle 100 driven by the user and the merging partner 101 have reached the vicinity of the merging point MP.
  • Information on the merging point MP is stored in the merging point storage unit 12 of the merging support device 10, and the communication unit 11 sequentially acquires information on the position and planned route of the merging partner 101 from the mobile terminal of the merging partner 101.
  • In this case, the display control unit 16 uses the head-up display 21 to provide the user with a virtual world in which there exist the virtual merging point object 200 indicating the merging point MP, a virtual object 205 indicating the position of the merging partner 101, and a virtual object 206 indicating the planned route of the merging partner 101.
  • The user, who is the driver of the vehicle 100, sees the virtual merging point object 200, the virtual object 205 indicating the position of the merging partner 101, and the virtual object 206 indicating the planned route of the merging partner 101 in the real scene seen through the windshield 30 of the vehicle 100.
  • FIG. 17 shows a state in which the merging partner 101 is hidden behind a building and cannot be seen. When the merging partner 101 comes out to a position visible to the user, the virtual object 205 can be seen above the merging partner 101, as shown in FIG. 18. The user can therefore quickly find the merging partner 101 even in a crowded place.
  • The information on the merging partner acquired by the communication unit 11 and displayed by the display control unit 16 is not limited to the current position and the planned route.
  • For example, the communication unit 11 may acquire information on the merging partner's estimated time of arrival at the merging point, and a character string such as "5 minutes before arrival" may be displayed as a virtual object.
  • FIG. 19 is a diagram showing a configuration of a merging support device 10 according to a second embodiment of the present invention.
  • The merging support device 10 of the second embodiment is also mounted on the vehicle 100, and the user of the merging support device 10 is the driver of the vehicle 100.
  • The merging support device 10 of the second embodiment is connected to a photographing device 27 instead of the head-up display 21 shown in FIG. 1.
  • The photographing device 27 is a camera that photographs the real scene around the vehicle 100 (that is, around the user). Here, it is assumed that the photographing device 27 photographs the real scene in front of the vehicle 100.
  • The display control unit 16 of the merging support device 10 displays the image of the real scene captured by the photographing device 27 (hereinafter referred to as the "real scene image") on the screen of the display device 22. Further, based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 calculates the relative position of the merging point as seen from the user (the driver of the vehicle 100), and displays the virtual merging point object at the position corresponding to the merging point in the real scene image displayed on the screen of the display device 22.
  • As a result, the screen of the display device 22 shows an image in which the virtual merging point object appears to exist at the merging point. That is, through the screen of the display device 22, the display control unit 16 provides the user with a virtual world in which the virtual merging point object exists at the merging point.
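  • In this screen-based variant, placing the object amounts to projecting the merging point into the camera image. Below is a minimal pinhole-camera sketch; the intrinsic parameters are illustrative and lens distortion is ignored.

        def project_to_image(point_cam, fx, fy, cx, cy, width, height):
            # point_cam is the merging point in the camera frame (metres):
            # X right, Y down, Z forward. Returns pixel coordinates, or None
            # when the point is behind the camera or outside the image.
            X, Y, Z = point_cam
            if Z <= 0:
                return None
            u = fx * X / Z + cx
            v = fy * Y / Z + cy
            if 0 <= u < width and 0 <= v < height:
                return (u, v)
            return None   # outside the range of the real scene image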
  • For example, when the display control unit 16 provides the user with a virtual world similar to that of FIG. 3, the display control unit 16 displays an image as shown in FIG. 20 on the screen of the display device 22.
  • This image is the real scene image captured by the photographing device 27 with the virtual merging point object 200 superimposed at the position corresponding to the merging point MP.
  • In this way, the merging support device 10 of the second embodiment also presents the location of the merging point by presenting the user, who is the driver of the vehicle 100, with a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. The user can therefore intuitively recognize the location of the merging point in the real world, and it becomes easier to reach the merging point.
  • Next, the operation of the merging support device 10 of the second embodiment will be described with reference to its flowchart.
  • First, the merging point storage unit 12 stores the merging point (step S201).
  • Next, the display control unit 16 displays the real scene image around the vehicle 100 captured by the photographing device 27 on the screen of the display device 22 (step S202).
  • The position information acquisition unit 14 acquires the position of the vehicle 100 calculated by the positioning device 24, that is, the position of the user (step S203).
  • At this time, the position information acquisition unit 14 also calculates the orientation of the vehicle 100 based on the history of its position.
  • The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S204).
  • Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S205). The determination in step S205 may be made in the same manner as step S104 of FIG. 5.
  • When the user's position is near the merging point (YES in step S205), the display control unit 16 calculates the position of the merging point in the real scene as seen by the user, based on the position and orientation of the vehicle 100, the position of the merging point, and the map information (step S206).
  • Then, it is checked whether the position of the merging point as seen by the user is within the range of the real scene image captured by the photographing device 27 (step S207). If the position of the merging point as seen by the user is within the range of the real scene image (YES in step S207), the display control unit 16 displays the virtual merging point object superimposed at the position of the merging point as seen by the user in the real scene image displayed on the screen of the display device 22 (step S208), and the process returns to step S202.
  • When the user's position is far from the merging point (NO in step S205), or when the position of the merging point as seen by the user is outside the range of the real scene image (NO in step S207), the display control unit 16 checks whether the virtual merging point object is displayed on the screen of the display device 22, deletes it if it is displayed (step S209), and returns to step S202.
  • In step S208, the display control unit 16 may generate a real scene image in which the virtual merging point object is combined, and the display device 22 may display that combined image.
  • Alternatively, when the display device 22 has a superimpose function, the display device 22 may display on its screen the real scene image acquired directly from the photographing device 27 (without passing through the display control unit 16), and the display control unit 16 may superimpose the virtual merging point object on the real scene image displayed on the screen of the display device 22.
  • In the present embodiment, the photographing device 27 captures the real scene in front of the vehicle 100, but the photographing device 27 may capture the real scene to the side of, behind, or diagonally behind the vehicle 100.
  • When the real scene behind the vehicle 100 is captured, the photographing device 27 and the display device 22 function as an electronic mirror that electronically realizes the rear-view mirror of the vehicle 100.
  • That is, the merging support device 10 of the second embodiment is applicable to an electronic mirror system of the vehicle 100.
  • Further, a plurality of photographing devices 27 with different shooting directions may be connected to the merging support device 10, and the display control unit 16 may create a surround image by combining the plurality of real scene images captured by the plurality of photographing devices 27 and display it on the screen of the display device 22.
  • The technique of using the head-up display 21 shown in the first embodiment, the technique of using the real scene image in front of the vehicle 100 shown in the second embodiment, the technique of using the electronic mirror described above, and the technique of using the surround image may also be combined.
  • Combining them expands the area in which the virtual merging point object can be displayed, so the probability that the virtual merging point object is displayed increases.
  • In that case, the virtual merging point object may be displayed on only some of the plurality of screens.
  • The display control unit 16 may then display, on a screen on which the virtual merging point object is not displayed, a mark notifying the user of which screen the virtual merging point object is displayed on.
  • The vehicle 100 may also be driven remotely by a driver (the user of the merging support device 10) who is away from the vehicle 100.
  • In that case, the display device 22 is installed near the user at the remote location.
  • Also in that case, the position of the vehicle 100 acquired by the position information acquisition unit 14 is not actually the position of the user; the position information acquisition unit 14 regards the position of the vehicle 100 calculated by the positioning device 24 as the user's position and acquires it.
  • The display control unit 16 then displays the virtual merging point object at the position corresponding to the merging point in the image of the real scene around the vehicle 100.
  • The merging support device 10 of the second embodiment can also be realized with the same hardware configurations as in FIGS. 6 and 7.
  • That is, the merging support device 10 includes a processing circuit 50 for storing the merging point between the user and another person, storing the virtual merging point object, which is an image showing the position of the merging point, acquiring the user's position, and displaying the virtual merging point object, based on the user's position and the position of the merging point, at the position corresponding to the merging point in the image of the real scene around the user displayed on the screen.
  • Alternatively, the memory 52 stores a program that, when executed by the processor 51, results in the execution of a process of storing the merging point between the user and another person, a process of storing the virtual merging point object, which is an image showing the position of the merging point, a process of acquiring the user's position, and a process of displaying the virtual merging point object, based on the user's position and the position of the merging point, at the position corresponding to the merging point in the image of the real scene around the user displayed on the screen.
  • FIG. 22 is a diagram showing the configuration of the merging support device 10 according to the third embodiment of the present invention.
  • The merging support device 10 of the third embodiment is mounted on a mobile terminal 300 such as a smartphone. That is, the merging support device 10 is realized by the processor of the mobile terminal 300 executing a merging support application program (hereinafter referred to as the "merging support application") stored in its memory. Except for this, the configuration of the merging support device 10 in FIG. 22 is the same as the configuration described above.
  • The user of the merging support device 10 is the user of the mobile terminal 300.
  • The "user" in the following description therefore means the user of the mobile terminal 300.
  • Here, the user is assumed to be a pedestrian, but the user may also bring the mobile terminal 300 into a vehicle.
  • The merging partner may be a pedestrian or a vehicle, but here it is assumed to be a vehicle.
  • The merging partner's vehicle may be a driverless vehicle (for example, an unmanned taxi) that travels by remote control or automated driving.
  • The photographing device 27 of the mobile terminal 300 photographs the real scene around the user, but the shooting direction depends on how the user holds the mobile terminal 300 and therefore does not necessarily match the user's direction of travel. The display control unit 16 of the merging support device 10 according to the third embodiment therefore needs to recognize the orientation of the mobile terminal 300 in order to recognize the range of the real scene image captured by the photographing device 27.
  • The orientation of the mobile terminal 300 may be recognized by any method.
  • For example, the display control unit 16 may calculate the orientation of the mobile terminal 300 using a direction sensor (not shown) of the mobile terminal 300, or may determine the orientation of the mobile terminal 300 by collating the real scene image captured by the photographing device 27 with the map information acquired by the map information acquisition unit 15.
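  • Once the terminal's orientation is known, deciding whether the merging point falls inside the captured scene reduces to a bearing comparison. A minimal sketch follows, with a flat east/north frame and an illustrative field-of-view value assumed.

        import math

        def bearing_deg(from_xy, to_xy):
            dx = to_xy[0] - from_xy[0]   # east
            dy = to_xy[1] - from_xy[1]   # north
            return math.degrees(math.atan2(dx, dy)) % 360.0

        def merge_point_in_view(user_xy, merge_xy, camera_heading_deg, fov_deg=60.0):
            # The shooting direction follows how the user holds the terminal, so
            # camera_heading_deg must come from the terminal's direction sensor
            # (or from collating the image against the map, as described above).
            diff = (bearing_deg(user_xy, merge_xy) - camera_heading_deg + 180.0) % 360.0 - 180.0
            return abs(diff) <= fov_deg / 2.0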
  • The display control unit 16 displays the real scene image captured by the photographing device 27 on the screen of the display device 22 (sometimes referred to as the "screen of the mobile terminal 300"). Further, based on the recognized orientation of the mobile terminal 300, the position of the mobile terminal 300 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 calculates the relative position of the merging point as seen by the user, and displays the virtual merging point object at the position corresponding to the merging point in the real scene image displayed on the screen of the display device 22. As a result, the screen of the display device 22 shows an image in which the virtual merging point object appears to exist at the merging point. That is, through the screen of the display device 22, the display control unit 16 provides the user with a virtual world in which the virtual merging point object exists at the merging point.
  • For example, suppose that the point MP in the real world as shown in FIG. 23 has been determined as the merging point between a user 301 and the merging partner, and that the user 301 has reached the vicinity of the merging point MP.
  • Information on the position of the merging point MP is stored in the merging point storage unit 12 of the merging support device 10.
  • In this case, the display control unit 16 uses the display device 22 to provide the user with a virtual world, as shown in FIG. 24, in which a virtual merging point object 400 exists at the merging point MP.
  • That is, the screen of the mobile terminal 300 displays an image in which the virtual merging point object 400 is superimposed at the position corresponding to the merging point MP in the real scene image captured by the photographing device 27.
  • In this way, the merging support device 10 of the third embodiment presents the location of the merging point by presenting the user of the mobile terminal 300 with a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. The user can therefore intuitively recognize the location of the merging point in the real world, and it becomes easier to reach the merging point.
  • Next, the operation of the mobile terminal 300 will be described with reference to the flowchart of FIG. 26.
  • First, when activated by the user, a merging point determination application of the mobile terminal 300 communicates directly or indirectly with the merging partner via the communication device 25 and arranges a merging point with the merging partner (step S301).
  • The merging point determined by the merging point determination application is stored in the merging point storage unit 12 (step S302).
  • After that, the user activates the merging support application at an arbitrary timing (for example, when the user has approached the merging point to some extent), thereby activating the merging support device 10 (step S303).
  • When the merging support device 10 is activated, the display control unit 16 displays the real scene image of the user's surroundings captured by the photographing device 27 on the screen of the display device 22 (step S304). Then, the position information acquisition unit 14 acquires the position of the mobile terminal 300 calculated by the positioning device 24, that is, the position of the user (step S305). The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S306).
  • Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S307).
  • The determination in step S307 may be made in the same manner as step S104 of FIG. 5.
  • When the user's position is near the merging point (YES in step S307), the display control unit 16 calculates the position of the merging point in the real scene as seen by the user, based on the position and orientation of the mobile terminal 300, the position of the merging point, and the map information (step S308).
  • The display control unit 16 then checks whether the position of the merging point as seen by the user is within the range of the real scene image captured by the photographing device 27 (step S309). If the position of the merging point as seen by the user is within the range of the real scene image (YES in step S309), the display control unit 16 displays the virtual merging point object superimposed at the position of the merging point as seen by the user in the real scene image displayed on the screen of the display device 22 (step S310), and the process returns to step S304.
  • When the user's position is far from the merging point (NO in step S307), or when the position of the merging point as seen by the user is outside the range of the real scene image (NO in step S309), the display control unit 16 checks whether the virtual merging point object is displayed on the screen of the display device 22, deletes it if it is displayed (step S311), and returns to step S304.
  • While steps S304 to S311 are repeatedly executed, the screen displayed on the display device 22 is updated continually. Therefore, even while the virtual merging point object 400 is displayed on the screen of the display device 22, the merging point moves out of the area of the real scene image as soon as the user changes the direction in which the mobile terminal 300 is held, and the virtual merging point object 400 disappears. The user may therefore be allowed to pause the image displayed on the screen of the display device 22. This makes it easy for the user to compare the image of the virtual world displayed on the display device 22 with the scenery of the real world.
  • Alternatively, the display control unit 16 may combine real scene images to generate a panoramic image and display the virtual merging point object in the panoramic image.
  • In the above example, the merging point determination application and the merging support application are separate applications, but the function of the merging point determination application may be incorporated into the merging support application.
  • An example of the operation of the mobile terminal 300 when the merging support application has the function of the merging point determination application will be described with reference to the flowchart of FIG. 27.
  • When the user of the mobile terminal 300 operates the operation input device 23 to activate the merging support application (step S401), the merging support application communicates directly or indirectly with the merging partner via the communication device 25 and arranges a merging point with the merging partner (step S402). The determined merging point is stored in the merging point storage unit 12 (step S403).
  • Then, the position information acquisition unit 14 acquires the position of the mobile terminal 300 calculated by the positioning device 24, that is, the position of the user (step S404).
  • The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S405).
  • Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S406). If the user's position is near the merging point (YES in step S406), the user is notified of that fact using, for example, the vibration function of the mobile terminal 300 (step S407).
  • The user who has received the notification can operate the operation input device 23 to instruct the merging support device 10 to display the virtual merging point object. If there is no instruction to display the virtual merging point object for a certain period of time (NO in step S408), the process returns to step S404.
  • When the display of the virtual merging point object is instructed (YES in step S408), the display control unit 16 displays the real scene image of the user's surroundings captured by the photographing device 27 on the screen of the display device 22 (step S409). Then, the position information acquisition unit 14 acquires the position of the mobile terminal 300 (the user's position) calculated by the positioning device 24 (step S410). The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S411).
  • The display control unit 16 determines whether the user's position is near the merging point in the same manner as in step S406; if the user's position is near the merging point (YES in step S412), the display control unit 16 calculates the position of the merging point in the real scene as seen by the user, based on the position and orientation of the mobile terminal 300, the position of the merging point, and the map information (step S413).
  • Then, it is checked whether the position of the merging point as seen by the user is within the range of the real scene image captured by the photographing device 27 (step S414). If the position of the merging point as seen by the user is within the range of the real scene image (YES in step S414), the display control unit 16 displays the virtual merging point object superimposed at the position of the merging point as seen by the user in the real scene image displayed on the screen of the display device 22 (step S415), and the process returns to step S409.
  • When the user's position has moved away from the merging point (NO in step S412), the display control unit 16 checks whether the virtual merging point object is displayed on the screen of the display device 22, deletes it if it is displayed (step S416), and returns to step S404.
  • When the position of the merging point as seen by the user is outside the range of the real scene image (NO in step S414), the display control unit 16 performs the same processing as in step S416 (step S417) and returns to step S409.
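  • The notify-then-display logic of steps S404 to S417 can be condensed as in the sketch below; every method on terminal is a hypothetical stand-in for the units described above, not an API defined by the patent.

        def merging_support_loop(terminal, merge_point, is_near, in_view):
            # Outer loop: steps S404-S408 (wait until the user is near the
            # merging point and asks for the virtual object to be displayed).
            while True:
                pos = terminal.user_position()                       # S404
                if is_near(pos, merge_point):                        # S406
                    terminal.vibrate()                               # S407
                    if terminal.display_requested(timeout_s=10.0):   # S408 ("certain period")
                        break
            # Inner loop: steps S409-S417 (overlay the object on the live
            # real scene image while the user stays near the merging point).
            while True:
                scene = terminal.capture_scene()                     # S409
                pos = terminal.user_position()                       # S410
                if not is_near(pos, merge_point):                    # S412: NO
                    terminal.clear_virtual_object()                  # S416
                    return
                if in_view(pos, terminal.heading(), merge_point):    # S414
                    terminal.draw_virtual_object(scene, merge_point) # S415
                else:
                    terminal.clear_virtual_object()                  # S417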
  • Also in the third embodiment, the user or the merging partner may select the image to be used as the virtual merging point object, and the image of the virtual merging point object may be changed using an arbitration system for arranging it.
  • Also in the third embodiment, the display control unit 16 may provide the user with a virtual world that includes not only the virtual merging point object indicating the position of the merging point but also virtual objects indicating additional information for merging support.
  • For example, a virtual world may be provided to the user in which there exist a virtual object 401 simulating the user 301 arriving at the merging point and a virtual object 402 simulating the merging partner (here, a vehicle) arriving at the merging point.
  • In this case, on the screen of the mobile terminal 300, the display control unit 16 displays the virtual object 401 simulating the user 301 beside the virtual merging point object 400, and displays the virtual object 402 simulating the merging partner on the road in front of the virtual merging point object 400.
  • Further, the communication unit 11 may acquire information on the position and planned route of the merging partner from the merging partner's mobile terminal or navigation device, and the display control unit 16 may display virtual objects indicating that information together with the virtual merging point object 400. Specifically, the display control unit 16 may display a virtual object indicating the position of the merging partner at the position corresponding to the position of the merging partner as seen by the user, and may display a virtual object indicating the planned route of the merging partner at the position corresponding to that route as seen by the user.
  • For example, suppose that the point MP in the real world as shown in FIG. 30 has been determined as the merging point between the user 301 and a merging partner vehicle 302, and that the user 301 and the vehicle 302 have reached the vicinity of the merging point MP.
  • Information on the merging point MP is stored in the merging point storage unit 12 of the merging support device 10, and the communication unit 11 acquires information on the position and planned route of the vehicle 302 from the navigation device of the merging partner vehicle 302.
  • In this case, the display control unit 16 uses the screen of the mobile terminal 300 to provide the user 301 with a virtual world, as shown in FIG. 30, in which there exist the virtual merging point object 400 indicating the merging point MP, a virtual object 403 indicating the position of the merging partner vehicle 302, and a virtual object 404 indicating the planned route of the vehicle 302.
  • That is, the screen of the mobile terminal 300 displays an image in which the virtual merging point object 400, the virtual object 403 indicating the position of the merging partner vehicle 302, and the virtual object 404 indicating the planned route of the vehicle 302 are superimposed on the real scene image.
  • Also in the third embodiment, the information on the merging partner acquired by the communication unit 11 and displayed by the display control unit 16 is not limited to the current position and the planned route.
  • For example, the communication unit 11 may acquire information on the merging partner's estimated time of arrival at the merging point, and a character string such as "5 minutes before arrival" may be displayed as a virtual object.
  • the mobile terminal 300 is configured to be able to switch between the display of a real scene image including the virtual confluence point object as shown in FIG. 25 (display of the virtual world) and the display of a map showing the positions of the user and the confluence point as shown in FIG. 32. It may have been done. In that case, the map display area and the actual scene image display area on the screen of the mobile terminal 300 may be changed according to the posture of the mobile terminal 300.
  • this state is defined as an elevation angle of 0 degrees
  • a map as shown in FIG. 32 is displayed, and when the user holds the mobile terminal 300 vertically to capture a real scene (this state is defined as an elevation angle of 0 degrees)
  • This state is defined as an elevation angle of 90 degrees
  • a real scene image as shown in FIG. 25 may be displayed.
  • When the elevation angle of the mobile terminal 300 is intermediate, both the real scene image and the map may be displayed as shown in FIG. 33. Alternatively, both the real scene image and the map may be displayed as shown in FIG. 34 regardless of the elevation angle of the mobile terminal 300.
  • In addition, the user may swipe the screen of the mobile terminal 300 to change the area ratio between the display area of the real scene image and the display area of the map. A minimal sketch of this posture-based layout switching follows.
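The sketch below assumes illustrative 30/60-degree thresholds for the intermediate range (the patent only defines the 0-degree and 90-degree states), and the choose_layout/on_swipe interface is hypothetical:

def choose_layout(elevation_deg: float, split_ratio: float = 0.5) -> dict:
    # 0 degrees: terminal held flat -> map (FIG. 32).
    # 90 degrees: terminal held upright to capture the scene -> real scene (FIG. 25).
    # In between: show both (FIG. 33); the 30/60-degree bounds are assumptions.
    if elevation_deg < 30.0:
        return {"mode": "map"}
    if elevation_deg > 60.0:
        return {"mode": "real_scene"}
    return {"mode": "split",
            "real_scene_ratio": split_ratio,
            "map_ratio": 1.0 - split_ratio}

def on_swipe(layout: dict, delta: float) -> dict:
    # Swiping adjusts the area ratio of the two regions, clamped to [0.1, 0.9].
    if layout.get("mode") == "split":
        r = min(0.9, max(0.1, layout["real_scene_ratio"] + delta))
        layout.update(real_scene_ratio=r, map_ratio=1.0 - r)
    return layout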
  • In the fourth embodiment, an example of a merging support system is described that supports merging between the vehicle 100 having the merging support device 10 according to the first embodiment (or the second embodiment) and the user of the mobile terminal 300 having the merging support device 10 according to the third embodiment.
  • FIG. 35 is a diagram showing the configuration of the merging support system according to the fourth embodiment.
  • the merging support system includes a vehicle 100 and a mobile terminal 300 each equipped with a merging support device 10, and a merging support server 500.
  • the vehicle 100 and the mobile terminal 300 can communicate with the merging support server 500 through a communication network such as the Internet.
  • Although FIG. 35 shows one vehicle 100 and one mobile terminal 300, there may be a plurality of vehicles 100 and mobile terminals 300.
  • the merging support server 500 provides a pick-up service using the vehicle 100 to the user of the mobile terminal 300, and a plurality of vehicles 100 that can be dispatched are registered in the merging support server 500 in advance.
  • the user of the mobile terminal 300 who intends to receive this service is referred to as a "service user”.
  • the merging support server 500 receives a vehicle allocation request from the service user's mobile terminal 300 (step S501).
  • The vehicle dispatch request from the mobile terminal 300 includes at least information on the place and time at which the service user wishes the vehicle to be dispatched, and may also include information such as the number of passengers and the desired vehicle type.
  • The merging support server 500 that has received the dispatch request then inquires about dispatch with those of the plurality of registered vehicles 100 that are close to the service user's request (step S502).
  • The merging support server 500 receives, from any of the vehicles 100 so consulted, a response to the effect that the vehicle can be dispatched (step S503), and transmits the information of the dispatchable vehicle 100 to the service user's mobile terminal 300 (step S504). When there are a plurality of dispatchable vehicles 100, information on all of them is transmitted to the service user's mobile terminal 300, and the service user selects one of them.
  • the merging support server 500 receives the information of the vehicle 100 selected by the service user and the profile of the service user from the mobile terminal 300 of the service user (step S505). As a result, a transfer contract is established between the service user and the selected vehicle 100.
  • When the contract is established, the merging support server 500 notifies the vehicle 100 selected by the service user of the conclusion of the contract and transmits an image of the virtual merging point object (step S506). The merging support server 500 also notifies the service user's mobile terminal 300 of the establishment of the contract and transmits the image of the virtual merging point object (step S507).
  • The operation of the merging support server 500 is now complete. After that, the merging support device 10 of the vehicle 100 performs the operation described in the first embodiment, and the merging support device 10 of the mobile terminal 300 performs the operation described in the third embodiment, whereby merging between the vehicle 100 and the service user is supported. The whole exchange is summarized in the sketch below.
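The exchange of steps S501-S507 might be condensed as follows; the classes, the matching rule, and the default object image are all illustrative assumptions, since the patent does not specify message formats:

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class DispatchRequest:
    place: str               # where the service user wants the vehicle dispatched
    time: str                # when
    passengers: int = 1      # optional extras
    vehicle_type: str = ""

@dataclass
class Vehicle:
    vehicle_id: str
    vehicle_type: str = "sedan"

    def matches(self, req: DispatchRequest) -> bool:
        # Placeholder rule for "close to the service user's request".
        return not req.vehicle_type or req.vehicle_type == self.vehicle_type

    def accepts(self, req: DispatchRequest) -> bool:
        return True          # assume the driver replies "can be dispatched"

def run_dispatch(vehicles: List[Vehicle], req: DispatchRequest,
                 choose: Callable[[List[Vehicle]], Vehicle],
                 user_profile: dict,
                 object_image: str = "bus_stop_sign.png") -> Optional[dict]:
    # S501: request received; S502: consult close matches; S503: collect replies.
    offers = [v for v in vehicles if v.matches(req) and v.accepts(req)]
    if not offers:
        return None
    chosen = choose(offers)  # S504: offers sent to the terminal; S505: user selects
    # S506/S507: both sides are notified of the contract and receive the same
    # virtual merging point object image, so they can later identify each other.
    return {"vehicle": chosen.vehicle_id, "profile": user_profile,
            "object_image": object_image}

# Example: the service user simply takes the first offer.
result = run_dispatch([Vehicle("taxi-1"), Vehicle("van-2", "van")],
                      DispatchRequest(place="station east exit", time="10:30"),
                      choose=lambda offers: offers[0],
                      user_profile={"name": "service user"})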
  • The merging support server 500 may operate as an arbitration system for arranging, between the service user and the driver of the vehicle 100 to be dispatched, the image to be used as the virtual merging point object.
  • In that case, the same virtual merging point object is used in the vehicle 100 and the mobile terminal 300; for example, when the service user merges with the vehicle 100, the service user can show the mobile terminal 300 on which the virtual merging point object is displayed to the driver of the vehicle 100, so that the two parties can confirm each other.
  • In the above description, the determination in step S307 of FIG. 26 was performed by the merging support device 10 included in the vehicle 100 or the mobile terminal 300.
  • Alternatively, the merging support server 500 may sequentially acquire the positions of the vehicle 100 and the mobile terminal 300, make that determination itself, and notify the vehicle 100 or the mobile terminal 300 when it approaches the merging point, as in the sketch below.
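A sketch of such server-side monitoring, using straight-line (haversine) distance as a simplification of the route-length criterion described in the first embodiment; the function names and the 100 m threshold are assumptions:

import math

def haversine_m(p, q):
    # Great-circle distance in meters between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def monitor_approach(positions, merging_point, notify, threshold_m=100.0):
    # positions yields (party_id, (lat, lon)) samples for the vehicle 100
    # and the mobile terminal 300; each party is notified once on approach.
    notified = set()
    for party_id, pos in positions:
        if party_id not in notified and haversine_m(pos, merging_point) <= threshold_m:
            notify(party_id)
            notified.add(party_id)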
  • In the fifth embodiment, one or a plurality of vehicles 100 equipped with the merging support device 10 travel along a specific patrol route, like a shuttle bus, and provide a service in which unspecified passengers carrying the mobile terminal 300 get on and off at a plurality of boarding/alighting points along the patrol route.
  • In the following, the vehicle 100 that provides this service is referred to as a "patrol vehicle", and the user of the mobile terminal 300 who intends to receive the service is referred to as a "service user".
  • the configuration of the transfer system of the fifth embodiment may be the same as that of FIG. 35.
  • Information on the patrol route and each boarding / alighting point is distributed from the merging support server 500 to the patrol vehicle 100 and the mobile terminal 300 of the service user.
  • a plurality of boarding / alighting points are stored as merging points in the merging point storage unit 12 of the merging support device 10 of the patrol vehicle 100.
  • a plurality of boarding / alighting points are stored as merging points in the merging point storage unit 12 of the merging support device 10 of the service user's mobile terminal 300.
  • The merging support devices 10 of the patrol vehicle 100 and of the mobile terminal 300 provide the driver of the patrol vehicle 100 and the service user with a virtual world in which a virtual merging point object exists at the position corresponding to each boarding/alighting point.
  • In the following, the virtual merging point object indicating the position of a boarding/alighting point is referred to as a "virtual boarding/alighting point object".
  • The driver of the patrol vehicle 100 and the service user can recognize the position of a boarding/alighting point by checking the position of the virtual boarding/alighting point object in the virtual world provided by the merging support device 10. Therefore, the transfer system does not require signposts to be physically installed along the patrol route, and the locations of the boarding/alighting points and the patrol route itself can be changed flexibly.
  • The merging support server 500 collects traffic information, weather information, distribution information of service users, and the like; for example, by changing the patrol route so as to avoid congested sections, by moving boarding/alighting points to locations sheltered from wind and rain in bad weather, or by adding areas with many service users to the patrol route, it can provide a highly convenient transfer service. Further, the merging support server 500 may change the image of the virtual boarding/alighting point object as needed.
  • The merging support server 500 may also receive requests from service users for the places to be used as boarding/alighting points, and determine the boarding/alighting points based on those requests; for example, a place that a plurality of service users can each reach within 10 minutes may be set as a boarding/alighting point (see the sketch below). Further, the merging support server 500 may temporarily increase the number of boarding/alighting points in response to a request from a service user who has difficulty walking, such as an elderly or injured person.
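A sketch of the 10-minute reachability rule, reusing haversine_m from the earlier server-side sketch; straight-line distance at an assumed walking speed stands in for a real route search:

def choose_boarding_points(candidates, user_positions,
                           walk_speed_m_per_min=80.0, limit_min=10.0):
    # Keep the candidate points that every requesting service user can reach
    # within limit_min minutes of walking at the assumed speed.
    reach_m = walk_speed_m_per_min * limit_min
    return [c for c in candidates
            if all(haversine_m(c, u) <= reach_m for u in user_positions)]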
  • Information on changes to the patrol route, the boarding/alighting points, and the image of the virtual boarding/alighting point object is distributed from the merging support server 500 to the patrol vehicle 100 and the mobile terminal 300.
  • The embodiments can be freely combined, and each embodiment can be appropriately modified or omitted within the scope of the invention.
  • 10 merging support device, 11 communication unit, 12 merging point storage unit, 13 virtual object storage unit, 14 position information acquisition unit, 15 map information acquisition unit, 16 display control unit, 21 head-up display, 22 display device, 23 operation input device, 24 positioning device, 25 communication device, 26 map information storage device, 27 imaging device, 30 windshield, MP merging point, 100 user's vehicle, 101 merging partner, 200, 400 virtual merging point object, 201-206, 401-404 virtual object, 300 mobile terminal, 301 user of mobile terminal, 302 merging partner's vehicle, 500 merging support server.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

A meetup assistance device (10) is provided with a meetup point storage unit (12) for storing a meetup point of a user and another person, a virtual object storage unit (13) for storing a virtual meetup point object, which is an image indicating the position of the meetup point, and a position information acquisition unit (14) for acquiring the position of the user. In addition, on the basis of the position of the user and the position of the meetup point, a display control unit (16) of the meetup assistance device (10) displays the virtual meetup point object at a position corresponding to the meetup point in an actual scene that can be seen by the user through a transparent screen, or in an image of the user's surroundings as displayed on the screen.

Description

In the automobile industry, with "sharing" as a keyword, services that allow a service user to use a vehicle owned by another person, such as ride hailing, ride sharing, and dynamic shuttles, are being developed and put into practical use. In these services, it is important to make the merging between the service user and the vehicle proceed smoothly.
The object, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a diagram showing the configuration of a merging support device according to Embodiment 1. FIG. 2 is a diagram showing an example of the real world. FIG. 3 is a diagram showing an example of the virtual world provided by the merging support device according to Embodiment 1. FIG. 4 is a diagram showing an example of a real scene and a virtual merging point object as seen by the user. FIG. 5 is a flowchart showing the operation of the merging support device according to Embodiment 1. FIGS. 6 and 7 are diagrams showing hardware configuration examples of the merging support device. FIGS. 8 to 13 are diagrams for explaining modifications of the merging support device according to Embodiment 1. FIG. 14 is a flowchart for explaining a modification of the merging support device according to Embodiment 1. FIGS. 15 to 18 are diagrams for explaining modifications of the merging support device according to Embodiment 1. FIG. 19 is a diagram showing the configuration of the merging support device according to Embodiment 2. FIG. 20 is a diagram showing an example of a real scene image and a virtual merging point object displayed on the display device. FIG. 21 is a flowchart showing the operation of the merging support device according to Embodiment 2. FIG. 22 is a diagram showing the configuration of the merging support device according to Embodiment 3. FIG. 23 is a diagram showing an example of the real world. FIG. 24 is a diagram showing an example of the virtual world provided by the merging support device according to Embodiment 3. FIG. 25 is a diagram showing an example of a real scene image and a virtual merging point object displayed on the screen of a mobile terminal. FIG. 26 is a flowchart showing the operation of the mobile terminal according to Embodiment 3. FIG. 27 is a flowchart showing a modification of the operation of the mobile terminal according to Embodiment 3. FIGS. 28 to 31 are diagrams for explaining modifications of the merging support device according to Embodiment 3. FIGS. 32 to 34 are diagrams for explaining modifications of the mobile terminal according to Embodiment 3. FIG. 35 is a diagram showing the configuration of the merging support system according to Embodiment 4. FIG. 36 is a flowchart showing the operation of the merging support server in the merging support system according to Embodiment 4.
<Embodiment 1>
FIG. 1 is a diagram showing the configuration of a merging support device 10 according to the first embodiment of the present invention. As shown in FIG. 1, the merging support device 10 of the first embodiment is mounted on a vehicle 100. However, the merging support device 10 need not be permanently installed in the vehicle 100, and may be configured as a portable device that can be brought into the vehicle 100.
In the present embodiment, the user of the merging support device 10 is the driver of the vehicle 100. Unless otherwise specified, the "user" in the following description means the driver of the vehicle 100.
The merging support device 10 is connected to a head-up display (HUD) 21, a display device 22, an operation input device 23, a positioning device 24, a communication device 25, and a map information storage device 26 included in the vehicle 100.
The head-up display 21 and the display device 22 are means by which the merging support device 10 presents various information as images to the user (the driver of the vehicle 100). In particular, the head-up display 21 can display an image on the real scene seen by the user by using the windshield of the vehicle 100 as a transparent screen. The display device 22 is a general display device installed on the center panel or the instrument panel of the vehicle 100, and displays, for example, the operation screen of the merging support device 10. The display device 22 is composed of, for example, a liquid crystal display or an organic EL (electroluminescence) display.
Since the image displayed by the head-up display 21 is superimposed on the real scene, it looks as if it exists in the real world. Hereinafter, an image displayed by the head-up display 21 is referred to as a "virtual object", and the world in which the virtual objects are regarded as existing is referred to as a "virtual world".
The operation input device 23 is a means for the user to input operations to the merging support device 10. The operation input device 23 may be a hardware key such as a push button, an operation lever, or a keyboard, or may be a software key displayed on a screen. For example, a software key serving as the operation input device 23 may be displayed on the display device 22, in which case the display device 22 and the operation input device 23 may be configured as one touch panel.
The positioning device 24 calculates the position of the vehicle 100 based on positioning signals received from GNSS (Global Navigation Satellite System) satellites such as GPS (Global Positioning System) satellites. Here, the position of the vehicle 100 is assumed to be equal to the position of the user of the merging support device 10. The communication device 25 is a means by which the merging support device 10 communicates with the outside of the vehicle 100.
The map information storage device 26 is a storage medium that stores map information including position information on roads, POIs (Points of Interest), and the like. The map information storage device 26 may be one provided in the navigation system of the vehicle 100. Further, the map information storage device 26 may be configured as a server installed outside the vehicle 100, in which case the merging support device 10 acquires the map information from the map information storage device 26 through the communication device 25.
The merging support device 10 includes a communication unit 11, a merging point storage unit 12, a virtual object storage unit 13, a position information acquisition unit 14, a map information acquisition unit 15, and a display control unit 16.
The communication unit 11 communicates with the outside of the vehicle 100 via the communication device 25. The merging point storage unit 12 stores the merging point between the user and another person who is the merging partner. The merging partner may be a pedestrian or a vehicle. The user can operate the operation input device 23 to communicate directly or indirectly with the merging partner via the communication device 25 and arrange a merging point with the merging partner in advance; the determined merging point is stored in the merging point storage unit 12. Here, indirect communication means that the user and the merging partner communicate via another system composed of a server or the like (for example, an arbitration system for the merging point), and direct communication means that the user and the merging partner communicate without going through such a system.
The virtual object storage unit 13 stores an image of the virtual merging point object, which is a virtual object indicating the position of the merging point.
The position information acquisition unit 14 acquires the position of the vehicle 100 (the user's position) calculated by the positioning device 24. The position information acquisition unit 14 can also calculate the orientation of the vehicle 100 from the history of the position of the vehicle 100, as in the sketch below. The position information acquisition unit 14 may further perform processing for improving the accuracy of the position of the vehicle 100, such as map matching using the map information or positioning by autonomous navigation using the speed sensor and direction sensor of the vehicle 100. The orientation of the vehicle 100 may also be measured using the direction sensor of the vehicle 100.
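For example, the orientation might be estimated from the last two position fixes as in the sketch below (an equirectangular approximation; the function name and interface are assumptions):

import math

def heading_from_history(history):
    # Heading in degrees clockwise from north, from the last two (lat, lon)
    # fixes; returns None until at least two samples exist.
    if len(history) < 2:
        return None
    (lat1, lon1), (lat2, lon2) = history[-2], history[-1]
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return math.degrees(math.atan2(dx, dy)) % 360.0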
The map information acquisition unit 15 acquires, from the map information storage device 26, map information around the position of the vehicle 100 acquired by the position information acquisition unit 14.
The display control unit 16 calculates the position (relative position) of the merging point as seen by the user (the driver of the vehicle 100), based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, and uses the head-up display 21 to display the virtual merging point object at the position corresponding to the merging point in the real scene seen by the user. More specifically, the display control unit 16 displays the virtual merging point object at the intersection of the windshield and the straight line connecting the user's eyes and the merging point, so that the virtual merging point object appears superimposed on the merging point when viewed from the user. As a result, to the user, it appears as if a virtual merging point object exists at the merging point. That is, the display control unit 16 provides the user with a virtual world in which the virtual merging point object appears to exist at the merging point in the real world.
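Geometrically, this amounts to intersecting the eye-to-merging-point ray with the windshield plane. A sketch under the assumption of a common vehicle-fixed 3-D coordinate frame (the patent does not specify one):

import numpy as np

def hud_draw_position(eye, merging_point, plane_point, plane_normal):
    # Returns the point on the windshield plane where the virtual merging
    # point object should be drawn, or None when the merging point is not
    # in front of the windshield. All inputs are 3-D vectors in one frame.
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(merging_point, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    d = target - eye                      # ray from the user's eye to the point
    denom = d.dot(n)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to the windshield
    t = (p0 - eye).dot(n) / denom
    if t <= 0.0 or t >= 1.0:
        return None                       # windshield not between eye and point
    return eye + t * d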
For example, assume that a point MP in the real world as shown in FIG. 2 is defined as the merging point between the user and the merging partner, and that the vehicle 100 driven by the user has reached the vicinity of the merging point MP. Information on the position of the merging point MP is stored in the merging point storage unit 12 of the merging support device 10. In this case, the display control unit 16 uses the head-up display 21 to provide the user with a virtual world as shown in FIG. 3, in which the virtual merging point object 200 exists at the merging point MP. As shown in FIG. 4, the user who is the driver of the vehicle 100 sees the virtual merging point object 200 at the position corresponding to the merging point MP in the real scene seen through the windshield 30 of the vehicle 100.
In FIG. 3 and the subsequent figures, hatching is applied to virtual objects so that the real world and the virtual world can be easily distinguished; the virtual objects actually displayed by the head-up display 21 need not be hatched. Here, as an example of the virtual merging point object, an image simulating a sign pole installed at a public transportation stop (bus stop) is used, but the virtual merging point object may be any image.
As described above, the merging support device 10 according to the first embodiment presents the location of the merging point by presenting to the user a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. Therefore, the user can intuitively recognize the location of the merging point in the real world, and it becomes easier for the user to reach the merging point.
Here, the operation of the merging support device 10 will be described with reference to the flowchart of FIG. 5. When the merging support device 10 is activated and a merging point is determined between the user and the merging partner, the merging point storage unit 12 stores the merging point (step S101).
Subsequently, the position information acquisition unit 14 acquires the position of the vehicle 100 calculated by the positioning device 24, that is, the position of the user (step S102). At this time, the position information acquisition unit 14 calculates the orientation of the vehicle 100 based on the history of the position of the vehicle 100. The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S103).
The display control unit 16 determines whether or not the user's position is near the merging point, based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15 (step S104).
The determination in step S104 may be made simply based on whether the straight-line distance from the user to the merging point is equal to or less than a predetermined threshold (for example, 100 m). However, for example, when the road on which the user is located and the road on which the merging point is located cross at a grade separation, it may take the user a long time to reach the merging point even if the straight-line distance is short. Therefore, in the present embodiment, the route length (travel distance) from the user to the merging point is calculated using the map information, and the determination in step S104 is made based on whether that route length is equal to or less than the threshold.
If the user's position is near the merging point (YES in step S104), the display control unit 16 calculates the position of the merging point as seen by the user, based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15 (step S105). Then, the display control unit 16 checks whether or not the position of the merging point as seen by the user is within the display area of the head-up display 21 (step S106).
If the position of the merging point as seen by the user is within the display area of the head-up display 21 (YES in step S106), the display control unit 16 uses the head-up display 21 to display the virtual merging point object superimposed at the position of the merging point as seen by the user (step S107), and the process returns to step S102.
On the other hand, when the user's position is far from the merging point (NO in step S104), or when the position of the merging point as seen by the user is outside the display area of the head-up display 21 (NO in step S106), the display control unit 16 checks whether the virtual merging point object is displayed on the head-up display 21, erases it if it is displayed (step S108), and the process returns to step S102.
Regarding the determination in step S104, when the road on which the user is located and the road on which the merging point is located cross at a grade separation but there are stairs or a slope allowing one to walk between the two roads, it is also possible for the user to get out of the vehicle 100 and walk to meet the merging partner. Therefore, the determination in step S104 may be made based on the route length through the stairs or slope. When YES is determined in step S104 based on such a determination, it is preferable to notify the user to that effect.
Even when there are stairs or a slope, if using them would make the arrival time at the merging point later than driving the vehicle 100 along the road, it is preferable that the determination in step S104 be made without considering the stairs or slope. Alternatively, the user may be notified that using the stairs or slope would result in a later arrival time at the merging point than driving the vehicle 100 along the road. The overall loop is sketched below.
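One pass of this loop might look like the following sketch; the device object and its methods are an assumed interface bundling the units of FIG. 1, not the patent's API:

def merging_support_loop(device, threshold_m=100.0):
    # One pass of steps S102 to S108 (S101 has stored the merging point).
    user_pos, heading = device.get_position_and_heading()      # S102
    area_map = device.get_map_around(user_pos)                 # S103
    # S104: judge proximity by route length along the road network rather
    # than straight-line distance (grade separations break the latter).
    route_len = area_map.route_length(user_pos, device.merging_point)
    if route_len is not None and route_len <= threshold_m:
        rel = device.relative_position(user_pos, heading)      # S105
        if device.hud.contains(rel):                           # S106
            device.hud.draw(device.object_image, rel)          # S107
            return
    device.hud.erase_if_shown(device.object_image)             # S108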
[Hardware configuration example]
FIGS. 6 and 7 are diagrams each showing an example of the hardware configuration of the merging support device 10. Each function of the components of the merging support device 10 shown in FIG. 1 is realized by, for example, the processing circuit 50 shown in FIG. 6. That is, the merging support device 10 includes a processing circuit 50 for storing a merging point between the user and another person, storing a virtual merging point object that is an image indicating the position of the merging point, acquiring the user's position, and displaying, based on the user's position and the position of the merging point, the virtual merging point object at the position corresponding to the merging point in the real scene seen by the user through a transparent screen. The processing circuit 50 may be dedicated hardware, or may be configured using a processor (also called a central processing unit (CPU), processing device, arithmetic device, microprocessor, microcomputer, or DSP (Digital Signal Processor)) that executes a program stored in a memory.
When the processing circuit 50 is dedicated hardware, the processing circuit 50 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof. The functions of the components of the merging support device 10 may be realized by individual processing circuits, or may be collectively realized by one processing circuit.
FIG. 7 shows an example of the hardware configuration of the merging support device 10 when the processing circuit 50 is configured using a processor 51 that executes programs. In this case, the functions of the components of the merging support device 10 are realized by software or the like (software, firmware, or a combination of software and firmware). The software or the like is described as a program and stored in a memory 52. The processor 51 realizes the functions of the respective units by reading and executing the program stored in the memory 52. That is, the merging support device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: a process of storing a merging point between the user and another person; a process of storing a virtual merging point object that is an image indicating the position of the merging point; a process of acquiring the user's position; and a process of displaying, based on the user's position and the position of the merging point, the virtual merging point object at the position corresponding to the merging point in the real scene seen by the user through a transparent screen. In other words, it can be said that this program causes a computer to execute the procedures and methods of operation of the components of the merging support device 10.
Here, the memory 52 may be, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), an HDD (Hard Disk Drive), a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc) and its drive device, or any storage medium to be used in the future.
The configuration in which the functions of the components of the merging support device 10 are realized by either hardware or software has been described above. However, the configuration is not limited to this; some components of the merging support device 10 may be realized by dedicated hardware while other components are realized by software or the like. For example, the functions of some components can be realized by the processing circuit 50 as dedicated hardware, while the functions of other components can be realized by the processing circuit 50 as the processor 51 reading and executing the program stored in the memory 52.
As described above, the merging support device 10 can realize each of the functions described above by hardware, software, or the like, or a combination thereof.
[Modification example]
The merging point may be determined by any method; for example, an arbitration system for arranging the location of the merging point between the user of the merging support device 10 and the merging partner may be constructed on the Internet. In that case, the merging point storage unit 12 of the merging support device 10 acquires the determined merging point information from the arbitration system through the communication unit 11 and stores it.
The image of the virtual merging point object may be selectable by the user from a plurality of images by operating the operation input device 23. The image of the virtual merging point object may also be an image designated by the merging partner, or may be decided in consultation between the user of the merging support device 10 and the merging partner. For example, an arbitration system for arranging the image of the virtual merging point object between the user of the merging support device 10 and the merging partner may be constructed on the Internet. In that case, the virtual object storage unit 13 of the merging support device 10 acquires the image of the determined virtual merging point object from the arbitration system through the communication unit 11 and stores it.
When the merging partner's wishes are reflected in determining the image of the virtual merging point object, both the user of the merging support device 10 (the driver of the vehicle 100) and the merging partner can recognize what the image of the virtual merging point object looks like. For example, when the vehicle 100 is a taxi, the driver of the vehicle 100 and the merging partner may not know each other; even so, if both the driver and the merging partner recognize the image of the virtual merging point object, they can authenticate each other by mutually confirming the image, which contributes to improved security.
Further, the display control unit 16 may provide the user with a virtual world including not only the virtual merging point object indicating the position of the merging point but also virtual objects indicating additional information for merging support. For example, as shown in FIG. 8, a virtual world may be provided to the user in which, together with the virtual merging point object 200, there exist a virtual object 201 simulating the user having arrived at the merging point (here, the vehicle 100 driven by the user) and a virtual object 202 simulating the merging partner having arrived at the merging point. In this case, the display control unit 16 uses the head-up display 21 to display, as shown in FIG. 9, the virtual object 201 simulating the vehicle 100 on the road in front of the virtual merging point object 200, and the virtual object 202 simulating the merging partner beside the virtual merging point object 200.
Also, for example, as shown in FIG. 10, a virtual world may be provided to the user in which, together with the virtual merging point object 200, there exists a virtual object 203 indicating the stop position (parking space) of the vehicle 100 when it arrives at the merging point. In this case, the display control unit 16 uses the head-up display 21 to display, as shown in FIG. 11, the virtual object 203 indicating the stop position of the vehicle 100 on the road in front of the virtual merging point object 200.
Further, when the vehicle 100 has advanced to a position past the merging point MP as in the virtual world shown in FIG. 12, the position of the merging point MP as seen by the user falls outside the display area of the head-up display 21. In this case, the display control unit 16 may display, as shown in FIG. 13 for example, a virtual object 204 indicating that the merging point MP is close to the vehicle 100 but is not at a position where the virtual merging point object 200 can be displayed. FIG. 13 shows an example in which the virtual object 204 consists of an image of the virtual merging point object 200 rendered in an inconspicuous display mode (size, color, brightness, and the like) and an image of an arrow indicating the direction in which the merging point MP is located; however, the virtual object 204 may be only one of these two images. An animation effect such as blinking the virtual object 204 may also be used.
The operation of the merging support device 10 when displaying the virtual object 204 indicating that the position of the merging point MP as seen by the user is outside the display area of the head-up display 21, as in FIG. 13, is shown in the flowchart of FIG. 14. The flowchart of FIG. 14 is obtained by adding, to the flowchart of FIG. 5, step S109, which is executed when NO is determined in step S106.
That is, in the flowchart of FIG. 14, when the position of the merging point as seen by the user is outside the display area of the head-up display 21 (NO in step S106), the display control unit 16 displays the virtual object 204 indicating that fact, using the head-up display 21 (step S109). Since the other steps are the same as those in FIG. 5, their description is omitted here.
Further, the communication unit 11 may acquire information on the position and planned passage route of the merging partner from the merging partner's mobile terminal, navigation device, or the like, and the display control unit 16 may display virtual objects indicating that information together with the virtual merging point object 200. Specifically, the display control unit 16 may display a virtual object indicating the position of the merging partner at the position corresponding to the position of the merging partner as seen by the user, and may display a virtual object indicating the planned passage route of the merging partner at the position corresponding to that route as seen by the user.
For example, assume that a point MP in the real world as shown in FIG. 15 is defined as the merging point between the user and the merging partner 101, and that the vehicle 100 driven by the user and the merging partner 101 have reached the vicinity of the merging point MP. Information on the merging point MP is stored in the merging point storage unit 12 of the merging support device 10, and the communication unit 11 sequentially acquires information on the position and planned passage route of the merging partner 101 from the mobile terminal carried by the merging partner 101. In this case, the display control unit 16 uses the head-up display 21 to provide the user with a virtual world as shown in FIG. 16, in which there exist the virtual merging point object 200 indicating the merging point MP, a virtual object 205 indicating the position of the merging partner 101, and a virtual object 206 indicating the planned passage route of the merging partner 101. As shown in FIG. 17, the user who is the driver of the vehicle 100 sees, in the real scene through the windshield 30 of the vehicle 100, the virtual merging point object 200, the virtual object 205 indicating the position of the merging partner 101, and the virtual object 206 indicating the planned passage route of the merging partner 101.
FIG. 17 shows a state in which the merging partner 101 is hidden behind a building and cannot be seen. When the merging partner 101 comes out to a position visible to the user, the virtual object 205 appears above the head of the merging partner 101, as shown in FIG. 18. Therefore, the user can quickly find the merging partner 101 even in a crowded place.
The merging partner information acquired by the communication unit 11 and displayed by the display control unit 16 is not limited to the current position and the planned passage route. For example, the communication unit 11 may acquire information on the merging partner's estimated time of arrival at the merging point, and a character string such as "5 minutes before arrival" may be displayed as a virtual object.
<Embodiment 2>
FIG. 19 is a diagram showing the configuration of a merging support device 10 according to the second embodiment of the present invention. The merging support device 10 of the second embodiment is also mounted on the vehicle 100, and the user of the merging support device 10 is the driver of the vehicle 100. However, the merging support device 10 of the second embodiment is connected to an imaging device 27 instead of the head-up display 21 shown in FIG. 1.
The imaging device 27 is a camera that captures the real scene around the vehicle 100 (that is, around the user). Here, it is assumed that the imaging device 27 captures the real scene in front of the vehicle 100.
The display control unit 16 of the merging support device 10 according to the second embodiment displays, on the screen of the display device 22, an image of the real scene captured by the imaging device 27 (hereinafter referred to as a "real scene image"). Further, the display control unit 16 calculates the relative position of the merging point as seen by the user (the driver of the vehicle 100), based on the position and orientation of the vehicle 100 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, and displays the virtual merging point object at the position corresponding to the merging point in the real scene image displayed on the screen of the display device 22. As a result, an image in which a virtual merging point object appears to exist at the merging point is displayed on the screen of the display device 22. That is, the display control unit 16 provides the user, through the screen of the display device 22, with a virtual world in which the virtual merging point object exists at the merging point.
For example, when the display control unit 16 provides the user with a virtual world similar to that of FIG. 3 described above, the display control unit 16 displays an image as shown in FIG. 20 on the screen of the display device 22. The image of FIG. 20 is obtained by superimposing the virtual merging point object 200 at the position corresponding to the merging point MP in the real scene image captured by the imaging device 27.
 As described above, the merging support device 10 according to the second embodiment presents the location of the merging point to the user, the driver of the vehicle 100, by presenting a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. The user can therefore intuitively recognize the location of the merging point in the real world, which makes it easier to reach it.
 The operation of the merging support device 10 will now be described with reference to the flowchart of FIG. 21. When the merging support device 10 is activated and a merging point is determined between the user and the merging partner, the merging point storage unit 12 stores that merging point (step S201).
 Subsequently, the display control unit 16 displays the actual scene image around the vehicle 100 photographed by the photographing device 27 on the screen of the display device 22 (step S202). The position information acquisition unit 14 then acquires the position of the vehicle 100 calculated by the positioning device 24, that is, the position of the user (step S203). At this time, the position information acquisition unit 14 also calculates the orientation of the vehicle 100 from the history of its position. The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S204).
 Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S205). The determination in step S205 may be the same as that in step S104 of FIG. 5.
 If the user's position is near the merging point (YES in step S205), the display control unit 16 calculates the position of the merging point in the actual scene as seen from the user, based on the position and orientation of the vehicle 100, the position of the merging point, and the map information (step S206).
 It then checks whether the position of the merging point as seen from the user is within the range of the actual scene image photographed by the photographing device 27 (step S207). If it is (YES in step S207), the display control unit 16 superimposes the virtual merging point object on the actual scene image displayed on the screen of the display device 22, at the position of the merging point as seen from the user (step S208), and the process returns to step S202.
 On the other hand, if the user's position is far from the merging point (NO in step S205), or if the position of the merging point as seen from the user is outside the range of the actual scene image (NO in step S207), the display control unit 16 checks whether the virtual merging point object is being displayed on the screen of the display device 22, erases it if so (step S209), and the process returns to step S202.
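 Steps S202 to S209 form a per-frame loop. The sketch below restates that loop in Python; every callable parameter is a hypothetical stand-in for the corresponding device (photographing device 27, positioning device 24, map information storage device 26, display device 22), and the helper logic is assumed rather than specified by the text.

    def merging_support_loop(merging_point, capture_frame, get_position_and_heading,
                             get_map_around, is_near_merging_point, project_to_screen,
                             show_frame, draw_object, clear_object):
        # One iteration per captured frame, mirroring steps S202-S209 of FIG. 21.
        while True:
            frame = capture_frame()                                  # S202
            show_frame(frame)
            position, heading = get_position_and_heading()           # S203
            map_info = get_map_around(position)                      # S204
            if not is_near_merging_point(position, merging_point, map_info):  # S205: NO
                clear_object()                                       # S209
                continue
            screen_pos = project_to_screen(position, heading, merging_point)  # S206
            if screen_pos is None:                                   # S207: NO (out of frame)
                clear_object()                                       # S209
                continue
            draw_object(screen_pos)                                  # S208: overlay the object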
 [Modification example]
 The various modifications described in the first embodiment are also applicable to the merging support device 10 of the second embodiment.
 In the flowchart of FIG. 21, the actual scene image alone is displayed on the screen of the display device 22 first, and the virtual merging point object is then superimposed on it. Alternatively, the superimposition (compositing) of the actual scene image and the virtual merging point object may be performed first, and the actual scene image into which the virtual merging point object has been composited may then be displayed on the display device 22.
 When the display device 22 has a superimpose function, the display device 22 may display on its screen the actual scene image acquired directly from the photographing device 27 (without passing through the display control unit 16), and the display control unit 16 may superimpose the virtual merging point object on the actual scene image displayed on the screen of the display device 22.
 In the second embodiment, an example was shown in which the photographing device 27 photographs the actual scene in front of the vehicle 100, but the photographing device 27 may instead photograph the actual scene to the side, to the rear, or to the rear side of the vehicle 100. For example, when the photographing device 27 photographs the actual scene behind or to the rear side of the vehicle 100, the photographing device 27 and the display device 22 function as an electronic mirror that electronically realizes the rear-view mirror of the vehicle 100. In other words, the merging support device 10 of the second embodiment is applicable to an electronic mirror system of the vehicle 100.
 A plurality of photographing devices 27, each with a different photographing direction, may also be connected to the merging support device 10, and the display control unit 16 may create a surround image by combining the plural actual scene images photographed by them and display it on the screen of the display device 22.
 Two or more of the following may be combined: the technique using the head-up display 21 shown in the first embodiment, the technique using the actual scene image in front of the vehicle 100 shown in the second embodiment, the electronic-mirror technique described above, and the surround-image technique. Combining them widens the area in which the virtual merging point object can be displayed, which raises the probability that it is displayed. When a plurality of head-up displays 21 or display devices 22 are used, the virtual merging point object may appear on only some of the screens. In that case, the display control unit 16 may display, on a screen where the virtual merging point object is not shown, a mark notifying the user of which screen it is displayed on.
 In the second embodiment, the vehicle 100 may be remotely operated by a driver (the user of the merging support device 10) who is away from the vehicle 100. In that case, the display device 22 is installed near the user at the remote location. When the vehicle 100 is remotely operated, the position of the vehicle 100 acquired by the position information acquisition unit 14 is not actually the user's position, but the position information acquisition unit 14 treats the position of the vehicle 100 calculated by the positioning device 24 as the user's position. The display control unit 16 then displays the virtual merging point object at the position corresponding to the merging point in the image of the actual scene around the vehicle 100.
 [Hardware configuration]
 The merging support device 10 of the second embodiment can also be realized by the same hardware configurations as those of FIGS. 6 and 7. For example, when the merging support device 10 is realized by the processing circuit 50 shown in FIG. 6, the merging support device 10 includes the processing circuit 50 for storing the merging point between the user and another person, storing the virtual merging point object that is an image showing the position of the merging point, acquiring the user's position, and displaying, based on the user's position and the position of the merging point, the virtual merging point object at the position corresponding to the merging point in the image of the actual scene around the user shown on the screen.
 When the merging support device 10 is composed of the processor 51 and the memory 52 shown in FIG. 7, the memory 52 stores a program that, when executed by the processor 51, results in the execution of: a process of storing the merging point between the user and another person; a process of storing the virtual merging point object that is an image showing the position of the merging point; a process of acquiring the user's position; and a process of displaying, based on the user's position and the position of the merging point, the virtual merging point object at the position corresponding to the merging point in the image of the actual scene around the user shown on the screen.
 <Embodiment 3>
 FIG. 22 is a diagram showing the configuration of the merging support device 10 according to the third embodiment of the present invention. The merging support device 10 of the third embodiment is mounted on a mobile terminal 300 such as a smartphone. That is, the merging support device 10 is realized by the processor of the mobile terminal 300 executing a merging support application program (hereinafter "merging support app") stored in its memory. Except for this, the configuration of the merging support device 10 of FIG. 22 is the same as that of FIG. 19.
 The user of the merging support device 10 is the user of the mobile terminal 300. Unless otherwise noted, "user" in the following description means the user of the mobile terminal 300. The user is assumed here to be a pedestrian, but the user can also bring the mobile terminal 300 into a vehicle. The merging partner may be a pedestrian or a vehicle, but is assumed here to be a vehicle. The merging partner's vehicle may also be a driverless vehicle that travels by remote operation or automatic driving (for example, an unmanned taxi).
 The photographing device 27 of the mobile terminal 300 photographs the actual scene around the user, but its photographing direction depends on how the user holds the mobile terminal 300 and therefore does not necessarily match the user's direction of travel. For this reason, the display control unit 16 of the merging support device 10 according to the third embodiment needs to recognize the orientation of the mobile terminal 300 in order to recognize the range of the actual scene image photographed by the photographing device 27. Any method may be used to recognize the orientation of the mobile terminal 300. For example, the display control unit 16 may calculate the orientation of the mobile terminal 300 using an azimuth sensor (not shown) of the mobile terminal 300, or may determine it by matching the actual scene image photographed by the photographing device 27 against the map information acquired by the map information acquisition unit 15.
 The display control unit 16 displays the actual scene image photographed by the photographing device 27 on the screen of the display device 22 (also called the "screen of the mobile terminal 300"). Further, based on the recognized orientation of the mobile terminal 300, the position of the mobile terminal 300 acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 calculates the relative position of the merging point as seen from the user and displays the virtual merging point object at the position corresponding to the merging point in the actual scene image shown on the screen of the display device 22. As a result, the screen of the display device 22 shows an image in which the virtual merging point object appears to exist at the merging point. In other words, through the screen of the display device 22, the display control unit 16 provides the user with a virtual world in which the virtual merging point object exists at the merging point.
 For example, assume that a point MP in the real world as shown in FIG. 23 has been set as the merging point between the user 301 and the merging partner, and that the user 301 has reached the vicinity of the merging point MP. The merging point storage unit 12 of the merging support device 10 stores the position of the merging point MP. In this case, the display control unit 16 uses the display device 22 to provide the user with a virtual world, as in FIG. 24, in which the virtual merging point object 400 exists at the merging point MP. As shown in FIG. 25, the screen of the mobile terminal 300 of the user 301 displays an image in which the virtual merging point object 400 is superimposed at the position corresponding to the merging point MP in the actual scene image photographed by the photographing device 27.
 As described above, the merging support device 10 according to the third embodiment presents the location of the merging point to the user of the mobile terminal 300 by presenting a virtual world in which the virtual merging point object appears to exist at the merging point in the real world. The user can therefore intuitively recognize the location of the merging point in the real world, which makes it easier to reach it.
 The operation of the mobile terminal 300 will now be described with reference to the flowchart of FIG. 26. First, when the user of the mobile terminal 300 operates the operation input device 23 to launch an application for determining the merging point with the merging partner (merging point determination app), the merging point determination app communicates directly or indirectly with the merging partner via the communication device 25 and negotiates a merging point with the merging partner (step S301). The merging point determined by the merging point determination app is stored in the merging point storage unit 12 (step S302).
 After that, the user activates the merging support device 10 by launching the merging support app at any timing (for example, when the user has come reasonably close to the merging point) (step S303).
 When the merging support device 10 is activated, the display control unit 16 displays the actual scene image around the user photographed by the photographing device 27 on the screen of the display device 22 (step S304). The position information acquisition unit 14 then acquires the position of the mobile terminal 300 calculated by the positioning device 24, that is, the position of the user (step S305). The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S306).
 Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S307). The determination in step S307 may be the same as that in step S104 of FIG. 5.
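 The text leaves the "near the merging point" criterion of step S307 (and S104) open. One plausible criterion is a straight-line distance threshold; a minimal sketch follows, with the 200 m threshold as an assumption rather than a value from the embodiment.

    import math

    def is_near_merging_point(user_lat, user_lon, mp_lat, mp_lon, threshold_m=200.0):
        # Great-circle (haversine) distance compared against a threshold;
        # the 200 m default is an illustrative assumption.
        r = 6371000.0  # mean Earth radius in metres
        phi1, phi2 = math.radians(user_lat), math.radians(mp_lat)
        dphi = math.radians(mp_lat - user_lat)
        dlam = math.radians(mp_lon - user_lon)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2.0 * r * math.asin(math.sqrt(a)) <= threshold_m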
 If the user's position is near the merging point (YES in step S307), the display control unit 16 calculates the position of the merging point in the actual scene as seen from the user, based on the position and orientation of the mobile terminal 300, the position of the merging point, and the map information (step S308).
 The display control unit 16 then checks whether the position of the merging point as seen from the user is within the range of the actual scene image photographed by the photographing device 27 (step S309). If it is (YES in step S309), the display control unit 16 superimposes the virtual merging point object on the actual scene image displayed on the screen of the display device 22, at the position of the merging point as seen from the user (step S310), and the process returns to step S304.
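 Step S309 can be read as an angle test: whether the bearing from the user to the merging point falls within the horizontal field of view around the camera azimuth. A sketch of that test, with the wraparound at the 0/360 boundary handled explicitly; the 60-degree field of view is an assumed value.

    def in_camera_range(bearing_to_point_deg, camera_azimuth_deg, horizontal_fov_deg=60.0):
        # Normalize the difference to [-180, 180) so the test behaves
        # correctly across the 0/360-degree boundary.
        diff = (bearing_to_point_deg - camera_azimuth_deg + 540.0) % 360.0 - 180.0
        return abs(diff) <= horizontal_fov_deg / 2.0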
 On the other hand, if the user's position is far from the merging point (NO in step S307), or if the position of the merging point as seen from the user is outside the range of the actual scene image (NO in step S309), the display control unit 16 checks whether the virtual merging point object is being displayed on the screen of the display device 22, erases it if so (step S311), and the process returns to step S304.
 [Modification example]
 When the position of the merging point is outside the range of the actual scene image photographed by the photographing device 27 (that is, in the case of NO in step S309 of FIG. 26), a virtual object indicating that the merging point is outside the display area may be displayed on the screen of the mobile terminal 300, as in the modification described with reference to FIGS. 12 to 14 in the first embodiment.
 In the flowchart of FIG. 26, steps S304 to S311 are executed repeatedly, so the screen displayed on the display device 22 is updated continuously. Consequently, even if the virtual merging point object 400 is displayed on the screen of the display device 22, as soon as the user changes the direction in which the mobile terminal 300 is held, the merging point falls outside the area of the actual scene image and the virtual merging point object 400 disappears. The user may therefore be allowed to pause the image displayed on the screen of the display device 22. This makes it easier for the user to compare the image of the virtual world displayed on the display device 22 with the real-world scenery.
 Further, when the user rotates while photographing actual scene images with the mobile terminal 300 and acquires a plurality of actual scene images with different photographing directions, the display control unit 16 may combine those images into a panoramic image and display the virtual merging point object within the panoramic image.
 In the flowchart of FIG. 26, the merging point determination app and the merging support app were treated as separate apps, but the function of the merging point determination app may be incorporated into the merging support app. An example of the operation of the mobile terminal 300 when the merging support app includes the function of the merging point determination app will be described with reference to the flowchart of FIG. 27.
 When the user of the mobile terminal 300 operates the operation input device 23 to launch the merging support app (step S401), the merging support app communicates directly or indirectly with the merging partner via the communication device 25 and negotiates a merging point with the merging partner (step S402). The determined merging point is stored in the merging point storage unit 12 (step S403).
 After that, the position information acquisition unit 14 acquires the position of the mobile terminal 300 calculated by the positioning device 24, that is, the position of the user (step S404). The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S405). Based on the user's position acquired by the position information acquisition unit 14, the position of the merging point stored in the merging point storage unit 12, and the map information acquired by the map information acquisition unit 15, the display control unit 16 determines whether the user's position is near the merging point (step S406). If the user's position is near the merging point (YES in step S406), the user is notified of this using, for example, the vibration function of the mobile terminal 300 (step S407).
 The notified user can operate the operation input device 23 to instruct the merging support device 10 to display the virtual merging point object. If there is no display instruction for the virtual merging point object within a certain time (NO in step S408), the process returns to step S404.
 When the merging support device 10 receives an instruction from the user to display the virtual merging point object (YES in step S408), the display control unit 16 displays the actual scene image around the user photographed by the photographing device 27 on the screen of the display device 22 (step S409). The position information acquisition unit 14 then acquires the position of the mobile terminal 300 (the user's position) calculated by the positioning device 24 (step S410). The map information acquisition unit 15 acquires map information around the user's position from the map information storage device 26 (step S411).
 The display control unit 16 determines whether the user's position is near the merging point in the same manner as in step S406, and if it is (YES in step S412), calculates the position of the merging point in the actual scene as seen from the user, based on the position and orientation of the mobile terminal 300, the position of the merging point, and the map information (step S413).
 It then checks whether the position of the merging point as seen from the user is within the range of the actual scene image photographed by the photographing device 27 (step S414). If it is (YES in step S414), the display control unit 16 superimposes the virtual merging point object on the actual scene image displayed on the screen of the display device 22, at the position of the merging point as seen from the user (step S415), and the process returns to step S409.
 If the user's position has moved away from the merging point (NO in step S412), the display control unit 16 checks whether the virtual merging point object is being displayed on the screen of the display device 22, erases it if so (step S416), and the process returns to step S404. If the position of the merging point as seen from the user is outside the range of the actual scene image (NO in step S414), the display control unit 16 performs the same processing as in step S416 (step S417), and the process returns to step S409.
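 Taken together, FIG. 27 describes a two-phase flow: a background proximity watch that only notifies, followed by an on-demand overlay view. A condensed Python sketch under that reading; all callables are hypothetical stand-ins for the devices of FIG. 22, and the map lookups are omitted for brevity.

    def integrated_support_flow(merging_point, get_user_position, is_near_merging_point,
                                notify_user, display_requested, run_overlay_view):
        # Background proximity watch (S404-S407), then an on-demand
        # overlay view (S408 onward), condensed from FIG. 27.
        while True:
            position = get_user_position()                          # S404 (S405 omitted)
            if not is_near_merging_point(position, merging_point):  # S406: NO
                continue
            notify_user()                                           # S407: e.g. vibration
            if not display_requested():                             # S408: NO within the limit
                continue
            run_overlay_view(merging_point)                         # S409-S417: as in FIG. 26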
 In the third embodiment as well, as in the first embodiment, the image used as the virtual merging point object may be changeable, for example by letting the user or the merging partner select it, or by using an arbitration system for negotiating the image of the virtual merging point object.
 The display control unit 16 may also provide the user with a virtual world that includes not only the virtual merging point object indicating the position of the merging point but also virtual objects presenting additional information for merging support. For example, as in FIG. 28, the user may be provided with a virtual world in which, together with the virtual merging point object 400, there are a virtual object 401 simulating the user 301 having arrived at the merging point and a virtual object 402 simulating the merging partner (here, a vehicle) having arrived at the merging point. In this case, as shown in FIG. 29, the display control unit 16 displays on the screen of the mobile terminal 300 the virtual object 401 simulating the user 301 beside the virtual merging point object 400, and displays the virtual object 402 simulating the merging partner on the road in front of the virtual merging point object 400.
 The communication unit 11 may also acquire information on the merging partner's position and planned passage route from the merging partner's mobile terminal, navigation device, or the like, and the display control unit 16 may display virtual objects presenting that information together with the virtual merging point object 400. Specifically, the display control unit 16 may display a virtual object indicating the merging partner's position at the position corresponding to the merging partner's position as seen from the user, and may display a virtual object indicating the merging partner's planned passage route at the position corresponding to that route as seen from the user.
 For example, assume that a point MP in the real world as shown in FIG. 30 has been set as the merging point between the user 301 and the merging partner's vehicle 302, and that the user 301 and the vehicle 302 have reached the vicinity of the merging point MP. The merging point storage unit 12 of the merging support device 10 stores the information of the merging point MP, and the communication unit 11 acquires information on the position and planned passage route of the vehicle 302 from the navigation device of the vehicle 302. In this case, the display control unit 16 uses the screen of the mobile terminal 300 to provide the user 301 with a virtual world, as in FIG. 30, in which there are the virtual merging point object 400 indicating the merging point MP, a virtual object 403 indicating the position of the merging partner's vehicle 302, and a virtual object 404 indicating the planned passage route of the vehicle 302. As shown in FIG. 31, the screen of the mobile terminal 300 of the user 301 displays an image in which the virtual merging point object 400, the virtual object 403 indicating the position of the vehicle 302, and the virtual object 404 indicating the planned passage route of the vehicle 302 are superimposed on the actual scene image.
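 These additional virtual objects can reuse whatever projection places the merging point object itself. A sketch of that reuse, assuming a hypothetical project_to_screen helper that returns a pixel position, or None when the point is out of frame.

    def draw_partner_objects(partner_position, planned_route, project_to_screen, draw):
        # Virtual object 403: the merging partner's current position.
        pixel = project_to_screen(partner_position)
        if pixel is not None:
            draw("partner_marker", pixel)
        # Virtual object 404: the planned passage route, drawn through the
        # waypoints that project into the visible frame.
        visible = [p for p in (project_to_screen(w) for w in planned_route) if p is not None]
        if len(visible) >= 2:
            draw("route_polyline", visible)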
 The information about the merging partner that the communication unit 11 acquires and the display control unit 16 displays is not limited to the current position and the planned passage route. For example, the communication unit 11 may acquire the estimated time of arrival of the merging partner at the merging point, and a character string such as "5 minutes before arrival" may be displayed as a virtual object.
 The mobile terminal 300 may be configured to be able to switch between displaying the actual scene image including the virtual merging point object as in FIG. 25 (display of the virtual world) and displaying a map showing the positions of the user and the merging point as in FIG. 32. In that case, the display areas of the map and of the actual scene image on the screen of the mobile terminal 300 may be changed according to the attitude of the mobile terminal 300.
 For example, when the mobile terminal 300 is laid flat (defining this state as an elevation angle of 0 degrees), a map as in FIG. 32 may be displayed, and when the user holds the mobile terminal 300 upright to photograph the actual scene (defining this state as an elevation angle of 90 degrees), an actual scene image as in FIG. 25 may be displayed. Alternatively, when the elevation angle of the mobile terminal 300 is 90 degrees, both the actual scene image and the map may be displayed as in FIG. 33. When the user pauses the display of the actual scene image, both the actual scene image and the map may be displayed as in FIG. 33 regardless of the elevation angle of the mobile terminal 300.
 Further, when the mobile terminal 300 is held in landscape orientation, an actual scene image as in FIG. 25 may be displayed, and when it is held in portrait orientation, both the actual scene image and the map may be displayed as in FIG. 34. In addition, while both the actual scene image and the map are displayed on the screen of the mobile terminal 300, the user may be allowed to change the sizes (area ratio) of the display areas of the actual scene image and the map, for example by a swipe operation on the screen of the mobile terminal 300.
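 One way to combine these posture-dependent behaviours is a single mapping from terminal attitude to layout. A sketch of one such combination; the 45-degree threshold and the rule priorities are assumptions, since the text presents the behaviours as independent options.

    def choose_layout(elevation_deg, portrait, paused):
        # Pause and portrait orientation take priority and show both views.
        if paused or portrait:
            return "scene_and_map"   # FIG. 33 / FIG. 34
        if elevation_deg < 45.0:
            return "map_only"        # held nearly flat (elevation near 0): FIG. 32
        return "scene_only"          # held upright (elevation near 90): FIG. 25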
 <Embodiment 4>
 The fourth embodiment shows an example of a merging system that supports merging between a vehicle 100 equipped with the merging support device 10 according to the first embodiment (or the second embodiment) and the user of a mobile terminal 300 equipped with the merging support device 10 according to the third embodiment.
 FIG. 35 is a diagram showing the configuration of the merging support system according to the fourth embodiment. The merging support system includes a vehicle 100 and a mobile terminal 300, each equipped with the merging support device 10, and a merging support server 500. The vehicle 100 and the mobile terminal 300 can communicate with the merging support server 500 through a communication network such as the Internet. Although FIG. 35 shows one vehicle 100 and one mobile terminal 300, there are a plurality of each.
 The merging support server 500 provides users of mobile terminals 300 with a pick-up and drop-off service using vehicles 100, and a plurality of dispatchable vehicles 100 are registered in advance with the merging support server 500. Hereinafter, a user of a mobile terminal 300 who intends to use this service is referred to as a "service user".
 The operation of the merging support server 500 will now be described with reference to the flowchart of FIG. 36. The merging support server 500 receives a vehicle dispatch request from the service user's mobile terminal 300 (step S501). The dispatch request from the mobile terminal 300 includes at least the place and time at which the service user wishes the vehicle to be dispatched, but may also include other information such as the number of passengers and the desired vehicle type. Having received the dispatch request, the merging support server 500 sounds out, among the plurality of registered vehicles 100, those close to the service user's request about the dispatch (step S502).
 After that, the merging support server 500 receives, from one or more of the vehicles 100 it sounded out, a response indicating that dispatch is possible (step S503). It then transmits the information of the dispatchable vehicles 100 to the service user's mobile terminal 300 (step S504). When there are a plurality of dispatchable vehicles 100, the information of all of them is transmitted to the service user's mobile terminal 300, and the service user selects one of them.
 The merging support server 500 receives, from the service user's mobile terminal 300, the information of the vehicle 100 selected by the service user and the service user's profile (step S505). A pick-up contract is thereby formed between the service user and the selected vehicle 100.
 When the contract is formed, the merging support server 500 notifies the vehicle 100 selected by the service user of the formation of the contract and transmits the image of the virtual merging point object to it (step S506). The merging support server 500 also notifies the service user's mobile terminal 300 of the formation of the contract and transmits the image of the virtual merging point object to it (step S507).
 This completes the operation of the merging support server 500. Thereafter, the merging support device 10 of the vehicle 100 performs the operation described in the first embodiment and the merging support device 10 of the mobile terminal 300 performs the operation described in the third embodiment, thereby supporting the merging of the vehicle 100 and the service user.
 It is preferable that the same image is transmitted as the image of the virtual merging point object to the vehicle 100 and the mobile terminal 300 between which the contract has been formed. Alternatively, the merging support server 500 may operate as an arbitration system for negotiating the image to be used as the virtual merging point object between the service user and the driver of the dispatched vehicle 100. When the same virtual merging point object is used by the vehicle 100 and the mobile terminal 300, the two parties can confirm each other's image of the virtual merging point object, for example by the service user presenting the mobile terminal 300 displaying the virtual merging point object to the driver of the vehicle 100 upon merging. Since this allows the parties to authenticate each other, it can contribute to improved security.
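 Steps S501 to S507 amount to a small request/response protocol, and the shared image enables the informal check just described. A server-side Python sketch under assumed message shapes; the helper callables and field names are illustrative, and only the step order follows the text.

    def handle_dispatch_request(request, registered_vehicles, send, receive_selection,
                                meeting_point_image):
        # S502: sound out registered vehicles close to the user's request.
        candidates = [v for v in registered_vehicles if v.matches(request)]
        # S503: keep those that reply that dispatch is possible.
        available = [v for v in candidates if v.accepts(request)]
        send(request.user, {"available": available})                    # S504
        chosen_vehicle, user_profile = receive_selection(request.user)  # S505
        # S506/S507: notify both parties and distribute the same object image.
        send(chosen_vehicle, {"contract": user_profile, "object_image": meeting_point_image})
        send(request.user, {"contract": chosen_vehicle, "object_image": meeting_point_image})

 Sending the identical object image to both parties in S506 and S507 is what later allows the terminal-to-driver comparison described above.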
 [Modification example]
 In the first and third embodiments, the determination of whether the vehicle 100 has approached the merging point (for example, the determination of step S104 in FIG. 5) or whether the user of the mobile terminal 300 has approached the merging point (for example, the determination of step S307 in FIG. 26) was performed by the merging support device 10 of the vehicle 100 or of the mobile terminal 300. However, the merging support server 500 may sequentially acquire the positions of the vehicle 100 and the mobile terminal 300, make those determinations itself, and notify the vehicle 100 or the mobile terminal 300 when it approaches the merging point.
 <Embodiment 5>
 The fifth embodiment shows an example of a pick-up and drop-off system in which vehicles 100 equipped with the merging support device 10 transport an unspecified number of passengers carrying mobile terminals 300 equipped with the merging support device 10.
 In the pick-up and drop-off system of the fifth embodiment, one or more vehicles 100 equipped with the merging support device 10 travel along a specific patrol route, like a loop bus, and provide a service in which unspecified passengers carrying mobile terminals 300 board and alight at a plurality of boarding/alighting points along the patrol route. Hereinafter, a vehicle 100 providing this service is referred to as a "patrol vehicle", and a user of a mobile terminal 300 who intends to use the service is referred to as a "service user".
 The configuration of the pick-up and drop-off system of the fifth embodiment may be the same as that of FIG. 35. Information on the patrol route and on each boarding/alighting point is distributed from the merging support server 500 to the patrol vehicles 100 and the service users' mobile terminals 300. The merging point storage unit 12 of the merging support device 10 of each patrol vehicle 100 stores the plural boarding/alighting points as merging points, and likewise the merging point storage unit 12 of the merging support device 10 of each service user's mobile terminal 300 stores the plural boarding/alighting points as merging points. The merging support devices 10 of the patrol vehicles 100 and of the mobile terminals 300 thus provide the drivers of the patrol vehicles 100 and the service users with a virtual world in which virtual merging point objects exist at the positions corresponding to the boarding/alighting points. Hereinafter, a virtual merging point object indicating the position of a boarding/alighting point is referred to as a "virtual boarding/alighting point object".
 The drivers of the patrol vehicles 100 and the service users can recognize the positions of the boarding/alighting points by checking the positions of the virtual boarding/alighting point objects in the virtual world provided by the merging support device 10. The system therefore does not need signposts actually installed along the patrol route, and can flexibly change the locations of the boarding/alighting points and the patrol route.
 For example, the merging support server 500 may collect traffic information, weather information, distribution information of service users, and the like, and provide a highly convenient pick-up and drop-off service by, for example, changing the patrol route to avoid congested sections, moving boarding/alighting points to locations sheltered from wind and rain in bad weather, or adding areas with many service users to the patrol route. The merging support server 500 may also change the image of the virtual boarding/alighting point object as needed.
 The merging support server 500 may also accept requests from service users for places they would like to have as boarding/alighting points, and determine the boarding/alighting points based on those requests. For example, a place that a plurality of service users can reach within 10 minutes could be set as a boarding/alighting point. Further, the merging support server 500 may temporarily add boarding/alighting points in response to requests from service users who have difficulty walking, such as elderly or injured people.
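 The "reachable within 10 minutes" idea maps naturally onto a filter over candidate points. A sketch, assuming straight-line distances and a nominal walking speed (80 m/min) in place of route-based travel times; both values are illustrative assumptions.

    import math

    def reachable_boarding_points(candidates, users, max_minutes=10.0, walk_m_per_min=80.0):
        # Keep candidate points that every requesting user can reach in time,
        # using straight-line distance (equirectangular approximation).
        def distance_m(a, b):
            mid_lat = math.radians((a[0] + b[0]) / 2.0)
            dx = math.radians(b[1] - a[1]) * math.cos(mid_lat) * 6371000.0
            dy = math.radians(b[0] - a[0]) * 6371000.0
            return math.hypot(dx, dy)
        limit_m = max_minutes * walk_m_per_min
        return [c for c in candidates if all(distance_m(u, c) <= limit_m for u in users)]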
 Change information for the patrol route, the boarding/alighting points, and the images of the virtual boarding/alighting point objects is distributed from the merging support server 500 to the patrol vehicles 100 and the mobile terminals 300.
 Within the scope of the invention, the embodiments may be freely combined, and each embodiment may be modified or omitted as appropriate.
 Although the present invention has been described in detail, the above description is in all aspects illustrative, and the invention is not limited to it. It is understood that innumerable variations not illustrated can be conceived without departing from the scope of the invention.
 10 merging support device, 11 communication unit, 12 merging point storage unit, 13 virtual object storage unit, 14 position information acquisition unit, 15 map information acquisition unit, 16 display control unit, 21 head-up display, 22 display device, 23 operation input device, 24 positioning device, 25 communication device, 26 map information storage device, 27 photographing device, 30 windshield, MP merging point, 100 user's vehicle, 101 merging partner, 200, 400 virtual merging point object, 201-206, 401-404 virtual objects, 300 mobile terminal, 301 user of mobile terminal, 302 merging partner's vehicle, 500 merging support server.

Claims (20)

  1.  A merging support device comprising:
     a merging point storage unit that stores a merging point between a user and another person;
     a virtual object storage unit that stores a virtual merging point object that is an image showing the position of the merging point;
     a position information acquisition unit that acquires the position of the user; and
     a display control unit that displays, based on the position of the user and the position of the merging point, the virtual merging point object at a position corresponding to the merging point in an actual scene seen by the user through a transparent screen or in an image of the actual scene around the user displayed on a screen.
  2.  The merging support device according to claim 1, wherein
     the user is a driver of a vehicle, and
     the display control unit displays the virtual merging point object at a position corresponding to the merging point in an actual scene seen by the driver through a transparent screen or in an image of the actual scene around the vehicle displayed on a screen of a display device mounted on the vehicle.
  3.  The merging support device according to claim 1, wherein
     the merging support device is mounted on a mobile terminal, and
     the display control unit displays the virtual merging point object at a position corresponding to the merging point in an image of the actual scene around the user displayed on a screen of the mobile terminal.
  4.  The merging support device according to claim 3, wherein the other person is a driverless vehicle that travels by remote operation or automatic driving.
  5.  The merging support device according to claim 1, wherein the merging point storage unit acquires the information of the merging point from an arbitration system that negotiates the location of the merging point between the user and the other person.
  6.  The merging support device according to claim 1, wherein the virtual object storage unit acquires the image to be used as the virtual merging point object from an arbitration system that negotiates the image to be used as the virtual merging point object between the user and the other person.
  7.  The merging support device according to claim 1, wherein, when the merging support device is used by both the user and the other person, a common virtual merging point object is used by the merging support device of the user and the merging support device of the other person.
  8.  The merging support device according to claim 1, wherein the virtual merging point object is an image simulating a signpost installed at a public transportation stop.
  9.  The merging support device according to claim 1, wherein the display control unit displays, together with the virtual merging point object, an image simulating the user or the other person having arrived at the merging point.
  10.  The merging support device according to claim 2, wherein the display control unit displays, together with the virtual merging point object, an image showing a stop position for when the vehicle arrives at the merging point.
  11.  The merging support device according to claim 1, further comprising a communication unit that acquires information on the position of the other person by communication, wherein the display control unit further displays an image indicating the position of the other person at a position corresponding to the position of the other person in the actual scene or the image of the actual scene.
  12.  The merging support device according to claim 1, further comprising a communication unit that acquires, by communication, information on the planned passage route along which the other person will arrive at the merging point, wherein the display control unit further displays an image indicating the planned passage route of the other person at a position corresponding to the planned passage route of the other person in the actual scene or the image of the actual scene.
  13.  When the position of the merging point is outside the range of the real scene visible to the user through the transparent screen, or outside the range of the image of the real scene around the user, an image indicating the direction in which the merging point lies is displayed.
    The merging support device according to claim 1.
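
The direction indicator of claim 13 reduces to a bearing test against the camera's field of view. A minimal sketch under the same equirectangular assumption as above; the 'left'/'right' return values and the default field of view are illustrative choices:

    import math

    def direction_hint(user_lat, user_lon, heading_deg,
                       point_lat, point_lon, hfov_deg=60.0):
        # Returns None when the merging point falls inside the camera's field
        # of view, otherwise 'left' or 'right' for an edge-of-screen arrow.
        lat0 = math.radians(user_lat)
        east = (point_lon - user_lon) * 111_320.0 * math.cos(lat0)
        north = (point_lat - user_lat) * 110_540.0

        bearing = math.degrees(math.atan2(east, north))             # 0 = north, 90 = east
        relative = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(relative) <= hfov_deg / 2.0:
            return None  # visible: draw the virtual merging point object itself
        return 'left' if relative < 0 else 'right'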
  14.  The vehicle travels a cyclic route that passes through a plurality of predetermined merging points, and lets unspecified other persons board and alight at the merging points.
    The merging support device according to claim 2.
  15.  The cyclic route and the plurality of merging points are changed based on one or more of distribution information on the other persons using the vehicle, traffic information, and weather information.
    The merging support device according to claim 14.
  16.  The mobile terminal is capable of displaying a map showing the position of the user and the position of the merging point, and of displaying the image of the real scene on which the virtual merging point object is superimposed, and it changes the area of the screen in which the map is displayed and the area in which the image of the real scene is displayed according to the attitude of the mobile terminal.
    The merging support device according to claim 3.
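
Claim 16 ties the map/camera split to the terminal's attitude without fixing thresholds. One plausible reading, with the pitch thresholds and the linear blend zone chosen purely for illustration:

    def split_screen(pitch_deg, screen_h):
        # Allocate screen rows between the AR (camera) view and the map view
        # from the device pitch: 0 deg = lying flat (map only), 90 deg = held
        # upright (camera only), with a linear blend between 30 and 60 deg.
        if pitch_deg <= 30.0:
            ar_rows = 0
        elif pitch_deg >= 60.0:
            ar_rows = screen_h
        else:
            ar_rows = int(screen_h * (pitch_deg - 30.0) / 30.0)
        return {'ar': (0, ar_rows), 'map': (ar_rows, screen_h)}

For example, split_screen(45.0, 1920) gives the camera image the top 960 rows of a 1920-row screen and the map the rest.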
  17.  The vehicle is a vehicle remotely operated by the driver from a location away from the vehicle,
      the position information acquisition unit acquires the position of the vehicle as the position of the user,
      and the display control unit displays the virtual merging point object at a position corresponding to the merging point in the image of the real scene around the vehicle displayed on the screen.
    The merging support device according to claim 2.
  18.  The image of the real scene including the virtual merging point object can be paused.
    The merging support device according to claim 1.
  19.  The image of the real scene is a surround image obtained by combining a plurality of images captured in different directions.
    The merging support device according to claim 1.
  20.  A merging point storage unit of a merging support device stores a merging point between a user and another person,
      a virtual object storage unit of the merging support device stores a virtual merging point object, which is an image showing the position of the merging point,
      a position information acquisition unit of the merging support device acquires the position of the user,
      and a display control unit of the merging support device displays, based on the position of the user and the position of the merging point, the virtual merging point object at a position corresponding to the merging point in a real scene visible to the user through a transparent screen or in an image of the real scene around the user displayed on a screen.
    A merging support method.
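
Putting the method steps of claim 20 together, a minimal main loop might look as follows, reusing the project_to_screen() and direction_hint() sketches above. The storage, sensors, and display collaborators are assumed interfaces standing in for the claimed units, not anything defined by the patent:

    import time

    def merging_support_loop(storage, sensors, display):
        point = storage.merging_point()         # merging point storage unit
        obj = storage.virtual_object()          # virtual object storage unit
        while not sensors.arrived(point):
            lat, lon, heading = sensors.pose()  # position information acquisition unit
            frame = sensors.camera_frame()
            px = project_to_screen(lat, lon, heading, point[0], point[1],
                                   frame.width, frame.height)
            if px is not None:                  # in view: draw the object itself
                display.overlay(frame, obj, px)
            else:                               # out of view: edge arrow (claim 13)
                side = direction_hint(lat, lon, heading, point[0], point[1])
                display.overlay_arrow(frame, side or 'right')
            time.sleep(1.0 / 30.0)              # roughly one update per camera frame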
PCT/JP2019/044677 2019-11-14 2019-11-14 Meetup assistance device and meetup assistance method WO2021095198A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021555718A JP7282199B2 (en) 2019-11-14 2019-11-14 Merging support device and merging support method
PCT/JP2019/044677 WO2021095198A1 (en) 2019-11-14 2019-11-14 Meetup assistance device and meetup assistance method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/044677 WO2021095198A1 (en) 2019-11-14 2019-11-14 Meetup assistance device and meetup assistance method

Publications (1)

Publication Number Publication Date
WO2021095198A1 true WO2021095198A1 (en) 2021-05-20

Family

ID=75912590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/044677 WO2021095198A1 (en) 2019-11-14 2019-11-14 Meetup assistance device and meetup assistance method

Country Status (2)

Country Link
JP (1) JP7282199B2 (en)
WO (1) WO2021095198A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973736A * 2022-05-30 2022-08-30 Dongfeng Motor Group Co Ltd Remote driving monitoring system based on virtual simulation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009236844A (en) * 2008-03-28 2009-10-15 Panasonic Corp Navigation device, navigation method, and navigation program
JP2014048079A (en) * 2012-08-30 2014-03-17 Mitsubishi Electric Corp Navigation device
JP2016090515A * 2014-11-10 2016-05-23 Pioneer Corp Information display device, control method, program, and recording medium
JP2019117449A * 2017-12-26 2019-07-18 Toyota Motor Corp Person search system
JP2019174909A * 2018-03-27 2019-10-10 Pioneer Corp Support device and support processing method
JP2019194836A * 2018-04-27 2019-11-07 Panasonic Intellectual Property Corporation of America Information processing device and information processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018163578A 2017-03-27 2018-10-18 Japan Research Institute Ltd Car pickup control server, in-vehicle terminal, control method, and control program in active car pickup system

Also Published As

Publication number Publication date
JPWO2021095198A1 (en) 2021-05-20
JP7282199B2 (en) 2023-05-26

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19952205; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021555718; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19952205; Country of ref document: EP; Kind code of ref document: A1)