WO2021192519A1 - Information providing device, information providing method and information providing program - Google Patents

Information providing device, information providing method and information providing program

Info

Publication number
WO2021192519A1
WO2021192519A1, PCT/JP2021/000992, JP2021000992W
Authority
WO
WIPO (PCT)
Prior art keywords
information
guidance
corresponding feature
guide
destination
Prior art date
Application number
PCT/JP2021/000992
Other languages
French (fr)
Japanese (ja)
Inventor
孝太郎 福井
匡弘 岩田
慎一朗 飯野
将太 和泉
洋平 大沼
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to JP2022509297A (JPWO2021192519A1/ja)
Publication of WO2021192519A1 (WO2021192519A1/en)

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 - Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map

Definitions

  • This application belongs to the technical fields of information providing devices, information providing methods, and information providing programs. More specifically, it belongs to the technical field of an information providing device and an information providing method for providing information on the movement of a moving body such as a vehicle, and a program for the information providing device.
  • Patent Document 1: Japanese Patent No. 4822575. The prior art disclosed in Patent Document 1 outputs information on the vicinity of a destination setting point by voice so that the user can confirm whether the point is the one intended.
  • In guidance that mainly uses guidance voice, compared with conventional guidance that also uses images, it is required that the person receiving the guidance be guided so that the position of a guidance location such as the destination is easier to understand (or easier to imagine); however, this point is not taken into consideration in the prior art disclosed in Patent Document 1. That prior art therefore cannot meet this requirement.
  • The present application has been made in view of the above requirement, and one example of the problem to be solved is to provide an information providing device, an information providing method, and a program for the information providing device that can present the guidance information to be provided in an easy-to-understand manner even when the guidance is given mainly by voice or sound.
  • The invention according to claim 1 comprises: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
  • The invention according to claim 5 comprises: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that is a feature to which the recipient of the guidance information has moved in the past and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
  • The invention according to claim 6 is an information providing method executed in an information providing device comprising an acquisition means and a providing means, the method including: an acquisition step of acquiring, by the acquisition means, guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing step of, when the guidance information is provided to the recipient, providing the guidance information of the guidance location associated with the corresponding feature to the recipient by sound by the providing means, based on the acquired guidance location information and corresponding feature information.
  • The invention according to claim 7 is a program that causes a computer to function as: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
  • FIG. 1 is a block diagram showing an outline configuration of the information providing device of the embodiment.
  • the information providing device S includes an acquisition means 1 and a providing means 2.
  • In this configuration, the acquisition means 1 acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location.
  • When providing the guidance information to the recipient, the providing means 2 provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the guidance location information and corresponding feature information acquired by the acquisition means 1.
  • According to the operation of the information providing device S of the embodiment, when guidance information is provided to the recipient, the guidance information of the guidance location is provided by sound in association with the corresponding feature, based on the acquired guidance location information and corresponding feature information. Therefore, even when the guidance information is provided only by sound, it can be provided in an easy-to-understand manner through its relation to a corresponding feature having an attribute that matches the recipient's preference. Note that "guidance" in this embodiment does not include, for example, turn-by-turn guidance along a route itself (such as "turn left" or "turn right" announcements during guidance); it includes providing information about points or facilities that can serve as the destination or waypoints of that guidance.
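To make the above concrete, the following Python sketch (illustrative only, not part of the patent disclosure) shows one way a guidance sentence could be tied to a corresponding feature: pick a nearby feature whose attribute matches the recipient's preferences and mention it together with the guidance location. The type names, the attribute strings, and the 1 km range are hypothetical choices.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Feature:
    name: str
    lat: float
    lon: float
    attribute: str  # e.g. "shrine/temple", "park"

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in kilometres.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def guidance_sentence(dest: Feature, features: list[Feature],
                      preferred_attributes: set[str], max_km: float = 1.0) -> str:
    """Associate the guidance location with a corresponding feature that matches
    the recipient's preferences and lies within a predetermined range."""
    for f in features:
        if f.attribute in preferred_attributes and \
           distance_km(dest.lat, dest.lon, f.lat, f.lon) <= max_km:
            return f"Going to {dest.name}, near {f.name}."
    return f"Going to {dest.name}."
```

In a real system the candidate features would presumably be drawn from map data such as the map data 23a described later.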
  • The embodiment described below is an example in which the present application is applied to route guidance using sound (voice) in a navigation system consisting of terminal devices and a server connected to each other via a network such as the Internet so that data can be exchanged between them.
  • FIG. 2 is a block diagram showing an outline configuration of the navigation system of the embodiment.
  • FIG. 3 is a block diagram showing an outline configuration of the terminal device and the like of the embodiment.
  • FIG. 4 is a flowchart showing the entire navigation process of the embodiment.
  • FIG. 5 is a flowchart showing the details of the navigation process.
  • The navigation system SS of the embodiment is composed of one or more terminal devices T1, T2, T3, ..., Tn (n is a natural number), each used in a vehicle by a passenger of that vehicle (more specifically, the driver or a fellow passenger), a server SV, and a network NW such as the Internet that connects the terminal devices T1 to Tn and the server SV so that data can be exchanged between them. In the following description, when a configuration common to the terminal devices T1 to Tn is described, they are collectively referred to as the "terminal device T".
  • The terminal device T is realized specifically as, for example, a so-called smartphone or a tablet-type terminal device. In the following description, the embodiment is described for the case where the passenger using the terminal device T is aboard a vehicle, which is an example of a moving body.
  • In this configuration, each terminal device T separately exchanges various data with the server SV via the network NW and guides the passenger using that terminal device T about movement. The data exchanged at this time includes search data for searching for a route along which the vehicle should travel, route data indicating the searched route, and guidance data for performing guidance after movement along the route has started.
  • Because of limits on the size of the display of the terminal device T, limits on the processing load, or to keep the passenger from gazing at the screen, the route search and the guidance of movement information in the navigation system SS are performed mainly using voice or sound. For this reason, the search data exchanged between each terminal device T and the server SV includes the destination voice data and the answer voice data of the embodiment, which are transmitted from each terminal device T to the server SV, and the inquiry voice data and the destination confirmation voice data of the embodiment, which are transmitted from the server SV to each terminal device T.
  • the destination voice data of the embodiment is voice data corresponding to the destination voice indicating the destination of the movement, which is uttered by the passenger using the terminal device T.
  • The inquiry voice data of the embodiment is voice data corresponding to an inquiry voice (an automated inquiry voice for determining the destination) transmitted from the server SV when the server SV cannot determine, from the content of the destination voice alone, a destination sufficient for searching the route.
  • the answer voice data of the embodiment is voice data corresponding to the answer voice of the passenger to the inquiry voice.
  • the destination confirmation voice data of the embodiment is voice data for confirming a point or the like finally used as the destination of the above route.
  • Since voice or sound is mainly used for guidance of information regarding the movement, the guidance data transmitted from the server SV to each terminal device T includes voice data for guidance by voice or sound. In the following description, this voice data for guidance is simply referred to as "guidance voice data".
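As a rough sketch of the kinds of payloads exchanged between a terminal device T and the server SV (hypothetical structures, not defined in the patent), the five voice data types and the sensor data could be modelled as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto

class VoiceDataKind(Enum):
    DESTINATION = auto()               # terminal -> server: spoken destination
    ANSWER = auto()                    # terminal -> server: answer to an inquiry
    INQUIRY = auto()                   # server -> terminal: automated question to pin down the destination
    DESTINATION_CONFIRMATION = auto()  # server -> terminal: confirms the finally determined destination
    GUIDANCE = auto()                  # server -> terminal: guidance along the route

@dataclass
class VoiceMessage:
    kind: VoiceDataKind
    audio: bytes       # encoded audio payload
    user_id: str       # identifies the passenger using the terminal device

@dataclass
class SensorData:
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
```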
  • As shown in FIG. 3(a), each terminal device T of the embodiment is composed of an interface 5; a processing unit 6 including a CPU, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like; a memory 7 including a volatile area and a non-volatile area; an operation unit 8 including a touch panel and operation buttons; a speaker 9; a sensor unit 10 including a GPS (Global Positioning System) sensor and/or a self-contained sensor; a display 11 such as a liquid crystal or organic EL (Electro Luminescence) display; and a microphone 12.
  • the processing unit 6 includes a route search unit 6a and a guidance voice output control unit 6b.
  • The route search unit 6a and the guidance voice output control unit 6b may each be realized by a hardware logic circuit including the CPU or the like constituting the processing unit 6, or may be realized in software by the CPU or the like reading out and executing a program corresponding to the flowchart showing the processing performed by the terminal device T in the navigation process of the embodiment described later.
  • the interface 5 controls the transfer of data to and from the server SV via the network NW under the control of the processing unit 6.
  • The sensor unit 10 uses the GPS sensor and/or the self-contained sensor to generate sensor data indicating the current position, moving speed, moving direction, and the like of the terminal device T (in other words, of the passenger using the terminal device T or of the vehicle that passenger is aboard), and outputs the sensor data to the processing unit 6.
  • the microphone 12 collects the voice of the passenger using the terminal device T and the sound in the vehicle in which the terminal device T is used, and outputs the sound collection result to the processing unit 6.
  • the passenger's voice collected at this time includes the destination voice and the answer voice.
  • Under the control of the processing unit 6, the route search unit 6a transmits the destination voice data (that is, destination voice data indicating the destination to be reached by the vehicle aboard which the passenger using the terminal device T is riding), the answer voice data, and the sensor data to the server SV via the interface 5 and the network NW as search data.
  • the destination data indicating the destination may be input from the operation unit 8 and transmitted to the server SV together with the sensor data.
  • Thereafter, the route search unit 6a acquires, from the server SV via the network NW and the interface 5, route data indicating the search result of the route from the current position indicated by the sensor data to the destination indicated by the destination voice data or the destination data.
  • After that, using the acquired route data, the processing unit 6 guides information on the movement of the vehicle along the searched route while exchanging the guidance data (including the sensor data at that time and the guidance voice data) with the server SV.
  • At this time, the guidance voice output control unit 6b outputs (sounds) the guidance voice corresponding to the guidance voice data acquired from the server SV via the network NW and the interface 5 to the passenger via the speaker 9.
  • In parallel with these operations, when an input operation of data necessary for guidance of the vehicle, in addition to the destination data, is performed on the operation unit 8, the operation unit 8 generates an operation signal corresponding to the input operation and sends it to the processing unit 6.
  • The processing unit 6 thereby executes the processing performed by the terminal device T in the navigation process of the embodiment while controlling the route search unit 6a and the guidance voice output control unit 6b.
  • At this time, the processing unit 6 executes the processing while storing the data required for it in the memory 7, either temporarily or non-volatilely. Guidance images and the like resulting from the processing are displayed on the display 11.
  • As shown in FIG. 3(b), the server SV of the embodiment is composed of an interface 20, a processing unit 21 including a CPU, a RAM, a ROM, and the like, and a recording unit 22 such as an HDD (Hard Disc Drive) or an SSD (Solid State Drive). The processing unit 21 includes a route search unit 1 and a guidance voice generation unit 2. The route search unit 1 and the guidance voice generation unit 2 may each be realized by a hardware logic circuit including the CPU or the like constituting the processing unit 21, or may be realized in software by the CPU or the like reading out and executing a program corresponding to the flowchart showing the processing performed by the server SV in the navigation process of the embodiment.
  • the route search unit 1 corresponds to an example of the acquisition means 1 of the embodiment
  • The guidance voice generation unit 2 corresponds to an example of the providing means 2 of the embodiment.
  • the route search unit 1 and the guidance voice generation unit 2 constitute an example of the information providing device S of the embodiment.
  • In the recording unit 22, navigation data 23 is recorded non-volatilely; the navigation data 23 includes the map data 23a necessary for guiding information on the movement of each vehicle aboard which a passenger using a terminal device T connected to the server SV via the network NW is riding, the personal data 23b of the embodiment, the inquiry voice data, the guidance voice data, and the like.
  • the map data 23a includes road data, intersection data, and the like used for route search and route guidance, respectively.
  • The personal data 23b of the embodiment includes, in association with a user ID for identifying the passenger who uses each terminal device T, hobby/preference data 23ba indicating the hobbies and preferences of that passenger, and visit history data 23bb indicating points that the passenger has visited in the past (that is, points that have been destinations of the passenger's movements).
  • As illustrated on the left of FIG. 3(c), the hobby/preference data 23ba is data in which information indicating the hobbies or preferences of the passenger identified by the user ID is recorded in association with that user ID.
  • As a method of generating (recording) the hobby/preference data 23ba, for example, it may be recorded for each passenger through a registration operation by the passenger identified by the user ID, or it may be obtained by collecting and recording words frequently used by that passenger on an SNS (Social Networking Service) that the passenger uses.
  • The visit history data 23bb is data in which the position and name of a point that became a destination (that is, a visited place) in the passenger's past movements, together with, for example, information indicating the date and time when that destination was reached, are recorded in association with the user ID. It is preferable to record such visit history data 23bb for each passenger.
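A hypothetical sketch of how the personal data 23b keyed by user ID could be organized (the field names, coordinates, and example entries are illustrative, not taken from the patent):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VisitRecord:
    name: str             # name of the visited point (past destination)
    lat: float
    lon: float
    visited_at: datetime  # date and time when the destination was reached

@dataclass
class PersonalData:
    user_id: str
    preferences: set[str] = field(default_factory=set)              # hobby/preference data 23ba, e.g. {"shrine/temple"}
    visit_history: list[VisitRecord] = field(default_factory=list)  # visit history data 23bb

# Example record: registered by the passenger or harvested from SNS usage (values are made up).
personal_db = {
    "user-001": PersonalData("user-001", {"shrine/temple"},
                             [VisitRecord("XX park", 35.68, 139.76, datetime(2023, 5, 3))]),
}
```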
  • the interface 20 controls the transfer of data to and from each terminal device T via the network NW under the control of the processing unit 21.
  • Under the control of the processing unit 21, the route search unit 1 uses the navigation data 23 to search for a route to the destination indicated by the destination voice data, based on the destination voice data and the sensor data acquired from one of the terminal devices T, and transmits route data indicating the search result to the terminal device T that transmitted the destination voice data and the sensor data. The terminal device T then guides the route based on the route data.
  • At this time, the guidance voice generation unit 2 generates the guidance voice data according to the guidance timing on the route and transmits it, via the interface 20 and the network NW, to the terminal device T used by the passenger of the vehicle to be guided. As a result, the guidance voice corresponding to the guidance voice data is output (sounded) to the passenger via the speaker 9 of that terminal device T.
  • The navigation process of the embodiment is started, for example, when a guidance instruction operation or the like, instructing that movement along the route of the vehicle for which information on movement is to be provided (hereinafter simply referred to as the "target vehicle") be guided (that is, that route guidance be performed), is executed on the operation unit 8 of the terminal device T of the embodiment used by a passenger aboard that vehicle. In the following description, this terminal device T is referred to as the "target terminal device T" where appropriate.
  • When the navigation process is started, the route search unit 6a of the processing unit 6 of the target terminal device T searches for the route along which the target vehicle should move by exchanging the search data (including the destination voice data, the inquiry voice data, the answer voice data, the destination confirmation voice data, and the sensor data) and the route data with the route search unit 1 of the server SV (step S1).
  • the route search unit 1 of the server SV is always waiting for the transmission of the search data from any of the terminal devices T connected to the server SV via the network NW at that time.
  • The route search unit 1 performs a route search based on the destination voice data and the like included in the search data, and transmits the resulting route data and the destination confirmation voice data to the target terminal device T via the network NW (step S10).
  • The route search of the embodiment, including step S1 and step S10, will be described in detail later with reference to FIG. 5.
  • The guidance voice generation unit 2 of the server SV monitors whether or not there is a guidance point (for example, an intersection at which a turn should be made) on the set route (step S12). If, in the monitoring of step S12, there is no guidance point (step S12: NO), the processing unit 21 moves to step S14 described later. On the other hand, if there is a guidance point (step S12: YES), the guidance voice generation unit 2 generates, at the timing when guidance about that guidance point should be performed, guidance voice data for guiding the guidance point by voice (for example, "Turn left at the next XX intersection."), and transmits the generated guidance voice data to the target terminal device T via the network NW (step S13).
  • the processing unit 21 determines whether or not to end the route guidance as the navigation process of the embodiment because the target vehicle has reached the destination or the like (step S14). In the determination of step S14, if the route guidance is not completed (step S14: NO), the processing unit 21 returns to step S12 and continues to perform route guidance. On the other hand, in the determination of step S14, when the route guidance is ended (step S14: YES), the processing unit 21 ends the route guidance as it is.
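The server-side guidance loop of steps S12 to S14 could be sketched as follows (illustrative only; the callbacks, the announcement radius, and the distance approximation are assumptions, not part of the patent):

```python
import math
import time
from dataclasses import dataclass

@dataclass
class GuidancePoint:
    name: str                         # e.g. "XX intersection"
    direction: str                    # e.g. "left"
    lat: float
    lon: float
    announce_radius_m: float = 300.0  # announce when the vehicle comes this close

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for short announcement radii.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def guidance_loop(points, get_position, send_voice, is_finished, poll_s=1.0):
    """Server-side sketch of steps S12-S14: watch for upcoming guidance points
    on the route and push a guidance sentence to the terminal in time."""
    pending = list(points)
    while not is_finished():                     # step S14: end when the destination is reached
        lat, lon = get_position()                # latest sensor data from the terminal
        for p in list(pending):
            if distance_m(lat, lon, p.lat, p.lon) <= p.announce_radius_m:  # step S12: YES
                send_voice(f"Turn {p.direction} at the next {p.name}.")    # step S13
                pending.remove(p)
        time.sleep(poll_s)
```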
  • The guidance voice output control unit 6b of the target terminal device T waits for transmission of the guidance voice data from the server SV after the guidance is started in step S2 above (step S3).
  • If the guidance voice data is not transmitted (step S3: NO), the processing unit 6 of the target terminal device T moves to step S5 described later.
  • When the guidance voice data from the server SV is received during the standby of step S3 (step S3: YES), the guidance voice output control unit 6b of the target terminal device T outputs (sounds) the guidance voice corresponding to the received guidance voice data via the speaker 9 (step S4).
  • After that, the processing unit 6 of the target terminal device T determines whether or not to end the route guidance of the navigation process of the embodiment, for the same reason as in step S14 (step S5). If, in the determination of step S5, the route guidance is not to be ended (step S5: NO), the processing unit 6 returns to step S3 and continues the route guidance. On the other hand, if the route guidance is to be ended (step S5: YES), the processing unit 6 ends the route guidance.
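The terminal-side counterpart of steps S3 to S5 reduces to a small receive-and-play loop. The sketch below is illustrative only; the three callbacks stand in for the interface 5, the speaker 9, and the end-of-guidance determination.

```python
def terminal_guidance_loop(receive_voice, play, guidance_finished, poll_s=0.5):
    """Terminal-side sketch of steps S3-S5 (hypothetical callbacks): wait for
    guidance voice data from the server and sound it through the speaker."""
    while not guidance_finished():              # step S5
        data = receive_voice(timeout=poll_s)    # step S3: wait for guidance voice data
        if data is not None:                    # step S3: YES
            play(data)                          # step S4: output via the speaker
```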
  • Next, the route search of the embodiment (step S1 and step S10) is described in detail; the corresponding flowchart is shown in FIG. 5.
  • In the route search of the embodiment, when the passenger using the target terminal device T utters the destination voice, that voice is detected by the microphone 12, and destination voice data corresponding to the destination voice is generated by the route search unit 6a of the target terminal device T. The generated destination voice data is then transmitted to the server SV via the network NW together with the sensor data at that point in time (step S15). The destination voice includes the name of the destination, such as "XX stadium". In the following description, the destination voice data and the sensor data are collectively referred to as the "destination voice data and the like".
  • On the other hand, as the route search in step S10 of FIG. 4, the server SV waits for transmission of the destination voice data and the like from the target terminal device T (step S25; step S25: NO). When the destination voice data and the like are received (step S25: YES), the route search unit 1 of the server SV determines whether or not the destination can be determined as the destination of the route search from the name of the destination indicated by the received destination voice data alone (step S26). More specifically, the route search unit 1 refers to, for example, the map data 23a and determines whether the name of the destination indicated by the destination voice data received in step S25 is unique (that is, whether it cannot be confused with another facility or the like).
  • If, in the determination of step S26, the destination indicated by the destination voice data can be determined as the destination of the route search (step S26: YES), the route search unit 1 refers to the map data 23a and the like and the sensor data received in step S25, and searches for a route from the current position of the target terminal device T to the determined destination by, for example, the same method as a conventional one (step S30). The route search unit 1 then generates route data indicating the searched route and destination confirmation voice data for confirming the point or the like to be used as the destination of the route, and transmits them to the target terminal device T via the network NW (step S31). After that, the server SV moves to step S11 shown in FIG. 4.
  • On the other hand, if, in step S26, the destination cannot be determined as the destination of the route search from the name of the destination indicated by the destination voice data alone (that is, when, for example, the name "XX stadium" by itself leaves a plurality of candidate sites as the destination) (step S26: NO), the route search unit 1 refers to the personal data 23b corresponding to the passenger using the target terminal device T and searches, as the corresponding feature of the embodiment, for at least one of: a facility that has an attribute indicated by the hobby/preference data 23ba corresponding to that passenger and that exists in the vicinity of each of the plurality of candidate sites (that is, within a predetermined distance preset from each of the plurality of candidate sites; the same applies hereinafter); or a past destination (visited place) indicated by the visit history data 23bb corresponding to that passenger that exists in the vicinity of each of the plurality of candidate sites (step S27).
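Reusing the hypothetical Feature and PersonalData types and the distance_km helper from the earlier sketches, the search of step S27 could look roughly like this (illustrative only; the patent does not specify an algorithm):

```python
def find_corresponding_features(candidates, all_features, personal, max_km=1.0):
    """Sketch of step S27: for each candidate destination, collect nearby facilities
    matching the passenger's preferences and nearby points the passenger has visited."""
    result = {}
    for c in candidates:
        nearby_preferred = [
            f for f in all_features
            if f.attribute in personal.preferences
            and distance_km(c.lat, c.lon, f.lat, f.lon) <= max_km
        ]
        nearby_visited = [
            v for v in personal.visit_history
            if distance_km(c.lat, c.lon, v.lat, v.lon) <= max_km
        ]
        result[c.name] = nearby_preferred + nearby_visited
    return result
```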
  • Note that the passenger who uses the target terminal device T is identified in the server SV (by the user ID) when the target terminal device T connects to the server SV.
  • the guidance voice generation unit 2 of the server SV generates the inquiry voice data including the corresponding feature searched in step S27 and transmits it to the target terminal device T via the network NW (step S28).
  • When facilities that have an attribute indicated by the hobby/preference data 23ba corresponding to the passenger (for example, the attribute "shrine/temple") and that exist near the plurality of candidate sites are present as corresponding features, the inquiry voice corresponding to the inquiry voice data generated and transmitted in step S28 is, for example, "Is it the XX stadium near the XX temple, or the XX stadium near the XX shrine?". In addition, when past destinations (visited places) indicated by the visit history data 23bb corresponding to the passenger exist near the plurality of candidate sites as corresponding features, the inquiry voice is, for example, "Is it the XX stadium near the XX park that you visited last year, or the XX stadium near the XX temple that you visited this year?".
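Building the inquiry sentence of step S28 from the result of the previous sketch could be as simple as the following (illustrative phrasing only; the actual wording and language are not prescribed by the patent):

```python
def build_inquiry_text(candidate_features):
    """Sketch of step S28: phrase one disambiguation question from the candidate
    destinations and the corresponding features found in step S27."""
    options = []
    for candidate_name, features in candidate_features.items():
        if features:
            options.append(f"the {candidate_name} near {features[0].name}")
        else:
            options.append(f"the {candidate_name}")
    return "Is it " + ", or ".join(options) + "?"

# e.g. build_inquiry_text({"XX stadium (east)": [temple], "XX stadium (west)": [shrine]})
# -> "Is it the XX stadium (east) near the XX temple, or the XX stadium (west) near the XX shrine?"
```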
  • On the other hand, the route search unit 6a of the target terminal device T monitors whether or not the inquiry voice data has been transmitted from the server SV (step S16). If, in the monitoring of step S16, the inquiry voice data is not transmitted within a preset monitoring time after the destination voice data is transmitted (that is, when, in the determination of step S26, the destination indicated by the destination voice data could be determined as the destination of the route search (see step S26: YES)) (step S16: NO), the route search unit 6a moves to step S19 described later.
  • When the inquiry voice data is transmitted within the predetermined monitoring time (step S16: YES), the guidance voice output control unit 6b of the target terminal device T outputs (sounds) the inquiry voice corresponding to the inquiry voice data via the speaker 9 (step S17).
  • After that, the passenger's answer voice responding to the inquiry voice is detected by the microphone 12, and answer voice data corresponding to that answer voice is generated by the route search unit 6a (step S18). The answer voice detected at this time includes a more detailed (or more specific) destination name or the like as the answer to the inquiry voice. The route search unit 6a then transmits the answer voice data to the server SV via the network NW (step S18).
  • On the other hand, the guidance voice generation unit 2 of the server SV, having transmitted the inquiry voice data in step S28, waits for transmission of the answer voice data from the target terminal device T (step S29; step S29: NO). When the answer voice data is transmitted (see step S18; step S29: YES), the route search unit 1 returns to step S26 and repeats the above processing in order to try again to determine the destination of the route search using the name of the destination indicated by the answer voice data.
  • If the destination still cannot be determined as the destination of the route search even with the name of the destination indicated by the answer voice data (step S26: NO), the processes of steps S27 to S29 are executed again in order to obtain a still more specific destination name or the like.
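Putting the pieces together, the disambiguation loop of steps S26 to S29 can be sketched as below, reusing find_corresponding_features and build_inquiry_text from the earlier sketches; resolve_candidates and ask_passenger are hypothetical callbacks standing in for the map-data lookup and the inquiry/answer exchange with the terminal:

```python
def determine_destination(spoken_name, resolve_candidates, ask_passenger,
                          all_features, personal, max_rounds=3):
    """Sketch of steps S26-S29: keep asking preference-anchored questions until
    the spoken name maps to a single destination, or give up after a few rounds."""
    name = spoken_name
    for _ in range(max_rounds):
        candidates = resolve_candidates(name)   # step S26: look the name up in the map data
        if len(candidates) == 1:
            return candidates[0]                # step S26: YES -> proceed to route search
        features = find_corresponding_features(candidates, all_features, personal)  # step S27
        name = ask_passenger(build_inquiry_text(features))  # steps S28-S29: inquiry and answer
    return None                                 # could not narrow the destination down
```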
  • Meanwhile, the route search unit 6a of the target terminal device T that transmitted the answer voice data in step S18 monitors whether or not the route data and the destination confirmation voice data have been transmitted from the server SV in response to the answer voice data (step S19; step S19: NO).
  • When the route data and the destination confirmation voice data are received (step S19: YES), the guidance voice output control unit 6b of the target terminal device T outputs (sounds) to the passenger, via the speaker 9, the destination confirmation voice corresponding to the received destination confirmation voice data, which includes the corresponding feature (see step S27) and the name of the destination. The destination confirmation voice is, for example, a voice such as "Go to the XX stadium near the XX temple." After that, the target terminal device T moves to step S2 shown in FIG. 4.
  • As described above, according to the navigation process of the embodiment, when information regarding the movement of the passenger using the target terminal device T is guided, destination information associated with the corresponding feature is output (sounded) by voice based on the acquired destination voice data and personal data 23b. Therefore, even with voice guidance, the destination is presented in relation to a corresponding feature having attributes that match the passenger's preferences (see FIG. 3(c)), so the passenger can easily imagine the location of the destination, the route to the destination, or the surrounding information. Furthermore, the passenger can easily recognize whether the set destination is the point that he or she intended.
  • Also, since a destination confirmation voice including the name of the associated corresponding feature (see step S27 in FIG. 5) is output (sounded), the destination and the like can be guided in an easy-to-understand manner in relation to the corresponding feature even with voice guidance.
  • Furthermore, when the corresponding feature searched for in step S27 in FIG. 5 is a past destination (visited place) of the passenger, the passenger can even more easily imagine the location of the destination, the route to the destination, or the surrounding information in relation to that past destination (visited place).
  • In addition, the positional relationship between the corresponding feature and the destination may also be output as audio. With this configuration, the passenger can even more easily imagine the location of the destination, the route to the destination, or the surrounding information in relation to the corresponding feature.
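One illustrative way to phrase such a positional relationship for audio output, reusing the distance_km helper and Feature type from the first sketch (the compass-point granularity and the wording are assumptions, not taken from the patent):

```python
import math

def positional_relationship(dest, feature):
    """Sketch: phrase the positional relationship between the destination and
    the corresponding feature, e.g. 'XX stadium is about 300 meters north of XX temple.'"""
    bearing = math.degrees(math.atan2(
        math.radians(dest.lon - feature.lon) * math.cos(math.radians(feature.lat)),
        math.radians(dest.lat - feature.lat)))
    compass = ["north", "northeast", "east", "southeast",
               "south", "southwest", "west", "northwest"][round(bearing / 45) % 8]
    meters = int(distance_km(feature.lat, feature.lon, dest.lat, dest.lon) * 1000)
    return f"{dest.name} is about {meters} meters {compass} of {feature.name}."
```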
  • In the embodiment described above, the processing unit 21 of the server SV performs the search for the corresponding feature (see step S27 in FIG. 5) and the generation of the inquiry voice data (see step S28 in FIG. 5); alternatively, the target terminal device T may be configured to search for the corresponding feature and generate the inquiry voice data. In that case, the target terminal device T may determine the destination based on the answer voice to the inquiry voice corresponding to that inquiry voice data (that is, the answer voice including the name of a more specific destination or the like), and then search for the destination and the route in coordination with the server SV.
  • the navigation process of the above-described embodiment can be applied not only to vehicles but also to navigation for pedestrians.
  • Further, the route guidance itself is not indispensable; the process may be used only to notify the user of the position of a searched point. Displaying the point on a map is not excluded, but since such display is not essential, the process can also be applied to a smart speaker or the like that has no display unit.
  • It is also possible to record programs corresponding to the flowcharts shown in FIGS. 4 and 5 on a recording medium such as an optical disc or a hard disk, or to acquire them via a network such as the Internet, and to cause a general-purpose microcomputer or the like to function as the processing unit 6 or the processing unit 21 of the embodiment by reading out and executing those programs.

Abstract

An information providing device is provided which can present the guidance information to be provided in an easy-to-understand form even when the guidance is given mainly by audio or sound. When a route is searched, destination information and information about a corresponding feature that has attributes matching the preferences of the person receiving the guidance and that is present within a prescribed range of the destination position are acquired (steps S25, S27), and, on the basis of the acquired destination information and corresponding feature information, guidance to the destination associated with the corresponding feature is provided by audio.

Description

Information providing device, information providing method and information providing program
This application belongs to the technical fields of information providing devices, information providing methods, and information providing programs. More specifically, it belongs to the technical field of an information providing device and an information providing method for providing information on the movement of a moving body such as a vehicle, and of a program for such an information providing device.
As navigation devices that provide information on the movement of such moving bodies, research and development has recently become active not only on the vehicle-mounted navigation devices that have long been common, but also on navigation systems that make use of portable terminal devices such as smartphones.
When such a portable terminal device is used, guidance using sound, including guidance voice, becomes important because of restrictions such as the size of the display provided on the device. Patent Document 1 below is an example of a document disclosing prior art addressing this background. In the prior art disclosed in Patent Document 1, information on the vicinity of a destination setting point is output by voice so that the user can confirm whether the point is the one intended.
Patent Document 1: Japanese Patent No. 4822575
In guidance that mainly uses guidance voice as described above, compared with conventional guidance that also uses images, it is required that the person receiving the guidance be guided so that the position of a guidance location such as the destination is easier to understand (or easier to imagine); however, this point is not taken into consideration in the prior art disclosed in Patent Document 1. That prior art therefore cannot meet this requirement.
The present application has been made in view of the above requirement, and one example of the problem to be solved is to provide an information providing device, an information providing method, and a program for the information providing device that can present the guidance information to be provided in an easy-to-understand manner even when the guidance is given mainly by voice or sound.
In order to solve the above problems, the invention according to claim 1 comprises: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
In order to solve the above problems, the invention according to claim 5 comprises: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that is a feature to which the recipient of the guidance information has moved in the past and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
In order to solve the above problems, the invention according to claim 6 is an information providing method executed in an information providing device comprising an acquisition means and a providing means, the method including: an acquisition step of acquiring, by the acquisition means, guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing step of, when the guidance information is provided to the recipient, providing the guidance information of the guidance location associated with the corresponding feature to the recipient by sound by the providing means, based on the acquired guidance location information and corresponding feature information.
In order to solve the above problems, the invention according to claim 7 is a program that causes a computer to function as: an acquisition means that acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location; and a providing means that, when the guidance information is provided to the recipient, provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the acquired guidance location information and corresponding feature information.
FIG. 1 is a block diagram showing an outline configuration of the information providing device of the embodiment. FIG. 2 is a block diagram showing an outline configuration of the navigation system of the embodiment. FIG. 3 is a block diagram showing an outline configuration of the terminal device and the like of the embodiment: (a) is a block diagram showing an outline configuration of the terminal device, (b) is a block diagram showing an outline configuration of the server, and (c) is a diagram illustrating the contents of the personal data. FIG. 4 is a flowchart showing the entire navigation process of the embodiment. FIG. 5 is a flowchart showing the details of the navigation process.
Next, a mode for carrying out the present application will be described with reference to FIG. 1. FIG. 1 is a block diagram showing an outline configuration of the information providing device of the embodiment.
As shown in FIG. 1, the information providing device S according to the embodiment includes an acquisition means 1 and a providing means 2.
In this configuration, the acquisition means 1 acquires guidance location information indicating a guidance location for which guidance information should be provided, and corresponding feature information indicating a corresponding feature that has an attribute matching the preference of the recipient of the guidance information and that exists within a predetermined range from the position of the guidance location.
When providing the guidance information to the recipient, the providing means 2 provides the guidance information of the guidance location associated with the corresponding feature to the recipient by sound, based on the guidance location information and corresponding feature information acquired by the acquisition means 1.
As described above, according to the operation of the information providing device S of the embodiment, when guidance information is provided to the recipient, the guidance information of the guidance location is provided by sound in association with the corresponding feature, based on the acquired guidance location information and corresponding feature information; therefore, even when the guidance information is provided by sound, it can be provided in an easy-to-understand manner through its relation to a corresponding feature having an attribute matching the recipient's preference. Note that "guidance" in this embodiment does not include, for example, turn-by-turn guidance along a route itself (such as "turn left" or "turn right" announcements during guidance); it includes guiding (providing) information about points or facilities that can serve as the destination or waypoints of that guidance.
Next, specific examples corresponding to the above embodiment will be described with reference to FIGS. 2 to 5. The example described below is one in which the present application is applied to route guidance using sound (voice) in a navigation system consisting of terminal devices and a server connected to each other via a network such as the Internet so that data can be exchanged between them.
FIG. 2 is a block diagram showing an outline configuration of the navigation system of the embodiment, FIG. 3 is a block diagram showing an outline configuration of the terminal device and the like of the embodiment, FIG. 4 is a flowchart showing the entire navigation process of the embodiment, and FIG. 5 is a flowchart showing the details of that navigation process. In FIG. 3, the constituent members of the example that correspond to the constituent members of the information providing device S according to the embodiment shown in FIG. 1 are given the same member numbers as those constituent members of the information providing device S.
As shown in FIG. 2, the navigation system SS of the embodiment is composed of one or more terminal devices T1, T2, T3, ..., Tn (n is a natural number), each used in a vehicle by a passenger of that vehicle (more specifically, the driver or a fellow passenger), a server SV, and a network NW such as the Internet that connects the terminal devices T1, T2, T3, ..., Tn and the server SV so that data can be exchanged between them. In the following description, when a configuration common to the terminal devices T1 to Tn is described, they are collectively referred to as the "terminal device T". The terminal device T is realized specifically as, for example, a so-called smartphone or a tablet-type terminal device. In the following description, the embodiment is described for the case where the passenger using the terminal device T is aboard a vehicle, which is an example of a moving body.
In this configuration, each terminal device T separately exchanges various data with the server SV via the network NW and guides the passenger using that terminal device T about movement. The data exchanged at this time includes search data for searching for a route along which the vehicle should travel, route data indicating the searched route, and guidance data for performing guidance after movement along the route has started.
Here, because of limits on the size of the display provided on the terminal device T, limits on the processing load, or to keep the passenger from gazing at the screen, the search for the route and the guidance of movement information to the passenger in the navigation system SS are performed mainly using voice or sound. For this reason, the search data exchanged between each terminal device T and the server SV includes the destination voice data and the answer voice data of the embodiment, which are transmitted from each terminal device T to the server SV, and the inquiry voice data and the destination confirmation voice data of the embodiment, which are transmitted from the server SV to each terminal device T. The destination voice data of the embodiment is voice data corresponding to a destination voice uttered by the passenger using the terminal device T and indicating the destination of the movement. The inquiry voice data of the embodiment is voice data corresponding to an inquiry voice (an automated inquiry voice for determining the destination) transmitted from the server SV when the server SV cannot determine, from the content of the destination voice alone, a destination sufficient for searching the route. The answer voice data of the embodiment is voice data corresponding to the passenger's answer voice to that inquiry voice. The destination confirmation voice data of the embodiment is voice data for confirming the point or the like that is finally used as the destination of the route.
On the other hand, since voice or sound is mainly used to guide the information regarding the movement, the guidance data transmitted from the server SV to each terminal device T includes voice data for guidance by voice or sound. In the following description, this voice data for guidance is simply referred to as "guidance voice data".
Next, the configuration and operation of each terminal device T and of the server SV will be described with reference to FIG. 3. First, as shown in FIG. 3(a), each terminal device T of the embodiment is composed of an interface 5; a processing unit 6 including a CPU, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like; a memory 7 including a volatile area and a non-volatile area; an operation unit 8 including a touch panel and operation buttons; a speaker 9; a sensor unit 10 including a GPS (Global Positioning System) sensor and/or a self-contained sensor; a display 11 such as a liquid crystal or organic EL (Electro Luminescence) display; and a microphone 12. The processing unit 6 includes a route search unit 6a and a guidance voice output control unit 6b. The route search unit 6a and the guidance voice output control unit 6b may each be realized by a hardware logic circuit including the CPU or the like constituting the processing unit 6, or may be realized in software by the CPU or the like reading out and executing a program corresponding to the flowchart showing the processing performed by the terminal device T in the navigation process of the embodiment described later.
In the above configuration, the interface 5 controls the transfer of data to and from the server SV via the network NW under the control of the processing unit 6. The sensor unit 10 uses the GPS sensor and/or the self-contained sensor to generate sensor data indicating the current position, moving speed, moving direction, and the like of the terminal device T (in other words, of the passenger using the terminal device T or of the vehicle that passenger is aboard), and outputs the sensor data to the processing unit 6. The microphone 12 collects the voice of the passenger using the terminal device T and the sound inside the vehicle in which the terminal device T is used, and outputs the collected sound to the processing unit 6. The passenger's voice collected at this time includes the destination voice and the answer voice.
Under the control of the processing unit 6, the route search unit 6a transmits the destination voice data (that is, destination voice data indicating the destination to be reached by the vehicle aboard which the passenger using the terminal device T is riding), the answer voice data, and the sensor data to the server SV via the interface 5 and the network NW as search data. Destination data indicating the destination may instead be input from the operation unit 8 and transmitted to the server SV together with the sensor data. Thereafter, the route search unit 6a acquires, from the server SV via the network NW and the interface 5, route data indicating the search result of the route from the current position indicated by the sensor data to the destination indicated by the destination voice data or the destination data.
After that, using the acquired route data, the processing unit 6 guides information on the movement of the vehicle along the searched route while exchanging the guidance data (including the sensor data at that time and the guidance voice data) with the server SV. At this time, the guidance voice output control unit 6b outputs (sounds) the guidance voice corresponding to the guidance voice data acquired from the server SV via the network NW and the interface 5 to the passenger via the speaker 9.
In parallel with these operations, when an input operation of data necessary for guidance of the vehicle, in addition to the destination data, is performed on the operation unit 8, the operation unit 8 generates an operation signal corresponding to the input operation and sends it to the processing unit 6. The processing unit 6 thereby executes the processing performed by the terminal device T in the navigation process of the embodiment while controlling the route search unit 6a and the guidance voice output control unit 6b. At this time, the processing unit 6 executes the processing while storing the data required for it in the memory 7, either temporarily or non-volatilely. Guidance images and the like resulting from the processing are displayed on the display 11.
 一方、図3(b)に示すように、実施例のサーバSVは、インターフェース20と、CPU、RAM及びROM等を含む処理部21と、HDD(Hard Disc Drive)又はSSD(Solid State Drive)等からなる記録部22と、により構成されている。また、処理部21は、経路設定部1と、案内音声生成部2と、を備えて構成されている。このとき、経路設定部1及び案内音声生成部2は、それぞれ、処理部21を構成する上記CPU等を含むハードウェアロジック回路により実現されるものであってもよいし、実施例のナビゲーション処理のうちサーバSVとしての処理を示すフローチャートに相当するプログラムを当該CPU等が読み出して実行することにより、ソフトウェア的に実現されるものであってもよい。そして、上記経路探索部1が実施形態の取得手段1の一例に相当し、上記案内音声生成部2が実施形態の提供手段2の一例に相当する。また、図3(b)において破線で示す通り、上記経路探索部1及び案内音声生成部2により、実施形態の情報提供装置Sの一例を構成している。 On the other hand, as shown in FIG. 3B, the server SV of the embodiment includes an interface 20, a processing unit 21 including a CPU, RAM, ROM, and the like, an HDD (Hard Disc Drive), an SSD (Solid State Drive), and the like. It is composed of a recording unit 22 and. Further, the processing unit 21 includes a route setting unit 1 and a guidance voice generation unit 2. At this time, the route setting unit 1 and the guidance voice generation unit 2 may be realized by a hardware logic circuit including the CPU or the like constituting the processing unit 21, respectively, or may be realized by the navigation process of the embodiment. Among them, the program may be realized by software when the CPU or the like reads and executes a program corresponding to a flowchart showing processing as a server SV. Then, the route search unit 1 corresponds to an example of the acquisition means 1 of the embodiment, and the guidance voice generation unit 2 corresponds to an example of the provision means 2 of the embodiment. Further, as shown by the broken line in FIG. 3B, the route search unit 1 and the guidance voice generation unit 2 constitute an example of the information providing device S of the embodiment.
 以上の構成において、記録部22には、ネットワークNWを介してサーバSVに接続されている各端末装置Tそれぞれを使用する搭乗者が搭乗している各車両の移動に関する情報の案内に必要な地図データ23a、実施例のパーソナルデータ23b並びに上記問合せ音声データ及び上記案内音声データ等を含むナビゲーション用データ23が、不揮発性に記録されている。このとき、上記地図データ23aには、経路探索及び経路案内にそれぞれ用いられる道路データ及び交差点データ等が含まれている。 In the above configuration, the recording unit 22 has a map necessary for guiding information on the movement of each vehicle on which the passenger using each terminal device T connected to the server SV via the network NW is on board. The navigation data 23 including the data 23a, the personal data 23b of the embodiment, the inquiry voice data, the guidance voice data, and the like are recorded non-volatilely. At this time, the map data 23a includes road data, intersection data, and the like used for route search and route guidance, respectively.
 一方、実施例のパーソナルデータ23bには、各端末装置Tを使用する搭乗者を識別するためのユーザIDに関連付けて、当該搭乗者の趣味や嗜好を示す趣味嗜好データ23baと、当該搭乗者が過去に訪問したことがある(即ち、当該搭乗者の移動における目的地となったことがある)地点を示す訪問履歴データ23bbと、が含まれている。このとき図3(c)左に例示するように、趣味嗜好データ23baは、上記ユーザIDにより識別される搭乗者の趣味や嗜好を示す情報が、当該ユーザIDに関連付けて記録されたデータである。なお、趣味嗜好データ23baの生成(記録)の方法としては、例えば、当該ユーザIDにより識別される搭乗者自身の登録操作により当該搭乗者ごとに記録されてもよいし、例えばその搭乗者が利用しているSNS(Social Networking Service)においてよく使用されている文言を収集して記録することで趣味嗜好データ23baとしてもよい。 On the other hand, in the personal data 23b of the embodiment, the hobby preference data 23ba indicating the hobby and preference of the passenger in association with the user ID for identifying the passenger who uses each terminal device T, and the passenger Includes visit history data 23bb, which indicates points that have been visited in the past (ie, have been destinations for the passenger's movements). At this time, as illustrated on the left side of FIG. 3C, the hobby / preference data 23ba is data in which information indicating the hobby or preference of the passenger identified by the user ID is recorded in association with the user ID. .. As a method of generating (recording) hobby / preference data 23ba, for example, it may be recorded for each passenger by the registration operation of the passenger himself / herself identified by the user ID, or is used by the passenger, for example. Hobby / preference data 23ba may be obtained by collecting and recording words that are often used in the SNS (Social Networking Service).
 一方、図3(c)右に例示するように、訪問履歴データ23bbは、上記ユーザIDにより示される搭乗者の過去の移動における目的地(即ち訪問地)となった地点の位置及び名称並びに例えば当該目的地に到達した日時を示す情報が、当該ユーザIDに関連付けて記録されたデータである。このとき、訪問履歴データ23bbの生成(記録)の方法としては、例えば、当該ユーザIDにより識別される搭乗者自身の過去の移動の度に、その目的地を示す上記情報を当該搭乗者ごとに記録することで訪問履歴データ23bbとするのが好適である。 On the other hand, as illustrated on the right side of FIG. 3C, the visit history data 23bb includes the position and name of a point that has become a destination (that is, a visited place) in the past movement of the passenger indicated by the user ID, and for example. The information indicating the date and time when the destination is reached is the data recorded in association with the user ID. At this time, as a method of generating (recording) the visit history data 23bb, for example, for each past movement of the passenger himself / herself identified by the user ID, the above information indicating the destination is provided for each passenger. It is preferable to record the visit history data 23bb.
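 As a minimal sketch of how the personal data 23b described above could be organised per user ID, the following Python structures mirror the hobby/preference data 23ba and the visit history data 23bb; all field names and the sample values are assumptions made for illustration.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class PreferenceRecord:        # hobby/preference data 23ba for one passenger
        user_id: str
        preferences: List[str]     # e.g. ["shrines and temples", "parks"]

    @dataclass
    class VisitRecord:             # one entry of visit history data 23bb
        user_id: str
        name: str                  # name of the past destination (visited place)
        latitude: float
        longitude: float
        visited_at: datetime       # date and time the destination was reached

    # Hypothetical records for a single passenger, keyed by user ID
    personal_data = {
        "U001": {
            "preferences": PreferenceRecord("U001", ["shrines and temples"]),
            "visits": [
                VisitRecord("U001", "X-Park", 35.68, 139.75, datetime(2019, 10, 5, 14, 30)),
            ],
        },
    }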
 Meanwhile, the interface 20 controls, under the control of the processing unit 21, the exchange of data with each terminal device T via the network NW. The route search unit 1, under the control of the processing unit 21, searches for the route to the destination indicated by the destination voice data using the navigation data 23, based on the destination voice data and the sensor data acquired from one of the terminal devices T, and transmits route data indicating the search result to the terminal device T that sent the destination voice data and the sensor data. Route guidance based on that route data is then performed on that terminal device T.
 During that guidance, the guidance voice generation unit 2 generates the guidance voice data in accordance with the guidance timing on the route and transmits it, via the interface 20 and the network NW, to the terminal device T used by the passenger of the vehicle being guided. The guidance voice corresponding to that guidance voice data is thereby output (emitted) to the passenger through the speaker 9 of that terminal device T.
 Next, the navigation processing of the example executed in the navigation system of the example having the above configuration and functions will be described concretely with reference to FIGS. 3 to 5.
 The navigation processing of the example is started, for example, when a guidance instruction operation indicating that guidance of movement along a route (that is, route guidance) should be performed for the vehicle that is the subject of the provision of movement information (hereinafter simply referred to as the "target vehicle") is executed on the operation unit 8 of the terminal device T of the example used by a passenger riding in that vehicle. In the following description, that terminal device T is referred to as the "target terminal device T" as appropriate. As shown by the overall flowchart in FIG. 4, when the guidance instruction operation is performed on the operation unit 8 of the target terminal device T, the route search unit 6a of the processing unit 6 of that terminal device T exchanges, with the route search unit 1 of the server SV, the search data including the destination voice data, the inquiry voice data, the answer voice data, the destination confirmation voice data, and the sensor data, as well as the route data, and searches for the route along which the target vehicle should move (step S1). The route search unit 1 of the server SV constantly waits for the search data to be transmitted from any of the terminal devices T connected to the server SV via the network NW at that time. When the search data is transmitted from the target terminal device T, the route search unit 1 performs a route search based on the destination voice data and the like included in the search data, and transmits the resulting route data and destination confirmation voice data to the target terminal device T via the network NW (step S10). The route search of the example including step S1 and step S10 will be described in detail later with reference to FIG. 5.
 Thereafter, when guidance of movement along the route is started by, for example, an operation on the operation unit 8 of the target terminal device T indicating that movement is to begin, the processing unit 6 of the target terminal device T and the processing unit 21 of the server SV start guidance along the route searched in steps S1 and S10, while exchanging, via the network NW, the guidance data including the sensor data at each point in time (step S2, step S11).
 During the route guidance started in step S11 (that is, while the vehicle carrying the passenger who uses the target terminal device T is moving), the guidance voice generation unit 2 of the server SV monitors whether there is a guidance point on the set route (for example, an intersection on the route at which to turn) (step S12). If, in the monitoring of step S12, there is no guidance point (step S12: NO), the processing unit 21 proceeds to step S14 described later. If there is a guidance point (step S12: YES), the guidance voice generation unit 2 generates, at the timing at which the guidance for that guidance point should be given by voice, guidance voice data containing the content to be announced for that guidance point (for example, "Turn left at the next XX intersection."), and transmits the generated guidance voice data to the target terminal device T via the network NW (step S13).
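 The monitoring of steps S12 and S13 can be pictured as the small loop below, which checks the distance from the current sensor position to the next unannounced guidance point and, once within an announcement range, produces the text from which the guidance voice data would be synthesised. The 300 m announcement range and the route_points structure are assumptions made for this sketch.

    import math

    def distance_m(lat1, lon1, lat2, lon2):
        # Approximate ground distance in metres (equirectangular approximation,
        # adequate at the scale of a few hundred metres).
        r = 6371000.0
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return r * math.hypot(x, y)

    def check_guidance_point(route_points, lat, lon, announce_within_m=300.0):
        # Steps S12/S13: if the vehicle has come within range of the next
        # guidance point, mark it announced and return the announcement text.
        for point in route_points:
            if point.get("announced"):
                continue
            d = distance_m(lat, lon, point["lat"], point["lon"])
            if d <= announce_within_m:
                point["announced"] = True
                return f"Turn {point['maneuver']} at the next {point['name']}."
            break  # only the nearest upcoming guidance point is considered
        return None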
 Thereafter, the processing unit 21 determines whether to end the route guidance as the navigation processing of the example, for example because the target vehicle has reached its destination (step S14). If, in the determination of step S14, the route guidance is not to be ended (step S14: NO), the processing unit 21 returns to step S12 and continues the route guidance. If the route guidance is to be ended (step S14: YES), the processing unit 21 ends the route guidance.
 On the other hand, after the guidance is started in step S2, the guidance voice output control unit 6b of the target terminal device T waits for the guidance voice data to be transmitted from the server SV (step S3). If no guidance voice data has been transmitted during the wait of step S3 (step S3: NO), the processing unit 6 of the target terminal device T proceeds to step S5 described later.
 If, during the wait of step S3, the guidance voice data is received from the server SV (step S3: YES), the guidance voice output control unit 6b of the target terminal device T outputs (emits) the guidance voice corresponding to the received guidance voice data through the speaker 9 (step S4). Thereafter, the processing unit 6 of the target terminal device T determines whether to end the route guidance as the navigation processing of the example, for example for the same reasons as in step S14 (step S5). If the route guidance is not to be ended (step S5: NO), the processing unit 6 returns to step S3 and continues the route guidance. If the route guidance is to be ended (step S5: YES), the processing unit 6 ends the route guidance.
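 On the terminal side, steps S3 to S5 amount to the simple receive-and-play loop sketched below; server.poll and speaker.play are placeholders for the reception over interface 5 and the output through speaker 9, and are assumptions of this sketch rather than part of the example.

    def terminal_guidance_loop(server, speaker, guidance_active):
        # Steps S3-S5: keep waiting for guidance voice data from server SV and
        # play each received item through speaker 9 until guidance ends.
        while guidance_active():                                       # step S5
            voice_data = server.poll("guidance_voice", timeout_s=1.0)  # step S3
            if voice_data is not None:
                speaker.play(voice_data)                               # step S4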
 Next, the route search of the example (see steps S1 and S10 in FIG. 4) executed by the target terminal device T and the server SV will be described more concretely with reference to FIG. 5.
 As shown by the corresponding flowchart in FIG. 5, in the route search of the example, when the passenger utters on the target terminal device T the name or the like of the destination together with an indication that a route search should be performed, that destination voice is detected by the microphone 12, and destination voice data corresponding to the destination voice is generated by the route search unit 6a of the target terminal device T. The generated destination voice data is then transmitted to the server SV via the network NW together with the sensor data for that point in time (step S15). An example of the destination voice is a voice containing a destination name such as "XX Stadium". In FIG. 5, the destination voice data and the sensor data are collectively labelled "destination voice data etc.".
 Meanwhile, the server SV, as the route search of step S10 in FIG. 4, waits for the destination voice data etc. to be transmitted from the target terminal device T (step S25, step S25: NO). When, during the wait of step S25, the destination voice data etc. is transmitted from the target terminal device T (step S25: YES), the route search unit 1 of the server SV determines whether the destination can be fixed as the route-search destination from, for example, the name of the destination indicated by the destination voice data received in step S25 alone (step S26). More specifically, the route search unit 1 refers to the map data 23a and the like and determines whether the name of the destination indicated by the destination voice data received in step S25 is unique (that is, cannot be confused with other facilities or the like). If, in the determination of step S26, the destination indicated by the destination voice data can be fixed as the route-search destination (step S26: YES), the route search unit 1 refers to the map data 23a and the like and to the sensor data received in step S25 and searches for a route from the current position of the target terminal device T to the fixed destination by, for example, a conventional method (step S30). The route search unit 1 then generates route data indicating the searched route and destination confirmation voice data for confirming the point or the like currently used as the destination of that route, and transmits them to the target terminal device T via the network NW (step S31). The server SV then proceeds to step S11 shown in FIG. 4.
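 The uniqueness check of step S26 can be sketched as a simple lookup against a name index built from the map data 23a; the poi_index structure is an assumption introduced for illustration.

    def resolve_destination(name, poi_index):
        # Step S26: poi_index maps a facility name to the list of facilities
        # carrying that name. A single hit fixes the route-search destination;
        # multiple hits leave a list of candidate sites to disambiguate.
        candidates = poi_index.get(name, [])
        if len(candidates) == 1:
            return candidates[0], []        # destination fixed (step S26: YES)
        return None, candidates             # ambiguous (step S26: NO)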
 On the other hand, if, in the determination of step S26, the destination cannot be fixed as the route-search destination from the destination name or the like indicated by the destination voice data alone (that is, when, for example, the name "XX Stadium" by itself yields a plurality of candidate sites; step S26: NO), the route search unit 1 refers to the personal data 23b corresponding to the passenger using the target terminal device T and searches, as the corresponding feature of the example, for at least one of: a facility that has an attribute indicated by the hobby/preference data 23ba corresponding to that passenger and that is located in the vicinity of one of the plurality of candidate sites (that is, within a preset predetermined distance from that candidate site; the same applies hereinafter), or a past destination (visited place) indicated by the visit history data 23bb corresponding to that passenger that is located in the vicinity of one of the plurality of candidate sites (step S27). The passenger using the target terminal device T is identified at the server SV as the user of the target terminal device T at the time the target terminal device T connects to the server SV. The guidance voice generation unit 2 of the server SV then generates the inquiry voice data including the corresponding feature found in step S27 and transmits it to the target terminal device T via the network NW (step S28).
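 The search of step S27 can be sketched as follows: for every candidate site, collect nearby features that either carry an attribute matching the passenger's hobby/preference data 23ba or appear in the passenger's visit history data 23bb. The 1,500 m radius, the poi_by_attribute index, and the dictionary shapes are assumptions; the visit entries reuse the VisitRecord shape from the personal-data sketch above.

    import math

    def distance_m(lat1, lon1, lat2, lon2):
        r = 6371000.0
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return r * math.hypot(x, y)

    def find_corresponding_features(candidates, preferences, visits,
                                    poi_by_attribute, radius_m=1500.0):
        # Step S27: per candidate site, gather corresponding features of two kinds.
        results = {}
        for cand in candidates:
            features = []
            # (a) facilities with a preferred attribute near this candidate
            for attr in preferences:
                for poi in poi_by_attribute.get(attr, []):
                    if distance_m(cand["lat"], cand["lon"],
                                  poi["lat"], poi["lon"]) <= radius_m:
                        features.append({"name": poi["name"], "kind": "preference"})
            # (b) past destinations (visited places) near this candidate
            for visit in visits:
                if distance_m(cand["lat"], cand["lon"],
                              visit.latitude, visit.longitude) <= radius_m:
                    features.append({"name": visit.name, "kind": "visit",
                                     "visited_at": visit.visited_at})
            results[cand["name"]] = features
        return results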
 Here, as the inquiry voice corresponding to the inquiry voice data generated and transmitted in step S28, when a facility having an attribute indicated by the hobby/preference data 23ba corresponding to the passenger (for example, the attribute "shrines and temples") exists near the plurality of candidate sites as the corresponding feature, the inquiry voice is, for example, "Is it the XX Stadium near XX Temple? Or the XX Stadium near the △△ Shrine?". When a past destination (visited place) indicated by the visit history data 23bb corresponding to the passenger exists near the plurality of candidate sites as the corresponding feature, the inquiry voice is, for example, "Is it the XX Stadium near the X△ Park you visited in [month] last year? Or the XX Stadium near the X△ Temple you visited in [month] this year?".
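 One possible way of phrasing the inquiry voice of step S28 from the features found in step S27 is sketched below, producing questions in the spirit of the examples just given; the wording templates are assumptions rather than the fixed phrasing of the example.

    def build_inquiry_text(features_by_candidate):
        # One disambiguating clause per candidate site, joined into a single question.
        phrases = []
        for candidate_name, features in features_by_candidate.items():
            if not features:
                continue
            f = features[0]       # one corresponding feature per candidate suffices
            if f["kind"] == "visit":
                when = f["visited_at"].strftime("%B %Y")
                phrases.append(f"the {candidate_name} near {f['name']}, "
                               f"which you visited in {when}")
            else:
                phrases.append(f"the {candidate_name} near {f['name']}")
        if not phrases:
            return None
        return "Is it " + "? Or ".join(phrases) + "?"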
 Meanwhile, after transmitting the destination voice data etc. in step S15, the route search unit 6a of the target terminal device T monitors whether the inquiry voice data has been transmitted from the server SV (step S16). If, in the monitoring of step S16, the inquiry voice data is not transmitted within a preset monitoring time after the transmission of the destination voice data (that is, when the destination indicated by the destination voice data could be fixed as the route-search destination in the determination of step S26; see step S26: YES) (step S16: NO), the route search unit 6a proceeds to step S19 described later. If, on the other hand, the inquiry voice data is transmitted within the monitoring time (step S16: YES), the guidance voice output control unit 6b of the target terminal device T outputs (emits) the inquiry voice corresponding to that inquiry voice data through the speaker 9 (step S17). The passenger's answer voice responding to the inquiry voice is then detected by the microphone 12, and answer voice data corresponding to that answer voice is generated by the route search unit 6a (step S18). The answer voice detected at this time contains a more detailed (or more specific) destination name or the like as the answer to the inquiry voice. The route search unit 6a then transmits the answer voice data to the server SV via the network NW (step S18).
 Next, the guidance voice generation unit 2 of the server SV, having transmitted the inquiry voice data in step S28, waits for the answer voice data to be transmitted from the target terminal device T (step S29, step S29: NO). When the answer voice data is transmitted (see step S18; step S29: YES), the route search unit 1 returns to step S26 and repeats the above processing in order to fix the route-search destination again using the destination name or the like indicated by the answer voice data. If the destination still cannot be fixed as the route-search destination even with the destination name or the like indicated by the answer voice data (step S26: NO), the processing of steps S27 to S29 is executed again in order to obtain an even more specific destination name or the like.
 Meanwhile, the route search unit 6a of the target terminal device T, having transmitted the answer voice data in step S18, monitors whether the route data and the destination confirmation voice data have been transmitted from the server SV in response to that answer voice data (step S19, step S19: NO). When, in the monitoring of step S19, the route data and the destination confirmation voice data are transmitted (step S19: YES), the guidance voice output control unit 6b of the target terminal device T outputs (emits) to the passenger, through the speaker 9, the destination confirmation voice corresponding to the received destination confirmation voice data, which includes the corresponding feature (see step S27) and the destination name and the like (step S20). The destination confirmation voice in this case is, for example, a voice such as "Heading to the XX Stadium near XX Temple.". The target terminal device T then proceeds to step S2 shown in FIG. 4.
 As described above, according to the navigation processing of the example, when information on the movement of the passenger using the target terminal device T is guided, the information on the destination associated with the corresponding feature is output (emitted) by voice based on the acquired destination voice data and the personal data 23b. Therefore, even with guidance by voice, the passenger can easily imagine the location of the destination, the route to the destination, or the surrounding information, in relation to a corresponding feature that has an attribute in line with the passenger's preferences (see FIG. 3(c)). Furthermore, the passenger can easily recognize whether the set destination is the point the passenger intended.
 Also, when a destination confirmation voice including the name and the like of the associated corresponding feature (see step S27 in FIG. 5) is output (emitted), the destination and the like can be guided more intelligibly, in relation to the corresponding feature, even with guidance by voice.
 Further, even when the associated corresponding feature (see step S27 in FIG. 5) is a past destination (visited place), the passenger can be made to imagine more easily the location of the destination, the route to the destination, or the surrounding information, in relation to that past destination (visited place).
 In the inquiry voice (see step S28 in FIG. 5) and the destination confirmation voice (see step S31 in FIG. 5), the positional relationship between the corresponding feature and the destination (for example, the distance between them, or that distance together with the direction of the destination as seen from the corresponding feature) may additionally be output by voice. In either case, even with guidance by voice, the passenger can be made to imagine more easily the location of the destination, the route to the destination, or the surrounding information, in relation to the corresponding feature.
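 If the distance and direction mentioned above are to be spoken, they can be computed from the coordinates of the corresponding feature and the destination roughly as follows; the rounding to 100 m and the eight-point compass wording are assumptions made for this sketch.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Initial bearing from the corresponding feature (point 1) to the destination (point 2).
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

    def compass_point(deg):
        names = ["north", "north-east", "east", "south-east",
                 "south", "south-west", "west", "north-west"]
        return names[int((deg + 22.5) // 45) % 8]

    def positional_phrase(feature_name, dest_name, dist_m, deg):
        # e.g. "XX Stadium is about 800 m to the north-east of XX Temple."
        return (f"{dest_name} is about {round(dist_m / 100) * 100} m "
                f"to the {compass_point(deg)} of {feature_name}.")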
 In the navigation processing of the example described above, the search for the corresponding feature (see step S27 in FIG. 5) and the generation of the inquiry voice data (see step S28 in FIG. 5) are performed in the processing unit 21 of the server SV. Alternatively, the search for the corresponding feature and the generation of the inquiry voice data may be performed in the target terminal device T. In that case, the destination and the route may subsequently be searched in cooperation with the server SV, based on the answer voice (that is, the answer voice containing a more specific destination name or the like) given in response to the inquiry voice corresponding to that inquiry voice data.
 The navigation processing of the example described above is applicable not only to vehicles but also to navigation for pedestrians. Furthermore, as regards guidance using the personal data 23b, route guidance is not essential; the guidance may be used simply to inform the user of the position of a searched point.
 Further, the navigation processing of the example does not exclude displaying points on a map, but since a display is not essential, it is also applicable to smart speakers and other devices that have no display unit.
 Furthermore, programs corresponding to the flowcharts shown in FIGS. 4 and 5 may be recorded on a recording medium such as an optical disc or a hard disk, or acquired via a network such as the Internet, and read out and executed by a general-purpose microcomputer or the like, whereby that microcomputer or the like can be made to function as the processing unit 6 or the processing unit 21 according to the example.
 1  Acquisition means (route search unit)
 2  Providing means (guidance voice generation unit)
 6, 21  Processing unit
 6a  Route search unit
 6b  Guidance voice output control unit
 9  Speaker
 23b  Personal data
 23ba  Hobby/preference data
 23bb  Visit history data
 S  Information providing device
 T, T1, T2, T3, Tn  Terminal device
 SV  Server
 SS  Navigation system

Claims (7)

  1.  An information providing device comprising:
     acquisition means for acquiring guide location information indicating a guide location for which guidance information is to be provided, and corresponding feature information indicating a corresponding feature that has an attribute in line with a preference of a recipient to whom the guidance information is provided and that exists within a predetermined range from the position of the guide location; and
     providing means for providing, when the guidance information is provided to the recipient, the guidance information on the guide location associated with the corresponding feature to the recipient by sound, based on the acquired guide location information and corresponding feature information.
  2.  The information providing device according to claim 1, wherein
     the providing means provides, by the sound, the guidance information including at least the name of the corresponding feature associated with the guide location.
  3.  The information providing device according to claim 1 or 2, wherein
     the providing means provides, by the sound, the guidance information including at least direction information indicating the direction of the guide location as seen from the position of the corresponding feature associated with the guide location.
  4.  The information providing device according to any one of claims 1 to 3, wherein
     the providing means provides, by the sound, the guidance information including at least distance information indicating the distance between the corresponding feature associated with the guide location and the guide location.
  5.  An information providing device comprising:
     acquisition means for acquiring guide location information indicating a guide location for which guidance information is to be provided, and corresponding feature information indicating a corresponding feature that the recipient to whom the guidance information is provided has travelled to in the past and that exists within a predetermined range from the position of the guide location; and
     providing means for providing, when the guidance information is provided to the recipient, the guidance information on the guide location associated with the corresponding feature to the recipient by sound, based on the acquired guide location information and corresponding feature information.
  6.  An information providing method executed in an information providing device comprising acquisition means and providing means, the method comprising:
     an acquisition step of acquiring, by the acquisition means, guide location information indicating a guide location for which guidance information is to be provided, and corresponding feature information indicating a corresponding feature that has an attribute in line with a preference of a recipient to whom the guidance information is provided and that exists within a predetermined range from the position of the guide location; and
     a providing step of providing, when the guidance information is provided to the recipient, the guidance information on the guide location associated with the corresponding feature to the recipient by sound by the providing means, based on the acquired guide location information and corresponding feature information.
  7.  An information providing program causing a computer to function as:
     acquisition means for acquiring guide location information indicating a guide location for which guidance information is to be provided, and corresponding feature information indicating a corresponding feature that has an attribute in line with a preference of a recipient to whom the guidance information is provided and that exists within a predetermined range from the position of the guide location; and
     providing means for providing, when the guidance information is provided to the recipient, the guidance information on the guide location associated with the corresponding feature to the recipient by sound, based on the acquired guide location information and corresponding feature information.
PCT/JP2021/000992 2020-03-27 2021-01-14 Information providing device, information providing method and information providing program WO2021192519A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022509297A JPWO2021192519A1 (en) 2020-03-27 2021-01-14

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020058567 2020-03-27
JP2020-058567 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021192519A1 true WO2021192519A1 (en) 2021-09-30

Family

ID=77891324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/000992 WO2021192519A1 (en) 2020-03-27 2021-01-14 Information providing device, information providing method and information providing program

Country Status (2)

Country Link
JP (1) JPWO2021192519A1 (en)
WO (1) WO2021192519A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06284212A (en) * 1993-03-29 1994-10-07 Nippon Telegr & Teleph Corp <Ntt> Method and device for guiding object place
JPH10213448A (en) * 1997-01-30 1998-08-11 Nippon Telegr & Teleph Corp <Ntt> Method and device for automatic creation of path guiding text
JP2000046576A (en) * 1998-07-29 2000-02-18 Nec Corp Device and method for guiding moving body and machine- readable recording medium where program is recorded
JP2003121189A (en) * 2001-10-09 2003-04-23 Hitachi Ltd Guidance information providing method and executing apparatus thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FUJII, KENSAKU: "Location Guidance Text Generation Method Emphasizing Positional Relationships", PROCEEDINGS OF THE 1997 IEICE GENERAL CONFERENCE (INFORMATION & SYSTEMS), 29 September 1997 (1997-09-29) *

Also Published As

Publication number Publication date
JPWO2021192519A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
CN102027325B (en) Navigation apparatus and method of detection that a parking facility is sought
US20110130953A1 (en) Navigation equipment and navigation system
CN110542427A (en) Information processing apparatus, information processing method, and information processing system
US20200175446A1 (en) System and method for managing taxi dispatch, and program for controlling taxi dispatch requests
JP6563451B2 (en) Movement support apparatus, movement support system, movement support method, and movement support program
JP2009180675A (en) Mobile terminal apparatus, information management server, information control method, information management method, information collection program, information management program, and recording medium
CN113390434A (en) Information processing apparatus, non-transitory storage medium, and system
JP2019105516A (en) Destination estimation device, destination estimation system and destination estimation method
WO2021192519A1 (en) Information providing device, information providing method and information providing program
WO2021192520A1 (en) Information providing device, information providing method and information providing program
JP2017111497A (en) Traveler position information confirmation system, and traveler position information confirmation method
JP2022153363A (en) Server device, information processing method, and server program
JP7076766B2 (en) Information processing system, information processing program, information processing device and information processing method
JP2009258026A (en) Navigation device and navigation system
WO2007105423A1 (en) Information acquisition assisting device, information acquisition assisting method, information acquisition assisting program, and recording medium
JP7439572B2 (en) Self-propelled robot, guidance method, and guidance program
US20220163345A1 (en) Information processing apparatus, information processing method, and non-transitory storage medium
JP6604023B2 (en) Information processing system, control method and program thereof, and navigation management server, control method and program thereof
JP5831936B2 (en) In-vehicle device system and in-vehicle device
JP7420661B2 (en) Vehicle, information processing device, vehicle control method, information processing device control method, and program
JP2005337867A (en) Portable terminal and server device
JP2018124293A (en) Information processing device
WO2021192522A1 (en) Information provision device, information provision method and information provision program
JP7117282B2 (en) Output system, its control method, and program
WO2021192521A1 (en) Information providing device, information providing method, and information providing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21774066

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022509297

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21774066

Country of ref document: EP

Kind code of ref document: A1