WO2014196038A1 - Device for processing information through line-of-sight detection and information processing method - Google Patents

Device for processing information through line-of-sight detection and information processing method

Info

Publication number
WO2014196038A1
Authority
WO
WIPO (PCT)
Prior art keywords: search, information, user, unit, display
Application number: PCT/JP2013/065610
Other languages: French (fr), Japanese (ja)
Inventor: 成晃 竹原
Original assignee: 三菱電機株式会社 (Mitsubishi Electric Corporation)
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2013/065610
Publication of WO2014196038A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3679 Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities

Definitions

  • The present invention relates to an information processing apparatus that specifies a search object by line-of-sight detection and presents detailed information about it, in a moving body such as a vehicle.
  • At present, methods by which a driver of a vehicle or the like can designate an object outside the vehicle as a search object have not been sufficiently developed. For example, to find out what a building 500 m ahead is using a car navigation device's search function, the driver must narrow down candidates by pressing search buttons on a touch panel or the like.
  • To address this, Patent Document 1 discloses a navigation device in which, when the driver asks a question such as "What is that?" about an object being looked at, such as a building or sightseeing spot, detailed information about the building or the like is displayed on a display by means of voice recognition and gaze detection.
  • However, with an apparatus such as that of Patent Document 1, it is difficult to narrow the search range unless the search words are specified in detail. Even when search results are returned from a sufficiently wide search area, it is difficult for the driver to judge them while driving. Furthermore, even with a recent CPU, if the search area is too large the search takes time and real-time performance suffers.
  • The present invention has been made to solve the above problems, and an object thereof is to provide an information processing apparatus based on line-of-sight detection with which a user in a moving body such as a vehicle can easily specify a search object and obtain detailed information about it.
  • To achieve the above object, the present invention provides an information processing apparatus based on detection of the line of sight of a user of a moving body, comprising: a display that displays various types of information superimposed on the user's field of view; a search request determination unit that determines whether the user has made a search request; a coordinate calculation unit that, when the search request determination unit determines that there is a search request, calculates the coordinate position on a map of the object in the user's line-of-sight direction as the search object, based on at least the current position of the moving body and recognition of the user's line of sight; a superimposed display control unit that, based on the coordinate position of the search object calculated by the coordinate calculation unit and map data, superimposes on the display a search mark highlighting the search object; a search processing unit that, based on the coordinate position of the search object calculated by the coordinate calculation unit and the map data, searches for detailed information on the search object; and an output information creation unit that generates output information for presenting to the user the detailed information on the search object found by the search processing unit.
  • According to the present invention, detecting the line of sight of a user in a moving body such as a vehicle makes it possible to specify a search object easily and display it on a display, such as a HUD (head-up display), that superimposes information on the user's field of view, so the detailed information of the search object can be presented to the user efficiently.
  • FIG. 1 is a block diagram illustrating an example of the information processing apparatus in Embodiment 1.
  • FIGS. 2 to 5, 7, and 8 are tables showing examples of the information handled by the voice recognition device, the gesture recognition device, and the coordinate calculation unit.
  • FIG. 14 is a block diagram illustrating an example of the information processing apparatus in Embodiment 2.
  • FIG. 15 is a block diagram illustrating an example of the information processing apparatus in Embodiment 3.
  • FIG. 16 is a diagram illustrating an example of a transition image of processing in the information processing apparatus according to Embodiment 3.
  • FIG. 17 is a flowchart illustrating the process of presenting detailed information using line-of-sight recognition in the information processing apparatus according to Embodiment 3.
  • FIG. 18 is a diagram showing an outline of the information processing system (navigation system) in Embodiment 4.
  • In a moving body such as a vehicle, the apparatus described below specifies a search object by detecting the line of sight of the user of the moving body, and presents detailed information on a display, such as a head-up display (HUD) or head-mounted display (HMD), that superimposes information on the user's field of view.
  • In the following embodiments, a navigation device mounted on a moving body such as a vehicle is described as an example. However, the invention may also be applied to a navigation device for a moving body other than a vehicle, or to a server of a navigation system. It may likewise be applied to a navigation application installed on a portable information terminal such as a smartphone, tablet PC, or mobile phone.
  • FIG. 1 is a block diagram showing an example of an information processing apparatus according to Embodiment 1 of the present invention.
  • As shown in FIG. 1, the information processing apparatus includes a search request determination unit 11, a coordinate calculation unit 12, a search processing unit 13, a storage unit 14, an output information creation unit 15, and a superimposed display control unit 16, and is connected to at least a head-up display (HUD) 18.
  • The search request determination unit 11 recognizes the content of a search trigger from the information obtained from a voice recognition device 1 that recognizes speech uttered by the user, a physical switch 2 provided on the steering wheel or the like, and a gesture recognition device 3 that can detect the movements of the user (driver) with a camera or optical sensor facing the user. It determines whether a search request has been made and notifies the coordinate calculation unit 12 of the determination result.
  • Since the technique by which the voice recognition device 1 recognizes the words uttered by the user (driver) is well known, its description is omitted here.
  • The voice recognition device 1 recognizes the content of a search trigger as shown in FIG. 2 to determine whether a search request has been made.
  • Words such as "what is that", "what would that be", "search", and "tell me" are stored as search trigger words, and it is determined whether the recognized speech contains one of them.
  • Position specifying words include words specifying a direction, such as "right" or "left", words specifying a facility type, such as "hospital" or "temple", and demonstratives suggesting a distance, such as "that" or "this".
  • For the search coordinates, a position specifying word accompanying such a search trigger word may also be recognized together.
  • Position specifying words may also be used in combination; for example, when an utterance contains several words from which a position can be specified, such as "What is that temple on the right?", all of them may be notified to the coordinate calculation unit 12 (see the sketch below).
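  • As a concrete illustration of this table lookup, the following is a minimal sketch in Python; the word lists and the returned structure are illustrative assumptions, not part of the patent:

      # Hypothetical sketch of the search-trigger lookup of FIG. 2.
      # The word lists are illustrative; the patent does not fix an exact set.
      TRIGGER_WORDS = ["what is that", "what would that be", "search", "tell me"]
      DIRECTION_WORDS = ["right", "left", "ahead"]
      FACILITY_WORDS = ["hospital", "temple", "bank"]
      DISTANCE_WORDS = ["that", "this"]  # demonstratives hinting at distance

      def judge_search_request(utterance: str):
          """Return None when no trigger word is present; otherwise return
          every position-specifying word found, so that all of them can be
          notified to the coordinate calculation unit 12."""
          text = utterance.lower()
          if not any(w in text for w in TRIGGER_WORDS):
              return None
          positions = [w for w in DIRECTION_WORDS + FACILITY_WORDS + DISTANCE_WORDS
                       if w in text]
          return {"trigger": True, "position_words": positions}

      # judge_search_request("What is that temple on the right?")
      # -> {'trigger': True, 'position_words': ['right', 'temple', 'that']}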
  • There may be one or more switches 2. For example, three switches may be arranged side by side: the left switch is pressed to search for an object ahead on the left, the center switch for an object straight ahead, and the right switch for an object ahead on the right.
  • The gesture recognition device 3 detects the movements of the user (driver) with a camera or optical sensor facing the user. For example, it recognizes the content of a search trigger as shown in FIG. 3 to determine whether a search request has been made: as a search trigger, it determines whether the user has blinked a specified number of times; as a search trigger with position specification, it determines whether the user has pointed with a finger or with the chin.
  • In this way, the search request determination unit 11 evaluates the information obtained from the voice recognition device 1, the switch 2, and the gesture recognition device 3, and notifies the coordinate calculation unit 12 of the determination results.
  • The information from the voice recognition device 1, the switch 2, and the gesture recognition device 3 may also be used in combination.
  • In that case, each position specification result may be scored, the approximate direction of the search object determined, and the coordinate calculation unit 12 notified.
  • When a search request is notified, the coordinate calculation unit 12 acquires information such as the current location, traveling direction, and traveling speed, as shown in FIG. 4, from a GPS (Global Positioning System) receiver 4, and calculates and stores the vehicle position. The vehicle position is basically calculated from the current location information. However, since the human field of view narrows as speed increases, the traveling speed is also used so that very close places are not searched during high-speed travel (as in the sketch below), and the traveling direction is used to estimate in which direction a search is likely to be requested from the vehicle position.
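  • The speed-dependent narrowing might, for example, look like the following sketch; the thresholds are invented for illustration, since the patent only states the principle that very close places are excluded during high-speed travel:

      # Illustrative only: the minimum search distance grows with speed,
      # reflecting the narrowing of the driver's field of view.
      def min_search_distance_m(speed_kmh: float) -> float:
          if speed_kmh < 30:
              return 0.0   # in slow traffic, even nearby objects are candidates
          if speed_kmh < 80:
              return 20.0  # ignore objects closer than about 20 m
          return 50.0      # at highway speed, ignore anything under about 50 m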
  • The coordinate calculation unit 12 also acquires gaze information from a line-of-sight recognition device 5, for example the line-of-sight direction and the relative positional relationship with the display device, as shown in FIG. 5. Since the technique by which the line-of-sight recognition device 5 recognizes the user's line of sight is well known, detailed description is omitted here; in brief, a line-of-sight vector can be calculated from the position of each eyeball and the direction in which the left and right eyeballs are facing.
  • The distance from the user (driver) to the object can then be measured by calculating the intersection of the two vectors.
  • In other words, the line-of-sight recognition device 5 can measure the positional relationship between the object and the user (driver). Accuracy can be improved further by using the line-of-sight information in time series.
  • FIG. 6 is a schematic explanatory diagram illustrating the positional relationship between the user's eyeball position, the line-of-sight recognition device, and the search target.
  • In FIG. 6, reference numerals 21 indicate the left and right eyeballs of the user 20, reference numeral 22 indicates the eyeball position measurement area, and reference numeral 23 indicates the gaze-measurable direction.
  • First, the line-of-sight recognition device 5 measures the distance L1 to the eyeball position A by means of a stereo camera, or by pattern light and image recognition. A calibration plane B is set at a fixed distance L2 from the line-of-sight recognition device 5, and calibration is performed. The directions in which the left and right eyeballs 21 of the user 20 are facing are then measured using infrared light and image recognition (for example, the corneal reflection method). As a result, since the positions and directions of the left and right eyeballs 21 are known, a line-of-sight direction vector can be calculated for each eye.
  • Letting d1 be the separation of the two gaze rays at the eyeball position A of the user 20 (the distance between the eyeballs) and d2 be their separation on the calibration plane B, the distance X between the eyeball position A and the search object 80 (object position C) is obtained as
      X = (d1 / (d1 - d2)) × (L1 + L2)
    A reconstruction of the derivation follows.
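  • The formula follows from similar triangles, as the following reconstruction shows; d1 and d2 are the gaze-ray separations assumed above:

      % Separation of the two gaze rays at distance t from the eyes:
      % s(0) = d_1 at the eyes and s(L_1 + L_2) = d_2 on plane B, so
      \[
      s(t) = d_1 - \frac{(d_1 - d_2)\, t}{L_1 + L_2},
      \qquad
      s(X) = 0 \;\Rightarrow\;
      X = \frac{d_1}{d_1 - d_2}\,(L_1 + L_2)
      \]
      % i.e. the rays converge on the object exactly where their separation vanishes.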
  • Alternatively, since the eyeball positions and line-of-sight directions are known, a method of calculating the intersection coordinates of the two gaze vectors using the outer (cross) product may be used; a sketch of this variant follows.
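  • Since two measured gaze rays are rarely exactly coplanar, a common approach is to return the midpoint of the shortest segment between them; the NumPy interface below is an assumption for illustration:

      import numpy as np

      def gaze_intersection(p_left, v_left, p_right, v_right):
          """Approximate intersection of the two gaze rays.
          p_*: eyeball positions, v_*: line-of-sight direction vectors.
          Returns the midpoint of the shortest segment between the rays,
          computed with the cross (outer) product construction."""
          p1, d1 = np.asarray(p_left, float), np.asarray(v_left, float)
          p2, d2 = np.asarray(p_right, float), np.asarray(v_right, float)
          n = np.cross(d1, d2)                     # normal to both rays
          denom = np.dot(n, n)
          if denom < 1e-12:                        # rays (nearly) parallel
              return None
          w = p2 - p1
          t1 = np.dot(np.cross(d2, n), w) / denom  # parameter on left ray
          t2 = np.dot(np.cross(d1, n), w) / denom  # parameter on right ray
          return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0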
  • Although the distance can be calculated as above, its accuracy alone is not sufficient.
  • Since the accuracy of the direction is high, it suffices to rely mainly on the direction and to improve the accuracy of the distance to the search object by also using a distance measuring sensor such as a front camera or millimeter-wave radar.
  • The line-of-sight recognition device 5 is assembled into and fixed to the vehicle (moving body). Map information, geomagnetic sensor information, high-accuracy GPS information, and the like can also be acquired.
  • Therefore, the coordinates of the line-of-sight recognition device 5 are regarded as the same as the GPS position, and the direction of the vehicle is measured by a geomagnetic sensor. Since the distance and direction to the search object 80 can be measured as described above, the coordinates of the search object 80 are calculated by adding that distance and direction to the GPS position, as in the sketch below.
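  • A minimal sketch of this last step, assuming a flat-earth approximation that is adequate over a few hundred metres (the patent does not prescribe a projection):

      import math

      def search_object_coords(lat, lon, heading_deg, gaze_offset_deg, distance_m):
          """Offset the GPS position by the measured gaze distance and direction.
          heading_deg: vehicle heading from the geomagnetic sensor (0 = north);
          gaze_offset_deg: gaze direction relative to the vehicle axis."""
          bearing = math.radians(heading_deg + gaze_offset_deg)
          dlat = distance_m * math.cos(bearing) / 111_320.0  # metres per degree of latitude
          dlon = (distance_m * math.sin(bearing)
                  / (111_320.0 * math.cos(math.radians(lat))))
          return lat + dlat, lon + dlon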
  • In addition, the coordinate calculation unit 12 acquires information from an in-vehicle device 7, such as the vehicle speed, the steering angle, and whether the vehicle is traveling, as shown in FIG. 7.
  • The in-vehicle device 7 includes at least a device that detects the speed of the moving body, and the acquired vehicle speed and steering angle are used to narrow the search range.
  • The determination of whether the vehicle is traveling is used to impose restrictions, such as disabling search while the vehicle is in motion.
  • The coordinate calculation unit 12 also acquires, from a sensor 6 such as a camera or radar that detects information around the moving body, information such as the distance to the target as shown in FIG. 8, character recognition of the target, and identification of targets that are not facilities (such as a car driving ahead).
  • With radar, the distance to the search object can be measured; with a camera, characters and logos drawn on the search object can be recognized. The camera is also used to determine whether something not described on the map, such as a car or motorcycle, has been designated.
  • The coordinate calculation unit 12 receives the information from the GPS 4, the line-of-sight recognition device 5, the sensor 6 such as a camera, and the in-vehicle device 7, together with the notification from the search request determination unit 11, and calculates the position coordinates of the search object. For example, when a search request is received from the search request determination unit 11, the vehicle position coordinates are obtained from the GPS 4, and the direction and distance of the object toward which the line of sight is directed are measured by the line-of-sight recognition device 5; from these, the coordinates of the search object are calculated.
  • The information from the sensor 6 such as a camera and from the in-vehicle device 7 is used to improve accuracy.
  • Multiple pieces of information may be used in combination.
  • A mechanism may also be provided in which each piece of information is tabulated and scored to determine whether it is useful.
  • In short, the coordinate calculation unit 12 calculates, based on at least the current position of the moving body acquired from the GPS 4 and the user's line-of-sight recognition by the line-of-sight recognition device 5, the coordinate position on the map of the object in the line-of-sight direction as the search object.
  • The superimposed display control unit 16 is notified of the search coordinates (the coordinates of the search object) by the coordinate calculation unit 12, and instructs the head-up display (HUD) 18 to superimpose a search mark emphasizing the search object at the position of those coordinates on the map data displayed on the HUD 18.
  • The search mark emphasizing the search object may be a frame such as a rectangle or circle, an arrow, or the like. Any shape may be used as long as the user can confirm both the building or other feature on the map data displayed on the HUD 18 and the position of the search object identified by line-of-sight recognition, and can recognize which building or other object is the search target.
  • The HUD 18 is a display device whose image overlaps the field of view of the user (driver). On receiving an instruction from the superimposed display control unit 16, it superimposes the search mark highlighting the search object on the building, road, or other feature in the user's field of view.
  • Although Embodiment 1 is described using a head-up display (HUD), any display that can superimpose various information on the user's field of view may be used, for example a head-mounted display (HMD).
  • The search processing unit 13 receives the search coordinates (the coordinates of the search object) from the coordinate calculation unit 12 and, based on the map data stored in the storage unit 14, searches for facility information near the search coordinates, that is, the detailed information of the search object. When a facility type is specified by voice or the like at the time of the search, the search condition may be narrowed accordingly. If a search result exists, the search processing unit 13 notifies the output information creation unit 15 of the facility information to be presented to the user (driver); if no facility is found, it notifies the output information creation unit 15 that no search result exists.
  • If multiple facilities are found, the facility closest to the search coordinates is notified as the search result.
  • Alternatively, the user may use the voice recognition device 1 to specify, for example, "the facility in front" or "the facility behind", and an additional search may then be performed.
  • The output information creation unit 15 creates voice information or display information as the output of the facility information (the detailed information of the search object) found by the search processing unit 13. When there is no search result, it creates the voice or display string of a phrase such as "No search results".
  • The output device 8 outputs the information created by the output information creation unit 15 by voice or on a display. In the case of display output, the HUD 18 may serve as the output device 8.
  • FIG. 9 is a flowchart showing input recognition processing performed by each of the voice recognition device 1, the switch 2, and the gesture recognition device 3.
  • First, the voice recognition process shown in FIG. 9(a) will be described. If some search trigger has already been accepted and a search is in progress, or if voice is being output (YES in step ST1), voice recognition is not performed. If no search is in progress and no voice is being output (NO in step ST1), the voice is acquired as a waveform (step ST2) and the voice information is recognized from the acquired waveform (step ST3). Since voice recognition methods are well known, their description is omitted here; any of various methods may be used.
  • In step ST4, it is determined whether a search trigger word is included in the recognized voice information. If one is included (YES in step ST4), a search request notification is issued (step ST5); if not (NO in step ST4), the process returns to the beginning without further processing. A sketch of this flow follows.
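  • Rendered as code, the flow of FIG. 9(a) might look like the sketch below; the busy flags and device methods are hypothetical names standing in for the components described above:

      # Hypothetical rendering of steps ST1 to ST5 in FIG. 9(a).
      def voice_recognition_cycle(system):
          if system.search_in_progress or system.voice_output_active:
              return                              # ST1: YES, skip recognition
          waveform = system.acquire_waveform()    # ST2: capture the audio
          text = system.recognize(waveform)       # ST3: speech-to-text
          if contains_trigger_word(text):         # ST4: FIG. 2 lookup (sketched earlier)
              system.notify_search_request(text)  # ST5: notify the determination unit
          # ST4 NO: simply return to the beginning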
  • Next, the switch recognition process shown in FIG. 9(b) will be described.
  • Here too, if a search is already in progress or voice is being output (YES in step ST11), the switch recognition process is not performed (switch input is not accepted). If no search is in progress and no voice is being output (NO in step ST11), the switch information is recognized (step ST12).
  • In step ST13, it is determined whether the search request switch has been pressed. If it has (YES in step ST13), a search request notification is issued (step ST14); if each switch has a role, the role of the pressed switch is added to the notification. If the search request switch has not been pressed (NO in step ST13), the process returns to the beginning without further processing.
  • Finally, the gesture recognition process shown in FIG. 9(c) will be described. Here too, if some search trigger has already been accepted and a search is in progress, or if voice is being output (YES in step ST21), the gesture recognition process is not performed. Otherwise (NO in step ST21), gesture information such as camera or optical sensor data is acquired (step ST22), and the gesture performed is recognized (step ST23). Since gesture recognition methods are well known, their description is omitted here; any of various methods may be used.
  • In step ST24, it is determined whether a search trigger gesture is included in the recognized gesture information. If it is (YES in step ST24), a search request notification is issued (step ST25); if not (NO in step ST24), the process returns to the beginning without further processing.
  • FIG. 10 is a flowchart showing coordinate calculation processing in the coordinate calculation unit 12.
  • First, current location information is acquired from the GPS 4 (step ST31); this establishes the vehicle position.
  • Next, line-of-sight information is acquired from the line-of-sight recognition device 5 (step ST32).
  • Then, sensor information is acquired from the sensor 6, such as a front camera or radar (step ST33). The direction and distance at which the user (driver) is looking are calculated from the line-of-sight information, and the accuracy of the distance is improved with the sensor information.
  • The coordinates of the search object are calculated from the vehicle position, direction, and distance obtained from the current location information, line-of-sight information, and sensor information (step ST34), and the search coordinates are notified to the superimposed display control unit 16 and the search processing unit 13 (step ST35).
  • In this way, the coordinate position of the search object is calculated with high accuracy.
  • Note, however, that the coordinate position of the search object can be calculated as long as at least the current position of the moving body and the user's line-of-sight recognition are available. That is, the coordinate position on the map of the object in the user's line-of-sight direction may be calculated based on at least those two inputs, as in the orchestration sketch below.
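  • The flow of steps ST31 to ST35 could be orchestrated as in the following sketch, reusing the coordinate helper sketched earlier; all names are illustrative:

      # Hypothetical orchestration of steps ST31 to ST35 in FIG. 10.
      def calculate_search_coordinates(gps, gaze, sensor):
          lat, lon, heading = gps.current_position()           # ST31: own position
          direction, distance = gaze.direction_and_distance()  # ST32: line of sight
          refined = sensor.refine_distance(distance)           # ST33: e.g. radar
          coords = search_object_coords(lat, lon, heading,     # ST34: map position
                                        direction, refined)
          return coords                                        # ST35: notify both units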
  • FIG. 11 is a flowchart showing a superimposed display process in the superimposed display control unit 16.
  • When the search coordinates are received from the coordinate calculation unit 12, the HUD 18 is instructed to superimpose the search mark, that is, to display the search object surrounded by a rectangular search mark or the like (step ST43).
  • As a result, the user does not have to add words specifying the facility to narrow the search, such as "What is the white building on the right side?". Furthermore, the user can obtain detailed information while visually confirming that the object to be searched is surrounded by a rectangle or the like (that the search mark is superimposed on the object).
  • FIG. 12 is a flowchart showing search processing in the search processing unit 13.
  • When the search coordinates are notified, the facility information at the search coordinates is read from the map data stored in the storage unit 14 based on the coordinate data of the search object included in the notification (step ST46).
  • "Facility information at the search coordinates" here means information on surrounding facilities within a predetermined range of the search coordinates.
  • If there is no facility at the position corresponding to the search coordinates, the facility information existing at the position closest to the search coordinates may be read instead.
  • If a condition such as a facility type has been specified, the search may be narrowed on that basis. A minimal lookup sketch follows.
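  • A minimal lookup consistent with this description might be as follows; the flat facility list, the 50 m radius, and the distance helper are assumptions, and a real navigation database would use a spatial index:

      # Illustrative lookup: facilities within a radius, else the nearest one.
      def facilities_at(search_coords, facilities, radius_m=50.0):
          """facilities: iterable of (name, details, (lat, lon)) tuples;
          distance_m(a, b) is an assumed helper returning metres."""
          ranked = sorted(facilities,
                          key=lambda f: distance_m(search_coords, f[2]))
          if not ranked:
              return None  # reported to the user as "no search result"
          nearby = [f for f in ranked
                    if distance_m(search_coords, f[2]) <= radius_m]
          return nearby if nearby else [ranked[0]]  # fall back to the closest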
  • If a search result exists, the search result is attached and the output information creation unit 15 is notified of the search result output; if no facility information exists at the search coordinates, information indicating that no search result exists is attached and the output information creation unit 15 is notified (step ST47).
  • From the search result information notified by the search processing unit 13, the output information creation unit 15 generates the found facility information, or a notice that no search result exists, as a voice or display string and outputs it to the output device 8. That is, when the output device 8 is a voice output device, speech is generated and output as sound; when it is a display device, a display string is generated and displayed.
  • FIG. 13 is a diagram illustrating a transition image example of processing of the information processing apparatus according to the first embodiment.
  • As shown in FIG. 13(a), suppose that while the vehicle is traveling, the user looks at a building along the line of sight 81 and utters "What is that building?".
  • The voice recognition device 1 acquires the utterance as voice information and performs voice recognition (steps ST2 to ST3). By referring to the table shown in FIG. 2, it determines that a search trigger word is included (YES in step ST4) and notifies a search request (step ST5).
  • Next, the coordinate calculation unit 12 calculates the position coordinates of the search object 80 based on the current location information from the GPS 4, the gaze information from the line-of-sight recognition device 5 (information on the line-of-sight position 81 in FIG. 13(a)), the information from the sensor 6 such as a camera, and the information from the in-vehicle device 7, and notifies the search coordinates to the superimposed display control unit 16 and the search processing unit 13 (steps ST31 to ST35 in FIG. 10).
  • When the superimposed display control unit 16 receives the search coordinates of the search object from the coordinate calculation unit 12, it refers to the map data stored in the storage unit 14, matches the map information (facility information) with the object position (search coordinates), and instructs the HUD 18 to superimpose the search mark (steps ST41 to ST43 in FIG. 11).
  • In response to the instruction from the superimposed display control unit 16, as shown in FIG. 13(b), a search mark 85 highlighting the object in the user's line of sight is superimposed on the building or road displayed on the HUD 18. In FIG. 13(b), a rectangular frame is adopted as the search mark 85.
  • When the search processing unit 13 receives the search coordinates of the search object from the coordinate calculation unit 12, it refers to the map data stored in the storage unit 14, reads the facility information at the search coordinates, and notifies the output information creation unit 15 of the search result output (steps ST46 to ST47 in FIG. 12).
  • The output information creation unit 15 then generates a voice or display string for outputting the facility information at the search coordinates. For example, as shown in the voice output or display output 83 in FIG. 13(b), a string indicating that the building is a bank is output to the output device 8 (voice output device or display device) and presented to the user (driver).
  • If, as shown by the utterance 82 in FIG. 13(b), the user (driver) further utters a question about business hours, the voice recognition device 1 recognizes the speech, the search processing unit 13 further reads the information on the previous search object (the facility information at the search coordinates), and, as shown in the voice output or display output 83' in FIG. 13(c), the string "Business hours are from 9:00 AM to 5:00 PM" is output to the output device 8 (voice output device or display device) and presented to the user (driver).
  • If the line-of-sight position 81 deviates slightly from the search object 80, facility information at a position close to the line-of-sight position 81 may be presented, or the user (driver) may be allowed to choose from several candidates.
  • Also, when information on a different facility is presented, the facility information of the intended object can be presented by allowing the user (driver) to say "the one in front" or "the one behind".
  • As described above, according to Embodiment 1, detecting the user's line of sight in a moving body such as a vehicle makes it possible to specify a search object easily and display it on a display, such as a HUD (head-up display), that superimposes information on the user's field of view, so the detailed information of the search object can be presented to the user efficiently.
  • Moreover, since a facility in the line-of-sight direction can be pinpointed with a simple trigger such as voice, a switch, or a gesture, a search that was conventionally cumbersome becomes easy.
  • Embodiment 2.
  • FIG. 14 is a block diagram showing an example of an information processing apparatus according to Embodiment 2 of the present invention. The same components as those described in Embodiment 1 are given the same reference symbols, and duplicate description is omitted.
  • Compared with Embodiment 1, Embodiment 2 described below further includes a search start button (search start instruction input unit) 31 and a search determination button (search determination instruction input unit) 32.
  • These are assumed to be physical buttons provided on the steering wheel or the like, but software menu buttons displayed on a touch panel or the like may also be used.
  • Alternatively, a single physical button may be used, structured so that half-pressing it starts a search and pressing it fully determines the search, as in the sketch below.
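  • The single-button variant could be modelled as a small state machine, for example as follows; the states and method names are hypothetical:

      from enum import Enum

      class ButtonState(Enum):
          RELEASED = 0
          HALF_PRESSED = 1   # start search: track gaze, show the search mark
          FULLY_PRESSED = 2  # determine search: fetch detailed information

      def on_button_change(state, system):
          if state is ButtonState.HALF_PRESSED:
              system.start_gaze_tracking()   # search mark follows the gaze
          elif state is ButtonState.FULLY_PRESSED:
              system.determine_search()      # search the currently marked object
          else:
              system.stop_gaze_tracking()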
  • As in Embodiment 1, the search request determination unit 11 can determine a search request by the user from the information obtained from any of the voice recognition device 1, the switch 2, and the gesture recognition device 3; alternatively, it may determine that there is a search request when the search start button 31 is pressed.
  • In Embodiment 2, the coordinate calculation unit 12 recognizes the line of sight of the user (driver) only while the search start button 31 is being pressed by the user, and calculates the coordinate position on the map of the object in the line-of-sight direction as the search object.
  • That is, when the search request determination unit 11 determines that there is a search request, the coordinate calculation unit 12 calculates the coordinates of the search object from the various pieces of information, and the search mark is superimposed. This is executed dynamically in real time while the search start button 31 is pressed; therefore, if the line of sight moves to another object while the button is held, the search mark is displayed on that other object as well.
  • Then, when the search determination button 32 is pressed by the user, the search processing unit 13 searches for the detailed information of the search object on which the search mark is superimposed on the head-up display (HUD) 18. That is, the user presses the search determination button 32 after confirming that the search mark is on the object to be searched, and when it is pressed the search processing unit 13 performs the search and outputs the search result.
  • For example, suppose that while the vehicle is traveling, the user keeps pressing the search start button 31 while looking at the building along the line of sight 81 in FIG. 13(a).
  • Utterance and voice recognition are not required in this embodiment, but the specific example used for voice recognition in Embodiment 1 (FIG. 13) is reused here, with the search start button 31 held down instead.
  • While the search start button 31 is pressed, the coordinate calculation unit 12 calculates the position coordinates of the search object 80 based on the current location information from the GPS 4, the gaze information from the line-of-sight recognition device 5 (information on the line-of-sight position 81 in FIG. 13(a)), the information from the sensor 6 such as a camera, and the information from the in-vehicle device 7, and notifies the search coordinates to the superimposed display control unit 16 and the search processing unit 13 (steps ST31 to ST35 in FIG. 10).
  • When the superimposed display control unit 16 receives the search coordinates of the search object from the coordinate calculation unit 12, it refers to the map data stored in the storage unit 14, matches the map information (facility information) with the object position (search coordinates), and instructs the HUD 18 to superimpose the search mark (steps ST41 to ST43 in FIG. 11).
  • In response to the instruction from the superimposed display control unit 16, as shown in FIG. 13(b), the search mark 85 highlighting the object in the line of sight of the user (driver) is superimposed on the building or road displayed on the HUD 18. If the search mark 85 is superimposed on the object the user wants to search, the user presses the search determination button 32.
  • When the search processing unit 13 has received the search coordinates of the search object from the coordinate calculation unit 12 and the search determination button 32 is pressed, it refers to the map data stored in the storage unit 14, reads the facility information at the search coordinates, and notifies the output information creation unit 15 of the search result output (steps ST46 to ST47 in FIG. 12).
  • The output information creation unit 15 then generates a voice or display string for outputting the facility information at the search coordinates. For example, as shown in the voice output or display output 83 in FIG. 13(b), a string indicating that the building is a bank is output to the output device 8 (voice output device or display device) and presented to the user (driver).
  • The search processing unit 13 can further read the information on the previous search object (the facility information at the search coordinates) and, as shown in FIG. 13(c), output the voice or display string "Business hours are from 9:00 AM to 5:00 PM" to the output device 8 (voice output device or display device) for presentation to the user (driver).
  • When voice recognition is also used, for example, if the user (driver) further utters a question about business hours as shown by the utterance 82 in FIG. 13(b), the voice recognition device 1 recognizes the speech and the search processing unit 13 further reads and processes the information on the previous search object (the facility information at the search coordinates). In this case too, it is assumed that the search determination button 32 has been pressed.
  • Since the search object can be confirmed on the HUD 18 before the search result (the detailed information of the search object) is presented, this solves the problem that the user (driver) could only find out which object had been searched after the search result was presented.
  • In addition, since the user can determine the search with the search determination button 32 while visually confirming the target, a case where information on a different facility is presented can also be resolved by the user (driver) re-searching, for example by saying "the one in front" or "the one behind".
  • As described above, according to Embodiment 2, the user can perform a search while visually confirming the search object, so an unintended object is not searched; a desired object can be searched reliably and in real time, and the detailed information of the search object can be presented to the user efficiently.
  • Embodiment 3.
  • FIG. 15 is a block diagram showing an example of an information processing apparatus according to Embodiment 3 of the present invention. The same components as those described in Embodiments 1 and 2 are given the same reference symbols, and duplicate description is omitted.
  • In Embodiment 3, the search processing unit 13 acquires information from the GPS 4, the sensor 6, and the in-vehicle device 7, and also performs a peripheral search process that searches for facilities around the current location.
  • a display device is provided as the output device 8.
  • The search processing unit 13 searches for specific facilities around the vehicle position (for example, banks, convenience stores, or gas stations) designated by voice, a switch, a GUI (Graphical User Interface), or the like.
  • The peripheral facilities found by the search are stored in the storage unit 14, and the information is sent to the output information creation unit 15 to be displayed superimposed on the output device (display device) 8.
  • In Embodiment 3, the output device (display device) 8 is assumed to be a head-up display; that is, the head-up display (HUD) 18 and the output device (display device) 8 in FIG. 15 are the same.
  • As a result, the specific peripheral facilities found by the search processing unit 13 can be displayed in accordance with the user's viewpoint.
  • The viewpoint information of the user (driver) can be acquired from the line-of-sight recognition device 5.
  • That is, the search processing unit 13 searches for peripheral facilities of the moving body specified in advance as search points, and the output information creation unit 15 generates the output information so that the search points found by the search processing unit 13 are displayed.
  • The user can then request detailed information by directing the line of sight at a mark displayed as a result of the peripheral search.
  • The detailed information may be displayed on the head-up display (HUD) 18 or output by voice. In the following description of Embodiment 3, the output device 8 is assumed to be the head-up display (HUD) 18, which is a display device.
  • FIG. 16 is a diagram illustrating a transition image example of the processing of the information processing apparatus according to the third embodiment.
  • First, when the user performs a peripheral search, the search points 84 found are displayed on the output device (display device) 8, as shown in FIG. 16(a).
  • The peripheral search may be requested by voice input, by a touch panel, or the like.
  • Then, while the search points are displayed, the search request may be notified by any of the voice recognition device 1, the switch 2, or the gesture recognition device 3; here it is described as being notified by pressing the switch 2.
  • The switch 2 acquires the switch pressing information (step ST12); when the search switch serving as a search trigger is pressed (YES in step ST13), a search request is notified (step ST14).
  • Next, the coordinate calculation unit 12 calculates the position coordinates of the search object 80 based on the current location information from the GPS 4, the gaze information from the line-of-sight recognition device 5 (information on the line-of-sight position 81 in FIG. 16(b)), the information from the sensor 6 such as a camera, and the information from the in-vehicle device 7, and notifies the search coordinates to the search processing unit 13 (steps ST31 to ST35 in FIG. 10).
  • When the search processing unit 13 receives the search coordinates of the search object from the coordinate calculation unit 12, it refers to the map data stored in the storage unit 14, reads the facility information at the search coordinates, and notifies the output information creation unit 15 of the search result output (steps ST46 to ST47 in FIG. 12).
  • The output information creation unit 15 generates a display string for outputting the facility information at the search coordinates, and the display string is output to the output device (display device) 8 and presented to the user (driver), as shown in FIG. 16(c).
  • If the user asks for more, the search processing unit 13 further reads the information on the previous search object 80 (the facility information at the search coordinates) and, as shown in the display output 83 of FIG. 16(d), outputs the string "Business hours are from 9:00 AM to 5:00 PM" to the output device (display device) 8 and presents it to the user (driver).
  • In this way, in Embodiment 3, specific search points designated by the user are displayed in advance. The user then simply directs the line of sight at a search object bearing a search point mark and makes a search request such as an utterance, whereupon the search object is easily identified, its detailed information is searched, and the result is presented to the user. Compared with Embodiment 1, this makes it possible to specify the search object and obtain detailed information more reliably.
  • The output information creation unit 15 may also generate the output information so that, when the line-of-sight position used by the coordinate calculation unit 12 to calculate the coordinates of the object the user wants to search matches one of the search points 84, the matching search point 84 is highlighted. That is, for example, as shown in FIG. 16(b), when the line of sight 81 overlaps the search point 84 displayed on the search object 80, that search point 84 may be displayed enlarged or in a different color.
  • FIG. 17 is a flowchart illustrating a process of presenting detailed information using line-of-sight recognition in the information processing apparatus according to the third embodiment.
  • First, it is determined whether a search point 84 has been marked within the range visible to the user (driver) by the peripheral search (step ST51). If not (NO in step ST51), the process continues to wait until a search point 84 mark appears.
  • When a facility mark (search point 84) is displayed (YES in step ST51), line-of-sight information is acquired (step ST52). It is then determined whether the facility detail information 83 is being displayed (step ST53). If it is not being displayed (YES in step ST53), it is determined from the line-of-sight information whether the facility mark (search point 84) has been viewed for a certain period of time (step ST54).
  • If the user has been viewing it for a certain period of time (YES in step ST54), the detail information 83 of the search object bearing the facility mark (search point 84) at which the line of sight is directed is displayed (step ST55). Otherwise (NO in step ST54), the process returns to step ST51.
  • If the facility detail information 83 is being displayed in step ST53 (NO in step ST53), it is determined whether the displayed facility detail information 83 has been viewed for a certain period of time (step ST56). If it has (YES in step ST56), further facility detail information 83' is displayed (step ST57); otherwise (NO in step ST56), the displayed facility detail information 83 is hidden (step ST58). A sketch of this dwell logic follows.
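  • The dwell-time logic of FIG. 17 can be summarized by the sketch below; the 1.5 s threshold and the polling interface are assumptions for illustration:

      import time

      DWELL_S = 1.5  # illustrative value for "viewed for a certain period of time"

      def dwell_loop(hud, gaze):
          start = None
          while True:
              pos = gaze.position()                            # ST52: gaze info
              if not hud.detail_shown():                       # ST53
                  mark = hud.search_point_at(pos)              # facility mark?
                  if mark is None:
                      start = None                             # ST54 NO: back to ST51
                  else:
                      start = start or time.monotonic()
                      if time.monotonic() - start >= DWELL_S:  # ST54 YES
                          hud.show_detail(mark)                # ST55
                          start = None
              else:
                  if hud.detail_area_contains(pos):
                      start = start or time.monotonic()
                      if time.monotonic() - start >= DWELL_S:  # ST56 YES
                          hud.show_more_detail()               # ST57
                          start = None
                  else:
                      hud.hide_detail()                        # ST58
                      start = None
              time.sleep(0.05)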
  • As described above, according to Embodiment 3, the positions of the specific peripheral facilities that the user wants to find are displayed clearly around the moving body such as a vehicle, and the search object can be specified easily by detecting the user's line of sight, so the detailed information of the search object can be presented to the user efficiently.
  • Moreover, since the display is superimposed on the display device in the line-of-sight direction and is therefore easy for the user to notice, peripheral facilities can be searched more comfortably.
  • Information on in-vehicle equipment, such as the instrument panel, navigation system, front camera, air conditioner, turn signals, and shift lever inside the vehicle, may also be stored in the storage unit 14.
  • Then, by looking at the piece of in-vehicle equipment the user wants to learn more about, detected by the line-of-sight recognition device 5, and issuing a search trigger by voice recognition, a switch, gesture recognition, or the like, detailed information about that equipment can be output.
  • The searchable equipment is limited to items whose position the user (driver) cannot move freely, but since the position coordinates of in-vehicle equipment are essentially fixed, this causes little inconvenience.
  • In this case, the user makes a search request by voice, a switch, a gesture, or the like while looking at the in-vehicle equipment to be searched.
  • When the search trigger is recognized and a search request is notified, the coordinates of the search target are specified by line-of-sight recognition.
  • A search is then performed based on the search coordinates, the detailed information of the in-vehicle equipment corresponding to the search coordinates is read, and the information is output. If there is no corresponding equipment, a message indicating that there is no information is output.
  • Conventionally, there has been no easy way to find out how to use an unfamiliar button or function in a car; this provides a way to look up how to use the equipment easily. As a result, even if the user (driver) does not know all of the functions in advance, detailed information such as how to use the equipment being looked at can be confirmed simply by performing a gaze-based search. Further, when the output explains that a setting can be changed by a button operation, the setting change may be made simply by looking at that button for a predetermined time or longer.
  • Needless to say, as in Embodiment 2, a search start button 31 and a search determination button 32 may further be provided, with the coordinate position calculated only while the search start button 31 is pressed by the user (driver) and the detailed information of the search object searched when the search determination button 32 is pressed by the user (driver).
  • Embodiment 4.
  • In Embodiments 1 to 3, the case where the information processing apparatus of the present invention is applied to a navigation apparatus mounted on a moving body such as a vehicle has been described as an example, but the application is not limited to an in-vehicle navigation apparatus.
  • It may be a navigation device for any moving body including a person, vehicle, railroad, ship, or airplane, and it may also be applied to an information processing system or a server of a navigation system.
  • It can likewise take any form such as an information processing system or navigation system application installed on a portable information terminal such as a smartphone, tablet PC, or mobile phone.
  • FIG. 18 is a diagram showing an outline of the navigation system according to the fourth embodiment of the present invention.
  • In the navigation system shown in FIG. 18, an in-vehicle device 100 performs the search processing and navigation processing in cooperation with at least one of a portable information terminal 101 such as a smartphone and a server 102; alternatively, at least one of the portable information terminal 101 and the server 102 performs the search processing and navigation processing, and the recognition results and map information are displayed on the in-vehicle device 100.
  • The configuration aspects of this navigation system will now be described.
  • In Embodiments 1 to 3, the search processing function was described as being provided in the in-vehicle device 100 shown in FIG. 18. In the navigation system of Embodiment 4, by contrast, either the server 102 performs the search processing and the search result is displayed on the in-vehicle device 100, or the portable information terminal 101 performs the search processing in cooperation with the server 102 and the search result is displayed on the in-vehicle device 100; in both cases the result is provided to the user. These cases will be described in turn.
  • First, consider the case where the in-vehicle device 100 includes a communication unit capable of communicating with the server 102 by, for example, mobile communication or wireless LAN.
  • The server 102 includes the search processing unit 13. The in-vehicle device 100 issues a search request to the external server 102 based on the search object coordinates it has recognized, and the server 102 performs the search processing, generates the search result output information, and sends it to the in-vehicle device 100, so that the search result can be presented to the user (driver).
  • In this case, the storage unit 14 of the in-vehicle device need not hold map data; only the server 102 needs to have map data.
  • That is, this is the case where the server 102 performs the search processing and the search result is displayed on the in-vehicle device 100, with the in-vehicle device 100 functioning as a display device in cooperation with the server 102 having the search processing function.
  • In this configuration, the in-vehicle device 100 communicates with the server 102 either directly or via the portable information terminal 101.
  • The server 102 has the function of the search processing unit 13 described in Embodiments 1 to 3, and the in-vehicle device 100 functions as a display device including at least a display unit for providing the search result from the server 102 to the user.
  • Next, consider the case where the portable information terminal 101 performs the search processing in cooperation with the server 102 and the in-vehicle device 100 provides the search result to the user.
  • Here, the in-vehicle device 100 communicates with the server 102 via the portable information terminal 101, and an application on the portable information terminal 101 performs the search processing in cooperation with the server 102. The in-vehicle device 100 functions as a display device including at least a display unit for providing the search result from the portable information terminal 101 and the server 102 to the user.
  • In this case, the in-vehicle device 100 basically has only communication and display functions; it receives the search result obtained through the cooperation of the portable information terminal 101 and the server 102 and provides it to the user. That is, the application on the portable information terminal 101 displays the search result for the object requested by the user on the in-vehicle device 100 serving as a display device. Even with this configuration, the same effects as in Embodiments 1 to 3 can be obtained.
  • The information processing apparatus of the present invention is not limited to an in-vehicle navigation apparatus; it can be applied to navigation devices for moving bodies including people, vehicles, railways, ships, and aircraft, to portable navigation devices, to portable information processing devices, and to navigation system applications installed on portable information terminals such as smartphones, tablet PCs, and mobile phones.
  • Reference numerals: 1 voice recognition device, 2 switch, 3 gesture recognition device, 4 GPS, 5 line-of-sight recognition device, 6 sensor, 7 in-vehicle device, 8 output device, 11 search request determination unit, 12 coordinate calculation unit, 13 search processing unit, 14 storage unit, 15 output information creation unit, 16 superimposed display control unit, 18 head-up display (HUD), 20 user, 21 eyeballs of user 20, 22 eyeball position measurement area, 23 gaze-measurable direction, 31 search start button (search start instruction input unit), 32 search determination button (search determination instruction input unit), 80 search object, 81 line of sight (line-of-sight position), 84 search point, 85 search mark, 100 in-vehicle device, 101 portable information terminal, 102 server.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

This device for processing information through line-of-sight detection is provided with a search request determination unit (11) for determining whether a search request has been made by a user; a coordinate calculation unit (12) for calculating the map coordinates of an object in the user's line of sight, which is made to be a search subject, on the basis of at least the current position of a moving body and recognition of the line of sight of the user; a superimposed display control unit (16) for superimposing and displaying a search mark for highlighting the search subject on an HUD (18) on the basis of the coordinates of the search subject and map data; a search processing unit (13) for searching for detailed information about the search subject on the basis of the coordinates of the search subject and the map data; and an output information generation unit (15) for generating output information for presenting detailed information about the search subject to the user. As a result, the search subject can be simply identified and displayed on the HUD through the detection of the line of sight of the user in a vehicle or other moving body, and it is therefore possible to effectively present detailed information about the search subject to the user.

Description

Information processing apparatus and information processing method based on line-of-sight detection
The present invention relates to an information processing apparatus that specifies a search object by line-of-sight detection and presents detailed information about it, in a moving body such as a vehicle.
At present, methods by which a driver of a vehicle or the like can designate an object outside the vehicle as a search object have not been sufficiently developed. For example, to find out what a building 500 m ahead is using a car navigation device's search function, the driver must narrow down candidates by pressing search buttons on a touch panel or the like.
However, even when there is an object outside the vehicle that the driver wants to look up, it is difficult for the driver to narrow down candidates by operating a touch panel in situations that demand concentration on driving; there has thus been the problem that it is difficult for a user in a moving body such as a vehicle to point at a specific spot and thereby specify a search object.
To address this problem, for example, Patent Document 1 discloses a navigation device in which, when the driver asks a question such as "What is that?" about an object being looked at, such as a building or sightseeing spot, detailed information about the building or the like is displayed on a display by means of voice recognition and gaze detection.
Patent Document 1: JP 2001-330450 A
However, in a conventional apparatus such as that shown in Patent Document 1, it is difficult to narrow the search range unless the search words are specified in detail. Even when search results are returned from a sufficiently wide search area, it is difficult for the driver to judge them while driving. Furthermore, even with a recent CPU, if the search area is too large the search takes time and real-time performance suffers.
The present invention has been made to solve the above problems, and an object thereof is to provide an information processing apparatus based on line-of-sight detection with which a user in a moving body such as a vehicle can easily specify a search object and obtain detailed information about it.
To achieve the above object, the present invention provides an information processing apparatus based on detection of the line of sight of a user of a moving body, comprising: a display that displays various types of information superimposed on the user's field of view; a search request determination unit that determines whether the user has made a search request; a coordinate calculation unit that, when the search request determination unit determines that there is a search request, calculates the coordinate position on a map of the object in the user's line-of-sight direction as the search object, based on at least the current position of the moving body and recognition of the user's line of sight; a superimposed display control unit that, based on the coordinate position of the search object calculated by the coordinate calculation unit and map data, superimposes on the display a search mark highlighting the search object; a search processing unit that, based on the coordinate position of the search object calculated by the coordinate calculation unit and the map data, searches for detailed information on the search object; and an output information creation unit that generates output information for presenting to the user the detailed information on the search object found by the search processing unit.
 この発明によれば、車両等の移動体におけるユーザの視線検知により、簡単に検索対象物を特定してHUD(ヘッドアップディスプレイ)等のユーザの視界に重ねて各種情報を表示させるディスプレイ上に表示することができるので、その検索対象物の詳細情報を効率よくユーザに提示することができる。 According to the present invention, by detecting the user's line of sight in a moving body such as a vehicle, a search target is easily specified and displayed on a display that displays various information in a user's field of view such as a HUD (head-up display). Therefore, it is possible to efficiently present detailed information of the search object to the user.
FIG. 1 is a block diagram showing an example of the information processing device according to Embodiment 1.
FIG. 2 is a table showing an example of the contents of the search triggers recognized by the voice recognition device.
FIG. 3 is a table showing an example of the contents of the search triggers recognized by the gesture recognition device.
FIG. 4 is a table showing an example of the information that the coordinate calculation unit acquires from the GPS.
FIG. 5 is a table showing an example of the information that the coordinate calculation unit acquires from the gaze recognition device.
FIG. 6 is a schematic explanatory diagram showing the positional relationship among the user's eyeball position, the gaze recognition device, and the search object.
FIG. 7 is a table showing an example of the information that the coordinate calculation unit acquires from the in-vehicle devices.
FIG. 8 is a table showing an example of the information that the coordinate calculation unit acquires from the sensors.
FIG. 9 is a flowchart showing the input processing in which the voice recognition device, the switch, and the gesture recognition device accept a search request.
FIG. 10 is a flowchart showing the coordinate calculation processing in the coordinate calculation unit.
FIG. 11 is a flowchart showing the superimposed display processing in the superimposed display control unit.
FIG. 12 is a flowchart showing the search processing in the search processing unit.
FIG. 13 is a diagram showing example transition images of the processing in the information processing device according to Embodiment 1.
FIG. 14 is a block diagram showing an example of the information processing device according to Embodiment 2.
FIG. 15 is a block diagram showing an example of the information processing device according to Embodiment 3.
FIG. 16 is a diagram showing example transition images of the processing in the information processing device according to Embodiment 3.
FIG. 17 is a flowchart showing the processing of presenting detailed information using gaze recognition in the information processing device according to Embodiment 3.
FIG. 18 is a diagram showing an outline of the information processing system (navigation system) according to Embodiment 4.
Embodiments of the present invention will now be described in detail with reference to the drawings.

The present invention is an information processing device that, in a moving body such as a vehicle, identifies a search object by detecting the gaze of the user of the moving body and presents detailed information on a display, such as a head-up display (HUD) or head-mounted display (HMD), that shows various kinds of information superimposed on the user's field of view. In the following embodiments, the information processing device of the present invention is described as applied to a navigation device mounted on a moving body such as a vehicle, but it may instead be a navigation device for a moving body other than a vehicle, or it may be applied to the server of a navigation system. It may also be applied to a navigation system application installed on a portable information terminal such as a smartphone, tablet PC, or mobile phone.

Embodiment 1.

FIG. 1 is a block diagram showing an example of the information processing device according to Embodiment 1 of the present invention. This information processing device comprises a navigation device 10 having a search request determination unit 11, a coordinate calculation unit 12, a search processing unit 13, a storage unit 14, an output information creation unit 15, and a superimposed display control unit 16, together with at least a head-up display (HUD) 18.
The search request determination unit 11 recognizes the contents of a search trigger from information obtained from a voice recognition device 1 that recognizes speech uttered by the user, a physical switch 2 mounted on the steering wheel or the like, and a gesture recognition device 3 that can detect the movements of the user (driver) by means of a camera, optical sensor, or the like facing the user; it determines whether a search request has been made and notifies the coordinate calculation unit 12 of the determination result.
The technique by which the voice recognition device 1 recognizes words uttered by the user (driver) is well known, so its description is omitted here; the device recognizes the contents of search triggers such as those shown in FIG. 2 and determines whether a search request has been made. Words such as "what", "what could that be", "search", and "tell me" are stored as search trigger words, and the device determines whether any of them is contained in the utterance. It also determines whether the utterance contains position-specifying words: words designating a direction such as "right" or "left", words designating a facility type such as "hospital" or "temple", or words suggesting a distance such as "that", "this", or "the far one".

To make the search coordinates easier to pin down, position-specifying words accompanying a search trigger word, such as "that" or "this", may be recognized together with it. When there are several position-specifying words, they may be used in combination; for example, if an utterance such as "What is that temple on the right?" contains several words that can pin down the position, all of them may be reported to the coordinate calculation unit 12.
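As a purely illustrative sketch, not part of the patent disclosure, the trigger-word matching described above could be expressed as follows in Python; the English word lists and the function name are assumptions standing in for the Japanese vocabulary of FIG. 2.

```python
# Minimal sketch of search-trigger detection, assuming word lists like FIG. 2.
# All names and word lists here are illustrative, not from the patent.

TRIGGER_WORDS = ["what is", "what could that be", "search", "tell me"]
DIRECTION_WORDS = ["right", "left", "ahead"]
FACILITY_WORDS = ["hospital", "temple", "bank"]
DISTANCE_WORDS = ["that", "this", "the far"]

def parse_utterance(text: str):
    """Return (has_trigger, position_words) for a recognized utterance."""
    lowered = text.lower()
    has_trigger = any(w in lowered for w in TRIGGER_WORDS)
    position_words = [w for group in (DIRECTION_WORDS, FACILITY_WORDS, DISTANCE_WORDS)
                      for w in group if w in lowered]
    return has_trigger, position_words

# Example: a trigger word plus several position-specifying words are found,
# and all of them would be passed on to the coordinate calculation unit 12.
print(parse_utterance("What is that temple on the right?"))
# -> (True, ['right', 'temple', 'that'])
```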
There may also be more than one switch 2. For example, three switches may be arranged side by side: the left switch is pressed to search for something ahead to the left, the middle switch for something straight ahead, and the right switch for something ahead to the right.
The gesture recognition device 3 is a device, such as a camera or optical sensor facing the user (driver), that can detect the user's movements; it recognizes the contents of search triggers such as those shown in FIG. 3 and determines whether a search request has been made. As a search trigger it judges, for example, whether the user has blinked a specified number of times in succession, and as a search trigger combined with position specification, whether the user has pointed with a finger or with the chin.

As described above, the search request determination unit 11 evaluates the information obtained from the voice recognition device 1, the switch 2, and the gesture recognition device 3 and notifies the coordinate calculation unit 12 of the results; in doing so, the information from these three sources may be used in combination. For example, when several position determinations are available, each position estimate may be scored, an approximate search direction identified, and the coordinate calculation unit 12 then notified.
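The text leaves the scoring-based fusion open; one conceivable sketch, with invented weights and an invented direction encoding (degrees relative to the vehicle heading, negative meaning left), is the following.

```python
# Hypothetical fusion of position estimates from voice, switch, and gesture.
# The scores and the angle encoding are illustrative assumptions only.

def fuse_directions(estimates):
    """estimates: list of (direction_deg, score); returns a weighted mean direction."""
    total = sum(score for _, score in estimates)
    return sum(d * score for d, score in estimates) / total

estimates = [(-30.0, 0.5),   # voice: "on the left"
             (-45.0, 0.3),   # left switch pressed
             (-35.0, 0.8)]   # gesture: pointing left
print(f"approximate search direction: {fuse_directions(estimates):.1f} deg")
# -> approximate search direction: -35.3 deg
```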
The coordinate calculation unit 12 acquires information such as the current location, traveling direction, and traveling speed shown in FIG. 4 from a GPS (Global Positioning System) 4, and calculates and stores the vehicle position. The vehicle position is basically calculated from the current location information; however, since the human field of view narrows as traveling speed increases, the traveling speed is used to prevent a search for a very near location during high-speed travel, and the traveling direction is used to calculate in which direction from the vehicle position a search is most likely to be requested.
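The patent does not say how speed restricts near-field searches; a simple illustrative rule, entirely an assumption, might scale a minimum search distance with speed.

```python
# Illustrative only: suppress very near search targets at high speed, on the
# assumption that the driver's effective field of view narrows as speed rises.
# The constants are arbitrary choices, not values from the patent.

def min_search_distance_m(speed_kmh: float) -> float:
    """Assumed rule: at least 10 m, plus 2 m for every km/h of speed."""
    return 10.0 + 2.0 * speed_kmh

for v in (0, 40, 100):
    print(v, "km/h ->", min_search_distance_m(v), "m")
```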
The coordinate calculation unit 12 acquires gaze information from a gaze recognition device 5, for example the gaze direction (the direction of the object as seen from the vehicle position), a rough distance to the object, and the viewpoint position (the relative relationship between the eye position and the display device), as shown in FIG. 5. The technique by which the gaze recognition device 5 recognizes the user's gaze is well known and its detailed description is omitted here, but a gaze vector can be calculated from the position of the eyeballs and the directions in which the left and right eyeballs are pointing.

Since the object the user (driver) is looking at lies where the left and right lines of sight intersect, the distance from the user (driver) can be measured by calculating the intersection of the two vectors. The gaze recognition device 5 can thus measure the positional relationship between the object and the user (driver). Furthermore, accuracy can be expected to improve by using the gaze information as a time series.
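The patent does not specify how the time series would be used; exponential smoothing of successive gaze direction vectors is one conceivable sketch, with an arbitrary smoothing factor.

```python
# One possible reading of using gaze information "in time series": exponential
# smoothing of successive gaze direction vectors to damp measurement noise.
# The smoothing factor alpha is an illustrative assumption.

def smooth_gaze(samples, alpha=0.3):
    """samples: list of (x, y, z) gaze vectors; returns the smoothed vector."""
    sx, sy, sz = samples[0]
    for x, y, z in samples[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        sz = alpha * z + (1 - alpha) * sz
    return (sx, sy, sz)

print(smooth_gaze([(0.02, 0.0, 1.0), (0.04, 0.01, 1.0), (0.03, 0.0, 1.0)]))
```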
A specific method of calculating the coordinates of the search object from the vehicle position (the position of the moving body) and recognition of the user's gaze will now be described by way of example.

FIG. 6 is a schematic explanatory diagram showing the positional relationship among the user's eyeball position, the gaze recognition device, and the search object. In this figure, the two reference numerals 21 denote the left and right eyeballs of the user 20, reference numeral 22 denotes the eyeball position measurement area, and reference numeral 23 denotes the directions in which gaze can be measured.

The gaze recognition device 5 can measure the distance L1 to the eyeball position A using a stereo camera, or using patterned light and image recognition. A calibration plane B is set at a fixed distance L2 from the gaze recognition device 5, and calibration is performed.

Then, using infrared light and image recognition (for example, the corneal reflection method), the directions in which the left and right eyeballs 21 of the user 20 are pointing are measured. Since the positions and directions of the left and right eyeballs 21 are then known, a gaze direction vector can be calculated for each eyeball.
Here, let d1 be the distance between the left and right eyeballs 21 of the user 20 and d2 the parallax on the calibration plane B. The distance X between the eyeball position A and the search object 80 (object position C) can then be calculated from the distance (L1 + L2) between the eyeball position A and the calibration plane B by the following formula:

X = (d1 / (d1 - d2)) × (L1 + L2)
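Expressed directly in code, this is a one-line computation; the numeric values below are invented for illustration and are not from the patent.

```python
# Direct transcription of the distance formula X = (d1 / (d1 - d2)) * (L1 + L2).
# Units are meters; the example values are illustrative assumptions.

def eyeball_to_object_distance(d1, d2, l1, l2):
    """d1: interocular distance, d2: parallax on the calibration plane,
    l1: sensor-to-eyeball distance, l2: sensor-to-calibration-plane distance."""
    return (d1 / (d1 - d2)) * (l1 + l2)

# Example: 65 mm eye spacing, 55 mm parallax at a calibration plane 2 m from a
# sensor mounted 0.8 m from the eyes -> the gaze converges about 18.2 m away.
print(eyeball_to_object_distance(0.065, 0.055, 0.8, 2.0))
```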
Alternatively, since the eyeball positions and gaze directions are known, a method of calculating the intersection coordinates from the vectors using the cross product may also be used.

The distance can be calculated as above, but its accuracy is not sufficient. The accuracy of the direction, however, is high, so the direction should be used as the primary cue, and the accuracy of the distance to the search object can be raised by additionally using a distance-measuring sensor such as a front camera or a millimeter-wave radar.
The gaze recognition device 5, meanwhile, is mounted on and fixed to the vehicle (moving body). Map information, geomagnetic sensor information, high-precision GPS information, and the like can also be acquired. The coordinates of the gaze recognition device 5 are regarded as identical to the GPS position, and the orientation of the vehicle is measured by the geomagnetic sensor.

Since the distance and direction to the search object 80 can be measured as described above, the coordinates of the search object 80 are calculated by adding this distance and direction to the GPS position.
The coordinate calculation unit 12 acquires from in-vehicle devices 7 information such as the vehicle speed, the steering angle, whether the vehicle is moving, and whether it is reversing, as shown in FIG. 7. The in-vehicle devices 7 include at least a device that detects the speed of the moving body, and the acquired vehicle speed and steering angle are used to pin down the search range. The determinations of whether the vehicle is moving or reversing are used to impose driving restrictions, for example disabling search while the vehicle is in motion.

Furthermore, the coordinate calculation unit 12 acquires from sensors 6, such as a camera or radar, that detect information about the surroundings of the moving body, information such as the distance to the object, character recognition on the object, and identification of objects that are not facilities (such as a moving car), as shown in FIG. 8. For example, when a radar is used, the distance to the search object can be measured, and when a camera is used, characters and logos on the search object can be recognized. This information is also used to determine whether the user is designating something other than a facility recorded on the map, such as a car or a motorcycle.
In this way, the coordinate calculation unit 12 receives information from the GPS 4, the gaze recognition device 5, the sensors 6 such as a camera, and the in-vehicle devices 7, together with the notification from the search request determination unit 11, and calculates the position coordinates of the search target. For example, on receiving a search request from the search request determination unit 11, it calculates the coordinates of the vehicle position from the GPS 4 and measures the direction and distance of the object the user is gazing at with the gaze recognition device 5, thereby calculating the coordinates of the search object.

The information from the sensors 6 such as a camera and from the in-vehicle devices 7 is used to improve accuracy; the description here assumes that several pieces of information are used in combination, but they may be omitted if unnecessary. When pieces of information overlap or conflict, a mechanism may be provided that tabulates and scores them to judge which are useful and which are not.

That is, when the search request determination unit 11 determines that a search request has been made, the coordinate calculation unit 12 takes the object in the user's gaze direction as the search object and calculates its coordinate position on the map based on at least the current position of the moving body acquired from the GPS 4 and the user's gaze as recognized by the gaze recognition device 5.
The superimposed display control unit 16 is notified of the search target coordinates (the coordinates of the search object) by the coordinate calculation unit 12, and instructs the head-up display (HUD) 18 to superimpose a search mark highlighting the search object at the position of those coordinates on the map data displayed on the HUD 18.

The search mark highlighting the search object may have any shape, such as a rectangular or circular frame or an arrow, as long as the user can see both the building or the like on the map data displayed on the HUD 18 and the position identified as the search object by gaze recognition, and can thereby recognize which building or the like is the search object.

The HUD 18 is a display device whose display overlaps the user's (driver's) field of view; in response to instructions from the superimposed display control unit 16, a search mark highlighting the object in the user's gaze direction is superimposed on the buildings, roads, and the like in the user's view. Although Embodiment 1 is described using a head-up display (HUD), any device capable of displaying various kinds of information superimposed on the user's field of view may be used, for example a head-mounted display (HMD).
The search processing unit 13 is notified of the search target coordinates (the coordinates of the search object) by the coordinate calculation unit 12 and, based on the map data stored in the storage unit 14, searches for facility information near the search target coordinates, that is, detailed information on the search object. If a facility type was designated by voice or the like at the time of the search, the search conditions may be narrowed accordingly. When a search result exists, the search processing unit 13 notifies the output information creation unit 15 of the facility information to be reported to the user (driver); when no facility is found, it notifies the output information creation unit 15 that no result exists.

When there are several facility search results, the facility closest to the search target coordinates is reported as the result. If, for example, the facility next door to the intended target is reported, the user (driver) may designate "the one in front" or "the one behind" via the voice recognition device 1 or the like so that an additional search can be performed.
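The follow-up refinement could look like the sketch below, where "the one in front" or "the one behind" re-selects a neighbor along the line of sight; the facility records and function name are invented for illustration.

```python
# Hypothetical refinement among several hits sorted by distance from the
# vehicle: "in front" steps to a nearer facility, "behind" to a farther one.
# All data and names here are illustrative assumptions.

def refine(results, current, utterance):
    """results: facility names sorted by distance along the line of sight."""
    i = results.index(current)
    if "in front" in utterance and i > 0:
        return results[i - 1]          # nearer along the line of sight
    if "behind" in utterance and i + 1 < len(results):
        return results[i + 1]          # farther along the line of sight
    return current

results = ["XX Bank (80 m)", "YY Temple (95 m)", "ZZ Hospital (130 m)"]
print(refine(results, "YY Temple (95 m)", "no, the one in front"))
# -> 'XX Bank (80 m)'
```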
The output information creation unit 15 creates voice information or display information as the output of the facility information (the detailed information on the search object) retrieved by the search processing unit 13. When no search result exists, it creates the speech or display character string of a phrase such as "No search results".

The output device 8 is a device that outputs the information created by the output information creation unit 15 as voice or as a display. For display output, the HUD 18 may be used as the output device 8.

The flow of the search processing in Embodiment 1 will be described with reference to the flowcharts shown in FIGS. 9 to 12. These processes start when the ignition key is turned on and end when it is turned off.

FIG. 9 is a flowchart showing the input recognition processing performed by each of the voice recognition device 1, the switch 2, and the gesture recognition device 3.
First, the voice recognition processing shown in FIG. 9(a) will be described. If some search trigger has already been applied and a search is in progress, or if voice output is in progress (YES in step ST1), voice recognition processing is not performed. If neither a search nor voice output is in progress (NO in step ST1), voice information is acquired as a waveform (step ST2), and speech is recognized from the acquired waveform (step ST3). Since speech recognition is a well-known technique, its description is omitted here; any of various methods may be used.

It is then determined whether the recognized speech contains a search trigger word (step ST4). If a search trigger word is contained (YES in step ST4), a search request notification is issued (step ST5). If no search trigger word is contained (NO in step ST4), the process returns to the beginning without further action.

Next, the switch recognition processing shown in FIG. 9(b) will be described. Here too, if a search trigger has already been applied and a search is in progress, or if voice output is in progress (YES in step ST11), switch recognition processing is not performed (switch input is not accepted). If neither a search nor voice output is in progress (NO in step ST11), the switch information is read (step ST12).

It is then determined whether a search request switch has been pressed (step ST13). If a search request switch has been pressed (YES in step ST13), a search request notification is issued (step ST14); if a role has been assigned to each switch, the role of the pressed switch is attached to the notification. If no search request switch has been pressed (NO in step ST13), the process returns to the beginning without further action.

Finally, the gesture recognition processing shown in FIG. 9(c) will be described. Here again, if a search trigger has already been applied and a search is in progress, or if voice output is in progress (YES in step ST21), gesture recognition processing is not performed. If neither a search nor voice output is in progress (NO in step ST21), gesture information from a camera, optical sensor, or the like is acquired (step ST22), and the gesture performed is recognized (step ST23). Since gesture recognition is a well-known technique, its description is omitted here; any of various methods may be used.

It is then determined whether the recognized gesture information contains a search trigger gesture (step ST24). If a search trigger gesture is contained (YES in step ST24), a search request notification is issued (step ST25). If no search trigger gesture is contained (NO in step ST24), the process returns to the beginning without further action.
When a search request notification is issued by any of the voice recognition, switch, or gesture recognition processes, the process moves to the coordinate calculation processing in the coordinate calculation unit 12.

FIG. 10 is a flowchart showing the coordinate calculation processing in the coordinate calculation unit 12.

First, current location information is acquired from the GPS 4 (step ST31), so that the vehicle position can be determined.

Next, gaze information is acquired from the gaze recognition device 5 (step ST32), and sensor information is acquired from the sensors 6 such as the front camera and radar (step ST33). The direction and distance at which the user (driver) is looking are calculated from the gaze information, and the accuracy of the distance is improved with the sensor information.
The coordinates of the search object are then calculated from the vehicle position, direction, and distance obtained from the current location information, the gaze information, and the sensor information (step ST34).

When the coordinate calculation of the search object is complete, the search coordinates are reported to the superimposed display control unit 16 and the search processing unit 13 (step ST35).

The description here assumes that, in addition to the current position of the moving body and recognition of the user's gaze, the coordinate position of the search object is calculated with high accuracy from sensor information acquired from the sensors that detect the surroundings of the moving body and from information acquired from the in-vehicle devices that detect at least the speed of the moving body. However, the coordinate position of the search object can be calculated as long as at least the current position of the moving body and recognition of the user's gaze are available. That is, it suffices to take the object in the user's gaze direction as the search target and calculate its coordinate position on the map based on at least the current position of the moving body and recognition of the user's gaze.
When the search coordinates are reported, the process moves to the superimposed display processing in the superimposed display control unit 16 and the search processing in the search processing unit 13.

FIG. 11 is a flowchart showing the superimposed display processing in the superimposed display control unit 16.

First, based on the coordinate data of the search object contained in the reported search coordinates, the map information (facility information) is matched against the object position (search coordinates) using the map data stored in the storage unit 14 (step ST41).

Then, if a facility or the like exists at the object position (search coordinates) (YES in step ST42), the HUD 18 is instructed to superimpose a search mark, that is, to display the search object enclosed in a rectangular search mark (step ST43).

This removes the need for the user to qualify the search by uttering additional words that single out the intended facility, such as "What is that white building on the right?". The user can also obtain the detailed information while visually confirming that the object to be searched is enclosed in a rectangle or the like (that the search mark is superimposed on the object).
FIG. 12 is a flowchart showing the search processing in the search processing unit 13.

First, based on the coordinate data of the search object contained in the reported search coordinates, the facility information at the search coordinates is read out from the map data stored in the storage unit 14 (step ST46).

As the facility information at the search coordinates, information on surrounding facilities within a predetermined range of the search coordinates is read out here. Alternatively, if a facility exists at the position corresponding to the search coordinates, its facility information may be read out, and if no facility exists at that position, the facility information of the facility closest to the search coordinates may be read out. If the target facility type has been restricted, by voice or otherwise, the search may be performed on that basis when the facility information at the search coordinates is read out.

Then, if facility information exists at the search coordinates, the search result is attached and a search result output notification is sent to the output information creation unit 15; if no facility information exists at the search coordinates, information that no search result exists is attached and a search result output notification is sent to the output information creation unit 15 (step ST47).
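The lookup in step ST46 could be sketched as a range query with a nearest-facility fallback, as follows; the radius, data, and names are illustrative assumptions and planar map coordinates are used for brevity.

```python
# Sketch of the step ST46 lookup: collect facilities within a predetermined
# radius of the search coordinates; if none fall inside, fall back to the
# single nearest one. Radius and facility data are illustrative assumptions.
import math

def lookup_facilities(search_xy, facilities, radius_m=50.0):
    """facilities: list of (name, (x, y)) in planar map coordinates (meters)."""
    hits = [f for f in facilities if math.dist(search_xy, f[1]) <= radius_m]
    if hits:
        return hits
    return [min(facilities, key=lambda f: math.dist(search_xy, f[1]))]

facilities = [("XX Bank", (10.0, 5.0)), ("YY Temple", (480.0, -120.0))]
print(lookup_facilities((12.0, 3.0), facilities))  # -> [('XX Bank', (10.0, 5.0))]
```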
From the search result information reported by the search processing unit 13, the output information creation unit 15 generates the retrieved facility information, or a message that no search result exists, as speech or as a display character string, and outputs it to the output device 8. That is, when the output device 8 is a voice output device, speech is generated and output as audio; when the output device 8 is a display device, a display character string is generated and output on screen.

A specific processing flow will now be described using the transition images of FIG. 13.

FIG. 13 is a diagram showing example transition images of the processing of the information processing device in Embodiment 1.

For example, as shown in FIG. 13(a), suppose that while the vehicle is moving the user (driver) looks at the building indicated by the gaze 81 and, as shown by utterance 82, says "What is that building?".
At this time, if neither a search nor voice output is in progress (NO in step ST1 of FIG. 9(a)), the voice recognition device 1 acquires the voice information of the utterance and performs speech recognition (steps ST2 to ST3). As a result of consulting a table such as that of FIG. 2, the utterance is judged to contain a search trigger word (YES in step ST4), and a search request is notified (step ST5).

The coordinate calculation unit 12 then calculates the position coordinates of the search object 80 based on the current location information from the GPS 4, the gaze information from the gaze recognition device 5 (information on the gaze 81 in FIG. 13(a)), the information from the sensors 6 such as a camera, and the information from the in-vehicle devices 7, and reports the search coordinates to the superimposed display control unit 16 and the search processing unit 13 (steps ST31 to ST35 in FIG. 10).

On receiving the search coordinates of the search object from the coordinate calculation unit 12, the superimposed display control unit 16 refers to the map data stored in the storage unit 14, matches the map information (facility information) against the object position (search coordinates), and instructs the HUD 18 to superimpose the search mark (steps ST41 to ST43 in FIG. 11).

As a result, as shown for example in FIG. 13(b), in response to the instruction from the superimposed display control unit 16, a search mark 85 highlighting the object in the user's (driver's) gaze direction is superimposed on the buildings, roads, and the like shown on the HUD 18. Here a rectangular frame is used as the search mark 85.

On receiving the search coordinates of the search object from the coordinate calculation unit 12, the search processing unit 13 refers to the map data stored in the storage unit 14, reads out the facility information at the search coordinates, and notifies the output information creation unit 15 of the search result output (steps ST46 to ST47 in FIG. 12).

As a result, the output information creation unit 15 generates the speech or display character string for outputting the facility information at the search coordinates, and presents it to the user (driver) by sending it to the output device 8 (a voice output device or display device), for example "This is XX Bank", as shown by the voice output or display output 83 in FIG. 13(b).

Here, if the user (driver) further utters "What are its opening hours?", as shown by utterance 82 in FIG. 13(b), the voice recognition device 1 recognizes the speech, the search processing unit 13 reads out further information on the same search object (the facility information at the search coordinates), and the result is presented to the user (driver) by outputting the speech or display character string "Opening hours are 9:00 a.m. to 5:00 p.m." to the output device 8 (a voice output device or display device), as shown by the voice output or display output 83' in FIG. 13(c).

In this way, when the user (driver) wants to look up detailed information about a facility such as a building near the vehicle while driving, there is no need for a cumbersome procedure such as stopping, opening a search screen, and selecting a genre; simply directing the gaze at the search object and issuing a search request, for example by speaking, is enough for the device to identify the search object, retrieve its detailed information, and present it to the user.

When the gaze 81 is slightly off the search object 80, the device can present the facility information for a position close to the search coordinates of the gaze 81, let the user (driver) choose from several presented candidates, or, when information on a different facility is presented, let the user (driver) utter "the one in front" or "the one behind" so that the facility information of the intended object is presented.
As described above, according to Embodiment 1, gaze detection of the user in a moving body such as a vehicle makes it possible to identify a search object easily and to display it on a display, such as a HUD (head-up display), that shows various kinds of information superimposed on the user's field of view, so that detailed information on the search object can be presented to the user efficiently.

In addition, since a facility in the gaze direction can be pinpointed with a simple trigger such as a voice command, a switch, or a gesture, searches that were conventionally laborious become easy.

Embodiment 2.

FIG. 14 is a block diagram showing an example of the information processing device according to Embodiment 2 of the present invention. Components identical to those described in Embodiment 1 are given the same reference numerals and their duplicate description is omitted. Compared with Embodiment 1, Embodiment 2 described below further comprises a search start button (search start instruction input unit) 31 and a search decision button (search decision instruction input unit) 32.
In Embodiment 2, the search start button (search start instruction input unit) 31 and the search decision button (search decision instruction input unit) 32 are described as physical buttons provided, for example, on the steering wheel, but they may instead be software menu buttons displayed on a touch panel. They may also be physically a single button structured so that half-pressing it starts the search and pressing it fully decides the search.
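The single-button variant could be sketched as below; the press-depth encoding and thresholds are assumptions, since the patent only describes the half-press/full-press behavior.

```python
# Hypothetical mapping of one physical button onto the two roles: a half
# press acts as the search start button 31, a full (deep) press as the
# search decision button 32. Depth encoding and thresholds are assumptions.

def on_button(depth: float) -> str:
    """depth in [0, 1]: 0.5 or more = half press, 0.9 or more = deep press."""
    if depth >= 0.9:
        return "decide search"      # role of the search decision button 32
    if depth >= 0.5:
        return "start search"       # role of the search start button 31
    return "idle"

for d in (0.2, 0.6, 0.95):
    print(d, "->", on_button(d))
```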
As in Embodiment 1, the search request determination unit 11 can determine a search request by the user from information obtained from any of the voice recognition device 1, the switch 2, and the gesture recognition device 3, but it may also judge that a search request has been made when the search start button 31 is pressed.

When the search request determination unit 11 determines that a search request has been made, the coordinate calculation unit 12 recognizes the gaze of the user (driver) only while the user is holding down the search start button 31, takes the object in the user's gaze direction as the search object, and calculates its coordinate position on the map.
That is, when the user presses the search start button 31 to request a search while driving, the search request determination unit 11 judges that a search request has been made, the coordinate calculation unit 12 calculates the coordinates of the search object from the various kinds of information, and the search mark is superimposed.

This is executed dynamically in real time while the search start button 31 is held down. Therefore, if the gaze moves to another object while the search start button 31 is being pressed, the search mark moves to that other object as well.
When the user presses the search decision button 32, the search processing unit 13 searches for detailed information on the search object on which the search mark is superimposed on the head-up display (HUD) 18.

That is, the user presses the search decision button 32 while confirming that the search mark coincides with the object he or she wants to look up, and the search processing unit 13, judging that the search decision button 32 has been pressed, outputs the search result.

A specific processing flow will now be described using the transition images of FIG. 13 from Embodiment 1. The flowcharts merely gain a step that checks the pressed state of the search start button 31 and the search decision button 32, and are otherwise essentially the same as FIGS. 9 to 12 of Embodiment 1, so they are not shown.

For example, as shown in FIG. 13(a), the user (driver) keeps pressing the search start button 31 while looking at the building indicated by the gaze 81 while the vehicle is moving.

As described above, when the search request is accepted by pressing the search start button 31, no utterance or speech recognition is needed; however, as in the specific example of Embodiment 1 using speech recognition (FIG. 13), even when the user looks at the building indicated by the gaze 81 and utters "What is that building?" as shown by utterance 82 in FIG. 13(a), the user keeps pressing the search start button 31 if he or she wants the search to start.
Then, as long as the search start button 31 is held down, the coordinate calculation unit 12 calculates the position coordinates of the search object 80 based on the current location information from the GPS 4, the gaze information from the gaze recognition device 5 (information on the gaze 81 in FIG. 13(a)), the information from the sensors 6 such as a camera, and the information from the in-vehicle devices 7, and reports the search coordinates to the superimposed display control unit 16 and the search processing unit 13 (steps ST31 to ST35 in FIG. 10).

On receiving the search coordinates of the search object from the coordinate calculation unit 12, the superimposed display control unit 16 refers to the map data stored in the storage unit 14, matches the map information (facility information) against the object position (search coordinates), and instructs the HUD 18 to superimpose the search mark (steps ST41 to ST43 in FIG. 11).

As a result, as shown for example in FIG. 13(b), in response to the instruction from the superimposed display control unit 16, a search mark 85 indicating the object in the user's (driver's) gaze direction is superimposed on the buildings, roads, and the like shown on the HUD 18. If the search mark 85 is superimposed on the object the user wants to look up, the user presses the search decision button 32.

On receiving the search coordinates of the search object from the coordinate calculation unit 12, the search processing unit 13, if the search decision button 32 has been pressed, refers to the map data stored in the storage unit 14, reads out the facility information at the search coordinates, and notifies the output information creation unit 15 of the search result output (steps ST46 to ST47 in FIG. 12).

As a result, the output information creation unit 15 generates the speech or display character string for outputting the facility information at the search coordinates, and presents it to the user (driver) by sending it to the output device 8 (a voice output device or display device), for example "This is XX Bank", as shown by the voice output or display output 83 in FIG. 13(b).

If the search decision button 32 is then pressed again, the search processing unit 13 reads out further information on the same search object (the facility information at the search coordinates) and presents it to the user (driver) by outputting the speech or display character string "Opening hours are 9:00 a.m. to 5:00 p.m." to the output device 8 (a voice output device or display device), as shown by the voice output or display output 83' in FIG. 13(c).

When speech recognition is also used, for example if the user (driver) further utters "What are its opening hours?" as shown by utterance 82 in FIG. 13(b), the voice recognition device 1 recognizes the speech and the search processing unit 13 reads out further information on the same search object (the facility information at the search coordinates); even in this case, pressing the search decision button 32 remains a precondition.
In this way, when the user (driver) wants to look up detailed information about a facility such as a building near the vehicle while driving, there is no need for a cumbersome procedure such as stopping, opening a search screen, and selecting a genre; simply directing the gaze at the search object and issuing a search request is enough for the device to identify the search object, retrieve its detailed information, and present it to the user.

In addition, since the search object can be confirmed on the HUD 18 before the search result (the detailed information on the search object) is presented, this solves the problem that the user (driver) can only find out which object was searched after the result has been presented.

Furthermore, when the gaze 81 is slightly off the search object 80, the user (driver) can trigger the search with the search decision button 32 while confirming the mark visually, which also solves the problem of having to re-search after the fact by uttering designations such as "the one in front" or "the one behind".

As described above, according to Embodiment 2, in addition to the effects of Embodiment 1, the user can trigger the search while visually confirming the search object, so no unintended object is searched; a desired object can be searched in real time and reliably, and detailed information on the search object can be presented to the user efficiently.
Embodiment 3.

FIG. 15 is a block diagram showing an example of the information processing device according to Embodiment 3 of the present invention. Components identical to those described in Embodiments 1 and 2 are given the same reference numerals and their duplicate description is omitted. Compared with Embodiment 1, in Embodiment 3 described below the search processing unit 13 acquires information from the GPS 4, the sensors 6, and the in-vehicle devices 7 and also performs periphery search processing, which searches for facilities around the current location; in addition, a display device is provided as the output device 8.

The search processing unit 13 searches for specific facilities around the vehicle position (for example banks, convenience stores, or gas stations) in response to voice, a switch, a GUI (Graphical User Interface), or the like. The surrounding facilities found in this way are stored in the storage unit 14, and the information is also sent to the output information creation unit 15 and displayed superimposed on the output device (display device) 8.
Here, the output device (display device) 8 is described assuming a head-up display; that is, the head-up display (HUD) 18 and the output device (display device) 8 in FIG. 15 are the same device. The specific surrounding facilities found by the search processing unit 13 can be displayed matched to the viewpoint. The viewpoint information of the user (driver) can be acquired from the gaze recognition device 5.

That is, the search processing unit 13 searches for the previously specified surrounding facilities of the moving body as search points, and the output information creation unit 15 generates output information so that the search points found by the search processing unit 13 are displayed.
 This allows the user (driver) to check the locations of the periphery search results while driving. However, presenting a large amount of complex information on the head-up display (HUD) 18 could interfere with driving, so when many results are displayed, it is realistic to limit each result to a mark or a simple word.
 The user (driver) can request detailed information by directing his or her gaze at a mark displayed as a periphery search result. The detailed information may be displayed on the head-up display (HUD) 18 or output by voice; here, the output device 8 is described as being the head-up display (HUD) 18, i.e., a display device. When information is displayed on the head-up display (HUD) 18, it is kept at a level that does not interfere with driving.
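 By way of illustration only, the periphery search and its reduction to unobtrusive marks could look like the following Python sketch; the data layout, the 500 m radius, and the eight-character label limit are assumptions made for this example, not part of the disclosed apparatus.

```python
import math

def periphery_search(facilities, current_pos, category, radius_m=500):
    """Return nearby facilities of one category as minimal HUD marks.

    facilities : list of dicts with 'name', 'category', 'lat', 'lon'
    current_pos: (lat, lon) of the vehicle, e.g. from the GPS 4
    """
    marks = []
    for f in facilities:
        if f["category"] != category:
            continue
        # Equirectangular approximation; adequate over a few hundred meters.
        dx = (f["lon"] - current_pos[1]) * 111_320 * math.cos(math.radians(current_pos[0]))
        dy = (f["lat"] - current_pos[0]) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            # Keep only a mark and a short word so the HUD stays uncluttered.
            marks.append({"label": f["name"][:8], "lat": f["lat"], "lon": f["lon"]})
    return marks
```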
 FIG. 16 is a diagram illustrating an example of the processing transitions of the information processing apparatus according to Embodiment 3.
 For example, when the user (driver) speaks or enters "bank" as the specific facility category around the vehicle position, search points 84 are displayed on the output device (display device) 8 only at the banks among the surrounding buildings, as shown in FIG. 16(a). This periphery search may be entered by voice or through a touch panel or the like.
 The flow of the subsequent search processing is the same as in the flowcharts of FIGS. 9 to 12 of Embodiment 1, so illustration and detailed description are omitted. In Embodiment 3 as well, the search request may be issued through any of the voice recognition device 1, the switch 2, or the gesture recognition device 3; here, the search request is described as being issued by pressing the switch 2.
 First, if neither a search nor voice output is in progress (NO in step ST11 of FIG. 9(b)), the switch 2 acquires switch press information (step ST12). When it is determined that the search switch serving as the search trigger has been pressed (YES in step ST13), a search request is issued (step ST14).
 The coordinate calculation unit 12 then calculates the position coordinates of the search object 80 based on the current-location information from the GPS 4, the line-of-sight information from the line-of-sight recognition device 5 (information on the line-of-sight position 81 in FIG. 16(b)), information from sensors 6 such as a camera, and information from the in-vehicle device 7, and notifies the search processing unit 13 of the search coordinates (steps ST31 to ST35 in FIG. 10).
 When the search processing unit 13 receives the search coordinates of the search object from the coordinate calculation unit 12, it refers to the map data stored in the storage unit 14, reads out the facility information at the search coordinates, and notifies the output information creation unit 15 that the search result is to be output (steps ST46 to ST47 in FIG. 12).
 As a result, the output information creation unit 15 generates a display character string for outputting the facility information at the search coordinates and presents it to the user (driver) by outputting, for example, the voice or display string "This is XX Bank" to the output device 8 (voice output device or display device), as shown in the display output 83 of FIG. 16(c).
 When still more detailed information is wanted, the user (driver) looks at the line-of-sight detection position 81 for requesting further detail, as shown in the display output 83 of FIG. 16(c). The search processing unit 13 then reads out further information on the previously searched object 80 (the facility information at the search coordinates) and presents it to the user (driver) by outputting a display string such as "Open 9:00 a.m. to 5:00 p.m." to the output device (display device) 8, as shown in the display output 83 of FIG. 16(d).
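 Condensed into code, the chain just described — gaze to coordinates, coordinates to facility, then staged output — might be sketched as follows. The facility table, the fixed 100 m look-ahead, and the coordinate rounding are all assumptions made so the example runs; the actual apparatus computes the target coordinates in steps ST31 to ST35 from GPS, line-of-sight, sensor, and vehicle data.

```python
import math

# Hypothetical stand-in for the map data held in the storage unit 14.
FACILITIES = {
    (35.6812, 139.7671): {"name": "XX Bank", "hours": "9:00 a.m. to 5:00 p.m."},
}

def estimate_target(current, heading_deg, distance_m=100.0):
    """Crude gaze-ray projection: current position plus a fixed look-ahead."""
    lat, lon = current
    dlat = distance_m * math.cos(math.radians(heading_deg)) / 111_320
    dlon = distance_m * math.sin(math.radians(heading_deg)) / (
        111_320 * math.cos(math.radians(lat)))
    return (round(lat + dlat, 4), round(lon + dlon, 4))

def search(current, heading_deg):
    """Gaze -> coordinates -> facility, i.e. ST31-ST35 then ST46-ST47, condensed."""
    facility = FACILITIES.get(estimate_target(current, heading_deg))
    if facility is None:
        return "No information is available."
    return f"This is {facility['name']}."        # brief first answer, FIG. 16(c)

def more_detail(current, heading_deg):
    """A further gaze-triggered request yields the next level of information."""
    facility = FACILITIES.get(estimate_target(current, heading_deg))
    return f"Open {facility['hours']}." if facility else "No further information."

# e.g. search((35.6803, 139.7671), 0.0) returns "This is XX Bank."
```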
 In this way, when the user (driver) wants detailed information about a facility such as a building around the vehicle position, the search points of the specific category designated by the user (driver) are displayed in advance; then, simply by directing the gaze at a search object carrying a search-point mark and issuing a search request by utterance or the like, the apparatus identifies the search object, retrieves its detailed information, and presents it to the user. Compared with Embodiment 1, this makes it possible to identify the search object and obtain its detailed information even more reliably.
 The output information creation unit 15 may also generate the output information so that, when the line-of-sight position used by the coordinate calculation unit 12 to calculate the coordinates of the search object the user wants to look up coincides with one of the search points 84, the matching search point 84 is highlighted. That is, as shown in FIG. 16(b), for example, when the line of sight 81 overlaps the search point 84 displayed on the search object 80, that search point 84 may be enlarged, shown in a different color, or otherwise emphasized.
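 A minimal sketch of this highlighting check, assuming screen-space coordinates for both the gaze and the marks and an arbitrary 30-pixel tolerance (neither is specified by the disclosure):

```python
def highlight_if_gazed(search_points, gaze_xy, tolerance_px=30):
    """Mark a search point 84 as highlighted when the line of sight 81 overlaps it."""
    for point in search_points:
        dx = point["x"] - gaze_xy[0]
        dy = point["y"] - gaze_xy[1]
        # Enlargement or recoloring would be driven by this flag at render time.
        point["highlighted"] = (dx * dx + dy * dy) ** 0.5 <= tolerance_px
    return search_points
```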
 FIG. 17 is a flowchart showing the process of presenting detailed information using line-of-sight recognition in the information processing apparatus of Embodiment 3.
 First, it is determined whether a search point 84 is marked within the range visible to the user (driver) as a result of the periphery search (step ST51). If no mark is present (NO in step ST51), the process keeps waiting until a search-point 84 mark appears.
 On the other hand, if a facility mark (search point 84) is displayed (YES in step ST51), line-of-sight information is acquired (step ST52).
 Next, it is determined whether facility detail information 83 is being displayed (step ST53). If the facility detail information 83 is not displayed (YES in step ST53), it is determined from the line-of-sight information whether the user has been looking at a facility mark (search point 84) for at least a certain time (step ST54).
 If the user has been looking at it for at least the certain time (YES in step ST54), the detailed information 83 of the search object bearing the facility mark (search point 84) being gazed at is displayed (step ST55). If it is not determined that the facility mark (search point 84) has been viewed for the certain time (NO in step ST54), the process returns to step ST51.
 If, in step ST53, the facility detail information 83 was already displayed (NO in step ST53), it is determined whether the user has been looking at the displayed facility detail information 83 for at least a certain time (step ST56). If so (YES in step ST56), further facility detail information 83' is displayed (step ST57). If it is not determined that the facility detail information 83 has been viewed for the certain time (NO in step ST56), the displayed facility detail information 83 is hidden (step ST58).
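 Read as a loop, FIG. 17 amounts to a small dwell-time decision rule. The following self-contained Python sketch expresses one pass of it; the 1.5-second threshold is an assumed value, since the disclosure only says "a certain time":

```python
DWELL_S = 1.5   # assumed dwell threshold; the disclosure fixes no concrete value

def next_action(marks_visible, detail_shown, gaze_on_mark_s, gaze_on_detail_s):
    """Decide the HUD's next step for one pass of the FIG. 17 flowchart.

    marks_visible    : any search point 84 in the driver's view?      (ST51)
    detail_shown     : facility detail information 83 on screen?      (ST53)
    gaze_on_mark_s   : dwell time on a facility mark, in seconds      (ST54)
    gaze_on_detail_s : dwell time on the detail display, in seconds   (ST56)
    """
    if not marks_visible:                    # ST51: keep waiting for a mark
        return "wait"
    if not detail_shown:
        if gaze_on_mark_s >= DWELL_S:        # ST54 YES -> ST55
            return "show_detail"
        return "wait"                        # ST54 NO -> back to ST51
    if gaze_on_detail_s >= DWELL_S:          # ST56 YES -> ST57
        return "show_more_detail"
    return "hide_detail"                     # ST56 NO -> ST58

# e.g. next_action(True, False, 2.0, 0.0) returns "show_detail"
```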
 As described above, according to Embodiment 3, in addition to the effects of Embodiments 1 and 2, the positions of the specific nearby facilities to be found from a moving body such as a vehicle are displayed in an easy-to-understand way, and when the user wants the detailed information of one of those facilities, the search object can be identified simply by detecting the user's line of sight, so the detailed information of the search object can be presented to the user efficiently.
 Moreover, superimposing the display on a display device in the line-of-sight direction makes the information easier for the user to recognize, so nearby facilities can be searched for more comfortably.
 In Embodiments 1 to 3 above, the description assumed that the line of sight is directed at a facility or other object outside the vehicle as the search object. By directing the line of sight at equipment inside the vehicle, however, it is also possible to retrieve, as detailed information, instructions on how to use that in-vehicle equipment.
 In this case, the storage unit 14 also stores in-vehicle equipment information such as the information panel, navigation system, front camera, air conditioner, turn signals, and shift lever inside the vehicle.
 In recent years automobiles have come to carry a wide variety of functions, and it is doubtful whether users (drivers) understand and use them all. By applying this invention, the usage and display meanings of equipment whose operation can be complicated, such as the information panel, navigation system, or front camera, can be looked up easily without opening a thick manual.
 In this case, the user looks at the in-vehicle equipment whose usage he or she wants to know using the line-of-sight recognition device 5 and applies a search trigger by voice recognition, a switch, gesture recognition, or the like, whereupon the usage and explanation of that equipment can be displayed or output by voice. The searchable equipment is limited to items whose position the user (driver) cannot freely change; since the position coordinates of in-vehicle equipment are essentially fixed, however, this poses no great inconvenience in practice.
 Specifically, the user (driver) issues a search request by voice, switch, gesture, or the like while looking at the in-vehicle equipment to be looked up. When the search trigger is recognized and a search request is issued, the coordinates of the search target are identified by line-of-sight recognition. A search is then performed based on the search coordinates, the detailed information of the in-vehicle equipment corresponding to those coordinates is read out, and the information is output. If no corresponding equipment exists, a message to the effect that there is no information is output.
 This provides the user (driver) with an easy way to look up how to use vehicle equipment that will only grow more sophisticated, where previously there was no simple method other than consulting a thick manual. As a result, even if the user (driver) does not know all the functions in advance, detailed information such as how to use the equipment being looked at can be checked simply by directing the gaze at it and issuing a search. Furthermore, when the displayed explanation states that a setting can be changed by a button operation, the apparatus may allow that setting-change button operation to be performed simply by looking at the button for at least a certain time.
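 Because the cabin equipment sits at essentially fixed coordinates, this lookup reduces to a table keyed by gaze region. A hedged sketch follows; the regions and help texts are invented for illustration and are not taken from the disclosure:

```python
# Fixed gaze regions for in-vehicle equipment, (x_min, y_min, x_max, y_max)
# in the line-of-sight recognition device's normalized frame (values invented).
EQUIPMENT_REGIONS = {
    (0.10, 0.60, 0.30, 0.80): "Information panel: shows fuel economy and warnings.",
    (0.40, 0.55, 0.60, 0.75): "Navigation system: long-press MENU to set a destination.",
}

def equipment_help(gaze_x, gaze_y):
    """Return usage text for the equipment being looked at, if any."""
    for (x0, y0, x1, y1), text in EQUIPMENT_REGIONS.items():
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return text
    return "No information is available."    # no corresponding equipment
```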
 Needless to say, Embodiment 3 may also, like Embodiment 2, further include the search start button 31 and the search determination button 32, calculate the coordinate position only while the user (driver) holds down the search start button 31, and search for the detailed information of the search object when the user (driver) presses the search determination button 32.
Embodiment 4.
 In Embodiments 1 to 3 above, the information processing apparatus of this invention was described as applied to a navigation apparatus mounted on a moving body such as a vehicle, but its application is not limited to in-vehicle navigation apparatuses. It may be a navigation apparatus for moving bodies including people, vehicles, railways, ships, or aircraft, or it may be applied to a server of an information processing system or navigation system. It can also take any form, such as an information processing system or navigation system application installed on a portable information terminal such as a smartphone, tablet PC, or mobile phone.
 FIG. 18 is a diagram showing an overview of the navigation system according to Embodiment 4 of the present invention. This navigation system can take various forms: the in-vehicle device 100 may perform the search processing and navigation processing in cooperation with at least one of a portable information terminal 101 such as a smartphone and a server 102, or at least one of the portable information terminal 101 and the server 102 may perform the search processing and navigation processing and cause the in-vehicle device 100 to display the recognition results and map information. The configurations of this navigation system are described below.
 In Embodiments 1 to 3, the search processing function was described as residing in the in-vehicle device 100 shown in FIG. 18. For the navigation system of Embodiment 4, two cases are described: the server 102 performs the search processing and provides the results to the user by displaying them on the in-vehicle device 100, and the portable information terminal 101 performs the search processing in cooperation with the server 102 and provides the results to the user by displaying them on the in-vehicle device 100.
 In these cases, the in-vehicle device 100 includes a communication unit capable of communicating with the server 102 using, for example, mobile communication or a wireless LAN. The server 102 includes the search processing unit 13; a search request is issued to the external server 102 based on the search-target coordinates recognized by the in-vehicle device 100, and the server 102 performs the search processing, generates the output information of the search result, and sends it to the in-vehicle device 100, so that the search result can be presented to the user (driver). In this configuration the storage unit 14 need not hold map data; it suffices for the server 102 to have the map data.
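 In this delegated arrangement the in-vehicle device only ships the recognized coordinates to the server and renders the reply. A minimal client-side sketch over HTTP follows; the endpoint URL and the JSON payload shape are assumptions, since the disclosure does not specify a protocol:

```python
import json
import urllib.request

SERVER_URL = "http://example.com/search"   # hypothetical endpoint on server 102

def remote_search(lat, lon):
    """Send the search-target coordinates to the server and return its result."""
    payload = json.dumps({"lat": lat, "lon": lon}).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)             # e.g. {"name": "...", "detail": "..."}
```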
 First, consider the case where the server 102 performs the search processing and displays the results on the in-vehicle device 100, that is, where the in-vehicle device 100 functions as a display device in cooperation with the server 102 having the search processing function.
 In this configuration, the in-vehicle device 100 may communicate with the server 102 directly, or via the portable information terminal 101. The server 102 has the function of the search processing unit 13 described in Embodiments 1 to 3, and the in-vehicle device 100 functions as a display device including at least a display unit for providing the search results from the server 102 to the user.
 Next, consider the case where the portable information terminal 101 performs the search processing in cooperation with the server 102 and the in-vehicle device 100 provides the results to the user.
 In this configuration, the in-vehicle device 100 communicates with the server 102 via the portable information terminal 101, and an application on the portable information terminal 101 performs the search processing in cooperation with the server 102. The in-vehicle device 100 functions as a display device including at least a display unit for providing the search results from the portable information terminal 101 and the server 102 to the user.
 In either case, the in-vehicle device 100 basically has only communication and display functions; it receives the search results produced by the cooperation of the portable information terminal 101 and the server 102 and provides them to the user.
 That is, the application on the portable information terminal 101 causes the in-vehicle device 100, acting as a display device, to display the search results for the search object requested by the user.
 Even with this configuration, the same effects as in Embodiments 1 to 3 can be obtained.
 Sensor information acquired from a sensor 6 such as a camera can also be sent to the server and used in the search. For example, an image obtained from the camera can be sent to the server, and the most appropriate facility can be selected as the search target from the result.
 The amount of information in the search results can also be increased by additionally requesting not only the map's facility information but also user reviews and comment tags.
 Furthermore, if camera information is available, detailed information on a car or motorcycle can be retrieved even when the search is directed at such a moving object.
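 On the server side, the camera frame could be used to disambiguate among candidate facilities near the search coordinates. The following toy illustration scores candidates by cosine similarity of precomputed feature vectors; this feature representation is a placeholder assumption, not a method the disclosure describes:

```python
def pick_facility(candidates, image_features):
    """Choose the candidate whose stored appearance best matches the camera frame.

    candidates     : list of dicts with 'name' and 'features' (feature vectors)
    image_features : feature vector extracted from the camera image (assumed given)
    """
    def similarity(a, b):
        # Cosine similarity as a stand-in for a real image-matching model.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    return max(candidates, key=lambda c: similarity(c["features"], image_features))
```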
 Within the scope of the invention, the embodiments may be combined freely, any component of any embodiment may be modified, and any component may be omitted from any embodiment.
 The information processing apparatus of this invention is applicable not only to in-vehicle navigation apparatuses but also to navigation apparatuses for moving bodies including people, vehicles, railways, ships, or aircraft, portable navigation apparatuses, portable information processing apparatuses, and the like, as well as to navigation system applications installed on portable information terminals such as smartphones, tablet PCs, and mobile phones.
 1 voice recognition device, 2 switch, 3 gesture recognition device, 4 GPS, 5 line-of-sight recognition device, 6 sensor, 7 in-vehicle device, 8 output device (voice output device or display device), 10 navigation device, 11 search request determination unit, 12 coordinate calculation unit, 13 search processing unit, 14 storage unit, 15 output information creation unit, 16 superimposed display control unit, 18 head-up display (HUD), 20 user, 21 eyeball of user 20, 22 eyeball position measurement area, 23 line-of-sight measurable direction, 31 search start button (search start instruction input unit), 32 search determination button (search determination instruction input unit), 80 search object, 81 line-of-sight position, 82 utterance (utterance content), 83, 83' voice output or display output (facility detail information), 84 search point, 85 search mark, 100 in-vehicle device, 101 portable information terminal, 102 server.

Claims (9)

  1.  An information processing apparatus based on line-of-sight detection of a user using a moving body, comprising:
     a display that displays various types of information superimposed on the user's field of view;
     a search request determination unit that determines a search request by the user;
     a coordinate calculation unit that, when the search request determination unit determines that there is a search request, calculates a coordinate position on a map, taking an object in the user's line-of-sight direction as the search object, based on at least the current position of the moving body and recognition of the user's line of sight;
     a superimposed display control unit that superimposes on the display a search mark highlighting the search object, based on the coordinate position of the search object calculated by the coordinate calculation unit and map data;
     a search processing unit that searches for detailed information of the search object based on the coordinate position of the search object calculated by the coordinate calculation unit and the map data; and
     an output information creation unit that generates output information for presenting the detailed information of the search object retrieved by the search processing unit to the user.
  2.  The information processing apparatus according to claim 1, further comprising:
     a search start instruction input unit that instructs the coordinate calculation unit to start a search; and
     a search determination instruction input unit that instructs the search processing unit to decide on a search,
     wherein the coordinate calculation unit calculates the coordinate position only while the search start instruction input unit is pressed, and
     the search processing unit, when the search determination instruction input unit is pressed, searches for the detailed information of the search object on which the search mark is superimposed on the display.
  3.  The information processing apparatus according to claim 1, wherein the coordinate calculation unit further calculates the coordinate position of the search object based on information acquired from a sensor that detects information on the surroundings of the moving body.
  4.  The information processing apparatus according to claim 1, wherein the coordinate calculation unit further calculates the coordinate position of the search object based on information acquired from a device that detects at least the speed of the moving body.
  5.  The information processing apparatus according to claim 1, wherein the search request determination unit determines whether there is a search request based on any of recognition of speech uttered by the user, a switch pressed by the user, and recognition of a gesture performed by the user.
  6.  The information processing apparatus according to claim 1, wherein the search processing unit searches, as search points, for facilities around the moving body specified in advance, and
     the output information creation unit generates the output information so as to display the search points retrieved by the search processing unit.
  7.  The information processing apparatus according to claim 6, wherein the output information creation unit generates the output information so that, when the user's line-of-sight position used by the coordinate calculation unit to calculate the coordinate position of the search object the user wants to search coincides with one of the search points, the matching search point is highlighted.
  8.  An information processing apparatus that causes a display device to display search results based on line-of-sight detection of a user using a moving body, comprising:
     a display that displays various types of information superimposed on the user's field of view;
     a search request determination unit that determines a search request by the user;
     a coordinate calculation unit that, when the search request determination unit determines that there is a search request, calculates a coordinate position on a map, taking an object in the user's line-of-sight direction as the search object, based on at least the current position of the moving body and recognition of the user's line of sight;
     a superimposed display control unit that superimposes on the display a search mark highlighting the search object, based on the coordinate position of the search object calculated by the coordinate calculation unit and map data;
     a search processing unit that searches for detailed information of the search object based on the coordinate position of the search object calculated by the coordinate calculation unit and the map data; and
     an output information creation unit that generates output information for presenting the detailed information of the search object retrieved by the search processing unit to the user and causes the display device to display the generated output information.
  9.  An information processing method in which an information processing apparatus causes a display device to display search results based on line-of-sight detection of a user using a moving body, comprising the steps of:
     a search request determination unit determining a search request by the user;
     a coordinate calculation unit, when the search request determination unit determines that there is a search request, calculating a coordinate position on a map, taking an object in the user's line-of-sight direction as the search object, based on at least the current position of the moving body and recognition of the user's line of sight;
     a superimposed display control unit superimposing a search mark highlighting the search object on a display that displays various types of information superimposed on the user's field of view, based on the coordinate position of the search object calculated by the coordinate calculation unit and map data;
     a search processing unit searching for detailed information of the search object based on the coordinate position of the search object calculated by the coordinate calculation unit and the map data; and
     an output information creation unit generating output information for presenting the detailed information of the search object retrieved by the search processing unit to the user and causing the display device to display the generated output information.
PCT/JP2013/065610 2013-06-05 2013-06-05 Device for processing information through line-of-sight detection and information processing method WO2014196038A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/065610 WO2014196038A1 (en) 2013-06-05 2013-06-05 Device for processing information through line-of-sight detection and information processing method


Publications (1)

Publication Number Publication Date
WO2014196038A1 true WO2014196038A1 (en) 2014-12-11

Family

ID=52007714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/065610 WO2014196038A1 (en) 2013-06-05 2013-06-05 Device for processing information through line-of-sight detection and information processing method

Country Status (1)

Country Link
WO (1) WO2014196038A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002267470A (en) * 2001-03-14 2002-09-18 Toyota Motor Corp System and method for presenting information
JP2006242859A (en) * 2005-03-04 2006-09-14 Denso Corp Information display device for vehicle
JP2007080060A (en) * 2005-09-15 2007-03-29 Matsushita Electric Ind Co Ltd Object specification device
JP2010134640A (en) * 2008-12-03 2010-06-17 Honda Motor Co Ltd Information acquisition apparatus
JP2012117911A (en) * 2010-11-30 2012-06-21 Aisin Aw Co Ltd Guidance device, guidance method and guidance program

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016148968A (en) * 2015-02-12 2016-08-18 セイコーエプソン株式会社 Head-mounted display device, control system, method for controlling head-mounted display device, and computer program
WO2016151958A1 (en) * 2015-03-20 2016-09-29 ソニー株式会社 Information processing device, information processing system, information processing method, and program
WO2017051721A1 (en) * 2015-09-24 2017-03-30 ソニー株式会社 Information processing device, information processing method, and program
JPWO2017061183A1 (en) * 2015-10-05 2018-08-30 株式会社村田製作所 Human interface
WO2017061183A1 (en) * 2015-10-05 2017-04-13 株式会社村田製作所 Human interface
JPWO2017104198A1 (en) * 2015-12-14 2018-09-27 ソニー株式会社 Information processing apparatus, information processing method, and program
WO2017104198A1 (en) * 2015-12-14 2017-06-22 ソニー株式会社 Information processing device, information processing method, and program
US11042743B2 (en) 2015-12-14 2021-06-22 Sony Corporation Information processing device, information processing method, and program for preventing deterioration of visual recognition in a scene
JP2017175621A (en) * 2016-03-24 2017-09-28 トヨタ自動車株式会社 Three-dimensional head-up display unit displaying visual context corresponding to voice command
US10140770B2 (en) 2016-03-24 2018-11-27 Toyota Jidosha Kabushiki Kaisha Three dimensional heads-up display unit including visual context for voice commands
CN107480129A (en) * 2017-07-18 2017-12-15 上海斐讯数据通信技术有限公司 A kind of article position recognition methods and the system of view-based access control model identification and speech recognition
CN111886564A (en) * 2018-03-28 2020-11-03 索尼公司 Information processing apparatus, information processing method, and program
CN112262068A (en) * 2018-06-12 2021-01-22 矢崎总业株式会社 Vehicle control system
CN112262068B (en) * 2018-06-12 2024-01-09 矢崎总业株式会社 Vehicle control system
WO2020065892A1 (en) * 2018-09-27 2020-04-02 日産自動車株式会社 Travel control method and travel control device for vehicle
WO2020240789A1 (en) * 2019-05-30 2020-12-03 三菱電機株式会社 Speech interaction control device and speech interaction control method
CN113536141A (en) * 2020-04-16 2021-10-22 上海仙豆智能机器人有限公司 Position collection method, electronic map and computer storage medium

Similar Documents

Publication Publication Date Title
WO2014196038A1 (en) Device for processing information through line-of-sight detection and information processing method
US10943400B2 (en) Multimodal user interface for a vehicle
US9881605B2 (en) In-vehicle control apparatus and in-vehicle control method
EP2826689B1 (en) Mobile terminal
US9261908B2 (en) System and method for transitioning between operational modes of an in-vehicle device using gestures
US10942566B2 (en) Navigation service assistance system based on driver line of sight and vehicle navigation system using the same
KR102206383B1 (en) Speech recognition apparatus and method thereof
CN108099790B (en) Driving assistance system based on augmented reality head-up display and multi-screen voice interaction
US20140278033A1 (en) Window-oriented displays for travel user interfaces
JP3160108B2 (en) Driving support system
KR20140136799A (en) Image display apparatus and operation method of the same
US9341492B2 (en) Navigation device, navigation method, and navigation program
JP2015041197A (en) Display control device
JP2020061642A (en) Agent system, agent control method, and program
JP6598313B2 (en) Navigation system and navigation device
US11325605B2 (en) Information providing device, information providing method, and storage medium
JP2019010919A (en) Travel support device and computer program
JP2009031065A (en) System and method for informational guidance for vehicle, and computer program
JP2015161632A (en) Image display system, head-up display device, image display method, and program
JP7233918B2 (en) In-vehicle equipment, communication system
TW201329784A (en) Interactive voice control navigation system
KR20160053472A (en) System, method and application for confirmation of identity by wearable glass device
KR20140095873A (en) Electronic device and control method for the electronic device
KR102059607B1 (en) Mobile terminal and control method thereof
KR101622692B1 (en) Electronic device and control method for the electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13886446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13886446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP