WO2006028171A1 - Data presentation device, data presentation method, data presentation program, and recording medium containing the program - Google Patents

Data presentation device, data presentation method, data presentation program, and recording medium containing the program

Info

Publication number
WO2006028171A1
WO2006028171A1 (PCT/JP2005/016515)
Authority
WO
WIPO (PCT)
Prior art keywords
keyword
data
speech
actual data
feature
Prior art date
Application number
PCT/JP2005/016515
Other languages
English (en)
Japanese (ja)
Inventor
Shigeo Matsui
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to JP2006535815A priority Critical patent/JPWO2006028171A1/ja
Publication of WO2006028171A1 publication Critical patent/WO2006028171A1/fr

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/36 Input/output arrangements for on-board computers
    • G01C 21/3605 Destination input or retrieval
    • G01C 21/3608 Destination input or retrieval using speech input, e.g. using speech recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0968 Systems involving transmission of navigation instructions to the vehicle
    • G08G 1/0969 Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map

Definitions

  • The present invention belongs to the field of data presentation means, such as destination setting during navigation, and in particular to techniques for a data presentation device that searches a plurality of pieces of actual data stored in advance and presents the results.
  • In recent years, speech recognition devices that recognize speech uttered by humans have been applied to various devices.
  • Such a speech recognition apparatus recognizes speech by sequentially matching the feature-quantity pattern of the uttered speech against feature-quantity patterns of speech representing recognition-candidate words and phrases (hereinafter referred to as keywords) prepared in advance.
  • In a navigation apparatus that guides the route of a moving body such as a vehicle based on map data, such speech recognition is generally used to set a destination or a waypoint.
  • Such a navigation device sets the recognized keyword as the current location, destination, or waypoint (hereinafter referred to as a point), acquires data related to that point, such as its latitude, longitude, and attributes (hereinafter referred to as actual data), from a database, and performs route setting and route guidance.
  • A navigation device is also known in which, when an uttered voice does not match any of the plurality of keywords stored in the database, the user can register the uttered voice under a related keyword by operation (see, for example, Patent Document 1).
  • Patent Document 1 Japanese Unexamined Patent Publication No. 2003-323192
  • The present invention solves an example of the above problem by using the keyword recognized from the uttered voice as a search key for a data search in another database or the like, and provides a data presentation device that improves the recognition rate of uttered speech without complicating operation and without increasing the number of keywords.
  • The invention according to claim 1 comprises: acquisition means for acquiring the speech component of an uttered voice; extraction means for analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; first storage means in which a plurality of keyword feature quantity data indicating feature quantities of keyword speech are stored in advance; second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of the actual data; specifying means for specifying at least one keyword based on the uttered-voice feature quantity and the keyword feature quantity data; search means for searching for actual data having the specified keyword in at least part of the name information; and presentation means for presenting the detected actual data.
  • The invention according to claim 5 comprises: an acquisition step of acquiring the speech component of an uttered voice; an extraction step of analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; a specifying step of specifying at least one keyword based on the uttered-voice feature quantity and keyword feature quantity data indicating feature quantities of keyword speech; a search step of searching for actual data having the specified keyword in at least part of the name information indicating names stored in the second storage means; and a presentation step of presenting the detected actual data.
  • In another aspect, the invention causes a computer to function as: acquisition means for acquiring the speech component of an uttered voice; extraction means for analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; specifying means for specifying at least one keyword based on the uttered-voice feature quantity and keyword feature quantity data indicating feature quantities of keyword speech; search means for searching for actual data having the specified keyword in at least part of the name information indicating the name of predetermined actual data; and presentation means for presenting the detected actual data.
  • The invention according to claim 8 comprises: acquisition means for acquiring the speech component of an uttered voice; extraction means for analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; first storage means in which a plurality of keyword feature quantity data indicating feature quantities of keyword speech are stored in advance; second storage means in which predetermined actual data is stored in advance in association with name information indicating the name of the actual data; notification means for notifying the operator of a specified keyword when at least one keyword is specified based on the uttered-voice feature quantity and the keyword feature quantity data; correction means used to correct the notified keyword when the notified keyword does not match the keyword desired by the operator; search means for searching for actual data having the corrected keyword in at least part of the name information; and presentation means for presenting the detected actual data.
  • The invention according to claim 9 comprises: an acquisition step of acquiring the speech component of an uttered voice; an extraction step of analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; a specifying step of specifying at least one keyword based on the uttered-voice feature quantity and keyword feature quantity data indicating feature quantities of keyword speech; a notification step of notifying the specified keyword by notification means; a correction step in which the notified keyword is corrected; a search step of searching for actual data having the corrected keyword in at least part of the name information of predetermined actual data; and a presentation step of presenting the detected actual data.
  • In yet another aspect, the invention causes a computer to function as: acquisition means for acquiring the speech component of an uttered voice; extraction means for analyzing the speech component and extracting an uttered-voice feature quantity, which is the feature quantity of the speech component; specifying means for specifying at least one keyword based on the uttered-voice feature quantity and keyword feature quantity data indicating feature quantities of keyword speech; notification means for notifying the specified keyword; correction means used to correct the notified keyword; search means for searching for actual data having the corrected keyword in at least part of the name information of predetermined actual data; and presentation means for presenting the detected actual data.
  • FIG. 1 is a block diagram showing a schematic configuration of a navigation device according to an embodiment of the present application.
  • FIG. 2 is a flowchart (I) showing an operation of a point data search process necessary for a route setting process or a route guidance process in the system control unit 250 of the embodiment.
  • FIG. 3 is a flowchart (II) showing an operation of a search process of point data required at the time of route setting processing or route guidance processing in the system control unit 250 of the embodiment.
  • FIG. 1 is a block diagram showing a schematic configuration of the navigation apparatus of the present embodiment according to the present application.
  • The navigation device 100 of the present embodiment includes: a GPS receiver 110 that is connected to an antenna AT and receives GPS (Global Positioning System) data; a sensor unit 120 that detects travel data; an interface 130 that calculates the vehicle position based on the GPS data and the travel data; a VICS data receiver 140 that receives VICS (Vehicle Information Communication System) data; and an operation unit 150 used by the user to enter settings and input commands to the system.
  • The navigation device 100 also includes a microphone 160 that collects the voice uttered by the operator, a voice recognition circuit 170 that recognizes, from the uttered voice collected by the microphone 160, a command instructing the system (hereinafter simply referred to as a command), a database 180 storing data used when performing speech recognition, and a map data storage unit 190 in which various data such as map data and point data, described later, are recorded in advance.
  • The navigation device 100 further includes a display unit 200, a display control unit 220 that controls the display unit 200 using a buffer memory 210, an audio processing circuit 230 that generates audio such as route guidance, a speaker 240 that outputs the amplified audio signal from the audio processing circuit 230, a system control unit 250 that controls the entire system and each process related to speech recognition, and a ROM/RAM 260. Each unit is connected by a bus B.
  • The operation unit 150 of the present embodiment constitutes the operation means, selection means, and correction means according to the present invention; the speech recognition circuit 170 constitutes the acquisition means, extraction means, and specifying means according to the present invention; and the display unit 200 together with the display control unit 220, or the audio processing circuit 230 together with the speaker 240, constitutes the presentation means and notification means according to the present invention.
  • The GPS receiver 110 receives navigation radio waves from a plurality of satellites belonging to the GPS via the antenna (not shown), calculates pseudo-coordinate values of the current position of the mobile body based on the received radio waves, and outputs the calculated pseudo-coordinate data to the interface 130 as GPS data.
  • The sensor unit 120 detects travel data comprising the travel speed, acceleration, and azimuth of the vehicle, and outputs the detected travel data to the interface 130. Specifically, the sensor unit 120 detects the travel speed of the vehicle, converts the detected speed into speed data having a pulse or voltage form, and outputs the speed data to the interface 130. The sensor unit 120 also detects the movement of the vehicle in the vertical direction by comparing gravitational acceleration with the acceleration generated by the movement of the vehicle, converts acceleration data indicating the detected movement into a pulse or voltage form, and outputs it to the interface 130. Further, the sensor unit 120 includes a so-called gyro sensor that detects the azimuth angle of the vehicle, that is, the direction in which the vehicle is traveling, converts the detected azimuth angle into azimuth data having a pulse or voltage form, and outputs it to the interface 130.
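As an illustrative aside, not part of the patented disclosure, pulse-form speed data of the kind the sensor unit 120 outputs could be converted into a speed value roughly as follows; the pulses-per-revolution and wheel-circumference constants are assumed example values:

```python
# Hypothetical sketch: converting wheel-pulse counts from a speed sensor
# into a speed value. The two constants below are assumed, not from the patent.

PULSES_PER_REV = 4            # pulses emitted per wheel revolution (assumed)
WHEEL_CIRCUMFERENCE_M = 1.9   # wheel circumference in metres (assumed)

def speed_kmh(pulse_count: int, interval_s: float) -> float:
    """Convert a pulse count measured over interval_s seconds to km/h."""
    revs = pulse_count / PULSES_PER_REV
    metres = revs * WHEEL_CIRCUMFERENCE_M
    return metres / interval_s * 3.6

print(speed_kmh(40, 1.0))  # 40 pulses in 1 s -> 10 revs -> 19 m/s -> 68.4 km/h
```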
  • The interface 130 performs interface processing between the sensor unit 120 and GPS receiver 110 on the one hand and the system control unit 250 on the other; it calculates the vehicle position based on the GPS data and the travel data, and outputs the result to the system control unit 250 as own-vehicle position data.
  • vehicle position data is collated with map data in the system control unit 250 and used for navigation-related processing such as map matching processing and route search processing.
  • the VICS data receiving unit 140 acquires VICS data by receiving radio waves such as FM multiplex broadcasting, and outputs the acquired VICS data to the system control unit 250.
  • VICS refers to a road traffic information communication system
  • VICS data refers to road traffic information such as traffic jams, accidents, and regulations.
  • The map data storage unit 190 is configured by, for example, a hard disk, and stores pre-recorded map data such as road maps. At the time of route setting or route guidance, the necessary point data and other information required for driving guidance are read out, and the various read-out data are output to the system control unit 250.
  • In the map data storage unit 190, the entire map is divided into a plurality of blocks in a mesh shape, and the map data corresponding to each block is managed as block map data.
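As a hypothetical illustration of the mesh-shaped block management described above (the block size and indexing scheme are assumptions, not taken from the patent), a block could be looked up from latitude and longitude like this:

```python
# Assumed sketch of mesh-block management: the map is divided into
# fixed-size blocks, and a block index is derived from latitude/longitude.

BLOCK_DEG = 0.1   # block size in degrees (assumed example value)

def block_id(lat: float, lon: float) -> tuple:
    """Return the (row, col) index of the mesh block containing a point."""
    return int(lat // BLOCK_DEG), int(lon // BLOCK_DEG)

print(block_id(35.715, 139.774))  # (357, 1397)
```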
  • The map data storage unit 190 stores, as point data, name data indicating the names of destinations such as parks and stores, position data indicating the locations of those destinations, and facility data such as addresses, in association with road shape data for each point. Specifically, for each facility such as a restaurant, department store, amusement facility, tourist attraction, or museum, the stored point data includes the facility name, genre information indicating the genre (also referred to as an attribute) of the visiting point, such as dining, sightseeing, or play, position information indicating the latitude and longitude of the point, and facility data such as address, telephone number, business days, and business hours.
  • The point data in the map data storage unit 190 of the present embodiment is used to specify a point required for route setting or route guidance, such as a destination or the current location, and is searched by the system control unit 250.
  • The operation unit 150 includes, for example, a remote control device having a number of keys such as various confirmation buttons, selection buttons, and numeric keys, together with a light receiving unit that receives signals transmitted from the remote control device, or an operation panel having a similar set of keys, and is used to input driver commands such as displaying vehicle travel information and switching the display of the display unit 200.
  • In the present embodiment, when the uttered voice recognized by the voice recognition circuit 170 is presented on the display unit 200 as a recognition result in keyword form, the operation unit 150 is also used to select the displayed keyword (that is, to confirm a command or input value), to correct the presented keyword, and to determine a point whose name or facility is displayed on the map.
  • The voice recognition circuit 170 receives the uttered voice of the operator via the microphone 160, analyzes it using the database 180 as an operation command of the navigation device 100 or as a point name when searching for point data as described later, and displays the analysis result on the display unit 200.
  • The speech recognition circuit 170 of the present embodiment is configured to analyze the input uttered speech using the HMM (Hidden Markov Model) method.
  • The speech recognition circuit 170 of the present embodiment extracts the feature quantities of the speech components from the input uttered speech and compares them sequentially with the feature quantity data, stored in the database 180, of the keywords to be recognized as commands, calculating the accuracy of each match; in other words, the input utterance is identified as some keyword, that is, as a command carrying an instruction or a point name.
  • Specifically, the speech recognition circuit 170 compares the HMM feature-quantity pattern representing an arbitrary state with the feature quantities of each speech section obtained by dividing the input uttered speech at fixed time intervals, and calculates a similarity indicating the degree of coincidence between the HMM's feature-quantity pattern and the feature quantities of each speech section. Then, for each keyword HMM for which a similarity has been calculated, the speech recognition circuit 170 performs a matching process over the connection of HMMs, that is, calculates a cumulative similarity indicating the probability of that keyword connection, and recognizes the keyword indicated by the HMM connection having the highest cumulative similarity as the spoken language.
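The matching just described can be sketched, in greatly simplified form, as accumulating per-frame similarities and picking the best-scoring keyword. This toy version assumes equal-length frame sequences and a made-up similarity function rather than true HMM state alignment; the keyword patterns are invented example data:

```python
# Simplified, illustrative keyword matching (not the patented HMM method):
# score each keyword by the cumulative similarity between the utterance's
# per-frame feature vectors and the keyword's stored feature pattern.

def frame_similarity(a, b):
    """Similarity of two feature vectors (higher = more alike)."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def cumulative_similarity(utterance_frames, keyword_frames):
    """Sum of per-frame similarities (assumes equal-length sequences)."""
    return sum(frame_similarity(u, k)
               for u, k in zip(utterance_frames, keyword_frames))

def recognize(utterance_frames, keyword_patterns):
    """Return the keyword whose pattern best matches the utterance."""
    return max(keyword_patterns,
               key=lambda kw: cumulative_similarity(utterance_frames,
                                                    keyword_patterns[kw]))

patterns = {"tokyo": [[1.0, 0.2], [0.8, 0.5]],   # invented feature patterns
            "kyoto": [[0.1, 0.9], [0.3, 0.7]]}
print(recognize([[0.9, 0.3], [0.7, 0.6]], patterns))  # tokyo
```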
  • The database 180 stores a plurality of feature-quantity pattern data based on utterances of the keywords to be recognized as point names when searching for a point necessary for route setting or route guidance. Specifically, voice data of each phoneme uttered by a plurality of humans is acquired in advance, a feature-quantity pattern is extracted for each phoneme, and an HMM for each keyword, generated by learning the feature-quantity pattern data of each phoneme, is stored in advance.
  • The database 180 also stores point data for the keywords to be recognized as point names, together with name data such as facility names and place names.
  • The display unit 200 is composed of, for example, a CRT, a liquid crystal display element, or an organic EL (Electro Luminescence) element, and displays map data and point data in various modes under the control of the display control unit 220, with various states necessary for route guidance, such as the vehicle position, displayed superimposed on the map data or point data.
  • The display unit 200 also displays content information other than the map data and point data under the control of the display control unit 220; when voice recognition is performed it displays the recognized keyword, and when the recognized keyword is corrected it displays various information in conjunction with the operation unit 150.
  • The display control unit 220 receives map data or point data input via the system control unit 250, generates display data to be displayed on the display unit 200 based on instructions from the system control unit 250, temporarily stores it in the buffer memory 210, and reads the display data from the buffer memory 210 at predetermined timing to perform display control of the display unit 200.
  • The display control unit 220 of the present embodiment also generates display data and performs display control in conjunction with the operation unit 150 when a command is confirmed from a speech-recognized keyword, as described later, or when a recognized keyword is corrected and displayed on the display unit 200.
  • The audio processing circuit 230 generates an audio signal based on instructions from the system control unit 250 and outputs the generated audio signal, amplified, through the speaker 240. For example, route guidance information, including the direction of travel at the next intersection, driving guidance, and congestion or road-closure information that should be conveyed directly to the driver, is output to the speaker 240 as an audio signal.
  • The system control unit 250 includes various input/output ports such as a GPS reception port, a key input port, and a display control port, and comprehensively controls the overall functions for navigation processing.
  • The system control unit 250 reads out a control program stored in the ROM/RAM 260, executes each process, and temporarily holds the data being processed in the ROM/RAM 260, thereby controlling each process of route setting and route guidance.
  • The system control unit 250 controls each unit and, in each process of route setting or route guidance, performs processing for specifying a command based on the operator's uttered voice, processing for displaying the recognized keyword, various processing when the operator confirms a command based on the displayed keyword, and correction operation processing when the operator corrects the displayed keyword.
  • In particular, when the system control unit 250 specifies a point necessary for route setting or route guidance, such as a destination or the current location, a keyword for specifying the point is recognized, the map data storage unit 190 is searched based on the recognized keyword, and a search process for detecting the corresponding point data (hereinafter referred to as the point data search process at the time of route setting processing or route guidance processing) is performed; the detected point data is displayed on the display unit 200 via the display control unit 220.
  • FIG. 2 and FIG. 3 are flow charts showing the operation of the point data search process required during the route setting process or the route guidance process in the system control unit 250 of the present embodiment.
  • First, when an instruction to start the route setting process is input by the operator and received by the system control unit 250 (step S11), the system control unit 250 acquires information indicating the current position of the vehicle from the GPS receiver 110 and sets it as the starting point of the route (step S12).
  • Next, the system control unit 250 controls the display control unit 220 to cause the display unit 200 to display a prompt asking the user to input a destination or waypoint, and waits for the operator's destination input (step S13). At this time, the system control unit 250 may instead control the audio processing circuit 230 so that a prompt to input a destination or waypoint is announced through the speaker 240.
  • Next, when the speech recognition circuit 170 detects that the operator's uttered voice has been input via the microphone 160 (step S14), the system control unit 250 causes the speech recognition circuit 170 to execute, using the database 180, a process that identifies the relevant keyword (hereinafter referred to as the voice recognition process) (step S15).
  • Specifically, the speech recognition circuit 170 extracts the feature quantity of the speech component in the input uttered speech, compares it sequentially with the feature quantity data of the keywords to be recognized stored in the database 180, and identifies a keyword having a predetermined accuracy as the point name.
  • Next, the system control unit 250 searches the map data storage unit 190 based on the identified point name, and detects point data having at least part of the point name in its name information (step S16).
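The search of step S16, which matches the recognized name against at least part of each stored point name, might be sketched as a simple substring search; the record fields and sample points below are illustrative assumptions, not data from the patent:

```python
# Hedged sketch of the name-information search: return every point-data
# record whose name contains the recognized keyword as a partial match.
# The field names (name, lat, lon) and sample records are assumptions.

points = [
    {"name": "Ueno Park",   "lat": 35.715, "lon": 139.774},
    {"name": "Ueno Zoo",    "lat": 35.716, "lon": 139.771},
    {"name": "Yoyogi Park", "lat": 35.672, "lon": 139.695},
]

def search_points(keyword, records):
    """Return records whose name contains the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [r for r in records if kw in r["name"].lower()]

print([r["name"] for r in search_points("park", points)])
# ['Ueno Park', 'Yoyogi Park']
```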
  • In step S17, the system control unit 250 displays on the display unit 200 the point name identified by the voice recognition process, prompts the operator to select one point name, waits for the operator's instruction input, and then proceeds to step S19 (step S17).
  • In step S18, when point data having name information that matches at least part of the point name specified by the voice recognition process is detected, the system control unit 250 displays the point names of the detected point data on the display unit 200 together with the point name specified by the voice recognition circuit 170, prompts the operator to select one point name, and waits for the operator's input (step S18).
  • Next, the system control unit 250 causes the display unit 200 to display an inquiry as to whether the specified point name is the one the operator wishes to set, that is, a display for confirming the suitability of the specified point name (step S19).
  • The system control unit 250 then judges the operator's instruction input via the operation unit 150 (step S20). If the system control unit 250 determines that the specified point name is not the point name desired by the operator, it proceeds to step S21; if it determines that the specified point name is the point name desired by the operator, it proceeds to step S24.
  • In step S21, the system control unit 250 displays image data for correcting the point name on the display unit 200, and performs a correction operation process (hereinafter referred to as the correction process) in which the point name is corrected through the display unit 200, the display control unit 220, and the operation unit 150 working in conjunction (step S21).
  • Specifically, the display control unit 220 generates image data for changing any character of the specified point name to another character, for adding characters to the specified point name, and for deleting characters from the specified point name, in conjunction with the operator's operation of the operation unit 150, and displays it on the display unit 200 as appropriate.
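The three editing operations described above (changing a character, adding characters, and deleting characters of the specified point name) can be sketched as plain string edits; the example name and indices are assumptions for illustration only:

```python
# Illustrative sketch of the three correction operations applied to a
# specified point name. Purely an assumed model of the described UI edits.

def change_char(name, i, c):
    """Replace the character at index i with c."""
    return name[:i] + c + name[i + 1:]

def add_chars(name, i, s):
    """Insert the string s before index i."""
    return name[:i] + s + name[i:]

def delete_char(name, i):
    """Delete the character at index i."""
    return name[:i] + name[i + 1:]

name = "Ueno Prak"                 # mis-recognized point name (example)
name = change_char(name, 6, "a")   # "Ueno Paak"
name = change_char(name, 7, "r")   # "Ueno Park"
print(name)                        # Ueno Park
print(add_chars("Ueno Prk", 6, "a"))    # Ueno Park
print(delete_char("Ueno Parkk", 9))     # Ueno Park
```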
  • When the system control unit 250 detects via the operation unit 150 that the operator has finished correcting the point name (step S22), it searches the map data storage unit 190 based on the corrected point name, and detects and displays point data having at least part of the corrected point name in its name information (step S23).
  • Next, the system control unit 250 displays the detected point name or the specified point name on the display unit 200 and prompts the operator to select one point name (step S24).
  • When the system control unit 250 detects that a point name has been selected (step S25), it sets the selected point data as the destination and displays the point indicated by the point data on the display unit 200 together with the map data (step S26).
  • Next, the system control unit 250 displays a message prompting the operator to indicate whether there is another destination or waypoint to be set, and determines, based on the operator's input, whether there is another destination or waypoint to be set (step S27).
  • If the system control unit 250 determines, based on the input from the operation unit 150, that there is another destination to be set, it returns to step S13; if it determines that there is none, it proceeds to step S28.
  • Finally, the system control unit 250 sets the route on which the vehicle should travel based on the set departure point and destination, starts route guidance based on the set route, and ends this operation (step S28).
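The overall flow of steps S11 through S28 can be condensed into a hypothetical sketch in which the recognizer, search, and operator-interaction steps are stand-in callables, so that only the control flow of Figs. 2 and 3 is modeled; every hook and sample value here is an assumption:

```python
# Condensed, assumed sketch of the route-setting flow (steps S11-S28).
# All hooks are stand-ins so the control flow itself can be exercised.

def set_route(current_position, recognize, search, confirm, correct,
              select, more_destinations):
    """Return (origin, destinations) following the described flow."""
    origin = current_position              # S12: current location as origin
    destinations = []
    while True:
        name = recognize()                 # S14-S15: recognize spoken name
        hits = search(name)                # S16: search point data by name
        if not confirm(name, hits):        # S19-S20: operator approves?
            name = correct(name)           # S21-S22: operator corrects name
            hits = search(name)            # S23: re-search corrected name
        destinations.append(select(hits))  # S24-S26: pick one point
        if not more_destinations():        # S27: further destinations?
            break
    return origin, destinations            # S28: route set from these

# Minimal dry run with canned hooks:
origin, dests = set_route(
    "35.68,139.76",
    recognize=lambda: "Ueno Prak",                          # mis-recognized
    search=lambda n: [] if "Prak" in n else ["Ueno Park"],
    confirm=lambda n, hits: bool(hits),
    correct=lambda n: "Ueno Park",
    select=lambda hits: hits[0],
    more_destinations=lambda: False)
print(dests)  # ['Ueno Park']
```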
  • As described above, the navigation device 100 of the present embodiment includes: the speech recognition circuit 170, which acquires the speech component of the uttered voice, analyzes it, extracts the uttered-voice feature quantity that is the feature quantity of the speech component, and specifies at least one keyword based on the uttered-voice feature quantity and the keyword feature quantity data; the database 180, in which a plurality of keyword feature quantity data indicating feature quantities of keyword speech are stored in advance; the map data storage unit 190, in which predetermined point data is stored in advance in association with point name information indicating the point names of the point data; the system control unit 250, which searches the map data storage unit 190 for point data having the specified keyword in at least part of the point name indicated by the point name information; and the display unit 200, which presents the detected point data.
  • With this configuration, the navigation device 100 searches, based on the identified keyword, for point data stored in the map data storage unit 190 that has the keyword as at least part of its point name, and when such point data is detected, presents the detected point data.
  • Therefore, the navigation device 100 can search for point data using the point names included in the map data, with the specified point name as the search key. There is thus no need to increase the number of keywords to be recognized through complicated operations such as registering in advance the names of points the operator wishes to have recognized by the voice recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword, that is, the point name, desired by the operator, and can eliminate complexity of operation.
  • The navigation device 100 of the present embodiment further includes the operation unit 150, used for correcting the specified keyword, and the system control unit 250 searches for point data having the corrected keyword in at least part of the point name information.
  • the navigation device 100 of the present embodiment searches for point data having the corrected keyword in at least a part of the point name indicated by the point name information.
  • Accordingly, even when the result of speech recognition differs from the point name desired by the operator, the navigation device 100 of the present embodiment can correct the speech-recognized keyword. This eliminates the complexity of entering entire names and makes it possible to search for point data through voice recognition and operator correction without registering in advance the point names to be recognized as keywords. As a result, the recognition rate of the point name desired by the operator can be improved, and complexity of operation can be eliminated.
  • the database 180 stores the point data related to the keyword in association with the keyword
  • the display unit 200 is configured to present the point data detected by the search means and the point data associated with the identified keyword.
  • the navigation device 100 of the present embodiment presents the detected point data and the point data associated with the identified keyword.
  • because the navigation device 100 of the present embodiment can search for point data using the point names included in the map data, with the specified point name as the search key, there is no need to increase the amount of data used when recognizing the uttered speech, nor to increase the number of keywords to be recognized through complicated operations such as pre-registering the names of points the operator wishes to have recognized by the voice recognition process. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the keyword desired by the operator, that is, the point name, and can eliminate the complexity of the operation.
  • the navigation device 100 of the present embodiment includes a selection unit used to select a single piece of point data when a plurality of pieces of point data are detected by the system control unit 250.
  • the display unit 200 is configured to present one selected point data.
  • because the navigation device 100 of the present embodiment presents the selected point data, the point name desired by the user can be specified even when a plurality of keywords are recognized or a plurality of point data are found by the search.
  • the navigation apparatus 100 acquires the speech components of the uttered speech, analyzes the acquired speech components, and extracts the utterance feature quantity, which is the feature quantity of those speech components.
  • a map data storage unit 190 in which predetermined point data is stored in advance in association with point name information indicating the point name of that point data; a display unit 200 for notifying the operator of the identified keyword; an operation unit 150 used for correcting the presented keyword when the notified keyword does not match the keyword desired by the operator; and a system control unit 250 that searches for point data having the corrected keyword as at least a part of its point name information. The display unit 200 is configured to present the detected point data.
  • when the notified keyword does not match the keyword desired by the operator and the presented keyword is corrected, the navigation device 100 of the present embodiment detects point data that has the corrected keyword in at least a part of the point name indicated by the point name information, and the detected point data is presented.
  • because the navigation device 100 searches for point data having the corrected keyword as at least a part of the point name indicated by the point name information, there is no need to increase the amount of data used when recognizing the uttered speech, nor to increase the number of keywords to be recognized through troublesome operations such as registering in advance the names of points the operator wishes to have recognized by the voice recognition process.
  • even when the result of voice recognition differs from the point name desired by the operator, the navigation device 100 of the present embodiment can correct the recognized keyword. This eliminates the burden of entering the entire name and allows point data to be searched through voice recognition and operator correction, without registering the point names in advance as keywords to be recognized. As a result, the navigation device 100 of the present embodiment can improve the recognition rate of the point name desired by the operator, and can eliminate the complexity of the operation.
  • although point data search processing is used in the embodiment described above, the invention is not limited to this; point data may also be searched in route guidance and other processing.
  • the point data search described above may also be performed by a computer equipped with a microphone 160 for inputting the uttered speech to be recognized, together with a storage medium storing a data presentation program; the computer reads the program and performs the same point data search processing as described above.
  • when the keyword is specified, or when point data is detected based on the specified point name, the result is presented to the operator on the display unit 200; of course, it may instead be presented to the operator by voice via the speaker 230.
  • the keyword feature quantity data stored in the database and the speech components of the input uttered speech are compared sequentially to determine the point name.
  • although the HMM method is used, the invention is not limited to this; any method may be used as long as the voice recognition process is performed using the keyword feature quantity data stored in the database.
  • although the data search process is applied here to the point data search process in the navigation device 100, it can also be applied when performing a name search over arbitrary data in a personal computer or other apparatus.
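The search-and-correction flow described above (match the recognized keyword as a substring of stored point names, let the operator correct it, then present the hits) can be sketched minimally as follows. All names here (`PointData`, `search_points`, `search_with_correction`, and the sample data) are hypothetical illustrations, not from the patent, which does not specify an implementation.

```python
# Hypothetical sketch of the point-data search described above;
# the patent specifies no data structures or function names.

from dataclasses import dataclass

@dataclass
class PointData:
    name: str          # point name information
    latitude: float
    longitude: float

def search_points(points, keyword):
    """Return every point whose name contains the keyword as a substring,
    mirroring 'having the keyword as at least a part of the point name'."""
    return [p for p in points if keyword in p.name]

def search_with_correction(points, recognized_keyword, corrected_keyword=None):
    """If the operator corrected the recognized keyword via the operation
    unit, search with the corrected keyword; otherwise use the result of
    voice recognition as-is."""
    keyword = corrected_keyword or recognized_keyword
    return search_points(points, keyword)

# Toy stand-in for point data held in the map data storage unit.
points = [
    PointData("Tokyo Station", 35.68, 139.77),
    PointData("Tokyo Tower", 35.66, 139.75),
    PointData("Kyoto Station", 35.00, 135.76),
]

# Recognition yielded "Tokyo": every point name containing it is a hit,
# and the operator would then select a single entry from the candidates.
hits = search_with_correction(points, "Tokyo")
assert [p.name for p in hits] == ["Tokyo Station", "Tokyo Tower"]

# Recognition misheard "Kyato"; the operator corrects it to "Kyoto".
hits = search_with_correction(points, "Kyato", corrected_keyword="Kyoto")
assert [p.name for p in hits] == ["Kyoto Station"]
```

The substring match is what lets an unregistered full name ("Tokyo Station") be found from a short recognized keyword ("Tokyo") without enlarging the recognition vocabulary.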

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Navigation (AREA)

Abstract

The invention concerns a data presentation device that can eliminate complicated operations and improve the voice recognition rate without increasing the number of keywords. A navigation device (100) comprises: a database (180) containing a plurality of keyword feature quantity data indicating the feature quantity related to the pronunciation of each keyword to be recognized; and a map data storage unit (190) containing predetermined point data associated with point name information indicating the point name of that point data. Based on the keyword identified by voice recognition, a search is made for point data stored in the map data storage unit (190) that has the keyword as at least a part of its point name. When, as a result of the search, point data having the specified keyword as at least a part of its point name is detected, the detected point data is presented.
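The keyword identification step in the abstract compares uttered-speech feature quantities against keyword feature quantity data stored in the database; the patent uses the HMM method for this. As a much simpler stand-in for that matching step, the sketch below picks the keyword whose stored feature vector is nearest to the extracted utterance features. All names and the toy feature vectors are illustrative assumptions, not from the patent.

```python
# Hypothetical stand-in for the HMM-based matching step: nearest-template
# comparison of feature vectors. Real systems would use spectral features
# (e.g. MFCCs) and a proper HMM decoder, not plain Euclidean distance.

import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_keyword(utterance_features, keyword_db):
    """Return the keyword whose stored feature quantity data is closest
    to the feature quantity extracted from the uttered speech."""
    return min(keyword_db, key=lambda kw: euclidean(utterance_features, keyword_db[kw]))

# Toy per-keyword feature quantity data (stand-ins for real speech features).
keyword_db = {
    "Tokyo": [0.9, 0.1, 0.4],
    "Kyoto": [0.2, 0.8, 0.5],
}

assert identify_keyword([0.85, 0.15, 0.38], keyword_db) == "Tokyo"
```

The identified keyword would then be handed to the substring search over point names, as the abstract describes.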
PCT/JP2005/016515 2004-09-09 2005-09-08 Dispositif de présentation de données, méthode de présentation de données, programme de présentation de données et support d’enregistrement contenant le programme WO2006028171A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006535815A JPWO2006028171A1 (ja) 2004-09-09 2005-09-08 データ提示装置、データ提示方法、データ提示プログラムおよびそのプログラムを記録した記録媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004261819 2004-09-09
JP2004-261819 2004-09-09

Publications (1)

Publication Number Publication Date
WO2006028171A1 true WO2006028171A1 (fr) 2006-03-16

Family

ID=36036452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/016515 WO2006028171A1 (fr) 2004-09-09 2005-09-08 Dispositif de présentation de données, méthode de présentation de données, programme de présentation de données et support d’enregistrement contenant le programme

Country Status (2)

Country Link
JP (1) JPWO2006028171A1 (fr)
WO (1) WO2006028171A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008164975A (ja) * 2006-12-28 2008-07-17 Nissan Motor Co Ltd 音声認識装置、および音声認識方法
WO2009022446A1 (fr) * 2007-08-10 2009-02-19 Mitsubishi Electric Corporation Dispositif de navigation
WO2010013369A1 (fr) * 2008-07-30 2010-02-04 三菱電機株式会社 Dispositif de reconnaissance vocale

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0981184A (ja) * 1995-09-12 1997-03-28 Toshiba Corp 対話支援装置
JPH11161464A (ja) * 1997-11-25 1999-06-18 Nec Corp 日本語文章作成装置
JPH11183190A (ja) * 1997-12-24 1999-07-09 Toyota Motor Corp ナビゲーション用音声認識装置および音声認識機能付きナビゲーション装置
JP2000278369A (ja) * 1999-03-29 2000-10-06 Sony Corp 通信装置、データ取得装置及びデータの取得方法
JP2003167600A (ja) * 2001-12-04 2003-06-13 Canon Inc 音声認識装置及び方法、ページ記述言語表示装置及びその制御方法、並びにコンピュータ・プログラム
JP2003295891A (ja) * 2002-02-04 2003-10-15 Matsushita Electric Ind Co Ltd インタフェース装置、動作制御方法、画面表示方法
JP2004133796A (ja) * 2002-10-11 2004-04-30 Mitsubishi Electric Corp 情報検索装置および情報検索方法


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008164975A (ja) * 2006-12-28 2008-07-17 Nissan Motor Co Ltd 音声認識装置、および音声認識方法
WO2009022446A1 (fr) * 2007-08-10 2009-02-19 Mitsubishi Electric Corporation Dispositif de navigation
WO2010013369A1 (fr) * 2008-07-30 2010-02-04 三菱電機株式会社 Dispositif de reconnaissance vocale
CN102105929A (zh) * 2008-07-30 2011-06-22 三菱电机株式会社 声音识别装置
JPWO2010013369A1 (ja) * 2008-07-30 2012-01-05 三菱電機株式会社 音声認識装置
US8818816B2 (en) 2008-07-30 2014-08-26 Mitsubishi Electric Corporation Voice recognition device

Also Published As

Publication number Publication date
JPWO2006028171A1 (ja) 2008-07-31

Similar Documents

Publication Publication Date Title
US6067521A (en) Interrupt correction of speech recognition for a navigation device
US6064323A (en) Navigation apparatus, navigation method and automotive vehicles
US7310602B2 (en) Navigation apparatus
JP2001296882A (ja) ナビゲーションシステム
US20060253251A1 (en) Method for street name destination address entry using voice
EP1273887B1 (fr) Système de navigation
JP4642953B2 (ja) 音声検索装置、および、音声認識ナビゲーション装置
JP2000338993A (ja) 音声認識装置、その装置を用いたナビゲーションシステム
WO2006028171A1 (fr) Dispositif de présentation de données, méthode de présentation de données, programme de présentation de données et support d’enregistrement contenant le programme
JP5455355B2 (ja) 音声認識装置及びプログラム
US6963801B2 (en) Vehicle navigation system having position correcting function and position correcting method
JP2005275228A (ja) ナビゲーション装置
JP3818352B2 (ja) ナビゲーション装置及び記憶媒体
JP4274913B2 (ja) 目的地検索装置
JP4705398B2 (ja) 音声案内装置、音声案内装置の制御方法及び制御プログラム
JP3579971B2 (ja) 車載用地図表示装置
US20150192425A1 (en) Facility search apparatus and facility search method
JPH09114487A (ja) 音声認識装置,音声認識方法,ナビゲーション装置,ナビゲート方法及び自動車
WO2019124142A1 (fr) Dispositif de navigation, procédé de navigation et programme informatique
JP2008298522A (ja) ナビゲーション装置、ナビゲーション装置の検索方法及び検索プログラム
JP2007025076A (ja) 車載用音声認識装置
JP4645708B2 (ja) コード認識装置および経路探索装置
JP2005234991A (ja) 情報検索装置、情報検索方法および情報検索プログラム
JP2006090867A (ja) ナビゲーション装置
JP4952379B2 (ja) ナビゲーション装置、ナビゲーション装置の検索方法及び検索プログラム

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006535815

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05782288

Country of ref document: EP

Kind code of ref document: A1