HK1136865A - A navigation device, a method of and a computer program for operating the navigation device comprising an audible recognition mode - Google Patents

A navigation device, a method of and a computer program for operating the navigation device comprising an audible recognition mode


Publication number
HK1136865A
HK1136865A (application HK10103149.2A)
Authority
HK
Hong Kong
Prior art keywords
navigation device
information
input
output
processor
Application number
HK10103149.2A
Other languages
Chinese (zh)
Inventor
彼得‧格尔林
马雷杰‧罗森
Original Assignee
通腾科技股份有限公司
Application filed by 通腾科技股份有限公司
Publication of HK1136865A


Abstract

A method of operating a navigation device includes receiving an indication of enablement of an audible recognition mode in the navigation device; determining at least one choice relating to address information of a travel destination based upon a received audible input; audibly outputting the at least one determined choice; and acknowledging selection of the audibly output choice upon receiving an affirmative audible input. The navigation device includes a processor, memory, a microphone, user input means and a visual display to receive an indication of enablement of an audible recognition mode and to determine at least one choice relating to address information of a travel destination based upon a received audible input; and an output device to audibly output the at least one determined choice, the processor being further useable to acknowledge selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

Description

Navigation device, method of operating the navigation device including an audio recognition mode, and computer program
Technical Field
The present application relates generally to navigation methods and devices.
Background
Navigation devices have traditionally been used primarily in the field of vehicular travel, such as in automobiles, motorcycles, trucks, boats, and the like. If the navigation device is portable, it may further be transferred between vehicles and/or used outside a vehicle, for example for hiking.
These devices are typically tailored to generate a route of travel based on an initial position of the navigation device, which may be entered into the device, but is traditionally calculated via GPS positioning from a GPS receiver within the navigation device, and a selected/input travel destination (end position). To assist in the navigation of the route, instructions may be output to a user of the navigation device along the route. The instructions may be at least one of audio instructions and visual instructions.
Disclosure of Invention
The inventors have found that a user of a navigation device may have difficulty operating and viewing a touch panel screen, and thus that users wish for hands-free access, at least to a limited extent, especially when using navigation devices in a vehicle. As such, the present inventors have developed methods that allow hands-free, or at least partially hands-free, access by utilizing an audio recognition mode.
In at least one embodiment of the present application, a method comprises: receiving an indication of enablement of an audio recognition mode in a navigation device; upon receiving an indication that the audio recognition mode is enabled and upon receiving an audio input, determining at least one option for address information regarding a travel destination based on the received audio input; audibly outputting the at least one determined option of address information regarding the travel destination; and upon receiving a positive audio input, acknowledging selection of the at least one audibly output option.
In at least one embodiment of the present application, a navigation device includes: a processor to receive an indication that an audio recognition mode in a navigation device is enabled, and to, upon receiving an audio input, determine at least one option for address information regarding a travel destination based on the received audio input; and an output device to audibly output at least one determined option for address information regarding a travel destination, the processor being further operable to confirm selection of the at least one determined option audibly output upon receipt of a positive audible input.
In at least one other embodiment of the present application, a method comprises: receiving an indication of enabling an audio recognition mode in a navigation device; and upon receiving an indication that the audio recognition mode is enabled, displaying on the integrated input and display device an indication as to whether a volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range.
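The volume indication described in this embodiment can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the RMS level computation and the numeric thresholds for the acceptable range are assumptions introduced for the example.

```python
import math

def rms_level(samples):
    """Root-mean-square level of a block of audio samples (assumed metric)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def volume_indication(samples, low=0.1, high=0.7):
    """Classify the received audio input's volume relative to an
    acceptable range, as the integrated input and display device
    might indicate to the user. Thresholds are hypothetical."""
    level = rms_level(samples)
    if level < low:
        return "below acceptable range"   # display could prompt: speak louder
    if level > high:
        return "above acceptable range"   # display could prompt: speak softer
    return "within acceptable range"

print(volume_indication([0.3, -0.4, 0.35, -0.3]))  # → within acceptable range
```

The display logic would then map each of the three return values to a distinct on-screen indication.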
In at least one other embodiment of the present application, a navigation device includes: a processor to receive an indication of enabling an audio recognition mode in a navigation device; and an integrated input and display device to display, upon the processor receiving an indication of enablement of the audio recognition mode, an indication of whether a volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range.
In at least one other embodiment of the present application, a method comprises: receiving an indication of enabling an audio recognition mode in a navigation device; receiving additional information from a source different from a user of the navigation device; formulating a question that can be answered by a yes or no answer from the user based on the received additional information; and outputting the formulated question to a user.
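Formulating a yes/no question from information received from a source other than the user might be sketched as below. The information types, message format, and question wording are all hypothetical; the patent does not specify them.

```python
def formulate_question(info):
    """Turn additional information (e.g. from a traffic feed) into a
    question the user can answer with yes or no. The 'type' values and
    fields here are assumed for illustration."""
    if info["type"] == "traffic_delay":
        return (f"A {info['delay_minutes']}-minute delay is reported on "
                f"{info['road']}. Would you like an alternative route?")
    if info["type"] == "low_fuel":
        return "Fuel is low. Would you like to route to the nearest petrol station?"
    return None  # no yes/no question can be formulated from this information

q = formulate_question({"type": "traffic_delay", "delay_minutes": 15, "road": "A10"})
print(q)
```

The formulated question would then be passed to the output device 241 for audible output, keeping the user's required response within the simple yes/no grammar.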
In at least one other embodiment of the present application, a navigation device includes: a processor to receive an indication that an audio recognition mode is enabled, to receive additional information from a source different from a user of the navigation device, and to formulate a question that can be answered by a yes or no answer from the user based on the received additional information; and an output device for outputting the formulated question to a user.
Drawings
The application will be described in more detail below by using exemplary embodiments, which will be explained with the aid of the drawings, in which:
FIG. 1 illustrates an exemplary view of a Global Positioning System (GPS);
FIG. 2 illustrates an example block diagram of electronic components of a navigation device of an embodiment of the present application;
FIG. 3 illustrates an example block diagram of a server, navigation device, and connections therebetween of an embodiment of the present application;
FIGS. 4A and 4B illustrate perspective views of an implementation of an embodiment of a navigation device;
FIG. 5 illustrates a flow chart of an embodiment of a method of the present application;
FIGS. 6A-6D illustrate examples of audio recognition mode icons for display in embodiments of the present application;
FIG. 7 illustrates an example diagram of an embodiment of the present application;
FIG. 8 illustrates a flow chart of an embodiment of a method of the present application; and
FIG. 9 illustrates a flow chart of an embodiment of a method of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In describing the exemplary embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.
Example embodiments of the present patent application are described below with reference to the drawings, wherein like reference numerals represent the same or corresponding parts throughout the several views. Like numbers refer to like elements throughout. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 illustrates an example view of a Global Positioning System (GPS) usable by a navigation device, including a navigation device of an embodiment of the present application. Such systems are known and used for a variety of purposes. In general, GPS is a satellite radio-based navigation system that is capable of determining continuous position, velocity, time, and (in some examples) direction information for an unlimited number of users.
The GPS, previously known as NAVSTAR, incorporates a plurality of satellites that orbit the earth in extremely precise orbits. Based on these precise orbits, GPS satellites can relay their position to any number of receiving units.
The GPS system is implemented when a device specially equipped to receive GPS data begins scanning radio frequencies for GPS satellite signals. Upon receiving a radio signal from a GPS satellite, the device determines the precise location of that satellite via one of a number of different conventional methods. In most cases, the device will continue to scan for signals until it has acquired at least three different satellite signals (noting that position is not typically, though it can be, determined with only two signals using other triangulation techniques). By implementing geometric triangulation, the receiver utilizes the three known positions to determine its own two-dimensional position relative to the satellites. This can be done in a known manner. In addition, acquiring a fourth satellite signal allows the receiving device to calculate its three-dimensional position by the same geometric calculation, in a known manner. The position and velocity data may be continuously updated in real time by an unlimited number of users.
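The two-dimensional geometric triangulation referred to above can be illustrated with a short worked sketch. This is not part of the patent: it simply recovers a receiver position from three known transmitter positions and the measured distances to each, by subtracting the circle equations pairwise to obtain a linear system.

```python
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three known positions p1..p3 and the
    distances r1..r3 from the unknown point to each of them."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear
    # equations in (x, y): a1*x + b1*y = c1 and a2*x + b2*y = c2.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three positions are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Receiver at (1, 2): distances from (0,0), (4,0), (0,4) are √5, √13, √5.
print(trilaterate_2d((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 4), 5**0.5))  # → (1.0, 2.0)
```

A fourth measurement, as the passage notes, would add a third linear equation and allow the altitude coordinate to be solved for in the same way.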
As shown in fig. 1, the GPS system is generally indicated by the reference numeral 100. A plurality of satellites 120 are in orbit about the earth 124. The orbit of each satellite 120 is not necessarily synchronized with the orbits of the other satellites 120 and is in fact likely to be out of synchronization. The GPS receiver 140, which may be used in embodiments of the navigation device of the present application, is shown receiving spread spectrum GPS satellite signals 160 from various satellites 120.
The spread spectrum signals 160 continuously transmitted from each satellite 120 utilize a highly accurate frequency standard implemented using an extremely accurate atomic clock. Each satellite 120 transmits a data stream indicative of that particular satellite 120 as part of its data signal transmission 160. As is understood by those skilled in the relevant art, the GPS receiver device 140 typically obtains spread spectrum GPS satellite signals 160 from at least three satellites 120 for the GPS receiver device 140 to calculate its two-dimensional position by triangulation. The acquisition of additional signals, which results in signals 160 from a total of four satellites 120, permits the GPS receiver device 140 to calculate its three-dimensional position in a known manner.
Fig. 2 illustrates an example block diagram of electronic components of a navigation device 200 of an embodiment of the present application in block component format. It should be noted that the block diagram of the navigation device 200 does not include all of the components of the navigation device, but is merely representative of many example components.
The navigation device 200 is located within a housing (not shown). The housing includes a processor 210 connected to an input device 220 and a display screen 240. Input device 220 may include a keyboard device, a voice input device, a touch panel, and/or any other known input device for inputting information; and display screen 240 may comprise any type of display screen, such as an LCD display. In at least one embodiment of the present application, the input device 220 and the display screen 240 are integrated into an integrated input and display device that includes a touchpad or touchscreen input, wherein a user need only touch a portion of the display screen 240 to select one of a plurality of display options or to activate one of a plurality of virtual buttons.
In addition, the other output device 241 may include, but is not limited to, an audible output device. Because the output device 241 can produce audible information for the user of the navigation device 200, it should equally be understood that the input device 220 can include a microphone, as well as software for receiving input voice commands.
In the navigation device 200, the processor 210 is operatively connected to the input device 220 via connection 225 and is arranged to receive input information from the input device 220 via connection 225, and is operatively connected to at least one of the display screen 240 and the output device 241 via output connection 245 to output information thereto. In addition, the processor 210 is operatively connected to the memory 230 via connection 235, and is further adapted to receive/send information from/to an input/output (I/O) port 270 via connection 275, wherein the I/O port 270 is connectable to an I/O device 280 external to the navigation device 200. The external I/O device 280 may include, but is not limited to, an external listening device, such as a headset. The connection to the I/O device 280 may further be a wired or wireless connection to any other external device, such as a car stereo unit, for hands-free operation and/or voice-activated operation, for example, for connection to an earphone or headset, and/or for connection to a mobile phone, wherein the mobile phone connection may be used, for example, to establish a data connection between the navigation device 200 and the internet or any other network, and/or to establish a connection to a server via the internet or some other network.
In at least one embodiment, the navigation device 200 can establish a "mobile" network connection with the server 302 via a mobile device 400, such as a mobile phone, PDA, and/or any device having mobile phone technology, establishing a digital connection, such as a digital connection via known bluetooth technology, for example. Thereafter, through its network service provider, the mobile device 400 can establish a network connection (e.g., through the Internet) with the server 302. As such, a "mobile" network connection is established between the navigation device 200 (which may be and typically is mobile when traveling alone and/or in a vehicle) and the server 302 in order to provide a "real-time" or at least very "up-to-date" gateway for information.
Establishing a network connection between the mobile device 400 (via a service provider) and another device, such as the server 302, using, for example, the internet 410, may be accomplished in a known manner. This may include, for example, the use of a TCP/IP layered protocol. The mobile device 400 may utilize any number of communication standards such as CDMA, GSM, WAN, etc.
As such, an internet connection enabled via a data connection (e.g., via mobile phone or mobile phone technology within the navigation device 200) may be utilized. For this connection, an internet connection between the server 302 and the navigation device 200 is established. This may be done, for example, by a mobile phone or other mobile device and a GPRS (general packet radio service) connection (a GPRS connection is a high speed data connection for mobile devices provided by a telecommunications carrier; GPRS is a method to connect to the internet).
The navigation device 200 can further complete a data connection with the mobile device 400 and ultimately with the internet 410 and server 302 in a known manner, such as via existing bluetooth technology, wherein the data protocol can utilize any number of standards, such as GSRM, a data protocol standard for the GSM standard.
The navigation device 200 may include its own mobile phone technology within the navigation device 200 itself (e.g. including an antenna, wherein the internal antenna of the navigation device 200 may further be used instead). The mobile phone technology within the navigation device 200 may include internal components as specified above, and/or may include a pluggable card, along with, for example, the necessary mobile phone technology and/or antenna. As such, mobile phone technology within the navigation device 200 can similarly establish a network connection between the navigation device 200 and the server 302 via, for example, the internet 410 in a manner similar to that of any mobile device 400.
With respect to GPRS phone settings, so that Bluetooth-enabled devices work correctly with the ever-changing spectrum of mobile phone models, manufacturers, etc., model/manufacturer-specific settings may be stored on the navigation device 200, for example. The data stored for this information may be updated in the manner discussed in any of the previous or subsequent embodiments.
Fig. 2 further illustrates an operative connection between the processor 210 and the antenna/receiver 250 via connection 255, wherein the antenna/receiver 250 may be, for example, a GPS antenna/receiver. It will be appreciated that the antenna and receiver represented by reference numeral 250 are schematically combined for illustration, but may be separately located components, and the antenna may be, for example, a GPS patch antenna or a helical antenna.
In addition, those skilled in the art will appreciate that the electronic components shown in FIG. 2 are powered by a power supply (not shown) in a conventional manner. As will be appreciated by those skilled in the art, different configurations of the components shown in fig. 2 are considered to be within the scope of the present application. For example, in one embodiment, the components shown in FIG. 2 may communicate with each other via wired and/or wireless connections and the like. Thus, the scope of the navigation device 200 of the present application includes portable or handheld navigation devices 200.
Furthermore, the portable or handheld navigation device 200 of fig. 2 may be connected or "docked" in a known manner to a motor vehicle, such as a car or boat. This navigation device 200 can then be removed from the docked position for portable or handheld navigation use.
Figure 3 illustrates an example block diagram of a server 302 of an embodiment of the present application and a navigation device 200 of the present application (via a general communication channel 318). The server 302 and the navigation device 200 of the present application can communicate when a connection via the communication channel 318 is established between the server 302 and the navigation device 200 (note that such a connection can be a data connection via a mobile device, a direct connection via a personal computer via the internet, etc.).
The server 302 includes, among other components that may not be illustrated, a processor 304, the processor 304 operatively connected to a memory 306 and further operatively connected to a mass data storage 312 via a wired or wireless connection 314. The processor 304 is further operatively connected to the transmitter 308 and the receiver 310 to transmit information to, and receive information from, the navigation device 200 via the communication channel 318. The transmitted and received signals may include data, communications, and/or other propagated signals. The transmitter 308 and receiver 310 may be selected or designed according to the communication requirements and communication technology used in the communication design for the navigation system 200. Additionally, it should be noted that the functions of the transmitter 308 and receiver 310 may be combined into a single transceiver.
The server 302 is further connected to (or includes) a mass storage device 312, noting that the mass storage device 312 can be coupled to the server 302 via a communication link 314. The mass storage device 312 contains a large amount of navigation data and map information, and may likewise be a separate device from the server 302, or may be incorporated into the server 302.
The navigation device 200 is adapted to communicate with the server 302 through the communication channel 318, and includes a processor, memory, etc. as previously described with respect to fig. 2, as well as a transmitter 320 and receiver 322 to send and receive signals and/or data through the communication channel 318, noting that these devices can further be used to communicate with devices other than the server 302. In addition, the transmitter 320 and receiver 322 are selected or designed according to the communication requirements and communication technology used in the communication design for the navigation device 200, and the functions of the transmitter 320 and receiver 322 can be combined into a single transceiver.
Software stored in the server memory 306 provides instructions to the processor 304 and allows the server 302 to provide services to the navigation device 200. One service provided by the server 302 involves processing requests from the navigation device 200 and transmitting navigation data from the mass data storage 312 to the navigation device 200. According to at least one embodiment of the present application, another service provided by the server 302 includes processing navigation data using various algorithms for a desired application and sending the results of these calculations to the navigation device 200.
The communication channel 318 generally represents the propagation medium or path connecting the navigation device 200 with the server 302. According to at least one embodiment of the present application, both the server 302 and the navigation device 200 comprise a transmitter for transmitting data over the communication channel and a receiver for receiving data that has been transmitted over the communication channel.
The communication channel 318 is not limited to a particular communication technology. Additionally, the communication channel 318 is not limited to a single communication technology; that is, the channel 318 may include several communication links using a variety of technologies. For example, in accordance with at least one embodiment, the communication channel 318 may be adapted to provide a path for electrical, optical, and/or electromagnetic communication, among others. As such, the communication channel 318 includes (but is not limited to) one or a combination of: electrical circuits, electrical conductors such as wires and coaxial cables, fiber optic cables, transducers, radio frequency (RF) waves, the atmosphere, vacuum, and the like. Further, according to at least one embodiment, the communication channel 318 may include intermediate devices, such as routers, repeaters, buffers, transmitters, and receivers.
For example, in at least one embodiment of the present application, the communication channel 318 includes telephone and computer networks. Further, in at least one embodiment, the communication channel 318 may be capable of accommodating wireless communications such as radio frequency, microwave frequency, infrared communications, and the like. In addition, according to at least one embodiment, the communication channel 318 may accommodate satellite communications.
The communication signals transmitted over the communication channel 318 include, but are not limited to, signals as may be required or desired for a given communication technology. For example, the signals may be adapted for use in cellular communication techniques such as Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), global system for mobile communications (GSM), and so forth. Both digital and analog signals may be transmitted over the communication channel 318. According to at least one embodiment, these signals may be modulated, encrypted, and/or compressed signals as may be required by the communication technology.
The mass data storage 312 includes sufficient storage for the desired navigation application. Examples of mass data storage 312 may include magnetic data storage media (e.g., hard drives), optical storage media (e.g., CD-ROMs), charged data storage media (e.g., flash memory), molecular memory, and so forth.
According to at least one embodiment of the present application, the server 302 comprises a remote server accessible by the navigation device 200 via a wireless channel. According to at least one other embodiment of the present application, the server 302 may comprise a network server located on a Local Area Network (LAN), Wide Area Network (WAN), Virtual Private Network (VPN), or the like.
According to at least one embodiment of the present application, the server 302 may comprise a personal computer such as a desktop or laptop computer, and the communication channel 318 may be a cable connected between the personal computer and the navigation device 200. Alternatively, a personal computer may be connected between the navigation device 200 and the server 302 to establish an internet connection between the server 302 and the navigation device 200. Alternatively, a mobile phone or other handheld device may establish a wireless connection to the internet for connecting the navigation device 200 to the server 302 via the internet.
The navigation device 200 may be provided with information from the server 302 via information downloads which may be periodically updated upon a user connecting the navigation device 200 to the server 302 and/or may be more dynamic upon a more constant or frequent connection being made between the server 302 and the navigation device 200 via, for example, a wireless mobile connection device and a TCP/IP connection. For many dynamic computations, the processor 304 in the server 302 may be used to handle the large amount of processing needs; however, the processor 210 of the navigation device 200 can also handle much processing and computation, oftentimes independent of a connection to the server 302.
The mass storage device 312 connected to the server 302 may include a greater amount of mapping and route data, including maps and the like, than can be maintained on the navigation device 200 itself. For example, the server 302 may use a set of processing algorithms to handle the bulk of the processing for the navigation device 200 as it travels along a route. In addition, such processing of the mapping and route data stored in the memory 312 may act upon signals (e.g., GPS signals) originally received by the navigation device 200.
As indicated above in fig. 2 of the present application, a navigation device 200 of an embodiment of the present application includes a processor 210, an input device 220, and a display screen 240. In at least one embodiment, the input device 220 and the display screen 240 are integrated into an integrated input and display device to enable both information input (via direct input, menu selection, etc.) and information display (such as through a touch panel screen). This screen may be, for example, a touch input LCD screen, as is well known to those skilled in the art. In addition, the navigation device 200 may also include any additional input devices 220 and/or any additional output devices 241, such as audio input/output devices.
Fig. 4A and 4B are perspective views of a practical implementation of an embodiment of a navigation device 200. As shown in fig. 4A, the navigation device 200 may be a unit that includes an integrated input and display device 290 (e.g., a touch panel screen) and the other components of fig. 2, including but not limited to the internal GPS receiver 250, the microprocessor 210, a power supply, the memory system 230, etc.
The navigation device 200 may rest on an arm 292, which arm 292 itself may be secured to a vehicle dashboard/window/or the like using a large suction cup 294. This arm 292 is one example of a docking station to which the navigation device 200 can dock.
As shown in fig. 4B, the navigation device 200 may be docked or otherwise connected to the arm 292 of the docking station by, for example, snapping the navigation device 200 onto the arm 292 (this is just one example, as other known alternatives for connecting to a docking station are within the scope of the present application). The navigation device 200 can then be rotated on the arm 292, as shown by the arrow of fig. 4B. To release the connection between the navigation device 200 and the docking station, a button on the navigation device 200 may be pressed, for example (this is just one example, as other known alternatives for disconnecting from the docking station are within the scope of the present application).
In at least one embodiment of the present application, a method comprises: receiving an indication that an audio recognition mode in the navigation device 200 is enabled; upon receiving an indication that the audio recognition mode is enabled and upon receiving an audio input, determining at least one option for address information regarding a travel destination based on the received audio input; outputting at least one determined option of address information on the travel destination in an audio manner; and upon receiving a positive audio input, confirming selection of the at least one determined option to be output audibly.
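The claimed flow — determine candidate address options from the audible input, audibly output each, and acknowledge a selection only upon an affirmative audible input — can be sketched minimally as below. This is a hypothetical illustration, not the patented implementation: the candidate lookup table, the prompt wording, and the callback interface are all assumptions.

```python
def run_destination_dialog(audible_input, candidates, replies, speak):
    """candidates: map from a recognized phrase to matching address choices.
    replies: iterator over the user's follow-up audible inputs ("yes"/"no").
    speak: callback that audibly outputs a prompt (stands in for output device 241)."""
    choices = candidates.get(audible_input, [])
    for choice, reply in zip(choices, replies):
        speak(f"Did you mean {choice}?")        # audibly output the determined option
        if reply == "yes":                      # positive audio input received
            speak(f"{choice} selected.")        # acknowledge the selection
            return choice
    return None  # no option was confirmed

spoken = []
result = run_destination_dialog(
    "rembrandt square",
    {"rembrandt square": ["Rembrandtplein, Amsterdam", "Rembrandt Park, Amsterdam"]},
    iter(["no", "yes"]),
    spoken.append,
)
print(result)  # → Rembrandt Park, Amsterdam
```

In the device itself, the confirmed choice would then serve as the travel destination for route calculation.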
In at least one embodiment of the present application, a navigation device 200 comprises: a processor 210 to receive an indication that an audio recognition mode in the navigation device 200 is enabled, and to determine, upon receiving an audio input, at least one option for address information relating to a travel destination based on the received audio input; and an output device 241 to audibly output at least one determined option for address information regarding a travel destination, the processor 210 may be further configured to confirm selection of the at least one audibly output option upon receipt of a positive audible input.
FIG. 5 illustrates a flow chart of an example embodiment of the present application. In the embodiment shown in fig. 5, it is first determined in step S2 whether an audio recognition mode has been enabled in the navigation device. For example, as shown in fig. 6A, an icon may be displayed on the integrated input and display device 290 of the navigation device 200. Such an icon may be displayed for selection in an initial menu or a subsequent menu prior to entry/selection of a destination for establishing a route of travel, and/or may be displayed with map information, for example, during use of the navigation device in a navigation mode. Such an icon may include only illustrations (e.g., lips as shown in fig. 6A), and/or may include text indicating that the button corresponds to an audio recognition mode, such as Audio Speech Recognition (ASR). After the processor 210 of the navigation device 200 receives an indication of selection of an icon as shown in figure 6A, the audio recognition mode may be enabled by the processor 210.
The audio recognition mode may include the processor 210 working in conjunction with an ASR engine or module. This ASR engine or module is a software engine that, once the audio recognition mode is enabled as explained above, can be loaded with grammar rules expressed in the language of the country of the user of the navigation device 200 (or a language selected by the user, for example). Thus, a user of the navigation device 200 will typically type/select the country in which the user is located, and the processor 210 may then select, enter, or match the language of that country. Thereafter, once the audio recognition mode is enabled, the ASR engine may be loaded with grammar rules from memory 230. The ASR engine may then recognize geographic names (e.g., city names and street names) using the language corresponding to the selected map, and recognize general speech using the language currently selected/enabled by the user. For example, the system may be set up to enable recognition of complex speech from the user, or may be limited to simple replies such as "yes," "no," "done," "back," and/or a numeric entry such as 1, 2, 3, etc.
The ASR engine or module is an engine or module that enables a voice interface between a user and the navigation device 200. Such a module is not typically available in a portable navigation device 200, such as the navigation device 200 shown in figures 2 to 4B of the present application, but embodiments of the present application improve or even optimize memory management (for example, between the processor 210 and the memory device 230), as well as data structures, to allow the ASR module to handle and recognize input information. Basically, during speech recognition, i.e. after the audio recognition mode is enabled in step S2 of fig. 5, all or most of the available memory in the memory device 230 of the navigation device 200 is allocated to the ASR module, while other processes of the processor 210 are halted. Of course, during use of the navigation device 200 in the navigation mode, certain processes dedicated to displaying navigation information and outputting navigation instructions must continue, which sometimes slows operation of the ASR module.
In one example embodiment of the present application, the ASR module is primarily used to select address information for a travel destination based on received audio input, and thus typically operates when the navigation device 200 is not being used in a navigation mode. When the navigation device 200 is operated in the navigation mode, another embodiment of the present application involves formulating a simple question that can be answered by a yes/no answer from the user (for example), thereby enabling processing power to be allocated to the navigation mode, since only a small amount of processing power is required in the ASR module to recognize such yes/no answers from the user of the navigation device 200. Thus, although the process shown in fig. 5 may operate during use of the navigation device 200 in the navigation mode (provided sufficient memory 230 is included in the navigation device 200 and/or the ASR module is used to recognize restricted input such as yes/no answers), the operation shown in fig. 5 typically occurs prior to the start of the vehicle in which the navigation device 200 is located, i.e., prior to entry of a travel destination into the navigation device 200 and prior to the route of travel being determined.
Referring back to FIG. 5, in step S2, if the audio recognition mode is not enabled, the system loops back to repeat step S2. However, if the audio recognition mode is enabled by the processor 210 receiving an indication, for example, of the selection of the "talk with me" icon shown in fig. 6A, then language and grammar information is loaded from the memory 230 into the ASR module of the navigation device 200, and in step S4, the navigation device 200 simply waits for audio input. If no audio input is received, the system simply loops back to repeat step S4 until an audio input is received.
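The two polling loops of steps S2 and S4 can be sketched as follows. This is an illustrative sketch only: the `SimulatedDevice` class and all function names are hypothetical stand-ins, not the actual device firmware API.

```python
class SimulatedDevice:
    """Minimal stand-in for the navigation device, for illustration only."""

    def __init__(self, enable_after, audio_after):
        self._ticks = 0
        self._enable_after = enable_after
        self._audio_after = audio_after
        self.grammar_loaded = None

    def asr_enabled(self):
        # True once the user has tapped the "talk with me" icon.
        return self._ticks >= self._enable_after

    def idle(self):
        self._ticks += 1

    def selected_language(self):
        return "en-GB"

    def load_grammar(self, language):
        self.grammar_loaded = language

    def poll_microphone(self):
        self._ticks += 1
        return "city-name-utterance" if self._ticks >= self._audio_after else None


def wait_for_audio_input(device):
    """Block until the ASR mode is enabled and an utterance arrives."""
    # Step S2: loop until the audio recognition mode is enabled.
    while not device.asr_enabled():
        device.idle()
    # Language and grammar information is loaded from memory into the
    # ASR module once the mode is enabled (per the description above).
    device.load_grammar(device.selected_language())
    # Step S4: loop until an audio input is received.
    while True:
        audio = device.poll_microphone()
        if audio is not None:
            return audio
```

In the sketch, enabling the mode triggers the one-time grammar load, after which the device simply waits for speech, mirroring the S2 → S4 flow of fig. 5.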
ASR modules are commonly used to recognize speech information from different users. This information is typically unpredictable and therefore cannot be stored in memory 230. The ASR module or engine operates in conjunction with the processor 210 to convert the received voice information into a sequence of phonemes in a known manner, and then cooperates with the processor 210 to match the stored existing grammar for city, street names, etc. with the converted sequence of phonemes.
In step S6, if an audio input is received, the processor 210 cooperates with the ASR module to convert the input speech into phonemes and compare the sequence of phonemes to the information stored in the memory 230 to determine at least one option for address information about the travel destination based on the received audio input. For example, in at least one embodiment, the at least one option for address information regarding a travel destination may include a city name. Thus, the user may speak the city name as part of the address information for the travel destination, where initial input of the city may be prompted by the navigation device 200 displaying a request to enter travel destination information (e.g., "which city?"). Upon receiving this audio information, the processor 210 and ASR module process the phonemes as described above and compare this information to the cities stored in memory 230 to determine, if possible, at least one option for the input audio sound. If nothing is recognized, the navigation device 200 may return to a screen that prompts entry of city or other address information, and may flash or otherwise display, for example, a message "input not recognized". As will be explained in another embodiment of the present application, an indicator may also be displayed to the user, for example, indicating whether the volume of the audio input is within, above, or below an acceptable range.
If at least one address information option (e.g., a city) can be determined in step S6, the process proceeds to step S8, in which the at least one determined option for address information on the travel destination is audibly output. For example, instead of the system merely assuming that the audio input was correctly received, the processor 210 directs the audio output of the at least one determined option of address information about the travel destination in step S8. Thereafter, in step S10, the processor 210 waits to see whether an affirmative audio input is received. If an affirmative audio input is received, the processor 210 and ASR module may then confirm that a correct determination occurred, and may thus confirm selection of the at least one determined option that was output audibly upon receipt and recognition of an affirmative audio input (e.g., "Yes").
Thus, instead of the processor 210 and ASR module merely guessing that the audio input is correct, at least one determined option regarding the address information is first output audibly, and selection of the at least one determined option is not confirmed until a positive audio input is received.
Upon receiving the audio input, at least one address information option, such as a city name, of the travel destination is determined, as set forth in step S6. However, in at least one example embodiment of the present application, the processor 210 recognizes a plurality of "N-best" options (not just one option, note that N may be any number, such as six). Basically, the processor 210, in conjunction with the ASR module, seeks to best determine the city name from the phonemes of the audio input (e.g., in this first case of input of address information). Processor 210 scans or examines all of the various cities stored in memory 230 for a match. The processor 210 then ranks the most likely matches such that the most likely match will be audibly output to the user of the navigation device 200 as at least one determined option of address information about the travel destination.
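The "N-best" ranking described above can be sketched as follows. In a real ASR engine, candidate grammar entries are scored against the decoded phoneme sequence acoustically; here, as an admitted simplification, Python's `difflib.SequenceMatcher` stands in for that match score, and the function name is hypothetical.

```python
import difflib


def n_best(phonemes, stored_names, n=6):
    """Rank stored city names by similarity to the decoded input string.

    `phonemes` here is approximated by a plain lowercase string; a real
    engine would compare phoneme sequences, not spellings.
    """
    scored = [
        (difflib.SequenceMatcher(None, phonemes, name.lower()).ratio(), name)
        for name in stored_names
    ]
    # Most likely match first: this is the option output audibly to the user,
    # with the remaining entries displayed as numbered options.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [name for _, name in scored[:n]]
```

A call such as `n_best("salt lake city", cities)` would return a six-entry list with the closest-sounding city first, matching the ranked list the processor 210 displays on the integrated input and display device 290.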
Accordingly, upon a positive audio input in step S10, selection of the at least one determined option that is output audibly may be confirmed. However, because an "N-best" city may be initially determined, the processor 210 may also direct the navigation device 200 to display a list of "N-best" options, such as the N-best matches of the city name determined by the processor 210, on the integrated input and display device 290. The most likely match based on the audio input received from the user may be output audibly, and may further be displayed visually at the top of the "N-best" list (as option number one in the displayed list). Thereafter, in step S14, the next best option may be visually displayed to the user as the numbered option (e.g., options two through six). Thereafter, in step S16, the visually output option may be selected via display and subsequent input, such as by the integrated input and display device 290. If the visually output option is selected, the selection may be confirmed in step S20 of fig. 5 by, for example, the processor 210.
Thus, the processor 210 and ASR module may be used to determine not only a single option, but multiple options for address information about a travel destination. For example, each of the plurality of options may be output visually while only one option is output audibly. The plurality of options may be visually output for selection on the integrated input and display device 290 of the navigation device 200. Each of these options (e.g., a list of cities that sound most like the audio input) may be determined and displayed, and may be selected by at least one of a touch panel input and an audio input. Additionally, the at least one option may be further selectable via receiving an indication of a touch panel input. Further, each of the plurality of determined options may be selected via receiving an indication of a touch panel input and/or by an audio input corresponding to the number of the displayed option (e.g., the user says "two" to select the second displayed option).
As one non-limiting example, if the user of the navigation device 200 speaks the city "Salt Lake City," the processor 210 and ASR module may determine an "N-best" list of cities to be audibly and visually output. The first city in the displayed list may be "Salt Lake City" and may be output audibly and visually, for example, on the integrated input and display device 290 of the navigation device. In addition, the processor 210 and ASR module may determine additional "N-best" cities, including, for example, five other cities, such as Salem, Sacramento, San Antonio, Springfield, and Staunton. In one exemplary embodiment of the present application, the "N-best" list includes a fixed number of options, for example, six options. These six options (option number one and the five other N-best cities) may then be displayed to the user for audio or touch panel input/selection. Thus, if all six options are displayed in order on the touch panel of the integrated input and display device 290, the user may simply touch and thereby select one of the six options. Alternatively, when the first option "Salt Lake City" is audibly output to the user, the user may confirm selection of the audibly output option by issuing an affirmative audio input. Alternatively, the user may select any of the other five displayed options (or even, for example, the first option) simply by speaking the number corresponding to the particular option (e.g., "6" for the sixth option, "Staunton").
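Interpreting the user's reply against the displayed six-entry list can be sketched as below. The reply vocabulary (which words count as affirmative, which digit words are accepted) is an assumption for illustration; the option names follow the Salt Lake City example above.

```python
# Assumed reply vocabulary -- not the device's actual restricted grammar.
AFFIRMATIVE = {"yes", "yeah", "ok"}
DIGITS = {"1": 1, "one": 1, "2": 2, "two": 2, "3": 3, "three": 3,
          "4": 4, "four": 4, "5": 5, "five": 5, "6": 6, "six": 6}


def interpret_reply(reply, options):
    """Map a spoken reply to the selected option, or None if unrecognized."""
    word = reply.strip().lower()
    if word in AFFIRMATIVE:
        # An affirmative input confirms the audibly output first option.
        return options[0]
    if word in DIGITS and DIGITS[word] <= len(options):
        # A spoken number selects the correspondingly numbered option.
        return options[DIGITS[word] - 1]
    return None
```

Restricting replies to "yes" and six numeric values is what lets a small ASR grammar confirm the selection reliably, as the following paragraph notes.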
By utilizing an affirmative audio input and/or an audio input of just one of six numeric values, the processor 210 increases the likelihood of correctly recognizing the user's input, and thereby may reliably confirm the user's selection of a particular option.
Thereafter, once the user selects a city name, and this selection is confirmed in step S12 or S20, the user may issue another audio output for input/reception by the processor 210 and ASR module, the other audio output corresponding to, for example, a street name. Thereafter, upon selection of a city name and upon receiving another audio output, the processor 210 and ASR module may determine at least one street name. Again, the processor 210 and ASR module may determine a list of "N-best" street names for subsequent audio and/or visual output to a user of the navigation device 200 for subsequent selection by the user. The selection may be made in the same manner as previously discussed with respect to city names.
Finally, the user may audibly output for input/reception by the processor 210 and ASR module a number corresponding to the last element of the travel destination address, which number may be recognized and used to determine an "N-best" list in the same manner as previously set forth with respect to city and street names. Alternatively, the user may simply type in the numerical element (number) of the address of the travel destination. As such, a complete address of the travel destination may be input, and the processor 210 may thereafter use the address to determine a route of travel (e.g., in conjunction with GPS signals indicating the current location of the navigation device 200 and map information stored in the memory 230).
It should be noted that the process of fig. 5 may begin with audio input and recognition of, for example, country and/or state rather than city names. Additionally, upon determining a plurality of countries, states, cities, streets, etc. or an "N-best" list thereof, each of the plurality of country, state, city, or street names may be output visually and only one audibly for subsequent selection thereof by touch panel input or audio input in a manner similar to that previously described.
As previously discussed, FIG. 6A provides an illustration of a non-limiting example of a selectable icon for enabling an audio recognition mode. It should be noted that upon enabling this audio recognition mode, the icon display may be changed to indicate to the user that the audio recognition mode has been enabled and that the system is merely waiting to receive audio input (e.g., as indicated in step S4 of fig. 5). The display change may include altering the displayed icon in some manner, such as changing the color of the virtual button shown in fig. 6A, or otherwise changing the appearance of such virtual button/icon. This is shown in fig. 6B, noting that while waiting for audio input, the button may be a different color, such as green.
Thereafter, when the system correctly determines an address information option for the travel destination, for example, in step S6, the virtual button/icon may be changed again, for example, in the manner shown in fig. 6C. Finally, after the at least one determined option is audibly output (step S8 of fig. 5), the icon may be altered again, for example, as shown in fig. 6D. This may provide feedback to the user regarding the use of the audio recognition mode.
It should be noted that the determination of the at least one option for address information regarding the travel destination based on the received audio input in step S6 of fig. 5 may involve inputting the country/state/city/street address of the travel destination in a normal manner, and/or may involve determining the travel destination based on recent destinations, points of interest, preferences (favorites), etc., as shown, for example, in fig. 7. Thus, upon receiving the indication that the audio recognition mode is enabled in step S2, a message such as "where you want to go" may be displayed to the user on, for example, the integrated input and display device 290 of the navigation device 200. Thereafter, the initial audio input received in step S4 may be an audio input of a word regarding the information category, such as "home" 710, "preference" 720, "address" 730, "recent destination" 740, or "point of interest (POI)" 750. The processor 210 and ASR module may be programmed to recognize one of the aforementioned categories 710, 720, 730, 740, or 750 such that the determined at least one option for address information regarding a travel destination may include traditional information, such as a city, a state, a street name, etc., or may include other types of information, such as points of interest, preferences, etc. Again, each of these processes may determine the output of an option for address information regarding a travel destination, noting that the most likely option may be output audibly, and the selection of that option confirmed by a positive audio input (or touch panel input), with the other "N-best" options being output visually, with the selection of the other options confirmed by at least one of audio and visual inputs.
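The category dispatch described above (items 710-750 of fig. 7) can be sketched as a simple keyword-to-handler lookup. The keyword spellings and handler names are illustrative assumptions; the device's actual grammar may differ.

```python
def dispatch_category(utterance, handlers):
    """Route a recognized category word to the matching entry-flow handler."""
    # Assumed keyword spellings for the categories of fig. 7.
    categories = {
        "home": "home",                    # item 710
        "favorites": "favorites",          # item 720
        "address": "address",              # item 730
        "recent destination": "recent",    # item 740
        "point of interest": "poi",        # item 750
    }
    key = categories.get(utterance.strip().lower())
    if key is None:
        return None  # unrecognized: the device would re-prompt the user
    return handlers[key]()
```

Each handler would then continue its own flow (e.g., the "address" handler prompting for a city name as in fig. 5), so one initial utterance selects which kind of destination entry follows.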
In one exemplary embodiment, recognition may proceed as follows:
for example, the process of recognition of geographic names (cities, streets and intersections) can proceed according to the following rules:
the process may be initiated by a user (e.g., selecting a voice recognition address entry).
The processor 210/ASR module may then enter a listening mode and may indicate this mode with, for example, a special icon display. The color of the icon may be changed depending on whether the level of the input is within an acceptable range, too low (no input), too high, or whether the input has not been properly recognized (bad input). This change may serve as feedback to the user.
If the input is deemed acceptable by the processor 210/ASR module, the processor 210/ASR module may then seek to match the accepted sequence of phonemes to the known sequences of the selected grammar. Here, it is possible to combine a precompiled grammar (a list of already known names) with a dynamic part of the grammar (names added by the user). This aspect is notable because it relates to map sharing (MapShare) technology.
The processor 210/ASR module may then present the results to the user via display on the integrated input and display device 290 in the form of an N-best list. For example, if the current voice is a TTS voice, the best entry (first in the list) may be output audibly to the user.
The user may then have the possibility to accept or reject the result. In the first case, the processor 210/ASR module proceeds to the next step, which is recognition at the next address level (city → street, street → intersection, or street → house number) or routing. In the second case, if the correct entry is present in the list, it is possible for the user to speak the line number corresponding to that entry, or to return to the previous step by saying, for example, "back".
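The address-level progression in the rules above can be sketched as a small state machine. The level names follow the text (city → street → house number); the class itself and its method names are illustrative.

```python
class AddressEntry:
    """Tracks which address level is currently being recognized."""

    LEVELS = ["city", "street", "house number"]

    def __init__(self):
        self.index = 0
        self.accepted = {}

    @property
    def level(self):
        return self.LEVELS[self.index]

    def accept(self, value):
        """User accepted the result: record it and advance one level.

        Returns the next level to recognize, or None once the address
        is complete and routing can begin.
        """
        self.accepted[self.level] = value
        if self.index < len(self.LEVELS) - 1:
            self.index += 1
            return self.level
        return None

    def back(self):
        """User said "back": return to the previous level."""
        if self.index > 0:
            self.index -= 1
        return self.level
```

An accepted city thus moves recognition to street names, and so on, until all levels are filled and the device can compute the route.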
It should be noted that each of the foregoing aspects of the embodiments of the present application has been described with respect to methods of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, the navigation device 200 comprising: a processor 210 to receive an indication that an audio recognition mode in the navigation device 200 is enabled, and to determine, upon receiving an audio input, at least one option for address information relating to a travel destination based on the received audio input; and an output device 241 to audibly output at least one determined option for address information regarding a travel destination, the processor 210 being further configured to confirm selection of the at least one audibly output option upon receipt of an affirmative audible input. Such a navigation device 200 may further comprise an integrated input and display device 290 to enable display of the icons and/or options and subsequent selection thereof, and/or may further comprise an audio output device, such as a speaker. Additionally, the input device 220 may include a microphone. Thus, as will be understood by those skilled in the art, such a navigation device 200 may be used to perform various aspects of the method described in relation to figs. 5-7. Therefore, further explanation is omitted for the sake of brevity.
In at least one other embodiment of the present application, a method comprises: receiving an indication that an audio recognition mode in the navigation device 200 is enabled; and upon receiving an indication that the audio recognition mode is enabled, displaying an indication on the integrated input and display device 290 as to whether the volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range.
In at least one other embodiment of the present application, a navigation device 200 comprises: a processor 210 to receive an indication of enabling an audio recognition mode in the navigation device 200; and an integrated input and display device 290 to display an indication of whether the volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range upon the processor 210 receiving an indication that the audio recognition mode is enabled.
As indicated previously, embodiments of the present application may be used to indicate to a user whether an audio input (e.g., the audio input of step S4 of fig. 5) is within an acceptable range. As shown in FIG. 8, it is initially determined in step S20 by the processor 210, for example in conjunction with an ASR module, whether audio recognition mode is enabled. If enabled, three different displays may be displayed in steps S24, S28, and S32 depending on whether the volume of the audio input is determined to be within the acceptable range. For example, upon receiving audio input, the processor 210 and ASR module may attempt to ascertain the input information. If the volume is within the acceptable range, the processor 210 and ASR module determine that there is a greater likelihood of a correct input.
Accordingly, after the audio recognition mode is enabled and after the audio input information is received, it is determined whether the volume of the audio input is within the acceptable range in step S22. This determination may be accomplished by the processor 210 comparing the volume of the received information to an acceptable range stored in memory, for example, having an upper threshold and a lower threshold. If the volume of the received audio input is within the upper and lower thresholds in step S22, the processor then determines that the volume of the audio input is within the acceptable range. In response to this determination, the process moves to step S24, and in step S24, the processor 210 directs display of an indication that the volume is within the acceptable range. For example, such display may include changing the color of the "talk with me" icon shown in fig. 6A to an icon such as the icon shown in fig. 6B, which is, for example, green, indicating acceptance. Alternatively, another indicator may be displayed, again noting that the indicator may be displayed in a color (e.g., green) indicating acceptance.
If it is determined in step S22 that the volume is not within the acceptable range, the processor 210 then moves to step S26 or step S30 to determine whether the volume is higher or lower than the acceptable range. It should be noted that the order of steps S26 and S30 is not important, as such determinations can be made in any order. If it is determined in step S26 that the volume is above the acceptable range, i.e., greater than the upper threshold of the acceptable range, then an indication may be displayed in step S28 that the volume is above the acceptable range. For example, the icon of fig. 6B may be displayed in a red color (a color commonly indicating an error or an excessive level), indicating that the audio input is too loud, and/or a red indicator may be displayed to the user, again indicating that the volume is too high.
Thereafter, or before step S26, the processor 210 moves to step S30, and in step S30, the processor 210 determines whether the volume is lower than the acceptable range. If the volume is below the acceptable range, an indication that the volume is below the acceptable range may be displayed in step S32. For example, this may involve displaying the icon of fig. 6B in, for example, yellow, indicating to the user that the audio input is not high enough. Alternatively, a yellow indicator may be displayed, for example, on the integrated input and display device 290.
It should be noted that the use of green, red, and yellow is merely an example, and other colors may be utilized. In addition, other methods of displaying an indication that the volume is within, above, or below the acceptable range may also be used, including (but not limited to) displaying words indicating that the user should speak more loudly, etc. Thus, as shown in the example embodiment of fig. 8, the method of the present application may comprise receiving an indication that an audio recognition mode in the navigation device 200 is enabled, and, upon receiving the indication that the audio recognition mode is enabled, displaying an indication on the integrated input and display device 290 as to whether the volume of the received audio input is within, above, or below an acceptable range. The display may include, for example, color information, where yellow may be used to indicate that the received audio input is lower than the acceptable range, red may be used to indicate that the received audio input is higher than the acceptable range, and green may be used to indicate that the received audio input is within the acceptable range.
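The threshold comparison of fig. 8 (steps S22, S26, S30) maps directly to a color choice, as sketched below. The numeric threshold values and units are purely illustrative assumptions; the text specifies only that an upper and a lower threshold are stored in memory.

```python
# Assumed thresholds on an arbitrary 0-100 level scale -- illustrative only.
LOWER_THRESHOLD = 30
UPPER_THRESHOLD = 80


def volume_indicator(volume):
    """Return the indicator color for a received audio input's volume."""
    if volume < LOWER_THRESHOLD:
        return "yellow"  # below the acceptable range: user should speak up
    if volume > UPPER_THRESHOLD:
        return "red"     # above the acceptable range: input too loud
    return "green"       # within the acceptable range: input accepted
```

As the surrounding text notes, the specific colors are one example; the essential point is the three-way classification against the stored thresholds.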
For example, in connection with the process shown in FIG. 8, address information regarding the user's travel destination may be received, where the display may then indicate whether the received information is within an acceptable range. Accordingly, the address information may include at least one of city name information and street name information. Additionally, upon receiving address information within an acceptable range, the process may include at least one of: recognizing the address information, displaying an indication of no recognition, and displaying a list of options to the user on the integrated input and display device 290 for selection. Thus, the process shown in fig. 5 may be integrated with the process shown in fig. 8.
It should be noted that each of the foregoing aspects of the embodiments of the present application have been described with respect to methods of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, the navigation device 200 comprising: a processor 210 to receive an indication of enabling an audio recognition mode in the navigation device 200; and an integrated input and display device 290 to display an indication of whether the volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range upon the processor 210 receiving an indication that the audio recognition mode is enabled. Such a navigation device 200 may further comprise an audio output device, such as a speaker. Additionally, the input device 220 may include a microphone. Thus, as will be understood by those skilled in the art, such a navigation device 200 may be used to perform various aspects of the method described in relation to fig. 5-8. Therefore, further explanation is omitted for the sake of brevity.
Finally, fig. 9 is directed to another embodiment of the present application. Typically, the navigation device 200 is not used in the navigation mode when address information is entered into the navigation device 200. Thus, although the process set out in figure 5 may be used with the navigation device 200 in the navigation mode, this is typically not the case, as the vehicle on which the navigation device 200 is located, for example, is typically stationary when the user has just entered a travel destination from which a travel route may be determined.
In at least one other embodiment of the present application, a method comprises: receiving an indication that an audio recognition mode in the navigation device 200 is enabled; receiving additional information from a source different from a user of the navigation device 200; formulating a question that can be answered by a yes or no answer from the user based on the received additional information; and outputting the formulated question to the user.
In at least one other embodiment of the present application, a navigation device 200 comprises: a processor 210 to receive an indication that an audio recognition mode is enabled, to receive additional information from a source different from a user of the navigation device 200, and to formulate a question that can be answered by a yes or no answer from the user based on the received additional information; and an output device 241 for outputting the formulated question to the user.
FIG. 9 of the present application includes a process involving the audio recognition mode that is more likely to be used while the conveyance in which the navigation device 200 is located is moving; for example, when the navigation device 200 is operating in a navigation mode.
In the process shown in fig. 9, in step S50, it is initially determined whether the audio recognition mode is enabled. This determination may be made, for example, in a manner similar to that previously described, including, for example, recognition of selection of the icon shown in fig. 6A. Once this audio recognition mode is enabled, the processor 210 of the navigation device 200 monitors not only the receipt of audio information from the user, but may also monitor the receipt of additional information from a source other than the user of the navigation device 200. Accordingly, in step S52, the processor 210 determines whether additional information from a source other than the user is received. Such information may include, but is not limited to, receipt of an incoming call or message (e.g., a telephone call or SMS message received by the navigation device 200 itself and/or via a paired mobile phone), received traffic information, etc. If no additional information is received, the process simply loops back and continues to monitor for this information.
However, if additional information is received in step S52 from a source other than the user of the navigation device 200, the process moves to step S54, in which the processor 210 formulates a question that can be answered by a yes or no answer from the user based on the received additional information. For example, the processor 210 may monitor other systems in the navigation device 200, including, for example, a paired mobile phone, to determine whether an SMS message has been received. If an SMS message is received, the processor 210 may cooperate with an ASR module and/or, more likely, a TTS (text-to-speech) module to formulate a question that can be answered by a yes or no answer from the user, e.g., "New message received; should it be read aloud?". Thereafter, in step S56, the formulated question may be output, noting that the output is preferably an audio output (but may also be accompanied by a visual output, for example). Somewhat similarly, the navigation device 200 may determine that a received traffic update indicates a traffic delay along the current route (e.g., calculated by the processor 210 in a known manner), whereupon the processor 210 and TTS module may direct an output such as, "The traffic delay on your route is now 'x' minutes. Do you want to re-route to minimize delay?".
ASR modules are typically used to recognize speech information from different users. This information is typically unpredictable and therefore cannot normally be stored in memory 230. As described above, the ASR module or engine operates in conjunction with the processor 210 to dynamically convert received voice information into a sequence of phonemes, and cooperates with the processor 210 to match the stored existing grammars for city names, street names, etc. with the converted sequence of phonemes. As such, the ASR module causes the processor 210 to dynamically utilize a large portion of the memory 230.
Conversely, when the processor 210 works in conjunction with the TTS module, the TTS module forms a question that may, for example, be predefined or pre-recorded in the memory 230. The TTS module can output any kind of audio information, the only constraint being that the audio information is expressed in the language of the corresponding voice. Portions of phrases considered most commonly used may also be pre-recorded, stored, and used later by the TTS module to improve the quality of the output. Thus, while a TTS module may be used to convert a simple SMS message to a voice output for the user, the TTS module typically works best in conjunction with the processor 210 for outputting pre-established questions (slightly modified, if necessary) after the processor 210 determines that additional information, such as an SMS message, a traffic update, etc., has been received by the navigation device 200. This information may include traffic information, incoming phone calls, incoming SMS messages, and the like.
Additionally, formulating the question may include inserting information into a stored question based on the received information, such as inserting the traffic delay into the aforementioned traffic delay question. Accordingly, the formulating may include inserting information regarding the calculated traffic delay into the stored question based on the received traffic information. Thereafter, in step S56, the formulated question may be output, noting that the output may include at least one of an audio output and a visual output.
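Inserting received information into a stored question amounts to filling a placeholder in a pre-established template before handing it to the TTS module. The sketch below assumes a simple keyed template store; the template texts echo the examples given above, but the structure is an illustration, not the patent's actual storage scheme.

```python
# Pre-stored question templates (TTS works best with pre-established
# phrasing); the {delay} placeholder is filled in from received information.
STORED_QUESTIONS = {
    "traffic": ("The traffic delay on your route is now {delay} minutes. "
                "Do you want to re-route to minimize delay?"),
    "sms": "New message received; should it be read aloud?",
}

def fill_question(key: str, **values) -> str:
    """Insert received information (e.g. a calculated delay) into a
    stored question before handing it to the TTS module."""
    return STORED_QUESTIONS[key].format(**values)
```

Keeping the surrounding phrasing fixed lets those portions be pre-recorded for quality, with only the inserted value synthesized dynamically.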
The question output in step S56 is typically formulated to receive a yes or no answer from the user, thereby enabling the processor 210 to operate in conjunction with the ASR module while the navigation device 200 is operating in the navigation mode during driving. In this mode, the navigation device 200 is already utilizing a large portion of the memory 230, and preferably does not devote much of the memory 230 to the ASR module. By utilizing yes/no questions, the processor 210 and ASR module can readily recognize a short yes or no answer from the user. Thereafter, upon receiving a yes answer from the user, the navigation device 200 may perform a follow-up action, such as calculating a new travel route based on receiving a yes answer from the user regarding the calculated traffic delay. Alternatively, where the additional information is an SMS message, upon receiving a yes answer from the user the incoming text message may be converted by utilizing, for example, the TTS module and output to the user.
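The memory advantage of yes/no questions is that the ASR grammar shrinks to a handful of short words. A minimal sketch of that answer-handling step, assuming invented word lists and a caller-supplied follow-up action (e.g. recalculating the route or reading the SMS aloud):

```python
# Tiny, memory-cheap ASR grammar for affirmative / negative answers;
# the word lists are illustrative assumptions, not from the patent.
YES_WORDS = {"yes", "yeah", "ok", "sure"}
NO_WORDS = {"no", "nope", "cancel"}

def handle_answer(answer: str, follow_up) -> bool:
    """Recognize a short yes/no utterance and run the follow-up action
    only on an affirmative answer; return whether the answer was 'yes'."""
    word = answer.strip().lower()
    if word in YES_WORDS:
        follow_up()   # e.g. recalculate the route, or read the SMS aloud
        return True
    if word in NO_WORDS:
        return False
    raise ValueError(f"unrecognized answer: {answer!r}")
```

Because the grammar is fixed and tiny, this check can run even while the navigation mode holds most of memory 230.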
It should be noted that each of the foregoing aspects of the embodiments of the present application has been described with respect to methods of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, the navigation device 200 comprising: a processor 210 to receive an indication that an audio recognition mode is enabled, to receive additional information from a source other than a user of the navigation device 200, and to formulate, based on the received additional information, a question that can be answered by a yes or no answer from the user; and an output device 241 for outputting the formulated question to the user. Such a navigation device 200 may further comprise an integrated input and display device 290, by means of which icons and/or selections may be displayed and subsequently selected, and/or may further comprise an audio output device, such as a speaker. Additionally, the input device 220 may include a microphone. Thus, as will be understood by those skilled in the art, such a navigation device 200 may be used to perform various aspects of the method described in relation to fig. 9. Therefore, further explanation is omitted for the sake of brevity.
The methods of at least one embodiment expressed above may be implemented as a computer data signal embodied in a carrier wave or propagated signal, the computer data signal representing a sequence of instructions that, when executed by a processor, such as the processor 304 of the server 302 and/or the processor 210 of the navigation device 200, cause the processor to perform a respective method. In at least one other embodiment, at least one method provided above may be implemented as a set of instructions contained on a computer-readable or computer-accessible medium, such as one of the previously described memory devices, to perform the respective method when executed by a processor or other computer device. In various embodiments, the medium may be a magnetic medium, an electronic medium, an optical medium, or the like.
Still further, any of the foregoing methods may be embodied in the form of a program. The program may be stored on a computer readable medium and adapted to perform any of the aforementioned methods when run on a computer device (a device comprising a processor). Thus, the storage medium or computer readable medium is adapted to store information and to interact with a data processing facility or computer device to perform the method of any of the above-mentioned embodiments.
The storage medium may be a built-in medium installed inside the computer device main body or a removable medium arranged to be detachable from the computer device main body. Examples of built-in media include, but are not limited to, rewritable non-volatile memory (e.g., ROM and flash memory) and hard disks. Examples of removable media include (but are not limited to): optical storage media such as CD-ROM and DVD; magneto-optical storage media, such as MO; magnetic storage media including, but not limited to, floppy diskettes (trademark), tape cassettes, and removable hard drives; media with built-in rewritable non-volatile memory, including (but not limited to) memory cards; and media with built-in ROM, including (but not limited to) ROM cartridges; and so on. Further, various information (e.g., characteristic information) about the stored image may be stored in any other form, or it may be provided in other ways.
As will be appreciated by those skilled in the art upon reading the present disclosure, the electronic components of the navigation device 200 and/or the components of the server 302 may be embodied as computer hardware circuits or as a computer readable program, or as a combination of both.
The systems and methods of embodiments of the present application include software operating on a processor to perform at least one of the methods according to the teachings of the present application. One of ordinary skill in the art will understand, upon reading and comprehending this disclosure, the manner in which a software program can be launched from a computer readable medium in a computer based system to execute the functions found in the software program. Those skilled in the art will further appreciate the various programming languages that may be employed to create software programs designed to implement and perform at least one of the methods of the present application.
The programs may be constructed in an object-oriented manner using an object-oriented language including, but not limited to, JAVA, Smalltalk, C++, or the like, and may be constructed in a procedure-oriented manner using a programming language including, but not limited to, COBOL, C, or the like. The software components may communicate in any number of ways well known to those skilled in the art, including, but not limited to, through application program interfaces (APIs) and interprocess communication techniques (including, but not limited to, remote procedure calls (RPC), the Common Object Request Broker Architecture (CORBA), the Component Object Model (COM), the Distributed Component Object Model (DCOM), the Distributed System Object Model (DSOM), and Remote Method Invocation (RMI)). However, as will be appreciated by those of skill in the art upon reading the present disclosure, the teachings of the present application are not limited to a particular programming language or environment.
The above systems, devices and methods have been described by way of example, and not by way of limitation, with respect to improving accuracy, processor speed, and user interaction simplicity, etc. for the navigation device 200.
Additionally, elements and/or features of different example embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and the appended claims.
Still further, any of the above and other exemplary features of the present invention may be embodied in the form of apparatuses, methods, systems, computer programs, and computer program products. For example, the foregoing methods may be embodied in the form of a system or device, including, but not limited to, any structure for performing the methods illustrated in the figures.
Having thus described the example embodiments, it will be apparent that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (25)

1. A method of operating a navigation device, characterized by:
determining that an audio recognition mode is enabled in the navigation device,
receiving an audio input,
processing both the audio input and additional information from a source other than a user of the navigation device,
formulating at least one response based on the audio input and the additional information, and
outputting the response by at least one of visual, audible, and wireless signal emitting means.
2. The method of claim 1, wherein the additional information is address information of a travel destination and one response is at least one option of address information determined based on the audio input, the response being audibly output by the navigation device and subsequently acknowledged upon receipt of a further affirmative audio input.
3. The method of claim 1 or 2, wherein the response is a plurality of travel destinations each having address information, and wherein address information for each of the plurality of travel destinations is output visually and only one of the plurality of travel destinations is output audibly.
4. The method of claim 3, wherein the address information for each of the plurality of travel destinations is visually output for selection on an integrated input and display device of the navigation device, each of the travel destinations being selectable via receipt of an indication of a touch panel input.
5. A method according to any preceding claim, wherein a plurality of responses are formulated and then selectable by audio input.
6. The method of claim 5, wherein the selection of one of the plurality of output responses is selectable by audibly inputting a number corresponding to one of the responses.
7. The method of claim 2 or any claim dependent thereon, wherein the at least one determined option of address information comprises a city name.
8. The method of claim 7, wherein upon selection of a city name and upon receipt of another audio input, further address information in the form of at least one street name is determined.
9. The method of claim 8, wherein a plurality of street names are determined, the plurality of street names are visually output, and only one street name from the plurality of street names is output audibly.
10. The method of any preceding claim, wherein the additional information is volume level threshold information and one response is an indication as to whether the volume of the received audio input is within an acceptable range, higher than the acceptable range, or lower than the acceptable range.
11. The method of claim 10, wherein the response is output visually and comprises a display of color information to display the indication.
12. The method of claim 11, wherein yellow is used to indicate that the received audio input is lower than the acceptable range, wherein red is used to indicate that the received audio input is higher than the acceptable range, and wherein green is used to indicate that the received audio input is within an acceptable range.
13. The method of any preceding claim, wherein the response is a question that can be answered by a yes or no answer from the user based on the received additional information.
14. The method of claim 13, wherein the information comprises traffic information.
15. The method of claim 13 or 14, wherein the information comprises receipt of at least one of an incoming call and a message.
16. The method of any of claims 13-15, wherein the formulating of the question comprises inserting information into the question based on the received information, and retrievable storage of the information.
17. The method of claim 16, wherein the formulating of the question comprises inserting information about calculated traffic delays.
18. The method of claim 16, further comprising performing a follow-up action upon receiving a yes answer from the user.
19. The method of claim 16, further comprising calculating a new travel route upon receiving a "yes" answer from the user regarding the calculated traffic delay.
20. A computer program comprising computer program code means adapted to perform all the steps of any of claims 1 to 19 when run on a computer.
21. A computer program as claimed in claim 20, embodied on or in a computer readable medium.
22. A navigation device programmed to complete any of the methods of claims 1-19 and comprising:
a processor,
a memory,
a microphone,
user input means,
a visual display, and
an audio output means.
23. The navigation device of claim 22, wherein the user input means is integrated with a visual display in a touch-sensitive visual display by means of which information can be displayed and subsequently selected.
24. The navigation device of claim 22 or 23, wherein the navigation device is portable.
25. The navigation device of claim 22 or any claim dependent thereon, wherein the visual display is a color display.
HK10103149.2A 2007-01-10 2007-10-19 A navigation device, a method of and a computer program for operating the navigation device comprising an audible recognition mode HK1136865A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US60/879,529 2007-01-10
US60/879,553 2007-01-10
US60/879,599 2007-01-10
US60/879,523 2007-01-10
US60/879,577 2007-01-10
US60/879,549 2007-01-10
US60/879,601 2007-01-10

Publications (1)

Publication Number Publication Date
HK1136865A true HK1136865A (en) 2010-07-09
