EP0901000B1 - Message processing system and method for processing messages - Google Patents

Message processing system and method for processing messages

Info

Publication number
EP0901000B1
Authority
EP
European Patent Office
Prior art keywords
voice
messages
message
voice tone
aloud
Prior art date
Legal status
Expired - Lifetime
Application number
EP98114357A
Other languages
German (de)
English (en)
Other versions
EP0901000A2 (fr)
EP0901000A3 (fr)
Inventor
Taizo Asaoka
Naoki Maeda
Hiroyuki Kanemitsu
Masanobu Yamashita
Current Assignee
Toyota Motor Corp
Original Assignee
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Toyota Motor Corp filed Critical Toyota Motor Corp
Publication of EP0901000A2
Publication of EP0901000A3
Application granted
Publication of EP0901000B1
Anticipated expiration
Status: Expired - Lifetime


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3629 Guidance using speech or audio output, e.g. text-to-speech
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser

Definitions

  • The present invention generally relates to a device and method for processing messages. More particularly, the present invention pertains to a device and method for use in, for example, a vehicle for processing messages sent from outside, such as electronic mail (e-mail), news information, weather information, traffic information and messages generated by the vehicle navigation system, through the use of a read-aloud function.
  • Japanese Patent Laid-Open Publication No. Hei 9-23273 describes a device that can read e-mail messages aloud.
  • The device is able to compose e-mail messages and send them to other terminals, and is also able to receive e-mail messages sent from other terminals.
  • The device has a voice synthesizer to generate voice signals from a loudspeaker in accordance with the text data of the received e-mail messages. By generating voice signals, the user can understand the contents of the e-mail messages without viewing a display device.
  • The device is also outfitted with a voice navigation device to generate voice guidance messages, for example where the vehicle should turn, to guide the driver along a route to a particular destination.
  • The device described above is susceptible to certain limitations and drawbacks. For example, when the device receives messages from different sources or senders and reads those messages aloud in the same voice tone, the user cannot easily recognize whose message is being read aloud. Even if a first message from one person has been read aloud and a second message from another person is beginning to be read aloud, it is not easy for the user to recognize the end of the first message and the start of the second message because the two messages are read in the same voice tone. Therefore, the user may confuse the sender of each message unless the user confirms, through visual observation of the display, who sent the message.
  • In other words, one of the drawbacks and disadvantages associated with this device is that the device reading the message is not well suited to distinguishing between messages from different sources or senders, and so the user may misunderstand the source or sender of a particular message.
  • Attention is also drawn to JP-A-05 260082, which describes the pre-characterising features of the present invention, and to JP-A-06 332822 and JP-A-09 008752.
  • Fig. 1 is a block diagram illustrating the configuration of a mobile terminal in accordance with a first embodiment of the present invention.
  • The mobile terminal is designed to read messages aloud and can be in the form of a Personal Digital Assistant (PDA), which is a type of portable terminal, a notebook-type personal computer, an in-vehicle information terminal, or another type of device.
  • Various components of the mobile terminal, for example a display monitor, a central processing unit (CPU), a memory, etc., can also be used in other contexts, such as in relation to a vehicle navigation system.
  • The mobile terminal 1 includes a display device 15 for outputting or displaying information in a visual form, and a loudspeaker device 16 for outputting information in an audio manner.
  • The display device 15 can be in the form of a color LCD (Liquid Crystal Display).
  • The mobile terminal 1 also includes a modem 11 that is adapted to be removably connected to a telephone system, for example a mobile phone, a car phone system, a PHS (Personal Handy-phone System), etc. Such a telephone system can instead be included in the mobile terminal 1 rather than being removably connected.
  • The modem 11 is designed to communicate with a mail processing device 13 that includes a mail receiving device 12 for receiving electronic mail (e-mail) message data, an e-mail address recognizing device 22 for recognizing the address of the e-mail sender, and a body data processing device 23 for processing the body or text of the e-mail message data.
  • The mail receiving device 12 is connected to the modem 11 as well as to the e-mail address recognizing device 22 and the body data processing device 23.
  • The mail processing device 13 receives e-mail data sent to it after the mobile terminal 1 activates its processing program for receiving e-mail message data.
  • The mobile terminal 1 can be connected to an e-mail server in an on-line information center by using the telephone system, and can receive e-mail message data sent to it.
  • The e-mail message data is received by the mail receiving device 12 through the modem 11.
  • The mail receiving device 12 demodulates and decodes the e-mail message data, and the decoded e-mail data in the mail receiving device 12 is supplied to the address recognizing device 22 and the body data processing device 23.
  • Each e-mail message includes the sender's address data, time and date information data concerning when the e-mail was sent, the subject data, and the body or text data of the message.
  • The e-mail data is itself coded data, so the address recognizing device 22 and the body data processing device 23 are designed to change the coded data into recognizable text data, character data or drawing data for display by using their reference dictionary database.
  • List data setting forth a list of received mail, such as that shown in Fig. 2, or body data setting forth the text associated with each e-mail message, is displayed on the display device 15. The extraction of these message fields is sketched below.
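The patent describes this field extraction only in prose; the following is a minimal Python sketch of how a received message might be split into the sender's address, date, subject and body. The function name `parse_message` and the use of Python's standard `email` parser are illustrative assumptions, not part of the disclosure.

```python
from email.parser import Parser

def parse_message(raw_text: str) -> dict:
    """Split one raw e-mail into the fields the terminal works with:
    the sender's address, the date, the subject and the body text."""
    msg = Parser().parsestr(raw_text)
    return {
        "sender": msg.get("From", ""),
        "date": msg.get("Date", ""),
        "subject": msg.get("Subject", ""),
        "body": msg.get_payload(),
    }

# The sender field is what the address recognizing device keys on.
sample = ("From: alice@example.com\n"
          "Date: Fri, 31 Jul 1998 09:00:00 +0900\n"
          "Subject: Hello\n"
          "\n"
          "See you at noon.")
print(parse_message(sample)["sender"])   # -> alice@example.com
```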
  • The mobile terminal 1 also includes a read-aloud requesting device 14, which is operated by the user.
  • When the read-aloud requesting device 14 is operated, the terminal 1 is placed in its read-aloud mode.
  • A touch-switch system can be used as the read-aloud requesting device 14.
  • The touch switch is displayed on the display device 15, and the terminal 1 can be switched to the read-aloud mode when the user touches the displayed touch switch.
  • The read-aloud mode can also be started automatically when the vehicle is turned on or begins to run.
  • Speed signals indicating that the vehicle is moving can be monitored to determine when the read-aloud mode should be started.
  • The voice processor device 17 of the mobile terminal 1 includes a voice output controller or voice synthesizer 19, a voice tone selector 20 that is connected to or interfaces with the voice output controller 19, and a voice tone data memory 21 that is connected to or interfaces with the voice tone selector 20.
  • The sender's address data, the time and date information data, the subject data and the body data are sent from the mail processing device 13 to the voice output controller 19. That data is changed into voice signals to be read aloud, and the voice signals are finally emitted from the loudspeaker 16, which is connected to the voice output controller 19.
  • The address recognizing device 22 detects the sender of each e-mail message by recognizing the sender's address. If there are several different senders among all of the received messages, the address recognizing device 22 assigns a distinctive number to each sender. The distinctive numbers are also supplied to the voice tone selector 20.
  • The voice tone selector 20 reads or allots one voice tone, or voice tone data, corresponding to the distinctive number from the voice tone data memory 21. In this embodiment, there are five audibly different patterns of voice tones in the voice tone data memory 21, although a different number of voice tone patterns can be provided.
  • The distinctive number for distinguishing between the voice tone data is allotted to the message from each sender. Then, the specified voice tone data is supplied to the voice output controller 19. Therefore, each received mail message is read aloud in the specified voice tone allotted by the voice tone selector 20, as illustrated by the sketch below.
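The selection of voice tone data by distinctive number can be pictured with a small sketch. The tone names and the modulo fallback are illustrative assumptions; the patent only states that the memory holds five audibly different voice tone data.

```python
# Hypothetical voice tone memory: five PCM tone entries keyed by the
# distinctive number 0-4 that the address recognizing device assigns.
VOICE_TONE_MEMORY = [
    "high_male_pcm",    # distinctive number 0
    "low_male_pcm",     # distinctive number 1
    "high_female_pcm",  # distinctive number 2
    "low_female_pcm",   # distinctive number 3
    "child_pcm",        # distinctive number 4
]

def select_voice_tone(distinctive_number: int) -> str:
    """Return the voice tone data allotted to a sender's distinctive number.
    The modulo keeps the lookup valid if more numbers than stored tones are
    ever supplied (remaining senders reuse the same stored tones)."""
    return VOICE_TONE_MEMORY[distinctive_number % len(VOICE_TONE_MEMORY)]
```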
  • The voice tone memory 21 stores the voice tone data as PCM-coded data.
  • Any person's voice can be used, for example a high-tone male voice, a low bass male voice, a high-tone female voice, a low female voice, a child's voice, etc.
  • A computer-synthesized voice, similar to a robot voice, can also be used.
  • The voice tones are distinguishable from one another as heard by the user.
  • Fig. 3A and Fig. 3B show a processing flowchart by which the system receives e-mail messages and reads them aloud.
  • A request signal is first generated to connect with an on-line service center.
  • The telephone system calls up the on-line service center after receipt of the request signal.
  • The server sends a response signal.
  • A response-OK signal is generated in step S2, indicating that the server has responded to the request.
  • In step S3, the mobile terminal generates an indicating signal for starting to check sent e-mail messages. After that, the receipt of e-mail messages from the server begins.
  • In step S4, it is determined whether or not any e-mail messages have been received. If there is no e-mail on the server, a message such as "No e-mails" is displayed on the display device 15 in step S5. Thereafter, in step S6, it is determined whether or not the system is in the reading-aloud mode. If the system is in the reading-aloud mode, an indicating signal is generated for making the voice output controller 19 read aloud a message such as "there is no e-mail for you". After reading this message aloud, the process is finished. If the result in step S6 is No (i.e., the system is not in the reading-aloud mode), the process ends without reading anything aloud.
  • If, at step S4, it is determined that there are one or more e-mail messages present, the program proceeds to step S8, where the e-mail data is received. Then, in step S9, it is determined whether or not the system is in the reading-aloud mode. If the system is in the reading-aloud mode, the total number of mail senders in all of the received e-mail messages is counted in step S10. Then, in step S11, it is judged whether or not the total number of senders is five or less.
  • If it is determined in step S11 that there are five or fewer senders, a distinctive number for a specified voice tone data is allotted to each sender in step S12. After that, relation data defining the relationship between the senders and the associated distinctive numbers is supplied to the voice output controller 19 in step S13.
  • If the determination at step S11 is No, in other words if the total number of different senders is more than five, the system detects in step S14 the first five senders and selects all e-mail messages sent by those first five senders.
  • In step S15, a distinctive number for the specified voice tone data is allotted to each of the first five senders.
  • The distinctive numbers are supplied to the voice tone selector 20 and the body or content data is supplied to the voice output controller 19 in step S16. Therefore, a different voice tone data is allotted to each sender, and each message sent by a particular sender is supplied to the voice output controller 19. Consequently, the reading-aloud voice tone for each sender's mail is different from one sender to the next when the system reads aloud all of the messages sent by the first five senders.
  • In step S17, the number of remaining senders beyond the aforementioned five is counted, and then the program returns to step S11.
  • The remaining messages are processed in the same way.
  • The five voice tone data stored in the voice tone memory are allotted to the first five senders, and for the remaining senders, the same stored voice tone data are repeatedly allotted.
  • In other words, a reading-aloud unit can be restricted to five senders, and in each unit each sender's mail is read aloud using a different voice tone, as shown in the sketch below.
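The following is a minimal Python sketch of the reading-aloud unit described in steps S10 to S17, assuming each message is a dictionary with "sender" and "body" keys and that `speak(text, tone)` stands in for the voice output controller 19; none of these names come from the patent.

```python
def read_all_aloud(messages, tones, speak):
    """Group received messages by sender, allot one of the stored voice
    tones to each of the first len(tones) senders, read their messages
    aloud, then repeat for the remaining senders."""
    # Group messages by sender address, keeping the order in which
    # senders first appear (steps S10, S14).
    order, groups = [], {}
    for m in messages:
        if m["sender"] not in groups:
            order.append(m["sender"])
            groups[m["sender"]] = []
        groups[m["sender"]].append(m)

    # Process senders in units no larger than the number of stored tones
    # (steps S11-S17): within a unit, every sender gets a different tone.
    unit = len(tones)
    for start in range(0, len(order), unit):
        for i, sender in enumerate(order[start:start + unit]):
            tone = tones[i]   # distinctive number i within this unit
            for m in groups[sender]:
                speak(m["body"], tone)
```

With the five tones of this embodiment, senders six to ten would reuse the same five tones in the next unit, matching the behaviour described above.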
  • If it is determined at step S9 that the system is not in the reading-aloud mode, the received mail list, such as that shown in Fig. 2, is displayed on the display device 15 in step S18. Then, the system determines in step S19 whether or not there is an indication to stop displaying the list. If there is an indication in step S19 to stop displaying the list, in other words if the determination in step S19 is Yes, the program ends. However, if the determination at step S19 is No because there is no indication that the display of the list should stop, the program proceeds to step S20 to determine whether or not there is an indication to display the full text or body data of one selected mail message. If the determination in step S20 is Yes, the full text or body data is displayed on the display device 15 in step S21.
  • In step S22, the full text data continues to be displayed so long as there is no indication to stop displaying it.
  • Once there is such an indication, the display device 15 shows the received mail list again after returning to step S18.
  • If the determination in step S20 is No, the list also continues to be displayed after returning to step S18.
  • In this way, the system can change the voice tone for reading aloud e-mail messages when messages from different senders are prepared to be read aloud.
  • The user can thus easily understand whether or not a message being read aloud is one that was sent by a sender whose messages have already been read aloud.
  • It may be useful for the system to memorize, for a predetermined period of time, the data defining the relationship between the senders and the distinctive number associated with each sender.
  • If the same sender sends further messages while that relationship data is still memorized, the previously assigned distinctive number is associated with that sender. This allows that sender's messages to be read aloud repeatedly in the same voice tone, as illustrated by the sketch below.
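The memorized sender-to-tone relationship can be sketched as a small cache with an expiry time; the retention period, the data structure and the `allot_new_number` callback are assumptions, since the patent only speaks of memorizing the relationship "for a predetermined period of time".

```python
import time

RETENTION_SECONDS = 7 * 24 * 3600   # assumed retention period
_sender_tones = {}                   # sender address -> (distinctive number, time assigned)

def tone_number_for(sender: str, allot_new_number) -> int:
    """Reuse a previously allotted distinctive number while the memorized
    relationship is still valid; otherwise allot a new number via the
    supplied allot_new_number() callback and memorize it."""
    now = time.time()
    entry = _sender_tones.get(sender)
    if entry is not None and now - entry[1] < RETENTION_SECONDS:
        number = entry[0]            # same sender, same voice tone
    else:
        number = allot_new_number()
    _sender_tones[sender] = (number, now)
    return number
```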
  • Alternatively, the distinctive numbers can be registered or assigned for certain predetermined senders by the user beforehand. The user can thus decide which voice tone data is assigned to the predetermined senders, making it easier for the user to recognize the sender whose messages are being read aloud when the user hears the voice tone.
  • The sender may also be able to send his e-mail messages with the above distinctive number.
  • In that case, the voice tone can be selected as the sender likes, which promotes personalization of the voice tone used for reading messages aloud.
  • It is preferable that the voice tone memory be capable of being updated to personalize the read-aloud voice tone. Additional voice tone data can be supplied on a PCM card medium or on CD-ROMs. An on-line network may also be useful once an on-line service center that can supply additional voice tone data is established.
  • The e-mail server can also be designed to allot or control the voice tones.
  • In that case, the mobile terminal itself need not always include such a voice tone database.
  • The sender can send each e-mail message with voice tone data, and the terminal can then read the message aloud by using the voice tone data attached to each message.
  • Alternatively, the e-mail server can include a voice tone database and be capable of assigning voice tone data to messages, with the terminal receiving the voice tone data to be read aloud when the terminal receives e-mail messages.
  • The e-mail server may also be designed to assign only a distinctive number for voice tone data without including the voice tone data itself.
  • The mobile terminal will then have to be outfitted with the voice tone database and be able to associate voice tones with the assigned distinctive numbers.
  • The mail processing device 13 and the voice processing device 17 described above are constituted by a computer system and are controlled by controlling programs in the computer system. Therefore, all processes are generally carried out by running the predetermined program.
  • This program may be pre-installed on the computer system, or may be supplied to the computer system through an on-line network or on some medium such as a CD-ROM, for example.
  • The mobile terminal shown in Fig. 1 is one that can be installed in a vehicle, but it can also be one that is handheld.
  • Fig. 4 illustrates general features of one example of an overall e-mail system in which the mobile terminal shown in Fig. 1 can be incorporated.
  • The mobile terminal 1 is connected to the mobile phone 2, which is able to access a mobile-phone center 3.
  • The mobile-phone center 3 is connected to the public telephone network 8.
  • Each terminal 6, 7 is also connectable to the telephone network 8.
  • The terminals 6, 7 can be in the form of, for example, desktop computers.
  • An on-line service center 4 is also connected to the network 8 through a data converter 5.
  • The on-line service center 4 includes an e-mail server 4a.
  • The mobile terminal 1 is able to receive e-mail messages sent to it from other terminals after accessing the e-mail server 4a.
  • The mobile-phone center 3 preferably includes many communication cells over a wide area. As the mobile terminal 1 moves within that wide area, it can maintain its connection to the on-line service center 4 because the mobile terminal 1 can change from one cell to another.
  • Another embodiment of the present invention is depicted in Fig. 5, which sets forth a block diagram showing the configuration of the mobile terminal device.
  • The system shown in Fig. 5 is similar to that shown in Fig. 1, but also includes vehicle navigation capabilities and is particularly well suited for use in a vehicle.
  • The system or device shown in Fig. 5 includes a mobile phone 30, similar to the mobile phone 2 shown in Fig. 4, that is connectable to the public telephone network by wireless communication.
  • An on-line service center, not specifically shown in Fig. 5, is also connected to the network. Satellite communication may also be available.
  • The mobile phone 30 is connected to a modem 32, which is similar to the modem 11 shown in Fig. 1, through a connector 31 so that the mobile phone 30 can maintain data communication.
  • The modem 32 is connected to an outside information controller 33.
  • This outside information controller 33 processes the transmission or reception of outside information such as e-mail messages, news of various kinds, advertising messages, traffic information, weather information, facilities information, business information, sightseeing information, etc. The outside information controller 33 also performs processing relating to the display or voice output of such information.
  • The outside information controller 33 is similar to the mail processing device 13 shown in Fig. 1, except that, in addition to processing e-mail information, the outside information controller 33 also processes other information such as that mentioned above.
  • A navigation controller 34 is connected to the outside information controller 33.
  • The navigation controller 34 performs processing to display maps, to provide route guidance information or to generate voice guidance.
  • Both the outside information controller 33 and the navigation controller 34 are connected to a display device 35, an operating device 36, an outside information memory device 37, a loudspeaker system 38 and a voice tone data memory device 39 through a local area network.
  • The display device 35, which is similar to the display device 15 shown in Fig. 1, can be in the form of a liquid crystal display and is designed to display map information or various kinds of text data.
  • The operating device 36 is similar to the read-aloud requesting device shown in Fig. 1, except that the operating device has greater capabilities.
  • The operating device 36 is composed of several switches and a touch panel installed on the front of the display device 35. The operating device 36 is operated by the user to input information of various kinds.
  • The outside information memory 37 is similar to the mail receiving device 12 depicted in Fig. 1, except that it is adapted to store a variety of different information beyond e-mail information.
  • The outside information memory 37 memorizes data concerning e-mail messages containing text, the e-mail lists, various kinds of news information or any other outside information.
  • The output from the loudspeaker 38, which is similar to the speaker 16 in Fig. 1, reads aloud the received e-mail messages, the news information or the guidance messages for navigation.
  • The voice tone memory 39 memorizes voice tone data of several different kinds and is similar to the voice tone memory 21 in Fig. 1.
  • A map database 40 and a positioning device 41 are connected to the navigation controller 34.
  • The map database 40 memorizes data concerning the map of an entire area, for example a country.
  • The map database 40 also memorizes various guidance information, for example the names of places, the names of intersections, various kinds of facility names or the names of shops, and message data involving various guidance phrases.
  • The positioning device 41 can detect its present position at all times.
  • This positioning device 41 can include a GPS receiver to receive wireless signals from GPS satellites around the globe.
  • D-GPS (Differential Global Positioning System) positioning can also be used.
  • The positioning device 41 can also include the well-known dead-reckoning device or an absolute-coordinates information receiver to acquire more precise position data.
  • The various functions and operations, such as route searching for a destination, position display, searched route display, driver guidance, etc., are performed in a manner similar to an ordinary navigation device.
  • The navigation controller 34 reads out guidance message data from the map database 40.
  • The navigation controller 34 then provides data for the voice output by making use of the voice tone data memorized in the voice tone memory 39 and finally outputs that data from the loudspeaker 38.
  • In other words, the guidance message is output from the loudspeaker 38 using the voice tone memorized in the voice tone memory 39.
  • When the outside information controller 33 receives e-mail messages sent from the outside, the outside information controller 33 first receives the messages through the mobile phone 30, the connector 31 and the modem 32. The outside information memory 37 then memorizes the messages. The outside information controller 33 updates the list of received e-mail messages memorized in the outside information memory 37. The outside information controller 33 displays the list of received e-mail messages, or the body or text of such e-mail messages, on the display device 35, and then outputs these from the loudspeaker 38 with the read-aloud voice.
  • When the outside information controller 33 outputs voice data concerning the e-mail messages, the outside information controller 33 provides data for the voice output by making use of the voice tone data memorized in the voice tone memory 39 and finally outputs that data from the loudspeaker 38.
  • When the outside information controller 33 acquires or receives news or any other information from an outside source, the outside information controller 33 performs processing in the same way as in the case of e-mail messages.
  • The voice tone memory 39 memorizes PCM data based on a male voice and also memorizes PCM data based on a female voice. If the male voice PCM data is used for or assigned to the outside information controller 33, the female voice PCM data is available for or assigned to the navigation guidance messages. Of course, the opposite association can be employed as well (i.e., the female voice PCM data can be used for or assigned to the outside information controller 33 while the male voice PCM data is used for or assigned to the navigation guidance messages). In this way, the user can understand, from the voice tone output from the loudspeaker 38, whether outside information is being read aloud or whether guidance messages associated with the navigation system are being read aloud.
  • The voice navigation messages providing voice guidance information to the driver of the vehicle are read aloud in a voice tone that is allotted to the voice navigation messages.
  • The outside information messages are read aloud in a voice tone that is allotted to the outside information messages.
  • The voice tone in which the outside information messages are read aloud differs from the voice tone in which the voice navigation messages are read aloud.
  • Fig. 11 generally illustrates a program for reading aloud the different messages in different voice tones.
  • In step S40, the program determines whether voice navigation messages are to be read aloud and, if so, the voice navigation messages are read aloud in a voice tone allotted to the voice navigation messages in step S41.
  • If it is determined in step S40 that voice navigation messages are not to be read aloud, it is determined in step S42 whether outside information messages are to be read aloud. If so, in step S43 the outside information messages are read aloud in the voice tone that is allotted to the outside information messages, with the voice tone allotted to the outside information messages being different from the voice tone allotted to the voice navigation messages, so that the voice navigation messages and the outside information messages are read aloud in different voice tones. This decision is sketched in code below.
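The Fig. 11 decision reduces to a small dispatch on the message type. The concrete tone names and the `speak` callback are illustrative assumptions; the patent only requires that the two message types use different tones.

```python
NAVIGATION_TONE = "female_pcm"     # e.g. a female voice for guidance messages
OUTSIDE_INFO_TONE = "male_pcm"     # e.g. a male voice for outside information

def read_message(message_type: str, text: str, speak) -> None:
    """Read a message aloud in the voice tone allotted to its type
    (steps S40-S43): navigation guidance and outside information are
    always spoken in different tones."""
    if message_type == "navigation":        # S40 Yes -> S41
        speak(text, NAVIGATION_TONE)
    elif message_type == "outside_info":    # S42 Yes -> S43
        speak(text, OUTSIDE_INFO_TONE)
```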
  • This embodiment is particularly effective in situations where the vehicle is approaching a guidance intersection or guidance point while the system is reading outside information aloud.
  • The voice output timing is of course important for proper navigation guidance. Even if outside information is being read out, the navigation guidance message is read out in the opposite-gender voice tone. Therefore, the user is able to discern that a different type of message is being read aloud, and the user is not likely to miss the navigation guidance messages while the outside information is being read aloud.
  • If the driver is a man, the navigation guidance message may be output in the female voice tone. If the driver is a woman, the navigation message can be output in the male voice tone.
  • The above-mentioned voice tone setting can be performed by the user through hand-operated control. If data concerning the gender of the driver has been registered or preprogrammed, the voice tone setting can be accomplished automatically after detecting the registered gender data. It is of course also possible for various voice tones (such as robot voice data or juvenile voice data) to be selected in addition to the gender (i.e., male or female) of the voice tone.
  • Fig. 6 sets forth a flowchart illustrating the processing of messages to prevent the navigation guidance messages from being read aloud at the same time as the voice output of the outside information, and to ensure that the navigation guidance messages are read aloud at the necessary time.
  • In step S23, it is determined whether or not the vehicle velocity (v) exceeds a first predetermined value (v1). The first predetermined value (v1) may be set from 10 km/h to 20 km/h, for example.
  • If the velocity does not exceed the first predetermined value, the reading aloud of the outside information message is temporarily stopped in step S24 so that the voice guidance message can be read aloud, and the reading aloud of the outside information is then restarted in step S25 after the voice guidance messages are finished.
  • Preferably, the outside information that is interrupted at step S24 is read aloud again from the beginning, because it might otherwise be difficult for the user to understand the entirety of the outside information if it were read aloud from some midway point.
  • Alternatively, the user can decide to pick up the outside information at the point of interruption by operating the operating device 22.
  • VICS (Vehicle Information Communication System)
  • If the determination in step S23 is Yes because the vehicle is traveling at a relatively high rate of speed for purposes of navigation guidance, the reading aloud of the outside information is stopped in step S26 and the navigation voice guidance message is output from the loudspeaker 38 in step S27. Then, in step S28, it is determined whether or not all voice guidance messages have been read aloud. As an alternative, this step can be replaced by a step of determining whether or not the vehicle has passed the guidance intersection. Normally, several guidance messages are prepared and output for a particular guidance intersection in order to give the driver advance guidance. For example, Fig. 7 shows an example in which a driver is given four guidance messages to negotiate a turn at a single guidance intersection. A first guidance message is output approximately 700 meters before the guidance intersection.
  • The second guidance message is output approximately 300 meters before the guidance intersection.
  • The third guidance message is output just before reaching the intersection.
  • The fourth and final guidance message is output after turning and passing through the intersection to provide the driver with information concerning the next intersection or the next road. Although the fourth guidance message is not always necessary, it may sometimes provide helpful information to the driver.
  • If not all of the voice guidance messages have been read aloud, it is determined in step S29 whether the vehicle velocity (v) is greater than a second predetermined value (v2).
  • The second predetermined value (v2) can be on the order of 0 km/h to about 5 km/h. If the vehicle velocity (v) does not exceed the predetermined value (v2) in step S29, after the earlier determination in step S23 that the vehicle speed was in excess of the first predetermined velocity (v1), thus indicating that the vehicle is now stopped or caught in a heavy traffic jam, the reading aloud of the outside information is resumed, preferably from the beginning, in step S30.
  • If it is determined in step S29 that the vehicle speed is in excess of the second predetermined velocity (v2), the outside information continues not to be read aloud in step S31 and the navigation guidance messages continue to be output in step S32. Thereafter, the process returns to step S28.
  • If the determination at step S28 is that all of the voice guidance messages have been output, the outside information may be read aloud again, preferably from the beginning, in step S33.
  • In other words, if the velocity is not relatively low (i.e., it is in excess of v1) and all of the navigation guidance messages for one intersection have not yet been read aloud, the outside information is kept from being read aloud. Therefore, while the vehicle is moving smoothly before the guidance intersection, the voice guidance messages for navigating the vehicle are output without the outside information being output simultaneously.
  • If the vehicle velocity is relatively low (i.e., below the velocity v1) or the vehicle is stopped or caught in a traffic jam, the outside information can be read aloud in the interval between successive voice guidance messages, even if not all of the navigation guidance messages have been read out. Therefore, the user does not have to wait excessively long between successive voice guidance messages to have the outside information read aloud. The overall timing policy is sketched in code below.
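The timing policy of Fig. 6 can be summarized as a single decision function. The threshold values and parameter names are assumptions, and the flowchart's between-message branching (steps S28 to S32) is collapsed into the `already_interrupted` flag for brevity; this is a sketch of the policy, not the patented flowchart itself.

```python
V1_KMH = 15.0   # first speed threshold; the text suggests 10 to 20 km/h
V2_KMH = 5.0    # second speed threshold; the text suggests 0 to about 5 km/h

def outside_info_allowed(speed_kmh: float, guidance_pending: bool,
                         guidance_playing: bool,
                         already_interrupted: bool) -> bool:
    """Decide whether the outside information may be read aloud right now.
    Guidance messages always win; once every guidance message for the
    intersection has been output, the outside information is free again
    (step S33); while guidance is still pending, it is only allowed when
    the vehicle is slow or effectively stopped (steps S23-S25, S29-S30)."""
    if guidance_playing:
        return False
    if not guidance_pending:
        return True
    threshold = V2_KMH if already_interrupted else V1_KMH
    return speed_kmh <= threshold
```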
  • The voice tone for the navigation guidance messages is different from the voice tone for reading the outside information, and so it is easy for the driver to recognize which kind of message is being read aloud.
  • The screen contents of the display device 35 can be adapted to correspond to the voice messages. For example, while the outside information is being read aloud, the outside information can be displayed on the screen. On the other hand, while the voice guidance messages for navigating the driver are being output, map data about the guidance intersection can be displayed on the display device 35.
  • The outside information controller and the navigation controller are constituted by a computer system and are controlled by a controlling program(s) in the computer system, so that the processes are generally carried out by running the predetermined program(s).
  • This program(s) can be pre-installed on the computer system, or can be supplied to the computer system through an on-line network or on some medium such as a CD-ROM, for example. In this way, the reading aloud of the different messages in different voice tones can be achieved, as can the control of the output timing of the various messages.
  • Fig. 8 shows a slightly modified version of the mobile terminal shown in Fig. 4 in which a passenger-seat loudspeaker 38b is connected to the outside information controller 33 and a driver-seat loudspeaker 38a is connected to the navigation controller 34.
  • The navigation controller 34 can use female voice tone data for outputting its guidance messages while the outside information controller 33 uses male voice tone data for reading aloud its information, for example e-mail messages, weather report information, traffic information, news information, business information, etc.
  • The voice tone data for navigating the driver may be supplied from the map database 40 instead of the voice tone memory 39. In this case, when the voice guidance timing arrives, the navigation controller 34 acquires voice tone data from the map database 40 and outputs the navigation guidance messages through the loudspeaker 38a installed near the driver's seat.
  • The outside information controller 33 can acquire voice tone data from the voice tone memory 39, formulate the necessary text (sentences) to be read aloud and then output the outside information through the loudspeaker 38b installed near the passenger's seat.
  • In this case, the map database 40 supplies the voice tone data for the navigation guidance message, while the voice tone memory 39 supplies, for example, female voice tone data for reading aloud the outside information.
  • The outside information controller 33 and the navigation controller 34 can be used to control the output of the above-mentioned navigation guidance voice and the communication messages voice, with the two controllers 33, 34 forming separate units with separate electrical circuits. It is also possible to place the outside information controller 33 and the navigation controller 34 in a single unit or case with separate electrical circuitry. Further, one electrical controller can be provided with both the outside information controlling programs and the navigation controlling programs to form a single unit with a single electrical circuit.
  • Fig. 9 shows one example of installing the mobile terminal on a vehicle.
  • The positioning device 41 includes a GPS antenna 28, which is mounted on the upper part of the instrument panel in the vehicle's cabin.
  • An electrical controller unit functioning as the navigation controller 34 can include CD-ROMs as the map database 40, and such a unit can be installed in the trunk of the vehicle.
  • The loudspeaker 38 is connected to the multimedia station 60.
  • The loudspeaker 38 is installed near the driver's seat. Another loudspeaker can also be installed on the passenger side of the vehicle.
  • The mobile phone system 72 shown in Fig. 10 is connected to the multimedia station 60 through cables 62.
  • The mobile phone system 72 is placed on a cradle device 70.
  • The speaker system and microphone system are connected to the multimedia station 60. Without having to hold the receiver, the user can make calls by placing the phone on the cradle 70.
  • Fig. 10 shows the cradle device 70 in more detail.
  • The mobile phone 72 can be placed on the surface of the cradle 70, with the cradle device 70 and the terminal of the mobile phone 72 being connected through a connector 70a.
  • Different voice tones are assigned to different messages, or to the senders or sources of different messages, to allow the user or receiver to discern between different messages or between different senders or sources of messages.
  • The system can also be adapted to detect the address data of each sender to advantageously facilitate the sorting of messages.
  • The reading aloud of different messages can be made easier to follow by selecting one voice tone from a memory to read aloud one kind of message and selecting another voice tone to read aloud another kind of message.
  • The present invention also compares the number of memorized voice tones and the number of senders of received messages, and then sequentially reads aloud messages that can be read without repetitively using the same voice tone, thus preventing one voice tone from being used repeatedly for the messages of different senders during one reading-aloud sequence.
  • The present invention can also be designed to advantageously memorize an assigned relationship between one voice tone and a sender or source who has sent a message or messages after the voice tone has been assigned to that sender or source, and then to prioritize the use of that voice tone for messages received from such sender or source.
  • In this way, the system can easily and automatically allot the same voice tone to messages received from that sender or source.
  • The present invention also provides a system which is able to read aloud both outside information messages and navigation guidance information messages, with different voice tone data being assigned to the received outside information and the navigation guidance information so that the driver can easily distinguish between the two types of information being read aloud.
  • The system memorizes different types of voice tone data (e.g., female and male voice tone data) and advantageously assigns one voice tone data to received message information and a different voice tone data to the navigation guidance information so that the two types of information are read aloud using different voice tones.
  • The system according to the present invention is further advantageous in that the output timing of the outside information and the output timing of the voice navigation guidance information are adjusted to reduce the possibility of reading aloud the two types of information at the same time, while also ensuring that the voice navigation guidance information is provided at the necessary time.

Claims (11)

  1. A message processing device comprising:
    receiving means (12) for receiving sent messages,
    a voice tone memory (21) for memorizing a plurality of different voice tones,
    allotting means (20) for allotting one of said plurality of different voice tones memorized in the voice tone memory (21) to at least one message received by said receiving means (12) and for allotting a different one of said plurality of different voice tones memorized in the voice tone memory (21) to another message received by said receiving means (12),
    reading-aloud means (19) for reading said at least one message aloud in the first voice tone and for reading said other message aloud in the different voice tone,
    characterised in that:
    said allotting means (20) comprises sorting means (13) for sorting all electronic mail messages received from outside by said receiving means (12) into groups, each group containing electronic mail messages sent from outside by a common sender, said sorting means (13) being adapted to detect the address data associated with each electronic mail message and to sort the electronic mail messages on the basis of the sender's address data,
    said allotting means (20) further comprises means configured to:
    (i) compare the number of voice tones memorized in the voice tone memory (21) with the number of groups of messages waiting to be read aloud,
    (ii) if the number of voice tones is not less than said number of groups, allot a respective one of the voice tones to each respective group so that those groups can be read aloud without repeated use of the same voice tone, or
    (iii) if the number of voice tones is less than the number of groups, (a) select a subset of the groups, the subset comprising the same number of groups as the number of voice tones, (b) allot a respective one of the voice tones to each respective group in the subset so that those groups can be read aloud without repeated use of the same voice tone, and (c) start again from step (i) after the subset of step (iii) has been read aloud, and
    said reading-aloud means is configured to read each of said respective number of groups of messages aloud sequentially using said allotted voice tone, which changes from one group to the next.
  2. A message processing device according to claim 1, wherein the allotting means (20) allots one voice tone to all messages received from the same sender, and comprising means (13) for memorizing an assigned relationship between a voice tone and a sender after the allotting means (20) has assigned a voice tone to said sender, and means (13) for preferentially using the voice tone for all subsequent messages received in the future from the same sender after said memorizing means (13) has memorized the assigned relationship between the voice tone and the sender.
  3. A message processing device according to any preceding claim, wherein the device is configured for use in a vehicle, and further comprising a navigation controller (34) for providing voice guidance information to guide a driver of the vehicle, wherein the allotting means (20) is configured to allot a voice tone to the voice guidance information that is different from the voice tone used for reading aloud the messages received from outside.
  4. A message processing device according to claim 3, further comprising means (33, 34) for controlling said reading-aloud means so as to interrupt the reading aloud of the messages received from outside and to read the guidance information aloud at a specified time.
  5. A message processing device according to claim 3, wherein said sorting means (13) sorts into groups the messages sent from outside and the guidance message associated with positioning during navigation.
  6. A message processing device according to claim 3, further comprising adjusting means (33) for adjusting an output timing of when the voice guidance information is read aloud and of when the messages received from outside are read aloud, so as to prevent the voice guidance information and said messages from being read aloud simultaneously.
  7. A message processing method, comprising the steps of:
    receiving messages sent from outside,
    allotting a voice tone to at least one of said messages and allotting a different voice tone to a different message, and
    reading said messages aloud in said allotted voice tones,
    characterised in that
    said allotting step comprises the sub-steps of:
    (a) sorting all electronic mail messages received from outside into groups, each group containing electronic mail messages sent from outside by a common sender, said sorting step comprising detecting the address data associated with each electronic mail message and sorting the electronic mail messages on the basis of the sender's address data,
    (b) comparing said number of voice tones with the number of groups of messages waiting to be read aloud,
    (c) if the number of voice tones is not less than said number of groups, allotting a respective one of the voice tones to each respective group so that those groups can be read aloud without repeated use of the same voice tone, or
    (d) if the number of voice tones is less than the number of groups, (i) selecting a subset of the groups, the subset comprising the same number of groups as the number of voice tones, (ii) allotting a respective one of the voice tones to each respective group in the subset so that those groups can be read aloud without repeated use of the same voice tone, and (iii) starting again from step (b) after the subset of step (d) has been read aloud, and
    said reading-aloud step comprises reading each of said respective number of groups of messages aloud sequentially using said allotted voice tone, which changes from one group to the next.
  8. A message processing method according to claim 7, wherein said step of allotting a voice tone to at least one message comprises allotting the voice tone to a sender of the at least one message, and comprising memorizing an assigned relationship between the voice tone and said sender after the voice tone has been allotted to the sender, and preferentially allotting said voice tone to messages received in the future from said sender after the assigned relationship between the voice tone and said sender has been memorized.
  9. A message processing method according to either of claims 7 and 8, wherein said reading step comprises reading a message sent from outside and a guidance message associated with positioning during navigation, and said sorting step comprises processing into groups the messages sent from outside and the guidance message associated with positioning during navigation.
  10. A message processing method according to any one of claims 7 to 9, further comprising:
    reading voice guidance information aloud to a driver of the vehicle to provide guidance for driving the vehicle, and
    adjusting a timing at which the outside information and the voice guidance information are read aloud so as to prevent the outside information and the voice guidance information from being read aloud at the same time.
  11. A computer-readable medium comprising a message processing program which carries out the steps according to any one of claims 7 to 10.
EP98114357A 1997-07-31 1998-07-30 Message processing system and method for processing messages Expired - Lifetime EP0901000B1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP20561597 1997-07-31
JP205615/97 1997-07-31
JP20561597 1997-07-31
JP27777597A JP3287281B2 (ja) 1997-07-31 1997-10-09 メッセージ処理装置
JP27777597 1997-10-09
JP277775/97 1997-10-09

Publications (3)

Publication Number Publication Date
EP0901000A2 EP0901000A2 (fr) 1999-03-10
EP0901000A3 EP0901000A3 (fr) 2000-06-28
EP0901000B1 true EP0901000B1 (fr) 2007-02-14

Family

ID=26515160

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98114357A Expired - Lifetime EP0901000B1 (fr) 1998-07-30 Message processing system and method for processing messages

Country Status (4)

Country Link
US (1) US6625257B1 (fr)
EP (1) EP0901000B1 (fr)
JP (1) JP3287281B2 (fr)
DE (1) DE69837064T2 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009073962A1 (fr) * 2007-12-10 2009-06-18 E-Lane Systems Inc. Système de communication embarqué dans un véhicule avec sélection des destinations pour la navigation
US8838075B2 (en) 2008-06-19 2014-09-16 Intelligent Mechatronic Systems Inc. Communication system with voice mail access and call by spelling functionality
US8856009B2 (en) 2008-03-25 2014-10-07 Intelligent Mechatronic Systems Inc. Multi-participant, mixed-initiative voice interaction system
CN105190745A (zh) * 2013-02-20 2015-12-23 谷歌公司 用于共享调适语音简档的方法和系统
US9652023B2 (en) 2008-07-24 2017-05-16 Intelligent Mechatronic Systems Inc. Power management system
US9667726B2 (en) 2009-06-27 2017-05-30 Ridetones, Inc. Vehicle internet radio interface
US9930158B2 (en) 2005-06-13 2018-03-27 Ridetones, Inc. Vehicle immersive communication system
US9978272B2 (en) 2009-11-25 2018-05-22 Ridetones, Inc Vehicle to vehicle chatting and communication system
US9976865B2 (en) 2006-07-28 2018-05-22 Ridetones, Inc. Vehicle communication system with navigation

Families Citing this family (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3224760B2 (ja) * 1997-07-10 2001-11-05 インターナショナル・ビジネス・マシーンズ・コーポレーション 音声メールシステム、音声合成装置およびこれらの方法
US6112227A (en) 1998-08-06 2000-08-29 Heiner; Jeffrey Nelson Filter-in method for reducing junk e-mail
AUPQ030299A0 (en) 1999-05-12 1999-06-03 Sharinga Networks Inc. A message processing system
JP3896728B2 (ja) * 1999-06-23 2007-03-22 トヨタ自動車株式会社 携帯型端末装置及び車載情報処理装置
DE19933318C1 (de) 1999-07-16 2001-02-01 Bayerische Motoren Werke Ag Verfahren zur drahtlosen Übertragung von Nachrichten zwischen einem fahrzeuginternen Kommunikationssystem und einem fahrzeugexternen Zentralrechner
WO2001033549A1 (fr) * 1999-11-01 2001-05-10 Matsushita Electric Industrial Co., Ltd. Dispositif et procede de lecture de messages electroniques, et support enregistre de conversion de texte
JP2001209400A (ja) * 2000-01-24 2001-08-03 Denso Corp 音声合成装置及び音声案内システム
JP2001331419A (ja) 2000-05-19 2001-11-30 Nec Corp 電子メール着信機能付き折り畳み式携帯電話機
IT1314811B1 (it) * 2000-05-24 2003-01-16 Microtek Srl Terminale di bordo per automobili, in particolare per taxi o simili
FI115868B (fi) * 2000-06-30 2005-07-29 Nokia Corp Puhesynteesi
US6801931B1 (en) 2000-07-20 2004-10-05 Ericsson Inc. System and method for personalizing electronic mail messages by rendering the messages in the voice of a predetermined speaker
GB0029022D0 (en) * 2000-11-29 2001-01-10 Hewlett Packard Co Locality-dependent presentation
DE10062379A1 (de) * 2000-12-14 2002-06-20 Siemens Ag Verfahren und System zum Umsetzen von Text in Sprache
DE10063503A1 (de) 2000-12-20 2002-07-04 Bayerische Motoren Werke Ag Vorrichtung und Verfahren zur differenzierten Sprachausgabe
JP2002207671A (ja) * 2001-01-05 2002-07-26 Nec Saitama Ltd 携帯電話機及び電子メール文章送信/再生方法
DE10117367B4 (de) * 2001-04-06 2005-08-18 Siemens Ag Verfahren und System zur automatischen Umsetzung von Text-Nachrichten in Sprach-Nachrichten
GB2378875A (en) * 2001-05-04 2003-02-19 Andrew James Marsh Annunciator for converting text messages to speech
DE10134098C2 (de) * 2001-07-13 2003-12-11 Siemens Ag Verfahren zur Voreinstellung eines Mobilfunk-Kommunikationsmodus und Fahrzeug-Mobilfunkanordnung
JP3879006B2 (ja) 2001-08-08 2007-02-07 富士通株式会社 携帯端末装置
JP3693326B2 (ja) * 2001-12-12 2005-09-07 株式会社ナビタイムジャパン 地図表示システム、音声案内支援装置、地図表示装置
JP3705215B2 (ja) 2002-01-28 2005-10-12 日産自動車株式会社 移動体用情報提示装置
DE10207875A1 (de) * 2002-02-19 2003-08-28 Deutsche Telekom Ag Parametergesteuerte Sprachsynthese
GB2389761B (en) * 2002-06-13 2006-04-26 Seiko Epson Corp A semiconductor chip for use in a mobile telephone
US7516182B2 (en) * 2002-06-18 2009-04-07 Aol Llc Practical techniques for reducing unsolicited electronic messages by identifying sender's addresses
US7460940B2 (en) 2002-10-15 2008-12-02 Volvo Technology Corporation Method and arrangement for interpreting a subjects head and eye activity
DE10254183A1 (de) * 2002-11-20 2004-06-17 Siemens Ag Verfahren zur Wiedergabe von gesendeten Textnachrichten
US7620691B1 (en) 2003-02-10 2009-11-17 Aol Llc Filtering electronic messages while permitting delivery of solicited electronics messages
DE10305658A1 (de) * 2003-02-12 2004-08-26 Robert Bosch Gmbh Informationseinrichtung, insbesondere für Fahrzeuge, sowie Verfahren zur Steuerung der Sprachwiedergabe
US7290033B1 (en) 2003-04-18 2007-10-30 America Online, Inc. Sorting electronic messages using attributes of the sender address
EP1475611B1 (fr) 2003-05-07 2007-07-11 Harman/Becker Automotive Systems GmbH Procédé et appareil d'application de sortie vocale, support de donées comprenant des donées de parole
US7590695B2 (en) 2003-05-09 2009-09-15 Aol Llc Managing electronic messages
US7069329B2 (en) * 2003-06-04 2006-06-27 Movedigital, Inc. Systems and methods for providing a volumetric-based network access
JP2005011003A (ja) * 2003-06-18 2005-01-13 Canon Inc 通信装置
JP2005031919A (ja) * 2003-07-10 2005-02-03 Ntt Docomo Inc 通信システム
US7627635B1 (en) 2003-07-28 2009-12-01 Aol Llc Managing self-addressed electronic messages
JP2005091888A (ja) * 2003-09-18 2005-04-07 Canon Inc Communication apparatus, information processing method, program, and storage medium
US7882360B2 (en) 2003-12-19 2011-02-01 Aol Inc. Community messaging lists for authorization to deliver electronic messages
US20050193130A1 (en) * 2004-01-22 2005-09-01 Mblx Llc Methods and systems for confirmation of availability of messaging account to user
US7469292B2 (en) 2004-02-11 2008-12-23 Aol Llc Managing electronic messages using contact information
US7650170B2 (en) 2004-03-01 2010-01-19 Research In Motion Limited Communications system providing automatic text-to-speech conversion features and related methods
EP1892936B1 (fr) * 2004-03-01 2010-02-10 Research In Motion Limited Mobile communication terminal with text-to-speech conversion
US11011153B2 (en) 2004-03-01 2021-05-18 Blackberry Limited Communications system providing automatic text-to-speech conversion features and related methods
US8538386B2 (en) 2004-03-01 2013-09-17 Blackberry Limited Communications system providing text-to-speech message conversion features using audio filter parameters and related methods
GB2412046A (en) 2004-03-11 2005-09-14 Seiko Epson Corp Semiconductor device having a TTS system to which is applied a voice parameter set
JP4684609B2 (ja) * 2004-09-29 2011-05-18 Clarion Co Ltd Speech synthesizer, control method, control program, and recording medium
JP3955881B2 (ja) * 2004-12-28 2007-08-08 Matsushita Electric Industrial Co Ltd Speech synthesis method and information providing device
US7650383B2 (en) 2005-03-15 2010-01-19 Aol Llc Electronic message system with federation of trusted senders
US7706510B2 (en) 2005-03-16 2010-04-27 Research In Motion System and method for personalized text-to-voice synthesis
ATE362164T1 (de) * 2005-03-16 2007-06-15 Research In Motion Ltd Method and system for personalizing text-to-speech conversion
US7647381B2 (en) 2005-04-04 2010-01-12 Aol Llc Federated challenge credit system
JP4784156B2 (ja) * 2005-05-31 2011-10-05 Kenwood Corp Speech synthesizer providing voice guidance by a plurality of characters, speech synthesis method, program therefor, and information recording medium on which the program is recorded
JP2006337403A (ja) * 2005-05-31 2006-12-14 Kenwood Corp Voice guidance device and voice guidance program
CN101223571B (zh) * 2005-07-20 2011-05-18 Matsushita Electric Industrial Co Ltd Device and method for determining portions where voice quality changes
TWI270488B (en) * 2005-12-06 2007-01-11 Sin Etke Technology Co Ltd Vehicular remote audio support service system and method
JP2007333603A (ja) * 2006-06-16 2007-12-27 Sony Corp Navigation device, navigation device control method, program for the control method, and recording medium on which the program is recorded
EP1879000A1 (fr) * 2006-07-10 2008-01-16 Harman Becker Automotive Systems GmbH Transmission of text messages by navigation systems
US7822606B2 (en) * 2006-07-14 2010-10-26 Qualcomm Incorporated Method and apparatus for generating audio information from received synthesis information
GB2444755A (en) * 2006-12-11 2008-06-18 Hutchison Whampoa Three G Ip Improved message handling for mobile devices
TW200839561A (en) * 2007-03-22 2008-10-01 Wistron Corp Method of irregular password configuration and verification
WO2009012031A1 (fr) * 2007-07-18 2009-01-22 Gm Global Technology Operations, Inc. Electronic messaging system and method for a vehicle
JP2010010978A (ja) * 2008-06-26 2010-01-14 Nec Corp Search management system, search management server, and search management method
JP5421563B2 (ja) * 2008-09-03 2014-02-19 Konami Digital Entertainment Co Ltd Game device, game device control method, and program
JP4864950B2 (ja) * 2008-09-16 2012-02-01 Pioneer Corp In-vehicle device, information communication system, in-vehicle device control method, and program
JP2010074215A (ja) * 2008-09-16 2010-04-02 Pioneer Electronic Corp Communication device, information communication system, communication control method for communication device, and program
DE102008042517A1 (de) * 2008-10-01 2010-04-08 Robert Bosch Gmbh Method for determining output times of speech signals in a vehicle
US9305288B2 (en) * 2008-12-30 2016-04-05 Ford Global Technologies, Llc System and method for provisioning electronic mail in a vehicle
US20100190439A1 (en) * 2009-01-29 2010-07-29 Ford Global Technologies, Llc Message transmission protocol for service delivery network
US20110225228A1 (en) * 2010-03-11 2011-09-15 Ford Global Technologies, Llc Method and systems for queuing messages for vehicle-related services
US8718632B2 (en) 2010-08-26 2014-05-06 Ford Global Technologies, Llc Service delivery network
KR101715381B1 (ko) * 2010-11-25 2017-03-10 Samsung Electronics Co Ltd Electronic device and control method thereof
US20120197630A1 (en) * 2011-01-28 2012-08-02 Lyons Kenton M Methods and systems to summarize a source text as a function of contextual information
GB2492753A (en) * 2011-07-06 2013-01-16 Tomtom Int Bv Reducing driver workload in relation to operation of a portable navigation device
JP2014086878A (ja) 2012-10-23 2014-05-12 Sharp Corp Self-propelled electronic device and portable terminal
JP6433641B2 (ja) 2013-04-02 2018-12-05 Clarion Co Ltd Information display device and information display method
US20170160811A1 (en) * 2014-05-28 2017-06-08 Kyocera Corporation Electronic device, control method, and storage medium
US20160064033A1 (en) * 2014-08-26 2016-03-03 Microsoft Corporation Personalized audio and/or video shows
JP6428229B2 (ja) * 2014-12-15 2018-11-28 JVC Kenwood Corp Text message voicing device, text message voicing method, and text message voicing program
RU2616888C2 (ru) * 2015-08-07 2017-04-18 Limited Liability Company "Laboratoriya Elandis" Method of performing an analog-digital signature in a trusted environment and device implementing it
US20200168222A1 (en) * 2017-08-01 2020-05-28 Sony Corporation Information processing device, information processing method, and program
DE102019204043A1 (de) * 2019-03-25 2020-10-01 Volkswagen Aktiengesellschaft Method for operating an operating device for a motor vehicle, and operating device for a motor vehicle

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0820265B2 (ja) * 1987-07-10 1996-03-04 Aisin AW Co Ltd Navigation device for vehicle
US5148153A (en) 1990-12-20 1992-09-15 Motorola Inc. Automatic screen blanking in a mobile radio data terminal
US5214793A (en) * 1991-03-15 1993-05-25 Pulse-Com Corporation Electronic billboard and vehicle traffic control communication system
JPH04316100A (ja) 1991-04-15 1992-11-06 Oki Electric Ind Co Ltd Voice guidance control system
JPH05260082A (ja) * 1992-03-13 1993-10-08 Toshiba Corp Text reading-aloud device
US5400393A (en) * 1992-06-05 1995-03-21 Phonemate, Inc. Voice mail digital telephone answering device
JP2602158B2 (ja) * 1992-12-04 1997-04-23 Equos Research Co Ltd Voice output device
JPH06276220A (ja) 1993-03-19 1994-09-30 Csk Corp Voice message system using wireless electronic mail
JPH06332822A (ja) * 1993-05-25 1994-12-02 Nec Corp E-mail reception notification device
US5983161A (en) * 1993-08-11 1999-11-09 Lemelson; Jerome H. GPS vehicle collision avoidance warning and control system and method
JPH07177236A (ja) 1993-12-21 1995-07-14 Nippon Telegr & Teleph Corp <Ntt> Integrated real-time and store-and-forward voice communication device
US5903228A (en) * 1994-05-27 1999-05-11 Sony Corporation Map information display apparatus and traveling route display apparatus and route guidance apparatus for moving body
JP2770747B2 (ja) * 1994-08-18 1998-07-02 NEC Corp Speech synthesizer
JP3344677B2 (ja) * 1995-04-13 2002-11-11 Alpine Electronics Inc In-vehicle navigation system
JPH0950286A (ja) 1995-05-29 1997-02-18 Sanyo Electric Co Ltd Speech synthesizer and recording medium used therefor
JPH098752A (ja) * 1995-06-26 1997-01-10 Matsushita Electric Ind Co Ltd Multiplex information receiver and navigation device
JP3215606B2 (ja) 1995-07-05 2001-10-09 Nippon Telegraph and Telephone Corp E-mail voice conversion service system
DE19538453A1 (de) 1995-10-16 1997-04-17 Bayerische Motoren Werke Ag Radio signal receiver for motor vehicles with an RDS decoder for digital signals
JPH09166450A (ja) 1995-12-18 1997-06-24 Sumitomo Electric Ind Ltd Navigation device
US6018710A (en) * 1996-12-13 2000-01-25 Siemens Corporate Research, Inc. Web-based interactive radio environment: WIRE
US5911129A (en) * 1996-12-13 1999-06-08 Intel Corporation Audio font used for capture and rendering
US6021181A (en) * 1997-02-24 2000-02-01 Wildfire Communications, Inc. Electronic voice mail message handling system
JPH113499A (ja) * 1997-06-10 1999-01-06 Hitachi Ltd Mobile body management system, mobile-mounted device, base station equipment, and mobile body management method
JP3224760B2 (ja) 1997-07-10 2001-11-05 International Business Machines Corp Voice mail system, speech synthesizer, and methods therefor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9930158B2 (en) 2005-06-13 2018-03-27 Ridetones, Inc. Vehicle immersive communication system
US9976865B2 (en) 2006-07-28 2018-05-22 Ridetones, Inc. Vehicle communication system with navigation
WO2009073962A1 (fr) * 2007-12-10 2009-06-18 E-Lane Systems Inc. In-vehicle communication system with destination selection for navigation
US8856009B2 (en) 2008-03-25 2014-10-07 Intelligent Mechatronic Systems Inc. Multi-participant, mixed-initiative voice interaction system
US8838075B2 (en) 2008-06-19 2014-09-16 Intelligent Mechatronic Systems Inc. Communication system with voice mail access and call by spelling functionality
US9652023B2 (en) 2008-07-24 2017-05-16 Intelligent Mechatronic Systems Inc. Power management system
US9667726B2 (en) 2009-06-27 2017-05-30 Ridetones, Inc. Vehicle internet radio interface
US9978272B2 (en) 2009-11-25 2018-05-22 Ridetones, Inc Vehicle to vehicle chatting and communication system
CN105190745A (zh) * 2013-02-20 2015-12-23 Google Inc Methods and systems for sharing adapted voice profiles
CN105190745B (zh) * 2013-02-20 2017-02-08 Google Inc Methods and apparatus for sharing adapted voice profiles

Also Published As

Publication number Publication date
JPH11102198A (ja) 1999-04-13
DE69837064T2 (de) 2007-07-05
EP0901000A2 (fr) 1999-03-10
JP3287281B2 (ja) 2002-06-04
DE69837064D1 (de) 2007-03-29
US6625257B1 (en) 2003-09-23
EP0901000A3 (fr) 2000-06-28

Similar Documents

Publication Publication Date Title
EP0901000B1 (fr) Message processing system and method for processing messages
EP1063494B1 (fr) Portable terminal and on-board information processing device
JP3198883B2 (ja) Travel schedule processing device
EP1909069B1 (fr) Intelligent destination setting for navigation systems
US6108631A (en) Input system for at least location and/or street names
US6675089B2 (en) Mobile information processing system, mobile information processing method, and storage medium storing mobile information processing program
US7286857B1 (en) Enhanced in-vehicle wireless communication system handset operation
JP3185734B2 (ja) Information terminal device
EP1860405A2 (fr) Method of setting a navigation terminal for a destination and corresponding apparatus
EP1353149A2 (fr) Navigation device with branch-dependent route guidance
US7457704B2 (en) Navigation apparatus for vehicle
CN1415939A (zh) Method for providing navigation information to a user of a navigation terminal
US6928365B2 (en) Navigation apparatus, navigation method, navigation program and recording medium storing the program
JP2000337917A (ja) Navigation device
JPH10243438A (ja) In-vehicle data communication device
JPH1188553A (ja) Information providing system and control method therefor
JP3718990B2 (ja) Navigation device
US20190258657A1 (en) Information processing device and information processing method
JPH11101652A (ja) E-mail data receiving device, e-mail host device, medium recording programs therefor, and e-mail system
JP2000193479A (ja) Navigation device and storage medium
JP2003140799A (ja) Display method for display device
JP2002365069A (ja) Navigation device and navigation system
JP2006228149A (ja) Area search device, navigation device, control method therefor, and control program
KR100454970B1 (ko) Method for searching facilities in a navigation system
JP2003021530A (ja) Method and device for searching facilities within an area by voice

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19980819

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB NL

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

RIC1 Information provided on ipc code assigned before grant

Free format text: 7G 01C 21/20 A, 7H 04L 12/58 B, 7G 06F 17/60 B, 7G 10L 7/00 B

AKX Designation fees paid

Free format text: DE FR GB NL

17Q First examination report despatched

Effective date: 20050214

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 13/04 20060101ALI20060918BHEP

Ipc: G10L 13/02 20060101ALI20060918BHEP

Ipc: H04L 12/58 20060101ALI20060918BHEP

Ipc: G01C 21/36 20060101ALI20060918BHEP

Ipc: G01C 21/20 20060101AFI20060918BHEP

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 69837064

Country of ref document: DE

Date of ref document: 20070329

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20071115

REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 20130412

REG Reference to a national code

Ref country code: DE

Ref legal event code: R084

Ref document number: 69837064

Country of ref document: DE

Effective date: 20130410

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 19

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20170613

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20170614

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20170726

Year of fee payment: 20

Ref country code: DE

Payment date: 20170725

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 69837064

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MK

Effective date: 20180729

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20180729

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20180729