WO2013029265A1 - Method and system of remote voice management in navigation system - Google Patents

Method and system of remote voice management in navigation system Download PDF

Info

Publication number
WO2013029265A1
WO2013029265A1 (Application No. PCT/CN2011/079257)
Authority
WO
WIPO (PCT)
Prior art keywords
text
mcs
graphic map
pnd
voice instruction
Prior art date
Application number
PCT/CN2011/079257
Other languages
French (fr)
Inventor
Charles Chuanming Wang
Jian Kong
Minxian ZHANG
Original Assignee
Harman International (Shanghai) Management Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Harman International (Shanghai) Management Co., Ltd.
Priority to PCT/CN2011/079257
Publication of WO2013029265A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3629 Guidance using speech or audio output, e.g. text-to-speech
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/168 Driving aids for parking, e.g. acoustic or visual feedback on parking space
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017 Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0021 Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located in the aircraft

Definitions

  • PND: portable navigation device (a handheld or portable navigation device)
  • MCS: multimedia communications system
  • MID: multimedia infotainment device
  • RN: Remote Navigation
  • PCM: Pulse Code Modulation
  • TTS: Text-To-Speech
  • RFB: Remote Framebuffer
Abstract

Various embodiments relate to methods and systems for remote voice management in a detached navigation application. The method for voice management in a remote navigation system may be performed using a portable navigation device (PND) and a multimedia communications system (MCS) in a vehicle. The method may include the steps of: assembling a text of voice instruction associated with a graphic map at the PND according to the current position of the vehicle; encapsulating the text together with the associated graphic map and communicating the same from the PND to the MCS; and receiving the encapsulated text and the associated graphic map at the MCS. The method may also include converting the text to audio data through a text-to-speech engine at the MCS; and playing back the audio data as voice instructions that are simultaneously or near simultaneously output with the associated graphic map.

Description

METHOD AND SYSTEM OF REMOTE VOICE MANAGEMENT IN NAVIGATION SYSTEM
TECHNICAL FIELD
The present invention relates to navigation apparatus with improved data transmission and, more particularly, to a method and system of remote voice management in a navigation system using a detached navigation apparatus.
BACKGROUND
It is not uncommon to find vehicles that do not have a built-in GPS or navigation system. Instead, a handheld GPS or navigation device, referred to generally as a portable navigation device (PND), may be used by a driver. Various examples of PNDs include individual portable navigation units, mobile telephones, and other handheld terminal devices having navigation functions, such as tablets, personal media players, and the like. These PNDs can be used in or out of a vehicle. When used in a vehicle, such PNDs have some limitations, such as a small display screen and poor sound quality, and accordingly can be troublesome for drivers to use.
At least one proposed arrangement for overcoming such limitations is to connect the PND to an onboard vehicle computer, such as a multimedia communications system (MCS). An MCS may have a bigger display screen and a more sophisticated sound or audio system than a PND. PNDs may be connected to a vehicle computer via a network connection or interface to provide navigation services in a car at a lower cost. The connection may be wired (e.g., Ethernet or a USB cable/stick) or wireless (e.g., WiFi or Bluetooth).
SUMMARY
When a PND is connected to a vehicle computer (e.g., an MCS), a navigation engine may run on the PND, while the audiovisual outputs of the PND, e.g., navigation graphics (such as graphic maps) and voice instructions, may be displayed and played back on a display screen/device and sound system in the vehicle. As a result, the display device and/or the speaker of the PND may be switched over to the MCS for better display and sound. Such an application is called Remote Navigation (RN).
To achieve remote playback of the navigation voice instruction originating from a detached PND, one method is to capture digitally-sampled navigation voice in PCM (Pulse Code Modulation) format on the PND and transmit the PCM code via a wired and/or wireless communication network to the MCS for playback. The typical voice sampling frequency is 44.1 kHz and each data sample requires 16 bits. Thus, the bandwidth required for transferring the digitally-sampled voice data (PCM code) burdens the in-vehicle network, which has limited bandwidth because it is also used to transfer graphical navigation maps. Further, the separately transmitted voice data and graphical data can hardly be synchronized, so a careful and complicated design may be needed to ensure the real-time behavior of a navigation application for both voice data and graphic map data.
Accordingly, in one aspect, methods and systems are disclosed for a more effective, reliable and bandwidth-efficient text-based management of remote navigation between a PND and an MCS.
Another aspect relates to a method of voice management in a remote (wireless) navigation system, which includes at least a handheld or portable navigation device (PND) and a multimedia communication system (MCS) (also referred to as a multimedia infotainment device (MID)) provided in a vehicle. The method may comprise assembling a text of voice instruction associated with a graphic map at the PND according to the current position of the vehicle, encapsulating the text of voice instruction together with the associated graphic map, and communicating (transmitting) the encapsulated text and associated graphic map from the PND to the MCS, the MCS being capable of converting the text to audio data through a text-to-speech engine and outputting the audio data from the MCS simultaneously or near simultaneously with a graphic map displayed on a display of the MCS.
Another aspect relates to a remote navigation system, which comprises at least a handheld or portable navigation device (PND) and a multimedia communication system (MCS) provided in a vehicle. The PND may include a means for assembling a text of voice instruction associated with a graphic map according to the current position of a vehicle, a means for encapsulating the text of voice instruction together with the associated graphic map, and a means for communicating (transmitting) the encapsulated text and associated graphic map to the MCS. The MCS may be configured to convert the text to audio data and output the audio data and the graphic map data simultaneously or near simultaneously.
A fourth aspect relates to a method of voice management in a portable navigation device (PND), comprising the steps of receiving a text of voice instruction and graphic map, assembling the text of voice instruction associated with the graphic map according to the current position of the PND, encapsulating the text of voice instruction together with the associated graphic map, and communicating or transmitting the encapsulated text and the associated graphic map to a multimedia communication system (MCS) provided in a vehicle for converting the text to audio data for playback at the MCS while simultaneously or near simultaneously displaying the associated graphic map on the MCS of the vehicle.
A fifth aspect relates to a portable navigation device (PND), comprising a means for receiving a text of voice instruction and a graphic map, a means for assembling the text of voice instruction associated with the graphic map according to the current position of the PND, a means for encapsulating the text of voice instruction together with the associated graphic map, and a means for communicating or transmitting the encapsulated text and the associated graphic map to a multimedia communication system (MCS) provided in a vehicle for converting the text of voice instruction to audio data for playback at the MCS while synchronously displaying the associated graphic map on the MCS.
A sixth aspect relates to a method of voice management in a multimedia communication system (MCS) equipped in a vehicle. The method comprises the steps of receiving an encapsulated text of voice instruction and an associated graphic map from a portable navigation device (PND), converting the text of voice instruction to audio data through a text-to-speech engine in the MCS, and playing back the audio data as voice instructions that are synchronously or near synchronously output with the display of the associated graphic map on the MCS of the vehicle.
A seventh aspect relates to a multimedia communication system (MCS) equipped in a vehicle. The MCS comprises a means for receiving an encapsulated text of voice instruction and an associated graphic map of navigation from a portable navigation device (PND), a means for converting the text of voice instruction to audio data, and a means for playing back the audio data as voice instructions that are synchronously or near synchronously output with the display of the associated graphic map on the MCS of the vehicle.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram showing the architecture of a voice management system according to an embodiment of the invention;
Fig. 2 shows a flowchart of method steps of voice management performed at a PND; and Fig. 3 shows a flowchart of method steps of voice management performed at an MCS.
DETAILED DESCRIPTION
Conventionally, during navigation, a navigation engine of a navigation system assembles a text string of voice instruction according to the current position of a vehicle. The text of voice instruction may include information about distance, turns, upcoming events and so on. The navigation engine may invoke a TTS (Text-To-Speech) engine to convert the text string of voice instruction into audio format for transmission and playback.
As mentioned above, it is desirable to have the audio signal of the text of voice instruction generated from the detached PND played back from a connected MCS. This may be accomplished by capturing digitally-sampled navigation voice instruction in PCM format and transmitting the PCM code via a communication network to the MCS for playback, while the communication network needs to transfer graphic data of navigation maps to the MCS at the same time. However, the in-vehicle network bandwidth is rather limited, so transmission of the PCM code representative of the voice instruction cannot be done in the desired way, i.e., in synchronization with the associated graphic data of navigation maps being transmitted via the same communication network. In that approach, the original text-based navigation voice instruction is converted into an audio or voice signal on the PND before being transmitted for playback in the vehicle. In one or more embodiments of the invention, the original text-based navigation voice instruction may instead be embedded within or encapsulated together with the associated graphic data of navigation maps. The data may be transmitted from the PND to a remote MCS using a standards-based network protocol. As such, the bandwidth required for transferring navigation voice instruction may be significantly reduced while achieving synchronization between the transmitted navigation graphic map and voice instruction data. Further details of this data exchange are described hereinafter.
Fig. 1 shows a block diagram of the server-client architecture of a remote navigation system 100 according to one embodiment. The system 100 may include at least a PND 110 as a server and an MCS 120 as a client. The PND 110 may be connected with the MCS 120 wirelessly to transfer the encapsulated text of voice instruction and graphic navigation map from the PND 110 to the MCS 120. In the MCS 120, a Text-to-Speech (TTS) engine may convert the received text of voice instruction to audio data for playback on the MCS, while the graphic navigation map is displayed on the MCS. Certainly, such encapsulated text of voice instruction and graphic navigation map can also be transmitted via a wired connection (Ethernet or USB cable, etc.) between the PND and the MCS.
As shown in Fig. 1, the PND 110 may include a navigation engine 111 for assembling a text of voice instruction associated with a graphic navigation map according to the current position of a vehicle, an API 112 for capturing and passing the assembled text, a frame-buffering module 113 for buffering the text and the associated graphic map and encapsulating them for communication, and a communication module 114 for communicating the encapsulated text and graphic map to the MCS 120.
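As a rough illustration of this PND-side pipeline, the following Python sketch mirrors the roles of the navigation engine 111, the API 112, the frame-buffering module 113 and the communication module 114. The function names, the JSON payload and the length-prefixed framing are assumptions made for the sketch only; the patent does not prescribe any particular implementation or wire format.

import json

def assemble_voice_text(position):
    # Stands in for navigation engine 111: derive the instruction text from
    # the current vehicle position (the actual routing logic is omitted here).
    return "In 200 meters, turn right"

def encapsulate(voice_text, map_frame_bytes):
    # Stands in for frame-buffering module 113: bundle the text with the
    # associated map frame so that both travel in a single package.
    package = {"voice_text": voice_text, "map_frame": map_frame_bytes.hex()}
    payload = json.dumps(package).encode("utf-8")
    return len(payload).to_bytes(4, "big") + payload  # simple length-prefixed framing

def send_update(transport, position, map_frame_bytes):
    # Stands in for API 112 plus communication module 114: capture the
    # assembled text and hand the encapsulated package to the transport in use.
    transport(encapsulate(assemble_voice_text(position), map_frame_bytes))

# Example: "transmit" into an in-memory list instead of a real network link.
sent = []
send_update(sent.append, position=(31.23, 121.47), map_frame_bytes=b"\x00" * 16)

Because the voice text rides inside the same package as the map frame, no separate audio stream has to be scheduled against the map updates.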
The MCS 120 may include a communication module 121 for receiving data packages of the encapsulated text of voice instruction and graphic navigation map from the PND 110, a frame-updating module 122 for extracting the text from the received data packages, a TTS module 123 for converting the text to audio data, a display module 125 for displaying the graphic map, and an audio module 124 for playing back the audio data to generate voice instruction output simultaneously or near simultaneously with the display of the associated graphic map. Alternatively, the API 112 may be integrated with the navigation engine 111. The PND 110 may be connected with the MCS 120 through Ethernet, WiFi, USB 2.0 or any other communication means known in the art.
With reference to Figs. 2 and 3, the method of voice management is illustrated as being performed in connection with the system 100. Fig. 2 shows a flowchart of the steps of the method 200 performed at the PND, and Fig. 3 shows a flowchart of the steps of the method 300 performed at the MCS.
Upon input of a navigation request (S201), navigation data may be received at the PND 110. The navigation engine 111 may assemble a text of navigation voice instruction associated with a graphic navigation map on the basis of the current position of the vehicle (step S202). The API 112 may collect the text and store it in a frame buffer. At step S203, the frame-buffering module 113 may retrieve the text of voice instruction and encapsulate it together with the associated graphic map. The communication module 114 of the PND 110 may then transmit the encapsulated data package to the MCS 120 (S204) according to an extended RFB (Remote Framebuffer) protocol. Subsequent text of voice instruction and the associated graphic navigation map may be assembled again at S202 as the vehicle position changes or updates.
As illustrated in Fig. 3, the communication module 121 of the MCS 120 receives (at step S301) RFB data packages, which may contain the encapsulated text of voice instruction and graphic navigation map. The text in the receiving frame buffer is routed to the TTS module 123. The TTS module 123 converts the text into audio data (S302), which is played back to generate voice at the sound or audio module 124 of the MCS 120 (S303) in synchronization with the display of the associated graphic map on the display module 125 of the MCS 120. Alternatively or additionally, the text of voice instruction can be displayed directly on the display module 125 of the MCS 120, together with the graphic navigation map.
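On the MCS side, the corresponding handling can be sketched as follows. The names and the JSON/length-prefix framing are the same assumptions used in the PND-side sketch above, and the text-to-speech and display calls are stubbed out, since the patent leaves the concrete TTS engine and rendering stack open.

import json

def decode_package(raw):
    # Stands in for communication module 121 and frame-updating module 122:
    # strip the length prefix and pull the text and the map frame back apart.
    length = int.from_bytes(raw[:4], "big")
    package = json.loads(raw[4:4 + length].decode("utf-8"))
    return package["voice_text"], bytes.fromhex(package["map_frame"])

def text_to_speech(text):
    # Stands in for TTS module 123; a real MCS would invoke its TTS engine here.
    return ("pcm-audio-for:" + text).encode("utf-8")

def handle_update(raw, play_audio, show_map):
    # Because text and map arrive in one package, converting and presenting
    # them here keeps the spoken instruction aligned with the displayed map.
    voice_text, map_frame = decode_package(raw)
    show_map(map_frame)                       # display module 125
    play_audio(text_to_speech(voice_text))    # audio module 124

# Example: decode a package built the same way as in the PND-side sketch.
sample = json.dumps({"voice_text": "In 200 meters, turn right",
                     "map_frame": (b"\x00" * 16).hex()}).encode("utf-8")
raw = len(sample).to_bytes(4, "big") + sample
handle_update(raw, play_audio=print, show_map=print)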
In one embodiment, to transfer remote voice instruction and frame buffer, an extended RFB protocol may be provided by defining new protocol messages for accessing the text of voice instruction. Accordingly, the API 112 is provided to capture the text of voice instruction assembled at the navigation engine 111, and the frame-buffering and frame-updating modules 113 and 122 are provided to transfer the text in a frame buffer based on the extended RFB protocol. The text of voice instruction may be transferred to the MCS 120 over the extended RFB protocol, via a communication network between the PND 110 and the MCS 120. Once the communication module 121 of the MCS 120 receives the text of voice instruction, it invokes the TTS module 123 to convert the text to an audio format, which may then be played back on the sound/audio module 124 of the MCS 120.
The extended RFB protocol is described briefly hereinafter by way of example. In a detached navigation application, the navigation graphic map and voice instruction are transferred synchronously from the PND to the MCS. Conventionally, between a pair of connected PND 110 and MCS 120, the content of the frame buffer in the PND is transferred by employing the standard RFB (Remote Framebuffer) protocol, which is commonly used for transferring frame buffers. The RFB protocol is a simple protocol that works at the frame-buffer level, so it is applicable to all windowing systems and applications. However, RFB has its limitations: it defines no way of accessing a remote voice buffer.
In the official RFB protocol, six types of messages are defined from the client (i.e., the MCS 120) to the server and four types of messages from the server (i.e., the PND 110) to the client. Examples of the message types from the client to the server are set-pixel-format, set-encodings, frame-buffer-update-request, key-event, pointer-event, and client-cut-text. Examples of the message types from the server to the client are frame-buffer-update, set-color-map-entries, bell, and server-cut-text. In the context of the present application, the PND may be viewed as the server, and the MCS as the client.
For example, three more types of messages may be added to the extended RFB protocol. Two of the three added messages may be defined as voice-text-update-request and voice-and-frame-buffer-update-request (from the client to the server); the other is voice-text-update (from the server to the client). A voice-text-update process consists of transferring a sequence of strings of voice text, which the client should play back one by one, in response to a voice-text-update-request or voice-and-frame-buffer-update-request message from the client.
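The patent names these three added message types but does not specify their wire layout. The Python sketch below therefore assumes an RFB-style encoding: a one-byte message type followed, for voice-text-update, by a string count and length-prefixed UTF-8 strings. The type numbers 150 to 152 are placeholders and are defined neither by the RFB specification nor by the patent.

import struct

# Hypothetical message-type numbers for the extension (not standard RFB values).
VOICE_TEXT_UPDATE_REQUEST = 150              # client -> server
VOICE_AND_FRAME_BUFFER_UPDATE_REQUEST = 151  # client -> server
VOICE_TEXT_UPDATE = 152                      # server -> client

def encode_voice_text_update(texts):
    # Server -> client: a sequence of voice-text strings the client should
    # play back one by one, each carried as a length-prefixed UTF-8 string.
    out = struct.pack("!BxH", VOICE_TEXT_UPDATE, len(texts))
    for text in texts:
        data = text.encode("utf-8")
        out += struct.pack("!I", len(data)) + data
    return out

def decode_voice_text_update(buf):
    # Client side: validate the message type, then read each string in turn.
    msg_type, count = struct.unpack_from("!BxH", buf, 0)
    assert msg_type == VOICE_TEXT_UPDATE
    offset, texts = 4, []
    for _ in range(count):
        (length,) = struct.unpack_from("!I", buf, offset)
        offset += 4
        texts.append(buf[offset:offset + length].decode("utf-8"))
        offset += length
    return texts

# Round-trip example.
message = encode_voice_text_update(["In 200 meters, turn right"])
print(decode_voice_text_update(message))  # ['In 200 meters, turn right']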
A third-party navigation solution can implement the API interfaces by integrating the related header files and library files. The navigation engine can then communicate via inter-process communication with the application on the navigation device. The application on the device assembles the text of voice instruction from the navigation engine and stores it in a frame buffer. The RFB server in the navigation device grabs the text in the frame buffer, encapsulates it in the extended RFB protocol, and transfers the encapsulated text of voice instruction and the associated graphic navigation map to the MCS.
The data communication cost for navigation voice instruction can be significantly reduced by replacing a PCM-based 44.1 kHz, 16-bit audio stream with a few text characters. For instance, a 5-second navigation voice instruction such as "In 200 meters, turn right" may require 441,000 bytes in PCM code, while the corresponding 26-character string can be transferred in 52 bytes as text. The synchronization between the frame buffer and the voice is ensured because they are transferred from the navigation terminal to the MCS in a single RFB frame. Accordingly, the bandwidth required for transferring the voice instruction is reduced significantly because it is transferred in text format, while the graphic map and the voice instruction displayed and played back on the MCS remain synchronized.
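These figures can be checked with simple arithmetic, assuming mono PCM audio and two bytes per text character, which is what the patent's numbers imply:

# 5 seconds of mono PCM at a 44.1 kHz sampling rate with 16-bit (2-byte) samples:
pcm_bytes = 5 * 44_100 * 2    # 441,000 bytes
# The same instruction as a 26-character string at 2 bytes per character:
text_bytes = 26 * 2           # 52 bytes
print(pcm_bytes, text_bytes, pcm_bytes // text_bytes)  # 441000 52 8480

In other words, the text representation is several thousand times smaller than the PCM stream it replaces.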
The aforesaid descriptions of specific embodiments or examples of the present invention are presented for purpose of illustration. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in view of the above teachings. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims

CLAIMS
What is claimed is:
1. A method for voice management in a remote navigation system including a portable navigation device (PND) and a multimedia communication system (MCS) in a vehicle, the method comprising:
assembling a text of voice instruction associated with a graphic map at the PND according to the current position of the vehicle;
encapsulating the text of voice instruction together with the associated graphic map;
communicating the encapsulated text and associated graphic map from the PND to the MCS, the MCS being capable of converting the text to audio data through a text-to-speech engine and outputting the audio data from the MCS simultaneously or near simultaneously with a graphic map displayed on a display of the MCS.
2. The method of claim 1, further comprising a step of capturing and passing the assembled text through an application program interface (API) in the PND, after the step of assembling a text of voice instruction.
3. The method of claim 2, wherein said step of capturing and passing the assembled text further comprises a step of passing the text.
4. The method of claim 2, wherein said step of capturing and passing the assembled text further comprises a step of canceling the text.
5. A remote navigation system comprising:
a portable navigation device (PND) comprising:
a means for assembling a text of voice instruction associated with a graphic map according to the current position of the vehicle; and
a means for encapsulating the text of voice instruction together with the associated graphic map; and a means for communicating the encapsulated text and associated graphic map to a vehicle multimedia communication system (MCS), the MCS being configured to convert the text to audio data and output the audio data and graphical map data simultaneously or near simultaneously.
6. The system of claim 5, further comprising means for capturing and passing the assembled text through an application program interface (API) in the PND.
7. The system of claim 6, wherein said means for capturing and passing the assembled text further comprises means for passing the text.
8. The system of claim 6, wherein said means for capturing and passing the assembled text further comprises means for canceling the text.
9. A method for voice management in a portable navigation device (PND), comprising:
receiving a text of voice instruction and a graphic map;
assembling the text of voice instruction associated with the graphic map according to the current position of the PND;
encapsulating the text of voice instruction together with the associated graphic map; and
communicating the encapsulated text and the associated graphic map to a multimedia communications system (MCS) provided in a vehicle for converting the text to audio data for playback at the MCS while simultaneously or near simultaneously displaying the associated graphic map on the MCS.
10. A portable navigation device (PND), comprising
a means for receiving a text of voice instruction and a graphic map;
a means for assembling the text of voice instruction associated with the graphic map according to the current position of the PND; a means for encapsulating the text of voice instruction together with the associated graphic map; and
a means for communicating the encapsulated text and the associated graphic map to a multimedia communications system (MCS) provided in a vehicle for converting the text of voice instruction to audio data for playback at the MCS while synchronously displaying the associated graphic map on the MCS.
11. A method for voice management in a multimedia communications system (MCS) equipped in a vehicle, comprising:
receiving an encapsulated text of voice instruction and an associated graphic map from a portable navigation device (PND);
converting the text of voice instruction to audio data through a text-to-speech engine in the MCS; and
playing back the audio data as voice instructions that are synchronously or near synchronously output with displaying the associated graphic map on the MCS of the vehicle.
12. A multimedia communications system (MCS) equipped in a vehicle, comprising a means for receiving an encapsulated text of voice instruction and an associated graphic map of navigation from a portable navigation device (PND); a means for converting the text of voice instruction to audio data; and
a means for playing back the audio data as voice instructions that are synchronously or near synchronously output with displaying the associated graphic map on the MCS of the vehicle.
PCT/CN2011/079257 2011-09-01 2011-09-01 Method and system of remote voice management in navigation system WO2013029265A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/079257 WO2013029265A1 (en) 2011-09-01 2011-09-01 Method and system of remote voice management in navigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/079257 WO2013029265A1 (en) 2011-09-01 2011-09-01 Method and system of remote voice management in navigation system

Publications (1)

Publication Number Publication Date
WO2013029265A1 (en)

Family

ID=47755220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/079257 WO2013029265A1 (en) 2011-09-01 2011-09-01 Method and system of remote voice management in navigation system

Country Status (1)

Country Link
WO (1) WO2013029265A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3016209A3 (en) * 2014-01-09 2015-07-10 Renault Sa DISPLAY SYSTEM FOR A MOTOR VEHICLE AND DISPLAY METHOD

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1596408A (en) * 2001-11-29 2005-03-16 皇家飞利浦电子股份有限公司 Intelligent information delivery system
CN1725825A (en) * 2004-07-21 2006-01-25 株式会社东芝 Digital broadcast receiving apparatus
CN1749944A (en) * 2004-09-15 2006-03-22 哈曼贝克自动系统股份有限公司 Vehicle multimedia system and method for operating a vehicle multimedia system
CN1928497A (en) * 2005-09-07 2007-03-14 上海大众汽车有限公司 Navigating instrument framework for automobiles



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11871397

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11871397

Country of ref document: EP

Kind code of ref document: A1