EP0620697A1 - Audio/video information system - Google Patents

Audio/video information system

Info

Publication number
EP0620697A1
EP0620697A1 (application EP93302701A)
Authority
EP
European Patent Office
Prior art keywords
audio
flight
information
words
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP93302701A
Other languages
German (de)
French (fr)
Inventor
Richard J. Salter, Jr.
Michael C. Sanders
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asinc Inc
Original Assignee
Asinc Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asinc Inc filed Critical Asinc Inc
Priority to EP93302701A priority Critical patent/EP0620697A1/en
Priority claimed from AU36718/93A external-priority patent/AU667347B2/en
Publication of EP0620697A1 publication Critical patent/EP0620697A1/en
Withdrawn legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 27/00 Public address systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Definitions

  • the present invention relates generally to improvements in aircraft passenger information systems and, more particularly, pertains to a new audio information system for the passengers of an aircraft. Still more specifically, the invention provides means for generating informational messages which are initially created on a ground-based computer system and transmitted up to an aircraft in flight to be converted from digital computer data to audio words and sentences and broadcast in multiple languages via the cabin audio system to the passengers.
  • a particular application for an audio information system for automatically providing spoken messages is in the aircraft and air transportation arena.
  • General information systems relating to aircraft abound in the prior art. Such general systems are utilized for a variety of purposes, such as tracking and analyzing information relating to air traffic control, displaying information on flights to provide for advanced planning and scheduling, and monitoring ground traffic at an airport.
  • U.S. Patent No. 4,975,696 (Salter, Jr. et al.)
  • Copending U.S. application Serial No. 07/763,370
  • such systems are typically used for the administering of aircraft traffic.
  • In U.S. Patent No. 4,975,696, an electronics package connecting the airborne electronics of a passenger aircraft to the passenger visual display system of the aircraft was disclosed.
  • the electronics package provides passengers with a variety of real-time video displays of flight information, such as ground speed, outside air temperature, or altitude.
  • Other information displayed by the electronics package includes a map of the area over which the aircraft flies, as well as destination information, such as a chart of the destination terminal including aircraft gates, baggage claims areas, and connecting flight information listings.
  • the electronics system of U.S. patent application Serial No. 07/763,370 displays flight information automatically tailored to the phases of flight of the aircraft.
  • the invention provides an information system for generating spoken audio messages incorporating real-time, i.e., "variable," input data by assembling digitized spoken words corresponding to the input data into complete messages or sentences.
  • Each sentence to be assembled includes a framework of fixed digitized words and phrases, into which variable digitized words are inserted.
  • the particular digitized variable words which correspond to the specific input data are retrieved from digital computer memory.
  • All anticipated input parameters are stored as digitized spoken words such that, during operation of the system, appropriate spoken words corresponding to the input data can be retrieved and inserted into the framework of the sentence. In this manner, a complete natural-sounding spoken message which conveys the input data is automatically generated for broadcast.
  • the system includes a memory means for storing digitized spoken words, a receiver for receiving input data, and a data processor.
  • the data processor means includes a retrieval means for retrieving selected digitized words corresponding to the input data and a message assembly means for assembling the retrieved words into audio messages.
  • the data processor means includes means for selecting digitized forms of the words having the proper inflection for inclusion in the spoken sentence, such that a natural-sounding spoken sentence is achieved.
  • the various digitized words and phrases may be recorded in a variety of languages, such that a spoken message may be generated in any of a variety of different languages.
  • the audio information system is mounted aboard a passenger aircraft for automatically generating informative messages for broadcast to the passengers of the aircraft.
  • the system includes a receiver for receiving flight information from the on-board navigation systems of the aircraft and from ground-based transmitters.
  • the input flight information such as the location of the aircraft or the travel time to destination, is automatically communicated to the passengers in the form of natural-sounding spoken sentences.
  • the system may also generate audio messages identifying points of interest in the vicinity of the aircraft.
  • the system generates spoken messages describing destination terminal information received from a ground-based transmitter including connecting gates and baggage claim areas.
  • the system assembles audio messages incorporating the destination terminal information received from the ground and broadcasts the assembled messages to the passengers.
  • the system is alternatively configured to simultaneously provide the destination terminal information in both video and audio form.
  • the invention provides audio messages to aircraft passengers wherein the messages are tailored to the phases of flight of the aircraft.
  • the system includes data processor means utilizing received flight information for determining a current phase of the flight plan and for inputting information corresponding to the current phase of the flight plan to the audio system for broadcast to the passengers.
  • a wide variety of informative spoken messages may be automatically provided to the passengers, with the content of the messages tailored to the various phases of flight of the aircraft.
  • the system may automatically generate one set of spoken messages during the takeoff phase of the flight of the aircraft, and a separate set of messages during the en route cruise phase of the aircraft.
  • the messages are automatically generated by the system in response to input flight information which is received by the system.
  • a spoken message assembler system 200 receives input information in the form of digital alphanumeric data and generates natural-sounding spoken sentences which recite the received data for output to a listening audience through a speaker system, perhaps a public address (PA) system.
  • message assembler 200 includes hundreds or thousands of digitized words and phrases covering all anticipated words which may be required to create sentences reciting the input data.
  • the words and phrases are prerecorded from a human voice in a digitized format and stored in computer ROM.
  • Message assembler 200 assembles sentences by retrieving appropriate digitized words and phrases and assembling the words and phrases into proper syntactic sentences.
  • some of the words and phrases are stored in a number of digitized forms, each having a different inflection, such that the assembled sentence has proper inflection in accordance with natural speech.
  • input information in the form of digital data can be communicated to a listening audience in the form of natural-sounding spoken sentences.
  • the input data is received and the spoken sentences are generated and broadcast entirely automatically without the need for a human operator or human speaker.
  • the spoken message assembler is employed within an audio/video information system for use in the passenger compartment of an aircraft.
  • the message assembler receives flight information such as ground speed, outside air temperature, destination terminal, connecting gate, or baggage claim area information.
  • the message assembler then constructs natural-sounding sentences for broadcasting the flight information to the passengers in the aircraft.
  • the spoken messages may be broadcast over a public address system of the aircraft for all passengers to hear, or may be broadcast over individual passenger headphone sets.
  • the spoken message assembler may be configured to generate sentences in a variety of different languages for either sequential broadcast or simultaneous broadcast over multiple channels.
  • the spoken message assembler of the system thus provides a wide range of useful and informative information to the passengers, while freeing the flight crew from having to provide the information to the passengers.
  • the system may additionally include a video display system for simultaneously displaying the flight information over a video screen or the like.
  • the message assembler of the invention is ideally suited for any application benefitting from the automatic communication of input data to a listening audience.
  • Figure 1 provides a flow chart illustrating the operation of message assembler 200.
  • the message assembler receives an input sentence over a data line 201 in a digital alphanumeric format suitable for input and manipulation by a computer or similar data processing device.
  • the data is received within a sentence format having specific data fields.
  • one data field of the input sentence may provide the time of day.
  • an alphanumeric sequence is received which provides the time of day, e.g., "12:32PM.”
  • a separate data field may provide a destination city for an aircraft flight, e.g., "Los Angeles.”
  • Message assembler 200 may be preprogrammed to receive any of a number of suitable data formats. Any format is suitable so long as the variable data is received within preselected fields such that the message assembler can determine the type of data contained within the received message.
  • For each type of data, message assembler 200 stores all possible instances of the data type in a digitized spoken form in a mass storage device 211. For the example of destination cities, the message assembler stores the names of all cities that the airline flies into or out of in digitized spoken form. Thus, the message assembler stores the words "New York," "Los Angeles," "Chicago," etc. in ROM.
  • For data types requiring numbers, such as the time of day, message assembler 200 stores all necessary component numbers in digitized form. To recite the time "12:10," message assembler 200 retrieves and combines the words "twelve" and "ten." To recite the time "1:57," message assembler 200 retrieves and combines the words "one," "fifty," and "seven." To handle any input time of day, message assembler 200 need only store the component numbers 0-9 and the tens 10, 20, ... 50 in digitized form. The numbers 1-10 are assembled either as "one" or as "oh-one," etc., to allow the handling of both hour and minute values between 1 and 10.
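The component-number scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the patent; the function names and word tables are assumptions.

```python
# Hypothetical sketch of the component-number scheme: a time such as
# "1:57" is decomposed into stored component words ("one", "fifty",
# "seven"), and minute values below ten receive an "oh" prefix.

ONES = ["zero", "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine"]
TENS = {10: "ten", 20: "twenty", 30: "thirty", 40: "forty", 50: "fifty"}
TEENS = {11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
         15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
         19: "nineteen"}

def minute_words(n):
    """Decompose a 0-59 value into its stored component number words."""
    if n < 10:
        return [ONES[n]]
    if n in TEENS:
        return [TEENS[n]]
    tens, ones = (n // 10) * 10, n % 10
    words = [TENS[tens]]
    if ones:
        words.append(ONES[ones])
    return words

def time_words(hhmm):
    """Turn 'H:MM' into the word sequence the assembler would retrieve."""
    hour, minute = (int(p) for p in hhmm.split(":"))
    words = minute_words(hour)
    if minute == 0:
        return words
    if minute < 10:
        words.append("oh")  # "oh-one" style for minutes 1-9
    return words + minute_words(minute)
```

With only these few component words stored, any time of day can be recited, which is the space-saving point the passage makes.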
  • the message assembler stores the various possible instances of the various possible data types that may be received within an incoming message.
  • the specific data fields that are employed and the specific instances of the data stored for each data field are configurable parameters of the system.
  • a digitized data base can be constructed to provide for almost any type of information, the system is preferably employed where a limited number of types of information must be conveyed to a listening audience, especially where each type of information has a fairly limited range of possible instances. In such case, the total number of digitized spoken words that must be stored in ROM is fairly limited. A system requiring a greater number of digitized words may be implemented using a computer with a greater amount of ROM.
  • An exemplary input sentence format received by the system, at step 202, is provided in Table I.
  • the exemplary sentence format of Table I provides the departure gate number and departing time for particular departing flights.
  • the input sentence of Table I provides a framework for communicating the departing flight's airline name and flight number and the departing flight's gate number and departure time, along with the destination city and destination airport terminal.
  • As shown in Table I, an input sentence includes a framework of fixed words interlaced with variable words (shown in parentheses).
  • the fixed words are "flight,” "will depart,” “from,” “terminal,” “gate,” and “at.”
  • the variable data for inclusion within the sentence include the airline name, the flight number, the city name, the terminal name, the gate number, the departure time, and either "AM” or "PM” appended to the departure time.
  • Each unit of the sentence, comprising either a single fixed or variable word or a fixed or variable phrase, is denoted by a position number.
  • the variable "airline name" is identified as position 1.
  • the fixed word "flight” is identified as position 2.
  • each fixed or variable data unit within the input sentence is represented by a unique number.
  • the system examines the first position within the input sentence, initially position 1.
  • the system determines whether position 1 corresponds to a fixed word or a variable word. Continuing with the example of Table I, position 1 requires a variable word. Accordingly, the system proceeds to step 210 to retrieve the digitized variable word from the data base of the system, which corresponds to the input airline name to be included at position 1.
  • the data base of variable digitized words is set up to include the names of currently operating airlines, with the names digitized from a recording of the spoken airline name.
  • the data base may include, for example, "ABC Airlines” or "XYZ Airlines” in digitized form.
  • the system examines the received message for an alphanumeric representation of the airline name, then, based on the alphanumeric, retrieves the corresponding digitized spoken name from the system's data base. Once retrieved, the digitized data providing the spoken airline name is immediately broadcast to the passengers. Alternatively, the digitized spoken airline name may be transferred to a temporary memory unit (not shown in Figure 1) of the system for subsequent broadcast. In Figure 1, the broadcast step is identified by reference numeral 212.
  • the system determines whether the final position of the sentence format has been processed. If not, the system increments a position pointer and returns along flow line 216 to process the next position within the sentence format. Thus, in the example of Table I, the system returns to process position 2.
  • the system determines that position 2 requires a fixed digitized word. Hence, the system proceeds to step 218 to retrieve the fixed word designated by the sentence format. In this case, the fixed word is "flight." The system thus retrieves digitized data representing the spoken word "flight" from the data base and broadcasts the retrieved word.
  • the system returns along data flow line 216 to process a new position within the sentence format.
  • the next position, position 3, calls for a variable word setting forth the flight number.
  • the system proceeds to step 210, wherein the system retrieves the digitized data setting forth the spoken flight number corresponding to the alphanumeric flight number designation received in the input message.
  • the system retrieves digitized data providing the spoken words "ten,” "fifty,” and "nine.”
  • the system maintains a "number" data base which stores spoken numbers for use with any data type requiring numbers, such as flight number, gate number, baggage claim area, and departure time.
  • the digitized spoken words “ten,” “fifty,” and “nine” are retrieved in circumstances requiring that the number “1059” be spoken, such as if the departing gate number is “1059,” the departure time is “10:59,” or the baggage claim area is “1059.”
  • the numbers are preferably stored in a variety of different styles and inflections to allow natural-sounding numbers to be recited in any circumstances.
  • the system proceeds to the next position wherein the system retrieves the fixed digitized words "will depart.” Execution continues, during which time the system processes each successive position within the sentence format. At each position, the appropriate variable or fixed digitized words are retrieved from the data base memory and immediately broadcast. Execution proceeds at a sufficient speed such that the words are broadcast one after the other in close succession to produce a natural-sounding sentence.
  • the assembled sentence is thereby "spoken" in the same manner in which a conventional compact disc system broadcasts words or music; that is, the digitized words are "played” in succession. Appropriate pauses may be included between words within the sentence to ensure a natural sentence flow.
  • the resulting "spoken" sentence might be "XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One, gate twenty-three at twelve forty-seven PM.”
  • the sentence is broadcast by means described below to the passengers in the aircraft, who thereby hear a natural-sounding sentence as if spoken by a member of the flight crew.
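The position loop of Figure 1 might be sketched as follows. The tuples, field names, and the `lookup` stand-in for the digitized-word data base are all invented for illustration; the real system plays digitized audio rather than returning strings.

```python
# Illustrative sketch of the Figure 1 position loop: a sentence format
# is a list of positions, each either a fixed word or a variable field
# to be filled in from the received message.

FIXED = "fixed"
VAR = "variable"

# Framework for the Table I departure sentence (invented field names).
DEPARTURE_FORMAT = [
    (VAR, "airline"), (FIXED, "flight"), (VAR, "flight_number"),
    (FIXED, "will depart"), (VAR, "city"), (FIXED, "from"),
    (FIXED, "terminal"), (VAR, "terminal"), (FIXED, "gate"),
    (VAR, "gate_number"), (FIXED, "at"), (VAR, "time"), (VAR, "am_pm"),
]

def assemble(sentence_format, message, lookup):
    """Walk each position in order; emit the fixed word (step 218) or
    look up the digitized variable word for the input data (step 210)."""
    words = []
    for kind, value in sentence_format:
        if kind == FIXED:
            words.append(value)
        else:
            words.append(lookup(message[value]))
    return words

# Identity lookup stands in for retrieving digitized spoken words.
spoken = assemble(
    DEPARTURE_FORMAT,
    {"airline": "XYZ Airlines", "flight_number": "ten fifty-nine",
     "city": "Chicago", "terminal": "One", "gate_number": "twenty-three",
     "time": "twelve forty-seven", "am_pm": "PM"},
    lookup=lambda x: x,
)
```

Played back in close succession, the retrieved words form the example sentence quoted above.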
  • the system returns to step 202 to receive and process a new message.
  • the new message may provide the departing flight information for a different airline flight.
  • an incoming message will provide the departing flight information for many connecting flights, perhaps 10-20 such flights.
  • the system will reexecute the steps shown in Figure 1 a number of times to process the input data corresponding to each of the connecting flights, to thereby generate sentences reciting all of the connecting gate information.
  • the retrieved words are stored in a temporary memory for later broadcast.
  • a system might include parallel processing capability such that, while a first sentence is being broadcast from temporary memory, a second sentence is being assembled.
  • the system waits to receive a new message.
  • the new message may set forth different types of information within a different sentence format.
  • the system will receive numerous input sentence formats to allow the system to broadcast a wide variety of natural-sounding sentences conveying a wide variety of possible input data.
  • the message assembler shown in Figure 1 is advantageously employed in any environment where variable input information must be communicated to a listening audience over an audio system.
  • the system is advantageously employed wherever input data to be broadcast falls within a finite number of data types, each having a range of anticipated values which may be stored in digitized spoken form in a data base.
  • a natural-sounding sentence is composed of words of differing inflections. Automatically-generated sentences which do not use the proper inflection for component words may sound artificial or metallic. Accordingly, to assemble a natural-sounding sentence from digitized words, the proper inflection for the component words is preferably determined.
  • only "number" words, i.e., words used to recite numeric strings, are stored under all three inflection forms (rapidly rising, slowly rising, and falling). It has been found that input sentence formats may be selected wherein all other words need be stored under only one inflection to achieve sufficiently natural-sounding sentences. For example, the word "and" need only be stored under the slowly rising inflection form because the word "and" will always appear in mid-sentence, not followed closely by another word.
  • Numbers are stored under all three inflections, since numbers may appear in a variety of positions within a sentence or at the end of a sentence. For example, the number string "1024" may appear in the middle of a sentence followed closely by another word: "Flight 1024A will depart from gate 15.” Alternatively, the number string “1024" may appear in the middle of a sentence not followed closely by another word: “Flight 1024 will depart from gate 15.” Finally, the string "1024" may appear at the end of a sentence: "Flight 15 will depart from gate 1024.” Thus, all numbers are stored under all three inflection forms such that the proper inflection form can be retrieved depending upon the position of the number within the sentence.
  • the numeric string "1024" is actually composed of three component numbers: "ten,” “twenty,” and “four.”
  • the system processes the inflection of each of the individual component words separately.
  • the word “ten” is followed closely by the word “twenty” and the word “twenty” is followed closely by the word “four.”
  • the words “ten” and “twenty” both have a rapidly rising inflection, regardless of the position of "1024" in the sentence.
  • only the word "four” will have a slowly rising, rapidly rising, or falling inflection, depending upon the location of the number "1024" within the sentence.
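The per-word inflection rule described above can be sketched as follows; the form names and function are hypothetical, but the logic follows the text: every component word except the last takes the rapidly rising form, and the last word's form depends on where the numeric string sits in the sentence.

```python
# Assumed sketch of inflection assignment for a parsed number string
# such as ["ten", "twenty", "four"] for "1024".

RAPID_RISE, SLOW_RISE, FALLING = "rapid-rise", "slow-rise", "falling"

def inflections(component_words, at_sentence_end):
    """Assign an inflection form to each component number word:
    all but the last are followed closely by another word (rapid rise);
    the last is falling at sentence end, else slowly rising."""
    forms = [RAPID_RISE] * (len(component_words) - 1)
    forms.append(FALLING if at_sentence_end else SLOW_RISE)
    return list(zip(component_words, forms))
```

The retrieval step would then fetch each word's digitized recording under the assigned form.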
  • the system also selects a proper style for reciting numbers.
  • the system characterizes numbers according to one of two general numeric styles. In the first, “short” style, the words “hundreds” or “thousands” are not spoken. For example, in the short style, the number “1024" is spoken as “ten twenty-four.” In a “long” numeric style, the words “hundreds” or “thousands” are inserted. For example, the number "1024" is recited as "one thousand twenty-four.”
  • the short style is used for reciting gate numbers, flight numbers, baggage claim areas, and the like.
  • the long style is used for reciting altitudes, distances, temperatures, and the like.
  • “flight 1024” is recited as “flight ten twenty-four”
  • “1024 feet” is recited as “one thousand twenty-four feet.”
  • the message assembler determines the proper numeric style and retrieves the digitized words appropriate to the selected numeric style.
  • the system retrieves the individual words “flight,” “ten,” “twenty,” and “four” from the digitized word data base for playback in succession.
  • the system retrieves the individual digitized words “one,” “thousand,” “twenty,” “four,” and “feet.”
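The two numeric styles might be decomposed as below. These helpers are illustrative assumptions, covering only numbers up to four digits as in the "1024" example.

```python
# Sketch of the short and long numeric styles: short style pairs digit
# groups ("ten twenty-four"); long style inserts "thousand"/"hundred"
# ("one thousand twenty-four").

def short_style(n):
    """Split into two 2-digit groups, e.g. 1024 -> [10, 24]."""
    return [n // 100, n % 100] if n >= 100 else [n]

def long_style(n):
    """Decompose into thousands/hundreds plus remainder,
    e.g. 1024 -> [1, 'thousand', 24]."""
    parts = []
    if n >= 1000:
        parts += [n // 1000, "thousand"]
        n %= 1000
    if n >= 100:
        parts += [n // 100, "hundred"]
        n %= 100
    if n:
        parts.append(n)
    return parts
```

Each resulting component would then be looked up in the "number" data base for playback.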
  • A method by which the invention accounts for numeric style and numeric inflection to generate natural-sounding spoken numbers is shown in Figure 2.
  • the steps of Figure 2 are executed as a part of the execution of step 210 of Figure 1. However, the steps of Figure 2 are executed only for processing alphanumeric strings which include numbers. Thus, other variable words, such as destination cities, e.g., "Los Angeles," are not processed using the procedure of Figure 2.
  • For alphanumeric strings with numbers, the system, at step 250, initially extracts all numeric strings from the input alphanumeric character string. Thus, for the input string "1024A," the system extracts "1024"; for the string "10B24," the system extracts the number strings "10" and "24." An input character string may thus contain one or more numeric strings. For each extracted numeric string, the system, at step 252, determines the proper numeric style. Thus, if the numeric string is "1024," the system determines whether it should be recited in the long style or the short style. This determination is made from an examination of the data type of the input character string: for each numeric data type, the system stores an indicator of the corresponding style.
  • If the data type is a "flight number," then the short style is used. If the data type for the input character string is an altitude, then the long style is selected.
  • the proper data type may be determined from the location of the character string within the input data block. Alternatively, the data block may include headers immediately prior to each data type, designating the data type.
  • the system parses the numeric string into its component numbers according to the selected numeric style.
  • "1024" is parsed as "1000" and "24” for the long numeric style, and "10" and "24” for the short numeric style.
  • the system assembles a word equivalent of the alphanumeric string which includes any parsed numeric strings, as well as any letters or other characters.
  • the system determines the inflection of all component numbers included within the word equivalent of the alphanumeric string. To this end, the system examines each "number" word within the string to determine whether the word is positioned in the middle of the string or at the end of the string. If in the middle, then the rapidly rising inflection form is chosen. If the "number" word occurs at the end of the string, then the system must determine what words, if any, follow the alphanumeric string.
  • If the alphanumeric string constitutes the final portion of a sentence, a "number" word at the end of the string falls at the end of the sentence, and the falling inflection is chosen. If, on the other hand, the alphanumeric string is positioned in the middle of a sentence, then a "number" word falling at the end of the string is assigned the slowly rising inflection.
  • Following step 258, the system is ready to retrieve the digitized spoken words corresponding to all components of the word equivalent of the alphanumeric string. This retrieval is accomplished at step 260. Processing then continues at step 212 of Figure 1, which broadcasts the retrieved words. As the sentence is broadcast to the passengers, numbers recited within the sentence are thereby spoken in the proper style and with the proper inflection.
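The extraction step at the start of the Figure 2 procedure could look like this in outline. This is a regular-expression sketch, not the patent's implementation.

```python
import re

# Sketch of step 250: pull every numeric run out of an input
# alphanumeric string, keeping letter runs as separate units.

def split_units(s):
    """'1024A' -> ['1024', 'A']; '10B24' -> ['10', 'B', '24']."""
    return re.findall(r"\d+|[A-Za-z]+", s)

def numeric_strings(s):
    """Only the numeric runs, as in the extraction step."""
    return re.findall(r"\d+", s)
```

Each extracted numeric string would then be styled, parsed, and inflected as the subsequent steps describe, while letter units pass straight through to word lookup.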
  • the system shown in Figures 1 and 2 may be configured to assemble sentences in any of a variety of languages.
  • the data base of digitized words must include the necessary foreign words and phrases.
  • each different language has different sentence formats. For example, for a German sentence, the sentence format may have the fixed verb of the sentence at the end of the sentence format, rather than near the beginning of the sentence format as commonly found in English sentences.
  • Each alternative language may be handled by a separate microprocessor device.
  • a single microprocessor device may sequentially process all languages.
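Per-language sentence formats could be represented as data, so one assembler body serves every language while each language orders its fixed words differently. The table and `render` helper below are purely illustrative; the format contents are invented.

```python
# Illustrative only: language-specific sentence formats, with the fixed
# German verb placed at the end of the format, as the text describes.
FORMATS = {
    "en": [("fixed", "flight"), ("var", "number"),
           ("fixed", "will depart from gate"), ("var", "gate")],
    "de": [("fixed", "Flug"), ("var", "number"),
           ("fixed", "wird von Gate"), ("var", "gate"),
           ("fixed", "abfliegen")],  # fixed verb at sentence end
}

def render(lang, values):
    """Assemble the word sequence for one language's sentence format."""
    return " ".join(word if kind == "fixed" else values[word]
                    for kind, word in FORMATS[lang])
```

With formats held as data, either one processor iterating over languages or one processor per language can drive the same assembly code.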
  • the spoken message assembler described above is implemented within an on-board flight information system for providing flight information to airline passengers.
  • the system provides connecting gate and baggage claim area information.
  • the system provides flight information such as air speed, altitude, and information regarding points of interest over which the aircraft travels. This information may be tailored to the various phases of flight of the aircraft.
  • a data processor 13 receives messages containing flight information over a data bus 59 from various systems of the aircraft. Examples of such systems include an ACARS receiver 19, a navigation system 15, an aircraft air data system 17, and a maintenance computer 21. Each of these systems, from which information is received, is entirely conventional and will not be described in detail. Data processor 13 may be connected to any one or a multiple of these systems depending on the type of information desired to be displayed to the passengers of the aircraft. Data processor 13 may be controlled by a control unit 22, which includes various means for allowing for manual activation of the data processor and control over the functions of the data processor.
  • Data processor 13 generates audio messages using the message assembler described above and transmits the audio messages in the form of audio signals over an audio link line 91 to an audio selector unit 92 that routes the audio signal to a plurality of conventional audio systems.
  • the audio signals may be transmitted over a link line 93 to a public address speaker 95 in the passenger compartment of the aircraft or over link line 97 to a plurality of individual passenger headphone sets 96 via individual multichannel selectors 94.
  • the data processor may also generate video display screens which set forth the data incorporated in the audio messages.
  • the video display screens are output as a video signal and transmitted over a video link line 31 to a conventional video selector unit 29 that routes the video signal to a plurality of conventional video display systems.
  • the video signal may be transmitted over link lines 39 to a preview monitor 33, or over link lines 43 to a video monitor 37, or over link lines 41 to a video projector 35, which projects the sequences of video screens received onto a video screen 45.
  • Message assembler 200 and its data base of digitized words and phrases are components of data processor 13 and, hence, are not shown separately in Figure 3.
  • In Figure 3, a conventional ACARS/AIRCOM/SITA receiver 19 is shown. This receiver receives connecting gate and baggage claim area information from an airline central computer 47 via a transmitting antenna 51 over carrier waves 53. A link line 49 connects airline computer 47 to transmitting antenna 51.
  • any transmitter/receiver system could be used, including a satellite communication system, and this invention is not limited to the ACARS system referred to herein.
  • Destination airport information may also be entered into the system via an optional data entry terminal (not shown).
  • In order for data processor 13 to promptly process the information received, the data is assumed to be in a specific fixed format when it is received from ACARS receiver 19.
  • the format illustrated in Table II is an example of a possible format for up-linked data:
  • the data format contains strings of characters which are utilized by data processor 13 to generate audio messages and optional video displays.
  • Exemplary strings are the flight number string “966,” the destination airport string “Frankfurt,” the arrival gate string “17,” and the baggage claim area string “C.”
  • For audio messages, relevant data is extracted from the strings and incorporated into audio messages via message assembler 200.
  • For video displays, these strings are used both to retrieve an airport chart representing the destination airport and for direct inclusion in video displays.
  • the following spoken audio messages may automatically be generated: "Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C.” "Air France flight eight forty one will be departing for Paris from terminal A gate ten at twelve fifteen.” "Lufthansa flight five oh two will be departing for Hamburg from terminal B gate five at twelve thirty.” “Swissair flight sixty five will be departing for Zurich from terminal B gate two at twelve thirty five.”
  • the data processor utilizes the message assembler, described above, to extract relevant data and to assemble messages reciting the data.
  • the message assembler extracts the variable data "Lufthansa,” “966,” “Frankfurt,” “11:45,” “A,” “17,” and “C” for incorporation into a sentence having fixed words “flight,” “arriving in,” “at,” “terminal,” “gate number,” and “baggage claim area.”
  • the message processor retrieves spoken word equivalents of the alphanumeric data extracted from the message in the manner described above.
  • the numbers “966,” “11:45,” and “17” contained within the flight number, arrival time, and arrival gate may be processed according to the inflection and style manipulation procedure described above with reference to Figure 2.
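By way of illustration only (the original disclosure describes the Figure 2 inflection procedure in prose, not code), the idea of selecting differently-inflected digitized forms of a digit depending on its position in a number might be sketched as follows. The form names "sustained" and "falling" and the word table are assumptions, and text strings stand in for the prerecorded digitized audio:

```python
# Hypothetical sketch: digits recited mid-number use a sustained inflection,
# while the final digit uses a falling inflection, so an assembled number
# such as "966" sounds like natural speech rather than a flat digit list.
# The inflection names and word table are illustrative assumptions.

DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
               "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def speak_digits(number):
    """Return (word, inflection) pairs for a digit-by-digit recitation."""
    digits = list(number)
    last = len(digits) - 1
    return [(DIGIT_WORDS[d], "falling" if i == last else "sustained")
            for i, d in enumerate(digits)]

recited = speak_digits("966")
# -> [("nine", "sustained"), ("six", "sustained"), ("six", "falling")]
```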
  • the message assembler extracts the various fixed and variable words from the input message, retrieves spoken word equivalents for these alphanumeric values, and broadcasts the spoken word equivalents in succession to produce complete sentences.
  • a total of four different audio messages are thereby generated from the data contained within the data block of Table II.
  • the four messages are generated by executing the steps of Figure 1 a total of four times. Once completed, the system waits until a new input message is received.
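The extraction-and-assembly step just described might be sketched as follows. This is an illustration only, not part of the original disclosure: the field names are assumptions, and plain text strings stand in for the digitized spoken words that the actual system retrieves and broadcasts:

```python
# Hypothetical spoken-word equivalents for variable data (stands in for the
# digitized-audio data base; keys and values are illustrative assumptions).
SPOKEN = {
    "966": "nine six six",
    "11:45": "eleven forty five A M",
    "17": "seventeen",
}

def assemble_arrival_message(fields):
    """Insert variable words into a fixed sentence framework."""
    return (
        f'{fields["airline"]} flight {SPOKEN.get(fields["flight"], fields["flight"])} '
        f'arriving in {fields["city"]} at {SPOKEN.get(fields["time"], fields["time"])}, '
        f'terminal {fields["terminal"]}, gate number {SPOKEN.get(fields["gate"], fields["gate"])}, '
        f'baggage claim area {fields["claim"]}.'
    )

message = assemble_arrival_message({
    "airline": "Lufthansa", "flight": "966", "city": "Frankfurt",
    "time": "11:45", "terminal": "A", "gate": "17", "claim": "C",
})
# Reproduces the first of the four spoken messages quoted above.
```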
  • input messages may provide flight information such as altitude, ground speed, outside air temperature, time or distance to destination, time or distance from destination, etc.
  • weather-related messages may be received and processed, such as messages describing the temperature and weather conditions at the destination airport.
  • weather conditions within the vicinity of the aircraft may be described, including wind speed, visibility, ceiling, etc.
  • Messages providing marine-related information may be provided. For example, messages specifying the surf, tide, and marine visibility may be provided.
  • any input message can be processed so long as each of the component words for inclusion in the sentence is stored in the digitized memory of the system.
  • custom messages may be typed into a ground-based computer, then transmitted to the aircraft for conversion to a spoken audio message.
  • the variety of possible messages is limited only by the number of digitized words stored in the digitized memory of the system. Accordingly, by providing a system with a larger vocabulary of digitized words, a wider range of audio messages can be generated.
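The vocabulary constraint stated above, namely that a message can be assembled only if every component word is stored in the digitized memory, might be checked as in the following sketch. The vocabulary contents and function name are illustrative assumptions:

```python
# Hypothetical check: a candidate sentence is assemblable only if every one
# of its component words exists in the stored digitized vocabulary.
VOCABULARY = {"flight", "arriving", "in", "frankfurt", "gate", "seventeen"}

def can_assemble(words):
    """True if every word of the candidate sentence is stored in digitized form."""
    return all(w.lower() in VOCABULARY for w in words)

ok = can_assemble(["flight", "arriving", "in", "Frankfurt"])   # all stored
bad = can_assemble(["flight", "arriving", "in", "Paris"])      # "Paris" missing
```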
  • the system may also generate an optional video display for presentation to the passengers while the audio messages are simultaneously provided over the speaker system.
  • the system may extract the above-described flight information from the input message of Table II and format the information for a textual display.
  • the system may retrieve a map of the destination terminal and provide icons or the like identifying the locations of the various arrival and departure gates on the map.
  • Data processor 13 operates on the information it receives in the manner illustrated by the flowchart of Figure 4.
  • The input to data processor 13 arrives from a digital data bus input port on an interrupt basis, at 181. Whenever there is information to be received, the data processor interrupts whatever it is doing to read the new data.
  • Processor 13 reads the input message containing the connecting gate data from the bus until a completed message is received, at 185. The processor keeps returning to the interrupt, at 187, until an end of message is received.
  • the alphanumeric strings providing the fixed and variable words are extracted, at 189, from the input message.
  • the extracted alphanumeric strings are output to message assembler 200 for generation of audio messages based on data contained within the fixed and variable alphanumeric strings.
  • the thus-generated audio message is output to the passenger audio system, at 194, via a link line 101 to an audio broadcast system 103 ( Figure 3).
  • the audio messages may be broadcast over a public address speaker system within the passenger cabin or may be broadcast over a conventional multichannel individual headphone system to the passengers.
  • the message assembler may provide the audio messages in a variety of languages, each language either being provided over a separate audio channel or broadcast sequentially over a single channel. Background music may be provided to accompany the audio messages.
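The Figure 4 read-and-extract steps (181 through 189) described above might be sketched as follows. The end-of-message marker, the comma-delimited field layout, and the function names are assumptions for illustration; the patent specifies only that reading continues until an end of message is received and that the fixed and variable alphanumeric strings are then extracted:

```python
# Illustrative sketch of the Figure 4 input steps. The marker character and
# comma-delimited layout are assumptions, not taken from the disclosure.
END_OF_MESSAGE = "\x03"  # assumed end-of-message marker

def read_message(bus_chars):
    """Accumulate characters until an end-of-message is seen (steps 185, 187)."""
    buffer = []
    for ch in bus_chars:
        if ch == END_OF_MESSAGE:
            break
        buffer.append(ch)
    return "".join(buffer)

def extract_strings(message):
    """Split the message into its fixed and variable alphanumeric strings (step 189)."""
    return message.split(",")

raw = "LH,966,FRANKFURT,11:45,A,17,C" + END_OF_MESSAGE
fields = extract_strings(read_message(raw))
# The extracted strings would then be passed to message assembler 200.
```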
  • the extracted connecting gate information is arranged into its predetermined page format, at 91, for display.
  • a terminal chart signifying the destination airport specified in the input message is retrieved, at 93, from a data storage unit.
  • An aircraft symbol is positioned at the arrival gate on the terminal chart and the arrival gate and baggage claim area information is written on the terminal chart for display.
  • the terminal chart, along with its information, is output as a video signal to the video display according to a specified sequence, at 195.
  • the terminal chart is displayed, at 197, for a period of typically 10 to 60 seconds.
  • portions of the alphanumeric text containing the connecting gate information are displayed in a suitable format, at 199, for a specified period of time.
  • the duration of the video displays is synchronized with the duration of the audio message which is simultaneously broadcast.
  • a display such as an exemplary display illustrated in Figure 5, may be presented to the passengers while audio messages reciting the displayed information are simultaneously broadcast.
  • a display shown in Figure 6 may be provided to the passengers while an audio message reciting the baggage claim area is simultaneously broadcast.
  • the terminal chart of Figure 6 illustrates all the gates and terminal buildings for a particular airport, along with baggage claim areas.
  • the aircraft symbol is located next to the arrival gate.
  • the connecting gate information may be processed to produce audio messages and video displays immediately after the information is received over the ACARS system, or the information may be stored until the aircraft begins its approach to its destination.
  • the audio portion may be provided as a stand-alone system with no video display generation hardware or software required. In such case, only the audio messages are generated and broadcast. All of the information provided in a combined audio/video system is provided in a stand-alone audio system, with the exception that graphic displays such as flight plan maps and destination airport charts are not provided.
  • the stand-alone audio system is ideally suited for aircraft not possessing passenger video display systems.
  • the stand-alone audio system merely interfaces with a conventional multichannel passenger audio broadcast system, and provides flight information, as described above, through the passenger audio system.
  • In FIGS. 7-9, an alternative system for providing flight information to the passengers in the aircraft passenger compartment is illustrated.
  • the alternative system may tailor the information to various phases of the flight.
  • An alternative data processor 13' utilizes the received flight information and determines a current phase of the flight of the aircraft, i.e., the system determines whether the aircraft is in "en route cruise," "descent," etc. Once the current phase of the flight has been determined, data processor 13' generates audio messages and optional sequences of video display screens tailored to the current phase of the flight for presentation to the passengers of the aircraft. For example, if the aircraft is in an "en route cruise" phase, data processor 13' may generate an audio message reciting the ground speed and outside air temperature and simultaneously generate a video display screen for displaying the same information. If the aircraft is in a "descent" phase, data processor 13' may generate a sequence of audio messages reciting the time to destination and the distance to destination and simultaneously generate a video display screen presenting the same information.
  • Each audio message provides useful information appropriate to the current phase of the flight plan. For example, during power on, preflight, engine start, and taxi out, various digitized audio messages may be provided which welcome passengers aboard the aircraft, describe the aircraft and, in particular, provide safety instructions to the passengers.
  • various audio messages may be generated which indicate points of interest over which the aircraft is flying or recite flight information received via message handler 63'. For example, if an input message is received providing ground speed, outside air temperature, time to destination, and altitude, an audio message may be generated by message assembler 200 reciting the information.
  • a video display screen such as shown in Figure 8 may be simultaneously provided. If the aircraft has approached a point of interest, an audio message may be assembled and broadcast to the passengers indicating the proximity of the aircraft to the point of interest.
  • a video display screen such as the one shown in Figure 9 may be simultaneously provided.
  • message assembler 200 may generate an audio voice message such as: "The current ground speed is 574 miles per hour. The current outside air temperature is minus 67 degrees Fahrenheit.” The audio message is then broadcast to the passengers.
  • Data processor 13' includes: a message handler 63' for receiving flight information messages; a flight information processor 65' for determining the current flight phase and for generating audio messages and video display sequences corresponding to the current flight phase or point of interest; and a data storage unit 69' for maintaining flight information and digitized data.
  • Message handler 63' receives flight phase information as encoded messages over data bus 59'. As each new flight information message is received, message handler 63' generates a software interrupt. Flight information processor 65' responds to the software interrupt to retrieve the latest flight information from message handler 63'. Once retrieved, flight information processor 65' stores the flight information in a flight information block 104' in data storage unit 69'.
  • In addition to maintaining digitized words and phrases for use in assembling audio messages, storage unit 69' also maintains specific sequences of graphic displays 120'. Storage unit 69' also maintains "range" tables 114', which allow flight information processor 65' to determine the current phase of the flight plan. For example, for the "en route cruise" phase, range table 114' may define an altitude range of at least 25,000 feet such that, if the received flight information includes the current altitude of the aircraft, and the current altitude is greater than 25,000 feet, flight information processor 65' can thereby determine that the current phase of the flight plan is the "en route cruise" phase and generate audio messages and optional video displays appropriate to the "en route cruise" phase of the flight plan.
  • Range tables 114' also include points of interest along the flight route of the aircraft. For each point of interest, range tables 114' provide the location of the point of interest and a "minimum range distance" for the point of interest. If the received flight information includes the location of the aircraft, flight information processor 65' determines whether the aircraft is located within the minimum range associated with any of the points of interest. Thus, once the aircraft has reached the vicinity of a point of interest, the system automatically generates audio messages and optional video display screens informing the passengers of the approaching point of interest.
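The two range-table lookups described above, phase classification from altitude and point-of-interest vicinity checking, might be sketched as follows. The 25,000-foot threshold comes from the example in the text; everything else (the simplified two-way phase split, the table contents, the function names) is an illustrative assumption:

```python
# Illustrative sketch of the range-table lookups. Only the 25,000-foot
# en-route-cruise threshold is taken from the text; the rest is assumed.
CRUISE_ALTITUDE_FT = 25_000

def flight_phase(altitude_ft):
    """Classify the current flight phase from altitude alone (simplified)."""
    return "en route cruise" if altitude_ft >= CRUISE_ALTITUDE_FT else "other"

def nearby_points(distance_to, points):
    """Return points of interest whose minimum range the aircraft is within."""
    return [name for name, min_range_miles in points
            if distance_to(name) <= min_range_miles]

points = [("City A", 100), ("City B", 10)]        # (name, minimum range in miles)
distances = {"City A": 90, "City B": 40}           # assumed current distances
in_range = nearby_points(lambda name: distances[name], points)
# City A (90 miles, within its 100-mile range) qualifies; City B does not.
```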
  • the audio message may recite the name of the point of interest and the distance and travel time to the point of interest and the relative location of the point of interest to the aircraft, i.e., "left” or "right.”
  • the audio messages may be provided in a variety of languages, with each language broadcast on a different audio channel.
  • digitized monologues describing the points of interest may be accessed from a mass storage device for playback while the aircraft is in the vicinity of the point of interest.
  • the message assembler need not be used to assemble audio messages. Rather, fixed digitized monologues are simply broadcast. These may be accompanied by background music.
  • the optional video screens may provide, for example, the name of the point of interest, the distance and travel time to the point of interest, and a map including the point of interest, with the flight route of the aircraft superimposed thereon.
  • flight information processor 65' compares the current location of the aircraft with the location of points of interest in the data base tables and determines whether the aircraft has reached the vicinity of a point of interest.
  • range table 114' can include points of interest such as cities and, for each point of interest, include the location in latitude and longitude and a minimum range distance.
  • Table III: POINTS OF INTEREST
      Item      Latitude      Longitude      Minimum Range
      City A    45 degrees    112 degrees    100 miles
      City B    47 degrees    114 degrees    10 miles
      City C    35 degrees    110 degrees    5 miles
  • Flight information processor 65' includes an algorithm for comparing the current location of the aircraft to the location of each city and for calculating the distance between the aircraft and the city. Once the distance to the city is calculated, flight information processor 65' determines whether the distance is greater than or less than the minimum range specified for that city.
  • Taking City A as an example, if the aircraft is 200 miles from City A, flight information processor 65' will determine that the aircraft has not yet reached the vicinity of City A. Whereas, if the distance between the aircraft and City A is 90 miles, flight information processor 65' can determine that the aircraft has reached the vicinity of City A and initiate the sequence of displays, previously described, informing the passengers.
  • the algorithm for calculating the distance between the aircraft and each point of interest, based on the latitudes and longitudes, is conventional in nature and will not be described further. The algorithm may take considerable processing time and, hence, is only executed periodically. For example, the point-of-interest table is only accessed after a certain number of miles of flight or after a certain amount of time has passed.
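The patent calls the distance algorithm conventional without naming it; one conventional choice is the haversine (great-circle) formula, sketched below. Treating this as an assumed implementation, not the disclosed one:

```python
import math

# Great-circle distance from latitude/longitude pairs via the haversine
# formula. This is an assumed stand-in for the "conventional" algorithm
# the text references without specifying.
EARTH_RADIUS_MILES = 3958.8

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

# Aircraft at an assumed 45 N, 113 W versus City A at 45 N, 112 W from
# Table III: roughly 49 miles, well inside City A's 100-mile minimum range.
d = great_circle_miles(45.0, 113.0, 45.0, 112.0)
```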
  • Range table 114' may include the location of a wide variety of points of interest, including cities, landforms, the equator, the International Date Line, and the North and South Poles.
  • Described above is a message assembler for generating natural-sounding spoken sentences conveying input data.
  • the message assembler has been described in combination with a flight information system for aircraft passengers that provides useful information to the passengers en route to their destination.
  • the system connects into a conventional passenger audio broadcast system.
  • the system provides destination terminal information such as connecting gates and baggage claim areas and flight information.
  • the flight information is tailored to the current phase of the flight plan of the aircraft. For example, messages describing points of interest are generated as the aircraft reaches the vicinity of the points of interest.
  • the systems can be combined to provide both types of information. In such a combined system, the destination terminal information may be automatically presented once the aircraft reaches the "approach" phase of the flight.
  • the system may also provide the information in video form over a video display system.


Abstract

An audio message assembler is provided for generating natural-sounding spoken messages for communicating real time data to a listening audience. The system maintains a spoken-word data base having hundreds or thousands of digitally-stored words and phrases. The system receives digital information in the form of alphanumeric character strings, retrieves a preset sentence format appropriate to the particular input data received, and retrieves fixed or variable digitized words and phrases for inclusion in the preset format. For each specific input alphanumeric string, the system retrieves a digitized spoken word equivalent of the alphanumeric string. The system then assembles the retrieved digitized words into complete sentences for broadcast to the listening audience. In a particular embodiment, the system is implemented within an aircraft flight information system for providing flight information to the passengers of an aircraft. In this application, the system receives flight information such as connecting flights and en-route aircraft data, and generates audio messages for broadcast to the passengers which recite the flight information in natural-sounding sentences.

Description

    BACKGROUND OF THE INVENTION 1. Field of the Invention
  • The present invention relates generally to improvements in aircraft passenger information systems and, more particularly, pertains to a new audio information system for the passengers of an aircraft. Still more specifically, the invention provides means for generating informational messages which are initially created on a ground-based computer system and transmitted up to an aircraft in flight to be converted from digital computer data to audio words and sentences and broadcast in multiple languages via the cabin audio system to the passengers.
  • 2. Description of the Prior Art
  • A wide variety of information systems exist for providing audio messages to a listening audience. In entirely automatic systems, that is, systems which do not require an operator, audio messages have traditionally been prerecorded prior to broadcast. Such information systems are incapable of handling real time information to produce audio messages reciting the real time information. To remedy this, various prior art audio information systems have been developed which utilize a voice synthesizer device to convert real time digital information into spoken words or phrases. Unfortunately, the resulting audio messages are often metallic- or artificial-sounding.
  • A particular application for an audio information system for automatically providing spoken messages is in the aircraft and air transportation arena. General information systems relating to aircraft abound in the prior art. Such general systems are utilized for a variety of purposes, such as tracking and analyzing information relating to air traffic control, displaying information on flights to provide for advanced planning and scheduling, and monitoring ground traffic at an airport. Other than U.S. Patent No. 4,975,696 (Salter, Jr. et al.) and copending U.S. application Serial No. 07/763,370 (Pitts), such systems are typically used for administering aircraft traffic.
  • In U.S. Patent No. 4,975,696, an electronics package connecting the airborne electronics of a passenger aircraft to the passenger visual display system of the aircraft was disclosed. The electronics package provides passengers with a variety of real-time video displays of flight information, such as ground speed, outside air temperature, or altitude. Other information displayed by the electronics package includes a map of the area over which the aircraft flies, as well as destination information, such as a chart of the destination terminal including aircraft gates, baggage claims areas, and connecting flight information listings.
  • The electronics system of U.S. Patent application Serial No. 07/763,370 displays flight information with the flight information automatically tailored to the phases of flight of the aircraft.
  • Although the electronics systems of U.S. Patent No. 4,975,696 and U.S. application Serial No. 07/763,370 provide much useful information in video displays, the systems do not provide the information over audio channels. Furthermore, as noted above, existing systems which do provide information over audio channels in other applications have not successfully provided natural-sounding, automatically-generated spoken messages incorporating real time information.
  • OBJECTIVES AND SUMMARY OF THE INVENTION
  • Accordingly, it is an object of the present invention to provide a flight information system wherein the system provides real-time flight information such as speed, altitude, and passing points of interest, destination airport terminal information such as connecting flights and gates, and other useful information, over an audio system to passengers in an aircraft.
  • It is another object of the present invention to provide an information system which automatically generates spoken messages in a natural-sounding voice.
  • In accordance with these objectives, the invention provides an information system for generating spoken audio messages incorporating real-time, i.e. "variable," input data by assembling digitized spoken words corresponding to the input data into complete messages or sentences. Each sentence to be assembled includes a framework of fixed digitized words and phrases, into which variable digitized words are inserted. The particular digitized variable words which correspond to the specific input data are retrieved from digital computer memory. All anticipated input parameters are stored as digitized spoken words such that, during operation of the system, appropriate spoken words corresponding to the input data can be retrieved and inserted into the framework of the sentence. In this manner, a complete natural-sounding spoken message which conveys the input data is automatically generated for broadcast.
  • More specifically, the system includes a memory means for storing digitized spoken words, a receiver for receiving input data, and a data processor. The data processor means includes a retrieval means for retrieving selected digitized words corresponding to the input data and a message assembly means for assembling the retrieved words into audio messages.
  • Some of the digitized spoken words are stored in a variety of different inflection forms. The data processor means includes means for selecting digitized forms of the words having the proper inflection for inclusion in the spoken sentence, such that a natural-sounding spoken sentence is achieved.
  • The various digitized words and phrases may be recorded in a variety of languages, such that a spoken message may be generated in any of a variety of different languages.
  • In accordance with a preferred embodiment, the audio information system is mounted aboard a passenger aircraft for automatically generating informative messages for broadcast to the passengers of the aircraft. The system includes a receiver for receiving flight information from the on-board navigation systems of the aircraft and from ground-based transmitters. The input flight information, such as the location of the aircraft or the travel time to destination, is automatically communicated to the passengers in the form of natural-sounding spoken sentences. The system may also generate audio messages identifying points of interest in the vicinity of the aircraft.
  • In one embodiment, the system generates spoken messages describing destination terminal information received from a ground-based transmitter including connecting gates and baggage claim areas. The system assembles audio messages incorporating the destination terminal information received from the ground and broadcasts the assembled messages to the passengers. The system is alternatively configured to simultaneously provide the destination terminal information in both video and audio form.
  • In another embodiment, the invention provides audio messages to aircraft passengers wherein the messages are tailored to the phases of flight of the aircraft. In accordance with this embodiment, the system includes data processor means utilizing received flight information for determining a current phase of the flight plan and for inputting information corresponding to the current phase of the flight plan to the audio system for broadcast to the passengers. In this manner, a wide variety of informative spoken messages may be automatically provided to the passengers, with the content of the messages tailored to the various phases of flight of the aircraft. For example, the system may automatically generate one set of spoken messages during the takeoff phase of the flight of the aircraft, and a separate set of messages during the en route cruise phase of the aircraft. As with the previously-described embodiments, the messages are automatically generated by the system in response to input flight information which is received by the system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Other objects and many of the attendant advantages of this invention will become apparent as the invention becomes better understood by reference to the following detailed description when considered in conjunction with the accompanying drawings, in which like reference numerals designate like parts throughout the figures thereof, and wherein:
    • Figure 1 is a flow chart representing a method in accordance with the invention for assembling sentences from digitized words;
    • Figure 2 is a flow chart representing a method for selecting words of proper inflection for use in assembling sentences having numbers spoken in a natural-sounding voice;
    • Figure 3 is a block diagram, somewhat in pictorial form, of an aircraft passenger information system in accordance with a preferred embodiment of the present invention;
    • Figure 4 is a block diagram of the data processor of Figure 3;
    • Figure 5 is a representation of a screen that may be displayed by the system of the present invention while corresponding audio messages are broadcast;
    • Figure 6 is another representation of a screen that may be displayed by the system of the present invention while corresponding audio messages are broadcast;
    • Figure 7 provides a flow chart of an alternative embodiment of the invention wherein audio messages conveying flight information such as points of interest are generated;
    • Figure 8 is a representation of a video display screen that may be displayed by the system of Figure 7 while corresponding audio messages are broadcast; and
    • Figure 9 is a representation of another video display screen that may be displayed by the system of Figure 7 while corresponding audio messages are broadcast.
    DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The following description is provided to enable any person skilled in the art to make and use the invention and sets forth the best modes contemplated by the inventor of carrying out his invention. Various modifications, however, will remain readily apparent to those skilled in the art, since the generic principles of the present invention have been defined herein specifically to provide an audio information system for receiving real time data and for generating natural-sounding spoken messages reciting the real time data.
  • Referring to Figure 1, a spoken message assembler system 200 is illustrated. Message assembler 200 receives input information in the form of digital alphanumeric data and generates natural-sounding spoken sentences which recite the received data for output to a listening audience through a speaker system, perhaps a public address (PA) system. To this end, message assembler 200 includes hundreds or thousands of digitized words and phrases covering all anticipated words which may be required to create sentences reciting the input data. The words and phrases are prerecorded from a human voice in a digitized format and stored in computer ROM. Message assembler 200 assembles sentences by retrieving appropriate digitized words and phrases and assembling the words and phrases into proper syntactic sentences. Preferably, some of the words and phrases are stored in a number of digitized forms, each having a different inflection, such that the assembled sentence has proper inflection in accordance with natural speech.
  • In this manner, input information in the form of digital data can be communicated to a listening audience in the form of natural-sounding spoken sentences. The input data is received and the spoken sentences are generated and broadcast entirely automatically without the need for a human operator or human speaker.
  • In a preferred embodiment, discussed in detail below, the spoken message assembler is employed within an audio/video information system for use in the passenger compartment of an aircraft. In that embodiment, the message assembler receives flight information such as ground speed, outside air temperature, destination terminal, connecting gate, or baggage claim area information. The message assembler then constructs natural-sounding sentences for broadcasting the flight information to the passengers in the aircraft. The spoken messages may be broadcast over a public address system of the aircraft for all passengers to hear, or may be broadcast over individual passenger headphone sets. Also, as will be described below, the spoken message assembler may be configured to generate sentences in a variety of different languages for either sequential broadcast or simultaneous broadcast over multiple channels.
  • The spoken message assembler of the system thus provides a wide range of useful and informative information to the passengers, while freeing the flight crew from having to provide the information to the passengers. As will be described below, the system may additionally include a video display system for simultaneously displaying the flight information over a video screen or the like.
  • Although advantageously implemented within an information system for passenger aircraft, the message assembler of the invention is ideally suited for any application benefitting from the automatic communication of input data to a listening audience.
  • Figure 1 provides a flow chart illustrating the operation of message assembler 200. Initially, at 202, the message assembler receives an input sentence over a data line 201 in a digital alphanumeric format suitable for input and manipulation by a computer or similar data processing device. The data is received within a sentence format having specific data fields. For example, one data field of the input sentence may provide the time of day. Within that data field, an alphanumeric sequence is received which provides the time of day, e.g., "12:32PM." A separate data field may provide a destination city for an aircraft flight, e.g., "Los Angeles." Message assembler 200 may be preprogrammed to receive any of a number of suitable data formats. Any format is suitable so long as the variable data is received within preselected fields such that the message assembler can determine the type of data contained within the received message.
  • For each type of data, message assembler 200 stores all possible instances of the data type in a digitized spoken form in a mass storage device 211. For the example of destination cities, the message assembler stores the names of all cities that the airline flies into or out of in digitized spoken form. Thus, the message assembler stores the words "New York," "Los Angeles," "Chicago," etc. in ROM.
  • For data types requiring numbers, such as the time of day, message assembler 200 stores all necessary component numbers in digitized form. To recite the time "12:10," message assembler 200 retrieves and combines the words "twelve" and "ten." To recite the time "1:57," message assembler 200 retrieves and combines the words "one," "fifty," and "seven." To handle any input time of day, message assembler 200 need only store the component numbers 0-9 and 10, 20 ... 50 in digitized form. The numbers 1-9 are assembled either as "one" or as "oh one," etc., to allow the handling of both hour values and minute values between 1 and 9.
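The decomposition just described can be sketched as follows. This is a minimal, hypothetical illustration of the technique (not the patented implementation, which retrieves digitized audio rather than text); the function names are assumptions, and the teens 11-19 are added here as single stored words, since the 0-9 plus tens inventory alone cannot cover values such as "twelve."

```python
# Hypothetical sketch: decompose a time-of-day string into the stored
# component number words (0-9, the teens, and the tens 10, 20 ... 50).
def time_to_words(time_str):
    """Split e.g. '1:57' into ['one', 'fifty', 'seven']."""
    ones = ["zero", "one", "two", "three", "four", "five",
            "six", "seven", "eight", "nine"]
    tens = {10: "ten", 20: "twenty", 30: "thirty",
            40: "forty", 50: "fifty"}
    teens = {11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
             15: "fifteen", 16: "sixteen", 17: "seventeen",
             18: "eighteen", 19: "nineteen"}

    def number_to_words(n, minutes=False):
        if n < 10:
            # Minute values 1-9 are spoken as "oh one," "oh five," etc.
            return (["oh"] if minutes else []) + [ones[n]]
        if n in teens:
            return [teens[n]]
        words = [tens[n - n % 10]]
        if n % 10:
            words.append(ones[n % 10])
        return words

    hour, minute = (int(p) for p in time_str.split(":"))
    words = number_to_words(hour)
    if minute:
        words += number_to_words(minute, minutes=True)
    return words
```

In a real embodiment each returned word would index a digitized recording for playback rather than a text string.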
  • In this manner, the message assembler stores the various possible instances of the various possible data types that may be received within an incoming message. The specific data fields that are employed and the specific instances of the data stored for each data field are configurable parameters of the system. Although a digitized data base can be constructed to provide for almost any type of information, the system is preferably employed where a limited number of types of information must be conveyed to a listening audience, especially where each type of information has a fairly limited range of possible instances. In such case, the total number of digitized spoken words that must be stored in ROM is fairly limited. A system requiring a greater number of digitized words may be implemented using a computer with a greater amount of ROM.
  • An exemplary input sentence format received by the system, at step 202, is provided in Table I.
    TABLE I: (airline name) flight (flight number) will depart (city name) from terminal (terminal name), gate (gate number) at (time) (AM/PM)
  • The exemplary sentence format of Table I provides the departure gate number and departing time for particular departing flights. Thus, for each flight departing from the destination terminal, the input sentence of Table I provides a framework for communicating the departing flight's airline name and flight number and the departing flight's gate number and departure time, along with the destination city and destination airport terminal.
  • An input sentence includes a framework of fixed words interlaced with variable words (shown in parentheses) in Table I. In the input sentence shown in Table I, the fixed words are "flight," "will depart," "from," "terminal," "gate," and "at." The variable data for inclusion within the sentence include the airline name, the flight number, the city name, the terminal name, the gate number, the departure time, and either "AM" or "PM" appended to the departure time. Each unit of the sentence, comprising either a single fixed or variable word or a fixed or variable phrase, is denoted by a position number. For example, the variable "airline name" is identified as position 1. The fixed word "flight" is identified as position 2. In this manner, each fixed or variable data unit within the input sentence is represented by a unique number.
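The position-numbered sentence format just described can be encoded as a simple table. The sketch below is an assumption about how such a format might be represented; the field names and the thirteen-position breakdown are inferred from the fixed words listed above.

```python
# Hypothetical encoding of the Table I sentence format: each position is
# either a fixed word/phrase or a named variable field to be filled in
# from the incoming message.
SENTENCE_FORMAT = [
    (1,  "variable", "airline_name"),
    (2,  "fixed",    "flight"),
    (3,  "variable", "flight_number"),
    (4,  "fixed",    "will depart"),
    (5,  "variable", "city_name"),
    (6,  "fixed",    "from"),
    (7,  "fixed",    "terminal"),
    (8,  "variable", "terminal_name"),
    (9,  "fixed",    "gate"),
    (10, "variable", "gate_number"),
    (11, "fixed",    "at"),
    (12, "variable", "departure_time"),
    (13, "variable", "am_pm"),
]
```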
  • At 206, the system examines the first position within the input sentence, initially position 1. At 208, the system determines whether position 1 corresponds to a fixed word or a variable word. Continuing with the example of Table I, position 1 requires a variable word. Accordingly, the system proceeds to step 210 to retrieve the digitized variable word from the data base of the system, which corresponds to the input airline name to be included at position 1.
  • The data base of variable digitized words is set up to include the names of currently operating airlines, with the names digitized from a recording of the spoken airline name. Thus, the data base may include, for example, "ABC Airlines" or "XYZ Airlines" in digitized form. To retrieve the digitized spoken name of the proper airline, the system examines the received message for an alphanumeric representation of the airline name, then, based on the alphanumeric, retrieves the corresponding digitized spoken name from the system's data base. Once retrieved, the digitized data providing the spoken airline name is immediately broadcast to the passengers. Alternatively, the digitized spoken airline name may be transferred to a temporary memory unit (not shown in Figure 1) of the system for subsequent broadcast. In Figure 1, the broadcast step is identified by reference numeral 212.
  • At step 214, the system determines whether the final position of the sentence format has been processed. If not, the system increments a position pointer and returns along flow line 216 to process the next position within the sentence format. Thus, in the example of Table I, the system returns to process position 2. At step 208, the system determines that position 2 requires a fixed digitized word. Hence, the system proceeds to step 218 to retrieve the fixed word designated by the sentence format. In this case, the fixed word is "flight." Hence, the system retrieves digitized data presenting the spoken word "flight" from the data base and broadcasts the retrieved word.
  • Again, the system returns along data flow line 216 to process a new position within the sentence format. In the example of Table I, the next position, position 3, calls for a variable word setting forth the flight number. Accordingly, the system proceeds to step 210, wherein the system retrieves the digitized data setting forth the spoken flight number corresponding to the alphanumeric flight number designation received in the input message. Thus, if the flight number received in the input message is represented by the alphanumeric sequence "1059," the system retrieves digitized data providing the spoken words "ten," "fifty," and "nine." To this end, the system maintains a "number" data base which stores spoken numbers for use with any data type requiring numbers. Exemplary data types such as flight number, gate number, baggage claim area, departure time, etc. thereby share a common data base. Thus, the digitized spoken words "ten," "fifty," and "nine" are retrieved in circumstances requiring that the number "1059" be spoken, such as if the departing gate number is "1059," the departure time is "10:59," or the baggage claim area is "1059." As will be described below, the numbers are preferably stored in a variety of different styles and inflections to allow natural-sounding numbers to be recited in any circumstances.
  • Once the digitized words "ten," "fifty," and "nine" are retrieved from memory and broadcast, the system proceeds to the next position wherein the system retrieves the fixed digitized words "will depart." Execution continues, during which time the system processes each successive position within the sentence format. At each position, the appropriate variable or fixed digitized words are retrieved from the data base memory and immediately broadcast. Execution proceeds at a sufficient speed such that the words are broadcast one after the other in close succession to produce a natural-sounding sentence.
  • The assembled sentence is thereby "spoken" in the same manner in which a conventional compact disc system broadcasts words or music; that is, the digitized words are "played" in succession. Appropriate pauses may be included between words within the sentence to ensure a natural sentence flow.
  • Continuing with the example of Table I, the resulting "spoken" sentence might be "XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One, gate twenty-three at twelve forty-seven PM." The sentence is broadcast by means described below to the passengers in the aircraft, who thereby hear a natural-sounding sentence as if spoken by a member of the flight crew. By assembling the sentence from digitized words and phrases, rather than by using a voice synthesizer wherein words are created by phonetically "sounding out" individual syllables or words, a more natural-sounding sentence is achieved.
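The position-by-position assembly loop of Figure 1 (steps 206-218) can be sketched roughly as below. This is an illustrative assumption, not the patented implementation: the dictionaries stand in for the digitized spoken-word data base, text strings stand in for digitized audio, and the format and field names are invented for the example.

```python
# Minimal sketch of the Figure 1 loop: walk the sentence format position
# by position, pull fixed words directly and variable words from the
# per-field data base, and "broadcast" by emitting them in order.
FORMAT = [
    (1, "variable", "airline_name"),
    (2, "fixed",    "flight"),
    (3, "variable", "flight_number"),
    (4, "fixed",    "will depart"),
    (5, "variable", "city_name"),
]

VARIABLE_DB = {
    "airline_name":  {"XYZ": "XYZ Airlines"},
    "flight_number": {"1059": "ten fifty-nine"},
    "city_name":     {"CHI": "Chicago"},
}

def assemble_sentence(message):
    words = []
    for _position, kind, value in FORMAT:
        if kind == "fixed":
            words.append(value)                               # step 218
        else:
            words.append(VARIABLE_DB[value][message[value]])  # step 210
    return " ".join(words)                                    # step 212
```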
  • At step 222, the system returns to step 202 to receive and process a new message. The new message may provide the departing flight information for a different airline flight. Typically, an incoming message will provide the departing flight information for many connecting flights, perhaps 10-20 such flights. Thus, the system will reexecute the steps shown in Figure 1 a number of times to process the input data corresponding to each of the connecting flights, to thereby generate sentences reciting all of the connecting gate information.
  • In an alternative embodiment, the retrieved words are stored in a temporary memory for later broadcast. Such a system might include parallel processing capability such that, while a first sentence is being broadcast from temporary memory, a second sentence is being assembled.
  • Once all of the information within a particular incoming message is processed to generate one or more spoken sentences, the system waits to receive a new message. The new message may set forth different types of information within a different sentence format. Typically, the system will receive numerous input sentence formats to allow the system to broadcast a wide variety of natural-sounding sentences conveying a wide variety of possible input data.
  • Also, although generally described with respect to an exemplary flight information system for providing flight information to passengers of an aircraft, the message assembler shown in Figure 1 is advantageously employed in any environment where variable input information must be communicated to a listening audience over an audio system. In particular, the system is advantageously employed wherever input data to be broadcast falls within a finite number of data types, each having a range of anticipated values which may be stored in digitized spoken form in a data base.
  • With reference to Figure 2, a method by which the invention provides spoken numbers of proper style and inflection will now be described.
  • A natural-sounding sentence is composed of words of differing inflections. Automatically-generated sentences which do not use the proper inflection for component words may sound artificial or metallic. Accordingly, to assemble a natural-sounding sentence from digitized words, the proper inflection for the component words is preferably determined.
  • Generally, it has been found that three broad forms of inflection are necessary for use in achieving natural-sounding sentences incorporating numbers. The three forms of inflection are falling, rising, and constant. A word spoken at the end of a sentence generally has a falling inflection. A word spoken in the middle of a sentence generally has a rapidly rising inflection if it is closely followed by another word. A word spoken in the middle of a sentence generally has a slowly rising inflection if it is not followed closely by another word. In accordance with the invention, at least a portion of the words used in assembling sentences are stored in three different digitized forms corresponding to the three inflection forms. Thus, a version of the word having the proper inflection can be retrieved, depending upon the location of the word within the sentence. In a possible embodiment, all words in the data base of digitized words are recorded under all three different inflections.
  • In a preferred embodiment, only "number" words, i.e., words used to recite numeric strings, are stored under all three inflection forms. It has been found that input sentence formats may be selected wherein all other words need be stored under only one inflection to achieve sufficiently natural-sounding sentences. For example, the word "and" need only be stored under the slowly rising inflection form because the word "and" will always appear in mid-sentence not followed closely by another word.
  • Numbers are stored under all three inflections, since numbers may appear in a variety of positions within a sentence or at the end of a sentence. For example, the number string "1024" may appear in the middle of a sentence followed closely by another word: "Flight 1024A will depart from gate 15." Alternatively, the number string "1024" may appear in the middle of a sentence not followed closely by another word: "Flight 1024 will depart from gate 15." Finally, the string "1024" may appear at the end of a sentence: "Flight 15 will depart from gate 1024." Thus, all numbers are stored under all three inflection forms such that the proper inflection form can be retrieved depending upon the position of the number within the sentence.
  • In the example just described, the numeric string "1024" is actually composed of three component numbers: "ten," "twenty," and "four." The system processes the inflection of each of the individual component words separately. In this example, the word "ten" is followed closely by the word "twenty" and the word "twenty" is followed closely by the word "four." Accordingly, the words "ten" and "twenty" both have a rapidly rising inflection, regardless of the position of "1024" in the sentence. In this example, only the word "four" will have a slowly rising, rapidly rising, or falling inflection, depending upon the location of the number "1024" within the sentence.
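The component-wise inflection rule described above can be sketched as follows. The function and its labels are assumptions for illustration; in the described system, each (word, inflection) pair would select one of three stored digitized recordings.

```python
# Sketch of the inflection rule: every component word closely followed by
# another word takes the rapidly rising form; the final component's form
# depends on where the whole numeric string sits in the sentence.
def component_inflections(components, string_position):
    # string_position: "end" (string ends the sentence),
    # "mid_followed" (mid-sentence, closely followed by another word),
    # "mid_unfollowed" (mid-sentence, not closely followed).
    final_form = {"end": "falling",
                  "mid_followed": "rapidly_rising",
                  "mid_unfollowed": "slowly_rising"}[string_position]
    forms = ["rapidly_rising"] * (len(components) - 1) + [final_form]
    return list(zip(components, forms))
```

For "1024" at the end of a sentence, "ten" and "twenty" take the rapidly rising form and "four" the falling form, matching the example in the text.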
  • The system also selects a proper style for reciting numbers. The system characterizes numbers according to one of two general numeric styles. In the first, "short" style, the words "hundred" or "thousand" are not spoken. For example, in the short style, the number "1024" is spoken as "ten twenty-four." In a "long" numeric style, the words "hundred" or "thousand" are inserted. For example, the number "1024" is recited as "one thousand twenty-four."
  • When embodied within an information system for a passenger aircraft, the short style is used for reciting gate numbers, flight numbers, baggage claim areas, and the like. The long style is used for reciting altitudes, distances, temperatures, and the like. Thus, "flight 1024" is recited as "flight ten twenty-four," whereas "1024 feet" is recited as "one thousand twenty-four feet."
  • During assembly of sentences incorporating numbers, the message assembler determines the proper numeric style and retrieves the digitized words appropriate to the selected numeric style. Thus, in the example, to recite "flight 1024," the system retrieves the individual words "flight," "ten," "twenty," and "four" from the digitized word data base for playback in succession. To recite "1024 feet," the system retrieves the individual digitized words "one," "thousand," "twenty," "four," and "feet."
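The two parsing styles can be sketched numerically as below. This is an assumed illustration: the functions return component numbers (which would then index the spoken-number data base), and 1000/100 stand for the words "thousand" and "hundred."

```python
# "Short" style splits a number into pairs spoken without "hundred" or
# "thousand" (1024 -> ten, twenty-four); "long" style inserts them
# (1024 -> one, thousand, twenty-four).
def parse_short(n):
    s = str(n)
    if len(s) == 4:
        return [int(s[:2]), int(s[2:])]   # 1024 -> [10, 24]
    return [n]

def parse_long(n):
    parts = []
    if n >= 1000:
        parts += [n // 1000, 1000]        # e.g. "one", "thousand"
        n %= 1000
    if n >= 100:
        parts += [n // 100, 100]          # e.g. "five", "hundred"
        n %= 100
    if n or not parts:
        parts.append(n)                   # remainder, e.g. 24
    return parts
```

A flight number would be routed through `parse_short`, an altitude through `parse_long`, per the data-type style indicator described above.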
  • A method by which the invention accounts for numeric style and numeric inflection to generate natural-sounding spoken numbers is shown in Figure 2. The steps of Figure 2 are executed as a part of the execution of step 210 of Figure 1. However, the steps of Figure 2 are executed only for processing alphanumeric strings which include numbers. Thus, other variable words, such as destination cities, e.g., "Los Angeles," are not processed using the procedure of Figure 2.
  • For alphanumeric strings with numbers, the system, at step 250, initially extracts all numeric strings from the input alphanumeric character string. Thus, for input string "1024A," the system extracts "1024." Also as an example, for the string "10B24," the system extracts the number strings "10" and "24." Thus, an input character string may contain one or more numeric strings. For each extracted numeric string, the system, at step 252, determines the proper numeric style for the numeric string. Thus, if the numeric string is "1024," the system determines whether this should be recited in the long style or the short style. This determination is made from an examination of the data type of the input character string. For each numeric data type, the system stores an indicator of the corresponding style. For example, if the data type is a "flight number," then the short style is used. If the data type for the input character string is an altitude, then the long style is selected. The proper data type may be determined from the location of the character string within the input data block. Alternatively, the data block may include headers immediately prior to each data type, designating the data type.
  • Once the proper numeric style is determined, the system, at step 254, parses the numeric string into its component numbers according to the selected numeric style. Thus, "1024" is parsed as "1000" and "24" for the long numeric style, and "10" and "24" for the short numeric style.
  • Next, at step 256, the system assembles a word equivalent of the alphanumeric string which includes any parsed numeric strings, as well as any letters or other characters. Once a word equivalent of the alphanumeric string is assembled in sequential order, the system, at step 258, determines the inflection of all component numbers included within the word equivalent of the alphanumeric string. To this end, the system examines each "number" word within the string to determine whether the word is positioned in the middle of the string or at the end of the string. If in the middle, then the rapidly rising inflection form is chosen. If the "number" word occurs at the end of the string, then the system must determine what words, if any, follow the alphanumeric string. If the alphanumeric string constitutes the final portion of a sentence, a "number" word at the end of the string therefore falls at the end of the sentence. Hence, the falling inflection is chosen. If, on the other hand, the alphanumeric string is positioned in the middle of a sentence, then a "numeric" word falling at the end of the string will be assigned the slowly rising inflection.
  • Once the proper inflection form for each component number is determined at step 258, the system is ready to retrieve the digitized spoken words corresponding to all components of the word equivalent of the alphanumeric string. This retrieval is accomplished at step 260. Processing continues at step 212 of Figure 1, which operates to broadcast the retrieved words. As the sentence is broadcast to the passengers, numbers recited within the sentence are thereby spoken in the proper style and with the proper inflection.
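Step 250, the extraction of numeric strings from an input alphanumeric string, can be sketched with a simple tokenizer. This is a hypothetical illustration; the function name is an assumption.

```python
import re

# Split an alphanumeric string into digit runs and letter runs, in order,
# flagging which tokens are numeric. Keeping the letters as tokens lets a
# word equivalent of the whole string be assembled in sequence (step 256).
def tokenize_alphanumeric(s):
    return [(tok, tok.isdigit()) for tok in re.findall(r"\d+|[A-Za-z]+", s)]
```

For "10B24" this yields the numeric strings "10" and "24" with the letter "B" between them, matching the example in the text; each numeric token would then be styled, parsed, and inflected as described above.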
  • The system shown in Figures 1 and 2 may be configured to assemble sentences in any of a variety of languages. To handle various languages, the data base of digitized words must include the necessary foreign words and phrases. Also, each different language has different sentence formats. For example, for a German sentence, the sentence format may have the fixed verb of the sentence at the end of the sentence format, rather than near the beginning of the sentence format as commonly found in English sentences.
  • Each alternative language may be handled by a separate microprocessor device. Alternatively, a single microprocessor device may sequentially process all languages.
  • In accordance with a preferred embodiment shown in the remaining figures, the spoken message assembler described above is implemented within an on-board flight information system for providing flight information to airline passengers. In a first embodiment, the system provides connecting gate and baggage claim area information. In a second embodiment, the system provides flight information such as air speed, altitude, and information regarding points of interest over which the aircraft travels. This information may be tailored to the various phases of flight of the aircraft.
  • The heart of the system, a data processor 13, receives messages containing flight information over a data bus 59 from various systems of the aircraft. Examples of such systems include an ACARS receiver 19, a navigation system 15, an aircraft air data system 17, and a maintenance computer 21. Each of these systems, from which information is received, is entirely conventional and will not be described in detail. Data processor 13 may be connected to any one or a multiple of these systems depending on the type of information desired to be displayed to the passengers of the aircraft. Data processor 13 may be controlled by a control unit 22, which includes various means for allowing for manual activation of the data processor and control over the functions of the data processor.
  • Data processor 13 generates audio messages using the message assembler described above and transmits the audio messages in the form of audio signals over an audio link line 91 to an audio selector unit 92 that routes the audio signal to a plurality of conventional audio systems. For example, the audio signals may be transmitted over a link line 93 to a public address speaker 95 in the passenger compartment of the aircraft or over link line 97 to a plurality of individual passenger headphone sets 96 via individual multichannel selectors 94.
  • The data processor may also generate video display screens which set forth the data incorporated in the audio messages. The video display screens are output as a video signal and transmitted over a video link line 31 to a conventional video selector unit 29 that routes the video signal to a plurality of conventional video display systems. For example, the video signal may be transmitted over link lines 39 to a preview monitor 33, or over link lines 43 to a video monitor 37, or over link lines 41 to a video projector 35, which projects the sequences of video screens received onto a video screen 45.
  • Message assembler 200 and its data base of digitized words and phrases are components of data processor 13 and, hence, are not shown separately in Figure 3.
  • It should be understood that this particular illustration of an aircraft audio/video display system is only set forth as an example of one of many such systems that may be utilized and, therefore, should not be considered as limiting the present invention.
  • The first embodiment, wherein connecting gate and baggage claim area information is processed, will now be described with particular reference to Figures 3-6. In Figure 3, a conventional ACARS/AIRCOM/SITA receiver 19 is shown. This receiver receives connecting gate and baggage claim area information from an airline central computer 47 via a transmitting antenna 51 over carrier waves 53. A link line 49 connects airline computer 47 to transmitting antenna 51. However, any transmitter receiver system could be used, including a satellite communication system, and this invention is not limited to the ACARS system referred to herein.
  • Destination airport information may also be entered into the system via an optional data entry terminal (not shown).
  • Assuming that the ground base station and the aircraft are communicating over an ACARS/AIRCOM/SITA communication system, information transmitted from ground base computer 47 is received by the ACARS/AIRCOM/SITA receiver 19. The data is output from the ACARS/AIRCOM/SITA receiver 19 to the data processor 13 in a format such as described in ARINC characteristic 597, 724, or 724B.
  • In order for the data processor 13 to promptly process the information received, the data is assumed to be in a specific fixed format when it is received from ACARS receiver 19. The format illustrated in Table II is an example of a possible format for up-linked data:
    TABLE II: exemplary up-linked data block (see text): arriving flight Lufthansa 966, Frankfurt, 11:45 AM, terminal A, gate 17, baggage claim area C; connecting departures Air France 841 to Paris (terminal A, gate 10, 12:15), Lufthansa 502 to Hamburg (terminal B, gate 5, 12:30), Swissair 65 to Zurich (terminal B, gate 2, 12:35)
  • The data format contains strings of characters which are utilized by data processor 13 to generate audio messages and optional video displays. Exemplary strings are the flight number string "966," the destination airport string "Frankfurt," the arrival gate string "17," and the baggage claim area string "C." For audio messages, relevant data is extracted from the strings and incorporated into audio messages via message assembler 200. For video displays, these strings are used both to retrieve an airport chart representing the destination airport, and for direct inclusion in video displays.
  • From information contained within the exemplary data block of Table II, the following spoken audio messages may automatically be generated:
       "Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C."
       "Air France flight eight forty one will be departing for Paris from terminal A gate ten at twelve fifteen."
       "Lufthansa flight five oh two will be departing for Hamburg from terminal B gate five at twelve thirty."
       "Swissair flight sixty five will be departing for Zurich from terminal B gate two at twelve thirty five."
  • To generate these spoken word audio messages, the data processor utilizes the message assembler, described above, to extract relevant data and to assemble messages reciting the data.
  • To generate the message "Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C," the message assembler extracts the variable data "Lufthansa," "966," "Frankfurt," "11:45," "A," "17," and "C" for incorporation into a sentence having fixed words "flight," "arriving in," "at," "terminal," "gate number," and "baggage claim area." The message processor retrieves spoken word equivalents of the alphanumeric data extracted from the message in the manner described above. The numbers "966," "11:45," and "17" contained within the flight number, arrival time, and arrival gate may be processed according to the inflection and style manipulation procedure described above with reference to Figure 2.
  • To generate the connecting flight information messages, the message assembler extracts the various fixed and variable words from the input message, retrieves spoken word equivalents for these alphanumeric values, and broadcasts the spoken word equivalents in succession to produce complete sentences.
  • A total of four different audio messages are thereby generated from the data contained within the data block of Table II. The four messages are generated by executing the steps of Figure 1 a total of four times. Once completed, the system waits until a new input message is received.
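The per-flight repetition can be sketched as a loop over connecting-flight records, one Figure 1 pass per record. The record fields and template below are illustrative assumptions; in the described system each substituted value would be a retrieved digitized recording rather than text.

```python
# Hypothetical sketch: run the sentence-assembly step once per connecting
# flight in the received data block, producing one spoken message each.
DEPARTURE_TEMPLATE = ("{airline} flight {number} will be departing for "
                      "{city} from terminal {terminal} gate {gate} at {time}.")

def announce_departures(connecting_flights):
    return [DEPARTURE_TEMPLATE.format(**flight) for flight in connecting_flights]
```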
  • An extremely wide range of spoken messages can be generated providing a wide variety of useful information. For example, input messages may provide flight information such as altitude, ground speed, outside air temperature, time or distance to destination, time or distance from destination, etc. Also, weather-related messages may be received and processed, such as messages describing the temperature and weather conditions at the destination airport. Alternatively, weather conditions within the vicinity of the aircraft may be described, including wind speed, visibility, ceiling, etc. Marine-related messages, for example specifying the surf, tide, and marine visibility, may also be provided.
  • In general, any input message can be processed so long as each of the component words for inclusion in the sentence is stored in the digitized memory of the system. Thus, a wide variety of custom messages may be typed into a ground-based computer, then transmitted to the aircraft for conversion to a spoken audio message. The variety of possible messages is limited only by the number of digitized words stored in the digitized memory of the system. Accordingly, by providing a system with a larger vocabulary of digitized words, a wider range of audio messages can be generated.
  • The system may also generate an optional video display for presentation to the passengers while the audio messages are simultaneously provided over the speaker system. To this end, the system may extract the above-described flight information from the input message of Table II and format the information for a textual display. Alternatively, rather than providing a simple textual display, the system may retrieve a map of the destination terminal and provide icons or the like identifying the locations of the various arrival and departure gates on the map.
  • Data processor 13 operates on the information it receives in a manner illustrated by the flowchart of Figure 4. The input to data processor 13 is from a digital data bus input port on an interrupt basis, 181. Whenever there is information to be received, the data processor interrupts whatever it is doing to read the new data. At 183, processor 13 reads the input message containing the connecting gate data from the bus until a completed message, 185, is received. The processor keeps returning to the interrupt, 187, until an end of message is received.
  • After receiving an end of message, the alphanumeric strings providing the fixed and variable words are extracted, at 189, from the input message. At 190, the extracted alphanumeric strings are output to message assembler 200 for generation of audio messages based on data contained within the fixed and variable alphanumeric strings.
  • The thus-generated audio message is output to the passenger audio system, at 194, via a link line 101 to an audio broadcast system 103 (Figure 3). The audio messages may be broadcast over a public address speaker system within the passenger cabin or may be broadcast over a conventional multichannel individual headphone system to the passengers. Alternatively, the message assembler may provide the audio messages in a variety of languages, each language either being provided over a separate audio channel or broadcast sequentially over a single channel. Background music may be provided to accompany the audio messages.
  • For the optional video display, the extracted connecting gate information is arranged into its predetermined page format, at 191, for display. A terminal chart signifying the destination airport specified in the input message is retrieved, at 193, from a data storage unit. An aircraft symbol is positioned at the arrival gate on the terminal chart, and the arrival gate and baggage claim area information is written on the terminal chart for display. The terminal chart, along with its information, is output as a video signal to the video display according to a specified sequence, at 195. The terminal chart is displayed, at 197, for a period of typically 10 to 60 seconds. When that display time has elapsed, portions of the alphanumeric text containing the connecting gate information are displayed in a suitable format, at 199, for a specified period of time. Preferably, the duration of the video displays is synchronized with the duration of the audio message which is simultaneously broadcast.
  • If multiple pages of terminal charts or connecting gate information are to be displayed, the pages are cycled onto the display. The entire process is continually repeated.
  • As the aircraft approaches its destination, a display such as the exemplary display illustrated in Figure 5 may be presented to the passengers while audio messages reciting the displayed information are simultaneously broadcast.
  • In order to familiarize the passengers with the layout of the terminal and all the gates of the terminal, as well as the baggage claim areas, a display shown in Figure 6 may be provided to the passengers while an audio message reciting the baggage claim area is simultaneously broadcast. As can be seen, the terminal chart of Figure 6 illustrates all the gates and terminal buildings for a particular airport, along with baggage claim areas. In addition, the aircraft symbol is located next to the arrival gate.
  • The connecting gate information may be processed to produce audio messages and video displays immediately after the information is received over the ACARS system, or the information may be stored until the aircraft begins its approach to its destination.
  • The audio portion may be provided as a stand-alone system with no video display generation hardware or software required. In such case, only the audio messages are generated and broadcast. All of the information provided in a combined audio/video system is provided in a stand-alone audio system, with the exception that graphic displays such as flight plan maps and destination airport charts are not provided.
  • The stand-alone audio system is ideally suited for aircraft not possessing passenger video display systems. In such aircraft, the stand-alone audio system merely interfaces with a conventional multichannel passenger audio broadcast system, and provides flight information, as described above, through the passenger audio system.
  • Referring to Figures 7-9, an alternative system for providing flight information to the passengers in the aircraft passenger compartment is illustrated. The alternative system may tailor the information to various phases of the flight.
  • An alternative data processor 13' utilizes the received flight information and determines a current phase of the flight of the aircraft, i.e., the system determines whether the aircraft is in "en route cruise," "descent," etc. Once the current phase of the flight has been determined, data processor 13' generates audio messages and optional sequences of video display screens tailored to the current phase of the flight for presentation to the passengers of the aircraft. For example, if the aircraft is in an "en route cruise" phase, data processor 13' may generate an audio message reciting the ground speed and outside air temperature and simultaneously generate a video display screen for displaying the same information. If the aircraft is in a "descent" phase, data processor 13' may generate a sequence of audio messages reciting the time to destination and the distance to destination and simultaneously generate a video display screen presenting the same information.
  • Each audio message provides useful information appropriate to the current phase of the flight plan. For example, during power on, preflight, engine start, and taxi out, various digitized audio messages may be provided which welcome passengers aboard the aircraft, describe the aircraft and, in particular, provide safety instructions to the passengers.
  • During flight phases such as takeoff, climb, and en route cruise, various audio messages may be generated which indicate points of interest over which the aircraft is flying or recite flight information received via message handler 63'. For example, if an input message is received providing ground speed, outside air temperature, time to destination, and altitude, an audio message may be generated by message assembler 200 reciting the information. A video display screen such as shown in Figure 8 may be simultaneously provided. If the aircraft has approached a point of interest, an audio message may be assembled and broadcast to the passengers indicating the proximity of the aircraft to the point of interest. A video display screen such as the one shown in Figure 9 may be simultaneously provided.
  • Thus, message assembler 200 may generate an audio voice message such as: "The current ground speed is 574 miles per hour. The current outside air temperature is minus 67 degrees Fahrenheit." The audio message is then broadcast to the passengers.
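The word-by-word assembly described above can be sketched in code. The following is a minimal illustration of the idea, not the patent's implementation; the function names, the number-to-words routine, and the sentence format are hypothetical, and the real system would retrieve stored digitized audio clips rather than text tokens:

```python
def spell_number(n):
    """Convert an integer into its spoken word sequence (simplified)."""
    ones = ["zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine"]
    tens = {20: "twenty", 30: "thirty", 40: "forty", 50: "fifty",
            60: "sixty", 70: "seventy", 80: "eighty", 90: "ninety"}
    teens = {10: "ten", 11: "eleven", 12: "twelve", 13: "thirteen",
             14: "fourteen", 15: "fifteen", 16: "sixteen",
             17: "seventeen", 18: "eighteen", 19: "nineteen"}
    words = []
    if n < 0:
        words.append("minus")
        n = -n
    if n >= 100:
        words += [ones[n // 100], "hundred"]
        n %= 100
    if n >= 20:
        words.append(tens[n - n % 10])
        n %= 10
        if n:
            words.append(ones[n])
    elif n >= 10:
        words.append(teens[n])
    elif n or not words:
        words.append(ones[n])
    return words

def assemble_speed_message(ground_speed, temperature):
    """Fill a fixed sentence format with variable spoken-word units."""
    return (["the", "current", "ground", "speed", "is"]
            + spell_number(ground_speed) + ["miles", "per", "hour."]
            + ["the", "current", "outside", "air", "temperature", "is"]
            + spell_number(temperature) + ["degrees", "fahrenheit."])
```

In the actual system, each token would index a stored digitized recording of that spoken word, and the clips would be concatenated for broadcast.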
  • Data processor 13' includes: a message handler 63' for receiving flight information messages; a flight information processor 65' for determining the current flight phase and for generating audio messages and video display sequences corresponding to the current flight phase or point of interest; and a data storage unit 69' for maintaining flight information and digitized data.
  • Message handler 63' receives flight phase information as encoded messages over data bus 59'. As each new flight information message is received, message handler 63' generates a software interrupt. Flight information processor 65' responds to the software interrupt to retrieve the latest flight information from message handler 63'. Once retrieved, flight information processor 65' stores the flight information in a flight information block 104' in data storage unit 69'.
  • In addition to maintaining digitized words and phrases for use in assembling audio messages, storage unit 69' also maintains specific sequences of graphic displays 120'. Storage unit 69' also maintains "range" tables 114', which allow flight information processor 65' to determine the current phase of the flight plan. For example, for the "en route cruise" phase, range table 114' may define an altitude range of at least 25,000 feet such that, if the received flight information includes the current altitude of the aircraft, and the current altitude is greater than 25,000 feet, flight information processor 65' can thereby determine that the current phase of the flight plan is the "en route cruise" phase and generate audio messages and optional video displays appropriate to the "en route cruise" phase of the flight plan.
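A minimal sketch of the range-table lookup follows, assuming a table keyed on altitude alone. Only the 25,000-foot en-route-cruise threshold comes from the description above; the remaining phase names and limits are illustrative assumptions, and a real system would also consult data such as vertical speed and ground state to distinguish phases like climb from descent:

```python
# Illustrative "range" table: (phase name, minimum altitude in feet,
# maximum altitude in feet, where None means unbounded above).
PHASE_RANGES = [
    ("en route cruise", 25_000, None),    # threshold from the description
    ("climb/descent",    1_500, 25_000),  # assumed
    ("taxi/takeoff",         0,  1_500),  # assumed
]

def current_phase(altitude_ft):
    """Return the first phase whose altitude range contains the input."""
    for name, lo, hi in PHASE_RANGES:
        if altitude_ft >= lo and (hi is None or altitude_ft < hi):
            return name
    return "unknown"
```

Once the phase is identified, the processor would select the audio message formats and display sequences stored for that phase.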
  • Range tables 114' also include points of interest along the flight route of the aircraft. For each point of interest, range tables 114' provide the location of the point of interest and a "minimum range distance" for the point of interest. If the received flight information includes the location of the aircraft, flight information processor 65' determines whether the aircraft is located within the minimum range associated with any of the points of interest. Thus, once the aircraft has reached the vicinity of a point of interest, the system automatically generates audio messages and optional video display screens informing the passengers of the approaching point of interest.
  • The audio message may recite the name of the point of interest and the distance and travel time to the point of interest and the relative location of the point of interest to the aircraft, i.e., "left" or "right." The audio messages may be provided in a variety of languages, with each language broadcast on a different audio channel.
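The per-channel multi-language retrieval might be sketched as follows; the word bank, clip file names, and language codes are all hypothetical, standing in for stored digitized recordings of each word in each language:

```python
# Hypothetical word bank: each (word, language) pair maps to a stored
# digitized clip, so the assembler can build the same message in the
# language assigned to each audio channel.
WORDS = {
    ("left",  "en"): "left_en.pcm",
    ("left",  "fr"): "gauche_fr.pcm",
    ("right", "en"): "right_en.pcm",
    ("right", "fr"): "droite_fr.pcm",
}

def assemble(tokens, language):
    """Retrieve the stored clip for each token, all in one language."""
    return [WORDS[(token, language)] for token in tokens]

def assemble_all_channels(tokens, channel_languages):
    """Build one clip sequence per audio channel, matching languages."""
    return {channel: assemble(tokens, lang)
            for channel, lang in channel_languages.items()}
```

The same token sequence thus yields parallel messages, one per channel, each assembled entirely from words of a single language.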
  • Alternatively, digitized monologues describing the points of interest may be accessed from a mass storage device for playback while the aircraft is in the vicinity of the point of interest. In such an embodiment, the message assembler need not be used to assemble audio messages. Rather, fixed digitized monologues are simply broadcast. These may be accompanied by background music.
  • The optional video screens may provide, for example, the name of the point of interest, the distance and travel time to the point of interest, and a map including the point of interest, with the flight route of the aircraft superimposed thereon.
  • Considering points of interest in greater detail, periodically, flight information processor 65' compares the current location of the aircraft with the location of points of interest in the data base tables and determines whether the aircraft has reached the vicinity of a point of interest. As can be seen from an exemplary range table 114' provided in Table III, range table 114' can include points of interest such as cities and, for each point of interest, include the location in latitude and longitude and a minimum range distance. Table III
    POINTS OF INTEREST
    Item      Latitude     Longitude     Minimum Range
    City A    45 degrees   112 degrees   100 miles
    City B    47 degrees   114 degrees   10 miles
    City C    35 degrees   110 degrees   5 miles
  • Thus, for example, City A is represented as having a particular location and a minimum range distance of 100 miles, whereas City B has a different location and a minimum range distance of 10 miles. Flight information processor 65' includes an algorithm for comparing the current location of the aircraft to the location of each city and for calculating the distance between the aircraft and the city. Once the distance to the city is calculated, flight information processor 65' determines whether the distance is greater than or less than the minimum range specified for that city.
  • Taking City A as an example: if the aircraft is 200 miles from City A, flight information processor 65' determines that the aircraft has not yet reached the vicinity of City A. If, however, the distance between the aircraft and City A is 90 miles, flight information processor 65' determines that the aircraft has reached the vicinity of City A and initiates the sequence of displays, previously described, informing the passengers. The algorithm for calculating the distance between the aircraft and each point of interest, based on their latitudes and longitudes, is conventional in nature and will not be described further. Because the algorithm may take considerable processing time, it is executed only periodically; for example, the point-of-interest table is accessed only after a certain number of miles of flight or after a certain amount of time has passed.
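The distance comparison walked through above can be sketched in code. The description calls the distance algorithm conventional and does not specify it; the haversine great-circle formula used here is one common choice, and the table values are taken from Table III:

```python
import math

# Range table entries from Table III: (item, latitude in degrees,
# longitude in degrees, minimum range in miles).
RANGE_TABLE = [
    ("City A", 45.0, 112.0, 100.0),
    ("City B", 47.0, 114.0, 10.0),
    ("City C", 35.0, 110.0, 5.0),
]

EARTH_RADIUS_MI = 3959.0

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def points_in_vicinity(aircraft_lat, aircraft_lon):
    """Return the table items whose minimum range contains the aircraft."""
    return [item for item, lat, lon, min_range in RANGE_TABLE
            if distance_miles(aircraft_lat, aircraft_lon,
                              lat, lon) <= min_range]
```

An aircraft roughly 42 miles from City A, for example, falls inside City A's 100-mile minimum range but outside the 10-mile range of City B, so only City A would trigger the point-of-interest messages.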
  • Range table 114' may include the location of a wide variety of points of interest, including cities, landforms, the equator, the International Date Line, and the North and South Poles.
  • What has been described is a spoken message assembler for generating natural-sounding spoken sentences conveying input data. As a specific application, the message assembler has been described in combination with a flight information system for aircraft passengers that provides useful information to the passengers en route to their destination. The system connects into a conventional passenger audio broadcast system. In one embodiment, the system provides destination terminal information such as connecting gates and baggage claim areas and flight information. In another embodiment, the flight information is tailored to the current phase of the flight plan of the aircraft. For example, messages describing points of interest are generated as the aircraft reaches the vicinity of the points of interest. The systems can be combined to provide both types of information. In such a combined system, the destination terminal information may be automatically presented once the aircraft reaches the "approach" phase of the flight. The system may also provide the information in video form over a video display system.
  • As only a preferred embodiment of the invention has been disclosed, various modifications are contemplated and obviously will be resorted to by those skilled in the art without departing from the spirit and scope of the invention as hereinafter defined by the appended claims.

Claims (16)

  1. An audio information system for generating audio messages for a listening audience, the system having a receiver for receiving input data, said system comprising:
       memory means for storing digitized spoken words, with individual digitized words corresponding to individual units of the received input data;
       data processor means for generating complete audio messages based on said input data, said data processor means including retrieval means for retrieving, from said memory means, selected digitized words which correspond to the units of received input data; and
       message assembly means for assembling the retrieved selected words into complete audio messages which convey the information contained in the input data in natural-sounding sentences.
  2. The audio information system of Claim 1, wherein said input data includes connecting flight information data including one or more of flight numbers, destination terminals, gate numbers, baggage claim area numbers, and arrival and departure times, and wherein said memory means stores digitized spoken words corresponding to said connecting flight information, such that said complete audio messages provide a recitation of the flight information in a natural-sounding sentence.
  3. The audio information system of Claim 1 or claim 2, wherein at least some of said digitized spoken words are stored in a plurality of inflection forms, each form having a different vocal inflection, and wherein said data processor further includes:
       means for determining a proper vocal inflection form for said words, said proper inflection being determined by the relative placement of said words in said audio message; and
       means for selecting said proper inflection form of said selected digitized words for inclusion in said complete audio message.
  4. The audio information system of any preceding claim, wherein at least some of said digitized words are stored in a plurality of forms, each form being a different language version of said word, and wherein said data processor further includes means for retrieving and assembling words of matching languages.
  5. The audio information system of Claim 4, wherein said data processor assembles a plurality of messages conveying the same input data, said messages being in different languages.
  6. The audio information system of Claim 5, wherein said system includes means for outputting said plurality of messages of different languages in sequential order through a single output channel.
  7. The audio information system of Claim 5, wherein said system includes means for outputting said plurality of messages of different languages simultaneously over a plurality of separate output channels.
  8. The audio information system of any preceding claim, wherein the digitized spoken words are maintained in digital form on a mass storage device.
  9. The audio information system of any preceding claim, wherein said system is mounted aboard a passenger aircraft and includes means for broadcasting said complete audio messages to passengers within said aircraft.
  10. The audio information system of Claim 9, further including a receiver for receiving flight information identifying the location of the aircraft, and wherein said memory means also stores the names and locations of a plurality of points of interest in digital form;
       said data processor means further including means for determining a current point of interest by:
       comparing the location of the aircraft with the locations of points of interest stored by the memory means to identify, out of the plurality of points of interest, a point of interest in the vicinity of the current location of the aircraft;
       retrieving digitized words identifying the name and relative location of the point of interest in the vicinity of the aircraft; and
       assembling a complete audio message providing the name and relative location of the point of interest such that, as points of interest are reached during the flight of the aircraft, the system automatically broadcasts an audio message identifying the point of interest to the passengers.
  11. The audio information system of Claim 9, wherein the aircraft follows a flight plan having a plurality of phases, and wherein said data processor means further includes:
       means for determining a current phase of the flight plan;
       means for selectively retrieving flight information from said input data, said selected flight information being selected according to the determined current phase of flight, said selected flight information being used by said message assembly means for generating said audio message such that, as each phase of the flight plan is reached, the system assembles and broadcasts an audio message reciting useful flight information tailored to the current phase of the flight plan to the passengers.
  12. The audio information system of Claim 11 wherein said data processor means also retrieves a sequence of video display information corresponding to the determined current phase of flight and inputs the retrieved sequence of video display information to a video display system for display to the passengers, such that, as each phase of the flight plan is reached, the system displays a sequence of video displays tailored to the current phase of the flight plan to the passengers along with the audio messages.
  13. The audio information system of Claim 11, wherein the memory means further includes a table means for storing a range of flight information corresponding to each phase of the flight plan and wherein the data processor determines the current phase of the flight plan by determining a phase having a range corresponding to the received flight information.
  14. An audio information system for automatically generating audio messages for a listening audience, said audio messages having preselected sentence formats, said system comprising:
       receiving means for receiving input data including one or more fixed units of data and one or more variable units of data;
       memory means for storing digitized spoken words including fixed words corresponding to portions of said preselected sentence formats and variable words corresponding to said variable units of data, with each variable word being a digitized spoken equivalent of a corresponding unit of data;
       data processor means for generating complete audio messages based on the input data, said data processor means including:
       means for determining a sentence format corresponding to the input data;
       means for retrieving digitized fixed words corresponding to the sentence format; and
       means for retrieving digitized variable words corresponding to the variable units of data within said input data; and
       message assembly means for assembling said retrieved fixed and variable words into complete audio messages, such that audio messages are generated which convey the input data in natural-sounding sentences.
  15. An audio information system for providing terminal and gate information to aircraft passengers in an aircraft comprising:
       a receiver for receiving destination airport terminal information regarding one or more of connecting flight numbers, departure times, departure gates and destinations, and baggage claim areas from a ground-based transmitter;
       memory means for storing a plurality of digitized words corresponding to said destination terminal information;
       audio message assembly means for creating audio messages incorporating said destination airport terminal information by selectively retrieving and assembling said digitized words; and
       means for inputting said audio messages to said audio system for broadcast to the passengers.
  16. The audio information system of Claim 15, wherein said memory means further stores data for a plurality of airport charts representative of destination airport terminals; with
       said receiver receiving information regarding flight numbers and destination airports from a ground-based transmitter; and
       data processor means utilizing the received flight numbers and airport information to retrieve the data for the airport chart of the destination airport terminal from said memory means and inputting the data to a video display system for display.
EP93302701A 1993-04-06 1993-04-06 Audio/video information system Withdrawn EP0620697A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP93302701A EP0620697A1 (en) 1993-04-06 1993-04-06 Audio/video information system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU36718/93A AU667347B2 (en) 1993-04-06 1993-04-06 Real-time audio message system for aircraft passangers
EP93302701A EP0620697A1 (en) 1993-04-06 1993-04-06 Audio/video information system

Publications (1)

Publication Number Publication Date
EP0620697A1 true EP0620697A1 (en) 1994-10-19

Family

ID=25623696

Family Applications (1)

Application Number Title Priority Date Filing Date
EP93302701A Withdrawn EP0620697A1 (en) 1993-04-06 1993-04-06 Audio/video information system

Country Status (1)

Country Link
EP (1) EP0620697A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975696A (en) * 1987-03-23 1990-12-04 Asinc, Inc. Real-time flight and destination display for aircraft passengers
EP0427485A2 (en) * 1989-11-06 1991-05-15 Canon Kabushiki Kaisha Speech synthesis apparatus and method
US5177800A (en) * 1990-06-07 1993-01-05 Aisi, Inc. Bar code activated speech synthesizer teaching device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 12, no. 194 (P-713)7 June 1988 & JP-A-62 298 869 ( RICOH ) 25 December 1987 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749221A3 (en) * 1995-06-14 1998-07-29 American Airlines Inc. Method and apparatus for delivering information in a real time mode over a non-dedicated circuit
GB2404545A (en) * 2003-04-24 2005-02-02 Visteon Global Tech Inc Text-to-speech system for generating announcements
GB2404545B (en) * 2003-04-24 2005-12-14 Visteon Global Tech Inc Text-to-speech system for generating information announcements
EP2189759A3 (en) * 2008-11-24 2013-11-20 Honeywell International Inc. System and method for displaying graphical departure procedures
EP3300345A1 (en) * 2016-09-23 2018-03-28 Airbus Operations GmbH Dynamically adapting pre-recorded announcements
US10232941B2 (en) 2016-09-23 2019-03-19 Airbus Operations Gmbh Dynamically adapting pre-recorded announcements
DE102019003553A1 (en) * 2019-05-21 2020-11-26 Diehl Aerospace Gmbh Automatic announcement in the passenger aircraft
US11299271B2 (en) 2019-05-21 2022-04-12 Diehl Aerospace Gmbh Automatic announcement in a passenger aircraft
DE102019003553B4 (en) 2019-05-21 2024-06-27 Diehl Aerospace Gmbh Announcement device, passenger aircraft, method for issuing an announcement in a passenger aircraft and use of a CMS

Similar Documents

Publication Publication Date Title
US6335694B1 (en) Airborne audio flight information system
US4975696A (en) Real-time flight and destination display for aircraft passengers
EP0533310B1 (en) Flight phase information display system for aircraft passengers
EP2858067B1 (en) System and method for correcting accent induced speech in an aircraft cockpit utilizing a dynamic speech database
US7580377B2 (en) Systems and method of datalink auditory communications for air traffic control
US8306675B2 (en) Graphic display system for assisting vehicle operators
US8335988B2 (en) Method of producing graphically enhanced data communications
CN103489334B (en) For the equipment in aviation field subsidiary communications
Simpson et al. Response time effects of alerting tone and semantic context for synthesized voice cockpit warnings
US20140122070A1 (en) Graphic display system for assisting vehicle operators
US20100332122A1 (en) Advance automatic flight planning using receiver autonomous integrity monitoring (raim) outage prediction
WO2002069294A8 (en) A system and method for automatically triggering events shown on aircraft displays
EP0620697A1 (en) Audio/video information system
Sullivan et al. The NASA 747-400 flight simulator-A national resource for aviation safety research
AU667347B2 (en) Real-time audio message system for aircraft passangers
Prinzo et al. US airline transport pilot international flight language experiences, Report 1: Background information and general/pre-flight preparation
US20230005483A1 (en) System and method for displaying radio communication transcription
US20170263135A1 (en) Analyzer systematic and reducing human faults system in aircraft flight
Saïd et al. The ibn battouta air traffic control corpus with real life ads-b and metar data
Corker et al. Empirical and Analytic studies Human/Automation Dynamics in Airspace Management for Free Flight
US10232941B2 (en) Dynamically adapting pre-recorded announcements
Prinzo et al. United States Airline Transport Pilot International Flight Language Experiences Report 2: Word Meaning and Pronunciation
Lind et al. The influence of data link-provided graphical weather on pilot decision-making
Cartwright et al. A history of aeronautical meteorology: personal perspectives, 1903–1995
Kaylor et al. The mission oriented terminal area simulation facility

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): BE CH DE FR GB LI NL

17P Request for examination filed

Effective date: 19950331

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19960116