EP0620697A1 - AV information system (AV-Informationssystem) - Google Patents

AV information system

Info

Publication number
EP0620697A1
EP0620697A1 EP93302701A
Authority
EP
European Patent Office
Prior art keywords
audio
flight
information
words
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP93302701A
Other languages
English (en)
French (fr)
Inventor
Richard J. Salter, Jr.
Michael C. Sanders
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asinc Inc
Original Assignee
Asinc Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asinc Inc filed Critical Asinc Inc
Priority to EP93302701A priority Critical
Priority claimed from AU36718/93A external-priority patent AU667347B2
Publication of EP0620697A1 publication Critical
Withdrawn legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 27/00 - Public address systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 - Speech synthesis; Text to speech systems
    • G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Definitions

  • the present invention relates generally to improvements in aircraft passenger information systems and, more particularly, pertains to a new audio information system for the passengers of an aircraft. Still more specifically, the invention provides means for generating informational messages which are initially created on a ground-based computer system and transmitted to an aircraft in flight, where they are converted from digital computer data to audio words and sentences and broadcast in multiple languages to the passengers via the cabin audio system.
  • a particular application for an audio information system for automatically providing spoken messages is in the aircraft and air transportation arena.
  • General information systems relating to aircraft abound in the prior art. Such general systems are utilized for a variety of purposes, such as tracking and analyzing information relating to air traffic control, displaying information on flights to provide for advanced planning and scheduling, and monitoring ground traffic at an airport.
  • U.S. Patent No. 4,975,696 to Salter, Jr. et al.
  • copending U.S. application Serial No. 07/763,370
  • such systems are typically used for the administering of aircraft traffic.
  • U.S. Patent No. 4,975,696 an electronics package connecting the airborne electronics of a passenger aircraft to the passenger visual display system of the aircraft was disclosed.
  • the electronics package provides passengers with a variety of real-time video displays of flight information, such as ground speed, outside air temperature, or altitude.
  • Other information displayed by the electronics package includes a map of the area over which the aircraft flies, as well as destination information, such as a chart of the destination terminal including aircraft gates, baggage claims areas, and connecting flight information listings.
  • the electronics system of U.S. Patent application Serial No. 07/763,370 displays flight information with the flight information automatically tailored to the phases of flight of the aircraft.
  • the invention provides an information system for generating spoken audio messages incorporating real-time, i.e. "variable,” input data by assembling digitized spoken words corresponding to the input data into complete messages or sentences.
  • Each sentence to be assembled includes a framework of fixed digitized words and phrases, into which variable digitized words are inserted.
  • the particular digitized variable words which correspond to the specific input data are retrieved from digital computer memory.
  • All anticipated input parameters are stored as digitized spoken words such that, during operation of the system, appropriate spoken words corresponding to the input data can be retrieved and inserted into the framework of the sentence. In this manner, a complete natural-sounding spoken message which conveys the input data is automatically generated for broadcast.
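  • As a minimal sketch of this scheme (the structure and names below are illustrative assumptions, not taken from the patent), the store of anticipated words can be modeled as a lookup table keyed by data type and value, with strings standing in for the digitized audio clips:

```python
# Minimal sketch of the digitized-word store described above. The layout
# and names are assumptions for illustration; real entries would be
# digitized audio clips, represented here by placeholder strings.
DIGITIZED_WORDS = {
    ("fixed", "flight"):      "<clip: flight>",
    ("fixed", "will depart"): "<clip: will depart>",
    ("city", "Chicago"):      "<clip: Chicago>",
    ("number", "ten"):        "<clip: ten>",
    ("number", "fifty"):      "<clip: fifty>",
    ("number", "nine"):       "<clip: nine>",
}

def retrieve(kind, value):
    """Retrieve the stored digitized spoken form of an input datum."""
    return DIGITIZED_WORDS[(kind, value)]

# Variable data "1059" maps to the stored component words ten, fifty, nine:
for word in ("ten", "fifty", "nine"):
    print(retrieve("number", word))
```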
  • the system includes a memory means for storing digitized spoken words, a receiver for receiving input data, and a data processor.
  • the data processor means includes a retrieval means for retrieving selected digitized words corresponding to the input data and a message assembly means for assembling the retrieved words into audio messages.
  • the data processor means includes means for selecting digitized forms of the words having the proper inflection for inclusion in the spoken sentence, such that a natural-sounding spoken sentence is achieved.
  • the various digitized words and phrases may be recorded in a variety of languages, such that a spoken message may be generated in any of a variety of different languages.
  • the audio information system is mounted aboard a passenger aircraft for automatically generating informative messages for broadcast to the passengers of the aircraft.
  • the system includes a receiver for receiving flight information from the on-board navigation systems of the aircraft and from ground-based transmitters.
  • the input flight information such as the location of the aircraft or the travel time to destination, is automatically communicated to the passengers in the form of natural-sounding spoken sentences.
  • the system may also generate audio messages identifying points of interest in the vicinity of the aircraft.
  • the system generates spoken messages describing destination terminal information received from a ground-based transmitter including connecting gates and baggage claim areas.
  • the system assembles audio messages incorporating the destination terminal information received from the ground and broadcasts the assembled messages to the passengers.
  • the system is alternatively configured to simultaneously provide the destination terminal information in both video and audio form.
  • the invention provides audio messages to aircraft passengers wherein the messages are tailored to the phases of flight of the aircraft.
  • the system includes data processor means utilizing received flight information for determining a current phase of the flight plan and for inputting information corresponding to the current phase of the flight plan to the audio system for broadcast to the passengers.
  • a wide variety of informative spoken messages may be automatically provided to the passengers, with the content of the messages tailored to the various phases of flight of the aircraft.
  • the system may automatically generate one set of spoken messages during the takeoff phase of the flight of the aircraft, and a separate set of messages during the en route cruise phase of the aircraft.
  • the messages are automatically generated by the system in response to input flight information which is received by the system.
  • a spoken message assembler system 200 receives input information in the form of digital alphanumeric data and generates natural-sounding spoken sentences which recite the received data for output to a listening audience through a speaker system, perhaps a public address (PA) system.
  • message assembler 200 includes hundreds or thousands of digitized words and phrases covering all anticipated words which may be required to create sentences reciting the input data.
  • the words and phrases are prerecorded from a human voice in a digitized format and stored in computer ROM.
  • Message assembler 200 assembles sentences by retrieving appropriate digitized words and phrases and assembling the words and phrases into proper syntactic sentences.
  • some of the words and phrases are stored in a number of digitized forms, each having a different inflection, such that the assembled sentence has proper inflection in accordance with natural speech.
  • input information in the form of digital data can be communicated to a listening audience in the form of natural-sounding spoken sentences.
  • the input data is received and the spoken sentences are generated and broadcast entirely automatically without the need for a human operator or human speaker.
  • the spoken message assembler is employed within an audio/video information system for use in the passenger compartment of an aircraft.
  • the message assembler receives flight information such as ground speed, outside air temperature, destination terminal, connecting gate, or baggage claim area information.
  • the message assembler then constructs natural-sounding sentences for broadcasting the flight information to the passengers in the aircraft.
  • the spoken messages may be broadcast over a public address system of the aircraft for all passengers to hear, or may be broadcast over individual passenger headphone sets.
  • the spoken message assembler may be configured to generate sentences in a variety of different languages for either sequential broadcast or simultaneous broadcast over multiple channels.
  • the spoken message assembler of the system thus provides a wide range of useful information to the passengers, while freeing the flight crew from having to announce the information themselves.
  • the system may additionally include a video display system for simultaneously displaying the flight information over a video screen or the like.
  • the message assembler of the invention is ideally suited for any application benefitting from the automatic communication of input data to a listening audience.
  • Figure 1 provides a flow chart illustrating the operation of message assembler 200.
  • the message assembler receives an input sentence over a data line 201 in a digital alphanumeric format suitable for input and manipulation by a computer or similar data processing device.
  • the data is received within a sentence format having specific data fields.
  • one data field of the input sentence may provide the time of day.
  • an alphanumeric sequence is received which provides the time of day, e.g., "12:32PM.”
  • a separate data field may provide a destination city for an aircraft flight, e.g., "Los Angeles.”
  • Message assembler 200 may be preprogrammed to receive any of a number of suitable data formats. Any format is suitable so long as the variable data is received within preselected fields such that the message assembler can determine the type of data contained within the received message.
  • For each type of data, message assembler 200 stores all possible instances of the data type in digitized spoken form in a mass storage device 211. For the example of destination cities, the message assembler stores the names of all cities that the airline flies into or out of in digitized spoken form. Thus, the message assembler stores the words "New York," "Los Angeles," "Chicago," etc. in ROM.
  • For data types requiring numbers, such as the time of day, message assembler 200 stores all necessary component numbers in digitized form. To recite the time "12:10," message assembler 200 retrieves and combines the words "twelve" and "ten." To recite the time "1:57," it retrieves and combines the words "one," "fifty," and "seven." To handle any input time of day, message assembler 200 need only store the component numbers 0-9 and 10, 20 ... 50 in digitized form. The numbers 1-10 are assembled either as "one" or "oh-one," etc., to allow the handling of both hour and minute values between 1 and 10.
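  • A sketch of this decomposition follows; the helper names are hypothetical, and separate words for the teens are added beyond the components the text lists, since English requires them:

```python
# Hedged sketch of time-of-day decomposition into stored component words.
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]
TENS = {1: "ten", 2: "twenty", 3: "thirty", 4: "forty", 5: "fifty"}
TEENS = {11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
         15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
         19: "nineteen"}

def two_digit_words(n, leading_oh=False):
    """Component words for 0-59; minutes 1-9 get the 'oh' prefix."""
    if n < 10:
        return ["oh", ONES[n]] if leading_oh else [ONES[n]]
    if n in TEENS:
        return [TEENS[n]]
    tens, ones = divmod(n, 10)
    return [TENS[tens]] + ([ONES[ones]] if ones else [])

def time_words(hhmm):
    hours, minutes = (int(part) for part in hhmm.split(":"))
    return two_digit_words(hours) + (
        two_digit_words(minutes, leading_oh=True) if minutes else [])

print(time_words("12:10"))  # ['twelve', 'ten']
print(time_words("1:57"))   # ['one', 'fifty', 'seven']
print(time_words("1:07"))   # ['one', 'oh', 'seven']
```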
  • the message assembler stores the various possible instances of the various possible data types that may be received within an incoming message.
  • the specific data fields that are employed and the specific instances of the data stored for each data field are configurable parameters of the system.
  • a digitized data base can be constructed to provide for almost any type of information, the system is preferably employed where a limited number of types of information must be conveyed to a listening audience, especially where each type of information has a fairly limited range of possible instances. In such case, the total number of digitized spoken words that must be stored in ROM is fairly limited. A system requiring a greater number of digitized words may be implemented using a computer with a greater amount of ROM.
  • An exemplary input sentence format received by the system, at step 202, is provided in Table I.
  • the exemplary sentence format of Table I provides the departure gate number and departing time for particular departing flights.
  • the input sentence of Table I provides a framework for communicating the departing flight's airline name and flight number and the departing flight's gate number and departure time, along with the destination city and destination airport terminal.
  • As shown in Table I, an input sentence includes a framework of fixed words interlaced with variable words (shown in parentheses).
  • the fixed words are "flight,” "will depart,” “from,” “terminal,” “gate,” and “at.”
  • the variable data for inclusion within the sentence include the airline name, the flight number, the city name, the terminal name, the gate number, the departure time, and either "AM” or "PM” appended to the departure time.
  • Each unit of the sentence, comprising either a single fixed or variable word or a fixed or variable phrase, is denoted by a position number.
  • the variable "airline name" is identified as position 1.
  • the fixed word "flight” is identified as position 2.
  • each fixed or variable data unit within the input sentence is represented by a unique number.
  • the system examines the first position within the input sentence, initially position 1.
  • the system determines whether position 1 corresponds to a fixed word or a variable word. Continuing with the example of Table I, position 1 requires a variable word. Accordingly, the system proceeds to step 210 to retrieve the digitized variable word from the data base of the system, which corresponds to the input airline name to be included at position 1.
  • the data base of variable digitized words is set up to include the names of currently operating airlines, with the names digitized from a recording of the spoken airline name.
  • the data base may include, for example, "ABC Airlines” or "XYZ Airlines” in digitized form.
  • the system examines the received message for an alphanumeric representation of the airline name, then, based on the alphanumeric, retrieves the corresponding digitized spoken name from the system's data base. Once retrieved, the digitized data providing the spoken airline name is immediately broadcast to the passengers. Alternatively, the digitized spoken airline name may be transferred to a temporary memory unit (not shown in Figure 1) of the system for subsequent broadcast. In Figure 1, the broadcast step is identified by reference numeral 212.
  • the system determines whether the final position of the sentence format has been processed. If not, the system increments a position pointer and returns along flow line 216 to process the next position within the sentence format. Thus, in the example of Table I, the system returns to process position 2.
  • the system determines that position 2 requires a fixed digitized word. Hence, the system proceeds to step 218 to retrieve the fixed word designated by the sentence format. In this case, the fixed word is "flight.” Hence, the system retrieves digitized data presenting the spoken word "flight" from the data base and broadcasts the retrieved word.
  • the system returns along data flow line 216 to process a new position within the sentence format.
  • the next position, position 3, calls for a variable word setting forth the flight number.
  • the system proceeds to step 210, wherein the system retrieves the digitized data setting forth the spoken flight number corresponding to the alphanumeric flight number designation received in the input message.
  • the system retrieves digitized data providing the spoken words "ten,” "fifty,” and "nine.”
  • the system maintains a "number" data base which stores spoken numbers for use with any data type requiring numbers, such as flight numbers, gate numbers, baggage claim areas, departure times, etc.
  • the digitized spoken words “ten,” “fifty,” and “nine” are retrieved in circumstances requiring that the number “1059” be spoken, such as if the departing gate number is “1059,” the departure time is “10:59,” or the baggage claim area is “1059.”
  • the numbers are preferably stored in a variety of different styles and inflections to allow natural-sounding numbers to be recited in any circumstances.
  • the system proceeds to the next position wherein the system retrieves the fixed digitized words "will depart.” Execution continues, during which time the system processes each successive position within the sentence format. At each position, the appropriate variable or fixed digitized words are retrieved from the data base memory and immediately broadcast. Execution proceeds at a sufficient speed such that the words are broadcast one after the other in close succession to produce a natural-sounding sentence.
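  • The loop just described can be sketched as follows; the data structures and names are assumptions for illustration, printing stands in for broadcasting digitized audio, and only step numbers named above are referenced:

```python
# Hedged sketch of the Figure 1 position loop: walk each position of the
# sentence format, retrieve fixed or variable digitized words, and
# broadcast them (strings stand in for the stored audio clips).
SENTENCE_FORMAT = [                        # Table I style positions
    ("variable", "airline_name"),          # position 1
    ("fixed", "flight"),                   # position 2
    ("variable", "flight_number"),         # position 3
    ("fixed", "will depart from gate"),
    ("variable", "gate_number"),
]

VARIABLE_DB = {  # alphanumeric value -> digitized spoken words
    "airline_name":  {"XYZ": ["XYZ", "Airlines"]},
    "flight_number": {"1059": ["ten", "fifty", "nine"]},
    "gate_number":   {"23": ["twenty", "three"]},
}

def broadcast(words):                      # step 212: stands in for playback
    print(" ".join(words), end=" ")

def process_message(input_data):
    # Each pass of the loop models the return along flow line 216.
    for kind, value in SENTENCE_FORMAT:
        if kind == "variable":
            broadcast(VARIABLE_DB[value][input_data[value]])   # step 210
        else:
            broadcast([value])                                 # step 218
    print()

process_message({"airline_name": "XYZ", "flight_number": "1059",
                 "gate_number": "23"})
# -> XYZ Airlines flight ten fifty nine will depart from gate twenty three
```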
  • the assembled sentence is thereby "spoken" in the same manner in which a conventional compact disc system broadcasts words or music; that is, the digitized words are "played” in succession. Appropriate pauses may be included between words within the sentence to ensure a natural sentence flow.
  • the resulting "spoken" sentence might be "XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One, gate twenty-three at twelve forty-seven PM.”
  • the sentence is broadcast by means described below to the passengers in the aircraft, who thereby hear a natural-sounding sentence as if spoken by a member of the flight crew.
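  • A sketch of this play-in-succession step, assuming PCM sample buffers and a hypothetical sample rate; stretches of silence stand in for the recorded words here:

```python
import numpy as np

# Sketch of "playing" digitized words in succession with short pauses,
# as a CD player would; real clips would come from the stored word base.
SAMPLE_RATE = 8000                         # assumed sample rate

def silence(seconds):
    return np.zeros(int(SAMPLE_RATE * seconds), dtype=np.int16)

def play_sentence(word_clips, pause=0.05):
    """Concatenate digitized word clips with a short inter-word pause."""
    gap = silence(pause)
    pieces = []
    for clip in word_clips:
        pieces.append(clip)
        pieces.append(gap)
    return np.concatenate(pieces[:-1])     # drop the trailing pause

# Stand-in clips: 0.3 s of silence each in place of recorded words.
clips = [silence(0.3) for _ in ["flight", "ten", "fifty", "nine"]]
pcm = play_sentence(clips)
print(len(pcm) / SAMPLE_RATE, "seconds of audio")  # 1.35 seconds of audio
```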
  • the system returns to step 202 to receive and process a new message.
  • the new message may provide the departing flight information for a different airline flight.
  • an incoming message will provide the departing flight information for many connecting flights, perhaps 10-20 such flights.
  • the system will reexecute the steps shown in Figure 1 a number of times to process the input data corresponding to each of the connecting flights, to thereby generate sentences reciting all of the connecting gate information.
  • the retrieved words are stored in a temporary memory for later broadcast.
  • a system might include parallel processing capability such that, while a first sentence is being broadcast from temporary memory, a second sentence is being assembled.
  • the system waits to receive a new message.
  • the new message may set forth different types of information within a different sentence format.
  • the system will receive numerous input sentence formats to allow the system to broadcast a wide variety of natural-sounding sentences conveying a wide variety of possible input data.
  • the message assembler shown in Figure 1 is advantageously employed in any environment where variable input information must be communicated to a listening audience over an audio system.
  • the system is advantageously employed wherever input data to be broadcast falls within a finite number of data types, each having a range of anticipated values which may be stored in digitized spoken form in a data base.
  • a natural-sounding sentence is composed of words of differing inflections. Automatically-generated sentences which do not use the proper inflection for component words may sound artificial or metallic. Accordingly, to assemble a natural-sounding sentence from digitized words, the proper inflection for the component words is preferably determined.
  • only "number" words, i.e., words used to recite numeric strings, are stored under all three inflection forms (slowly rising, rapidly rising, and falling). It has been found that input sentence formats may be selected wherein all other words need be stored under only one inflection to achieve sufficiently natural-sounding sentences. For example, the word "and" need only be stored under the slowly rising inflection form because the word "and" will always appear in mid-sentence not followed closely by another word.
  • Numbers are stored under all three inflections, since numbers may appear in a variety of positions within a sentence or at the end of a sentence. For example, the number string "1024" may appear in the middle of a sentence followed closely by another word: "Flight 1024A will depart from gate 15.” Alternatively, the number string “1024" may appear in the middle of a sentence not followed closely by another word: “Flight 1024 will depart from gate 15.” Finally, the string "1024" may appear at the end of a sentence: "Flight 15 will depart from gate 1024.” Thus, all numbers are stored under all three inflection forms such that the proper inflection form can be retrieved depending upon the position of the number within the sentence.
  • the numeric string "1024" is actually composed of three component numbers: "ten,” “twenty,” and “four.”
  • the system processes the inflection of each of the individual component words separately.
  • the word “ten” is followed closely by the word “twenty” and the word “twenty” is followed closely by the word “four.”
  • the words “ten” and “twenty” both have a rapidly rising inflection, regardless of the position of "1024" in the sentence.
  • only the word "four” will have a slowly rising, rapidly rising, or falling inflection, depending upon the location of the number "1024" within the sentence.
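  • The inflection rule described in the preceding items can be sketched as follows; the position labels and names are my own, not the patent's:

```python
# Sketch of the component-inflection rule: all non-final components of a
# number string are rapidly rising; the final component's inflection
# depends on where the string sits in the sentence.
RAPIDLY_RISING, SLOWLY_RISING, FALLING = "rapid", "slow", "fall"

def component_inflections(components, string_position):
    """string_position: 'mid-followed' (followed closely by another word),
    'mid-unfollowed', or 'sentence-end'."""
    final = {"mid-followed": RAPIDLY_RISING,
             "mid-unfollowed": SLOWLY_RISING,
             "sentence-end": FALLING}[string_position]
    return [RAPIDLY_RISING] * (len(components) - 1) + [final]

print(component_inflections(["ten", "twenty", "four"], "sentence-end"))
# -> ['rapid', 'rapid', 'fall']
```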
  • the system also selects a proper style for reciting numbers.
  • the system characterizes numbers according to one of two general numeric styles. In the first, “short” style, the words “hundreds” or “thousands” are not spoken. For example, in the short style, the number “1024" is spoken as “ten twenty-four.” In a “long” numeric style, the words “hundreds” or “thousands” are inserted. For example, the number "1024" is recited as "one thousand twenty-four.”
  • the short style is used for reciting gate numbers, flight numbers, baggage claim areas, and the like.
  • the long style is used for reciting altitudes, distances, temperatures, and the like.
  • “flight 1024” is recited as “flight ten twenty-four”
  • “1024 feet” is recited as “one thousand twenty-four feet.”
  • the message assembler determines the proper numeric style and retrieves the digitized words appropriate to the selected numeric style.
  • the system retrieves the individual words “flight,” “ten,” “twenty,” and “four” from the digitized word data base for playback in succession.
  • the system retrieves the individual digitized words “one,” “thousand,” “twenty,” “four,” and “feet.”
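  • A sketch of the two styles for the "1024" example; the helper names are hypothetical, and the word tables cover only what this example needs (teens, hundreds, etc. are omitted for brevity):

```python
# Sketch contrasting the short and long numeric styles for a four-digit
# number such as "1024".
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]
TENS = {1: "ten", 2: "twenty", 3: "thirty", 4: "forty",
        5: "fifty", 6: "sixty", 7: "seventy", 8: "eighty", 9: "ninety"}

def words_0_99(n):
    if n < 10:
        return [ONES[n]]
    tens, ones = divmod(n, 10)
    return [TENS[tens]] + ([ONES[ones]] if ones else [])

def short_style(numstr):
    """'1024' -> ['ten', 'twenty', 'four']: no 'thousand' is spoken."""
    return words_0_99(int(numstr[:2])) + words_0_99(int(numstr[2:]))

def long_style(numstr):
    """'1024' -> ['one', 'thousand', 'twenty', 'four']."""
    thousands, rest = divmod(int(numstr), 1000)
    words = (words_0_99(thousands) + ["thousand"]) if thousands else []
    return words + (words_0_99(rest) if rest else [])

print(short_style("1024"))  # ['ten', 'twenty', 'four']
print(long_style("1024"))   # ['one', 'thousand', 'twenty', 'four']
```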
  • A method by which the invention accounts for numeric style and numeric inflection to generate natural-sounding spoken numbers is shown in Figure 2.
  • the steps of Figure 2 are executed as a part of the execution of step 210 of Figure 1. However, the steps of Figure 2 are executed only for processing alphanumeric strings which include numbers. Thus, other variable words, such as destination cities, i.e., "Los Angeles,” are not processed using the procedure of Figure 2.
  • For alphanumeric strings with numbers, the system, at step 250, initially extracts all numeric strings from the input alphanumeric character string. Thus, for the input string "1024A," the system extracts "1024." Also as an example, for the string "10B24," the system extracts the number strings "10" and "24." Thus, an input character string may contain one or more numeric strings. For each extracted numeric string, the system, at step 252, determines the proper numeric style for the numeric string. Thus, if the numeric string is "1024," the system determines whether this should be recited in the long style or the short style. This determination is made from an examination of the data type of the input character string. For each numeric data type, the system stores an indicator of the corresponding style.
  • If the data type is a "flight number," then the short style is used. If the data type for the input character string is an altitude, then the long style is selected.
  • the proper data type may be determined from the location of the character string within the input data block. Alternatively, the data block may include headers immediately prior to each data type, designating the data type.
  • the system parses the numeric string into its component numbers according to the selected numeric style.
  • "1024" is parsed as "1000" and "24” for the long numeric style, and "10" and "24” for the short numeric style.
  • the system assembles a word equivalent of the alphanumeric string which includes any parsed numeric strings, as well as any letters or other characters.
  • the system determines the inflection of all component numbers included within the word equivalent of the alphanumeric string. To this end, the system examines each "number" word within the string to determine whether the word is positioned in the middle of the string or at the end of the string. If in the middle, then the rapidly rising inflection form is chosen. If the "number" word occurs at the end of the string, then the system must determine what words, if any, follow the alphanumeric string.
  • the alphanumeric string constitutes the final portion of a sentence, a "number" word at the end of the string therefore falls at the end of the sentence. Hence, the falling inflection is chosen. If, on the other hand, the alphanumeric string is positioned in the middle of a sentence, then a "numeric" word falling at the end of the string will be assigned the slowly rising inflection.
  • After step 258, the system is ready to retrieve the digitized spoken words corresponding to all components of the word equivalent of the alphanumeric string. This retrieval is accomplished at step 260. Processing continues at step 212 of Figure 1, which operates to broadcast the retrieved words. As the sentence is broadcast to the passengers, numbers recited within the sentence are thereby spoken in the proper style and with the proper inflection.
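  • A sketch of the extraction and assembly steps of Figure 2; the regex mechanism and helper names are assumptions, and a placeholder digit-by-digit reader stands in for the style-specific parser selected at step 252:

```python
import re

# Sketch of the Figure 2 pipeline: extract the numeric strings from an
# alphanumeric string (step 250), keep any letters in place, and build
# the word equivalent of the whole string.
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def tokenize(alnum):
    """'1024A' -> ['1024', 'A']; '10B24' -> ['10', 'B', '24']."""
    return [tok for tok in re.split(r"(\d+)", alnum) if tok]

def word_equivalent(alnum, numeric_words):
    """numeric_words: number parser for the style chosen at step 252."""
    words = []
    for token in tokenize(alnum):
        if token.isdigit():
            words.extend(numeric_words(token))   # parsed number words
        else:
            words.extend(list(token))            # letters spoken one by one
    return words

digit_by_digit = lambda s: [DIGITS[int(ch)] for ch in s]
print(word_equivalent("10B24", digit_by_digit))
# -> ['one', 'zero', 'B', 'two', 'four']
```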
  • the system shown in Figures 1 and 2 may be configured to assemble sentences in any of a variety of languages.
  • the data base of digitized words must include the necessary foreign words and phrases.
  • each different language has different sentence formats. For example, for a German sentence, the sentence format may have the fixed verb of the sentence at the end of the sentence format, rather than near the beginning of the sentence format as commonly found in English sentences.
  • Each alternative language may be handled by a separate microprocessor device.
  • a single microprocessor device may sequentially process all languages.
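  • The per-language formats might be organized as below, with the same position loop run over each format (by one processor per language or one sequential pass per language). The German wording here is an illustrative assumption:

```python
# Sketch of per-language sentence formats: the German format places the
# fixed verb at the end, as noted above.
FORMATS = {
    "en": [("fixed", "flight"), ("variable", "flight_number"),
           ("fixed", "departs from gate"), ("variable", "gate_number")],
    "de": [("fixed", "Flug"), ("variable", "flight_number"),
           ("fixed", "wird von Gate"), ("variable", "gate_number"),
           ("fixed", "abfliegen")],   # fixed verb at the sentence end
}

for language, positions in FORMATS.items():
    print(language, [value for _, value in positions])
```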
  • the spoken message assembler described above is implemented within an on-board flight information system for providing flight information to airline passengers.
  • the system provides connecting gate and baggage claim area information.
  • the system provides flight information such as air speed, altitude, and information regarding points of interest over which the aircraft travels. This information may be tailored to the various phases of flight of the aircraft.
  • a data processor 13 receives messages containing flight information over a data bus 59 from various systems of the aircraft. Examples of such systems include an ACARS receiver 19, a navigation system 15, an aircraft air data system 17, and a maintenance computer 21. Each of these systems, from which information is received, is entirely conventional and will not be described in detail. Data processor 13 may be connected to any one or a multiple of these systems depending on the type of information desired to be displayed to the passengers of the aircraft. Data processor 13 may be controlled by a control unit 22, which includes various means for allowing for manual activation of the data processor and control over the functions of the data processor.
  • Data processor 13 generates audio messages using the message assembler described above and transmits the audio messages in the form of audio signals over an audio link line 91 to an audio selector unit 92 that routes the audio signal to a plurality of conventional audio systems.
  • the audio signals may be transmitted over a link line 93 to a public address speaker 95 in the passenger compartment of the aircraft or over link line 97 to a plurality of individual passenger headphone sets 96 via individual multichannel selectors 94.
  • the data processor may also generate video display screens which set forth the data incorporated in the audio messages.
  • the video display screens are output as a video signal and transmitted over a video link line 31 to a conventional video selector unit 29 that routes the video signal to a plurality of conventional video display systems.
  • the video signal may be transmitted over link lines 39 to a preview monitor 33, or over link lines 43 to a video monitor 37, or over link lines 41 to a video projector 35, which projects the sequences of video screens received onto a video screen 45.
  • Message assembler 200 and its data base of digitized words and phrases are components of data processor 13 and, hence, are not shown separately in Figure 3.
  • In Figure 3, a conventional ACARS/AIRCOM/SITA receiver 19 is shown. This receiver receives connecting gate and baggage claim area information from an airline central computer 47 via a transmitting antenna 51 over carrier waves 53. A link line 49 connects airline computer 47 to transmitting antenna 51.
  • any transmitter/receiver system could be used, including a satellite communication system, and this invention is not limited to the ACARS system referred to herein.
  • Destination airport information may also be entered into the system via an optional data entry terminal (not shown).
  • In order for data processor 13 to promptly process the information received, the data is assumed to be in a specific fixed format when it is received from ACARS receiver 19.
  • the format illustrated in Table II is an example of a possible format for up-linked data.
  • the data format contains strings of characters which are utilized by data processor 13 to generate audio messages and optional video displays.
  • Exemplary strings are the flight number string “966,” the destination airport string “Frankfurt,” the arrival gate string “17,” and the baggage claim area string “C.”
  • For audio messages, relevant data is extracted from the strings and incorporated into audio messages via message assembler 200.
  • For video displays, these strings are used both to retrieve an airport chart representing the destination airport and for direct inclusion in video displays.
  • the following spoken audio messages may automatically be generated: "Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M, terminal A, gate number seventeen, baggage claim area C.” "Air France flight eight forty one will be departing for Paris from terminal A gate ten at twelve fifteen.” "Lufthansa flight five oh two will be departing for Hamburg from terminal B gate five at twelve thirty.” “Swissair flight sixty five will be departing for Zurich from terminal B gate two at twelve thirty five.”
  • the data processor utilizes the message assembler, described above, to extract relevant data and to assemble messages reciting the data.
  • the message assembler extracts the variable data "Lufthansa,” “966,” “Frankfurt,” “11:45,” “A,” “17,” and “C” for incorporation into a sentence having fixed words “flight,” “arriving in,” “at,” “terminal,” “gate number,” and “baggage claim area.”
  • the message processor retrieves spoken word equivalents of the alphanumeric data extracted from the message in the manner described above.
  • the numbers “966,” “11:45,” and “17” contained within the flight number, arrival time, and arrival gate may be processed according to the inflection and style manipulation procedure described above with reference to Figure 2.
  • the message assembler extracts the various fixed and variable words from the input message, retrieves spoken word equivalents for these alphanumeric values, and broadcasts the spoken word equivalents in succession to produce complete sentences.
  • a total of four different audio messages are thereby generated from the data contained within the data block of Table II.
  • the four messages are generated by executing the steps of Figure 1 a total of four times. Once completed, the system waits until a new input message is received.
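  • Since Table II does not survive in this extract, the following sketch assumes a header-tagged uplink layout of the kind suggested above; the field names and separators are hypothetical, while the values echo the exemplary strings:

```python
# Assumed header-tagged uplink block; each header designates the data
# type of the field that follows it, as described earlier in the text.
RAW_UPLINK = ("AL=Lufthansa;FLT=966;DEST=Frankfurt;ARR=11:45AM;"
              "TERM=A;GATE=17;BAG=C")

def parse_uplink(block):
    """Split a header-tagged uplink block into a field dictionary."""
    return dict(item.split("=", 1) for item in block.split(";"))

fields = parse_uplink(RAW_UPLINK)
print(fields["FLT"], fields["DEST"], fields["GATE"], fields["BAG"])
# -> 966 Frankfurt 17 C
```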
  • input messages may provide flight information such as altitude, ground speed, outside air temperature, time or distance to destination, time or distance from destination, etc.
  • weather-related messages may be received and processed, such as messages describing the temperature and weather conditions at the destination airport.
  • weather conditions within the vicinity of the aircraft may be described, including wind speed, visibility, ceiling, etc.
  • Messages providing marine-related information may be provided. For example, messages specifying the surf, tide, and marine visibility may be provided.
  • any input message can be processed so long as each of the component words for inclusion in the sentence is stored in the digitized memory of the system.
  • custom messages may be typed into a ground-based computer, then transmitted to the aircraft for conversion to a spoken audio message.
  • the variety of possible messages is limited only by the number of digitized words stored in the digitized memory of the system. Accordingly, by providing a system with a larger vocabulary of digitized words, a wider range of audio messages can be generated.
  • the system may also generate an optional video display for presentation to the passengers while the audio messages are simultaneously provided over the speaker system.
  • the system may extract the above-described flight information from the input message of Table II and format the information for a textual display.
  • the system may retrieve a map of the destination terminal and provide icons or the like identifying the locations of the various arrival and departure gates on the map.
  • Data processor 13 operates on the information it receives in the manner illustrated by the flowchart of Figure 4.
  • the input to data processor 13 is from a digital data bus input port on an interrupt basis, at 181. Whenever there is information to be received, the data processor interrupts whatever it is doing to read the new data.
  • processor 13 reads the input message containing the connecting gate data from the bus until a completed message, at 185, is received. The processor keeps returning to the interrupt, at 187, until an end-of-message is received.
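  • A sketch of this accumulate-until-complete behavior; the terminator byte and the stream interface are assumptions for illustration:

```python
# Sketch of the read-until-end-of-message behavior at steps 181-187.
END_OF_MESSAGE = "\x03"                    # assumed ETX-style terminator

def read_message(bus):
    """Accumulate characters from the bus until end-of-message arrives."""
    received = []
    for char in bus:                       # each iteration models an interrupt
        if char == END_OF_MESSAGE:
            return "".join(received)       # step 185: completed message
        received.append(char)              # step 187: await more data
    return None                            # bus closed before completion

print(read_message(iter("FLT=966;GATE=17" + END_OF_MESSAGE)))
# -> FLT=966;GATE=17
```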
  • the alphanumeric strings providing the fixed and variable words are extracted, at 189, from the input message.
  • the extracted alphanumeric strings are output to message assembler 200 for generation of audio messages based on data contained within the fixed and variable alphanumeric strings.
  • the thus-generated audio message is output to the passenger audio system, at 194, via a link line 101 to an audio broadcast system 103 ( Figure 3).
  • the audio messages may be broadcast over a public address speaker system within the passenger cabin or may be broadcast over a conventional multichannel individual headphone system to the passengers.
  • the message assembler may provide the audio messages in a variety of languages, each language either being provided over a separate audio channel or broadcast sequentially over a single channel. Background music may be provided to accompany the audio messages.
  • the extracted connecting gate information is arranged into its predetermined page format, at 191, for display.
  • a terminal chart signifying the destination airport specified in the input message is retrieved, at 193, from a data storage unit.
  • An aircraft symbol is positioned at the arrival gate on the terminal chart and the arrival gate and baggage claim area information is written on the terminal chart for display.
  • the terminal chart, along with its information, is output as a video signal to the video display according to a specified sequence, at 195.
  • the terminal chart is displayed, at 197, for a period of typically 10 to 60 seconds.
  • portions of the alphanumeric text containing the connecting gate information are displayed in a suitable format, at 199, for the specified period of time.
  • the duration of the video displays is synchronized with the duration of the audio message which is simultaneously broadcast.
  • a display such as an exemplary display illustrated in Figure 5, may be presented to the passengers while audio messages reciting the displayed information are simultaneously broadcast.
  • a display shown in Figure 6 may be provided to the passengers while an audio message reciting the baggage claim area is simultaneously broadcast.
  • the terminal chart of Figure 6 illustrates all the gates and terminal buildings for a particular airport, along with baggage claim areas.
  • the aircraft symbol is located next to the arrival gate.
  • the connecting gate information may be processed to produce audio messages and video displays immediately after the information is received over the ACARS system, or the information may be stored until the aircraft begins its approach to its destination.
  • the audio portion may be provided as a stand-alone system with no video display generation hardware or software required. In such case, only the audio messages are generated and broadcast. All of the information provided in a combined audio/video system is provided in a stand-alone audio system, with the exception that graphic displays such as flight plan maps and destination airport charts are not provided.
  • the stand-alone audio system is ideally suited for aircraft not possessing passenger video display systems.
  • the stand-alone audio system merely interfaces with a conventional multichannel passenger audio broadcast system, and provides flight information, as described above, through the passenger audio system.
  • In Figures 7-9, an alternative system for providing flight information to the passengers in the aircraft passenger compartment is illustrated.
  • the alternative system may tailor the information to various phases of the flight.
  • An alternative data processor 13' utilizes the received flight information and determines a current phase of the flight of the aircraft, i.e., the system determines whether the aircraft is in "en route cruise," "descent," etc. Once the current phase of the flight has been determined, data processor 13' generates audio messages and optional sequences of video display screens tailored to the current phase of the flight for presentation to the passengers of the aircraft. For example, if the aircraft is in an "en route cruise" phase, data processor 13' may generate an audio message reciting the ground speed and outside air temperature and simultaneously generate a video display screen for displaying the same information. If the aircraft is in a "descent" phase, data processor 13' may generate a sequence of audio messages reciting the time to destination and the distance to destination, and simultaneously generate a video display screen presenting the same information.
  • Each audio message provides useful information appropriate to the current phase of the flight plan. For example, during power on, preflight, engine start, and taxi out, various digitized audio messages may be provided which welcome passengers aboard the aircraft, describe the aircraft and, in particular, provide safety instructions to the passengers.
  • various audio messages may be generated which indicate points of interest over which the aircraft is flying or recite flight information received via message handler 63'. For example, if an input message is received providing ground speed, outside air temperature, time to destination, and altitude, an audio message may be generated by message assembler 200 reciting the information.
  • a video display screen such as the one shown in Figure 8 may be simultaneously provided. If the aircraft has approached a point of interest, an audio message may be assembled and broadcast to the passengers indicating the proximity of the aircraft to the point of interest.
  • a video display screen such as the one shown in Figure 9 may be simultaneously provided.
  • message assembler 200 may generate an audio voice message such as: "The current ground speed is 574 miles per hour. The current outside air temperature is minus 67 degrees Fahrenheit.” The audio message is then broadcast to the passengers.
  • Data processor 13' includes: a message handler 63' for receiving flight information messages; a flight information processor 65' for determining the current flight phase and for generating audio messages and video display sequences corresponding to the current flight phase or point of interest; and a data storage unit 69' for maintaining flight information and digitized data.
  • Message handler 63' receives flight phase information as encoded messages over data bus 59'. As each new flight information message is received, message handler 63' generates a software interrupt. Flight information processor 65' responds to the software interrupt to retrieve the latest flight information from message handler 63'. Once retrieved, flight information processor 65' stores the flight information in a flight information block 104' in data storage unit 69'.
  • In addition to maintaining digitized words and phrases for use in assembling audio messages, storage unit 69' also maintains specific sequences of graphic displays 120'. Storage unit 69' also maintains "range" tables 114', which allow flight information processor 65' to determine the current phase of the flight plan. For example, for the "en route cruise" phase, range table 114' may define an altitude range of at least 25,000 feet such that, if the received flight information includes the current altitude of the aircraft, and the current altitude is greater than 25,000 feet, flight information processor 65' can thereby determine that the current phase of the flight plan is the "en route cruise" phase and generate audio messages and optional video displays appropriate to that phase.
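  • The range-table lookup might be sketched as follows; only the 25,000-foot cruise threshold is taken from the text, and the remaining bands are assumptions (a real system would also consult vertical speed and similar inputs):

```python
# Sketch of phase determination from a range table ordered highest first.
RANGE_TABLE = [                  # (phase, minimum altitude in feet)
    ("en route cruise", 25_000),
    ("climb or descent", 1_000),
    ("ground operations", 0),
]

def current_phase(altitude_ft):
    for phase, minimum_altitude in RANGE_TABLE:
        if altitude_ft >= minimum_altitude:
            return phase

print(current_phase(31_000))  # en route cruise
print(current_phase(12_000))  # climb or descent
```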
  • Range tables 114' also include points of interest along the flight route of the aircraft. For each point of interest, range tables 114' provide the location of the point of interest and a "minimum range distance" for the point of interest. If the received flight information includes the location of the aircraft, flight information processor 65' determines whether the aircraft is located within the minimum range associated with any of the points of interest. Thus, once the aircraft has reached the vicinity of a point of interest, the system automatically generates audio messages and optional video display screens informing the passengers of the approaching point of interest.
  • the audio message may recite the name of the point of interest and the distance and travel time to the point of interest and the relative location of the point of interest to the aircraft, i.e., "left” or "right.”
  • the audio messages may be provided in a variety of languages, with each language broadcast on a different audio channel.
  • digitized monologues describing the points of interest may be accessed from a mass storage device for playback while the aircraft is in the vicinity of the point of interest.
  • the message assembler need not be used to assemble audio messages. Rather, fixed digitized monologues are simply broadcast. These may be accompanied by background music.
  • the optional video screens may provide, for example, the name of the point of interest, the distance and travel time to the point of interest, and a map including the point of interest, with the flight route of the aircraft superimposed thereon.
  • flight information processor 65' compares the current location of the aircraft with the location of points of interest in the data base tables and determines whether the aircraft has reached the vicinity of a point of interest.
  • range table 114' can include points of interest such as cities and, for each point of interest, include the location in latitude and longitude and a minimum range distance.
  • Table III - POINTS OF INTEREST
    Item      Latitude      Longitude      Minimum Range
    City A    45 degrees    112 degrees    100 miles
    City B    47 degrees    114 degrees    10 miles
    City C    35 degrees    110 degrees    5 miles
  • Flight information processor 65' includes an algorithm for comparing the current location of the aircraft to the location of each city and for calculating the distance between the aircraft and the city. Once the distance to the city is calculated, flight information processor 65' determines whether the distance is greater than or less than the minimum range specified for that city.
  • Taking City A as an example, if the aircraft is 200 miles from City A, flight information processor 65' will determine that the aircraft has not yet reached the vicinity of City A. Whereas, if the distance between the aircraft and City A is 90 miles, flight information processor 65' can determine that the aircraft has reached the vicinity of City A and initiate the sequence of displays, previously described, informing the passengers.
  • the algorithm for calculating the distance between the aircraft and each point of interest, based on the latitudes and longitudes, is conventional in nature and will not be described further. The algorithm may take considerable processing time and, hence, is only executed periodically. For example, the point-of-interest table is only accessed after a certain number of miles of flight or after a certain amount of time has passed.
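  • A sketch of this vicinity test against Table III; the haversine great-circle formula is an assumption, since the text leaves the conventional distance algorithm unspecified:

```python
import math

# Sketch of the Table III proximity check. Coordinates echo Table III.
POINTS_OF_INTEREST = [   # (name, latitude deg, longitude deg, range miles)
    ("City A", 45.0, 112.0, 100.0),
    ("City B", 47.0, 114.0, 10.0),
    ("City C", 35.0, 110.0, 5.0),
]

EARTH_RADIUS_MILES = 3959.0

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def points_in_vicinity(lat, lon):
    """Names of points whose minimum range contains the aircraft."""
    return [name for name, plat, plon, rng in POINTS_OF_INTEREST
            if distance_miles(lat, lon, plat, plon) <= rng]

print(points_in_vicinity(45.5, 112.5))  # -> ['City A']
```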
  • Range table 114' may include the location of a wide variety of points of interest, including cities, landforms, the equator, the International Date Line, and the North and South Poles.
  • Described above is a message assembler for generating natural-sounding spoken sentences conveying input data.
  • the message assembler has been described in combination with a flight information system for aircraft passengers that provides useful information to the passengers en route to their destination.
  • the system connects into a conventional passenger audio broadcast system.
  • the system provides destination terminal information such as connecting gates and baggage claim areas and flight information.
  • the flight information is tailored to the current phase of the flight plan of the aircraft. For example, messages describing points of interest are generated as the aircraft reaches the vicinity of the points of interest.
  • the systems can be combined to provide both types of information. In such a combined system, the destination terminal information may be automatically presented once the aircraft reaches the "approach" phase of the flight.
  • the system may also provide the information in video form over a video display system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Machine Translation (AREA)
EP93302701A 1993-04-06 1993-04-06 AV information system Withdrawn EP0620697A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP93302701A EP0620697A1 (de) 1993-04-06 1993-04-06 AV information system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP93302701A EP0620697A1 (de) 1993-04-06 1993-04-06 AV information system
AU36718/93A AU667347B2 (en) 1993-04-06 1993-04-06 Real-time audio message system for aircraft passengers

Publications (1)

Publication Number Publication Date
EP0620697A1 (de) 1994-10-19

Family

ID=25623696

Family Applications (1)

Application Number Title Priority Date Filing Date
EP93302701A Withdrawn EP0620697A1 (de) 1993-04-06 1993-04-06 AV-Informationssystem

Country Status (1)

Country Link
EP (1) EP0620697A1 (de)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975696A (en) * 1987-03-23 1990-12-04 Asinc, Inc. Real-time flight and destination display for aircraft passengers
EP0427485A2 * 1989-11-06 1991-05-15 Canon Kabushiki Kaisha Method and device for speech synthesis
US5177800A (en) * 1990-06-07 1993-01-05 Aisi, Inc. Bar code activated speech synthesizer teaching device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 12, no. 194 (P-713), 7 June 1988 & JP-A-62 298 869 (RICOH) 25 December 1987 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749221A3 * 1995-06-14 1998-07-29 American Airlines Inc. Method and apparatus for real-time information transmission over ordinary telecommunication lines
GB2404545A (en) * 2003-04-24 2005-02-02 Visteon Global Tech Inc Text-to-speech system for generating announcements
GB2404545B (en) * 2003-04-24 2005-12-14 Visteon Global Tech Inc Text-to-speech system for generating information announcements
EP2189759A3 * 2008-11-24 2013-11-20 Honeywell International Inc. System and method for displaying graphical departure procedures
EP3300345A1 * 2016-09-23 2018-03-28 Airbus Operations GmbH Dynamically adapting pre-recorded announcements
US10232941B2 (en) 2016-09-23 2019-03-19 Airbus Operations Gmbh Dynamically adapting pre-recorded announcements
DE102019003553A1 * 2019-05-21 2020-11-26 Diehl Aerospace Gmbh Automatic announcement in a passenger aircraft
US11299271B2 (en) 2019-05-21 2022-04-12 Diehl Aerospace Gmbh Automatic announcement in a passenger aircraft
DE102019003553B4 2019-05-21 2024-06-27 Diehl Aerospace Gmbh Announcement device, passenger aircraft, method for outputting an announcement in a passenger aircraft, and use of a CMS

Similar Documents

Publication Publication Date Title
US4975696A (en) Real-time flight and destination display for aircraft passengers
EP0533310B1 (de) Flight phase information display system for aircraft passengers
EP2858067B1 (de) System and method for correcting accent-induced speech in an aircraft cockpit using a dynamic speech database
US7580377B2 (en) Systems and method of datalink auditory communications for air traffic control
US8306675B2 (en) Graphic display system for assisting vehicle operators
US8335988B2 (en) Method of producing graphically enhanced data communications
CN103489334B (zh) Device for assisting communication in the field of aviation
US20100332122A1 (en) Advance automatic flight planning using receiver autonomous integrity monitoring (raim) outage prediction
WO2002036427A3 (en) Weather information network including graphical display
WO2002069294A8 (en) A system and method for automatically triggering events shown on aircraft displays
US20230005483A1 (en) System and method for displaying radio communication transcription
EP0620697A1 (de) AV information system
AU667347B2 (en) Real-time audio message system for aircraft passengers
Prinzo et al. US airline transport pilot international flight language experiences, Report 1: Background information and general/pre-flight preparation
Saïd et al. The Ibn Battouta Air Traffic Control Corpus with real life ADS-B and METAR data
Lind et al. The influence of data link-provided graphical weather on pilot decision-making
US20180086465A1 (en) Dynamically Adapting Pre-Recorded Announcements
Cartwright et al. A history of aeronautical meteorology: personal perspectives, 1903–1995
McCauley et al. Assessment of cockpit interface concepts for data link retrofit
ARMSTRONG Automatic speech recognition in air-ground data link (abstract only)
Duke Aircraft descended below minimum sector altitude and crew failed to respond to GPWS as chartered Boeing 707 flew into mountain in Azores
Francetić et al. Analysis of radiotelephony communication errors made during the student-pilot training flight
Dessler STRUCTURE AND FUNCTION OF A SECONDARY LINGUISTIC CODE: COMMUNICATION OF AIR TRAFFIC CONTROLLERS
Newton GENERAL AVIATION'S METEOROLOGICAL REQUIREMENTS
White Advanced transport operating systems program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): BE CH DE FR GB LI NL

17P Request for examination filed

Effective date: 19950331

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19960116