EP1640968A1 - Method and device for speech synthesis - Google Patents

Method and device for speech synthesis (Procédé et dispositif pour la synthèse de la parole)

Info

Publication number
EP1640968A1
Authority
EP
European Patent Office
Prior art keywords
speech
units
linguistic
features
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04447212A
Other languages
German (de)
English (en)
Inventor
Richard Beaufort
Vincent Colotte
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Multitel ASBL
Original Assignee
Multitel ASBL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Multitel ASBL filed Critical Multitel ASBL
Priority to EP04447212A priority Critical patent/EP1640968A1/fr
Priority to EP20050447078 priority patent/EP1589524B1/fr
Priority to AT05447078T priority patent/ATE389224T1/de
Priority to DE200560005241 priority patent/DE602005005241D1/de
Publication of EP1640968A1 publication Critical patent/EP1640968A1/fr
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/06 Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L13/07 Concatenation rules
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L13/10 Prosody rules derived from text; Stress or intonation

Definitions

  • The present invention relates to a method and a device for speech synthesis.
  • Natural language processing aims at extracting information that allows reading the text aloud. This information can vary from one system to another but always comprises words, their nature and their phonetisation.
  • Units selection aims at choosing speech units that correspond to the information extracted by natural language processing.
  • Digital signal processing concatenates the selected speech units and, if needed, changes their acoustic characteristics so that the required speech signals are obtained.
  • These units, extracted from read-aloud sequences, are diphones, i.e. pieces of speech starting in the middle of one phoneme and ending in the middle of the following phoneme (see Fig. 2). A diphone thus extends from the stable part of a phoneme to the stable part of the following phoneme and contains, in its middle part, the coarticulation phase characterising the transition from one phoneme to the next, which is very difficult to model mathematically.
  • Using diphones as speech units improves speech generation and makes it easier, because concatenation is performed on their stable parts.
  • The first systems using vocal databases for synthesis employed only one sample of each diphone.
  • The underlying idea was to get rid of the acoustic variations present in the diphones that depend on the moment of elocution: accent, tone, fundamental frequency and duration.
  • The stored diphones are then merely acoustic parameters describing the vocal tract.
  • Fundamental frequency, prosody and duration have to be regenerated during synthesis.
  • Diphones may need to undergo some acoustic modifications in order to obtain the required prosodic features. This unfortunately leads to a loss of quality: the synthesised voice seems less natural.
  • The prosody, however, remains neutral and listless.
  • Neutral speech units constitute an important drawback to overcome; therefore, non-uniform units started to be investigated.
  • By non-uniform is meant that the speech unit may vary in two ways: in length and in acoustic production.
  • Length variation means that the unit is not exclusively a diphone, but may be either shorter or longer. Longer units imply less frequent concatenation problems. However, in some cases, the corpus constitution (an inconsistency or incompleteness) can impose the use of a smaller unit, like a phoneme or half-phoneme. Therefore a variation in terms of length may be considered in both directions.
  • Variation in terms of acoustic production means that the same unit has to appear several times in the corpus: for the same unit, there may be several representations with different acoustic realisations. By doing so, units are no longer neutral; they reflect the variations occurring during elocution.
  • The search for speech units corresponding to the units described by the natural language analysis often yields several candidates for each target unit.
  • The result of this search is a lattice of possible units, allocated to different positions in the speech signal. Each position corresponds to one unit to be searched for and covers the potential candidates found in the corpus (see Fig. 3). The challenge is thus to determine the best sequence of units to select in order to generate the speech signal.
  • To choose among the candidates, two costs should be used: the target cost and the concatenation cost.
  • The target cost gives the distance between a target unit and the units coming from the corpus. It is computed from the features added to each speech unit.
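  • By way of illustration only (this code is not part of the patent), a minimal feature-based target cost of this kind could look as follows; the feature names, values and weights are invented for the example:

```python
# Illustrative sketch only: a weighted feature-mismatch target cost.
# Feature names, values and weights are invented; in the patent the
# weights are derived automatically (see the gain ratio discussion below).

target = {"left_phoneme": "s", "right_phoneme": "i",
          "syllables_in_word": 1, "word_position": 0, "emphasis": 0}

candidate = {"left_phoneme": "s", "right_phoneme": "e",
             "syllables_in_word": 2, "word_position": 0, "emphasis": 0}

weights = {"left_phoneme": 1.0, "right_phoneme": 1.0,
           "syllables_in_word": 0.4, "word_position": 0.3, "emphasis": 0.6}

def target_cost(target, candidate, weights):
    """Sum the weights of the features on which target and candidate differ."""
    return sum(w for f, w in weights.items() if target[f] != candidate[f])

print(target_cost(target, candidate, weights))  # 1.4
```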
  • The concatenation cost estimates the acoustic distance between units to be concatenated.
  • The systems that have been set up determine the concatenation cost between adjacent units in terms of an acoustic distance based on several criteria, such as the fundamental frequency, an intensity difference or the spectral distance. Note that this acoustic distance does not necessarily correspond to the acoustic distance actually perceived by a listener.
  • The selection of a sequence of units for a particular sentence is expensive in terms of CPU time and memory if no efficient optimisation is used. So far, two kinds of optimisation have been investigated.
  • The first optimisation manages the whole selection: a single unit sequence has to be selected from the lattice. This task corresponds to finding the best path in a graph and is usually solved by dynamic programming, by means of the well-known Viterbi algorithm.
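  • A compact sketch of such a Viterbi search over the candidate lattice, assuming the target and concatenation costs are available as functions (all names are illustrative, not from the patent):

```python
def best_path(lattice, target_cost, concat_cost):
    """Viterbi search over a unit lattice.

    lattice[t] is the list of candidate units for target position t;
    target_cost(t, c) and concat_cost(p, c) return non-negative floats.
    Returns the candidate sequence with minimal total cost.
    """
    # cost[t][i]: cheapest cumulative cost of a path ending in lattice[t][i];
    # back[t][i]: index in lattice[t - 1] of the predecessor on that path.
    cost = [[target_cost(0, c) for c in lattice[0]]]
    back = [[None] * len(lattice[0])]
    for t in range(1, len(lattice)):
        cost.append([])
        back.append([])
        for c in lattice[t]:
            totals = [cost[t - 1][i] + concat_cost(p, c)
                      for i, p in enumerate(lattice[t - 1])]
            i_best = min(range(len(totals)), key=totals.__getitem__)
            cost[t].append(totals[i_best] + target_cost(t, c))
            back[t].append(i_best)
    # Trace back from the cheapest final candidate.
    i = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    path = []
    for t in range(len(lattice) - 1, -1, -1):
        path.append(lattice[t][i])
        i = back[t][i]
    return path[::-1]
```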
  • The second optimisation consists in assessing the importance of the different features used to determine the target or concatenation cost. Indeed, the features cannot all be considered equally important: some affect the resulting quality more than others. Consequently, the ideal weighting for the selection process has been investigated.
  • The systems proposed so far, however, apply a manually implemented weighting, which is consequently competence-based and depends on the operator's expertise rather than on statistical values.
  • One possible weighting method suggests forming a network between all sounds of the corpus (see Campbell and Black, 'Prosody and the Selection of Source Units for Concatenative Synthesis', pp. 279-292, Springer-Verlag, 1996). Once this network has been set up, a learning phase can start, aiming at improving the acoustic similarity between a reference sentence and the signal produced by the system. This improvement can be achieved by tuning the feature weighting, either by successive iterations or by linear regression.
  • This method has two inherent drawbacks: on the one hand its computational load, which consumes considerable resources even though the training is performed off-line, and on the other hand the limited number of features the computation can weight. Most of the time, part of the weighting still has to be done manually. To reduce the computational load, one can cluster the sounds and keep only one representative sound, the centroid, on which the selection computation is performed.
  • Another weighting method relies on a corpus representation based on a phonetic and phonological tree (see e.g. Breen & Jackson, 'Non-uniform unit selection and the similarity metric within BT's Laureate TTS system', 3rd ESCA/COCOSDA Workshop on Speech Synthesis, pp. 201-206, Jenolan Caves, Australia, Nov. 26-29, 1998). During the selection, candidate units with the same context as the target unit are looked for. The features used, however, are not automatically weighted.
  • Non-uniform units-based systems try to give synthesised speech a more natural character, closer to human speech than that generated by previous systems. This goal is achieved by using non-neutralised units of variable length.
  • The performance of such speech synthesis systems is currently limited by the intrinsic weakness of their prosodic models, which are restricted to a few acoustic and symbolic parameters. These models, whether corpus- or rule-based, are not sufficient, as they do not allow a natural prosodic variation of the synthesised sentences. Yet the quality of the prosody directly determines how listeners perceive synthesised speech.
  • The use of such prosodic models does show a major advantage: selecting acoustic units that are relatively neutral limits the discontinuities between the units to be concatenated. As a consequence, spectral smoothing at unit boundaries can be kept to a strict minimum, which preserves the naturalness of the speech units.
  • The present invention aims to provide a speech synthesis method that does not need any prosodic model and that requires little digital signal processing. It also aims to provide a speech synthesis device operating according to the disclosed synthesis method.
  • The present invention relates to a method for synthesising speech, comprising the steps of:
  • Said selected linguistic features are determined in a training step preceding the above-mentioned steps.
  • The step of selecting candidate speech units is performed using a database comprising information on phonemes and at least their linguistic features.
  • The information on the linguistic features comprises a weighting coefficient for each linguistic feature.
  • The weighting coefficients typically result from an automatic weighting procedure.
  • The information is obtained from a step of labelling and segmenting a corpus.
  • The speech units are diphonic units.
  • A target cost is calculated for each candidate cluster.
  • A target cost is calculated from the target costs for the candidate clusters.
  • The concatenation of speech units is performed taking into account said target cost as well as a concatenation cost.
  • The linguistic features comprise features from the group {surrounding phonemes, emphasis information, number of syllables, syllables, word location, number of words, rhythm group information}.
  • The invention also relates to a speech synthesis device comprising a linguistic analysis engine producing the phonemes to be pronounced and, associated with each phoneme, a list of linguistic features,
  • The speech synthesis device further comprises calculation means for automatically computing a weighting coefficient for each linguistic feature.
  • Fig. 1 represents a Text-to-Speech Synthesiser system.
  • Fig. 2 represents the segmentation into phonemes and diphones. "_" corresponds to silence.
  • Fig. 3 represents a lattice network for the diphone sequence of the word 'speech'.
  • Fig. 4 represents the steps of the method according to the present invention.
  • The present invention discloses a speech units selection system freed from any prosodic model (either acoustic or symbolic), which allows more prosodic variation in the synthesised sentences while applying only little signal processing at the units' boundaries.
  • Speech units selection in the method according to the present invention is based exclusively on a set of features selected among the linguistic information provided by the language analysis.
  • Any prosodic model, whether rule- or corpus-based, relies on a list of linguistic features that allow choosing values for every acoustic or symbolic feature of the model.
  • In other words, a prosodic model is just an acoustic and symbolic synthesis of linguistic features.
  • Moreover, a prosodic model is deterministic: from a finite list of linguistic features, the model always deduces the same prosodic features. Language, however, is not deterministic: the same speaker may pronounce a given sentence, which has a single linguistic analysis, in different ways. The parameters influencing the pronunciation and prosody of the sentence can be affective or intellective.
  • The synthesis method according to the invention is divided into a training phase and a run-time phase. In both phases the same linguistic analysis engine is used for the linguistic features extraction, which gives the system some homogeneity.
  • In the training phase, it is first necessary to list the linguistic features relevant for selecting the units. Once this list is obtained, the further training consists in labelling and segmenting the corpus, as well as weighting the linguistic features. Note that in text-to-speech synthesis a spoken language corpus is always paired with a written corpus that is its transcription. The written corpus helps in choosing labels and features for each unit of the spoken language corpus.
  • The spoken language corpus may also be called a speech units corpus or a speech units database.
  • The run-time phase is carried out on a sentence applied to the input of the synthesis system. First, the sentence is analysed linguistically. Then candidate speech units are selected on the basis of the selected linguistic features. Lastly, the selected units are concatenated in order to form the speech signal corresponding to the sentence. Both phases are now presented in detail.
  • The features selection is intrinsically linked to the linguistic analysis engine, whose capabilities determine the amount of available linguistic information.
  • The exclusive use of linguistic features for the selection makes it necessary to add supplementary, prosody-affecting information to the features typically used (such as the phonemes around the target, syllabification, the number of syllables in the word, the location of words in the sentence).
  • Linguistic features beyond such very common ones as the phonemes surrounding the target unit and the number of syllables in the word are rarely used in state-of-the-art systems. Consequently, the analysis engine must be powerful enough to determine the required additional information.
  • Said additional information comprises:
  • Each sentence of the written corpus is annotated as follows: the number of words and the place of each word in the sentence, the syllabification and phonetisation of the words, and a synthesis, in terms of articulatory criteria, of the phonemic contexts of each phoneme.
  • The annotation elements are then discretised as integer values and stored in a linguistic units database, in which each phoneme is linked with its own linguistic features.
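  • A hypothetical example of such a record (the actual coding scheme and feature inventory are not specified in this text):

```python
# Hypothetical integer-coded record of the linguistic units database:
# one feature vector per phoneme occurrence. All codes are invented.
PHONEME_IDS = {"_": 0, "s": 1, "p": 2, "i": 3, "tS": 4}

record = {
    "phoneme": PHONEME_IDS["p"],   # the unit itself
    "left": PHONEME_IDS["s"],      # surrounding phonemes
    "right": PHONEME_IDS["i"],
    "syllables_in_word": 1,        # syllabification information
    "word_index": 3,               # place of the word in the sentence
    "words_in_sentence": 5,
    "position_ms": 18240,          # alignment with the speech units database
}
```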
  • The sentences of the spoken language corpus are segmented into phonemes and diphones. All phonemes occurring in the speech units corpus are then collected. For each phoneme, the acoustic features useful for the concatenation cost are calculated and also added to the speech units corpus.
  • These acoustic features are the fundamental frequency, the LPC (Linear Predictive Coding) coefficients and the intensity.
  • The number of clusters is set at 7; the acoustic representations of one phoneme are distributed over them according to their duration d: (1) d < M - 2D; (2) M - 2D ≤ d < M - D; (3) M - D ≤ d < M - D/2; (4) M - D/2 ≤ d < M + D/2; (5) M + D/2 ≤ d < M + D; (6) M + D ≤ d < M + 2D; (7) d ≥ M + 2D, where M denotes the mean duration of all representations of one phoneme and D the standard deviation of those durations.
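  • As an illustration, a representation could be assigned to its duration cluster as follows (the bounds reproduce the list above; the helper function itself is hypothetical):

```python
def duration_cluster(d, M, D):
    """Return the cluster index (1..7) for a representation of duration d,
    where M is the mean duration of the phoneme's representations and D
    their standard deviation."""
    bounds = [M - 2 * D, M - D, M - D / 2, M + D / 2, M + D, M + 2 * D]
    for k, bound in enumerate(bounds, start=1):
        if d < bound:
            return k
    return 7  # d >= M + 2D

print(duration_cluster(0.075, M=0.100, D=0.020))  # 2: M - 2D <= d < M - D
```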
  • Once the corpus has been labelled and segmented, the (fully automatic) weighting of the linguistic features may start.
  • The objective is to determine to what extent each feature allows discriminating between the clusters, each cluster being seen as a class to be selected or a decision to be taken.
  • The most appropriate method for this is a decision tree.
  • Decision tree building relies on the concept of entropy. Computing the entropy for a list of features allows classifying them according to their intrinsic information: the more a feature i reduces the uncertainty about which cluster C to select, the more informative and relevant it is.
  • The relevance of a feature i is computed as the gain ratio GR(i, C), i.e. the ratio of the Information Gain IG(i, C) to the Split Information SI(i).
  • The Split Information normalises the Information Gain of a given feature by taking into account the number of different values this feature can take.
  • The Gain Ratio determines the ranking of the features across the decision tree levels, and also serves to weight the features during the target cost calculation.
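  • A toy computation of this gain ratio, using a simple frequency-based entropy estimate (the function names and the example data are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature_values, clusters):
    """Gain ratio GR(i, C) = IG(i, C) / SI(i) for a single feature i.

    feature_values[k] is the value feature i takes on example k;
    clusters[k] is the cluster (class C) of example k.
    """
    n = len(clusters)
    # Information Gain: entropy reduction obtained by splitting on the feature.
    ig = entropy(clusters)
    for v in set(feature_values):
        subset = [c for f, c in zip(feature_values, clusters) if f == v]
        ig -= (len(subset) / n) * entropy(subset)
    # Split Information: entropy of the feature's own value distribution.
    si = entropy(feature_values)
    return ig / si if si > 0 else 0.0

# Toy example: a binary feature that partly predicts the cluster.
features = ["a", "a", "b", "b", "b", "a"]
clusters = [1, 1, 2, 2, 1, 1]
print(round(gain_ratio(features, clusters), 3))  # 0.459
```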
  • The weighting coefficients are also stored in the database.
  • At run-time, each time a sentence enters the system, the linguistic analysis generates the corresponding phonemes as well as a list of linguistic features associated with each of them. Every pair {phoneme, features} is defined as a target.
  • The speech units selection occurs in three steps:
  • The diphonic units to be selected are only those that can be formed from adjacent phonemic candidates in the speech units corpus. However, if a target diphone has no candidate, candidates are created that contain the target phoneme on its left- or right-hand side only, according to the diphone needed.
  • The units selection is performed in the traditional way, by solving the lattice with the Viterbi algorithm. In this way the path through the lattice of diphones that minimises the double cost {target, concatenation} is selected.
  • The target cost has already been pre-computed at the pre-selection stage, whereas the concatenation cost is determined while running through the lattice.
  • The concatenation cost has been defined as the acoustic distance between the units to be concatenated. To calculate this distance, the system thus needs acoustic features taken at the boundaries of the units to be concatenated: fundamental frequency, spectrum, energy and duration. The distance, and thus the cost, is obtained by adding up the distances computed on these features.
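  • A minimal sketch of such a concatenation cost from boundary features; the weights and the individual distance terms are assumptions for illustration, not the patent's actual formula:

```python
import math

def concat_cost(left, right, w_f0=1.0, w_energy=0.5, w_spec=1.0):
    """Acoustic distance between the end of unit `left` and the start of
    unit `right`, from boundary features: fundamental frequency, energy
    and LPC coefficients. Weights and distance terms are assumptions."""
    d_f0 = abs(left["f0_end"] - right["f0_start"])
    d_energy = abs(left["energy_end"] - right["energy_start"])
    # Spectral term: Euclidean distance between LPC coefficient vectors.
    d_spec = math.dist(left["lpc_end"], right["lpc_start"])
    return w_f0 * d_f0 + w_energy * d_energy + w_spec * d_spec

left = {"f0_end": 118.0, "energy_end": 0.62, "lpc_end": [1.2, -0.4, 0.1]}
right = {"f0_start": 121.0, "energy_start": 0.58, "lpc_start": [1.1, -0.5, 0.2]}
print(round(concat_cost(left, right), 2))  # 3.19
```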
  • Figure 4 shows a block scheme of a text-to-speech synthesis system that implements the method of the invention.
  • The system is split into three blocks, each corresponding to one of the steps of the run-time phase described above: the NLP (Natural Language Processing) block, the USP (Units Selection Processing) block and the DSP (Digital Signal Processing) block.
  • The input to the system is the text that is to be transformed into speech.
  • The output of the system is a speech signal concatenated from non-uniform speech units.
  • Each block uses databases.
  • The NLP block loads linguistic databases (DBA) for each task (pre-processing, morphological analysis, etc.).
  • The DSP block loads the Speech Units Database, from which speech units are selected and concatenated into a speech signal.
  • The USP block, in between, loads a Linguistic Units Database comprising a set of triplets {phoneme, linguistic features, position}.
  • The first pair, {phoneme, linguistic features}, describes a unit from the Speech Units Database.
  • The last element, position, is the position in milliseconds of the unit in the Speech Units Database. Both databases thus describe and store candidate units, and are aligned thanks to the position feature.
  • The NLP block aims at analysing the input text in order to generate a list of target units (T1, T2, ..., Tn). Each target unit is a pair {phoneme, linguistic features}.
  • The second block, the USP, works in three steps.
  • First, it selects from the Linguistic Units Database a set of phonemic candidates for each target unit, and a target cost is computed for each candidate. Then the candidate diphonic units are determined together with their target costs, and a lattice of weighted diphones is created, with one diphone position for each pair of adjacent phonemes. Finally, it selects by dynamic programming the best path of diphones through the lattice.
  • The DSP block takes the selected diphones from the Speech Units Database. It then concatenates them acoustically, using a technique of the OverLap-and-Add type, in which pitch values are used to improve the concatenation. No signal processing is necessary other than the concatenation itself: the selected units are concatenated without any discontinuity. As a result, the linguistic criteria used in the selection prove their relevance.
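  • As a rough illustration, an overlap-and-add join of two unit waveforms could look as follows (a plain linear crossfade; the pitch-synchronised placement described above is assumed to have already chosen the join point and overlap length):

```python
import numpy as np

def ola_join(a, b, overlap):
    """Concatenate 1-D waveforms a and b with a linear crossfade of
    `overlap` samples (a plain overlap-and-add join)."""
    fade = np.linspace(0.0, 1.0, overlap)
    return np.concatenate([
        a[:-overlap],
        a[-overlap:] * (1.0 - fade) + b[:overlap] * fade,
        b[overlap:],
    ])

sr = 16000
t = np.arange(sr // 10) / sr                   # 100 ms of samples
unit1 = np.sin(2 * np.pi * 120 * t)            # a 120 Hz voiced-like tone
unit2 = np.sin(2 * np.pi * 120 * t + 0.3)
speech = ola_join(unit1, unit2, overlap=160)   # 10 ms crossfade
print(speech.shape)  # (3040,)
```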
  • The technology can, for example, be used for broadcasting advertisements in shopping centres. Such advertisements must change frequently, which creates a frequent and expensive need for professional speakers.
  • The proposed synthesis method requires the services of a professional speaker only once, and subsequently allows any written text to be pronounced without additional cost.
  • Another application could be directed to information for travellers in railway stations, airports and the like.
  • The synthesis system according to the present invention can easily solve this problem.
  • Speech synthesis can also generate fluent interactive dialogues. This relates to dialogue systems that can model a conversation and automatically generate text in order to interact with the user.
  • Two traditional examples are interactive terminals in stations, airports and shopping centres, as well as vocal servers that are accessible by phone.
  • The systems currently used in this context are strongly limited: based on pieces of pre-recorded sentences, they are restricted to a few basic syntactic structures. Moreover, the result is less natural, because of prosodic discontinuities at word or word-group boundaries.
  • Synthesis by non-uniform units selection using linguistic criteria is the ideal solution to get rid of these drawbacks, as it is not limited in terms of syntactic structures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
EP04447212A 2004-04-15 2004-09-27 Procédé et dispositif pour la synthèse de la parole Withdrawn EP1640968A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP04447212A EP1640968A1 (fr) 2004-09-27 2004-09-27 Procédé et dispositif pour la synthèse de la parole
EP20050447078 EP1589524B1 (fr) 2004-04-15 2005-04-08 Procédé et dispositif pour la synthèse de la parole
AT05447078T ATE389224T1 (de) 2004-04-15 2005-04-08 Verfahren und vorrichtung zur sprachsynthese
DE200560005241 DE602005005241D1 (de) 2004-04-15 2005-04-08 Verfahren und Vorrichtung zur Sprachsynthese

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP04447212A EP1640968A1 (fr) 2004-09-27 2004-09-27 Procédé et dispositif pour la synthèse de la parole

Publications (1)

Publication Number Publication Date
EP1640968A1 true EP1640968A1 (fr) 2006-03-29

Family

ID=34933089

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04447212A Withdrawn EP1640968A1 (fr) 2004-04-15 2004-09-27 Procédé et dispositif pour la synthèse de la parole

Country Status (1)

Country Link
EP (1) EP1640968A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011131785A1 (fr) * 2010-04-21 2011-10-27 Université Catholique de Louvain Normalisation de textes bruyants tapés à la machine
CN106920547A (zh) * 2017-02-21 2017-07-04 腾讯科技(上海)有限公司 语音转换方法和装置
CN110622240A (zh) * 2017-05-24 2019-12-27 日本放送协会 语音向导生成装置、语音向导生成方法及广播系统

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097794A1 (fr) * 2001-05-25 2002-12-05 Rhetorical Group Plc Synthese vocale

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097794A1 (fr) * 2001-05-25 2002-12-05 Rhetorical Group Plc Synthese vocale

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
COLOTTE V ET AL: "Synthèse vocale par sélection linguistiquement orientée d'unités non-uniformes: LIONS", JOURNÉES D'ETUDE SUR LA PAROLE - JEP '04, 19 April 2004 (2004-04-19), FEZ, MOROCCO, XP002307516, Retrieved from the Internet <URL:http://www.lpl.univ-aix.fr/jep-taln04/proceed/actes/jep2004/Colotte-Beaufort.pdf> [retrieved on 20041125] *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011131785A1 (fr) * 2010-04-21 2011-10-27 Université Catholique de Louvain Normalisation de textes bruyants tapés à la machine
CN106920547A (zh) * 2017-02-21 2017-07-04 腾讯科技(上海)有限公司 语音转换方法和装置
KR20190065408A (ko) * 2017-02-21 2019-06-11 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 음성 변환 방법, 컴퓨터 장치 및 저장 매체
EP3588490A4 (fr) * 2017-02-21 2020-04-08 Tencent Technology (Shenzhen) Company Limited Procédé de conversion de parole, dispositif informatique et support d'enregistrement
US10878803B2 (en) 2017-02-21 2020-12-29 Tencent Technology (Shenzhen) Company Limited Speech conversion method, computer device, and storage medium
CN110622240A (zh) * 2017-05-24 2019-12-27 日本放送协会 语音向导生成装置、语音向导生成方法及广播系统

Similar Documents

Publication Publication Date Title
US20230058658A1 (en) Text-to-speech (tts) processing
US7124083B2 (en) Method and system for preselection of suitable units for concatenative speech
US5905972A (en) Prosodic databases holding fundamental frequency templates for use in speech synthesis
US7979274B2 (en) Method and system for preventing speech comprehension by interactive voice response systems
US20200410981A1 (en) Text-to-speech (tts) processing
US11763797B2 (en) Text-to-speech (TTS) processing
US20070282608A1 (en) Synthesis-based pre-selection of suitable units for concatenative speech
Latorre et al. New approach to the polyglot speech generation by means of an HMM-based speaker adaptable synthesizer
JP2002530703A (ja) 音声波形の連結を用いる音声合成
US10699695B1 (en) Text-to-speech (TTS) processing
JP2007249212A (ja) テキスト音声合成のための方法、コンピュータプログラム及びプロセッサ
Dutoit A short introduction to text-to-speech synthesis
Mullah A comparative study of different text-to-speech synthesis techniques
EP1589524B1 (fr) Procédé et dispositif pour la synthèse de la parole
JPH08335096A (ja) テキスト音声合成装置
EP1640968A1 (fr) Procédé et dispositif pour la synthèse de la parole
Louw et al. The Speect text-to-speech entry for the Blizzard Challenge 2016
Bruce et al. On the analysis of prosody in interaction
Houidhek et al. Evaluation of speech unit modelling for HMM-based speech synthesis for Arabic
Shah et al. Influence of various asymmetrical contextual factors for TTS in a low resource language
Latorre et al. New approach to polyglot synthesis: How to speak any language with anyone's voice
Klabbers Text-to-Speech Synthesis
Juergen Text-to-Speech (TTS) Synthesis
Toderean et al. Achievements in the field of voice synthesis for Romanian
Demenko et al. The design of polish speech corpus for unit selection speech synthesis

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

AKX Designation fees paid
REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20060930