CN1167999C - Method for converting a hypermedia document into speech - Google Patents


Info

Publication number
CN1167999C
CN1167999C · CNB981161952A · CN98116195A
Authority
CN
China
Prior art keywords
hypermedia
tag
pronunciation control instruction
word string
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB981161952A
Other languages
Chinese (zh)
Other versions
CN1243284A (en)
Inventor
钟锦钧
黄绍华
钟崇斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CNB981161952A priority Critical patent/CN1167999C/en
Publication of CN1243284A publication Critical patent/CN1243284A/en
Application granted granted Critical
Publication of CN1167999C publication Critical patent/CN1167999C/en

Abstract

The present invention relates to a system for converting a hypermedia document into a speech signal. The system comprises: a hypermedia markup language parsing unit, which reads and parses a hypermedia document and separates it into text content, hypermedia tags, and pronunciation control instructions; a pronunciation control instruction parsing unit, which analyzes the pronunciation control instructions and stores their contents in different tables according to instruction category; a text conversion unit, which performs character substitution and conversion to correct pronunciation; a tag conversion unit, which analyzes the hypermedia tags, controls the pronunciation mode of a conventional text-to-speech conversion unit, and performs sound-effect synthesis; and the conventional text-to-speech conversion unit, which converts the text content processed by the text conversion unit into a speech signal.

Description

Method for converting a hypermedia document into speech
Technical field
The present invention relates to systems for converting text into speech.
Background technology
A text-to-speech converter is a device that converts text into speech. For the visually impaired, such a device helps them listen to information from the outside world. In particular situations, it is also an important aid for the general public to acquire information, for example while driving or making a telephone call. The source of this information may be an electronic file, or text obtained through an optical scanner and a character recognition device.
In daily life, the sources of electronic information are increasingly numerous and growing exponentially: for example, e-mail, schedules, electronic news, stock information, and the much-watched World Wide Web. Converting this electronic information into digital speech by manual recording and digitization would not only consume enormous manpower and storage space; manual recording also cannot handle electronic information that a computer system generates automatically according to a user's demands.
For the designer of a text-to-speech conversion unit, how to convert the many kinds of electronic information intended for visual display into speech is certainly a challenge. The most important reason is that presenting electronic information involves not only its text content but also the manner of presentation, for example capitalization, boldface, italics, paragraph styles, and enumerated lists in a visual display. When performing text-to-speech conversion, the format and font control codes originally used to control the visual display cannot be converted directly into speech, nor can the punctuation marks in the text content. Moreover, a word string may be pronounced differently in different contexts; the polyphonic characters of Chinese are a typical example. Previous inventions have proposed various solutions to these problems.
United States Patent No. 5,555,343 discloses a text-to-speech conversion technique addressing this class of problems, covering the handling of format and font control codes, punctuation marks, and specific alphanumeric forms. The method uses a first pre-built table to map format and font control codes to speech control codes that control speaking rate or volume. It uses a second pre-built table to map specific alphanumeric forms to colloquial word strings; these forms include digit strings separated by colons for expressing times, digit strings separated by slashes for expressing dates, and text strings separated by slashes for expressing file directories. It uses a third pre-built table to map punctuation marks and mathematical symbols to colloquial word strings or speech control codes. Finally, a pre-built table determines whether an input character is pronounceable; when an unpronounceable character is encountered, a suitable articulation is determined from the first, second, and third tables.
United States Patent No. 5,634,084 discloses another text-to-speech conversion technique for this class of problems. The method classifies input text according to context into categories such as numbers, units of measure, geographical terms, times, and dates. The classified text is then expanded through one or more category-specific abbreviation tables into the corresponding colloquial words. For example, the place-name abbreviation "SF, CA" is converted into "San Francisco California", and "MPEG" into the colloquial "m peg".
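The table-driven abbreviation expansion described in that patent can be sketched roughly as follows. The table contents and function name are illustrative assumptions, not taken from the patent itself.

```python
# Hypothetical sketch of an abbreviation-table lookup of the kind
# described above; entries and names are illustrative only.
ABBREV_TABLE = {
    "SF": "San Francisco",
    "CA": "California",
    "MPEG": "m peg",
}

def expand_abbreviations(text: str) -> str:
    """Replace each known abbreviation token with its spoken form."""
    words = []
    for token in text.replace(",", " ").split():
        words.append(ABBREV_TABLE.get(token, token))
    return " ".join(words)

print(expand_abbreviations("SF, CA"))  # San Francisco California
```

A real system would classify tokens by context first, as the patent describes, before choosing which abbreviation table to consult.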
With the popularity of the Internet and the World Wide Web, the Web has become one of the main sources of electronic information today. Most electronic information on the Web is written in the Hypertext Markup Language (HTML); we refer to such documents as hypermedia documents. What distinguishes a hypermedia document from other electronic files is that its source text contains, besides the document content, hypermedia tags (HTML tags). A hypermedia tag is a text marker defined by the markup language, used to mark the content and structure of the document or to control its display. For example, the following shows the source text of a hypermedia document:
  <!BODY BGCOLOR=#DBFFFF>
  <body bgcolor=white>
  <CENTER>
  <map name="Main">
  <area shape="rect" coords="157,12,257,112" href="Main.html">
  <area shape="rect" coords="293,141,393,241" href="VRML.html">
  <area shape="rect" coords="18,141,118,241" href="VRML.html">
  <area shape="rect" coords="157,226,257,366" href="Main.html">
  </map>
  <img src="Images/Main.gif" usemap="#Main" border=0></img>
  <br><br><br><br>
  <b>
  <font size=3 color=black>
  Welcome to the VR workshop of our company
  </font>
  <a href="http://www.ccl.itri.org.tw"><font size=3 color=blue>ITRI</font></a>
  <font size=3 color=black>/</font>
  <a href="http://www.ccl.itri.org.tw"><font size=3 color=blue>CCL</font></a>
  <font size=3 color=black>. We have been<br>
  developing some advanced technologies as follows,<br>
  </b>
  <ul>
  <a href="Main.html">
  <li><font size=3 color=blue>Pano VR</font>
  </a>
  <font size=3>(A panoramic image-based VR)</font><br>
  <a href="VRML.html">
  <li><font size=3 color=blue>Cyber VR</font>
  </a>
  <font size=3>(A VRML 1.0 browser)</font><br>
  </ul>
  <br><br>
  <a href="Winner.html"><img src="Images/Winner.gif" border></img></a><br>
  </a>
  <br><br>
  <font size=3 color=black>
  <br>You are the <img src="cgibin/Count.cgi?df=vvr.dat" border=0 align=middle>th visitor<br>
  </font>
  <HR size=2 WIDTH=480 ALIGN=CENTER>
  (C)Copyright 1996 Computer and Communication Laboratory,<br>
  Industrial Technology Research Institute, Taiwan, R.O.C.
  </BODY>
As this example shows, the source text of a hypermedia document consists entirely of displayable characters; there are no undisplayable special control codes. A hypermedia tag is delimited by "<" and ">", and tags are divided into start tags and end tags. A start tag begins with "<", while an end tag begins with "</". Thus "<font size=3 color=black>" is the start tag of the "font" hypermedia tag, and "</font>" is its end tag.
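Recognizing start and end tags in such source text can be illustrated with a simple regular expression; this sketch is an assumption for illustration, not the parser used by the patent.

```python
# Classify each hypermedia tag in a source string as a start or end tag.
# The regular expression and function name are illustrative assumptions.
import re

TAG_RE = re.compile(r"<(/?)([A-Za-z][A-Za-z0-9]*)[^>]*>")

def classify_tags(source: str):
    """Return (tag name, "start" or "end") for each tag in the source."""
    return [(m.group(2), "end" if m.group(1) else "start")
            for m in TAG_RE.finditer(source)]

print(classify_tags('<font size=3 color=black>text</font>'))
```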
The hypermedia markup language gives special meaning to the text content delimited by the start tag and end tag of a hypermedia tag, in order to express the structure of the document, for example titles, paragraphs, lists, and tables. How these structural elements are displayed is controlled by the Web browser; the same hypermedia document therefore has different display modes on different browsers or display systems. In addition, hypermedia tags may be nested: in the example above, "<b>" and "<font size=3 color=black>" are applied to "Welcome to the VR workshop of our company" in a nested manner.
Although the example above is a hypermedia document written in English, the text content of a hypermedia document may be in other languages, such as Chinese, Japanese, or Korean. For readers of typical Chinese Web pages, certain English terms such as "Web", "World Wide Web", and "HTML" are not unfamiliar, so English proper nouns are regularly mixed into Chinese pages. Likewise, technical documents, meeting notices, and memoranda written in the markup language often use English proper nouns or abbreviations. A typical Web page therefore often mixes several languages. We must also consider that the same word string can have multiple pronunciations: the polyphonic characters of Chinese, for example, are pronounced differently depending on the preceding and following text.
The traditional text-to-speech conversion units described above are not suitable for converting a hypermedia document into a speech signal. The first problem to solve in the conversion process is how to parse the hypermedia document so as to recognize the hypermedia tags. Because hypermedia tags consist of displayable characters rather than special control codes, and because tags may be nested, prior inventions concerning text-to-speech conversion units are not suited to parsing hypermedia documents. Moreover, the traditional units described above cannot solve the polyphonic-character problem. Although some Chinese text-to-speech conversion units can solve part of that problem, they do not consider the mixing of multiple languages. The emphasis of the present invention is on overcoming these problems.
Summary of the invention
Accordingly, an object of the present invention is to provide a computer system, and a conversion method, that can convert a hypermedia document into a speech signal and can solve the polyphonic-character problem.
The method proposed by the present invention can convert a hypermedia document into a speech signal and achieve the other objects above. The embodiment proposed here is a computer system that converts a hypermedia document into a speech signal. The computer system comprises a hypermedia markup language parsing unit, a pronunciation control instruction parsing unit, a tag conversion unit, a text conversion unit, and a traditional text-to-speech conversion unit. The hypermedia markup language parsing unit reads and parses the input hypermedia document and separates it into text content, hypermedia tags marking the document structure, and pronunciation control instructions controlling the articulation. The pronunciation control instruction parsing unit analyzes the pronunciation control instructions and, according to the category of each instruction, stores its content in the tag mapping table, the sound-effect table, the parameter table, the polyphonic-character table, or the proper-noun table. The text conversion unit, according to the polyphonic-character table, replaces every word string in the text content whose pronunciation must be corrected with its substitute word string, thereby correcting the pronunciation; according to the proper-noun table, it also replaces every word string in the text content that must be translated with its translation word string, replacing the pronunciation of the original string. The tag conversion unit analyzes the hypermedia tags and, according to the tag mapping table and the parameter table, controls the voice parameters of the traditional text-to-speech conversion unit, changing the volume, speed, and pitch of the text content marked by each hypermedia tag; it also performs sound-effect synthesis according to the tag mapping table and the sound-effect table, making it easier for the listener to follow the structure of the document. Finally, the traditional text-to-speech conversion unit converts the text content processed by the text conversion unit into a speech signal, using the articulation settings made by the tag conversion unit.
The system constructed according to the present invention can convert a hypermedia document into a speech signal while also solving the polyphonic-character and mixed-language problems. The system is not only cleverly designed but also easy to use and extend. The apparatus and method not only enable personalized text-to-speech conversion, but also let suppliers of hypermedia documents design the spoken presentation of their documents more easily.
The invention provides a computer system for converting a hypermedia document into a speech signal, comprising: a hypermedia markup language parsing unit, which separates a document in hypermedia markup language format into text content, hypermedia tags marking the document structure, and pronunciation control instructions controlling the articulation; a pronunciation control instruction parsing unit, which analyzes the pronunciation control instructions and, according to their contents, modifies a tag mapping table, a sound-effect table, a parameter table, a polyphonic-character table, and a proper-noun table; a text conversion unit, which, when a word string whose pronunciation the polyphonic-character table requires to be corrected appears in the text content, corrects the pronunciation in the manner defined in the polyphonic-character table, and, when a word string that the proper-noun table requires to be translated appears in the text content, translates it in the manner defined in the proper-noun table; a tag conversion unit, which, when a hypermedia tag for which the tag mapping table requires voice parameters to be modified or a sound effect to be inserted appears in the hypermedia document, inserts sound-effect data according to the sound-effect table entry specified in the tag mapping table, and modifies the voice parameters of the text content marked by that hypermedia tag using the parameter table entry specified in the tag mapping table; and a text-to-speech conversion unit, which converts the text content, as modified by the text conversion unit and the tag conversion unit, into a speech signal.
The invention provides a method of converting a hypermedia document into a speech signal in a pronunciation control instruction parsing unit, comprising the following steps: analyzing sound control instructions that specify the volume, speed, and pitch voice parameters a particular hypermedia tag should use, and sound-effect control instructions that specify the sound-effect data a particular hypermedia tag should use; according to the content of each sound control instruction, storing the voice parameters in a parameter table entry and setting, in the tag mapping table, the correspondence between the hypermedia tag and that parameter table entry; according to the content of each sound-effect control instruction, storing the sound-effect data in a sound-effect table entry and setting, in the tag mapping table, the correspondence between the hypermedia tag and that sound-effect table entry; and analyzing polyphonic-character control instructions that specify the pronunciation correction for a particular word string in the text content of the hypermedia document together with its surrounding context strings, and proper-noun control instructions that specify the translation for a particular word string in the text content, wherein the correction and the translation are text strings that a text-to-speech conversion unit can convert into speech signals; the pronunciation control instruction parsing unit, according to the content of each polyphonic-character control instruction, builds a polyphonic-character table storing the particular word string together with its correction and its context strings, and, according to the content of each proper-noun control instruction, builds a proper-noun table storing the particular word string together with its translation.
The invention provides a method of converting a hypermedia document into a speech signal in a text conversion unit, comprising the following steps: for each particular word string whose pronunciation must be corrected as defined by an entry of the polyphonic-character table, replacing that word string in the text content of the hypermedia document with the substitute word string the entry specifies for correcting pronunciation, wherein the substitute word string can fix one specific pronunciation among the multiple pronunciations of the particular word string; and for each particular word string that must be translated as defined by an entry of the proper-noun table, replacing that word string in the text content with the translation word string the entry specifies, wherein the translation word string allows a word string that otherwise could not be converted into a speech signal to be converted by the text-to-speech conversion unit into the specified speech signal.
The invention provides a method of converting a hypermedia document into a speech signal, comprising the step of analyzing pronunciation control instructions, which itself comprises the following steps: for a sound control instruction, generating a tag mapping table entry retrieved by the hypermedia tag the instruction specifies, generating a parameter table entry storing the volume, speed, and pitch voice parameters the instruction specifies, and setting the index field of the tag mapping table entry to point to that parameter table entry; for a sound-effect control instruction, generating a tag mapping table entry retrieved by the hypermedia tag the instruction specifies, generating a sound-effect table entry storing the sound-effect data the instruction specifies, and setting the index field of the tag mapping table entry to point to that sound-effect table entry; for a polyphonic-character control instruction, generating a polyphonic-character table entry retrieved by the particular word string whose pronunciation must be corrected, storing the substitute word string the instruction specifies, which converts the particular word string with multiple pronunciations into one specific pronunciation; and for a proper-noun control instruction, generating a proper-noun table entry retrieved by the particular word string that must be translated, storing the translation word string the instruction specifies, which allows a particular word string that the text-to-speech conversion unit otherwise could not convert into a speech signal to be converted into a specific speech signal.
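The four analysis branches above can be sketched as a small table-building routine. The data layout, key names, and example instructions below are illustrative assumptions for demonstration only; the patent does not prescribe a concrete implementation.

```python
# Illustrative sketch of the four-way pronunciation-instruction analysis.
# Table layouts and names are assumptions, not the patent's design.
def analyze_instruction(fields, tables):
    """fields: one instruction split into tokens, e.g. ["PARAM", "LI", "speed=1.0", ...]."""
    kind = fields[0]
    if kind == "PARAM":                    # voice parameters for a tag
        tables["param"].append(dict(f.split("=") for f in fields[2:]))
        tables["tag"][(fields[1], kind)] = len(tables["param"]) - 1
    elif kind == "AUDIO":                  # sound-effect data for a tag
        tables["audio"].append(fields[2])
        tables["tag"][(fields[1], kind)] = len(tables["audio"]) - 1
    elif kind == "ALT":                    # polyphonic-character substitution
        word, substitute, *context = fields[1:]
        tables["alt"][word] = (substitute, context)
    elif kind == "TERM":                   # proper-noun translation
        tables["term"][fields[1]] = fields[2]

tables = {"param": [], "audio": [], "tag": {}, "alt": {}, "term": {}}
analyze_instruction(["PARAM", "LI", "speed=1.0", "volume=0.8", "pitch=1.2"], tables)
analyze_instruction(["AUDIO", "LI", "beep.au"], tables)
```

The tag mapping is keyed here by (tag, instruction type), echoing the patent's type field and index field that point into the parameter or sound-effect table.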
Description of drawings
Fig. 1 illustrates a specific embodiment of the present invention.
Fig. 2 illustrates the data flow between the elements of the present invention.
Fig. 3 illustrates a sequence of pronunciation control instructions hidden in a hypermedia document.
Fig. 4A, 4B, and 4C illustrate the parameter table, the sound-effect table, and the tag mapping table, respectively.
Fig. 5A and Fig. 5B illustrate the polyphonic-character table and the proper-noun table, respectively.
Fig. 6 illustrates the execution steps of the file-reading control module.
Fig. 7 illustrates the execution steps of the pronunciation control instruction parsing unit.
Fig. 8 illustrates the execution steps of the text conversion unit.
Fig. 9 illustrates the execution steps of the tag conversion unit.
Embodiment
Fig. 1 illustrates a hypermedia speech conversion system 10 constructed according to the present invention. The system is a computer system that includes a central processing unit 11, main memory 12, a network device 13, a telephone interface device 14, a keyboard and mouse 15, an audio device 16, a display 17, and a storage device 18. The system uses a bus 19 to link the devices 11-18 together; through the bus 19, instructions and data can be transferred among the devices 11-18.
The storage device 18 may be a disk drive and is used to store data and programs (processes). The main memory 12 also stores data and programs, but normally stores the instructions and data that the central processing unit 11 is currently executing. The central processing unit 11 executes program instructions and processes data. The network device 13 connects to a computer network and may be, for example, an Ethernet connector or another type of network card. The telephone interface device 14 connects to the telephone network. The keyboard and mouse 15 receive commands or data entered by the user. The display 17 shows electronic information as text or graphics. The audio device 16 receives digital audio signals and produces sounds or sound effects through a loudspeaker or earphone.
As shown in Fig. 1, the storage device 18 stores an operating system, applications, a hypermedia document file 23, a pronunciation control instruction file 21, and a file-reading module 29. The operating system and applications are well-known technology and are not repeated here. The file-reading module 29 includes a file-reading control module 28, a text-to-speech conversion unit 27, a hypermedia markup language parsing unit 24, a pronunciation control instruction parsing unit 22, a tag conversion unit 25, a tag mapping table 41, a parameter table 42, a sound-effect table 43, a text conversion unit 26, a polyphonic-character table 31, and a proper-noun table 32.
Although the programs 22 and 24-28 described above are executed by the central processing unit 11 in a time-sharing manner, this is merely for ease of describing the method of the present invention. The programs 22 and 24-28 can also achieve the same functions in hardware embodiments using well-known hardware techniques; such embodiments are not repeated here. In addition, the text-to-speech conversion unit 27 and the hypermedia markup language parsing unit 24 are also well-known technology and are not described in detail here.
The file-reading control module 28 controls the data flow among the programs 22 and 24-27 throughout the conversion process. Fig. 2 shows the data flow under the control of the file-reading control module 28. The hypermedia document file 23 may come from the network device 13 or be read from the storage device 18; likewise, the pronunciation control instruction file 21 may come from the network device 13 or be read from the storage device 18.
The hypermedia markup language parsing unit 24 parses the content of the hypermedia document file 23 and separates it into text content, hypermedia tags marking the document structure, and pronunciation control instructions controlling the articulation. The hypermedia markup language parsing unit 24 delivers the separated hypermedia tags to the tag conversion unit 25, the separated text content to the text conversion unit 26, and the separated pronunciation control instructions to the pronunciation control instruction parsing unit 22.
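This separation step might be sketched, for illustration only, with Python's standard html.parser; the patent does not prescribe any particular parsing technique, and the class and field names here are assumed.

```python
# Minimal separation sketch: HTML comments whose first token is an
# instruction code are treated as pronunciation control instructions,
# while tags and visible text are collected separately.
from html.parser import HTMLParser

class HypermediaSeparator(HTMLParser):
    def __init__(self):
        super().__init__()
        self.instructions, self.tags, self.text = [], [], []

    def handle_comment(self, data):
        tokens = data.split()
        if tokens and tokens[0] in ("PARAM", "AUDIO", "ALT", "TERM"):
            self.instructions.append(" ".join(tokens))

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

sep = HypermediaSeparator()
sep.feed('<!-- PARAM LI speed=1.0 --><ul><li>Pano VR</li></ul>')
```

Feeding the example page from the Background section through such a separator would likewise yield the tag stream, the visible text, and any embedded instructions.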
The pronunciation control instruction parsing unit 22 analyzes the pronunciation control instructions, which may be stored in an independent pronunciation control instruction file 21 or embedded in the hypermedia document file 23. Pronunciation control instructions fall into the following four types:
(1) sound control instructions, of the form: PARAM hypermedia-tag attributes voice-parameters;
(2) sound-effect control instructions, of the form: AUDIO hypermedia-tag attributes sound-effect-data;
(3) polyphonic-character control instructions, of the form: ALT polyphonic-word-string substitute-word-string context-strings;
(4) proper-noun control instructions, of the form: TERM proper-noun-string translation-word-string.
Fig. 3 shows a sequence of pronunciation control instructions 110, 120, 130, 140, 150, 160, 170, and 180. These pronunciation control instructions are marked using the comment tag of the hypermedia markup language, so they can be hidden inside a hypermedia document. Instruction 110 is a sound control instruction, as shown by its identification code 111, "PARAM". Sound control instruction 110 specifies that all text content marked by hypermedia tag 113, "LI", is articulated according to the parameters defined by its voice parameters 115: a speed of 1.0, a volume of 0.8, and a pitch of 1.2. In a sound control instruction, an attribute field may optionally be added between hypermedia tag 113 and voice parameters 115 to limit the scope of the instruction; for example, the attribute field may name certain attributes of the hypermedia tag, so that sound control instruction 110 applies only to hypermedia tags 113 having those attributes.
Instruction 120 is a sound-effect control instruction, as shown by its identification code 121, "AUDIO". Sound-effect control instruction 120 specifies that when hypermedia tag 123, "LI", is encountered, sound-effect data 125, "beep.au", must be inserted; in this example, sound-effect data 125 is a sound-effect data file named beep.au. A sound-effect control instruction 120 may also optionally include an attribute field limiting the scope of the instruction.
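The effect these two instructions have at conversion time can be sketched as follows; the rule table and event representation are illustrative assumptions, not an interface defined by the patent.

```python
# Sketch of a tag converter consuming PARAM/AUDIO settings for a tag.
# The rule table below mirrors the Fig. 3 examples; all names are assumed.
TAG_RULES = {
    "li": [("PARAM", {"speed": 1.0, "volume": 0.8, "pitch": 1.2}),
           ("AUDIO", "beep.au")],
}

def convert_tag(tag, events):
    """Record the synthesis events a start tag should trigger."""
    for kind, payload in TAG_RULES.get(tag, []):
        if kind == "PARAM":
            events.append(("set-params", payload))   # adjust voice parameters
        elif kind == "AUDIO":
            events.append(("play-effect", payload))  # insert sound-effect data

events = []
convert_tag("li", events)
```

A real tag converter would hand these events to the text-to-speech conversion unit rather than collecting them in a list.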
When analyzing sound control instructions and sound-effect control instructions, the pronunciation control instruction parsing unit 22 modifies the parameter table 42 shown in Fig. 4A or the sound-effect table 43 shown in Fig. 4B according to the content of the instruction, and then modifies the tag mapping table 41 shown in Fig. 4C.
As shown in Fig. 4A, when analyzing sound control instruction 110, the pronunciation control instruction parsing unit 22 adds or modifies entry 42-1 in the parameter table 42. The unit first obtains a usable entry 42-1 in the parameter table 42, either by inserting a new entry or by reusing an entry that is no longer in use. After obtaining entry 42-1, the unit stores the parameters defined by the voice parameters 115 of sound control instruction 110 in fields 42-12, 42-13, and 42-14 of entry 42-1. The PID field 42-11 in Fig. 4A is a parameter identification code; it is shown for ease of description and may be omitted in actual use.
As shown in Fig. 4B, when analyzing sound-effect control instruction 120, the pronunciation control instruction parsing unit 22 adds or modifies entry 43-1 in the sound-effect table 43. The unit first obtains a usable entry 43-1 in the sound-effect table 43, either by inserting a new entry or by reusing an entry that is no longer in use. After obtaining entry 43-1, the unit stores the sound-effect data file name 125 of sound-effect control instruction 120 and the content of the sound-effect data in fields 43-12 and 43-13 of entry 43-1. The AID field 43-11 in Fig. 4B is a sound-effect identification code; it is shown for ease of description and may be omitted in actual use. In addition, to save storage space, before modifying the sound-effect table 43 the unit may search the table using sound-effect data file name 125; if an identical file name is found, no modification is made.
After the modification of the parameter table 42 or the sound-effect table 43 is completed, the pronunciation control instruction parsing unit 22 modifies the content of the tag mapping table 41. The unit first searches the tag mapping table 41 using the identification code 111 or 121 of pronunciation control instruction 110 or 120, the hypermedia tag 113 or 123, and the attribute field, and obtains entry 41-1 or 41-2; if the entry does not exist, a new entry 41-1 or 41-2 is created. For sound control instruction 110, the unit sets the type field 41-13 of entry 41-1 to PARAM, indicating that the index field 41-14 points into the parameter table 42, and sets the index field 41-14 to entry 42-1, the entry in the parameter table 42 corresponding to sound control instruction 110. For sound-effect control instruction 120, the unit sets the type field 41-23 of entry 41-2 to AUDIO, indicating that the index field 41-24 points into the sound-effect table 43, and sets the index field 41-24 to entry 43-1, the entry in the sound-effect table 43 corresponding to sound-effect control instruction 120.
In Fig. 3, instructions 130, 140 and 150 are distortion-word control instructions, and the identification codes 131, 141 and 151 of these instructions are "ALT". Each distortion-word control instruction 130, 140 and 150 defines a distortion-word string 133, 143 and 153, and a substitute string 135, 145 and 155 that replaces it. The purpose is that the substitute strings 135, 145 and 155 let the text-to-speech conversion unit 27 produce the correct pronunciation. Instructions 140 and 150 also define context strings 147 and 157, which limit the scope within which instructions 140 and 150 apply.
As shown in Fig. 5A, when parsing the distortion-word control instructions 130, 140 and 150 in sequence, the pronunciation control instruction analysis unit 22 adds or modifies entries 31-1, 31-2 and 31-3 in the distortion-word table 31. The unit 22 reads in the distortion-word strings 133, 143 and 153, performs the appropriate text conversion, and then stores them in the distortion-word string fields 31-11, 31-21 and 31-31 of entries 31-1, 31-2 and 31-3 respectively. It reads in the substitute strings 135, 145 and 155, performs the appropriate text conversion, and then stores them in the substitute-string fields 31-12, 31-22 and 31-32 of entries 31-1, 31-2 and 31-3 respectively. It reads in the context strings 147 and 157, performs the appropriate text conversion, and then stores them in the context-string fields 31-23 and 31-33 of entries 31-2 and 31-3 respectively.
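A minimal sketch of how ALT instructions might populate the distortion-word table. The textual instruction syntax (`ALT distortion=substitute[;context]`) and the sample pinyin values are assumptions for illustration; the patent does not fix a concrete wire format here.

```python
def parse_alt(instruction):
    """Parse a hypothetical 'ALT distortion=substitute[;context]' instruction
    into a distortion-word table entry (dict form, field names assumed)."""
    body = instruction.split(None, 1)[1]        # drop the "ALT" identification code
    pair, _, context = body.partition(";")      # optional context string after ';'
    distortion, _, substitute = pair.partition("=")
    return {"distortion": distortion.strip(),
            "substitute": substitute.strip(),
            "context": context.strip() or None}

# Two illustrative entries: a plain substitution and a context-limited one.
distortion_table = [parse_alt("ALT 重=chong2"),
                    parse_alt("ALT 行=hang2;银行")]
```

The second entry carries a context string, so it would apply only where the distortion-word string appears inside that surrounding text.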
In Fig. 3, instructions 160, 170 and 180 are proper-noun control instructions, and the identification codes 161, 171 and 181 of these instructions are "TERM". Each proper-noun control instruction 160, 170 and 180 defines a proper-noun string 163, 173 and 183, and a translation string 165, 175 and 185 used to replace it. The purpose is that the translation strings 165, 175 and 185 let the text-to-speech conversion unit 27 produce the correct pronunciation for the proper-noun strings 163, 173 and 183. For example, when the text-to-speech conversion unit 27 can convert only Chinese text, the proper-noun control instructions 160, 170 and 180 can be used to convert English or mixed Chinese-English proper nouns into Chinese speech signals.
As shown in Fig. 5B, when parsing the proper-noun control instructions 160, 170 and 180 in sequence, the pronunciation control instruction analysis unit 22 adds or modifies entries 32-1, 32-2 and 32-3 in the proper-noun table 32. The unit 22 reads in the proper-noun strings 163, 173 and 183, performs the appropriate text conversion (detailed later), and then stores them in the proper-noun fields 32-11, 32-21 and 32-31 of entries 32-1, 32-2 and 32-3 respectively. It reads in the translation strings 165, 175 and 185, performs the appropriate text conversion (detailed later), and then stores them in the translation fields 32-12, 32-22 and 32-32 of entries 32-1, 32-2 and 32-3 respectively.
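Proper-noun replacement amounts to a dictionary pass over the text, as sketched below. The table contents are invented examples (including the Chinese renderings), assumed for illustration of the Chinese-only TTS case mentioned above.

```python
# Hypothetical proper-noun table: strings the TTS engine cannot speak,
# mapped to speakable translation strings (mappings are invented examples).
term_table = {"HTML": "超文件标示语言", "CPU": "中央处理器"}

def translate_terms(text, table):
    """Replace each proper-noun string with its translation string so that
    a Chinese-only text-to-speech unit can produce correct pronunciation."""
    for term, translation in table.items():
        text = text.replace(term, translation)
    return text
```

For instance, `translate_terms("HTML 文件由 CPU 处理", term_table)` would yield text containing only Chinese-speakable strings.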
In Fig. 2, the text conversion unit 26 receives and processes the word content separated out by the hypermedia markup language parsing unit 24. The text conversion unit 26 searches this word content to determine whether it contains any distortion-word string from the distortion-word table 31 or any proper noun from the proper-noun table 32. If a distortion-word string or proper noun is found, the text conversion unit 26 performs string substitution according to the content of the corresponding entry in the distortion-word table 31 or the proper-noun table 32. The text conversion unit 26 then sends the processed word content to the text-to-speech conversion unit 27.
The tag conversion unit 25 receives and processes the hypermedia tags separated out by the hypermedia markup language parsing unit 24. The tag conversion unit 25 searches the tag correspondence table 41 with each hypermedia tag to determine whether the tag requires sound control or sound-effect control. If the hypermedia tag requires sound control, the tag conversion unit 25 obtains the corresponding parameters from the parameter table 42 according to the index field of the retrieved entry, and sends these speech parameters to the text-to-speech conversion unit 27. If the hypermedia tag requires sound-effect control, the tag conversion unit 25 obtains the corresponding audio data from the audio table 43 according to the index field of the retrieved entry, and sends them to the audio device 16 or the telephone interface device 14.
The text-to-speech conversion unit 27 receives and processes the word content from the text conversion unit 26 and the speech parameters from the tag conversion unit 25. The text-to-speech conversion unit 27 changes its parameter settings according to the newly received speech parameters, for example the settings for the speed, volume and rhythm of the voice. When the text-to-speech conversion unit 27 receives word content, it converts that word content into speech signals according to the speech parameter settings in effect at the time, and sends the result to the audio device 16 or the telephone interface device 14.
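The parameter-update behaviour described above can be sketched as a small stateful class: parameters persist between updates, and each conversion uses whatever settings are in effect at the time. The attribute names and default values are assumptions; the synthesis itself is stubbed out.

```python
class TextToSpeechUnit:
    """Holds the current speech parameters; converts text under those settings.
    (A sketch of unit 27's behaviour; real synthesis is not implemented.)"""
    def __init__(self):
        self.params = {"speed": 1.0, "volume": 1.0, "rhythm": "normal"}

    def set_params(self, **updates):
        # A sound control instruction may change any subset of the settings.
        self.params.update(updates)

    def convert(self, text):
        # Placeholder for real synthesis: record which settings were applied.
        return {"text": text, "params": dict(self.params)}

tts = TextToSpeechUnit()
tts.set_params(speed=1.5)          # parameters received from the tag conversion unit
result = tts.convert("hello")      # converted under the settings in effect now
```

Note that `convert` snapshots the parameters, so later `set_params` calls do not retroactively change earlier output.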
Fig. 6 illustrates the execution steps of the file reading control unit 28. In step S1, the hypermedia speech conversion system 10 (the operating system or an application executed by the central processing unit 11) determines whether an independent pronunciation control instruction file 21 needs to be read. If so, the file reading control unit 28 executes step S2 to read the content of that file, and in step S6 hands the file content to the pronunciation control instruction analysis unit 22 for analysis of the pronunciation control instructions. After step S6 completes, or when no independent pronunciation control instruction file 21 needs to be read, the file reading control unit 28 executes step S3 to read the content of the hypermedia document file 23. In step S4, the file reading control unit 28 passes the content of the hypermedia document file 23 to the hypermedia markup language parsing unit 24 to be separated into document elements. A document element can be a hypermedia tag, a text string of word content, or a pronunciation control instruction. The file reading control unit 28 then reads and processes, in sequence, the document elements separated out by the hypermedia markup language parsing unit 24. In step S5, the file reading control unit 28 determines whether the document element separated out by the hypermedia markup language parsing unit 24 is a pronunciation control instruction; if it is, the file reading control unit 28 executes step S6 and passes the pronunciation control instruction to the pronunciation control instruction analysis unit 22 for analysis. After step S6 completes, the file reading control unit 28 returns to step S4 to read and process the next document element separated out by the hypermedia markup language parsing unit 24. In step S5, if the document element read is not a pronunciation control instruction, the file reading control unit 28 executes step S7. If the file reading control unit 28 finds in step S7 that the document element is a hypermedia tag, then in step S8 it passes the hypermedia tag to the tag conversion unit 25 for sound and sound-effect control. The file reading control unit 28 then returns to step S4 to read and process the next document element. In step S7, if the document element read is not a hypermedia tag, the file reading control unit 28 executes step S9. In step S9, the file reading control unit 28 treats the document element as a text string of word content and passes it to the text conversion unit 26 for text substitution. In step S10, the file reading control unit 28 hands the conversion result to the text-to-speech conversion unit 27, which converts it into speech signals and plays them back through the audio device 16 or the telephone interface device 14. The file reading control unit 28 then returns to step S4 to read and process the next document element. The file reading control unit 28 repeats steps S4-S10 until all document elements in the hypermedia document file 23 have been processed.
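Steps S4-S10 amount to a dispatch loop over the parsed document elements. In the sketch below, each element is a `(kind, payload)` pair; this tagging of parser output, and the handler names, are assumptions introduced for illustration.

```python
def dispatch(elements, handlers):
    """Route each (kind, payload) document element to its handler,
    mirroring the S5/S7/S9 branching of Fig. 6."""
    log = []
    for kind, payload in elements:
        # Anything that is neither an instruction nor a tag is word content (S9).
        handler = handlers.get(kind, handlers["text"])
        log.append(handler(payload))
    return log

handlers = {
    "instruction": lambda p: ("analyze", p),      # S6: pronunciation control instruction
    "tag":         lambda p: ("convert_tag", p),  # S8: sound / sound-effect control
    "text":        lambda p: ("speak", p),        # S9-S10: substitute, then synthesize
}
trace = dispatch([("tag", "H1"), ("text", "hello"), ("instruction", "ALT a=b")],
                 handlers)
```

The loop structure also makes the Fig. 6 termination condition explicit: processing ends when the parser yields no further elements.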
Fig. 7 illustrates the execution flow of the pronunciation control instruction analysis unit 22. In step S11, the unit 22 reads a pronunciation control instruction. In step S12, the unit 22 determines from the instruction's identification code whether it is a sound control instruction. If so, the unit 22 executes step S13 to add or modify an entry in the tag correspondence table 41, storing the instruction's hypermedia tag, instruction mode (here PARAM), attributes and parameter index in the corresponding fields. The unit 22 then executes step S14 to store the speech parameters defined by the instruction in the parameter table 42. After step S14 completes, the unit 22 returns to step S11 to read and process the next pronunciation control instruction.
In step S12, if the instruction is not a sound control instruction, the unit 22 executes step S15 to determine whether it is a sound-effect control instruction. If so, the unit 22 executes step S16 to add or modify an entry in the tag correspondence table 41, storing the instruction's hypermedia tag, instruction mode (here AUDIO), attributes and audio data index in the corresponding fields. The unit 22 then executes step S25 to store the audio file name defined by the instruction, together with the file content, in the audio table 43. After step S25 completes, the unit 22 returns to step S11 to read and process the next pronunciation control instruction.
In step S15, if the instruction is not a sound-effect control instruction, the unit 22 executes step S17 to determine whether it is a proper-noun control instruction. If so, the unit 22 executes step S18 to apply text conversion to the instruction according to the existing content of the distortion-word table 31; that is, for each entry in the distortion-word table 31, it checks in sequence whether the instruction contains a distortion-word string that must be converted, and if so converts it into the corresponding substitute string. After step S18 completes, the unit 22 executes step S19 to apply text conversion to the instruction according to the existing content of the proper-noun table 32; that is, for each entry in the proper-noun table 32, it checks in sequence whether the instruction contains a proper-noun string that must be converted, and if so converts it into the corresponding translation string. After step S19 completes, the unit 22 executes step S20 to store the converted proper-noun control instruction in the proper-noun table 32. The unit 22 then returns to step S11 to read and process the next pronunciation control instruction.
In step S17, if the instruction is not a proper-noun control instruction, the unit 22 executes step S21 to determine whether it is a distortion-word control instruction. If so, the unit 22 executes step S22 to apply text conversion to the instruction according to the existing content of the distortion-word table 31; that is, for each entry in the distortion-word table 31, it checks in sequence whether the instruction contains a distortion-word string that must be converted, and if so converts it into the corresponding substitute string. After step S22 completes, the unit 22 executes step S23 to store the converted distortion-word control instruction in the distortion-word table 31. The unit 22 then returns to step S11 to read and process the next pronunciation control instruction.
In step S21, if the instruction is not a distortion-word control instruction, the unit 22 executes step S24. In step S24, the unit 22 treats the data read as a comment and therefore ignores it. The unit 22 then returns to step S11 to read and process the next pronunciation control instruction. The unit 22 repeats steps S11-S24 until all pronunciation control instructions have been processed.
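The branching of Fig. 7 is essentially a classifier on the instruction's identification code. The sketch below assumes the code appears as the first token of the instruction; the codes "ALT" and "TERM" come from Fig. 3, while "SOUND" and "AUDIO" are assumed placeholders for the sound and sound-effect instruction codes, which this excerpt does not name.

```python
def classify(instruction):
    """Map a pronunciation control instruction to the Fig. 7 branch
    that handles it (tests S12, S15, S17, S21; default S24)."""
    tokens = instruction.split()
    code = tokens[0] if tokens else ""
    return {"SOUND": "sound",         # S12-S14 (code name assumed)
            "AUDIO": "sound_effect",  # S15-S16, S25 (code name assumed)
            "TERM":  "proper_noun",   # S17-S20, identification code from Fig. 3
            "ALT":   "distortion",    # S21-S23, identification code from Fig. 3
            }.get(code, "comment")    # S24: unrecognized data is ignored
```

Unrecognized input falls through to the comment branch, matching step S24's behaviour of ignoring data it cannot classify.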
The execution flow process of Fig. 8 comment converting unit 26.In step S31, text conversion unit 26 reads the word content of hypermedia archive files 23.Then text conversion unit 26 execution in step S32 according to distorsion word table 31 existing contents, carry out text conversion to this literal content.That is to say that text conversion unit 26 can check in regular turn whether this literal content exists the distorsion word string of needs conversion at each project of distorsion word table 31.If need to find the distorsion word string of conversion in this literal content, 26 of text conversion unit are converted into corresponding alternative word string.In order to carry out above efficient, text conversion unit 26 is when doing text conversion at a project of distorsion word table 31, at first search this literal content, find out the position of the distorsion word string of this project, again with the front and back character or the word string of this position, add this distorsion word string, compare, whether need replace with the alternative word string of this project with this distorsion word string that determines this literal content with the front and back text strings of this project.If text conversion unit 26 just replaces this distorsion word string with the alternative word string of this project.Text strings before and after if this project does not define, 26 of text conversion unit directly replace this distorsion word string to substitute word string.After handling, text conversion unit 26 continues to search this literal content in this way, finds out and handle the distorsion word string of next this project, all searches up to this literal content to finish.
After step S32 completes, the text conversion unit 26 executes step S33 to apply text conversion, according to the existing content of the proper-noun table 32, to the word content converted in step S32. That is, for each entry of the proper-noun table 32, the text conversion unit 26 checks in sequence whether the word content contains a proper-noun string that must be converted, and if so converts it into the corresponding translation string. When performing this conversion for an entry of the proper-noun table 32, the text conversion unit 26 first searches the word content for the entry's proper-noun string and replaces it directly with the translation string. After each occurrence is handled, the text conversion unit 26 continues searching the word content in this way, finding and handling the next occurrence of the entry's proper-noun string, until the entire word content has been searched.
After step S33 completes, the text conversion unit 26 returns to step S31 to read and process the next string of word content produced by the hypermedia markup language parsing unit 24.
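The context check described for step S32 can be sketched as follows: an occurrence of the distortion-word string is replaced only when, together with its surrounding characters, it matches the entry's context string. The dict-based entry layout is the same assumed form used earlier; a real implementation would iterate this over every table entry.

```python
def apply_distortion(text, entry):
    """Replace entry['distortion'] with entry['substitute'] in text, but only
    where the occurrence plus surrounding text matches entry['context'],
    if a context string is defined (sketch of step S32)."""
    distortion, substitute = entry["distortion"], entry["substitute"]
    context = entry.get("context")
    if not context:
        return text.replace(distortion, substitute)  # no context: replace directly
    out, start = [], 0
    ctx_pos = context.find(distortion)  # where the distortion sits inside the context
    while True:
        pos = text.find(distortion, start)
        if pos < 0:
            out.append(text[start:])
            break
        # Window of text that must equal the context string for a replacement.
        lo, hi = pos - ctx_pos, pos - ctx_pos + len(context)
        if 0 <= lo and hi <= len(text) and text[lo:hi] == context:
            out.append(text[start:pos])
            out.append(substitute)
        else:
            out.append(text[start:pos + len(distortion)])  # leave this one as-is
        start = pos + len(distortion)
    return "".join(out)
```

With the entry `行 -> hang2` limited to the context `银行`, only the occurrence inside `银行` is replaced; other occurrences of `行` are left untouched.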
Fig. 9 illustrates the execution steps of the tag conversion unit 25. To handle the nested arrangement of hypermedia tags, the tag conversion unit 25 uses a stack stored in the primary memory 12 or the central processing unit 11 to carry out the conversion of hypermedia tags. In step S41, for a hypermedia tag produced by the hypermedia markup language parsing unit 24, the tag conversion unit 25 first determines whether it is a start tag. If the hypermedia tag is a start tag, the tag conversion unit 25 executes step S42 to push it onto the stack. Otherwise, the tag conversion unit 25 executes step S43 to pop a hypermedia tag from the stack.
In step S44, the tag conversion unit 25 performs tag conversion on the hypermedia tag at the top of the stack, searching the tag correspondence table 41 with that hypermedia tag. In step S45, the tag conversion unit 25 determines from the retrieval result whether the hypermedia tag has a corresponding parameter-setting entry (one whose mode field is PARAM). If it does, the tag conversion unit 25 executes step S46, using the entry's parameter index to read the corresponding parameters from the parameter table 42. The tag conversion unit 25 then sends these parameters to the text-to-speech conversion unit 27 to change the pronunciation of the word content that follows.
After step S46 completes, or when the hypermedia tag has no corresponding parameter-setting entry, the tag conversion unit 25 executes step S47, determining from the retrieval result whether the hypermedia tag has a corresponding sound-effect control entry (one whose mode field is AUDIO). If it does, the tag conversion unit 25 executes step S48, using the entry's audio data index to read the corresponding audio data from the audio table 43. The tag conversion unit 25 then sends these audio data to the audio device 16 or the telephone interface device 14 for playback.
In step S47, if the hypermedia tag has no corresponding sound-effect control entry, the tag conversion unit 25 executes step S49, ignoring the hypermedia tag without processing it. After step S48 or step S49 completes, the tag conversion unit 25 returns to step S41 to wait for the next hypermedia tag produced by the hypermedia markup language parsing unit 24.
The stack used by the tag conversion unit 25 guarantees that an inner hypermedia element (HTML element) can use its own sound and sound-effect controls, and that when control returns to an outer hypermedia element, the sound and sound-effect controls used by that element are restored.
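The restore-on-return behaviour described above can be sketched directly: start tags push, end tags pop, and after each event the tag whose settings apply is the one now at the top of the stack. The `(kind, tag)` event form is an assumption about the parser's output.

```python
def effective_tag(events):
    """Track nested hypermedia tags as in Fig. 9; after each event, the tag
    whose sound and sound-effect controls apply is the stack top."""
    stack, tops = [], []
    for kind, tag in events:      # kind is "start" or "end"
        if kind == "start":
            stack.append(tag)     # S42: push the start tag
        elif stack:
            stack.pop()           # S43: pop on an end tag
        tops.append(stack[-1] if stack else None)
    return tops

tops = effective_tag([("start", "BODY"), ("start", "H1"),
                      ("end", "H1"), ("end", "BODY")])
```

After `H1` closes, the top of the stack is `BODY` again, so the outer element's controls are automatically restored without any explicit save/restore step.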
The above embodiments merely illustrate the method proposed by the present invention. Those skilled in the art may derive different embodiments without departing from the spirit and scope disclosed by the present invention.

Claims (11)

1. A computer system for converting a hypermedia document into speech signals, comprising:
a hypermedia markup language parsing unit, which separates a document in a hypermedia markup language format into word content, hypermedia tags marking the document structure, and pronunciation control instructions controlling the manner of pronunciation;
a pronunciation control instruction analysis unit, which analyzes the pronunciation control instructions and modifies a tag correspondence table, an audio table, a parameter table, a distortion-word table and a proper-noun table according to the content of the pronunciation control instructions;
a text conversion unit, which corrects pronunciation in the manner defined in the distortion-word table when a word string that the distortion-word table specifies must have its pronunciation corrected appears in the word content, and performs translation in the manner defined in the proper-noun table when a word string that the proper-noun table specifies must be translated appears in the word content;
a tag conversion unit, which, when a hypermedia tag that the tag correspondence table specifies must have its speech parameters modified or sound effects inserted appears in the hypermedia document, inserts audio data according to the audio table entry specified in the tag correspondence table, and modifies the speech parameters of the word content marked by the hypermedia tag using the parameter table entry specified in the tag correspondence table; and
a text-to-speech conversion unit, which converts the word content, as modified by the text conversion unit and the tag conversion unit, into speech signals.
2. A method, in a pronunciation control instruction analysis unit, for converting a hypermedia document into speech signals, comprising the steps of:
analyzing sound control instructions that specify the volume, speed and rhythm speech parameters a particular hypermedia tag is to use, and sound-effect control instructions that specify the audio data a particular hypermedia tag is to use; storing the speech parameters in a parameter table entry according to the content of the sound control instruction and recording, in a tag correspondence table, the correspondence between the hypermedia tag and the parameter table entry; and storing the audio data in an audio table entry according to the content of the sound-effect control instruction and recording, in the tag correspondence table, the correspondence between the hypermedia tag and the audio table entry; and
analyzing distortion-word control instructions that specify the pronunciation correction and the context strings for particular word strings in the word content of the hypermedia document, and proper-noun control instructions that specify the translation of particular word strings in the word content, wherein the pronunciation correction and the translation are text strings that a text-to-speech conversion unit can convert into speech signals; and producing, according to the content of the distortion-word control instructions, a distortion-word table storing each particular word string with its corresponding pronunciation correction and context strings, and producing, according to the content of the proper-noun control instructions, a proper-noun table storing each particular word string with its corresponding translation.
3. A method, in a text conversion unit, for converting a hypermedia document into speech signals, comprising the steps of:
for each particular word string whose pronunciation must be corrected as defined by an entry of a distortion-word table, replacing that word string in the word content of the hypermedia document with the substitute string the entry specifies for correcting pronunciation, wherein the substitute string can designate one particular pronunciation among the multiple pronunciations of the word string; and
for each particular word string that must be translated as defined by an entry of a proper-noun table, replacing that word string in the word content of the hypermedia document with the translation string the entry specifies, wherein the translation string allows a part of the word string that could not otherwise be converted into a speech signal to be converted, by a text-to-speech conversion unit, into the designated speech signal.
4. A method for converting a hypermedia document into speech signals, comprising the steps of:
analyzing pronunciation control instructions, which comprises the steps of:
for a sound control instruction, producing a tag correspondence table entry retrievable by the hypermedia tag the sound control instruction specifies, producing a parameter table entry storing the volume, speed and rhythm speech parameters the sound control instruction specifies, and setting the index in the index field of the tag correspondence table entry to point to the parameter table entry;
for a sound-effect control instruction, producing a tag correspondence table entry retrievable by the hypermedia tag the sound-effect control instruction specifies, producing an audio table entry storing the audio data the sound-effect control instruction specifies, and setting the index in the index field of the tag correspondence table entry to point to the audio table entry;
for a distortion-word control instruction, producing a distortion-word table entry retrievable by the particular word string whose pronunciation the distortion-word control instruction specifies must be corrected, the entry storing the substitute string the instruction specifies, wherein the substitute string converts the word string, which has multiple pronunciations, into one particular pronunciation; and
for a proper-noun control instruction, producing a proper-noun table entry retrievable by the particular word string the proper-noun control instruction specifies must be translated, the entry storing the translation string the instruction specifies, wherein the translation string allows the word string, which a text-to-speech conversion unit could not otherwise convert into a speech signal, to be converted into a designated speech signal.
5. The method as claimed in claim 4, further comprising the step of: extracting the pronunciation control instructions from a comment in the hypermedia document.
6. The method as claimed in claim 4, further comprising the step of: reading each of the pronunciation control instructions.
7. The method as claimed in claim 4, further comprising the steps of:
parsing the data of a hypermedia document;
upon encountering a hypermedia tag, searching the tag correspondence table with the hypermedia tag;
for the retrieved tag correspondence table entry, using the index in its index field to obtain the parameter table entry or the audio table entry;
using the set of speech parameters in the parameter table entry to modify the speech parameters used thereafter by a text-to-speech conversion unit; and
inserting the audio data stored in the audio table entry.
8. The method as claimed in claim 7, further comprising the steps of:
upon encountering a start tag among the hypermedia tags, pushing the start tag onto a stack; and
upon encountering an end tag among the hypermedia tags, popping one hypermedia tag from the stack,
wherein the particular hypermedia tag used for retrieval is the hypermedia tag at the top of the stack.
9. The method as claimed in claim 4, further comprising the steps of:
parsing the data of a hypermedia document;
searching the word content of the hypermedia document;
replacing every word string in the word content that the distortion-word table specifies must have its pronunciation corrected with the substitute string specified in the corresponding distortion-word table entry; and
replacing every word string in the word content that the proper-noun table specifies must be translated with the translation string specified in the corresponding proper-noun table entry.
10. The method as claimed in claim 9, wherein the distortion-word table entry further contains a context field, and a word string in the word content that matches a word string the distortion-word table entry specifies must have its pronunciation corrected is replaced with the substitute string specified in that entry only if the text before and after it also matches the context strings defined in the context field of that entry.
11. The method as claimed in claim 10, further comprising the step of:
producing a speech signal whose content comprises the speech signal produced from the audio data, and the speech signal converted, according to the speech parameters, from the word content of the hypermedia document, the substitute strings and the translation strings.
CNB981161952A 1998-07-24 1998-07-24 Method for converting super medium document into speech sound Expired - Lifetime CN1167999C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB981161952A CN1167999C (en) 1998-07-24 1998-07-24 Method for converting super medium document into speech sound


Publications (2)

Publication Number Publication Date
CN1243284A CN1243284A (en) 2000-02-02
CN1167999C true CN1167999C (en) 2004-09-22

Family

ID=5224980

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB981161952A Expired - Lifetime CN1167999C (en) 1998-07-24 1998-07-24 Method for converting super medium document into speech sound

Country Status (1)

Country Link
CN (1) CN1167999C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692670B (en) * 2009-10-23 2012-07-18 中国电信股份有限公司 Rich media play control method, rich media play control system and rich media service platform
CN103856817B (en) * 2012-11-29 2018-07-20 上海文广互动电视有限公司 The interactive playback method and system of hypermedia
CN104462027A (en) * 2015-01-04 2015-03-25 王美金 Method and system for performing semi-manual standardized processing on declarative sentence in real time



Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20040922
