WO2001001390A1 - Trieuse-liseuse electronique - Google Patents
- Publication number
- WO2001001390A1, PCT/JP2000/004109
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- electronic document
- reading
- document processing
- sentence
- attribute information
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
Definitions
- the present invention relates to an electronic document processing apparatus for processing electronic documents.
- WWW: World Wide Web
- the WWW is a system that performs document processing such as creating, publishing, and sharing documents, and that points the way toward a new style of document.
- sophisticated document processing beyond the WWW such as classification and summarization of documents based on the contents of the documents, is required.
- the mechanical processing of the contents is essential.
- HTML: HyperText Markup Language
- the WWW is a system that shows the way new documents should be.
- advanced document processing could not be performed.
- performing advanced document processing requires mechanical processing of the document.
- a user uses an information retrieval system such as a so-called search engine to search for desired information among the huge amount of information provided through the Internet.
- This information search system is a system that searches for information based on specified keywords and provides the searched information to users. The user selects desired information from the provided information.
- with such an information retrieval system, information can be searched easily, but the user must still read the information provided by the search, understand its outline, and judge whether or not it is the desired information. This places a heavy burden on users, especially when the amount of information provided is large. Therefore, so-called automatic summarization systems, which automatically summarize the contents of text information, that is, documents, have recently been attracting attention.
- An automatic summarization system creates summaries by reducing the length and complexity of textual information while preserving its original information, that is, the meaning of the document. The user can grasp the outline of a document by reading the summary created by such a system.
- automatic summarization systems treat the sentences and words in a text as units, assign each a measure of importance based on some information, and rank them. The system then collects the highest-ranked sentences and words to create a summary.
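The ranking-and-selection approach just described can be sketched as follows. The scoring function here is a simple word-frequency heuristic chosen for illustration only; it is a stand-in for the importance measures a real automatic summarization system would use, and the function names are not taken from the patent.

```python
from collections import Counter

def summarize(sentences, max_sentences=2):
    """Rank sentences by a crude importance score and keep the
    top-ranked ones in their original order.  The word-frequency
    score is only an illustrative stand-in for a real importance
    measure."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w.lower()] for w in sentences[i].split()),
        reverse=True,
    )
    keep = sorted(ranked[:max_sentences])  # restore document order
    return [sentences[i] for i in keep]

sentences = [
    "Cancer is the leading cause of death in Japan.",
    "Cancer is cell proliferation and metastasis.",
    "The weather was pleasant.",
]
summary = summarize(sentences, max_sentences=2)
```

Here the two sentences sharing the frequent words score highest and are kept, in document order, as the summary.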
- Speech synthesis essentially generates speech mechanically, based on the results of speech analysis and on simulation of the human speech production mechanism, by assembling the elements or phonemes of individual languages under digital control.
- An object of the present invention is to provide an electronic document processing apparatus, an electronic document processing method, and a recording medium on which an electronic document processing program is recorded.
- An electronic document processing apparatus that achieves the above-mentioned object is, in an electronic document processing apparatus that processes an electronic document, characterized by comprising document input means to which the electronic document is input, and voice reading data generating means for generating voice reading data to be read aloud by a speech synthesizer based on the electronic document.
- Such an electronic document processing device generates read-aloud data based on an electronic document.
- An electronic document processing method that achieves the above object is an electronic document processing method for processing an electronic document, comprising: a document input step of inputting the electronic document; and a voice reading data generating step of generating, based on the electronic document, voice reading data to be read aloud by a speech synthesizer.
- Such an electronic document processing method generates read-aloud data based on an electronic document.
- the recording medium on which the program is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- Such a recording medium on which the electronic document processing program according to the present invention is recorded provides an electronic document processing program for generating voice reading data based on an electronic document.
- an electronic document processing apparatus for achieving the above-mentioned object is an electronic document processing apparatus for processing an electronic document, characterized by comprising document input means for inputting the electronic document to which tag information indicating an internal structure having a plurality of elements and a hierarchical structure is given, and document reading means for reading out the electronic document by speech synthesis based on the tag information.
- Such an electronic document processing apparatus receives an electronic document to which tag information indicating its internal structure, which has a plurality of elements and a hierarchical structure, is added, and directly reads out the electronic document based on the tag information assigned to it.
- Such an electronic document processing method inputs an electronic document to which tag information indicating its internal structure, which has a plurality of elements and a hierarchical structure, is added, and directly reads out the electronic document based on the tag information assigned to it.
- a recording medium on which an electronic document processing program according to the present invention for achieving the above object is recorded is a computer readable recording medium on which a computer controllable electronic document processing program for processing an electronic document is recorded.
- the document processing program includes a document input step of inputting the electronic document to which tag information indicating the internal structure of the electronic document, which has a plurality of elements and a hierarchical structure, is added, and a document reading step of reading out the electronic document by speech synthesis based on the tag information.
- a recording medium on which such an electronic document processing program according to the present invention is recorded receives an electronic document to which tag information indicating the internal structure of an electronic document having a plurality of elements and a hierarchical structure is added.
- an electronic document processing program for directly reading an electronic document based on tag information given to the electronic document is provided.
- an electronic document processing apparatus that achieves the above-described object is an electronic document processing apparatus that processes an electronic document, comprising summary sentence creating means for creating a summary sentence of the electronic document, and voice reading data generating means for generating voice reading data to be read aloud by a speech synthesizer.
- the voice reading data generating means is characterized by generating voice reading data by adding attribute information indicating that a portion of the electronic document included in the summary sentence is to be read out with emphasis compared to a portion not included in the summary sentence.
- Such an electronic document processing apparatus generates voice reading data by adding attribute information indicating that a portion included in the summary sentence of the electronic document is read out with emphasis compared to a portion not included in it.
- An electronic document processing method that achieves the above-mentioned object is an electronic document processing method for processing an electronic document, comprising a summary sentence creating step of creating a summary sentence of the electronic document, and a voice reading data generating step of generating voice reading data to be read aloud by a speech synthesizer.
- voice reading data is generated by adding attribute information indicating that a part of the electronic document included in the summary sentence is read out with emphasis compared to a part not included in the summary sentence.
- a part included in the summary sentence of the electronic document is provided with attribute information indicating that the part is read out with emphasis compared to a part not included in the summary sentence.
- a recording medium on which an electronic document processing program according to the present invention for achieving the above object is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- the electronic document processing program includes a summary sentence creating step of creating a summary sentence of the electronic document, and a speech reading data generation step of generating speech reading data for reading the electronic document by a speech synthesizer.
- the part of the electronic document included in the summary sentence is given attribute information indicating that it is read out with emphasis compared to the part not included in the summary sentence, and voice reading data is thereby generated.
- Such a recording medium on which the electronic document processing program according to the present invention is recorded provides an electronic document processing program that generates voice reading data by adding attribute information indicating that a portion of the electronic document included in the summary sentence is read out with emphasis compared to a portion not included in it.
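A minimal sketch of this emphasis marking, under the assumption that the voice reading data can be represented as a flat list of text segments with attributes; the function name `make_reading_data` and the `emphasis` attribute are illustrative, not taken from the patent:

```python
def make_reading_data(document_sentences, summary_sentences):
    """Attach attribute information marking each sentence that also
    appears in the summary, so a speech synthesizer can read it out
    with emphasis (for example, at an increased volume)."""
    summary = set(summary_sentences)
    return [
        {"text": s, "emphasis": s in summary}
        for s in document_sentences
    ]

doc = ["Cancer is cell proliferation and metastasis.",
       "The mortality rate is increasing with age."]
# Suppose the summarizer kept only the first sentence.
data = make_reading_data(doc, [doc[0]])
```

A synthesizer consuming `data` would then raise the volume (or pitch) only for segments whose `emphasis` attribute is set.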
- Such an electronic document processing apparatus directly reads out a portion included in the summary sentence of the electronic document with emphasis compared to a portion not included in it.
- An electronic document processing method that achieves the above object is an electronic document processing method for processing an electronic document, characterized by comprising a summary sentence creating step of creating a summary sentence of the electronic document, and a document reading step in which the part included in the summary sentence is read out with emphasis compared to the part not included in it.
- a portion included in the summary sentence of an electronic document is directly read out with emphasis compared to a portion not included in it.
- the recording medium on which the electronic document processing program according to the present invention for achieving the above object is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- the program comprises a summary sentence creation step of creating a summary sentence of the electronic document, and a document reading step in which the portion of the electronic document included in the summary sentence is read out with emphasis compared to the portion not included in it.
- Such a recording medium on which the electronic document processing program according to the present invention is recorded provides an electronic document processing program in which a portion included in the summary sentence is read out with emphasis compared to a portion not included in it.
- an electronic document processing device that achieves the above-mentioned object is, in an electronic document processing device that processes an electronic document, characterized by comprising detecting means for detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and voice reading data generating means for generating voice reading data to be read aloud by a speech synthesizer by giving the electronic document, based on the detection result obtained by the detecting means, attribute information indicating that mutually different pause periods are to be provided at the detected start positions.
- the electronic document processing apparatus generates voice reading data by adding attribute information indicating that mutually different pause periods are provided at the start positions of at least two of the paragraphs, sentences, and phrases.
- An electronic document processing method that achieves the above object is an electronic document processing method for processing an electronic document, characterized by comprising a detection step of detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and a voice reading data generating step of generating voice reading data to be read aloud by a speech synthesizer by giving the electronic document, based on the detection result obtained in the detection step, attribute information indicating that mutually different pause periods are to be provided at the detected start positions.
- Such an electronic document processing method generates voice reading data by adding attribute information indicating that mutually different pause periods are provided at the start positions of at least two of the paragraphs, sentences, and phrases.
- a recording medium on which an electronic document processing program according to the present invention for achieving the above object is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- the electronic document processing program comprises a detection step of detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and a voice reading data generating step of generating voice reading data by giving the electronic document, based on the detection result obtained in the detection step, attribute information indicating that mutually different pause periods are to be provided at the detected start positions.
- Such a recording medium on which the electronic document processing program according to the present invention is recorded provides an electronic document processing program that generates read-aloud data by giving attribute information indicating that mutually different pause periods are provided at the start positions of at least two of the paragraphs, sentences, and phrases.
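The pause-attribute scheme above can be sketched as follows. The concrete pause durations are illustrative assumptions; the description only requires that the periods differ by element type, not these particular values:

```python
# Illustrative pause periods in milliseconds; chosen so that larger
# structural boundaries receive longer pauses.
PAUSE_MS = {"paragraph": 600, "sentence": 400, "phrase": 150}

def add_pauses(elements):
    """Given (element_type, text) pairs marking the start of each
    paragraph, sentence, or phrase, emit reading data with a pause
    attribute so the synthesizer pauses longer at larger
    boundaries."""
    return [
        {"pause_ms": PAUSE_MS[kind], "text": text}
        for kind, text in elements
    ]

data = add_pauses([
    ("paragraph", "Cancer is cell proliferation and metastasis."),
    ("sentence", "The mortality rate is increasing with age."),
])
```

Because a paragraph boundary maps to a longer pause than a sentence boundary, a listener can perceive the document's structure from the rhythm of the read-aloud speech alone.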
- an electronic document processing device that achieves the above-mentioned object is, in an electronic document processing device that processes an electronic document, characterized by comprising detecting means for detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and document reading means for reading out the electronic document by speech synthesis while providing, based on the detection result obtained by the detecting means, mutually different pause periods at the detected start positions.
- Such an electronic document processing device directly reads out an electronic document while providing mutually different pause periods at least at the start positions of paragraphs, sentences, and phrases.
- An electronic document processing method that achieves the above object is an electronic document processing method for processing an electronic document, comprising a detection step of detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and a document reading step of reading out the electronic document by speech synthesis while providing, based on the detection result, mutually different pause periods at the detected start positions.
- Such an electronic document processing method directly reads out an electronic document by providing different pause periods at least at the start positions of paragraphs, sentences, and phrases.
- a recording medium on which an electronic document processing program according to the present invention for achieving the above-mentioned object is recorded is a computer-readable recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- the processing program comprises a detection step of detecting the start positions of at least two of the paragraphs, sentences, and phrases among the plurality of elements constituting the electronic document, and a document reading step of reading out the electronic document by speech synthesis while providing, based on the detection result obtained in the detection step, mutually different pause periods at the detected start positions.
- Such a recording medium on which the electronic document processing program according to the present invention is recorded provides an electronic document processing program for directly reading out an electronic document while providing mutually different pause periods at least at the start positions of paragraphs, sentences, and phrases.
- FIG. 1 is a block diagram illustrating a configuration of a document processing apparatus shown as an embodiment of the present invention.
- FIG. 2 is a diagram showing the internal structure of a document.
- FIG. 3 is a diagram for explaining the display contents of the display unit, and is a diagram showing a window in which the internal structure of the document is displayed by tags.
- FIG. 4 is a flowchart illustrating a series of processes when reading a document.
- FIG. 5 is a diagram illustrating an example of a received or created Japanese document, and is a diagram illustrating a window displaying the document.
- FIG. 6 is a diagram showing an example of a received or created English document, and is a diagram showing a window displaying the document.
- FIG. 7A is a diagram showing a tag file, which is the tagged Japanese document shown in FIG. 5, and shows the heading portion.
- FIG. 7B is a view showing a tag file which is the tagged Japanese document shown in FIG. 5, and is a view showing the last paragraph.
- FIG. 8 is a diagram showing a tag file which is the tagged English document shown in FIG.
- FIG. 9A is a diagram showing a voice reading file generated from the tag file shown in FIG. 7, corresponding to the excerpt of the heading portion shown in FIG. 7A.
- FIG. 9B is a diagram showing a voice reading file generated from the tag file shown in FIG. 7, corresponding to the excerpt of the last paragraph shown in FIG. 7B.
- FIG. 10 is a diagram showing a voice reading file generated from the tag file shown in FIG.
- FIG. 11 is a flowchart illustrating a series of processes for generating a voice reading file.
- FIG. 12 is a diagram showing a user interface window.
- FIG. 13 is a diagram showing a window displaying a document.
- FIG. 14 is a diagram illustrating a window displaying a document, and is a diagram illustrating a state in which a display area for displaying a summary is larger than the display area illustrated in FIG. 13.
- FIG. 15 is a flowchart illustrating a series of processes when creating a summary sentence.
- FIG. 16 is a flowchart illustrating a series of processes when performing active diffusion.
- FIG. 17 is a diagram showing a connection structure of elements for explaining the active diffusion process.
- FIG. 18 is a flowchart for explaining a series of processes when performing link processing of active diffusion.
- FIG. 19 is a diagram showing a window displaying a document and its summary.
- FIG. 20 is a flowchart illustrating a series of processes when a new summary is created by changing the display range of the display area for displaying the summary.
- FIG. 21 is a diagram showing a window displaying a document and a summary sentence thereof, showing a state in which the summary sentence is displayed in the window shown in FIG. 14.
- FIG. 22 is a flowchart for explaining a series of processes when a summary sentence is created and a document is read aloud.
- FIG. 23 is a flowchart illustrating a series of processes for generating a voice reading file after creating a summary sentence.
- the document processing apparatus shown as an embodiment of the present invention has a function of reading out, by speech synthesis using a speech synthesis engine, a given electronic document and a summary sentence created from that electronic document.
- when reading out a sentence with the speech synthesis engine, the elements included in the summary sentence are read out at an increased volume, and predetermined pause periods are provided at the start positions of the paragraphs, sentences, and phrases that constitute the electronic document and the summary sentence.
- an electronic document is simply referred to as a document.
- the document processing apparatus includes: a main body 10 having a control unit 11 and an interface 12; an input unit 20 for supplying information input by a user to the main body 10; a receiving unit 21 for receiving an external signal and supplying it to the main body 10; a communication unit 22 for performing communication processing between a server 24 and the main body 10; an audio output unit 30 for outputting information output from the main body 10 as audio; a display unit 31 for displaying information output from the main body 10; a recording/reproducing unit 32 for recording and/or reproducing information on a recording medium 33; and a hard disk drive (HDD) 34.
- the main body 10 has a control unit 11 and an interface 12 and constitutes a main part of the document processing apparatus.
- the control unit 11 includes a CPU (Central Processing Unit) 13 for executing processing in the document processing apparatus, a RAM (Random Access Memory) 14 as a volatile memory, and a ROM (Read Only Memory) 15 as a non-volatile memory.
- the CPU 13 executes programs recorded in, for example, the ROM 15 or on the hard disk.
- the RAM 14 temporarily stores programs and data necessary for the CPU 13 to execute various processes, as needed.
- the interface 12 is connected to the input unit 20, the receiving unit 21, the communication unit 22, the display unit 31, the recording / reproducing unit 32, and the hard disk drive 34.
- the interface 12, under the control of the control unit 11, receives data supplied through the input unit 20, the receiving unit 21, and the communication unit 22, outputs data to the display unit 31, and, when inputting and outputting data to and from the recording/reproducing unit 32, adjusts the data input/output timing and converts the data format.
- the input unit 20 receives a user input to the document processing apparatus.
- the input unit 20 is constituted by, for example, a keyboard and a mouse.
- the user can, for example, input characters using the keyboard, or select and input an element of a document displayed on the display unit 31 using the mouse.
- An element is an element that constitutes a document, and includes, for example, a document, a sentence, and a word.
- the receiving unit 21 receives data transmitted from outside to the document processing apparatus via, for example, a communication line.
- the receiving unit 21 receives a plurality of electronic documents and an electronic document processing program for processing these documents.
- the data received by the receiving unit 21 is supplied to the main unit 10.
- the communication unit 22 is composed of, for example, a modem, a terminal adapter, and the like, and is connected to the Internet 23 via a telephone line.
- a server 24 storing data such as documents is connected to the Internet 23, and the communication unit 22 accesses the server 24 via the Internet 23, and the server 24 receives data from the server 24. Data can be received.
- the data received by the communication unit 22 is supplied to the main unit 10.
- the audio output unit 30 is constituted by, for example, a speaker. Electrical audio signals obtained by speech synthesis with a speech synthesis engine or the like, as well as various other audio signals, are input to the audio output unit 30 via the interface 12.
- the audio output unit 30 converts the input signal into a voice and outputs it.
- Character information and image information are input to the display unit 31 via the interface 12.
- the display unit 31 displays the input information. More specifically, the display unit 31 is, for example, a cathode ray tube (CRT) or a liquid crystal display (LCD), on which one or more windows are displayed and characters and figures are shown in these windows.
- the recording / reproducing unit 32 records and / or reproduces data on / from a removable recording medium 33 such as a floppy disk, an optical disk, or a magneto-optical disk under the control of the control unit 11.
- the recording medium 33 stores an electronic document processing program for processing a document and a document to be processed.
- the hard disk drive 34 records and/or reproduces data on a hard disk, which is a large-capacity magnetic recording medium.
- Such a document processing device receives a desired document and displays it on the display unit 31 as follows.
- the user operates the input unit 20 to start a program for performing communication via the Internet 23 and inputs the URL (Uniform Resource Locator) of the server 24 (a search engine).
- the control unit 11 controls the communication unit 22 and accesses the server 24.
- the server 24 outputs the data of the search screen to the communication unit 22 of the document processing device via the Internet 23.
- the CPU 13 outputs this data to the display unit 31 via the interface 12 and displays it.
- the communication unit 22 communicates with the search engine via the Internet 23 and transmits the search command to the server 24.
- the server 24 executes the search command and transmits the obtained search result to the communication unit 22 via the Internet 23.
- the control unit 11 controls the communication unit 22 to receive the search result transmitted from the server 24 and display a part of the search result on the display unit 31.
- when the user inputs a keyword such as “TCP” using the input unit 20 and issues a search command, the document processing apparatus receives from the server 24 various information containing the keyword “TCP”, and, for example, the following document is displayed on the display unit 31.
- ARPANET Transmission Control Protocol / Internet Protocol
- the official name of the ARPANET is the Advanced Research Projects Agency Network, a packet-switching network for experiments and research built with the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense (DOD) as its sponsor.
- the ARPANET started as a very small network connecting the host computers of universities and research institutes with 50-kbps lines.
- the tagging indicates each element of the internal structure, such as the document, sentences, and vocabulary elements, as well as normal links and reference/referenced links.
- the open circles are document elements such as vocabulary elements, segments, and sentences, that is, elements; the lowest open circles are vocabulary elements corresponding to the lowest-level words in the document.
- a solid line is a normal link indicating a connection between elements of a document such as a word, a phrase, a clause, a sentence, and the like.
- the dashed lines are reference links indicating the dependency relationship between referring and referred-to elements.
- the internal structure of a document is composed, from top to bottom, of the elements document, subdivision, paragraph, sentence, subsentential segment, and vocabulary element. Of these, the subdivision and the paragraph are optional.
- semantic and pragmatic tagging includes tags that describe the syntactic structure, tags that indicate the referent of a pronoun and the like, and tags that describe semantic information such as which meaning of a polysemous word is intended.
- Tagging in the present embodiment uses XML (eXtensible Markup Language), which is similar in form to HTML (HyperText Markup Language).
- ⁇ sentence>, ⁇ noun>, ⁇ noun phrase>, ⁇ verb>, ⁇ verb phrase>, ⁇ adjective verb>, ⁇ adjective verb phrase> are sentence, noun, noun phrase, verb, verb phrase, respectively.
- a prepositional phrase or a postpositional phrase containing an adjective / postpositional phrase / adjective phrase, adjective phrase Represents the syntactic structure of a sentence such as an adjective verb phrase.
- Tags are placed immediately before the start and immediately after the end of an element. A tag placed immediately after the end of an element contains the symbol “/” to indicate that it marks the end of the element.
- “time0” indicates which of the multiple meanings of the word “time” is intended, namely the 0th meaning. Specifically, “time” can be a noun or a verb, but here it indicates that “time” is used as a noun.
- the word “orange” has at least the meanings of a plant name, a color, and a fruit, and these too can be distinguished by their sense.
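The sense-distinguishing tags described above can be processed as in the following sketch. The element and attribute names (`noun`, `word`, `sense`) are hypothetical stand-ins for the actual tag set of this tagging scheme:

```python
import xml.etree.ElementTree as ET

# A hypothetical tagged fragment: each word element carries a "sense"
# attribute distinguishing homographs such as "time" (noun vs. verb)
# or "orange" (plant, color, fruit).
fragment = """
<sentence>
  <noun word="time" sense="0">time</noun>
  <noun word="orange" sense="2">orange</noun>
</sentence>
"""

root = ET.fromstring(fragment)
# Collect, for each tagged word, which of its meanings is intended.
senses = {el.get("word"): int(el.get("sense")) for el in root}
```

A speech synthesizer or summarizer can then use the recovered sense index to choose the right pronunciation or interpretation for each polysemous word.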
- the syntactic structure is displayed in a window 101 of the display unit 31; the vocabulary elements are displayed in the right half 103, and the internal structure of the sentence is displayed in the left half 102.
- the syntactic structure can be displayed not only for documents written in Japanese but also for documents written in any language such as English.
- Relation “x” indicates a relation attribute.
- This relationship attribute describes the syntactic, semantic, and rhetorical interactions.
- Grammatical functions such as subject, object, and indirect object, thematic roles such as agent and beneficiary, and rhetorical relations such as reason and result are described by this relation attribute.
- relation attributes are described for relatively simple grammatical functions such as subject, object, and indirect object.
- the document processing device can receive the document tagged in this way.
- when the CPU 13 activates the voice read-aloud program of the electronic document processing program recorded in the ROM 15 or on the hard disk, the document processing apparatus reads the document aloud through a series of steps as shown in FIG. 4.
- the document processing device receives a tagged document in step S1. It is assumed that the tags necessary for performing speech synthesis are added to this document, as described later. The document processing device can also receive a tagged document and newly add to it the tags necessary for performing speech synthesis, or receive an untagged document and tag it, including the tags necessary for performing speech synthesis, to create a tag file. In the following, the tagged document thus received or created is referred to as a tag file.
- Next, in step S2, under the control of the CPU 13, the document processing device generates a read-aloud file (read-aloud data) based on the tag file.
- Specifically, this read-aloud file is generated by deriving attribute information for reading aloud from the tags in the tag file and embedding this attribute information in the file.
- In step S3, under the control of the CPU 13, the document processing device processes the read-aloud file into a form suitable for the speech synthesis engine.
- This speech synthesis engine may be implemented in hardware or realized in software.
- When the engine is realized in software, the application program is stored in advance in the ROM 15, the hard disk, or the like of the document processing apparatus.
- step S4 the document processing device performs a process according to an operation performed by a user using a user interface described later.
- Through these steps, the document processing apparatus can synthesize speech from a given document and read it aloud.
- the document processing device accesses the server 24 shown in FIG. 1 and receives a document as a result of a search based on a keyword or the like.
- Alternatively, the document processing device may receive a tagged document and create a new document by adding the tags necessary for performing speech synthesis to it.
- the document processing apparatus can receive a document that has not been tagged, tag the document with tags necessary for performing speech synthesis, and create a tag file.
- Cancer has been the leading cause of death in Japan for more than a decade. The mortality rate is increasing with age. When thinking about the health of the elderly, we cannot avoid cancer.
- The essence of cancer is cell proliferation and metastasis.
- There are oncogenes, which act like a car's accelerator and make a cancer grow rapidly,
- and tumor suppressor genes, which act as brakes.
- A cancer that does not metastasize is not so much to be feared, since it can be completely cured simply by resection. Herein lies the importance of suppressing metastasis.
- Cancer cells dissolve the proteins between cells and carve out their own path, entering blood vessels and lymph vessels.
- It has recently been elucidated that they perform complex movements, such as searching for a new “dwelling” while circulating.
- When receiving this Japanese document, the document processing apparatus displays it in a window 110 on the display unit 31, as shown in FIG. 5.
- The window 110 is divided into a display area 120, which contains a document name display section 111 for displaying the name of the document, a keyword input section 112 for entering keywords, a summarization execution button 113 for creating a summary of the document as described later, and a read-aloud execution button 114 for reading the document aloud, and a display area 130 in which the document is displayed.
- At the right end of the display area 130, a scroll bar 131 and buttons 132 and 133 for moving the scroll bar 131 up and down are provided.
- the original document of the tag file shown in Fig. 6 is the following English document.
- When receiving the English document, the document processing apparatus displays it in a window 140 on the display unit 31, as shown in FIG. 6.
- Similarly to the window 110, the window 140 is divided into a display area 150, which contains a document name display section 141 on which the name of the document is displayed, a keyword input section 142 for inputting keywords, a summarization execution button 143 for creating a summary, and a read-aloud execution button 144 for reading the document aloud, and a display area 160 in which the document is displayed.
- At the right end of the display area 160, a scroll bar 161 and buttons 162 and 163 for moving the scroll bar 161 up and down are provided.
- the Japanese or English documents shown in Fig. 5 or Fig. 6 are configured as tag files as shown in Fig. 7 or Fig. 8, respectively.
- The tag file shown in FIG. 7A is an excerpt of the heading “[Nicely Aging]/8: Cancer Metastasis, Can It Be Suppressed!?”.
- The tag file shown in FIG. 7B is an excerpt of the last paragraph: “This metastasis does not occur simply because the number of cancer cells increases. It has recently been elucidated that cancer cells perform complex movements, dissolving the proteins between cells to carve out their own path, entering blood vessels and lymph vessels, and searching for a new ‘dwelling’ while circulating.” The intervening paragraphs are omitted. The actual tag file consists of a single file from the heading to the last paragraph.
- The tag <Heading> in the heading portion shown in FIG. 7A indicates that this portion is a heading.
- The last paragraph shown in FIG. 7B is provided with tags indicating that the relation attribute is “condition” or “means”.
- The last paragraph shown in FIG. 7B also shows examples of the tags required for performing the above-described speech synthesis.
- Among the tags required for performing speech synthesis are those that give pronunciation information (reading kana) for the original document. That is, in this case, tags necessary for speech synthesis are attached to technical terms such as “protein” and “lymphatic vessels” and to hard-to-read words such as “dwelling” that might otherwise be read aloud incorrectly.
- In addition, when a document contains a quotation, a tag indicating that the sentence is a quotation is attached to the tag file.
- Similarly, when a document contains a question, a tag indicating that the sentence is a question is attached to the tag file (not shown).
- As described above, in step S1 shown in FIG. 4, the document processing apparatus receives or creates a document to which the tags necessary for performing speech synthesis have been added.
- Next, in step S2 shown in FIG. 4, the document processing device derives attribute information for reading aloud from the tags in the tag file and generates a read-aloud file by embedding this attribute information.
- Specifically, the document processing apparatus finds the tags indicating the start positions of the paragraphs, sentences, and phrases of the document and embeds read-aloud attribute information corresponding to these tags. Also, as described later, when a summary of the document has been created, the document processing apparatus can find in the document the start positions of the portions included in the summary and embed attribute information that increases the volume when they are read aloud, thereby emphasizing that they are included in the summary.
- For example, the document processing apparatus generates read-aloud files as shown in FIG. 9 or FIG. 10 from the tag files shown in FIG. 7 or FIG. 8, respectively.
- The read-aloud file shown in FIG. 9A corresponds to the excerpt of the heading shown in FIG. 7A, and the read-aloud file shown in FIG. 9B corresponds to the excerpt of the last paragraph. As with the tag file, the actual read-aloud file is composed of a single file from the heading to the last paragraph.
- This attribute information indicates the language in which the document is written.
- Each of these pieces of attribute information indicates the start position of a paragraph, sentence, or phrase in the document.
- That is, the document processing device detects the start positions of these paragraphs, sentences, and phrases based on the tags in the tag file described above.
- These pieces of attribute information indicate that a pause of 500 milliseconds, 100 milliseconds, or 50 milliseconds, respectively, is inserted when reading aloud. That is, at the start of a paragraph, sentence, or phrase of the document,
- the document processing apparatus reads aloud with, for example, a pause of 650 milliseconds at the start of a paragraph, obtained by adding together the pause periods for the paragraph, the sentence, and the phrase. In this way, by providing pause periods corresponding to paragraphs, sentences, and phrases whose lengths decrease in that order, the document processing apparatus can read aloud naturally, taking the structure of the document into account.
- Note that these pause periods need not be 500, 100, and 50 milliseconds at the start of paragraphs, sentences, and phrases, and may be changed as appropriate.
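- The summed-pause rule described above can be sketched as follows (an illustrative sketch only; the constant table and function name are ours, not from the specification):

```python
# Pause lengths in milliseconds inserted before each structural unit
# when reading aloud (500/100/50 are the example defaults given above
# and may be changed as appropriate).
PAUSE_MS = {"paragraph": 500, "sentence": 100, "phrase": 50}

def pause_before(units):
    """Total pause at a position where the given units begin together.

    At a paragraph start, a sentence and a phrase also begin, so all
    three pauses are added (500 + 100 + 50 = 650 ms).
    """
    return sum(PAUSE_MS[u] for u in units)
```

Because a paragraph boundary is also a sentence and phrase boundary, the pause grows naturally with the size of the structural break.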
- In the read-aloud file, attribute information designating that another speech synthesis engine be used only for a quotation may also be embedded, based on the tag indicating that the document contains a quotation. Further, attribute information for raising the intonation at the end of a sentence may be embedded based on the tag indicating a question.
- Furthermore, attribute information for converting the plain literary style (the Japanese “da/dearu” style) into the polite style (the “desu/masu” style) can be embedded in the read-aloud file as necessary.
- Note that, instead of embedding such attribute information in the read-aloud file, the document processing device may itself convert the plain style into the polite style when generating the read-aloud file.
- To generate such a read-aloud file, in step S11, the document processing device analyzes the received or created tag file under the control of the CPU 13.
- That is, the document processing apparatus determines the language in which the document is described and, based on the tags, searches for the start positions of the paragraphs, sentences, and phrases of the document and for the read-aloud attribute information.
- Next, in step S13, under the control of the CPU 13, the document processing device replaces the tags at the start positions of the paragraphs, sentences, and phrases of the document with the corresponding attribute information in the read-aloud file.
- The document processing apparatus automatically generates a read-aloud file by performing the processing shown in FIG. 11 in step S2 shown in FIG. 4.
- The document processing device stores the generated read-aloud file in the RAM 14.
- Each speech synthesis engine is provided with an identifier according to its language, male/female voice, and so on, and this information is recorded on the hard disk as, for example, an initialization file.
- The document processing device refers to the initialization file and selects a speech synthesis engine having an identifier corresponding to the language of the document.
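- The selection of an engine by identifier can be sketched as follows (the identifier scheme and the ENGINES table stand in for the initialization file and are purely hypothetical):

```python
# Hypothetical contents of the initialization file: each registered
# speech synthesis engine carries an identifier encoding its language
# and voice type.
ENGINES = [
    {"id": "ja-female", "language": "ja", "voice": "female"},
    {"id": "en-male", "language": "en", "voice": "male"},
]

def select_engine(language, voice=None):
    """Return the identifier of the first engine matching the document
    language (and, optionally, the requested voice), or None."""
    for engine in ENGINES:
        if engine["language"] == language and (voice is None or engine["voice"] == voice):
            return engine["id"]
    return None
```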
- For example, the volume attribute information is expressed as a percentage increase over the default volume. The document processing device therefore converts this percentage information into an absolute value that the speech synthesis engine can use.
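- This percentage-to-absolute conversion can be sketched as follows (a minimal illustration; the default volume of 100 is an assumed value):

```python
def to_absolute_volume(percent_increase, default_volume=100):
    """Convert a volume attribute expressed as a percentage increase
    over the default volume into an absolute value for the engine."""
    return default_volume * (100 + percent_increase) // 100
```

For example, an attribute specifying an 80 percent increase yields an absolute volume of 180 when the default is 100.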
- In this way, in step S3 shown in FIG. 4, the document processing apparatus processes the read-aloud file to convert it into a format that the speech synthesis engine can use to read the document.
- The document processing apparatus starts the speech synthesis engine when the user, for example, operates the mouse or the like of the input unit 20 and presses the read-aloud execution button 114 or 144 shown in FIG. 5 or FIG. 6. The document processing device then displays a user interface window 170 on the display unit 31.
- The user interface window 170 has a play button 171 for reading the document aloud, a stop button 172 for stopping the reading, and a pause button 173 for temporarily stopping the reading.
- the user interface window 170 has buttons for performing cueing including rewinding and fast-forwarding.
- Specifically, the user interface window 170 is provided with a cue button 174, a rewind button 175, and a fast-forward button 176 for cueing, rewinding, and fast-forwarding in units of sentences.
- The user interface window 170 is also provided with selection switches 183 and 184 for selecting whether to read aloud the entire text or a summary described later.
- Although not shown here, the user interface window 170 may also be provided with, for example, buttons for increasing or decreasing the volume, buttons for increasing or decreasing the reading speed, and buttons for switching between a male and a female voice.
- The document processing apparatus performs the read-aloud operation with the speech synthesis engine when the user operates the mouse or the like of the input unit 20 and presses these various buttons and switches. For example, the document processing apparatus starts reading the document aloud when the user presses the play button 171, and when the user presses the cue button 174 during reading, it jumps to the beginning of the sentence currently being read and reads it again. The document processing apparatus can make such jumps in units of the marks embedded by the marking performed in step S3 of FIG. 4.
- When the user presses the rewind button 178 or the fast-forward button 179 using, for example, the mouse of the input unit 20, the document processing apparatus jumps by identifying only the marks that indicate the start positions of paragraphs.
- The same applies to the rewind button 175, the fast-forward button 176, the rewind button 181, and the fast-forward button 182, which jump in units of sentences and phrases, respectively.
- In this way, the document processing apparatus can respond to a user's request to repeatedly reproduce a desired portion of a document, for example, by jumping in units of paragraphs, sentences, and phrases during reading.
- In step S4, the document processing device reads the document aloud with the speech synthesis engine in accordance with the user's operations on this user interface.
- The read-aloud speech is output from the audio output unit 30.
- In this way, the document processing apparatus can read a desired document aloud naturally with the speech synthesis engine.
- When creating a summary of a document, the user operates the input unit 20 to execute the automatic summarization mode while the document is displayed on the display unit 31.
- In response, the document processing device drives the hard disk drive 34 under the control of the CPU 13 and activates the automatic summarization program among the electronic document processing programs stored on the hard disk.
- The display unit 31 is then controlled by the CPU 13 to display the initial screen of the automatic summarization program, as shown in FIG. 13.
- The window 190 displayed on the display unit 31 is divided into a display area 200, which contains a document name display section 191 on which the name of the document is displayed, a keyword input section 192 for entering keywords, and a summarization execution button 193 for creating a summary of the document, a display area 210 in which the document is displayed, and a display area 220 in which the summary of the document is displayed.
- In the document name display section 191 of the display area 200, the document name and the like of the document displayed in the display area 210 are displayed. A keyword for creating a summary of the document is entered into the keyword input section 192 using, for example, the keyboard of the input unit 20.
- The summarization execution button 193 is an execution button that starts the process of summarizing the document displayed in the display area 210 when pressed using, for example, the mouse of the input unit 20.
- In the display area 210, the document is displayed.
- At the right end of the display area 210, a scroll bar 211 and buttons 212 and 213 for moving the scroll bar 211 up and down are provided.
- By moving the scroll bar 211 up and down directly with the mouse or the like, or by pressing the buttons 212 and 213, the display contents shown in the display area 210 can be scrolled vertically.
- the user can select and summarize a part of the document displayed in the display area 210 or can summarize the entire document.
- In the display area 220, a summary of the document is displayed. In FIG. 13, nothing is displayed in this display area 220 because the summary has not yet been created.
- The user can change the display range (size) of the display area 220 by operating the input unit 20. Specifically, the user can, for example, enlarge the display area 220 shown in FIG. 13 as shown in FIG. 14.
- The document processing apparatus executes the processing shown in FIG. 15 under the control of the CPU 13 to start creating a summary.
- the process of creating a summary from a document is performed based on tagging of the internal structure of the document.
- As described above, the size of the display area 220 of the window 190 can be changed as shown in FIG. 14.
- When the summarization execution button 193 is operated after the window 190 is newly drawn on the display unit 31 or after the size of the display area 220 is changed, the document processing device executes the summarization processing so that the summary created from the document, at least part of which is displayed in the display area 210 of the window 190, fits in the display area 220.
- First, in step S21, the document processing apparatus performs a process called active diffusion under the control of the CPU 13.
- In the document processing device, the document is summarized by adopting, as the importance, the central activation values obtained by this active diffusion.
- By active diffusion, a central activation value corresponding to the tagging of the internal structure can be given to each element.
- Active diffusion is a process of giving high central activation values to elements related to an element that has a high central activation value.
- Specifically, in active diffusion, an element expressing an anaphora (coreference) and its antecedent come to have equal central activation values, while the other central activation values converge according to the internal structure.
- Since the central activation values are determined according to the tagging of the internal structure of the document, they can be used for analysis of the document that takes the internal structure into account.
- the document processing apparatus executes active diffusion by going through a series of steps shown in FIG.
- the document processing device initializes each element under the control of the CPU 13 in step S41.
- That is, the document processing device assigns initial central activation values to all the elements, both vocabulary elements and non-vocabulary elements. For example, the document processing apparatus assigns “1” to all the elements other than vocabulary elements and “0” to the vocabulary elements as the initial central activation values.
- By assigning non-uniform initial central activation values to the elements in advance, the document processing apparatus can make the bias of the initial values be reflected in the central activation values obtained as a result of the active diffusion.
- For example, the document processing apparatus can obtain central activation values reflecting the user's interests by setting high initial central activation values for elements of interest to the user.
- Links between elements are of two types: reference links, which express a dependency between a referring element and a referred element, and normal links, which are the other links. For both reference links and normal links, the document processing device sets the endpoint activation value of each end point of every link connecting elements to “0”.
- The document processing device stores the initial endpoint activation values thus assigned in, for example, the RAM 14.
- FIG. 17 shows an example of the connection structure between elements.
- In FIG. 17, an element Ei and an element Ej are shown as a part of the structure of elements and links constituting a document.
- The element Ei and the element Ej have central activation values ei and ej, respectively, and are connected by a link Lij.
- The end point of the link Lij connected to the element Ei is Tij, and the end point connected to the element Ej is Tji.
- In addition to the element Ej, to which it is connected by the link Lij, the element Ei is connected to elements Ek, El, and Em (not shown) by links Lik, Lil, and Lim, respectively.
- Similarly, the element Ej is connected not only to the element Ei by the link Lij but also to elements Ep, Eq, and Er (not shown) by links Ljp, Ljq, and Ljr.
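- The element-and-link structure of FIG. 17 can be represented, for example, by the following data structures (the class names and fields are ours; only the structure described above is modeled):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Link:
    kind: str                                          # "normal" or "reference"
    t: Dict[str, float] = field(default_factory=dict)  # endpoint activation per element name

@dataclass
class Element:
    name: str
    central: float = 0.0                               # central activation value e
    links: List[Link] = field(default_factory=list)

def connect(a: Element, b: Element, kind: str = "normal") -> Link:
    """Connect two elements by a link whose two endpoint activation
    values both start at 0, as in the initialization of step S41."""
    link = Link(kind=kind, t={a.name: 0.0, b.name: 0.0})
    a.links.append(link)
    b.links.append(link)
    return link
```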
- In step S42 in FIG. 16, under the control of the CPU 13, the document processing apparatus initializes a counter for counting the elements Ei constituting the document. That is, the document processing apparatus sets the count value i of the element counter to “1”, meaning that the counter refers to the first element E1.
- step S43 the document processing apparatus executes a link process for calculating a new central activation value for the element referenced by the counter under the control of the CPU 13. This link processing will be further described later.
- In step S44, under the control of the CPU 13, the document processing device determines whether the calculation of new central activation values has been completed for all the elements in the document.
- If the document processing device determines that the calculation of new central activation values has been completed for all the elements in the document, the process proceeds to step S45; if it determines that the calculation has not been completed for all the elements, the process proceeds to step S47.
- Specifically, under the control of the CPU 13, the document processing device determines whether the count value i of the counter has reached the total number of elements contained in the document. When it determines that the count value i has reached the total number of elements, it judges that the calculation has been completed for all the elements and proceeds to step S45. On the other hand, when it determines that the count value i has not reached the total number of elements, it judges that the calculation has not been completed for all the elements and proceeds to step S47.
- When the document processing apparatus determines that the count value i of the counter has not reached the total number of elements contained in the document, in step S47, under the control of the CPU 13, it increments the count value of the counter by “1”, setting it to “i + 1”.
- the counter refers to the (i + 1) th element, that is, the next element.
- The document processing apparatus then returns the processing to step S43, and the calculation of the endpoint activation values and the series of steps following it are executed for the next, (i+1)-th, element.
- When the document processing device determines that the count value i of the counter has reached the total number of elements contained in the document, in step S45, under the control of the CPU 13, it calculates the average of the changes in the central activation values of all the elements contained in the document, that is, the average of the changes of the newly calculated central activation values from the original central activation values.
- That is, the document processing device reads out, for example from the RAM 14, the original central activation values and the newly calculated central activation values of all the elements contained in the document.
- The document processing apparatus then calculates the average change by dividing the sum of the changes of the newly calculated central activation values from the original central activation values by the total number of elements contained in the document.
- The document processing device stores the average change in the central activation values of all the elements calculated in this way in, for example, the RAM 14.
- In step S46, under the control of the CPU 13, the document processing apparatus determines whether the average change in the central activation values of all the elements calculated in step S45 is within a preset threshold. If the document processing device determines that the change is within the threshold, it ends the series of processes. On the other hand, if it determines that the change is not within the threshold, it returns the process to step S42, sets the count value i of the counter to “1”, and repeats the series of steps for calculating the central activation values of the elements. As the loop of steps S42 to S46 is repeated, the amount of change gradually decreases.
- the document processing apparatus can perform active diffusion in this way.
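- The outer loop of active diffusion (steps S41 to S47) can be sketched as follows. The per-element update here is deliberately simplified to a neighborhood average standing in for the link processing of step S43; the function name, graph encoding, and threshold value are ours:

```python
def spreading_activation(adj, init, threshold=1e-3, max_iters=1000):
    """Iterate central activation updates until the average change
    falls below a preset threshold (the termination test of step S46).

    adj:  adjacency lists, element index -> list of neighbor indices
    init: initial central activation values (step S41)
    The update used here, averaging an element with its neighbors, is
    a simplified stand-in for the endpoint calculation of step S43.
    """
    central = list(init)
    for _ in range(max_iters):
        new = []
        for i, e in enumerate(central):
            vals = [e] + [central[j] for j in adj[i]]
            new.append(sum(vals) / len(vals))
        # Average change of all elements from their previous values.
        mean_change = sum(abs(a - b) for a, b in zip(new, central)) / len(central)
        central = new
        if mean_change < threshold:
            break
    return central
```

Activation flows from highly activated elements to the elements linked to them, and the change shrinks with each pass of the loop, so the iteration terminates.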
- Next, the link processing executed in step S43 to perform this active diffusion will be described with reference to FIG. 18. Note that the flowchart shown in FIG. 18 shows the processing for one element Ei, but this processing is performed for all the elements.
- First, in step S51, under the control of the CPU 13, the document processing apparatus initializes a counter that counts the links connected to one element Ei constituting the document. That is, the document processing apparatus sets the count value j of the link counter to “1”, so that the counter refers to the first link connected to the element Ei.
- Next, in step S52, under the control of the CPU 13, the document processing device refers to the tag of the relation attribute for the link Lij connecting the element Ei and the element Ej, and determines whether or not the link Lij is a normal link.
- That is, the document processing device judges whether the link Lij is a normal link, which expresses relations between elements such as vocabulary elements corresponding to words, sentence elements corresponding to sentences, and paragraph elements corresponding to paragraphs, or a reference link, which expresses a dependency between a referring element and a referred element. If the document processing apparatus determines that the link Lij is a normal link, it shifts the processing to step S53; if it determines that the link Lij is a reference link, it shifts the processing to step S54.
- In step S53, the document processing apparatus performs a process of calculating a new endpoint activation value of the end point Tij connected to the normal link Lij of the element Ei.
- In step S53, it is clear from the determination in step S52 that the link Lij is a normal link.
- The new endpoint activation value of the end point Tij connected to the normal link Lij of the element Ei is obtained by adding the endpoint activation values of the end points connected to the links of the element Ei other than the link Lij to the central activation value ej of the element Ej to which the element Ei is connected by the link Lij, and dividing the value obtained by this addition by the total number of elements contained in the document.
- Under the control of the CPU 13, the document processing device reads the necessary endpoint activation values and central activation values from, for example, the RAM 14. Using the read values, it calculates the new endpoint activation value of the end point connected to the normal link as described above, and stores the new endpoint activation value calculated in this way in, for example, the RAM 14.
- In step S54, the document processing apparatus performs a process of calculating a new endpoint activation value of the end point Tij connected to the reference link Lij of the element Ei. In step S54, it is clear from the determination in step S52 that the link Lij is a reference link.
- The new endpoint activation value of the end point Tij connected to the reference link Lij of the element Ei is obtained by adding the endpoint activation values of the end points connected to the links of the element Ei other than the link Lij to the central activation value ej of the element Ej to which the element Ei is connected by the link Lij.
- Under the control of the CPU 13, the document processing apparatus reads the necessary endpoint activation values and central activation values from those stored in, for example, the RAM 14. Using the read values, the document processing device calculates the new endpoint activation value of the end point connected to the reference link as described above, and stores the calculated endpoint activation value in, for example, the RAM 14.
- The processing of a normal link in step S53 and the processing of a reference link in step S54 each proceed to step S55 and return to step S52 via step S57, so that the processing is executed for all the links Lij connected to the element Ei referred to by the count value i. In step S57, the count value j for counting the links connected to the element Ei is incremented.
- In step S55, under the control of the CPU 13, the document processing apparatus determines whether the endpoint activation values have been calculated for all the links connected to the element Ei. If the document processing apparatus determines that the endpoint activation values have been calculated for all the links, the process proceeds to step S56; if it determines that they have not, the process proceeds to step S57.
- When the document processing apparatus determines that the endpoint activation values have been calculated for all the links, in step S56, under the control of the CPU 13, it updates the central activation value ei of the element Ei.
- Here, the prime (') denotes a new value. The new central activation value e'i is obtained by adding the sum of the new endpoint activation values of the end points of the element Ei to the current central activation value ei of the element Ei.
- Under the control of the CPU 13, the document processing device reads the necessary endpoint activation values and the central activation value from those stored in, for example, the RAM 14, performs the above-described calculation to obtain the new central activation value e'i of the element Ei, and stores the calculated new central activation value e'i in, for example, the RAM 14.
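- The link processing and central-value update for one element (steps S51 to S56) can be sketched as follows, using a flat dictionary representation (an illustrative sketch; per the formulas above, the reference-link case omits the division by the number of elements):

```python
def update_element(elem, n_total):
    """Recompute the endpoint activations of one element Ei, then its
    central activation value (steps S51-S56).

    elem is a dict:
      {"central": e_i,
       "links": [{"kind": "normal" or "reference",
                  "t": endpoint activation at this element,
                  "other_central": central value of the far element}]}

    A normal-link endpoint receives (sum of Ei's other endpoint
    activations + the far element's central value), divided by
    n_total, the total number of elements; a reference-link endpoint
    receives the same sum without the division, pulling coreferent
    elements toward equal values.  The new central value e'_i is the
    old value plus the sum of the new endpoint values.
    """
    old_t = [link["t"] for link in elem["links"]]
    total_t = sum(old_t)
    new_t = []
    for idx, link in enumerate(elem["links"]):
        s = (total_t - old_t[idx]) + link["other_central"]
        new_t.append(s / n_total if link["kind"] == "normal" else s)
    for link, t in zip(elem["links"], new_t):
        link["t"] = t
    elem["central"] += sum(new_t)
    return elem["central"]
```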
- By performing the processing shown in FIG. 18 for each element, the document processing device calculates a new central activation value for every element in the document, and thus executes the active diffusion of step S21 in FIG. 15. Subsequently, in step S22 in FIG. 15, the document processing apparatus obtains, for the window 190 displayed on the display unit 31 previously shown in FIG. 13, the size of the display area 220, that is, the maximum number of characters Ws that can be displayed in the display area 220.
- The document processing apparatus stores the maximum number of characters Ws that can be displayed in the display area 220, and the initial value of the summary S set as described above, in, for example, the RAM 14.
- The document processing apparatus stores the count value i set in this way in, for example, the RAM 14.
- Then, under the control of the CPU 13, for the count value i of the counter, the document processing device extracts the skeleton of the sentence having the i-th highest average central activation value from the sentences subject to summarization.
- the average central activity value is the average of the central activity values of the elements constituting one sentence.
- The document processing device stores the summary Si thus obtained in, for example, the RAM 14.
- The document processing apparatus also creates a list li of the elements not included in the sentence skeletons, ordered by their central activation values, and stores the list li in, for example, the RAM 14.
- In step S24, under the control of the CPU 13, the document processing apparatus selects sentences in descending order of average central activation value using the result of the activation diffusion, and extracts the skeleton of each selected sentence.
- the skeleton of a sentence is composed of essential elements extracted from the sentence.
- The essential elements can include the head of an element and its subject. When an element related by a coordination structure is an essential element, the elements directly included in that coordination structure are also treated as essential.
- The document processing device connects the essential elements of a sentence to generate the sentence skeleton, and adds it to the summary.
- In step S25, under the control of the CPU 13, the document processing apparatus determines whether the length of the summary Si, that is, its number of characters, exceeds the maximum number of characters Ws of the display area 220 of the window 190.
- If not, the process proceeds to step S26, where, under the control of the CPU 13, the document processing apparatus compares the average central activation value of the sentence having the (i+1)-th highest average central activation value with the central activation value of the element having the highest central activation value among the elements of the list li created in step S24. If the document processing apparatus determines that the average central activation value of the (i+1)-th sentence is higher than the central activation value of the highest element of the list li, the process proceeds to step S28.
- If the document processing apparatus determines that the average central activation value of the (i+1)-th sentence is not higher than the central activation value of the highest element of the list li, the process proceeds to step S27.
- In step S27, under the control of the CPU 13, the count value i of the counter is incremented by 1, and the process returns to step S24.
- In step S28, under the control of the CPU 13, the element e having the highest central activation value among the elements of the list li is added to the summary Si to generate a summary SSi, and the element e is deleted from the list li. The document processing device then stores the summary SSi generated in this way in, for example, the RAM 14.
- In step S29, under the control of the CPU 13, the document processing device determines whether the number of characters of the summary SSi is larger than the maximum number of characters Ws of the display area 220 of the window 190.
- When the number of characters of the summary SSi is determined not to exceed the maximum number of characters Ws, the document processing device repeats the process from step S26.
- When the number of characters of the summary SSi is determined to be greater than the maximum number of characters Ws, in step S31, under the control of the CPU 13, the document processing device sets the summary Si as the final summary sentence, displays it in the display area 220, and ends the series of processing. In this way, the document processing apparatus generates a summary sentence that does not exceed the maximum number of characters Ws.
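- The loop of steps S23 through S31 can be sketched roughly as follows (a simplified Python illustration under the assumption that each sentence has already been reduced to its skeleton string; the element-level refinement via the list li is omitted):

```python
def build_summary(sentences, max_chars):
    """Greedy summary construction in the spirit of steps S23-S31.

    sentences: list of (average_activation, skeleton_text) pairs, where
    the skeleton is the string of essential elements of the sentence.
    Sentence skeletons are added in descending order of average central
    activation until adding another would exceed max_chars (Ws).
    """
    summary = ""
    for _avg, skeleton in sorted(sentences, key=lambda s: -s[0]):
        candidate = summary + (" " if summary else "") + skeleton
        if len(candidate) > max_chars:
            break  # step S31: keep the previous summary that still fits
        summary = candidate
    return summary
```

As in the patent's flow, the summary that is finally displayed is the last candidate that did not exceed the character budget.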
- the document processing apparatus can create a summary by summarizing the tagged documents.
- For example, the document processing device creates a summary sentence as shown in FIG. 19 and displays it in the display area 220.
- For example, the summary "ARPANET started as a small network that, in 1969, connected the host computers of four universities and research institutes on the west coast of North America with 50 kbps lines. The first mainframe general-purpose computer series had been developed in 1964; such a project, which anticipated the future of computer communication, could be said to have been unique to the United States." is created and displayed in the display area 220.
- With such a document processing apparatus, instead of reading the entire text of a document, the user can read the summary to understand the outline of the text and determine whether the text is the desired information.
- The method of assigning importance to elements in a document is not limited to the activation diffusion described above.
- For example, words may be weighted using the tf*idf method, and the sum of the weights of the words appearing in a sentence may be used as its importance. Details of this method are described in "K. Zechner, Fast generation of abstracts from general domain text corpora by extracting relevant sentences, In Proc. of the 16th International Conference on Computational Linguistics, pp. 986-989, 1996".
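- This alternative scoring can be pictured as follows (a Python sketch; the tokenization and the exact tf/idf variants chosen here are illustrative assumptions, not those of the cited paper):

```python
import math

def tfidf_sentence_importance(docs, doc_index):
    """Score each sentence of one document by the sum of the tf*idf
    weights of its words, the alternative importance measure mentioned
    in the text.

    docs: list of documents; each document is a list of sentences;
          each sentence is a list of words.
    """
    n_docs = len(docs)
    df = {}  # document frequency of each word
    for doc in docs:
        for word in {w for sent in doc for w in sent}:
            df[word] = df.get(word, 0) + 1
    scores = []
    for sent in docs[doc_index]:
        # tf = occurrences of the word in the sentence; idf = log(N / df)
        score = sum(sent.count(w) * math.log(n_docs / df[w]) for w in set(sent))
        scores.append(score)
    return scores
```

Words common to every document receive zero idf weight and drop out, so the score favors sentences containing words characteristic of the document.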
- Of course, methods other than these can be used for assigning importance. Further, by inputting a keyword into the keyword input section 192 of the display area 200, it is possible to set the importance based on the keyword.
- Here, the document processing device can enlarge or otherwise change the display range of the display area 220 of the window 190 displayed on the display unit 31. If the display range of the display area 220 is changed while a summary sentence is displayed there, the amount of information in the summary sentence can be changed according to the new display range. In this case, the document processing device performs the processing shown in FIG.
- When the display range is changed, the document processing device shifts the processing to step S62 and, under the control of the CPU 13, measures the changed display range of the display area 220.
- The processing in steps S63 to S65 is the same as the processing performed in and after step S22 in FIG. 15: a summary sentence corresponding to the display range of the display area 220 is created, and the processing ends.
- In step S63, under the control of the CPU 13, the document processing apparatus determines the total number of characters of the summary sentence to be displayed in the display area 220, based on the measured display range of the display area 220 and the character size specified in advance.
- In step S64, under the control of the CPU 13, the document processing device selects sentences or words stored in the RAM 14 in descending order of central activation value, so that the created summary does not exceed the number of characters determined in step S63.
- In step S65, under the control of the CPU 13, the document processing apparatus joins the sentences or words selected in step S64 to create a summary sentence, and displays it in the display area 220 of the display unit 31.
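- Steps S62 through S65 amount to recomputing a character budget from the new display geometry and refilling it greedily. A rough Python sketch (the pixel-to-character conversion and the data layout are illustrative assumptions):

```python
def summary_for_display(display_width_px, display_height_px, char_w, char_h,
                        units):
    """Recreate a summary to fit a resized display area (steps S62-S65).

    units: list of (activation, text) sentences or words kept in memory,
    selected in descending order of activation until the budget is used.
    """
    cols = display_width_px // char_w
    rows = display_height_px // char_h
    budget = cols * rows                  # step S63: characters that fit
    picked, used = [], 0
    for _act, text in sorted(units, key=lambda u: -u[0]):  # step S64
        cost = len(text) + (1 if picked else 0)  # +1 for joining space
        if used + cost > budget:
            break
        used += cost
        picked.append(text)
    return " ".join(picked)               # step S65: join and display
```

Enlarging the display area raises the budget, so more sentences or words survive the selection and a more detailed summary results.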
- In this way, the document processing apparatus can newly create a summary sentence according to the display range of the display area 220. For example, when the user enlarges the display range of the display area 220 by dragging with the mouse of the input unit 20, the document processing device newly creates a more detailed summary and displays it in the display area 220 of the window 190, as shown in FIG.
- For example, a summary such as "ARPANET was constructed under the sponsorship of DARPA, the Defense Advanced Research Projects Agency of the US Department of Defense (DoD). In 1969, ARPANET started as a very small network that connected the host computers of four universities and research institutes on the west coast of North America with 50 kbps lines. In 1945, the world's first computer, ENIAC, was developed at the University of Pennsylvania, and in 1964, the first mainframe general-purpose computer series using ICs as logic elements was developed. Given the context of that era, such a project, which anticipated the future of computer communications, could truly be said to have been unique to the United States." is created and displayed in the display area 220.
- Thus, when the displayed summary is too simple to grasp the outline of the document, the user can enlarge the display range of the display area 220 and refer to a more detailed summary with a larger amount of information.
- the document processing device receives the tagged document in step S71.
- this document is provided with tags necessary for performing speech synthesis, and is configured as a tag file shown in FIG.
- The document processing apparatus can receive a tagged document and add to it new tags necessary for performing speech synthesis. Alternatively, the document processing apparatus may receive an untagged document and tag it, including the tags necessary for performing speech synthesis, to create a tag file. This step corresponds to step S1 in FIG.
- step S72 the document processing device creates a document summary by the method described above under the control of the CPU 13.
- Since the document serving as the source of the summary is tagged as described in step S71, tags corresponding to those of the document are also added to the created summary.
- In step S73, under the control of the CPU 13, the document processing device generates a speech read-out file for the entire contents of the document based on the tag file.
- This speech read-out file is generated by deriving attribute information for reading from the tags in the tag file and embedding the attribute information in the file.
- Specifically, the document processing apparatus generates the speech read-out file through the series of steps shown in FIG.
- In step S81, the document processing apparatus analyzes the received or created tag file by means of the CPU 13.
- the document processing device determines the language in which the document is described.
- Then, the start positions of paragraphs, sentences, and phrases in the document and the reading attribute information are searched for based on the tags.
- In step S86, based on the reading attribute information, the document processing device replaces the relevant portions with their correct readings by means of the CPU 13.
- step S87 the document processing apparatus searches for a portion included in the summary by the CPU 13.
- The document processing apparatus reads out the portions included in the summary at a volume increased by 80% from the default volume. The increase does not need to be 80%, and can be changed as appropriate.
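- The emphasis embedding can be pictured as follows (a Python sketch; the attribute representation and the default volume of 100 are illustrative assumptions, and only the 80% boost figure comes from the text):

```python
def embed_emphasis(sentences, summary_set, boost_percent=80):
    """Build read-out entries where portions contained in the summary
    carry a volume attribute raised above the default, so the speech
    synthesis engine reads them with emphasis.
    """
    DEFAULT_VOLUME = 100  # assumed default volume level
    entries = []
    for s in sentences:
        if s in summary_set:
            vol = DEFAULT_VOLUME * (100 + boost_percent) // 100
        else:
            vol = DEFAULT_VOLUME
        entries.append({"text": s, "volume": vol})
    return entries
```

Passing a different `boost_percent` corresponds to the remark that the increase can be changed as appropriate.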
- In this way, the document processing apparatus automatically generates a speech read-out file by performing the processing shown in FIG. 23 in step S73 in FIG.
- The document processing device stores the generated speech read-out file in the RAM 14. This step corresponds to step S2 in FIG.
- In step S74 in FIG. 22, under the control of the CPU 13, the document processing apparatus uses the speech read-out file to perform processing appropriate for the speech synthesis engine stored in advance in the ROM 15, a hard disk, or the like. This step corresponds to step S3 in FIG.
- step S75 the document processing device performs a process in accordance with the operation performed by the user using the user interface described above.
- This step corresponds to step S4 in FIG.
- For example, when the user uses the mouse or the like of the input unit 20 to select the selection switch 184 of the user interface screen 170 shown in FIG., the summary created in step S72 can be read aloud.
- the document processing apparatus can start reading out the summary sentence, for example, when the user presses the play button 171 using the mouse of the input unit 20 or the like.
- When the user selects the selection switch 183 using the mouse or the like of the input unit 20 and presses the play button 171, the document processing apparatus starts reading out the document as described above.
- At this time, the document processing device sets a different pause period at the start positions of paragraphs, sentences, and phrases, and reads out the text accordingly.
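- The differing pause periods can be sketched as follows (the concrete durations are illustrative assumptions; the text specifies only that the three kinds of start position receive different pauses, longest for paragraphs):

```python
def pause_for(boundary):
    """Assign a different pause period (in milliseconds) to each kind of
    start position; the durations below are assumed for illustration."""
    pauses = {"paragraph": 600, "sentence": 400, "phrase": 150}
    return pauses[boundary]

def render_with_pauses(segments):
    """segments: list of (boundary_kind, text) pairs. Emits a flat
    read-out sequence of pause markers and text, as a speech synthesis
    engine might consume it."""
    out = []
    for kind, text in segments:
        out.append(("pause_ms", pause_for(kind)))
        out.append(("speak", text))
    return out
```

Inserting a longer pause before a paragraph than before a phrase is what gives the read-out its natural rhythm.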
- the document processing apparatus can read out a given document or a prepared summary sentence.
- Furthermore, the document processing device can change the reading method according to the generated summary, for example by emphasizing the portions included in the summary.
- As described above, the document processing apparatus can automatically generate a speech read-out file from a given document, and read out the document and a summary sentence created from it using an appropriate speech synthesis engine. When reading out the portions included in the created summary sentence, the document processing apparatus can emphasize them by increasing their volume, thereby drawing the user's attention. In addition, the document processing apparatus can identify the start positions of paragraphs, sentences, and phrases and provide a pause period corresponding to each, so that natural reading without a sense of incongruity can be performed.
- the present invention is not limited to this.
- For example, the present invention can be applied to a case where a document is received via a satellite or the like; the document may also be read from the recording medium 33 in the recording/reproducing unit 32, or recorded in the ROM 15 in advance.
- In the above description, a speech read-out file is generated from the received or created tag file. However, such a speech read-out file need not be generated; speech may instead be read out directly based on the tag file.
- In this case, the document processing apparatus uses the speech synthesis engine to identify paragraphs, sentences, and phrases based on the tags indicating them attached to the tag file, and reads aloud with a predetermined pause at the beginning of each paragraph, sentence, and phrase.
- Also, since the tag file is provided with attribute information for prohibiting reading and attribute information indicating pronunciation, the document processing apparatus removes the portions for which reading is prohibited and reads aloud with the correct pronunciations substituted.
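- The handling of these two kinds of attribute information can be pictured as follows (a Python sketch; the attribute names and the dictionary token layout are illustrative assumptions):

```python
def apply_reading_attributes(tokens):
    """Apply read-out attribute information: skip tokens whose attributes
    prohibit reading, and substitute the given pronunciation where one
    is attached.

    tokens: list of dicts like {"text": ..., "no_read": bool,
            "pronunciation": str or None}
    """
    out = []
    for t in tokens:
        if t.get("no_read"):
            continue  # portion whose reading is prohibited is dropped
        # use the attached pronunciation if present, else the raw text
        out.append(t.get("pronunciation") or t["text"])
    return out
```

The resulting token sequence is what would be handed to the speech synthesis engine.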
- Further, by operating the user interface described above during reading, the document processing apparatus can also search, fast-forward, or rewind in units of the paragraphs, sentences, and phrases tagged in the tag file.
- the document processing apparatus can directly read the document based on the tag file without generating the voice reading file.
- Further, the present invention can also easily be realized by using, as the recording medium 33, a disk-shaped recording medium or a tape-shaped recording medium on which the above-described electronic document processing program is written.
- In the above description, the mouse of the input unit 20 has been exemplified as a device for operating the various windows displayed on the display unit 31. However, the present invention is not limited to this; for example, a tablet and pen can be used as such a device.
- As described above, the electronic document processing apparatus according to the present invention is an electronic document processing apparatus for processing an electronic document, comprising: document input means for inputting an electronic document; and speech read-out data generating means for generating, based on the electronic document, speech read-out data to be read out by a speech synthesizer.
- Therefore, the electronic document processing device generates speech read-out data based on the electronic document, and, using the speech read-out data, can read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
- The electronic document processing method according to the present invention is an electronic document processing method for processing an electronic document, comprising: a document input step of inputting the electronic document; and a speech read-out data generating step of generating, based on the electronic document, speech read-out data to be read out by a speech synthesizer.
- Therefore, the electronic document processing method generates speech read-out data based on the electronic document, and, using the speech read-out data, makes it possible to read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded, the program comprising: a document input step of inputting the electronic document; and a speech read-out data generating step of generating, based on the electronic document, speech read-out data to be read out by a speech synthesizer.
- This recording medium can therefore provide an electronic document processing program that generates speech read-out data based on an electronic document. An apparatus provided with the electronic document processing program can, using the speech read-out data, read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
- Further, the electronic document processing device according to the present invention is an electronic document processing device for processing an electronic document to which tag information indicating the internal structure of the electronic document, which has a plurality of elements and a hierarchical structure, is added.
- It comprises document input means for inputting the electronic document
- and document read-out means for reading out the electronic document by speech synthesis based on the tag information.
- Therefore, the electronic document processing apparatus inputs an electronic document to which tag information indicating its internal structure, with a plurality of elements and a hierarchical structure, is added, and, based on the tag information, can directly read out the electronic document with high accuracy and without a sense of incongruity.
- The electronic document processing method according to the present invention processes an electronic document.
- It comprises a document input step of inputting the electronic document, to which tag information indicating the internal structure of the electronic document, which has a plurality of elements and a hierarchical structure, is added,
- and a document read-out step of reading out the electronic document by speech synthesis based on the tag information.
- Therefore, the electronic document processing method inputs an electronic document to which tag information indicating its internal structure, with a plurality of elements and a hierarchical structure, is added, and, based on the tag information, makes it possible to directly read out the electronic document with high accuracy and without a sense of incongruity.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- The program comprises a document input step of inputting an electronic document to which tag information indicating the internal structure of the electronic document, which has a plurality of elements and a hierarchical structure, is added,
- and a document read-out step of reading out the electronic document by speech synthesis based on the tag information.
- Therefore, an apparatus provided with the electronic document processing program can input such an electronic document and directly read it out with high accuracy and without a sense of incongruity.
- Further, the electronic document processing device according to the present invention processes an electronic document.
- It comprises summary sentence creating means for creating a summary sentence of the electronic document
- and speech read-out data generating means for generating speech read-out data for reading out the electronic document by a speech synthesizer.
- The speech read-out data generating means generates the speech read-out data by adding attribute information indicating that, in the electronic document, the portions included in the summary sentence are to be read out with emphasis compared to the portions not included in the summary sentence.
- Therefore, the electronic document processing apparatus adds attribute information indicating that the portions included in the summary sentence of the electronic document are to be read out with emphasis compared to the portions not included in the summary sentence.
- The electronic document processing method according to the present invention is an electronic document processing method for processing an electronic document, comprising: a summary sentence creating step of creating a summary sentence of the electronic document; and a speech read-out data generating step of generating speech read-out data for reading out the electronic document by a speech synthesizer. In the speech read-out data generating step, the speech read-out data is generated by adding attribute information indicating that the portions included in the summary sentence of the electronic document are to be read out with emphasis compared to the portions not included in the summary sentence.
- Therefore, the electronic document processing method adds attribute information indicating that the portions included in the summary sentence of the electronic document are to be read out with emphasis compared to the portions not included in the summary sentence.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- In the speech read-out data generating step of the program, the speech read-out data is generated by adding attribute information indicating that the portions included in the summary sentence of the electronic document are to be read out with emphasis compared to the portions not included in the summary sentence.
- This recording medium can therefore provide an electronic document processing program that generates speech read-out data by adding attribute information indicating that the portions included in the summary sentence of the electronic document are to be read out with emphasis compared to the portions not included in the summary sentence. An apparatus provided with the electronic document processing program can, using the speech read-out data, read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity, emphasizing the portions included in the summary sentence.
- Further, the electronic document processing device according to the present invention is an electronic document processing device for processing an electronic document, comprising: summary sentence creating means for creating a summary sentence of the electronic document; and document read-out means for reading out the portions of the electronic document included in the summary sentence with emphasis compared to the portions not included in the summary sentence.
- Therefore, the electronic document processing apparatus can directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity, reading the portions included in the summary sentence with emphasis compared to the portions not included.
- The electronic document processing method according to the present invention is an electronic document processing method for processing an electronic document, comprising: a summary sentence creating step of creating a summary sentence of the electronic document; and a document read-out step of reading out the portions of the electronic document included in the summary sentence with emphasis compared to the portions not included in the summary sentence.
- Therefore, the electronic document processing method makes it possible to directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity, reading the portions included in the summary sentence with emphasis compared to the portions not included.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded.
- The program includes a document read-out step of reading out the portions of the electronic document included in the summary sentence with emphasis compared to the portions not included in the summary sentence.
- This recording medium can therefore provide such an electronic document processing program. An apparatus provided with the electronic document processing program can directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity, reading the portions included in the summary sentence with emphasis compared to the portions not included.
- Further, the electronic document processing apparatus according to the present invention is an electronic document processing apparatus for processing an electronic document, comprising detection means for detecting the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document,
- and speech read-out data generating means for generating speech read-out data to be read out by a speech synthesizer, by adding to the electronic document, based on the detection result obtained by the detection means, attribute information indicating that a different pause period is to be provided at the start positions of at least two of paragraphs, sentences, and phrases.
- Therefore, the electronic document processing device generates speech read-out data by adding attribute information indicating that different pause periods are to be set at the start positions of at least two of paragraphs, sentences, and phrases; accordingly, any electronic document can be read aloud with high accuracy and without a sense of incongruity by speech synthesis using the speech read-out data. Further, the electronic document processing method according to the present invention is an electronic document processing method for processing an electronic document, comprising a detection step of detecting the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document.
- Therefore, the electronic document processing method generates speech read-out data by adding attribute information indicating that the start positions of at least two of paragraphs, sentences, and phrases are to have different pause periods, and any electronic document can thus be read aloud by speech synthesis with high accuracy and without a sense of incongruity.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded, the program comprising: a detection step of detecting the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document; and a speech read-out data generating step of generating speech read-out data to be read out by a speech synthesizer, by adding, based on the detection result, attribute information indicating that different pause periods are to be provided at the start positions of at least two of paragraphs, sentences, and phrases.
- This recording medium can therefore provide an electronic document processing program that generates speech read-out data with attribute information indicating that different pause periods are to be provided at the start positions of at least two of paragraphs, sentences, and phrases.
- An apparatus provided with the electronic document processing program can therefore read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity, using the speech read-out data.
- Further, the electronic document processing device according to the present invention is an electronic document processing device for processing an electronic document, comprising detection means for detecting the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document,
- and document read-out means for reading out the electronic document by speech synthesis while providing a different pause period at the start positions of at least two of paragraphs, sentences, and phrases.
- Therefore, by providing different pause periods at the start positions of at least two of paragraphs, sentences, and phrases, the electronic document processing apparatus can directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
- In the electronic document processing method according to the present invention for processing an electronic document, a detection step detects the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document, and, based on the detection result obtained in the detection step, a document read-out step provides a different pause period at the start positions of at least two of paragraphs, sentences, and phrases and reads out the electronic document by speech synthesis.
- Therefore, by providing different pause periods at the start positions of at least two of paragraphs, sentences, and phrases, the electronic document processing method makes it possible to directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
- The recording medium on which the electronic document processing program according to the present invention is recorded is a recording medium on which a computer-controllable electronic document processing program for processing an electronic document is recorded, the program comprising a detection step of detecting the start positions of at least two of paragraphs, sentences, and phrases from among the plurality of elements constituting the electronic document,
- and a document read-out step of providing, based on the detection result obtained in the detection step, a different pause period at the start positions of at least two of paragraphs, sentences, and phrases, and reading out the electronic document by speech synthesis.
- This recording medium can therefore provide an electronic document processing program that directly reads out an electronic document while providing different pause periods at the start positions of paragraphs, sentences, and phrases. An apparatus provided with the electronic document processing program can directly read aloud any electronic document by speech synthesis with high accuracy and without a sense of incongruity.
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00940814A EP1109151A4 (en) | 1999-06-30 | 2000-06-22 | ELECTRONIC SORTER |
US09/763,832 US7191131B1 (en) | 1999-06-30 | 2000-06-22 | Electronic document processing apparatus |
US10/926,805 US6985864B2 (en) | 1999-06-30 | 2004-08-26 | Electronic document processing apparatus and method for forming summary text and speech read-out |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP11/186839 | 1999-06-30 | ||
JP11186839A JP2001014306A (ja) | 1999-06-30 | 1999-06-30 | Electronic document processing method, electronic document processing apparatus, and recording medium on which an electronic document processing program is recorded |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/763,832 A-371-Of-International US7191131B1 (en) | 1999-06-30 | 2000-06-22 | Electronic document processing apparatus |
US10/926,805 Division US6985864B2 (en) | 1999-06-30 | 2004-08-26 | Electronic document processing apparatus and method for forming summary text and speech read-out |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001001390A1 true WO2001001390A1 (fr) | 2001-01-04 |
Family
ID=16195543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2000/004109 WO2001001390A1 (fr) | 1999-06-30 | 2000-06-22 | Trieuse-liseuse electronique |
Country Status (4)
Country | Link |
---|---|
US (2) | US7191131B1 (ja) |
EP (1) | EP1109151A4 (ja) |
JP (1) | JP2001014306A (ja) |
WO (1) | WO2001001390A1 (ja) |
Families Citing this family (151)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060116865A1 (en) | 1999-09-17 | 2006-06-01 | Www.Uniscape.Com | E-services translation utilizing machine translation and translation memory |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
GB0215123D0 (en) * | 2002-06-28 | 2002-08-07 | Ibm | Method and apparatus for preparing a document to be read by a text-to-speech reader |
GB2390704A (en) * | 2002-07-09 | 2004-01-14 | Canon Kk | Automatic summary generation and display |
US7535922B1 (en) * | 2002-09-26 | 2009-05-19 | At&T Intellectual Property I, L.P. | Devices, systems and methods for delivering text messages |
US7299261B1 (en) * | 2003-02-20 | 2007-11-20 | Mailfrontier, Inc. A Wholly Owned Subsidiary Of Sonicwall, Inc. | Message classification using a summary |
US20040260551A1 (en) * | 2003-06-19 | 2004-12-23 | International Business Machines Corporation | System and method for configuring voice readers using semantic analysis |
US20050120300A1 (en) * | 2003-09-25 | 2005-06-02 | Dictaphone Corporation | Method, system, and apparatus for assembly, transport and display of clinical data |
CN100527076C (zh) * | 2003-09-30 | 2009-08-12 | Siemens AG | Method and system for configuring a language for a computer program |
US7783474B2 (en) * | 2004-02-27 | 2010-08-24 | Nuance Communications, Inc. | System and method for generating a phrase pronunciation |
US7983896B2 (en) * | 2004-03-05 | 2011-07-19 | SDL Language Technology | In-context exact (ICE) matching |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US8036894B2 (en) * | 2006-02-16 | 2011-10-11 | Apple Inc. | Multi-unit approach to text-to-speech synthesis |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8027837B2 (en) * | 2006-09-15 | 2011-09-27 | Apple Inc. | Using non-speech sounds during text-to-speech synthesis |
US9087507B2 (en) * | 2006-09-15 | 2015-07-21 | Yahoo! Inc. | Aural skimming and scrolling |
US8521506B2 (en) | 2006-09-21 | 2013-08-27 | Sdl Plc | Computer-implemented method, computer software and apparatus for use in a translation system |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US20080300872A1 (en) * | 2007-05-31 | 2008-12-04 | Microsoft Corporation | Scalable summaries of audio or visual content |
EP2156438A1 (en) * | 2007-06-15 | 2010-02-24 | Koninklijke Philips Electronics N.V. | Method and apparatus for automatically generating summaries of a multimedia file |
US8145490B2 (en) * | 2007-10-24 | 2012-03-27 | Nuance Communications, Inc. | Predicting a resultant attribute of a text file before it has been converted into an audio file |
US20090157407A1 (en) * | 2007-12-12 | 2009-06-18 | Nokia Corporation | Methods, Apparatuses, and Computer Program Products for Semantic Media Conversion From Source Files to Audio/Video Files |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
JP2009265279A (ja) * | 2008-04-23 | 2009-11-12 | Sony Ericsson Mobilecommunications Japan Inc | Speech synthesis apparatus, speech synthesis method, speech synthesis program, portable information terminal, and speech synthesis system |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8984398B2 (en) * | 2008-08-28 | 2015-03-17 | Yahoo! Inc. | Generation of search result abstracts |
US20100070863A1 (en) * | 2008-09-16 | 2010-03-18 | International Business Machines Corporation | Method for reading a screen |
US8990087B1 (en) * | 2008-09-30 | 2015-03-24 | Amazon Technologies, Inc. | Providing text to speech from digital content on an electronic device |
JP4785909B2 (ja) * | 2008-12-04 | 2011-10-05 | Sony Computer Entertainment Inc | Information processing apparatus |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9262403B2 (en) | 2009-03-02 | 2016-02-16 | Sdl Plc | Dynamic generation of auto-suggest dictionary for natural language translation |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US20120311585A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Organizing task items that represent tasks to perform |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
DE202011111062U1 (de) | 2010-01-25 | 2019-02-19 | Newvaluexchange Ltd. | Apparatus and system for a digital conversation management platform |
US8103554B2 (en) * | 2010-02-24 | 2012-01-24 | GM Global Technology Operations LLC | Method and system for playing an electronic book using an electronics system in a vehicle |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8423365B2 (en) | 2010-05-28 | 2013-04-16 | Daniel Ben-Ezri | Contextual conversion platform |
US20110313756A1 (en) * | 2010-06-21 | 2011-12-22 | Connor Robert A | Text sizer (TM) |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US9128929B2 (en) | 2011-01-14 | 2015-09-08 | Sdl Language Technologies | Systems and methods for automatically estimating a translation time including preparation time in addition to the translation itself |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
CN105027197B (zh) | 2013-03-15 | 2018-12-14 | Apple Inc | Training an at least partly voice command system |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
CN110442699A (zh) | 2013-06-09 | 2019-11-12 | Apple Inc | Method for operating a digital assistant, computer-readable medium, electronic device, and system |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
KR101809808B1 (ko) | 2013-06-13 | 2017-12-15 | Apple Inc | System and method for emergency calls initiated by voice command |
DE112014003653B4 (de) | 2013-08-06 | 2024-04-18 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
EP3480811A1 (en) | 2014-05-30 | 2019-05-08 | Apple Inc. | Multi-command single utterance input method |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US9875734B2 (en) | 2016-01-05 | 2018-01-23 | Motorola Mobility, Llc | Method and apparatus for managing audio readouts |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
CN108885869B (zh) * | 2016-03-16 | 2023-07-18 | Sony Mobile Communications Inc | Method, computing device, and medium for controlling playback of audio data containing speech |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | USER INTERFACE FOR CORRECTING RECOGNITION ERRORS |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | LOW-LATENCY INTELLIGENT AUTOMATED ASSISTANT |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
DK179549B1 (en) | 2017-05-16 | 2019-02-12 | Apple Inc. | FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES |
US10635863B2 (en) | 2017-10-30 | 2020-04-28 | Sdl Inc. | Fragment recall and adaptive automated translation |
US10482159B2 (en) | 2017-11-02 | 2019-11-19 | International Business Machines Corporation | Animated presentation creator |
US10817676B2 (en) | 2017-12-27 | 2020-10-27 | Sdl Inc. | Intelligent routing services and systems |
US11256867B2 (en) | 2018-10-09 | 2022-02-22 | Sdl Inc. | Systems and methods of machine learning for digital assets and message creation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09244869A (ja) * | 1996-03-11 | 1997-09-19 | Nec Corp | Text read-out system |
JPH09258763A (ja) * | 1996-03-18 | 1997-10-03 | Nec Corp | Speech synthesizer |
JPH10105370A (ja) * | 1996-09-25 | 1998-04-24 | Canon Inc | Document read-out apparatus, document read-out method, and storage medium |
JPH10254861A (ja) * | 1997-03-14 | 1998-09-25 | Nec Corp | Speech synthesizer |
JPH10260814A (ja) * | 1997-03-17 | 1998-09-29 | Toshiba Corp | Information processing apparatus and information processing method |
JPH10274999A (ja) * | 1997-03-31 | 1998-10-13 | Sanyo Electric Co Ltd | Document read-out apparatus |
JPH1152973A (ja) * | 1997-08-07 | 1999-02-26 | Ricoh Co Ltd | Document read-out system |
JP2000099072A (ja) * | 1998-09-21 | 2000-04-07 | Ricoh Co Ltd | Document read-out apparatus |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3704345A (en) * | 1971-03-19 | 1972-11-28 | Bell Telephone Labor Inc | Conversion of printed text into synthetic speech |
US4864502A (en) | 1987-10-07 | 1989-09-05 | Houghton Mifflin Company | Sentence analyzer |
JP2783558B2 (ja) | 1988-09-30 | 1998-08-06 | Toshiba Corp | Summary generation method and summary generation apparatus |
US5185698A (en) | 1989-02-24 | 1993-02-09 | International Business Machines Corporation | Technique for contracting element marks in a structured document |
DE69327774T2 (de) * | 1992-11-18 | 2000-06-21 | Canon Information Syst Inc | Prozessor zur Umwandlung von Daten in Sprache und Ablaufsteuerung hierzu |
US5384703A (en) | 1993-07-02 | 1995-01-24 | Xerox Corporation | Method and apparatus for summarizing documents according to theme |
US5572625A (en) | 1993-10-22 | 1996-11-05 | Cornell Research Foundation, Inc. | Method for generating audio renderings of digitized works having highly technical content |
JP3340585B2 (ja) * | 1995-04-20 | 2002-11-05 | Fujitsu Ltd | Voice response apparatus |
US5907323A (en) * | 1995-05-05 | 1999-05-25 | Microsoft Corporation | Interactive program summary panel |
JPH08328590A (ja) * | 1995-05-29 | 1996-12-13 | Sanyo Electric Co Ltd | Speech synthesizer |
JP3384646B2 (ja) * | 1995-05-31 | 2003-03-10 | Sanyo Electric Co Ltd | Speech synthesizer and read-out time calculation apparatus |
US5675710A (en) | 1995-06-07 | 1997-10-07 | Lucent Technologies, Inc. | Method and apparatus for training a text classifier |
JPH09259028A (ja) * | 1996-03-19 | 1997-10-03 | Toshiba Corp | Information presentation method |
JPH09325787A (ja) | 1996-05-30 | 1997-12-16 | Internatl Business Mach Corp <Ibm> | Speech synthesis method, speech synthesis apparatus, and method and apparatus for embedding voice commands in text |
US6029131A (en) * | 1996-06-28 | 2000-02-22 | Digital Equipment Corporation | Post processing timing of rhythm in synthetic speech |
US5850629A (en) * | 1996-09-09 | 1998-12-15 | Matsushita Electric Industrial Co., Ltd. | User interface controller for text-to-speech synthesizer |
JPH10105371A (ja) | 1996-10-01 | 1998-04-24 | Canon Inc | Document read-out apparatus and document read-out method |
US20020002458A1 (en) * | 1997-10-22 | 2002-01-03 | David E. Owen | System and method for representing complex information auditorially |
GB9806085D0 (en) * | 1998-03-23 | 1998-05-20 | Xerox Corp | Text summarisation using light syntactic parsing |
US6446040B1 (en) * | 1998-06-17 | 2002-09-03 | Yahoo! Inc. | Intelligent text-to-speech synthesis |
US6317708B1 (en) * | 1999-01-07 | 2001-11-13 | Justsystem Corporation | Method for producing summaries of text document |
JP3232289B2 (ja) * | 1999-08-30 | 2001-11-26 | International Business Machines Corp | Symbol insertion apparatus and method |
WO2001033549A1 (fr) | 1999-11-01 | 2001-05-10 | Matsushita Electric Industrial Co., Ltd. | Device and method for reading electronic messages, and recording medium for text conversion |
- 1999
  - 1999-06-30 JP JP11186839A patent/JP2001014306A/ja not_active Withdrawn
- 2000
  - 2000-06-22 US US09/763,832 patent/US7191131B1/en not_active Expired - Fee Related
  - 2000-06-22 WO PCT/JP2000/004109 patent/WO2001001390A1/ja not_active Application Discontinuation
  - 2000-06-22 EP EP00940814A patent/EP1109151A4/en not_active Withdrawn
- 2004
  - 2004-08-26 US US10/926,805 patent/US6985864B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP1109151A4 * |
Also Published As
Publication number | Publication date |
---|---|
JP2001014306A (ja) | 2001-01-19 |
US7191131B1 (en) | 2007-03-13 |
US20050055212A1 (en) | 2005-03-10 |
EP1109151A1 (en) | 2001-06-20 |
EP1109151A4 (en) | 2001-09-26 |
US6985864B2 (en) | 2006-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2001001390A1 (fr) | Trieuse-liseuse electronique | |
US7076732B2 (en) | Document processing apparatus having an authoring capability for describing a document structure | |
Desagulier et al. | Corpus linguistics and statistics with R | |
Cassidy et al. | Multi-level annotation in the Emu speech database management system | |
US7941745B2 (en) | Method and system for tagging electronic documents | |
US7610546B1 (en) | Document processing apparatus having capability of controlling video data | |
Cresti et al. | C-ORAL-ROM: integrated reference corpora for spoken romance languages | |
US20080177528A1 (en) | Method of enabling any-directional translation of selected languages | |
US20080027726A1 (en) | Text to audio mapping, and animation of the text | |
US20060085735A1 (en) | Annotation management system, annotation managing method, document transformation server, document transformation program, and electronic document attachment program | |
US20080300872A1 (en) | Scalable summaries of audio or visual content | |
CN102880599A (zh) | Sentence exploration method for parsing sentences and supporting learning of the parsing | |
JP2009140466A (ja) | Method and system for providing a conversation dictionary service based on user-created question-and-answer data | |
WO2000043909A1 (fr) | Document processing method and device, and recording medium | |
CN107066437B (zh) | Method and apparatus for annotating digital works | |
Androutsopoulos et al. | Generating multilingual personalized descriptions of museum exhibits-The M-PIRO project | |
JP2001109762A (ja) | Document processing method and apparatus, and recording medium | |
JP4186321B2 (ja) | Document processing method and apparatus, and recording medium | |
JP2001014305A (ja) | Electronic document processing method, electronic document processing apparatus, and recording medium on which an electronic document processing program is recorded | |
JP2001014307A (ja) | Document processing apparatus, document processing method, and recording medium | |
JP2001027997A (ja) | Electronic document processing method, electronic document processing apparatus, and recording medium on which an electronic document processing program is recorded | |
JP2001027996A (ja) | Electronic document processing method, electronic document processing apparatus, and recording medium on which an electronic document processing program is recorded | |
JP3734101B2 (ja) | Hypermedia construction support apparatus | |
JP2010238263A (ja) | Application document information creation apparatus, application document information creation method, and program | |
JP2001014137A (ja) | Electronic document processing method, electronic document processing apparatus, and recording medium on which an electronic document processing program is recorded | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2000940814 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 09763832 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2000940814 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2000940814 Country of ref document: EP |