WO2014136534A1 - Understanding support system, understanding support server, understanding support method, and computer-readable recording medium - Google Patents
- Publication number
- WO2014136534A1 (application PCT/JP2014/053058)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text data
- writer
- text
- data
- summary writing
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
      - G06Q10/00—Administration; Management
      - G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
        - G06Q50/10—Services
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
      - G09B21/00—Teaching, or communicating with, the blind, deaf or mute
        - G09B21/009—Teaching or communicating with deaf persons
Definitions
- The present invention relates to an understanding support system, an understanding support server, and an understanding support method for supporting a hearing-impaired person's comprehension when the hearing-impaired person receives a service based on the summary writing of a speaker's utterances.
- the present invention also relates to a computer-readable recording medium on which a program for realizing these is recorded.
- Hearing-impaired persons with a hearing loss in both ears of about 100 decibels (hereinafter simply referred to as "hearing-impaired persons") can hardly receive the linguistic information in speech even with a hearing aid. For this reason, when a hearing-impaired person attends lectures or classes at school, a sign language interpreter or a summary-writing interpreter may be assigned.
- When summary-writing interpreters are attached, usually one or more interpreters are assigned to one hearing-impaired person, for example in a school class. These summary-writing interpreters then transcribe the teacher's speech and the like on a PC (Personal Computer) or in a paper notebook and present the result to the hearing-impaired person.
- The reason two or more summary-writing interpreters are required is that the summary-writing workload is heavy and the accuracy of the summarization tends to drop when one person works alone.
- Patent Document 1 discloses a device that supports transcription of speech.
- The device disclosed in Patent Document 1 creates synthesized speech from text data obtained by speech recognition or manual text conversion, and extracts feature quantities from both the created synthesized speech and the original uttered speech.
- The device disclosed in Patent Document 1 then compares the two sets of feature quantities and, based on the comparison result, presents errors in the transcription result.
- Patent Document 2 discloses an apparatus that automatically detects an error in a speech recognition result.
- The apparatus disclosed in Patent Document 2 accumulates and learns acoustic information based on past speech recognition results to generate a correct/incorrect discrimination model, and then uses the generated model to detect errors.
- The acoustic information has feature quantities obtained by analysis in the time domain and the frequency domain, and each minimum unit is discriminated as correct or erroneous.
- Patent Document 3 discloses an apparatus for correcting subtitles for presentation audio in real time.
- The device disclosed in Patent Document 3 obtains, by speech recognition, one or more character-string candidates and a certainty factor for each candidate, and selects either automatic determination or manual determination for each candidate according to the current processing status.
- Under automatic determination, the device automatically determines a confirmed character string for the first character-string candidate.
- Under manual determination, the device manually determines a confirmed character string for the first character-string candidate. If a character string cannot be confirmed, the device matches it against a keyword list, calculates a matching score, and on that basis outputs a keyword as the correction result.
- Patent Document 4 discloses a device that enables a proofreader to perform proofreading using text data obtained by speech recognition and speech data used for speech recognition.
- The device disclosed in Patent Document 4 converts a series of utterances into audio signals, divides them into a plurality of audio signals, and converts these into text data. The device then reads out and outputs mutually corresponding audio signals and text data in synchronization.
- the proofreader can execute proofreading based on these.
- However, since the devices of Patent Documents 1 and 2 involve no human check, there is no way to correct an error contained in the recognition result before it is presented to the hearing-impaired person.
- The device of Patent Document 3 can improve recognition accuracy compared with the devices disclosed in Patent Documents 1 and 2, but to do so the database used for calculating the matching score must be created for each field to be recognized. Since this makes the apparatus very expensive, it becomes difficult to use in a school or the like.
- In the device of Patent Document 4, the proofreader performs a check based on the audio, but because the recognition result may contain errors, the burden on the proofreader is considered large. Consequently, the recognition accuracy varies with the proofreader's degree of fatigue.
- An example of an object of the present invention is to provide an understanding support system, an understanding support server, an understanding support method, and a computer-readable recording medium that solve the above problems and can provide accurate information to a hearing-impaired person while reducing the burden on the person performing the summary writing.
- In order to achieve the above object, an understanding support system in the present invention is a system for supporting the comprehension of a hearing-impaired person who receives a service based on the summary writing of a speaker's utterances, and includes:
- a server device; a first client terminal used by a writer who performs the summary writing; a second client terminal used by a reviewer who reviews the summary writing made by the writer; and a third client terminal used by the hearing-impaired person.
- The server device includes: a text receiving unit that receives, from the first client terminal, text data obtained by the writer's summary writing of an utterance;
- a text transmission unit that transmits the received text data to the second client terminal;
- and a data communication unit that receives the text data reviewed by the reviewer from the second client terminal and transmits the reviewed text data to the third client terminal.
- Similarly, an understanding support server in the present invention is a server for supporting the comprehension of a hearing-impaired person who receives a service based on the summary writing of a speaker's utterances, and includes:
- a text receiving unit that receives, from a first client terminal used by the writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance;
- a text transmission unit that transmits the received text data to a second client terminal used by a reviewer who reviews the summary writing made by the writer;
- and a data communication unit that receives the reviewed text data from the second client terminal and transmits it to a third client terminal used by the hearing-impaired person.
- The understanding support method in the present invention is a method for supporting the comprehension of a hearing-impaired person who receives a service based on the summary writing of a speaker's utterances, and includes: (a) a step of receiving, from a first client terminal used by the writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance; (b) a step of transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and (c) a step of receiving the text data reviewed by the reviewer from the second client terminal and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
- Furthermore, a computer-readable recording medium in the present invention stores a program for supporting the comprehension of a hearing-impaired person who receives a service based on the summary writing of a speaker's utterances.
- The recording medium records a program including instructions that cause a computer to execute: (a) a step of receiving, from a first client terminal used by the writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance; (b) a step of transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and (c) a step of receiving the text data reviewed by the reviewer from the second client terminal and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
- FIG. 1 is a diagram schematically showing an overall configuration of an understanding support system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a configuration of an understanding support system and an understanding support server in the embodiment of the present invention.
- FIG. 3 is a flowchart showing the operation of the writer client constituting the understanding support system in the embodiment of the present invention.
- FIG. 4 is a flowchart showing the operation of the understanding support server in the embodiment of the present invention.
- FIG. 5 is a flowchart showing the operation of the reviewer client constituting the understanding support system according to the embodiment of the present invention.
- FIG. 6 is a diagram for explaining the part-of-speech decomposition processing in the text evaluation unit shown in FIG. 2.
- FIG. 7 is a diagram for explaining the matching-character-string calculation processing in the text evaluation unit shown in FIG. 2.
- FIG. 8 is a diagram for explaining the order rearrangement processing in the text evaluation unit shown in FIG. 2.
- FIG. 9 is a diagram showing an example of the writer content holding list used in the present embodiment.
- FIG. 10 is a diagram for explaining the duplicate-part deletion processing by the duplicate part deletion unit shown in FIG. 2.
- FIG. 11 is a block diagram illustrating an example of a computer that realizes the understanding support server according to the embodiment of the present invention.
- FIG. 1 is a diagram schematically showing an overall configuration of an understanding support system according to an embodiment of the present invention.
- The understanding support system 10 is a system that supports a hearing-impaired person's (hereinafter also referred to as a "user") understanding of summary writing when the user receives a service based on the summary writing of a speaker's utterances.
- The understanding support system 10 includes an understanding support server 1, which is a server device, a client terminal 2 used by a writer who performs summary writing, a client terminal 3 used by the user, and a client terminal 4 used by a reviewer.
- the reviewer is a third party who reviews (ie, proofreads) the summary written by the writer.
- Hereinafter, the client terminal 3 is referred to as the "user client 3".
- The client terminal 2 is referred to as the "writer client 2".
- The client terminal 4 is referred to as the "reviewer client 4".
- the understanding support server 1 includes a text reception unit 11, a text transmission unit 17, and a data communication unit 16.
- the text receiving unit 11 receives from the writer client 2 text data obtained by summary writing from the utterance by the writer.
- the text transmitting unit 17 transmits the text data received by the text receiving unit 11 to the reviewer client 4.
- The data communication unit 16 receives the text data reviewed by the reviewer from the reviewer client 4, and transmits the reviewed text data to the user client 3.
- the user client 3 presents the summary after proofreading to the user.
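The relay flow described above (writer client to server, server to reviewer client, reviewed text to user client) can be sketched as a minimal in-memory simulation. All class and method names below are illustrative, not taken from the patent.

```python
# A minimal sketch of the relay flow: the server receives summary-writing
# text from a writer client, forwards it to a reviewer client, and finally
# delivers the reviewed text to the user client. Names are illustrative.

class UnderstandingSupportServer:
    def __init__(self):
        self.pending_review = []   # text data awaiting review
        self.delivered = []        # text data delivered to the user client

    def receive_from_writer(self, text):
        """Text reception unit: accept summary-writing text data."""
        self.pending_review.append(text)

    def send_to_reviewer(self):
        """Text transmission unit: hand the oldest pending text to a reviewer."""
        return self.pending_review.pop(0) if self.pending_review else None

    def receive_reviewed(self, reviewed_text):
        """Data communication unit: accept proofread text and deliver it."""
        self.delivered.append(reviewed_text)


server = UnderstandingSupportServer()
server.receive_from_writer("Today is god weather")      # writer's raw summary
draft = server.send_to_reviewer()                       # reviewer fetches it
server.receive_reviewed(draft.replace("god", "good"))   # reviewer's correction
print(server.delivered)  # ['Today is good weather']
```

The user client would then display the contents of `delivered`; in the actual system these hand-offs are network transmissions rather than method calls.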
- FIG. 2 is a block diagram showing a configuration of an understanding support system and an understanding support server in the embodiment of the present invention.
- The understanding support system 10 is used, for example, for classes and lectures at schools. The voices of the speakers, including the lecturer, are collected as voice data by the voice distribution server 5 and transmitted from the voice distribution server 5 to each writer's writer client 2.
- the scribe client 2 includes a voice reception unit 21, a text transmission unit 22, and a text input unit 23.
- the audio receiving unit 21 receives the audio data distributed from the audio distribution server 5.
- the received audio data is reproduced by the audio reproducing device 6.
- the audio reproducing device 6 is, for example, a speaker provided in the writer client 2.
- the audio reproducing device 6 may be any device that can reproduce audio data.
- The text input unit 23 accepts the input of a character string and passes the accepted character string to the text transmission unit 22.
- The text transmission unit 22 transmits the input character string to the understanding support server 1 as text data.
- In the present embodiment, a group composed of two or more writers is set for each time slot, and summary writing for the same utterance is performed simultaneously by the two or more writers constituting the same group. For this reason, each writer client 2 is given an identifier that identifies the writer and the group.
- Each time slot is set so that successive time slots partially overlap, in order to prevent omissions in the summary writing.
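The overlapping time slots can be illustrated with a small helper that generates slot boundaries. The slot length and overlap values below are hypothetical, since the patent does not specify concrete durations.

```python
def make_time_slots(total_seconds, slot_length, overlap):
    """Generate (start, end) slots in which each consecutive pair of
    slots overlaps by `overlap` seconds, so no utterance falls in a gap."""
    slots = []
    start = 0
    while start < total_seconds:
        slots.append((start, min(start + slot_length, total_seconds)))
        start += slot_length - overlap
    return slots

# Hypothetical example: a 300-second session, 120-second slots, 30-second overlap.
print(make_time_slots(300, 120, 30))
# [(0, 120), (90, 210), (180, 300), (270, 300)]
```

Because each slot begins before the previous one ends, the group taking over always hears the tail of the previous group's audio; the resulting duplicated text is what the duplicate-part deletion described later removes.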
- examples of the input device of the writer client 2 include a general computer keyboard and a six-point keyboard used at the site of summary writing.
- the input device may be any device that can input a character string by a writer.
- the understanding support server 1 integrates texts for each writer based on the result of summary writing, and evaluates the integrated texts.
- The understanding support server 1 also identifies and deletes duplicate character strings in order to eliminate the duplication caused by the overlap of time slots.
- As a result, the text data is put into a state in which the reviewer can easily perform the review, and the text data processed in this way is transmitted to the reviewer client 4.
- In addition to the text reception unit 11, the text transmission unit 17, and the data communication unit 16 described above, the understanding support server 1 includes an input control unit 12, a duplication allocation control unit 13, a text evaluation unit 14, and a duplicate part deletion unit 15.
- the text receiving unit 11 receives text data and inputs it to the input control unit 12.
- the input control unit 12 integrates the received text data for each writer.
- As described above, a writer group is set for each time slot, so the input control unit 12 stores, for each writer constituting the same group, the text data received during the time slot assigned to that group.
- The input control unit 12 integrates the accumulated text data when the time slot assigned to the group currently performing summary writing ends, or when the writer client 2 notifies it that transmission of the text data is complete.
- the input control unit 12 integrates the text data of each writer constituting this group, and then outputs the integrated text data to the duplicate allocation control unit 13.
- When the integrated text data of each writer is output, the duplication allocation control unit 13 outputs the text data to the text evaluation unit 14 once the text data of all the writers in the group currently performing the summary writing are ready, or once a certain period of time has elapsed.
- When the summary writing is performed by a different writer for each time slot, the duplication allocation control unit 13 acquires the text data that was the review target of the writer who performed the summary writing in the previous time slot (hereinafter referred to as "pre-correction text data").
- In the present embodiment, a writer group is set for each time slot, and the text evaluation unit 14 described later selects the most appropriate text data as candidate data for the summary writing. The duplication allocation control unit 13 therefore acquires the pre-correction text data of the candidate data that was the review target of the group that performed the summary writing in the previous time slot.
- The duplication allocation control unit 13 assigns an allocation number to the pre-correction text data and registers the pre-correction text data, together with its allocation number, in a list (hereinafter referred to as the "writer content holding list"; see FIG. 9). The allocation number will be described later.
- The duplication allocation control unit 13 determines whether pre-correction text data is registered in the writer content holding list; if it is registered, the unit transfers the pre-correction text data and the candidate data selected by the text evaluation unit 14 to the duplicate part deletion unit 15.
- If it is not registered, the duplication allocation control unit 13 causes the text transmission unit 17 to transmit the text data obtained via the input control unit 12 to the reviewer client 4.
- the transmitted text data is candidate data selected by the text evaluation unit 14.
- The duplicate part deletion unit 15 compares the acquired pre-correction text data with the text data (candidate data) obtained from the writer group that performed the summary writing in the current time slot, and deletes the duplicated part from the latter.
- Specifically, the duplicate part deletion unit 15 compares the character string located at the end of the acquired pre-correction text data with the character string located at the beginning of the candidate data obtained from the group that performed the summary writing in the current time slot. The comparison may be performed over a set number of characters, which may be a fixed value or may be set according to the number of characters in the entire text data.
- the duplicate part deletion unit 15 outputs the text data (candidate data) from which the duplicate deletion has been performed to the text transmission unit 17.
- the text transmission unit 17 transmits the text data (candidate data) subjected to the duplicate deletion to the reviewer client 4.
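The duplicate-part deletion described above, comparing the tail of the pre-correction text data with the head of the candidate data over a bounded number of characters, can be sketched as follows. The function name and the 20-character default limit are illustrative assumptions, not values from the patent.

```python
def delete_duplicate_part(previous_text, candidate, max_chars=20):
    """Strip from the head of `candidate` the longest character string
    that also ends `previous_text`, checking at most `max_chars` characters.
    This models removing text duplicated across overlapping time slots."""
    limit = min(max_chars, len(previous_text), len(candidate))
    for n in range(limit, 0, -1):       # try the longest overlap first
        if previous_text[-n:] == candidate[:n]:
            return candidate[n:]
    return candidate                    # no overlap found

# The end of the previous slot's text duplicates the start of the new candidate:
prev = "the weather today is very"
cand = "is very nice, so class will be held outside"
print(delete_duplicate_part(prev, cand))
```

In this sketch the overlapping "is very" at the start of the candidate is removed; the `max_chars` bound corresponds to the "set number of characters" mentioned above.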
- The text evaluation unit 14 acquires, from the duplication allocation control unit 13, the integrated text data of each writer constituting the same group.
- The text evaluation unit 14 compares these text data with each other and, based on the comparison result, selects the text data to be reviewed as candidate data for each group.
- the candidate data selected at this time does not need to be one for each group, and may be plural.
- the text evaluation unit 14 performs part-of-speech decomposition for each acquired integrated text data and extracts only a character string corresponding to a specific part-of-speech. Next, the text evaluation unit 14 calculates, for each piece of text data after integration, the number of character strings that match character strings extracted from other text data among character strings extracted therefrom. Then, the text evaluation unit 14 can select candidate data for each of the integrated text data using the calculated number.
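The candidate selection described above can be sketched as follows. A simple whitespace tokenizer stands in for the part-of-speech decomposition (which in practice would require a morphological analyzer), and all names are illustrative.

```python
def select_candidate(texts):
    """Pick the writer's text that shares the most words with the other
    writers' texts. Whitespace tokenization stands in here for the
    part-of-speech decomposition and extraction of specific parts of speech."""
    def words(text):
        return set(text.lower().split())

    word_sets = [words(t) for t in texts]
    scores = []
    for i, own in enumerate(word_sets):
        # Union of every other writer's extracted words.
        others = set().union(*(w for j, w in enumerate(word_sets) if j != i))
        scores.append(len(own & others))   # count of matching strings
    return texts[scores.index(max(scores))]

# Hypothetical integrated texts from three writers in the same group:
summaries = [
    "today the weather is good",
    "today weather good",
    "today is a god wether",   # typos, so it agrees less with the others
]
print(select_candidate(summaries))  # today the weather is good
```

The text agreeing most with its peers wins, on the assumption that independent writers are unlikely to make the same mistake; the writer evaluation described later could additionally be used to break ties.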
- The reviewer client 4 includes a text reception unit 41, a display unit 42, a text transmission unit 43, and a reviewer proofreading unit 44.
- the text receiving unit 41 receives the text data transmitted from the understanding support server 1, specifically, candidate data from which duplicate portions are deleted, and inputs this to the display unit 42.
- The display unit 42 presents the received candidate data to the reviewer via an output device (not shown in FIG. 2) such as a display device. When there are a plurality of candidate data for the same utterance, all of them are presented.
- the reviewer client 4 may be a terminal device other than a PC, for example, a tablet terminal, a smartphone, or the like. In this case, a display device built in the terminal device is an output device.
- The reviewer proofreading unit 44 accepts the reviewer's corrections to the text data and passes the corrected text data to the text transmission unit 43. When there are a plurality of candidate data for the same utterance, the reviewer makes corrections after selecting one of the candidates, so the reviewer proofreading unit 44 passes only the selected and corrected candidate data to the text transmission unit 43. If the reviewer determines that no correction is needed, the reviewer proofreading unit 44 passes the uncorrected text data (the selected candidate data) to the text transmission unit 43.
- The text transmission unit 43 transmits the corrected text data, together with the corresponding pre-correction text data (when a selection was made, the pre-correction text data of the selected candidate data), to the understanding support server 1. The pre-correction text data transmitted at this time is registered, together with an allocation number, in the writer content holding list by the duplication allocation control unit 13 in the understanding support server 1. The registered pre-correction text data is later used for deleting overlapping portions.
- the data transmitted from the text transmitting unit 43 is received by the data communication unit 16.
- The data communication unit 16 can identify the writer who created the candidate data selected by the reviewer at the reviewer client 4, and can evaluate each writer based on that result. For example, each time candidate data is selected, the data communication unit 16 increments the score of the writer who created it. The writer evaluations obtained in this way can be used by the text evaluation unit 14 described above when selecting candidate data.
- Of the data transmitted from the reviewer client 4, the data communication unit 16 transmits the text data corrected by the reviewer to the user client 3.
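The writer evaluation described above can be sketched as a running tally of how often each writer's candidate data is selected; the class and method names are illustrative.

```python
from collections import Counter

class WriterEvaluator:
    """Each time a reviewer selects a writer's candidate data, that
    writer's score is incremented; the running scores can then bias
    future candidate selection toward historically reliable writers."""
    def __init__(self):
        self.scores = Counter()

    def record_selection(self, writer_id):
        self.scores[writer_id] += 1

    def best_writer(self):
        return self.scores.most_common(1)[0][0]

# Hypothetical selections across three review rounds:
ev = WriterEvaluator()
for chosen in ["writer_a", "writer_b", "writer_a"]:
    ev.record_selection(chosen)
print(ev.best_writer())  # writer_a
```

A tally like this could, for example, serve as a tie-breaker when two writers' integrated texts agree equally well with their peers.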
- the user client 3 includes a text receiving unit 31 and a display unit 32.
- the text receiving unit 31 receives the text data corrected by the reviewer transmitted from the data communication unit 16 of the understanding support server 1 and inputs the received text data to the display unit 32.
- the display unit 32 presents the received corrected text data to the user by an output device (not shown in FIG. 2) such as a display device.
- the user client 3 may be a terminal device other than a PC, for example, a tablet-type terminal, a smartphone, or the like.
- In this case, a display device built into the terminal device serves as the output device.
- In the following description of the operation, FIGS. 1 and 2 are referred to as appropriate.
- In the present embodiment, the understanding support method is implemented by operating the understanding support system 10. Therefore, the description of the understanding support method in the present embodiment is replaced by the following description of the operation of the understanding support system 10.
- FIG. 3 is a flowchart showing the operation of the writer client constituting the understanding support system in the embodiment of the present invention.
- First, the voice distribution server 5 distributes the acquired voice data to the writer clients 2.
- In each writer client 2, the voice receiving unit 21 receives the voice data (step A1) and causes the voice playback device 6 to play it back (step A2). The writer can thereby hear the voice uttered by the speaker.
- After step A2, the writer starts the summary writing while listening to the speaker's voice, and inputs text into the writer client 2 using an input device such as a keyboard. The content of the summary writing is input as a character string.
- the text input unit 23 receives the input character string, and inputs the input character string to the text transmission unit 22 as text data (step A3).
- the text transmission unit 22 transmits the text data to the understanding support server 1 (step A4).
- FIG. 4 is a flowchart showing the operation of the understanding support server in the embodiment of the present invention.
- the text receiving unit 11 receives text data transmitted from each writer client 2, and then outputs the received text data to the input control unit 12. (Step B1).
- The input control unit 12 integrates the text data for each writer belonging to the group to which the current time slot is assigned (step B2). Specifically, the input control unit 12 accumulates the text data for each writer until the writer client 2 notifies it of the completion of transmission during the allocated time slot, and then integrates the accumulated text data.
- the input control unit 12 transmits the integrated text data (hereinafter referred to as “integrated text data”) to the duplicate allocation control unit 13 together with the allocation number.
- The allocation number represents the order of the integrated text data within the entire system, and the same number is assigned to integrated text data obtained by summary writing in the same time slot (that is, obtained from the same group).
- For example, the input control unit 12 integrates the accumulated text into "Today is good weather" and assigns to the integrated text data the allocation number assigned to the group to which the writer belongs.
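The integration in step B2 can be sketched as a simple concatenation of the fragments accumulated for one writer during the time slot; the fragment boundaries shown are hypothetical.

```python
def integrate_writer_text(chunks):
    """Join the text fragments accumulated for one writer during the
    assigned time slot into a single piece of integrated text data."""
    return "".join(chunks)

# Hypothetical fragments received from one writer client during the slot:
fragments = ["Today ", "is ", "good ", "weather"]
integrated = integrate_writer_text(fragments)
print(integrated)  # Today is good weather
```

The resulting integrated text data, together with the group's allocation number, is what the input control unit passes on to the duplication allocation control unit.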
- Next, the duplication allocation control unit 13 accumulates, for each group, the integrated text data transmitted from the input control unit 12. For the group to which the current time slot is assigned, the duplication allocation control unit 13 then determines whether the integrated text data of all the writers of the group are ready within a certain time after the first integrated text data is transmitted (step B3).
- If, as a result of step B3, the integrated text data of all the writers are ready, the duplication allocation control unit 13 outputs the accumulated integrated text data to the text evaluation unit 14. Thereafter, step B5 is executed.
- If they are not all ready when the certain time has elapsed, the duplication allocation control unit 13 outputs only the integrated text data accumulated so far to the text evaluation unit 14.
- The duplication allocation control unit 13 then discards any integrated text data that arrives after the certain time has elapsed and that bears the same allocation number as text data already output to the text evaluation unit 14 (step B4).
- Next, the text evaluation unit 14 compares the integrated text data of the writers constituting the same group with each other and, based on the comparison result, selects the text data to be reviewed as candidate data for each group (step B5).
- In step B5, the text evaluation unit 14 performs part-of-speech decomposition on each piece of integrated text data and extracts only the character strings corresponding to specific parts of speech. Subsequently, for each piece of integrated text data, the text evaluation unit 14 calculates the number of extracted character strings that match character strings extracted from the other text data, and selects the candidate data using the calculated numbers. A specific example of step B5 will be described later.
- the duplication allocation control unit 13 writes the pre-correction text data of the group (the group to which the previous time zone has been allocated) that has been subjected to the summary writing before the group from which the candidate data was selected in Step B5. It is determined whether it is registered in the person content holding list (step B6).
- In the writer content holding list, the pre-correction text data and the corresponding allocation number are registered in association with each other.
- The pre-correction text data is the candidate data of the group to which the previous time zone was assigned, transmitted from the reviewer client 4; that is, it is the candidate data, before review, that was selected by the reviewer.
- The pre-correction text data is arranged in order of allocation number. A specific example of the writer content holding list will be described later with reference to FIG. 9.
- If the pre-correction text data is not registered, the duplicate assignment control unit 13 determines whether a certain period has elapsed (step B7) and waits in a standby state until it has. When the certain period has elapsed, that is, when no registration occurred during standby, the duplicate assignment control unit 13 transmits the candidate data to the reviewer client 4 via the text transmission unit 17 (step B9).
- If the pre-correction text data is registered, the duplicate assignment control unit 13 outputs the pre-correction text data and the candidate data selected by the text evaluation unit 14 to the duplicate location deletion unit 15.
- the duplicate location deletion unit 15 compares the pre-correction text data and the candidate data that have been output to it, and deletes the duplicated portion from the candidate data (step B8).
- Specifically, the duplicate location deletion unit 15 compares the character string located at the end of the pre-correction text data with the character string located at the beginning of the candidate data obtained from the group that performed summary writing in the current time zone, and deletes the matching character string from the candidate data.
- A specific example of step B8 will be described later.
- After step B8, the duplicate assignment control unit 13 executes step B9.
- In this case, the candidate data transmitted to the reviewer client 4 is the candidate data from which the duplicated portions have been deleted.
- Thereafter, the duplicate assignment control unit 13 deletes the pre-correction text data used in step B8 from the writer content holding list, because this pre-correction text data is not used later.
- In step B10, the data communication unit 16 receives, from the reviewer client 4, the selected candidate data after review by the reviewer and the corresponding candidate data before review (the pre-correction text data) (step B10).
- the data communication unit 16 transmits the received post-review candidate data and the corresponding allocation number to the user client 3 (step B11). Thereby, in the user client 3, the corrected text data is displayed at the position corresponding to the allocation number on the screen and presented to the user.
- After step B11, the data communication unit 16 inputs the received pre-correction text data, together with its allocation number, to the duplicate assignment control unit 13. Thereby, the duplicate assignment control unit 13 registers the pre-correction text data in the writer content holding list.
- The data communication unit 16 can also identify the writer who created the candidate data selected by the reviewer at the reviewer client 4, and can evaluate each writer based on this identification. For example, every time candidate data is selected, the data communication unit 16 adds to the score of the writer who created it as an evaluation.
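The writer-evaluation mechanism described above can be sketched as a simple score table. The function name `record_selection` and the use of a dictionary are illustrative assumptions; the embodiment only specifies that the selected writer's score is incremented.

```python
from collections import defaultdict

# Per-writer evaluation scores; a writer gains a point each time the
# reviewer selects candidate data that the writer created.
scores = defaultdict(int)

def record_selection(selected_writer):
    """Add +1 to the score of the writer whose candidate was selected."""
    scores[selected_writer] += 1
    return scores[selected_writer]
```

These scores are later used to break ties between integrated text data with the same number of matching character strings.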
- FIG. 5 is a flowchart showing the operation of the reviewer client constituting the understanding support system according to the embodiment of the present invention.
- the text receiving unit 41 receives candidate data transmitted from the understanding support server 1 (step C1) and inputs it to the display unit 42.
- the display unit 42 presents the received candidate data to the reviewer using an output device such as a display device (step C2).
- the reviewer confirms the plurality of presented candidate data.
- the review proofreading unit 44 accepts input such as selection and correction by the reviewer, and inputs the post-review candidate data, that is, candidate data that has undergone selection, correction, or both, to the text transmission unit 43 (step C3).
- the text transmission unit 43 transmits the corrected text data and the corresponding text data before correction to the understanding support server 1 (step C4).
- The pre-correction text data transmitted at this time is registered in the writer content holding list together with the allocation number by the duplicate assignment control unit 13, as described in step B11 above.
- Next, step B5 shown in FIG. 4 will be described in detail with reference to FIGS. 6 to 8.
- FIG. 6 is a diagram for explaining the part-of-speech decomposition processing in the text evaluation unit shown in FIG. 2.
- FIG. 7 is a diagram for explaining the matching-character-string calculation processing in the text evaluation unit shown in FIG. 2.
- FIG. 8 is a diagram for explaining the rank rearrangement processing in the text evaluation unit shown in FIG. 2.
- the text evaluation unit 14 first decomposes the integrated text data of writers A, B, and C constituting the same group into parts of speech.
- Part-of-speech decomposition can be performed by using, for example, an existing application program that decomposes each part of speech when a character string is input.
- the method of part-of-speech decomposition is not particularly limited as long as the same method is used for each integrated text data.
- the text evaluation unit 14 extracts only the character strings of nouns, verbs, adjectives, and adverbs from each integrated text data that is separated for each part of speech after decomposition.
- Suppose, as shown in FIG. 6, that the integrated text data of writers A, B, and C are decomposed into parts of speech as follows.
- Writer A's integrated text data is decomposed as "Today, we will begin the social studies class. Textbook".
- Writer B's integrated text data is decomposed as "Today, we will do the social studies class".
- Writer C's integrated text data is decomposed as "Today, continuing from the previous session, we will begin the social studies class. Textbook".
- the text evaluation unit 14 compares, one-to-one, the pieces of integrated text data from which the character strings of the four parts of speech have been extracted and, for each piece of integrated text data (for each writer), calculates the number of character strings that match those of the other integrated text data. That is, the text evaluation unit 14 examines all combinations and, for each piece of integrated text data, sequentially adds up the number of matches obtained from each combination.
- the text evaluation unit 14 ranks the integrated text data of each writer in descending order of the calculated number of matches and rearranges them accordingly.
- Note that the rearrangement is performed not on the part-of-speech-decomposed form but on the character strings as received. That is, the data are rearranged so that the integrated text data with the largest number of matches comes first and the integrated text data with the smallest number of matches comes last.
- In addition, as shown in the lower part of FIG. 8, the text evaluation unit 14 rearranges integrated text data with the same number of matches in descending order of the writers' evaluation scores, which are sent from the data communication unit 16.
- The evaluation score is obtained by adding +1 to the score of the writer who created selected candidate data and accumulating the result. That is, when candidate data created by writer A is selected by the reviewer, the data communication unit 16 adds +1 to the evaluation score of writer A.
- In this example, the evaluation scores of writers A, B, and C are A: 0 points, B: 2 points, and C: 10 points, respectively, and C is the most highly evaluated by the reviewer. Accordingly, as shown in the lower part of FIG. 8, the integrated text data are finally rearranged so that the integrated text data of writer C comes first.
- Thereafter, the text evaluation unit 14 selects a number of pieces of integrated text data as candidate data, in order from the top, and returns the selected candidate data to the duplicate assignment control unit 13. The number of candidate data to be selected may be determined according to the number of writers constituting the group, or may be a fixed value. The text evaluation unit 14 discards any integrated text data that was not selected, since it is not used later.
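The whole of step B5 can be sketched as follows, under simplifying assumptions: whitespace tokenization stands in for the part-of-speech decomposition (a real implementation would use a morphological analyzer and keep only nouns, verbs, adjectives, and adverbs), and all function and variable names are illustrative, not taken from the embodiment.

```python
def match_count(tokens, others):
    """Count tokens that also appear in at least one other writer's text."""
    return sum(1 for t in tokens if any(t in o for o in others))

def select_candidates(texts, eval_scores, n):
    """texts: writer -> integrated text; returns the top-n writers' texts."""
    tokens = {w: t.split() for w, t in texts.items()}   # stand-in for POS step
    matches = {
        w: match_count(tok, [tokens[o] for o in tokens if o != w])
        for w, tok in tokens.items()
    }
    # Rank by match count; break ties with the reviewer-derived score.
    ranked = sorted(texts, key=lambda w: (matches[w], eval_scores[w]),
                    reverse=True)
    return [texts[w] for w in ranked[:n]]
```

The tuple key mirrors the two-stage ordering described above: primary order by number of matches, secondary order by the writers' evaluation scores.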
- FIG. 9 is a diagram showing an example of the writer content holding list used in the present embodiment.
- FIG. 10 is a diagram for explaining the duplicate location deletion processing by the duplicate location deletion unit shown in FIG. 2.
- In step B6, the duplicate assignment control unit 13 determines whether the pre-correction text data of the group that performed summary writing before the group whose candidate data was selected in step B5 is registered in the writer content holding list shown in FIG. 9. As shown in FIG. 9, the pre-correction text data selected by the reviewer (candidate data before correction) and its allocation number are registered in the writer content holding list.
- For example, if the candidate data selected in step B5 has allocation number 4, the duplicate assignment control unit 13 searches the writer content holding list for 3, the previous allocation number. If allocation number 3 and its pre-correction text data are registered, the duplicate assignment control unit 13 sends the pre-correction text data of allocation number 3 and the candidate data of allocation number 4 to the duplicate location deletion unit 15.
- In step B8, the duplicate location deletion unit 15 compares the pre-correction text data with the candidate data. Specifically, the duplicate location deletion unit 15 compares a fixed-length character string at the end of the pre-correction text data with a fixed-length character string at the beginning of the candidate data, and deletes the matching character string from the candidate data.
- In the example of FIG. 10, the duplicate location deletion unit 15 compares the character string at the end of the pre-correction text data with allocation number 3 and the character string at the beginning of the candidate data with allocation number 4. Then, the duplicate location deletion unit 15 deletes the duplicated character string "Today" from the candidate data.
- Thereafter, the duplicate location deletion unit 15 returns the candidate data from which the duplicated portion has been deleted to the duplicate assignment control unit 13 together with the allocation number.
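The deletion of step B8 can be sketched as follows. The search-window length (`window`) is an assumed stand-in for the fixed-length character strings the unit compares, since the embodiment does not specify an exact length; the function name is likewise illustrative.

```python
def delete_overlap(prev_text, candidate, window=10):
    """Strip from `candidate` the longest prefix (up to `window` chars)
    that duplicates a suffix of the previous group's pre-correction text."""
    limit = min(window, len(prev_text), len(candidate))
    for n in range(limit, 0, -1):          # try the longest overlap first
        if prev_text[-n:] == candidate[:n]:
            return candidate[n:]           # drop the duplicated prefix
    return candidate
```

For example, with pre-correction text ending in "Today" and candidate data beginning with "Today", the shared string is removed from the candidate, as in the FIG. 10 example.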
- In the example described above, the time zones overlap and the process of deleting the duplicated portion of the text data is performed.
- However, the present embodiment is not limited to this example.
- For example, when the summary writers are experienced experts, the time zones need not overlap, and the process of deleting the duplicated portion may be omitted.
- Also, when the summary writers are experts, a configuration may be adopted in which individual writers take turns performing the summary writing and the reviewer proofreads each writer's output one by one.
- the program in the present embodiment may be a program that causes a computer to execute steps B1 to B11 shown in FIG.
- By installing this program in a computer and executing it, the understanding support server 1 in the present embodiment can be realized.
- In this case, the CPU (Central Processing Unit) of the computer functions as the text receiving unit 11, the input control unit 12, the duplicate assignment control unit 13, the text evaluation unit 14, the duplicate location deletion unit 15, the data communication unit 16, and the text transmission unit 17, and performs the processing.
- FIG. 11 is a block diagram illustrating an example of a computer that realizes the understanding support server according to the embodiment of the present invention.
- the computer 110 includes a CPU 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader / writer 116, and a communication interface 117. These units are connected to each other via a bus 121 so that data communication is possible.
- the CPU 111 performs various operations by loading the program (code) of the present embodiment stored in the storage device 113 into the main memory 112 and executing it in a predetermined order.
- the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
- the program in the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
- Specific examples of the storage device 113 include, in addition to a hard disk, semiconductor storage devices such as a flash memory.
- the input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and a mouse.
- the display controller 115 is connected to the display device 119 and controls display on the display device 119.
- the data reader / writer 116 mediates data transmission between the CPU 111 and the recording medium 120, and reads a program from the recording medium 120 and writes a processing result in the computer 110 to the recording medium 120.
- the communication interface 117 mediates data transmission between the CPU 111 and another computer.
- Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), magnetic storage media such as a flexible disk, and optical storage media such as a CD-ROM (Compact Disk Read Only Memory).
- An understanding support system for supporting the understanding of a hearing-impaired person who receives a service through summary writing of a speaker's utterances, the system comprising: a server apparatus; a first client terminal used by a writer who performs the summary writing; a second client terminal used by a reviewer who reviews the summary writing made by the writer; and a third client terminal used by the hearing-impaired person,
- wherein the server apparatus includes: a text receiving unit that receives, from the first client terminal, text data obtained by the writer's summary writing of an utterance;
- a text transmission unit that transmits the received text data to the second client terminal; and
- a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to the third client terminal.
- An understanding support system characterized by comprising the above.
- The server apparatus further includes: a duplicate assignment control unit that, when summary writing is performed by a different writer for each time zone, acquires the pre-review text data that was the subject of review for the writer who performed summary writing in the previous time zone; and a duplicate location deletion unit that compares the acquired pre-review text data with the text data obtained from the writer who performed summary writing in the current time zone and deletes the duplicated portion from the latter,
- wherein the text transmission unit transmits, to the second client terminal, the text data obtained from the writer who performed summary writing in the current time zone, from which the duplicated portion has been deleted. The understanding support system according to appendix 1.
- The server apparatus further includes: an input control unit that, when summary writing is performed simultaneously on the same utterance by a group composed of two or more writers for each time zone, integrates the received text data for each writer constituting the same group; and a text evaluation unit that compares the integrated text data of the writers constituting the same group with each other and, based on the comparison result, selects the text data to be reviewed as candidate data for each group,
- wherein the duplicate assignment control unit acquires the pre-review text data of the candidate data that was the subject of review for the group that performed summary writing in the previous time zone,
- the duplicate location deletion unit compares the acquired pre-review text data with the candidate data of the group that performed summary writing in the current time zone and deletes the duplicated portion from the latter, and
- the text transmission unit transmits, to the second client terminal, the candidate data of the group that performed summary writing in the current time zone, from which the duplicated portion has been deleted.
- The input control unit accumulates, for each writer, the text data received during the time zone assigned to the writer's group and, when notified of the completion of transmission of the text data from the first client terminal, integrates the accumulated text data. The understanding support system according to appendix 3.
- The text evaluation unit acquires the integrated text data of each writer constituting the same group; performs part-of-speech decomposition on each piece of acquired integrated text data and extracts only the character strings corresponding to specific parts of speech; calculates, among the character strings extracted from each piece of text data, the number of character strings that match those extracted from the other text data; and selects the candidate data using the number calculated for each piece of integrated text data.
- the understanding support system according to appendix 3 or 4.
- The duplicate assignment control unit holds a list of the acquired pre-review text data; when the pre-review text data is registered in the list, passes it to the duplicate location deletion unit; and when the pre-review text data is not registered in the list, causes the text transmission unit to transmit the received text data to the second client terminal.
- the understanding support system according to any one of appendices 2 to 5.
- The duplicate location deletion unit compares the character string located at the end of the acquired pre-review text data with the character string located at the beginning of the text data obtained from the writer who performed summary writing in the current time zone. The understanding support system according to any one of appendices 2 to 6.
- The server apparatus further includes: an input control unit that, when summary writing is performed simultaneously on the same utterance by a plurality of the writers, integrates the received text data for each of the plurality of writers; and a text evaluation unit that compares the integrated text data of each of the plurality of writers with each other and, based on the comparison result, selects the text data to be reviewed as candidate data. The understanding support system described in the appendix above.
- An understanding support server for supporting the understanding of a hearing-impaired person who receives a service through summary writing of a speaker's utterances, comprising: a text receiving unit that receives, from a first client terminal used by a writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance; a text transmission unit that transmits the received text data to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to a third client terminal used by the hearing-impaired person.
- An understanding support server characterized by comprising:
- (Appendix 10) The understanding support server further includes: a duplicate assignment control unit that, when summary writing is performed by a different writer for each time zone, acquires the pre-review text data that was the subject of review for the writer who performed summary writing in the previous time zone; and a duplicate location deletion unit that compares the acquired pre-review text data with the text data obtained from the writer who performed summary writing in the current time zone and deletes the duplicated portion from the latter,
- wherein the text transmission unit transmits, to the second client terminal, the text data obtained from the writer who performed summary writing in the current time zone, from which the duplicated portion has been deleted.
- wherein the duplicate assignment control unit acquires the pre-review text data of the candidate data that was the subject of review for the group that performed summary writing in the previous time zone,
- the duplicate location deletion unit compares the acquired pre-review text data with the candidate data of the group that performed summary writing in the current time zone and deletes the duplicated portion from the latter, and
- the text transmission unit transmits, to the second client terminal, the candidate data of the group that performed summary writing in the current time zone, from which the duplicated portion has been deleted.
- The input control unit accumulates, for each writer, the text data received during the time zone assigned to the writer's group and, when notified of the completion of transmission of the text data from the first client terminal, integrates the accumulated text data. The understanding support server according to appendix 11.
- The text evaluation unit acquires the integrated text data of each writer constituting the same group; performs part-of-speech decomposition on each piece of acquired integrated text data and extracts only the character strings corresponding to specific parts of speech; calculates, among the character strings extracted from each piece of text data, the number of character strings that match those extracted from the other text data; and selects the candidate data using the number calculated for each piece of integrated text data.
- the understanding support server according to attachment 11 or 12.
- The duplicate assignment control unit holds a list of the acquired pre-review text data; when the pre-review text data is registered in the list, passes it to the duplicate location deletion unit; and when the pre-review text data is not registered in the list, causes the text transmission unit to transmit the received text data to the second client terminal.
- the understanding support server according to any one of appendices 10 to 13.
- The duplicate location deletion unit compares the character string located at the end of the acquired pre-review text data with the character string located at the beginning of the text data obtained from the writer who performed summary writing in the current time zone.
- the understanding support server according to any one of appendices 10 to 14.
- The understanding support server further includes: an input control unit that, when summary writing is performed simultaneously on the same utterance by a plurality of the writers, integrates the received text data for each of the plurality of writers; and a text evaluation unit that further compares the integrated text data of each of the plurality of writers and, based on the comparison result, selects the text data to be reviewed as candidate data. The understanding support server described in the appendix above.
- (Appendix 17) An understanding support method for supporting the understanding of a hearing-impaired person who receives a service through summary writing of a speaker's utterances, comprising: (a) receiving, from a first client terminal used by the writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance; (b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and (c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
- An understanding support method characterized by comprising:
- In step (f), for each writer, the text data received during the time zone assigned to the writer's group is accumulated and, when the completion of transmission of the text data is notified from the first client terminal, the accumulated text data is integrated. The understanding support method according to appendix 19.
- In step (g), the integrated text data of each writer constituting the same group is acquired; part-of-speech decomposition is performed on each piece of acquired integrated text data and only the character strings corresponding to specific parts of speech are extracted; among the character strings extracted from each piece of text data, the number of character strings that match those extracted from the other text data is calculated; and the candidate data is selected using the number calculated for each piece of integrated text data.
- In step (d), a list in which the acquired pre-review text data is registered is used; when the pre-review text data is registered in the list, step (e) is executed using that text data; and when the pre-review text data is not registered in the list, the received text data is transmitted to the second client terminal in step (b).
- the understanding support method according to any one of appendices 18 to 21.
- (Appendix 25) A computer-readable recording medium recording a program for causing a computer to support the understanding of a hearing-impaired person who receives a service through summary writing of a speaker's utterances, the program including instructions for causing the computer to execute:
- (a) receiving, from a first client terminal used by the writer who performs the summary writing, text data obtained by the writer's summary writing of an utterance;
- (b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and
- (c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
- A computer-readable recording medium characterized by recording the program containing these instructions.
- The program further includes instructions for causing the computer to execute: (d) when summary writing is performed by a different writer for each time zone, acquiring the pre-review text data that was the subject of review for the writer who performed summary writing in the previous time zone; and (e) comparing the pre-review text data acquired in step (d) with the text data obtained from the writer who performed summary writing in the current time zone and deleting the duplicated portion from the latter, wherein in step (b), the text data obtained from the writer who performed summary writing in the current time zone, from which the duplicated portion has been deleted, is transmitted to the second client terminal.
- The computer-readable recording medium according to appendix 25.
- The program further includes instructions for causing the computer to execute: (f) when summary writing is performed simultaneously on the same utterance by a group composed of two or more writers for each time zone, integrating the received text data for each writer constituting the same group; and (g) comparing the integrated text data of the writers constituting the same group with each other and, based on the comparison result, selecting the text data to be reviewed as candidate data for each group, wherein in step (d), the pre-review text data of the candidate data that was the subject of review for the group that performed summary writing in the previous time zone is acquired; in step (e), the pre-review text data acquired in step (d) is compared with the candidate data of the group that performed summary writing in the current time zone and the duplicated portion is deleted from the latter; and in step (b), the candidate data of the group that performed summary writing in the current time zone, from which the duplicated portion has been deleted, is transmitted to the second client terminal.
- In step (f), for each writer, the text data received during the time zone assigned to the writer's group is accumulated and, when the completion of transmission of the text data is notified from the first client terminal, the accumulated text data is integrated. The computer-readable recording medium according to appendix 27.
- In step (g), the integrated text data of each writer constituting the same group is acquired; part-of-speech decomposition is performed on each piece of acquired integrated text data and only the character strings corresponding to specific parts of speech are extracted; among the character strings extracted from each piece of text data, the number of character strings that match those extracted from the other text data is calculated; and the candidate data is selected using the number calculated for each piece of integrated text data.
- the computer-readable recording medium according to appendix 27 or 28.
- In step (d), a list in which the acquired pre-review text data is registered is used; when the pre-review text data is registered in the list, step (e) is executed using that text data; and when the pre-review text data is not registered in the list, the received text data is transmitted to the second client terminal in step (b).
- the computer-readable recording medium according to any one of appendices 26 to 29.
- The program further includes instructions for causing the computer to execute: (h) when summary writing is performed simultaneously on the same utterance by a plurality of the writers, integrating the received text data for each of the plurality of writers; and (i) comparing the integrated text data of each of the plurality of writers and, based on the comparison result, selecting the text data to be reviewed as candidate data.
- The computer-readable recording medium according to appendix 25.
- According to the present invention, accurate information can be provided to a hearing-impaired person while reducing the burden on the persons who perform summary writing.
- The present invention is useful in fields that require assistance for hearing-impaired persons.
Abstract
Description
The system comprises: a server apparatus; a first client terminal used by a writer who performs summary writing; a second client terminal used by a reviewer who reviews the summary writing made by the writer; and a third client terminal used by the hearing-impaired person,
wherein the server apparatus comprises:
a text receiving unit that receives, from the first client terminal, text data obtained by the writer's summary writing of an utterance;
a text transmission unit that transmits the received text data to the second client terminal; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to the third client terminal.
The system is characterized by comprising these units.
The server comprises: a text receiving unit that receives, from a first client terminal used by a writer who performs summary writing, text data obtained by the writer's summary writing of an utterance;
a text transmission unit that transmits the received text data to the second client terminal used by a reviewer who reviews the summary writing made by the writer; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to a third client terminal used by the hearing-impaired person.
The server is characterized by comprising these units.
(a) receiving, from a first client terminal used by a writer who performs summary writing, text data obtained by the writer's summary writing of an utterance;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
The method is characterized by comprising these steps.
The recording medium records a program including instructions for causing the computer to execute:
(a) receiving, from a first client terminal used by a writer who performs summary writing, text data obtained by the writer's summary writing of an utterance;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary writing made by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
The recording medium is characterized by recording this program.
Hereinafter, a comprehension assistance system, comprehension assistance server, comprehension assistance method, and program according to an embodiment of the present invention are described with reference to Figs. 1 to 11. In the drawings, identical or equivalent parts are denoted by the same reference signs.
First, the general configurations of the comprehension assistance system and comprehension assistance server of the present invention are described using Fig. 1. Fig. 1 schematically shows the overall configuration of the comprehension assistance system according to the embodiment of the present invention.
As shown in Fig. 2, the writer client 2 includes an audio reception unit 21, a text transmission unit 22, and a text input unit 23. The audio reception unit 21 receives the audio data delivered from the audio delivery server 5.
In the present embodiment, the comprehension assistance server 1 integrates the text for each writer on the basis of the summary-transcription results and evaluates each integrated text. To resolve string duplication caused by overlapping time slots, the comprehension assistance server also identifies the duplicated strings and deletes them. As a result, the text data is put into a state that is easy for the reviewer to review. The text data thus processed for easy proofreading is then transmitted to the reviewer client 4.
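The duplicate-removal idea described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the longest-suffix/prefix matching strategy and the function name are assumptions, since the embodiment only specifies that strings duplicated across overlapping time slots are identified and deleted.

```python
def remove_overlap(previous_text: str, current_text: str) -> str:
    """Delete from current_text the longest prefix that duplicates a
    suffix of previous_text (duplication caused by overlapping slots)."""
    max_len = min(len(previous_text), len(current_text))
    for length in range(max_len, 0, -1):
        if previous_text.endswith(current_text[:length]):
            return current_text[length:]
    return current_text

# The reviewer then receives text with the duplicated span removed.
prev = "The lecture will start at ten"
curr = "start at ten in room B"
print(remove_overlap(prev, curr))  # " in room B"
```

If the two texts share nothing, the current text passes through unchanged, which matches the behavior needed when a slot boundary produces no duplication.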
As shown in Fig. 2, the reviewer client 4 includes a text reception unit 41, a display unit 42, a text transmission unit 43, and a review/proofreading unit 44. The text reception unit 41 receives the text data transmitted from the comprehension assistance server 1, specifically the candidate data from which duplicated portions have been deleted, and passes it to the display unit 42.
As also shown in Fig. 2, the user client 3 includes a text reception unit 31 and a display unit 32. The text reception unit 31 receives the reviewer-corrected text data transmitted from the data communication unit 16 of the comprehension assistance server 1 and passes it to the display unit 32.
Next, the operation of the comprehension assistance system 10 and the comprehension assistance server 1 according to the embodiment of the present invention is described using Figs. 3 to 6, with reference to Figs. 1 and 2 as appropriate. In the present embodiment, the comprehension assistance method is carried out by operating the comprehension assistance system 10; the description of the method is therefore subsumed in the following description of the system's operation.
First, the operation of the writer client 2 is described using Fig. 3. Fig. 3 is a flowchart showing the operation of the writer client constituting the comprehension assistance system according to the embodiment of the present invention.
Next, the operation of the comprehension assistance server 1 is described using Fig. 4. Fig. 4 is a flowchart showing the operation of the comprehension assistance server according to the embodiment of the present invention.
Next, the operation of the reviewer client 4 is described using Fig. 5. Fig. 5 is a flowchart showing the operation of the reviewer client constituting the comprehension assistance system according to the embodiment of the present invention.
Next, step B5 shown in Fig. 4 is described in detail using Figs. 6 to 8. Fig. 6 illustrates the part-of-speech decomposition performed by the text evaluation unit shown in Fig. 2. Fig. 7 illustrates how the text evaluation unit counts matching strings. Fig. 8 illustrates the rank-rearrangement processing performed by the text evaluation unit.
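The evaluation in step B5 might be organized along the following lines, under stated assumptions: the patent's part-of-speech decomposition would use a morphological analyzer for Japanese (e.g. MeCab) and keep only specific parts of speech, which is approximated here by a crude length-based word filter; the scoring by matched-string counts follows the description.

```python
def extract_content_words(text: str) -> set[str]:
    # Stand-in for part-of-speech decomposition: keep words longer than
    # three characters as "content words". A real system would run a
    # morphological analyzer and keep only specific parts of speech.
    return {w for w in text.lower().split() if len(w) > 3}

def select_candidate(texts: list[str]) -> str:
    """Score each writer's integrated text by how many of its content
    words also appear in the other writers' texts; the best-agreeing
    text becomes the candidate data sent for review."""
    words = [extract_content_words(t) for t in texts]
    scores = []
    for i, own in enumerate(words):
        others = set().union(*(w for j, w in enumerate(words) if j != i))
        scores.append(len(own & others))
    return texts[scores.index(max(scores))]
```

Ties are broken here by list order; the rank-rearrangement of Fig. 8 would instead apply the patent's own ordering criteria.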
Next, steps B6 and B8 shown in Fig. 4 are described in detail using Figs. 9 and 10. Fig. 9 shows an example of the writer-content holding list used in the present embodiment. Fig. 10 illustrates the duplicate deletion performed by the duplicate deletion unit shown in Fig. 2.
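The holding-list mechanism of Figs. 9 and 10 could be sketched as follows. The class name and the dictionary layout are illustrative assumptions, and the simple prefix comparison stands in for the duplicate deletion unit; the routing rule, however, mirrors the described behavior (compare against the previous slot's pre-review text if one is held, otherwise send as-is).

```python
class OverlapAllocationController:
    """Sketch of the writer-content holding list: pre-review text from
    the previous time slot is held per session; if an entry exists, the
    current text goes through overlap deletion before being sent on."""

    def __init__(self):
        self.holding_list = {}  # session id -> pre-review text of prior slot

    def route(self, session: str, current_text: str) -> str:
        previous = self.holding_list.get(session)
        self.holding_list[session] = current_text  # hold for the next slot
        if previous is None:
            return current_text  # nothing registered: transmit directly
        # Delegate to the duplicate-deletion step (prefix check here).
        for n in range(min(len(previous), len(current_text)), 0, -1):
            if previous.endswith(current_text[:n]):
                return current_text[n:]
        return current_text
```

The first text of a session is forwarded untouched, exactly the case where no pre-review text is registered in the list.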
As described above, in the present embodiment the text data to be presented to the user is selected from the text data obtained by summary transcription, taking into account the amount of information, whether important strings are missing, the reliability as judged by the reviewer, and so on.
Although the example described above assumes that the summary transcription is performed by a group of writers, the present embodiment is not limited to this. Individual writers may instead take turns performing the summary transcription one at a time, with the reviewer proofreading each writer's output individually.
The program in the present embodiment may be any program that causes a computer to execute steps B1 to B11 shown in Fig. 4. The comprehension assistance server 1 of the present embodiment can be realized by installing this program on a computer and executing it. In this case, the CPU (Central Processing Unit) of the computer functions as, and performs the processing of, the text reception unit 11, the input control unit 12, the overlap allocation control unit 13, the text evaluation unit 14, the duplicate deletion unit 15, the data communication unit 16, and the text transmission unit 17.
A system for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the system comprising:
a server device; a first client terminal used by a writer who performs the summary transcription; a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and a third client terminal used by the hearing-impaired person,
wherein the server device comprises:
a text reception unit that receives, from the first client terminal, text data obtained by the writer's summary transcription of the utterances;
a text transmission unit that transmits the received text data to the second client terminal; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to the third client terminal.
The comprehension assistance system according to Supplementary Note 1, wherein the server device further comprises:
an overlap allocation control unit that, when the summary transcription is performed by a different writer in each time slot, acquires the pre-review text data that was submitted for review by the writer who performed the summary transcription in the previous time slot; and
a duplicate deletion unit that compares the acquired pre-review text data with the text data obtained from the writer who performed the summary transcription in the current time slot and deletes the duplicated portions from the latter,
and wherein the text transmission unit transmits, to the second client terminal, the text data obtained from the writer who performed the summary transcription in the current time slot, with the duplicated portions deleted.
The comprehension assistance system according to Supplementary Note 2, wherein the server device further comprises:
an input control unit that, when the summary transcription of the same utterances is performed simultaneously in each time slot by a group of two or more writers, integrates the received text data for each writer belonging to the same group; and
a text evaluation unit that compares the integrated text data of the writers belonging to the same group with one another and, based on the comparison results, selects, for each group, the text data to be reviewed as candidate data,
wherein the overlap allocation control unit acquires the pre-review text data of the candidate data that was submitted for review by the group that performed the summary transcription in the previous time slot,
the duplicate deletion unit compares the acquired pre-review text data with the candidate data of the group that performed the summary transcription in the current time slot and deletes the duplicated portions from the latter, and
the text transmission unit transmits, to the second client terminal, the candidate data of the group that performed the summary transcription in the current time slot, with the duplicated portions deleted.
The comprehension assistance system according to Supplementary Note 3, wherein the input control unit accumulates, for each writer, the text data received during the time slot assigned to that writer's group, and integrates the accumulated text data when the first client terminal gives notice that the transmission of text data is complete.
The comprehension assistance system according to Supplementary Note 3 or 4, wherein the text evaluation unit:
acquires the integrated text data of each writer belonging to the same group;
performs part-of-speech decomposition on each piece of integrated text data, extracts only the strings corresponding to specific parts of speech, and counts, among the strings extracted from that text data, the number of strings that match strings extracted from the other text data; and
selects the candidate data using the counts calculated for each piece of integrated text data.
The comprehension assistance system according to any one of Supplementary Notes 2 to 5, wherein the overlap allocation control unit:
holds a list in which the acquired pre-review text data is registered;
passes the pre-review text data to the duplicate deletion unit if it is registered in the list; and
causes the text transmission unit to transmit the received text data to the second client terminal if no pre-review text data is registered in the list.
The comprehension assistance system according to any one of Supplementary Notes 2 to 6, wherein the duplicate deletion unit compares the string located at the end of the acquired pre-review text data with the string located at the beginning of the text data obtained from the writer who performed the summary transcription in the current time slot.
The comprehension assistance system according to Supplementary Note 1, wherein the server device further comprises:
an input control unit that, when the summary transcription of the same utterances is performed simultaneously by a plurality of the writers, integrates the received text data for each of the plurality of writers; and
a text evaluation unit that compares the integrated text data of the plurality of writers with one another and, based on the comparison results, selects the text data to be reviewed as candidate data.
A server for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the server comprising:
a text reception unit that receives, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
a text transmission unit that transmits the received text data to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to a third client terminal used by the hearing-impaired person.
The comprehension assistance server according to Supplementary Note 9, further comprising:
an overlap allocation control unit that, when the summary transcription is performed by a different writer in each time slot, acquires the pre-review text data that was submitted for review by the writer who performed the summary transcription in the previous time slot; and
a duplicate deletion unit that compares the acquired pre-review text data with the text data obtained from the writer who performed the summary transcription in the current time slot and deletes the duplicated portions from the latter,
wherein the text transmission unit transmits, to the second client terminal, the text data obtained from the writer who performed the summary transcription in the current time slot, with the duplicated portions deleted.
The comprehension assistance server according to Supplementary Note 10, further comprising:
an input control unit that, when the summary transcription of the same utterances is performed simultaneously in each time slot by a group of two or more writers, integrates the received text data for each writer belonging to the same group; and
a text evaluation unit that compares the integrated text data of the writers belonging to the same group with one another and, based on the comparison results, selects, for each group, the text data to be reviewed as candidate data,
wherein the overlap allocation control unit acquires the pre-review text data of the candidate data that was submitted for review by the group that performed the summary transcription in the previous time slot,
the duplicate deletion unit compares the acquired pre-review text data with the candidate data of the group that performed the summary transcription in the current time slot and deletes the duplicated portions from the latter, and
the text transmission unit transmits, to the second client terminal, the candidate data of the group that performed the summary transcription in the current time slot, with the duplicated portions deleted.
The comprehension assistance server according to Supplementary Note 11, wherein the input control unit accumulates, for each writer, the text data received during the time slot assigned to that writer's group, and integrates the accumulated text data when the first client terminal gives notice that the transmission of text data is complete.
The comprehension assistance server according to Supplementary Note 11 or 12, wherein the text evaluation unit:
acquires the integrated text data of each writer belonging to the same group;
performs part-of-speech decomposition on each piece of integrated text data, extracts only the strings corresponding to specific parts of speech, and counts, among the strings extracted from that text data, the number of strings that match strings extracted from the other text data; and
selects the candidate data using the counts calculated for each piece of integrated text data.
The comprehension assistance server according to any one of Supplementary Notes 10 to 13, wherein the overlap allocation control unit:
holds a list in which the acquired pre-review text data is registered;
passes the pre-review text data to the duplicate deletion unit if it is registered in the list; and
causes the text transmission unit to transmit the received text data to the second client terminal if no pre-review text data is registered in the list.
The comprehension assistance server according to any one of Supplementary Notes 10 to 14, wherein the duplicate deletion unit compares the string located at the end of the acquired pre-review text data with the string located at the beginning of the text data obtained from the writer who performed the summary transcription in the current time slot.
The comprehension assistance server according to Supplementary Note 9, further comprising:
an input control unit that, when the summary transcription of the same utterances is performed simultaneously by a plurality of the writers, integrates the received text data for each of the plurality of writers; and
a text evaluation unit that compares the integrated text data of the plurality of writers with one another and, based on the comparison results, selects the text data to be reviewed as candidate data.
A method for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the method comprising the steps of:
(a) receiving, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
The comprehension assistance method according to Supplementary Note 17, further comprising the steps of:
(d) when the summary transcription is performed by a different writer in each time slot, acquiring the pre-review text data that was submitted for review by the writer who performed the summary transcription in the previous time slot; and
(e) comparing the pre-review text data acquired in step (d) with the text data obtained from the writer who performed the summary transcription in the current time slot, and deleting the duplicated portions from the latter,
wherein, in step (b), the text data obtained from the writer who performed the summary transcription in the current time slot, with the duplicated portions deleted, is transmitted to the second client terminal.
The comprehension assistance method according to Supplementary Note 18, further comprising the steps of:
(f) when the summary transcription of the same utterances is performed simultaneously in each time slot by a group of two or more writers, integrating the received text data for each writer belonging to the same group; and
(g) comparing the integrated text data of the writers belonging to the same group with one another and, based on the comparison results, selecting, for each group, the text data to be reviewed as candidate data,
wherein, in step (d), the pre-review text data of the candidate data that was submitted for review by the group that performed the summary transcription in the previous time slot is acquired,
in step (e), the pre-review text data acquired in step (d) is compared with the candidate data of the group that performed the summary transcription in the current time slot, and the duplicated portions are deleted from the latter, and
in step (b), the candidate data of the group that performed the summary transcription in the current time slot, with the duplicated portions deleted, is transmitted to the second client terminal.
The comprehension assistance method according to Supplementary Note 19, wherein, in step (f), the text data received during the time slot assigned to each writer's group is accumulated for that writer, and the accumulated text data is integrated when the first client terminal gives notice that the transmission of text data is complete.
The comprehension assistance method according to Supplementary Note 19 or 20, wherein step (g) comprises:
acquiring the integrated text data of each writer belonging to the same group;
performing part-of-speech decomposition on each piece of integrated text data, extracting only the strings corresponding to specific parts of speech, and counting, among the strings extracted from that text data, the number of strings that match strings extracted from the other text data; and
selecting the candidate data using the counts calculated for each piece of integrated text data.
The comprehension assistance method according to any one of Supplementary Notes 18 to 21, wherein step (d) uses a list in which the acquired pre-review text data is registered,
step (e) is executed with the pre-review text data if it is registered in the list, and,
if no pre-review text data is registered in the list, the received text data is transmitted to the second client terminal in step (b).
The comprehension assistance method according to any one of Supplementary Notes 18 to 22, wherein, in step (e), the string located at the end of the acquired pre-review text data is compared with the string located at the beginning of the text data obtained from the writer who performed the summary transcription in the current time slot.
The comprehension assistance method according to Supplementary Note 17, further comprising the steps of:
(h) when the summary transcription of the same utterances is performed simultaneously by a plurality of the writers, integrating the received text data for each of the plurality of writers; and
(i) comparing the integrated text data of the plurality of writers with one another and, based on the comparison results, selecting the text data to be reviewed as candidate data.
A computer-readable recording medium recording a program for assisting, by means of a computer, the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the program including instructions that cause the computer to execute the steps of:
(a) receiving, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
The computer-readable recording medium according to Supplementary Note 25, wherein the program further includes instructions that cause the computer to execute the steps of:
(d) when the summary transcription is performed by a different writer in each time slot, acquiring the pre-review text data that was submitted for review by the writer who performed the summary transcription in the previous time slot; and
(e) comparing the pre-review text data acquired in step (d) with the text data obtained from the writer who performed the summary transcription in the current time slot, and deleting the duplicated portions from the latter,
and wherein, in step (b), the text data obtained from the writer who performed the summary transcription in the current time slot, with the duplicated portions deleted, is transmitted to the second client terminal.
The computer-readable recording medium according to Supplementary Note 26, wherein the program further includes instructions that cause the computer to execute the steps of:
(f) when the summary transcription of the same utterances is performed simultaneously in each time slot by a group of two or more writers, integrating the received text data for each writer belonging to the same group; and
(g) comparing the integrated text data of the writers belonging to the same group with one another and, based on the comparison results, selecting, for each group, the text data to be reviewed as candidate data,
and wherein, in step (d), the pre-review text data of the candidate data that was submitted for review by the group that performed the summary transcription in the previous time slot is acquired,
in step (e), the pre-review text data acquired in step (d) is compared with the candidate data of the group that performed the summary transcription in the current time slot, and the duplicated portions are deleted from the latter, and
in step (b), the candidate data of the group that performed the summary transcription in the current time slot, with the duplicated portions deleted, is transmitted to the second client terminal.
The computer-readable recording medium according to Supplementary Note 27, wherein, in step (f), the text data received during the time slot assigned to each writer's group is accumulated for that writer, and the accumulated text data is integrated when the first client terminal gives notice that the transmission of text data is complete.
The computer-readable recording medium according to Supplementary Note 27 or 28, wherein step (g) comprises:
acquiring the integrated text data of each writer belonging to the same group;
performing part-of-speech decomposition on each piece of integrated text data, extracting only the strings corresponding to specific parts of speech, and counting, among the strings extracted from that text data, the number of strings that match strings extracted from the other text data; and
selecting the candidate data using the counts calculated for each piece of integrated text data.
The computer-readable recording medium according to any one of Supplementary Notes 26 to 29, wherein step (d) uses a list in which the acquired pre-review text data is registered,
step (e) is executed with the pre-review text data if it is registered in the list, and,
if no pre-review text data is registered in the list, the received text data is transmitted to the second client terminal in step (b).
The computer-readable recording medium according to any one of Supplementary Notes 26 to 30, wherein, in step (e), the string located at the end of the acquired pre-review text data is compared with the string located at the beginning of the text data obtained from the writer who performed the summary transcription in the current time slot.
The computer-readable recording medium according to Supplementary Note 25, wherein the program further includes instructions that cause the computer to execute the steps of:
(h) when the summary transcription of the same utterances is performed simultaneously by a plurality of the writers, integrating the received text data for each of the plurality of writers; and
(i) comparing the integrated text data of the plurality of writers with one another and, based on the comparison results, selecting the text data to be reviewed as candidate data.
2 writer client
3 user client
4 reviewer client
5 audio delivery server
6 audio playback device
10 comprehension assistance system
11 text reception unit
12 input control unit
13 overlap allocation control unit
14 text evaluation unit
15 duplicate deletion unit
16 data communication unit
17 text transmission unit
21 audio reception unit
22 text transmission unit
23 text input unit
31 text reception unit
32 display unit
41 text reception unit
42 display unit
43 text transmission unit
44 review/proofreading unit
110 computer
111 CPU
112 main memory
113 storage device
114 input interface
115 display controller
116 data reader/writer
117 communication interface
118 input device
119 display device
120 recording medium
121 bus
Claims (10)
- A system for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the system comprising:
a server device; a first client terminal used by a writer who performs the summary transcription; a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and a third client terminal used by the hearing-impaired person,
wherein the server device comprises:
a text reception unit that receives, from the first client terminal, text data obtained by the writer's summary transcription of the utterances;
a text transmission unit that transmits the received text data to the second client terminal; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to the third client terminal.
- The comprehension assistance system according to claim 1, wherein the server device further comprises:
an overlap allocation control unit that, when the summary transcription is performed by a different writer in each time slot, acquires the pre-review text data that was submitted for review by the writer who performed the summary transcription in the previous time slot; and
a duplicate deletion unit that compares the acquired pre-review text data with the text data obtained from the writer who performed the summary transcription in the current time slot and deletes the duplicated portions from the latter,
and wherein the text transmission unit transmits, to the second client terminal, the text data obtained from the writer who performed the summary transcription in the current time slot, with the duplicated portions deleted.
- The comprehension assistance system according to claim 2, wherein the server device further comprises:
an input control unit that, when the summary transcription of the same utterances is performed simultaneously in each time slot by a group of two or more writers, integrates the received text data for each writer belonging to the same group; and
a text evaluation unit that compares the integrated text data of the writers belonging to the same group with one another and, based on the comparison results, selects, for each group, the text data to be reviewed as candidate data,
wherein the overlap allocation control unit acquires the pre-review text data of the candidate data that was submitted for review by the group that performed the summary transcription in the previous time slot,
the duplicate deletion unit compares the acquired pre-review text data with the candidate data of the group that performed the summary transcription in the current time slot and deletes the duplicated portions from the latter, and
the text transmission unit transmits, to the second client terminal, the candidate data of the group that performed the summary transcription in the current time slot, with the duplicated portions deleted.
- The comprehension assistance system according to claim 3, wherein the input control unit accumulates, for each writer, the text data received during the time slot assigned to that writer's group, and integrates the accumulated text data when the first client terminal gives notice that the transmission of text data is complete.
- The comprehension assistance system according to claim 3 or 4, wherein the text evaluation unit:
acquires the integrated text data of each writer belonging to the same group;
performs part-of-speech decomposition on each piece of integrated text data, extracts only the strings corresponding to specific parts of speech, and counts, among the strings extracted from that text data, the number of strings that match strings extracted from the other text data; and
selects the candidate data using the counts calculated for each piece of integrated text data.
- The comprehension assistance system according to any one of claims 2 to 5, wherein the overlap allocation control unit:
holds a list in which the acquired pre-review text data is registered;
passes the pre-review text data to the duplicate deletion unit if it is registered in the list; and
causes the text transmission unit to transmit the received text data to the second client terminal if no pre-review text data is registered in the list.
- The comprehension assistance system according to any one of claims 2 to 6, wherein the duplicate deletion unit compares the string located at the end of the acquired pre-review text data with the string located at the beginning of the text data obtained from the writer who performed the summary transcription in the current time slot.
- A server for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the server comprising:
a text reception unit that receives, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
a text transmission unit that transmits the received text data to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
a data communication unit that receives, from the second client terminal, the text data whose review by the reviewer has been completed, and transmits the reviewed text data to a third client terminal used by the hearing-impaired person.
- A method for assisting the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the method comprising the steps of:
(a) receiving, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
- A computer-readable recording medium recording a program for assisting, by means of a computer, the comprehension of a hearing-impaired person who receives a service based on summary transcription of a speaker's utterances, the program including instructions that cause the computer to execute the steps of:
(a) receiving, from a first client terminal used by a writer who performs the summary transcription, text data obtained by the writer's summary transcription of the utterances;
(b) transmitting the text data received in step (a) to a second client terminal used by a reviewer who reviews the summary transcription produced by the writer; and
(c) receiving, from the second client terminal, the text data whose review by the reviewer has been completed, and transmitting the reviewed text data to a third client terminal used by the hearing-impaired person.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/773,171 US20160012751A1 (en) | 2013-03-07 | 2014-02-10 | Comprehension assistance system, comprehension assistance server, comprehension assistance method, and computer-readable recording medium |
JP2015504216A JP6172769B2 (ja) | 2013-03-07 | 2014-02-10 | 理解支援システム、理解支援サーバ、理解支援方法、及びプログラム |
KR1020157027711A KR20150126027A (ko) | 2013-03-07 | 2014-02-10 | 이해 지원 시스템, 이해 지원 서버, 이해 지원 방법, 및 컴퓨터 판독가능 기록 매체 |
CN201480012828.7A CN105009151A (zh) | 2013-03-07 | 2014-02-10 | 理解辅助系统、理解辅助服务器、理解辅助方法和计算机可读记录介质 |
EP14760360.9A EP2966601A4 (en) | 2013-03-07 | 2014-02-10 | Understanding assistance system, understanding assistance server, understanding assistance method, and computer readable recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013044977 | 2013-03-07 | ||
JP2013-044977 | 2013-03-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014136534A1 true WO2014136534A1 (ja) | 2014-09-12 |
Family
ID=51491059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/053058 WO2014136534A1 (ja) | 2013-03-07 | 2014-02-10 | 理解支援システム、理解支援サーバ、理解支援方法、及びコンピュータ読み取り可能な記録媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20160012751A1 (ja) |
EP (1) | EP2966601A4 (ja) |
JP (1) | JP6172769B2 (ja) |
KR (1) | KR20150126027A (ja) |
CN (1) | CN105009151A (ja) |
WO (1) | WO2014136534A1 (ja) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10878721B2 (en) | 2014-02-28 | 2020-12-29 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10748523B2 (en) * | 2014-02-28 | 2020-08-18 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US20180034961A1 (en) | 2014-02-28 | 2018-02-01 | Ultratec, Inc. | Semiautomated Relay Method and Apparatus |
US10389876B2 (en) | 2014-02-28 | 2019-08-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US20180270350A1 (en) | 2014-02-28 | 2018-09-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
WO2017179276A1 (ja) * | 2016-04-12 | 2017-10-19 | シャープ株式会社 | サーバ、出力方法、プログラム、および、表示システム |
US10320474B2 (en) * | 2016-12-29 | 2019-06-11 | Stmicroelectronics S.R.L. | System, method and article for adaptive framing for TDMA MAC protocols |
US11017778B1 (en) | 2018-12-04 | 2021-05-25 | Sorenson Ip Holdings, Llc | Switching between speech recognition systems |
US10573312B1 (en) | 2018-12-04 | 2020-02-25 | Sorenson Ip Holdings, Llc | Transcription generation from multiple speech recognition systems |
US11170761B2 (en) | 2018-12-04 | 2021-11-09 | Sorenson Ip Holdings, Llc | Training of speech recognition systems |
US10388272B1 (en) | 2018-12-04 | 2019-08-20 | Sorenson Ip Holdings, Llc | Training speech recognition systems using word sequences |
US11539900B2 (en) | 2020-02-21 | 2022-12-27 | Ultratec, Inc. | Caption modification and augmentation systems and methods for use by hearing assisted user |
US11488604B2 (en) | 2020-08-19 | 2022-11-01 | Sorenson Ip Holdings, Llc | Transcription of audio |
JP2022139022A (ja) * | 2021-03-11 | 2022-09-26 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及びプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001134276A (ja) | 1999-11-02 | 2001-05-18 | Nippon Hoso Kyokai <Nhk> | 音声文字化誤り検出装置および記録媒体 |
JP2002268679A (ja) | 2001-03-07 | 2002-09-20 | Nippon Hoso Kyokai <Nhk> | 音声認識結果の誤り検出方法及び装置及び音声認識結果の誤り検出プログラム |
JP2004240920A (ja) | 2003-02-10 | 2004-08-26 | Nippon Television Network Corp | 校正システム |
JP2007256714A (ja) | 2006-03-24 | 2007-10-04 | Internatl Business Mach Corp <Ibm> | 字幕修正装置 |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6222909B1 (en) * | 1997-11-14 | 2001-04-24 | Lucent Technologies Inc. | Audio note taking system and method for communication devices |
US6748361B1 (en) * | 1999-12-14 | 2004-06-08 | International Business Machines Corporation | Personal speech assistant supporting a dialog manager |
US7668718B2 (en) * | 2001-07-17 | 2010-02-23 | Custom Speech Usa, Inc. | Synchronized pattern recognition source data processed by manual or automatic means for creation of shared speaker-dependent speech user profile |
US7219301B2 (en) * | 2002-03-01 | 2007-05-15 | Iparadigms, Llc | Systems and methods for conducting a peer review process and evaluating the originality of documents |
US7680820B2 (en) * | 2002-04-19 | 2010-03-16 | Fuji Xerox Co., Ltd. | Systems and methods for displaying text recommendations during collaborative note taking |
US7016844B2 (en) * | 2002-09-26 | 2006-03-21 | Core Mobility, Inc. | System and method for online transcription services |
US8849648B1 (en) * | 2002-12-24 | 2014-09-30 | At&T Intellectual Property Ii, L.P. | System and method of extracting clauses for spoken language understanding |
ATE417346T1 (de) * | 2003-03-26 | 2008-12-15 | Koninkl Philips Electronics Nv | Spracherkennungs- und korrektursystem, korrekturvorrichtung und verfahren zur erstellung eines lexikons von alternativen |
US7917364B2 (en) * | 2003-09-23 | 2011-03-29 | Hewlett-Packard Development Company, L.P. | System and method using multiple automated speech recognition engines |
US20050210046A1 (en) * | 2004-03-18 | 2005-09-22 | Zenodata Corporation | Context-based conversion of language to data systems and methods |
US8335688B2 (en) * | 2004-08-20 | 2012-12-18 | Multimodal Technologies, Llc | Document transcription system training |
US8412521B2 (en) * | 2004-08-20 | 2013-04-02 | Multimodal Technologies, Llc | Discriminative training of document transcription system |
GB0420464D0 (en) * | 2004-09-14 | 2004-10-20 | Zentian Ltd | A speech recognition circuit and method |
KR20070114606A (ko) * | 2006-05-29 | 2007-12-04 | 삼성전자주식회사 | 통신 시스템과 VoIP기기 및 데이터 통신방법 |
US8306816B2 (en) * | 2007-05-25 | 2012-11-06 | Tigerfish | Rapid transcription by dispersing segments of source material to a plurality of transcribing stations |
US20090068631A1 (en) * | 2007-09-10 | 2009-03-12 | Chris Halliwell | Web based educational system for collaborative learning |
US8843370B2 (en) * | 2007-11-26 | 2014-09-23 | Nuance Communications, Inc. | Joint discriminative training of multiple speech recognizers |
US20090319910A1 (en) * | 2008-06-22 | 2009-12-24 | Microsoft Corporation | Automatic content and author emphasis for shared data |
US8392187B2 (en) * | 2009-01-30 | 2013-03-05 | Texas Instruments Incorporated | Dynamic pruning for automatic speech recognition |
US9871916B2 (en) * | 2009-03-05 | 2018-01-16 | International Business Machines Corporation | System and methods for providing voice transcription |
EP2325838A1 (en) * | 2009-10-27 | 2011-05-25 | verbavoice GmbH | A method and system for transcription of spoken language |
KR20110051385A (ko) * | 2009-11-10 | 2011-05-18 | 삼성전자주식회사 | 통신 단말기 및 그의 통신 방법 |
US8880403B2 (en) * | 2010-09-03 | 2014-11-04 | Canyon Ip Holdings Llc | Methods and systems for obtaining language models for transcribing communications |
US9183843B2 (en) * | 2011-01-07 | 2015-11-10 | Nuance Communications, Inc. | Configurable speech recognition system using multiple recognizers |
US8898065B2 (en) * | 2011-01-07 | 2014-11-25 | Nuance Communications, Inc. | Configurable speech recognition system using multiple recognizers |
US20130018895A1 (en) * | 2011-07-12 | 2013-01-17 | Harless William G | Systems and methods for extracting meaning from speech-to-text data |
US9430468B2 (en) * | 2012-06-28 | 2016-08-30 | Elsevier Bv | Online peer review system and method |
US20140039876A1 (en) * | 2012-07-31 | 2014-02-06 | Craig P. Sayers | Extracting related concepts from a content stream using temporal distribution |
US9966075B2 (en) * | 2012-09-18 | 2018-05-08 | Qualcomm Incorporated | Leveraging head mounted displays to enable person-to-person interactions |
-
2014
- 2014-02-10 US US14/773,171 patent/US20160012751A1/en not_active Abandoned
- 2014-02-10 WO PCT/JP2014/053058 patent/WO2014136534A1/ja active Application Filing
- 2014-02-10 KR KR1020157027711A patent/KR20150126027A/ko not_active Application Discontinuation
- 2014-02-10 CN CN201480012828.7A patent/CN105009151A/zh active Pending
- 2014-02-10 JP JP2015504216A patent/JP6172769B2/ja active Active
- 2014-02-10 EP EP14760360.9A patent/EP2966601A4/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001134276A (ja) | 1999-11-02 | 2001-05-18 | Nippon Hoso Kyokai <Nhk> | 音声文字化誤り検出装置および記録媒体 |
JP2002268679A (ja) | 2001-03-07 | 2002-09-20 | Nippon Hoso Kyokai <Nhk> | 音声認識結果の誤り検出方法及び装置及び音声認識結果の誤り検出プログラム |
JP2004240920A (ja) | 2003-02-10 | 2004-08-26 | Nippon Television Network Corp | 校正システム |
JP2007256714A (ja) | 2006-03-24 | 2007-10-04 | Internatl Business Mach Corp <Ibm> | 字幕修正装置 |
Non-Patent Citations (4)
Title |
---|
KAZUKI HIROSAWA ET AL.: "Enkaku Yoyaku Hikki Nyuryoku Shien Gijutsu no Kairyo ni Tsuite", DAI 75 KAI (HEISEI 25 NEN) ZENKOKU TAIKAI KOEN RONBUNSHU (4), INTERFACE COMPUTER TO NINGEN SHAKAI, 6 March 2013 (2013-03-06), pages 4-407 - 4-408, XP008178028 * |
MINORU MAJIMA: "TBS ni Okeru Real Time Jimaku eno Torikumi ni Tsuite Jimaku Seisaku Hoshiki 'Relay Hoshiki' towa", BROADCAST ENGINEERIG, vol. 61, no. 2, 1 February 2008 (2008-02-01), pages 99 - 105, XP008178022 * |
See also references of EP2966601A4 |
SHIGEKI MIYOSHI ET AL.: "Development of Web Based Real-Time Captioning System for Hearing Impaired Persons", THE HUMAN INTERFACE SYMPOSIUM 2004 RONBUNSHU, HUMAN INTERFACE SOCIETY, 6 October 2004 (2004-10-06), pages 661 - 664, XP008178021 * |
Also Published As
Publication number | Publication date |
---|---|
EP2966601A1 (en) | 2016-01-13 |
EP2966601A4 (en) | 2016-06-29 |
JP6172769B2 (ja) | 2017-08-02 |
CN105009151A (zh) | 2015-10-28 |
JPWO2014136534A1 (ja) | 2017-02-09 |
US20160012751A1 (en) | 2016-01-14 |
KR20150126027A (ko) | 2015-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6172769B2 (ja) | 理解支援システム、理解支援サーバ、理解支援方法、及びプログラム | |
US6424935B1 (en) | Two-way speech recognition and dialect system | |
US20050255431A1 (en) | Interactive language learning system and method | |
KR19990044575A (ko) | 대화형 언어훈련용 장치 | |
JP7107229B2 (ja) | 情報処理装置および情報処理方法、並びにプログラム | |
WO2019019406A1 (zh) | 一种用于更新教学录播数据的装置 | |
Wald | Creating accessible educational multimedia through editing automatic speech recognition captioning in real time | |
Cheng | Unfamiliar accented English negatively affects EFL listening comprehension: It helps to be a more able accent mimic | |
KR101992370B1 (ko) | 말하기 학습방법 및 학습시스템 | |
JP2003228279A (ja) | 音声認識を用いた語学学習装置、語学学習方法及びその格納媒体 | |
KR102396833B1 (ko) | 음성 분석을 통한 한국어 발음 학습 방법 및 시스템 | |
Neumeyer et al. | Webgrader: a multilingual pronunciation practice tool | |
JP7107228B2 (ja) | 情報処理装置および情報処理方法、並びにプログラム | |
JP3936351B2 (ja) | 音声応答サービス装置 | |
JP5791124B2 (ja) | 要約筆記支援システム、要約筆記支援装置、要約筆記支援方法、及びプログラム | |
JP2017021245A (ja) | 語学学習支援装置、語学学習支援方法および語学学習支援プログラム | |
JP2020071312A (ja) | 語学力評価方法及び語学力評価システム | |
KR20200108261A (ko) | 음성 인식 수정 시스템 | |
KR20110064964A (ko) | 지능형 언어 학습 및 발음교정 시스템 | |
KR20010046852A (ko) | 속도변환을 이용한 대화형 언어 교습 시스템 및 그 방법 | |
JP6498346B1 (ja) | 外国語学習支援システムおよび外国語学習支援方法ならびにプログラム | |
KR101958981B1 (ko) | 외국어 학습 방법 및 이를 실행하는 장치 | |
Lamel et al. | Question Answering on Speech Transcriptions: the QAST evaluation in CLEF. | |
JP2022181361A (ja) | 学習支援システム | |
Liyanapathirana et al. | Using the TED talks to evaluate spoken post-editing of machine translation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14760360 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015504216 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14773171 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2014760360 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014760360 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 20157027711 Country of ref document: KR Kind code of ref document: A |