WO2016174945A1 - Dispositif d'affichage de lecture pour contenu numérique pour l'apprentissage - Google Patents

Dispositif d'affichage de lecture pour contenu numérique pour l'apprentissage Download PDF

Info

Publication number
WO2016174945A1
WO2016174945A1 (PCT/JP2016/057911)
Authority
WO
WIPO (PCT)
Prior art keywords
data
screen
output condition
digital content
screen output
Prior art date
Application number
PCT/JP2016/057911
Other languages
English (en)
Japanese (ja)
Inventor
佑介 田代
西澤 達夫
真史 神林
Original Assignee
シナノケンシ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シナノケンシ株式会社
Publication of WO2016174945A1

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B17/00Teaching reading
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition

Definitions

  • The present invention relates to a playback display device for learning digital content, and more specifically to a playback display device that allows learning digital content to be used in a state optimal for each user who has difficulty reading characters.
  • An object of the present invention is to provide a digital content playback display device that can perform a display setting enabling the user's character-reading learning to be carried out efficiently, reflecting both a subjective setting according to the user's preference and an objective setting in which the user's suitability for character-reading learning is evaluated objectively.
  • The device includes a screen, an audio output unit, a storage unit storing at least screen output condition data having color conditions, text data for screen output condition confirmation, and learning digital content, and an output setting unit configured to perform an output setting adapted to the user of the learning digital content when the learning digital content is output to the screen and the audio output unit.
  • The output setting unit executes the steps of: outputting the screen output condition confirmation text data to the screen in a plurality of patterns based on the screen output condition data; having the user select any one of the plurality of patterns of the confirmation text data output to the screen; having the user read the selected confirmation text data aloud, collecting the recorded audio, and storing it in the storage unit as read-aloud recording data linked to the screen output condition data, for at least two of the screen output condition data; calculating the reading time of each recording and storing the result in the storage unit as screen output condition evaluation data linked to the screen output condition data; and outputting the learning digital content to the screen with the screen output condition data linked to the screen output condition evaluation data having the shortest reading time applied.
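As an illustration of the selection step above — a hedged sketch, not the patent's actual implementation — the shortest-reading-time rule can be written in a few lines. The function name, condition labels, and timings are hypothetical:

```python
def select_fastest_condition(evaluations):
    """Pick the screen output condition whose linked read-aloud
    recording had the shortest reading time.

    evaluations: list of (condition_id, reading_time_seconds) pairs,
    i.e. the "screen output condition evaluation data" of the claim.
    """
    return min(evaluations, key=lambda pair: pair[1])[0]

# Hypothetical evaluation data for three color patterns the user read aloud.
evaluations = [
    ("white_bg_black_text", 41.2),
    ("cream_bg_navy_text", 36.8),
    ("black_bg_yellow_text", 39.5),
]
best = select_fastest_condition(evaluations)  # "cream_bg_navy_text"
```

The learning content would then be rendered with the color combination identified by `best`.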
  • Alternatively, the device may include a screen, an audio output unit, a storage unit storing at least screen output condition data having color conditions, text data for screen output condition confirmation, reference data for confirmation, and learning digital content, and an output setting unit configured to perform an output setting adapted to the user when the learning digital content is output to the screen and the audio output unit.
  • In this form, the output setting unit executes the steps of: outputting the screen output condition confirmation text data to the screen in a plurality of patterns based on the screen output condition data; having the user select any one of the plurality of patterns and read the selected confirmation text data aloud; creating read-aloud text data by converting the content of the read-aloud recording data to text; calculating the correct reading rate based on the reference data for confirmation and the read-aloud text data, and storing the result in the storage unit as screen output condition evaluation data linked to the screen output condition data; and outputting the learning digital content to the screen with the screen output condition data linked to the screen output condition evaluation data having the highest correct reading rate applied.
  • In a further form, the device includes a screen, an audio output unit, a storage unit storing at least screen output condition data having color conditions, screen output condition confirmation text data, reference data for confirmation, and learning digital content, and an output setting unit that performs an output setting adapted to the user when the learning digital content is output to the screen and the audio output unit. The output setting unit executes the steps of outputting the confirmation text data to the screen in a plurality of patterns based on the screen output condition data, having the user select any one of the plurality of patterns, and having the user read the selected confirmation text data aloud.
  • In the step of selecting the optimal screen output condition evaluation data, the output setting unit selects the screen output condition evaluation data for which the value obtained by dividing the reading time by the correct reading rate is minimal, or for which the value obtained by dividing the correct reading rate by the reading time is maximal.
  • In this way, a screen display state that is optimal for the user can be provided on the basis of both the user's reading speed for the text data (read-aloud time) and the accuracy of the reading (correct reading rate).
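The combined criterion above can be sketched as follows (a hypothetical illustration with made-up values; minimizing reading time divided by correct reading rate selects the same condition as maximizing its reciprocal):

```python
def select_balanced_condition(evals):
    """evals: list of (condition_id, reading_time_s, correct_rate),
    with 0 < correct_rate <= 1. Minimizing V1 = time / rate is
    equivalent to maximizing V2 = rate / time."""
    return min(evals, key=lambda e: e[1] / e[2])[0]

evals = [
    ("pattern_a", 40.0, 0.95),  # V1 ~ 42.1
    ("pattern_b", 36.0, 0.80),  # V1 = 45.0
    ("pattern_c", 38.0, 0.92),  # V1 ~ 41.3  -> selected
]
chosen = select_balanced_condition(evals)  # "pattern_c"
```

Note that a fast but inaccurate reading ("pattern_b") loses to a slightly slower, more accurate one, which is the balance the selection step aims for.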
  • The screen output condition data preferably includes background color/character color data that defines a combination of the background color of the screen, the character color of the characters displayed on that background, the presence or absence of highlighting, and the highlight color. It also preferably includes ruby assignment presence/absence data concerning the addition of ruby (reading aids) to the characters displayed on the screen.
  • The screen output condition data also preferably includes character format data concerning the format of the characters displayed on the screen, and the character format data more preferably includes at least one of: character size data, font data, character string direction data (vertical or horizontal writing), and screen usage direction data (portrait or landscape use of the screen).
  • The output setting unit may further execute a step of setting a basic playback speed of the learning digital content based on at least one of the reading time and the correct reading rate in the read-aloud recording data, and a step of allowing this basic playback speed to be adjusted within a required range.
  • According to the present invention, a setting based on subjective data reflecting the user's preference and a setting based on objective data enabling the user to learn character reading efficiently can both be applied.
  • FIG. 1 is a configuration diagram illustrating the schematic structure of a digital content playback display device according to a first embodiment. FIG. 2 is an explanatory diagram showing the details of the data group in the first embodiment. FIGS. 3 and 4 are explanatory diagrams showing the setting flow in the digital content playback display device in the first embodiment. FIGS. 5 and 6 are explanatory diagrams showing the setting flow in the second embodiment. FIG. 7 is an explanatory diagram showing the setting flow in the third embodiment.
  • A digital content playback display device 100 includes a screen 10 provided on one surface of a main body 1, a speaker 20 as an audio output unit, a nonvolatile memory 30 as a storage unit, a data group 40 stored in the nonvolatile memory 30, an output setting unit 50 that performs collection of the data group 40, storage into the nonvolatile memory 30, and data processing of the data group 40, and a microphone 60 as an audio input unit.
  • The nonvolatile memory 30 in this embodiment is shown as a so-called built-in memory installed in the main body 1 in advance, but a so-called detachable memory (not shown) connected to an external recording media connection terminal (not shown) may also be employed.
  • A slate personal computer, a so-called tablet terminal, is suitably used as the digital content playback display device 100 having this configuration.
  • The data group 40 in this embodiment has the configuration shown in FIG. 2. That is, it includes screen output condition data 42 having at least color conditions specifying colors when outputting to the screen 10, audio output condition data 43 concerning audio output settings, screen output condition confirmation text data 44 used to let the user confirm the output state of the screen based on the screen output condition data 42, reference data 46 for confirmation used to check the suitability for the user of the screen settings based on the screen output condition data 42, and learning digital content 48.
  • The screen output condition confirmation text data 44 and the learning digital content 48 are separate data, but text data extracted from a specific portion of the learning digital content 48 may also be used as the screen output condition confirmation text data 44.
  • the screen output condition data 42 includes background color character color data 42A that defines a combination of the background color of the background to be output to the screen 10 and the character color of the character displayed on the background of the screen 10, and the addition of ruby to the character. This includes ruby assignment presence / absence data 42B and character format data 42C relating to the character format.
  • the background color character color data 42A includes a default background color character color data 42Aa which is a plurality of types of basic data obtained by combining a background color and a character color in advance, additional background color character color data 42Ab, In addition to setting the presence / absence of light display, in the case of performing highlight display, it has highlight presence / absence regarding highlight portion color and highlight color data 42Ac.
  • the additional background color character color data 42Ab has a data table related to the background color and a data table related to the character color (both not shown), and the user receives the respective color data from the background color data table and the character color data table. A synthesized color using the selected or selected color data can also be generated.
  • the combination data of the background color and the character color selected or generated by the user in this way can be additionally stored in the nonvolatile memory 30 as the additional background color character color data 42Ab.
  • the ruby application presence / absence data 42B is not limited to setting whether or not to add ruby to characters, but may be data relating to ruby in general including ruby color information in addition to ruby font and size.
  • the character format data 42C includes character size data 42Ca relating to the size of the character displayed on the screen 10, and character string direction data 42Cb for designating whether the character string displayed on the screen 10 is to be displayed vertically or horizontally.
  • Screen use direction data 42Cc regarding whether the screen 10 is used in the portrait orientation (use in the portrait orientation) or the screen 10 is used in the landscape orientation (use in the landscape orientation), and the font (font) of characters displayed on the screen 10 And at least one of the font data 42Cd.
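The structure of the screen output condition data 42 described above can be summarized in a short sketch. The patent does not prescribe a concrete data layout, so all field names here are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CharacterFormat:
    """Character format data 42C."""
    size_pt: int              # 42Ca: character size
    string_direction: str     # 42Cb: "vertical" or "horizontal"
    screen_orientation: str   # 42Cc: "portrait" or "landscape"
    font: str                 # 42Cd: typeface

@dataclass
class ScreenOutputCondition:
    """Screen output condition data 42."""
    background_color: str           # from 42A
    character_color: str            # from 42A
    highlight_color: Optional[str]  # 42Ac: None when highlighting is off
    ruby_enabled: bool              # 42B
    fmt: CharacterFormat            # 42C

# One hypothetical condition a user might end up with.
cond = ScreenOutputCondition(
    background_color="#FFF8DC", character_color="#000080",
    highlight_color=None, ruby_enabled=True,
    fmt=CharacterFormat(size_pt=18, string_direction="horizontal",
                        screen_orientation="landscape", font="UD Gothic"),
)
```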
  • The audio output condition data 43 in this embodiment consists of various output condition data concerning the audio data output from the speaker 20. It includes at least one of: volume data 43A for setting the volume of the audio data, sound quality (tone) data 43B for setting its sound quality, playback speed data 43C for setting its playback speed, voice type data (male/female) 43D for setting whether output is in a male or female voice, and pitch data 43E for setting the playback pitch of the audio data.
  • The screen output condition confirmation text data 44 is used to objectively evaluate the output condition of the screen 10 based on the screen output condition data 42, as described above. Specifically, sentence data is used that tends to produce a difference in reading time and correct reading rate between users who can read characters at a standard level for their age and gender and users who find reading difficult.
  • Dedicated data as described above is used for the screen output condition confirmation text data 44, but a predetermined range (portion) extracted from the text data of the learning digital content 48, described later, can also be used as the screen output condition confirmation text data 44.
  • The reference data 46 for confirmation will now be described in more detail. The reference data 46 is used, together with data collected by having the user actually read characters, to objectively evaluate the suitability of the screen settings for the user. Specifically, it summarizes the required reading time and correct reading rate obtained when persons having standard character-reading ability for their age and gender read the screen output condition confirmation text data 44 aloud.
  • Sentence data similar to the screen output condition confirmation text data 44 can be used, and data obtained by having persons of standard character-reading ability read aloud a predetermined range (portion) extracted from the text data of the learning digital content can also be used.
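The patent does not disclose a specific algorithm for computing the correct reading rate against the reference data. As one hedged sketch, the speech-to-text transcript can be compared token-by-token against the reference text with a sequence matcher:

```python
import difflib

def correct_reading_rate(reference: str, aloud_text: str) -> float:
    """Fraction of reference tokens reproduced in the read-aloud
    transcript, using longest-matching-block alignment.

    This tokenization-based comparison is an illustrative stand-in;
    the patent only says the rate is computed from the reference
    data and the read-aloud text data.
    """
    ref_tokens = reference.split()
    hyp_tokens = aloud_text.split()
    if not ref_tokens:
        return 0.0
    matcher = difflib.SequenceMatcher(None, ref_tokens, hyp_tokens)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ref_tokens)

rate = correct_reading_rate("the quick brown fox", "the brown fox")  # 0.75
```

A production device would likely operate on characters or morphemes rather than whitespace tokens, particularly for Japanese text.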
  • The learning digital content 48 will now be described in more detail. The learning digital content 48 is data used by the user for learning to read characters, and can be created by appropriately combining known data such as text data, audio data, and image data. For example, a textbook or supplementary learning book converted to a format conforming to the DAISY standard can be used as the learning digital content 48.
  • The output setting unit 50 in this embodiment has an output setting program 52 stored in advance in the nonvolatile memory 30 and a CPU 54, built into the main body 1, that executes the commands described in the output setting program 52. Based on the output setting program 52, the CPU 54 controls the operation of the screen 10, the speaker 20 as the audio output unit, the nonvolatile memory 30 as the storage unit, the data group 40, and the microphone 60 as the audio input unit, and executes the setting process of the digital content playback display device 100.
  • FIGS. 3 and 4 are explanatory diagrams showing the setting flow in the digital content playback display device according to the first embodiment.
  • First, the CPU 54 reads the output setting program 52 from the nonvolatile memory 30 and starts the setting process of the digital content playback display device 100 specified in the output setting program 52 (S1).
  • Next, the CPU 54 reads the screen output condition data 42 from the nonvolatile memory 30 and displays a part of the learning digital content 48 in a part of the display area of the screen 10, using the background color and character color combinations specified by the default background color/character color data 42Aa (S2). Since the default background color/character color data 42Aa contains a plurality of combinations, parts of the learning digital content 48 are displayed in a plurality of predetermined areas of the screen 10, in differing or identical character colors on differing or identical background colors.
  • At this time, the CPU 54 may read the background color data table and the character color data table from the nonvolatile memory 30, output the tables to the screen 10, have the user select a color from each table, and output the selected colors as the background color of the specific area of the screen 10 and the display color (character color) of the learning digital content 48. Further, the CPU 54 may have the user select a plurality of colors from the background color and character color data tables, create composite colors for the background color and the display color of the learning digital content 48, store the composite colors in the nonvolatile memory 30, and display the background and character colors based on the created composite colors on the screen 10.
  • Next, the CPU 54 has the user select any one of the combinations of background color and display color (character color) of the learning digital content 48 output in the specific areas of the screen 10 (S3).
  • The CPU 54 then outputs the screen output condition confirmation text data 44 to the entire screen 10 using the combination of background color and character color selected by the user (S4). Instead of the screen output condition confirmation text data 44, the CPU 54 may output a part of the learning digital content 48 to the screen 10.
  • Next, the CPU 54 displays a message on the screen 10 requesting that the user read the screen output condition confirmation text data 44 aloud (S5).
  • The CPU 54 then activates the microphone 60 as the audio input unit and records the user's reading aloud with the microphone 60 to collect read-aloud recording data (S6). Subsequently, the CPU 54 stores in the nonvolatile memory 30 first linked data created by linking the read-aloud recording data to the combination of background color and character color (screen output condition data 42) selected by the user (S7). The CPU 54 repeats steps (S3) to (S7) in this way, and stores in the nonvolatile memory 30 first linked data in which read-aloud recording data is linked (paired) to each of at least two entries of the default background color/character color data 42Aa.
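Steps (S3) to (S7) above amount to a simple collection loop. The following hedged sketch captures the shape of the "first linked data"; the `record_audio` callback stands in for the microphone capture and is hypothetical:

```python
def collect_first_linked_data(condition_ids, record_audio):
    """Link one read-aloud recording to each screen output condition.

    condition_ids: the default background/character color combinations
    the user tries (the claim requires at least two).
    record_audio: callable returning the recording captured while the
    confirmation text was displayed under the given condition.
    """
    linked = {}
    for condition_id in condition_ids:                     # S3: user picks a pattern
        linked[condition_id] = record_audio(condition_id)  # S5-S6: read aloud, record
    return linked                                          # S7: stored per condition

# Usage with a stand-in recorder that returns a fake waveform label.
recordings = collect_first_linked_data(
    ["white_on_black", "black_on_cream"],
    record_audio=lambda cid: f"recording_for_{cid}",
)
```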
  • When the collection of first linked data (collection of the read-aloud recording data) is finished, the CPU 54 displays a notice to that effect on the screen 10 (S8).
  • Next, the CPU 54 calculates the reading time from the read-aloud recording data of each first linked data entry (S9). Subsequently, the CPU 54 stores in the nonvolatile memory 30 second linked data (screen output condition evaluation data for the reading time), created by linking the calculated reading time data to the screen output condition data 42 (default background color/character color data 42Aa) (S10).
  • At this time, it is preferable that the CPU 54 identifies the shortest reading time among the reading time data when creating the second linked data, and adds a flag to the second linked data linked to that shortest reading time (the reading-time-selected screen output condition evaluation data).
  • Next, the CPU 54 creates read-aloud text data from the read-aloud recording data linked in each first linked data entry (S11). The CPU 54 then calculates the correct reading rate of the read-aloud text data for each first linked data entry, using the created read-aloud text data and the text data of the reference data 46 for confirmation (S12). The CPU 54 creates third linked data (screen output condition evaluation data for the correct reading rate), in which the calculated correct reading rate is linked to the screen output condition data 42 (default background color/character color data 42Aa), and stores the third linked data in the nonvolatile memory 30 (S13). This is executed for all first linked data entries. At this time, it is preferable that the CPU 54 adds a flag to the third linked data linked to the highest correct reading rate (the correct-reading-rate-selected screen output condition evaluation data).
  • Next, the CPU 54 reads the reference data 46 for confirmation from the nonvolatile memory 30 (S14). Subsequently, the CPU 54 reads from the nonvolatile memory 30 the second linked data linked to the shortest reading time and the third linked data linked to the highest correct reading rate, and outputs to the screen 10 displays to which the color conditions based on the screen output condition data 42 (default background color/character color data 42Aa) linked to the second and third linked data are applied (S15). At this time, it is preferable that the CPU 54 indicates, beside each display, whether it is the display of the screen 10 based on the shortest reading time data or the display based on the highest correct reading rate data, and also displays a standard reference on the screen 10 (a display of the screen 10 based on the average reading time and average correct reading rate in the user's read-aloud data).
  • Next, the CPU 54 displays a screen prompting the user to select one of the output states of the screen 10 output in S15 (S16).
  • When the user makes a selection, the CPU 54 outputs a display for confirming whether to apply the selected condition to the screen 10; when the user selects application, the CPU 54 executes the setting of the digital content playback display device 100 with the selected screen output condition (S17), and the setting flow ends (END).
  • In this way, the user can read the learning digital content 48 on the screen 10 in a state that reflects his or her preference and in which the characters are easy for that user to read, and can thereby learn to read characters. This is advantageous in that it can improve not only the user's reading-learning efficiency but also motivation in learning to read characters.
  • In this embodiment, from the first linked data in which the screen output condition data 42 and the read-aloud recording data are linked, the screen output condition data 42 selected on the basis of the shortest reading time and the screen output condition data 42 selected on the basis of the highest correct reading rate are both output to the screen 10, and the user selects one of them; however, the invention is not limited to this form.
  • FIGS. 5 and 6 are explanatory diagrams showing the setting flow in the digital content playback display device according to the second embodiment.
  • Since steps (S1) to (S9) are the same as steps (S1) to (S9) in the first embodiment, detailed description of each step is omitted here.
  • The CPU 54 displays a screen prompting the user to decide whether to execute the output setting of the screen 10 based on the first linked data whose read-aloud recording data has the shortest reading time (S10). When the user selects application, the CPU 54 executes the setting of the digital content playback display device 100 based on that first linked data (S11) and ends the setting flow (END).
  • This embodiment is characterized in that screen output condition data 42 selected with attention only to the reading time is adopted as the objective setting condition.
  • FIGS. 7 and 8 are explanatory diagrams showing a setting flow in a digital content reproduction / display apparatus according to a third embodiment.
  • Since steps (S1) to (S8) are the same as steps (S1) to (S8) in the first embodiment, detailed description of each step is omitted here.
  • The CPU 54 creates read-aloud text data from the read-aloud recording data (S9) and calculates the correct reading rate of the read-aloud text data (S10). The correct reading rate can be calculated in the same manner as in the first embodiment.
  • The CPU 54 displays a screen prompting the user to decide whether to execute the output setting of the screen 10 based on the screen output condition data 42 giving the highest correct reading rate (S11). When the user selects application (S12), the CPU 54 executes the setting of the digital content playback display device 100 based on the screen output condition data 42 giving the highest correct reading rate (S13) and ends the setting flow (END).
  • This embodiment is characterized in that screen output condition data 42 selected with attention only to the correct reading rate is adopted as the objective setting condition.
  • FIGS. 9 and 10 are explanatory diagrams showing a setting flow in a digital content reproduction / display apparatus according to a fourth embodiment.
  • Since (S1) to (S14) in this embodiment are the same as steps (S1) to (S14) in the first embodiment, detailed description of each step is omitted here.
  • The CPU 54 reads from the nonvolatile memory 30 the second linked data and third linked data to which the same screen output condition data 42 is linked (S15), and calculates, for all combinations, the value V1 obtained by dividing the reading time in the second linked data by the correct reading rate in the third linked data (or the value V2 obtained by dividing the correct reading rate in the third linked data by the reading time in the second linked data) (S16).
  • Next, the CPU 54 extracts the screen output condition data 42 for which V1 is minimal (or V2 is maximal) (S17), and displays a screen prompting the user to decide whether to execute the output setting of the screen 10 based on that screen output condition data 42 (S18).
  • When the user selects application, the CPU 54 executes the setting of the digital content playback display device 100 based on the screen output condition data 42 for which V1 is minimal (or V2 is maximal) (S20) and ends the setting flow (END).
  • In this embodiment, the screen output condition data 42 that best balances the reading time and the correct reading rate is adopted, so that the setting of the digital content playback display device 100 can be executed with the objective setting condition optimized for the user added to the user's subjective setting condition.
  • FIG. 11 is an explanatory diagram showing a setting flow in a digital content playback / display apparatus according to a fifth embodiment.
  • FIG. 12 is a correlation table for extracting the basic playback speed of the learning digital content based on the reading speed ratio of the read-aloud recording data in the fifth embodiment. Since (S1) to (S14) in this embodiment are the same as steps (S1) to (S14) in the first embodiment, detailed description of each step is omitted here.
  • The CPU 54 reads the second linked data and the third linked data from the nonvolatile memory 30 and calculates the average reading time of the read-aloud recording data (S15). Subsequently, the CPU 54 calculates the reading speed ratio of the read-aloud recording data based on the average reading time calculated in (S15) (S16).
  • Here, the reading speed ratio is the value obtained by dividing the reading time by TTS (Text To Speech) of the text data used when collecting the read-aloud recording data by the user's average reading time (the quotient of the TTS reading time and the user's average reading time).
  • Next, the CPU 54 extracts the basic playback speed of the learning digital content 48 from the correlation table shown in FIG. 12 (S17). Here, the basic playback speed is the playback speed (recommended playback speed) statistically determined to be optimal for the user based on the reading speed ratio of the read-aloud recording data.
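The ratio computation of (S16) and the table lookup of (S17) can be sketched as follows. The threshold values in `SPEED_TABLE` are purely illustrative stand-ins for the correlation table of FIG. 12, which the text does not reproduce:

```python
def reading_speed_ratio(tts_time_s: float, user_avg_time_s: float) -> float:
    """TTS reading time divided by the user's average reading time
    for the same passage (S16)."""
    return tts_time_s / user_avg_time_s

# Illustrative correlation table: (minimum ratio, basic playback speed),
# sorted from fastest readers down to slowest.
SPEED_TABLE = [(0.9, 1.0), (0.7, 0.8), (0.5, 0.6), (0.0, 0.5)]

def basic_playback_speed(ratio: float, table=SPEED_TABLE) -> float:
    """Extract the recommended basic playback speed for the ratio (S17)."""
    for min_ratio, speed in table:
        if ratio >= min_ratio:
            return speed
    return table[-1][1]

ratio = reading_speed_ratio(tts_time_s=30.0, user_avg_time_s=40.0)  # 0.75
speed = basic_playback_speed(ratio)  # 0.8
```

A user who reads more slowly than the TTS engine (ratio below 1) is thus recommended a correspondingly slower basic playback speed.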
  • Next, the CPU 54 outputs a screen asking the user to select whether to change the playback speed of the learning digital content 48 from the basic playback speed (S18).
  • When the user selects a change, the CPU 54 displays a screen for adjusting the playback speed of the learning digital content 48 within a range limited to a required range around the basic playback speed (for example, a range of plus or minus 30% of the basic playback speed) (S19).
  • The user can increase or decrease the playback speed of the learning digital content 48 from the basic playback speed by sliding a playback speed adjustment switch (not shown) that the CPU 54 displays on the screen 10 (S20).
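The limited adjustment range of (S19) and (S20) is, in effect, a clamp around the basic playback speed. A minimal sketch, using the ±30% example given above (the function name is illustrative):

```python
def clamp_playback_speed(basic: float, requested: float,
                         margin: float = 0.30) -> float:
    """Keep the user-adjusted speed within the required range around
    the basic playback speed (+/-30% in the example above)."""
    lower = basic * (1.0 - margin)
    upper = basic * (1.0 + margin)
    return max(lower, min(upper, requested))

adjusted = clamp_playback_speed(basic=1.0, requested=1.5)  # clamped to 1.3
```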
  • At this time, it is preferable that the CPU 54 outputs a part of the learning digital content 48 from the speaker 20 at the current playback speed in accordance with the operation of the playback speed adjustment switch.
  • The CPU 54 then ends the playback speed adjustment step for the learning digital content 48 (S21), sets the playback of the learning digital content 48 based on the adjusted playback speed or the basic playback speed extracted from the correlation table, and completes the setting (END).
  • In this embodiment, the CPU 54 determines the basic playback speed of the learning digital content 48 based on the reading speed ratio (average reading time) of the read-aloud recording data and allows adjustment within a predetermined range around the basic playback speed. However, the basic playback speed of the learning digital content 48 may also be extracted from statistical data based on the average correct reading rate instead of the reading time, or from statistical data based on the average values of both the reading time and the correct reading rate of the read-aloud recording data. A form may also be employed in which the basic playback speed of the learning digital content 48 is appropriately extracted from statistical data based on at least one of the reading time and the correct reading rate rather than their average values.
  • the digital content playback / display apparatus 100 has been described based on a plurality of embodiments with modifications.
  • the technical scope of the present invention is limited to the above embodiments and modifications. It is not a thing.
  • the CPU 54 automatically creates a plurality of types of candidates for the playback speed of the learning digital content 48 by the digital content playback display device 100 in the same manner as in the above embodiment, and uses A function and step for allowing the user to select any reproduction speed created by the CPU 54 and a function and step for reproducing the learning digital content 48 based on the reproduction speed selected by the user may be included. In this way, by playing the learning digital content 48 at a speed that is easy for the user to hear, efficient learning support can be provided not only for reading the text but also for listening to the text. Is convenient.
  • the screen output condition data 42 is digital content. As described in the explanation part of the reproduction display device 100, it is not composed only of the background color and the character color of the screen 10.
  • character size data 42Ca relating to the size of characters to be displayed on the screen 10 and character strings to be displayed on the screen 10 are displayed vertically.
  • the character string direction data 42Cb for designating whether to use the horizontal writing display or the character format data 42C having the screen usage direction data 42Cc regarding whether the screen 10 is used in the vertical direction or whether the screen 10 is used in the horizontal direction.
  • the CPU 54 may further execute a function and a step for causing the user to add each data to the screen output condition data 42 while displaying the output state of the screen 10 by each data.
  • the CPU 54 calculates the playback speed of the screen output condition data 42 and the learning digital content 48 using the shortest reading time and the highest reading rate for the reading time and the correct reading rate in the reading sound recording data, respectively.
  • these embodiments are examples. That is, regarding the handling of the reading time and the correct reading rate in the reading sound recording data, the screen output condition data 42 and the learning digital content 48 for the user in a state where the reading time and the correct reading rate are balanced by an appropriate method. It is also possible to cause the CPU 54 to calculate the reproduction speed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

La présente invention vise à fournir un dispositif d'affichage de lecture de contenu numérique par lequel un utilisateur peut apprendre efficacement les lectures de caractères sur la base de réglages subjectifs par l'utilisateur et de réglages objectifs. À cet effet, l'invention concerne un dispositif d'affichage de lecture (100) pour contenu numérique pour l'apprentissage, qui est caractérisé en ce qu'il est muni d'un écran (10), d'un haut-parleur (20), d'une mémoire non-volatile (30) stockant des données de condition de sortie d'écran (42), des données de texte pour confirmation de condition de sortie d'écran (44), et du contenu numérique pour l'apprentissage (48), et une unité de réglage de sortie (50) pour régler une sortie appropriée à l'utilisateur lors de la sortie du contenu numérique pour l'apprentissage (48) à l'écran (10) et au haut-parleur (20), et en ce que l'unité de réglage de sortie (50) exécute une étape de réglage d'affichage d'écran subjective pour régler le contenu de sortie sur l'écran (10) à l'aide des préférences de l'utilisateur, et une étape de réglage d'affichage d'écran objective sur la base de données de lecture orale enregistrées lorsque l'utilisateur lit oralement un texte affiché sur l'écran (10).
PCT/JP2016/057911 2015-04-30 2016-03-14 Dispositif d'affichage de lecture pour contenu numérique pour l'apprentissage WO2016174945A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-093667 2015-04-30
JP2015093667A JP6096829B2 (ja) 2015-04-30 2015-04-30 学習用デジタルコンテンツ再生表示装置

Publications (1)

Publication Number Publication Date
WO2016174945A1 true WO2016174945A1 (fr) 2016-11-03

Family

ID=57199709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/057911 WO2016174945A1 (fr) 2015-04-30 2016-03-14 Dispositif d'affichage de lecture pour contenu numérique pour l'apprentissage

Country Status (2)

Country Link
JP (1) JP6096829B2 (fr)
WO (1) WO2016174945A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11338609A (ja) * 1998-05-26 1999-12-10 Nippon Telegr & Teleph Corp <Ntt> 書籍型情報表示装置および書籍型情報表示方法およびコンピュータが読み取り可能なプログラムを記録した記録媒体
JP2003150291A (ja) * 2001-11-14 2003-05-23 Oki Electric Ind Co Ltd 携帯端末の画面表示制御方法及び装置
JP2009294740A (ja) * 2008-06-03 2009-12-17 Mitsubishi Electric Corp データ処理装置及びプログラム
JP2010258940A (ja) * 2009-04-28 2010-11-11 Hitachi Ltd 表示文字自動調整機能付き表示装置
US20110111377A1 (en) * 2009-11-10 2011-05-12 Johannes Alexander Dekkers Method to teach a dyslexic student how to read, using individual word exercises based on custom text
JP2014089443A (ja) * 2012-10-03 2014-05-15 Tottori Univ 文字音読指導装置および文字音読指導プログラム
WO2014147767A1 (fr) * 2013-03-19 2014-09-25 楽天株式会社 Dispositif de traitement de document, procédé de traitement de document, programme et support de stockage d'informations

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11338609A (ja) * 1998-05-26 1999-12-10 Nippon Telegr & Teleph Corp <Ntt> 書籍型情報表示装置および書籍型情報表示方法およびコンピュータが読み取り可能なプログラムを記録した記録媒体
JP2003150291A (ja) * 2001-11-14 2003-05-23 Oki Electric Ind Co Ltd 携帯端末の画面表示制御方法及び装置
JP2009294740A (ja) * 2008-06-03 2009-12-17 Mitsubishi Electric Corp データ処理装置及びプログラム
JP2010258940A (ja) * 2009-04-28 2010-11-11 Hitachi Ltd 表示文字自動調整機能付き表示装置
US20110111377A1 (en) * 2009-11-10 2011-05-12 Johannes Alexander Dekkers Method to teach a dyslexic student how to read, using individual word exercises based on custom text
JP2014089443A (ja) * 2012-10-03 2014-05-15 Tottori Univ 文字音読指導装置および文字音読指導プログラム
WO2014147767A1 (fr) * 2013-03-19 2014-09-25 楽天株式会社 Dispositif de traitement de document, procédé de traitement de document, programme et support de stockage d'informations

Also Published As

Publication number Publication date
JP2016212167A (ja) 2016-12-15
JP6096829B2 (ja) 2017-03-15

Similar Documents

Publication Publication Date Title
CN101622659B (zh) 音质编辑装置及音质编辑方法
JP2008524656A (ja) 同期化されたプレゼンテーションを伴う楽譜捕捉および同期化されたオーディオパフォーマンス用のシステムおよび方法
JP5634853B2 (ja) 電子コミックのビューワ装置、電子コミックの閲覧システム、ビューワプログラム、ならびに電子コミックの表示方法
KR20160111335A (ko) 외국어 학습 시스템 및 외국어 학습 방법
JP6641045B1 (ja) コンテンツ生成システム、及びコンテンツ生成方法
JP2013061369A (ja) 情報処理装置、情報処理システムおよびプログラム
US8553855B2 (en) Conference support apparatus and conference support method
JP5534517B2 (ja) 発話学習支援装置およびそのプログラム
JP6153255B2 (ja) 歌唱パート決定システム
JP6096829B2 (ja) 学習用デジタルコンテンツ再生表示装置
JP2010134681A (ja) 講演資料作成支援システム、講演資料作成支援方法及び講演資料作成支援プログラム
JP2006133521A (ja) 語学学習機
JP2018097250A (ja) 言語学習装置
JP2005321706A (ja) 電子書籍の再生方法及びその装置
JP5310682B2 (ja) カラオケ装置
JP5454802B2 (ja) カラオケ装置
JP6155102B2 (ja) 学習支援装置
DeCure et al. Latinx Actor Training
KR20110065276A (ko) 비교 영상을 이용한 발음 학습 방법 및 장치
KR20140079677A (ko) 언어 데이터 및 원어민의 발음 데이터를 이용한 연음 학습장치 및 방법
KR20140087951A (ko) 이미지 데이터 및 원어민의 발음 데이터를 이용한 영어 문법 학습장치 및 방법
KR20140073768A (ko) 의미단위 및 원어민의 발음 데이터를 이용한 언어교육 학습장치 및 방법
KR20140082127A (ko) 단어의 어원 및 원어민의 발음 데이터를 이용한 단어 학습장치 및 방법
KR102025903B1 (ko) 언어 학습을 위한 장치 및 그 제어방법
JP7425698B2 (ja) カラオケ装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16786222

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16786222

Country of ref document: EP

Kind code of ref document: A1