WO2022260111A1 - Educational toy and program - Google Patents

Educational toy and program

Info

Publication number
WO2022260111A1
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
input
unit
screen
educational toy
Prior art date
Application number
PCT/JP2022/023208
Other languages
French (fr)
Japanese (ja)
Inventor
緋奈子 小沢
Original Assignee
株式会社バンダイ (BANDAI CO., LTD.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社バンダイ (BANDAI CO., LTD.)
Publication of WO2022260111A1 publication Critical patent/WO2022260111A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A63H33/22 Optical, colour, or shadow toys
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H5/00 Musical or noise-producing devices for additional toy effects other than acoustical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 Pens or stylus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B11/00 Teaching hand-writing, shorthand, drawing, or painting
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • The present invention relates to educational toy technology.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2001-194986.
  • Patent Document 1 states that its educational toy lets a child learn not only how to write characters but also how to read them, allows writing practice in a game-like manner, and aims to be an educational toy that does not become boring.
  • Patent Document 1 also states that the educational toy outputs the sound of a character based on character recognition of the character entered in its handwritten character input unit.
  • In prior art such as Patent Document 1, handwritten input is limited to a single character, and only the sound of that one character is output. Such prior art is therefore not interesting enough for children practicing and learning to write characters, and it is difficult to give children sufficient motivation to learn.
  • The purpose of the present invention is to provide, for educational toys that enable children to practice and learn writing characters, a technology that heightens interest and gives children sufficient motivation to learn.
  • The educational toy of the embodiment includes: a presentation unit that presents a predetermined sentence; an input unit through which the user inputs characters by handwriting in correspondence with the predetermined sentence; a storage unit that stores, as an effect corresponding to the predetermined sentence, effect data including at least one of a sentence, an image, and a sound; an output unit that outputs the effect corresponding to the predetermined sentence; and a control unit. The control unit presents the predetermined sentence via the presentation unit, detects input to the input unit, reads the effect data corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the effect, as sketched below.
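  • For illustration only, the claimed unit configuration could be organized roughly as in the following minimal Python sketch; the class, method, and data names are inventions for this sketch, not taken from the patent.

```python
import random

class EducationalToy:
    """Minimal sketch of the claimed units; all names are illustrative."""

    def __init__(self, effect_data):
        self.effect_data = effect_data   # storage unit: sentence -> effect candidates
        self.strokes = []                # character image information from handwriting

    def present(self, sentence):
        # presentation unit: present the predetermined sentence on the screen
        print(f"[screen] Trace this sentence: {sentence}")

    def detect_input(self, stroke):
        # input unit: record points/lines drawn with the touch pen
        self.strokes.append(stroke)

    def output_effect(self, sentence):
        # control unit + output unit: read the stored effect for this
        # sentence and output it (reply text, image, and voice)
        effect = random.choice(self.effect_data[sentence])
        print(f"[screen] {effect['image']}")
        print(f"[speaker] {effect['voice']}: {effect['reply']}")

toy = EducationalToy({"How are you?": [
    {"reply": "Yes, how are you?", "image": "g31", "voice": "s31"}]})
toy.present("How are you?")
toy.detect_input([(0, 0), (1, 1)])
toy.output_effect("How are you?")
```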
  • FIG. 1 shows the configuration of an educational toy according to Embodiment 1 of the present invention.
  • FIG. 2 shows an example functional block configuration of the educational toy of Embodiment 1.
  • FIG. 3 shows a processing flow of the educational toy of Embodiment 1.
  • FIG. 4 shows a configuration example of sentence and effect data in Embodiment 1.
  • FIG. 5 shows screen examples 1 and 2 in Embodiment 1.
  • FIG. 6 shows screen examples 3 and 4 in Embodiment 1.
  • FIG. 7 shows screen examples 5 and 6 in Embodiment 1.
  • FIG. 8 shows screen examples 7 and 8 in Embodiment 1.
  • The processor is composed of, for example, semiconductor devices such as a CPU or GPU.
  • The processor is composed of devices and circuits capable of performing predetermined operations.
  • Processing can be implemented not only as software program processing but also with dedicated circuits; FPGAs, ASICs, CPLDs, and the like are applicable as dedicated circuits.
  • The program may be installed in advance as data on the target computer, or may be distributed as data from a program source to the target computer and installed there.
  • The program source may be a program distribution server on a communication network, a non-transitory computer-readable storage medium (for example, a memory card), or the like.
  • A program may consist of a plurality of modules.
  • Various data and information are represented by structures such as tables and lists, but are not limited to these. Expressions such as identification information, identifier, ID, name, and number are interchangeable.
  • Embodiment 1: An educational toy according to Embodiment 1 of the present invention will be described with reference to FIGS. 1 to 8.
  • The educational toy of Embodiment 1 has a function that allows a child, the user, to enjoy practicing and learning to write a simple sentence (composed of a plurality of characters), and it outputs an effect according to the sentence the user inputs by handwriting.
  • In Embodiment 1, the input of a sentence and the output of an effect take the form of the user sending a letter (containing a sentence) to a fictional character and an effect that includes the character's reply to that letter.
  • FIG. 1 shows the configuration of the educational toy 1 of Embodiment 1.
  • The educational toy 1 is a pad-type (in other words, generally flat-plate-shaped) electronic device.
  • The educational toy 1 has a pad-type housing 2 and an attached touch pen 3.
  • The touch pen 3 can be attached to and detached from the housing 2.
  • A computer is built into the housing 2.
  • The screen of the display panel 4 is arranged on the main surface of the housing 2.
  • The display panel 4 is, in this example, a liquid crystal touch panel module and serves as both display means and input means.
  • The screen of the display panel 4 receives input operations by the user with the touch pen 3 (or fingers).
  • The screen of the display panel 4 accepts input with the touch pen 3 especially for handwritten character input, described later. Input with the touch pen 3 is also referred to as handwriting input.
  • The display panel 4 has a mechanism such as a touch sensor that detects touch input, and can detect the position coordinates of the point within the screen that the tip of the touch pen 3 approaches or contacts.
  • In Embodiment 1, input on the screen of the display panel 4 is recommended to be based on the dedicated touch pen 3, but it is not limited to this; direct input with fingers is also possible.
  • Various buttons 5 are provided on the housing 2, including a power button, a volume button, and a home button.
  • The housing 2 is also provided with a speaker 6 capable of outputting sound.
  • In FIG. 1, a menu screen is displayed on the screen of the display panel 4.
  • A plurality of icons 7 are displayed in the menu screen.
  • The icons 7 represent items of selectable functions (corresponding applications), such as "programming learning", "arithmetic", and "English".
  • One of the icons 7 is for selecting an application for practicing and learning to write sentences in the form of a letter, which is the feature of Embodiment 1.
  • For explanation, this application is also referred to as the "letter application".
  • The name of this letter application is, for example, "Let's send a letter to XX" or "Let's write a sentence to XX" ("XX" being a character's name).
  • The menu may have a hierarchical structure; for example, a "letter application" icon may appear at a lower level when a "national language" icon is selected.
  • All user operations during the operation of the "letter application" described later are basically realized by touch input operations on the screen of the display panel 4; no dedicated hardware button on the housing 2 is needed for user operations during this operation.
  • As a modification, the housing 2 may be provided with dedicated hardware buttons for operating applications.
  • For example, the completion button described later may be provided as a hardware button outside the screen instead of a software button (in other words, an image) within the screen.
  • FIG. 2 shows an example functional block configuration of the educational toy 1 as a computer system.
  • The educational toy 1 includes a processor 101, a memory 102, a display device 103 (including the display panel 4), a speaker 104, an operation input unit 105 (buttons and the like), an interface device 106, a battery 107, and so on, which are interconnected via a bus or the like.
  • The processor 101 is composed of a CPU, ROM, RAM, and the like, and constitutes a controller that controls the entire educational toy 1 and each of its parts.
  • The processor 101 realizes each unit by processing based on the program 51.
  • The educational toy 1 has, as these units, a control unit 11, a presentation unit 12, an input unit 13, a storage unit 14, an output unit 15, a determination unit 16, and an operation input unit 17.
  • The memory 102 is composed of a non-volatile storage device or the like and stores various data and information handled by the processor 101 and others.
  • The memory 102 stores, for example, a program 51, setting information 52, effect data 53, and display data 54.
  • The program 51 is a group of programs corresponding to the program of Embodiment 1 (that is, the program realizing the letter application) as well as the OS, middleware, and various other application programs.
  • The setting information 52 is setting information used by the program 51 and user setting information.
  • The user setting information is setting information for cases where the user can variably configure the letter application.
  • The effect data 53 is data on the sentences and effects that are defined and set in advance and used by the letter application functions based on the program of Embodiment 1.
  • The effect data includes image and audio data.
  • A configuration example of the effect data 53 will be described later (FIG. 4).
  • The display data 54 is data to be displayed on the screen by the letter application functions, and includes the character image information detected from handwriting input.
  • The display device 103 is a device including the display panel 4 of FIG. 1 and a display drive circuit and the like, and is a liquid crystal touch panel display device with a built-in touch sensor.
  • The speaker 104 is an audio output device corresponding to the speaker 6 in FIG. 1.
  • The operation input unit 105 is a part including the buttons 5 of FIG. 1 and the like, and is a device for the user's basic input operations.
  • The interface device 106, though not essential, is a device such as an input/output interface or communication interface to which a mouse, keyboard, microphone, memory card, and other sensors and devices can be connected.
  • The battery 107 supplies power to each unit.
  • The control unit 11 controls the other units, from the presentation unit 12 through the output unit 15.
  • The control unit 11 presents a predetermined sentence via the presentation unit 12, detects input to the input unit 13, reads the effect data corresponding to the predetermined sentence from the storage unit 14, and causes the output unit 15 to output the effect.
  • The control unit 11 also displays the character image information detected by the input unit 13 (the letter containing a sentence, described later).
  • The presentation unit 12 presents a predetermined sentence and the like on the screen of the display panel 4.
  • The input unit 13 is the part where the user inputs handwritten characters by touch with the touch pen 3, in correspondence with the predetermined sentence.
  • The input unit 13 displays model text for tracing that corresponds to the predetermined sentence.
  • The input unit 13 detects the character image information of characters input by the user and displays that character image information superimposed on the model text for tracing.
  • The storage unit 14 stores, as the effect data 53, effect data including at least one of a sentence, an image, and a sound as the effect corresponding to the predetermined sentence.
  • The storage unit 14 stores the effect data 53 in the memory 102.
  • The output unit 15 outputs the effect corresponding to the predetermined sentence.
  • The effect output includes image display on the screen of the display panel 4 and audio output from the speaker 6.
  • The presentation unit 12 presents, as the predetermined sentence, a plurality of sentences as options.
  • The input unit 13 accepts input of characters corresponding to the sentence the user selects from the plurality of sentences.
  • The storage unit 14 stores effect data corresponding to each of the plurality of sentences.
  • The control unit 11 reads the effect data corresponding to the sentence selected by the user from the storage unit 14 and causes the output unit 15 to output the effect. Further, the storage unit 14 may store data for a plurality of effects as the effects corresponding to one predetermined sentence.
  • In that case, the control unit 11 reads the data of an effect selected from the plurality of effects from the storage unit 14 and causes the output unit 15 to output it according to the predetermined sentence.
  • When outputting at least part of the effect, the control unit 11 hides or gradually erases the model text for tracing in the input unit 13, while causing the character image information detected by the input unit 13 (the letter containing the sentence, described later) to appear at once or gradually.
  • The determination unit 16 is the part that determines that the user has completed input to the input unit 13 (input of a handwritten sentence).
  • The determination unit 16 includes, for example, an operation input unit 17 (the completion button described later) through which the user enters an operation indicating that input to the input unit 13 is complete.
  • The determination unit 16 determines that input is complete, triggered by the operation input on the operation input unit 17 (completion button).
  • The control unit 11 performs control so that the effect is output, triggered by the determination unit 16's determination (in other words, by the completion of input).
  • The operation input unit 17 (completion button) enters a state in which its operation input is enabled and accepted on the condition that input to the input unit 13 has been detected.
  • The determination by the determination unit 16 is not limited to using the operation input unit 17 (completion button).
  • The determination by the determination unit 16 may be based on a time condition, such as the elapse of a predetermined time.
  • In that case, the control unit 11 causes the effect to be output when a predetermined time has elapsed with respect to input to the input unit 13.
  • The details of the time measurement and determination may be a fixed period from the start of the screen, a fixed period from the detection of an input, or the continuation of a no-input state for a fixed period, as sketched below.
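  • To make the three time conditions concrete, here is a hedged sketch; the class name and threshold values are assumptions for illustration, not taken from the patent.

```python
import time

class CompletionJudge:
    """Sketch of the three time-based completion conditions; thresholds are illustrative."""

    def __init__(self, screen_timeout=60.0, input_timeout=30.0, idle_timeout=10.0):
        self.screen_start = time.monotonic()   # handwriting screen started now
        self.first_input = None
        self.last_input = None
        self.screen_timeout = screen_timeout   # fixed period from the start of the screen
        self.input_timeout = input_timeout     # fixed period from the first detected input
        self.idle_timeout = idle_timeout       # no-input state continuing this long

    def on_input(self):
        now = time.monotonic()
        if self.first_input is None:
            self.first_input = now
        self.last_input = now

    def is_complete(self):
        now = time.monotonic()
        if now - self.screen_start >= self.screen_timeout:
            return True
        if self.first_input is not None and now - self.first_input >= self.input_timeout:
            return True
        if self.last_input is not None and now - self.last_input >= self.idle_timeout:
            return True
        return False

judge = CompletionJudge()
judge.on_input()
print(judge.is_complete())   # False immediately after an input
```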
  • Although the operation input unit 17 is configured as a software button (in other words, an image) within the screen of the display panel 4 in Embodiment 1, it is not limited to this.
  • A dedicated hardware button with the same function as the completion button may be provided on the housing 2 of the educational toy 1, outside the screen of the display panel 4.
  • The control unit 11 does not perform character recognition on the character image information of the characters input to the input unit 13; whatever the content of that character image information, an effect is output as long as at least part of the character image information exists.
  • The predetermined sentence is a sentence of a letter or of a conversation from the user to a character.
  • The effect includes at least one of a letter or conversation sentence, a character image, and a character voice as a response from the character to the user.
  • In Embodiment 1, the predetermined sentence is the sentence of a letter.
  • The effect includes an image of letter paper, an effect image, and a sound effect.
  • The character image information detected by the input unit 13 is displayed on the image of the letter paper.
  • FIG. 3 shows the main processing flow of the educational toy 1 of Embodiment 1; it has steps S301 to S309.
  • The processor 101 of FIG. 2 (in particular, the control unit 11) of the educational toy 1 performs this processing while reading and writing data in the memory 102.
  • First, the processor 101 displays a menu screen as shown in FIG. 1.
  • The processor 101 accepts selection of an icon 7 (corresponding application) by the user's touch operation with the touch pen 3 on the menu screen.
  • The processor 101 performs the subsequent processing when the letter application is selected.
  • In step S301, the processor 101 displays an opening screen (screen G1 in FIG. 5, described later) on the display panel 4.
  • This opening screen is a guide screen that explains the content of the letter application to the user.
  • In step S302, at a predetermined trigger, the processor 101 causes the display panel 4 to transition from the opening screen to a question selection screen (screen G2 in FIG. 5, described later).
  • This question selection screen presents the user with a plurality of predetermined sentences that are candidates for writing in the letter.
  • In step S303, the processor 101 accepts the user's selection of one sentence from the plurality of sentences by touch operation with the touch pen 3 on the question selection screen.
  • In step S304, in response to the selection of the one sentence, the processor 101 displays a handwriting input screen (screen G3 in FIG. 6, described later) on the display panel 4.
  • The processor 101 displays the model text corresponding to the selected sentence in a light color in a predetermined area of the handwriting input screen.
  • At this point, the processor 101 disables the completion button, described later.
  • In step S305, the processor 101 accepts handwriting input by the user's touch operations with the touch pen 3 in the predetermined area of the handwriting input screen.
  • The display device 103 detects the touch position coordinates and the like corresponding to the handwriting input in the area, and the processor 101 acquires sentence image data (character image information) corresponding to the handwriting input. Based on the acquired data, the processor 101 draws the image of the input (points, lines, and so on) over the model text in the area.
  • The processor 101 enables the completion button once there is any handwriting input; in other words, in response to handwriting input, the processor 101 makes it possible for the determination unit to determine completion. A sketch of this step follows.
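  • As an illustration of step S305, here is a hedged sketch of turning detected touch coordinates into drawn strokes and enabling the completion button on the first mark; the class and method names are assumptions, not from the patent.

```python
class HandwritingArea:
    """Sketch of step S305; class and method names are illustrative."""

    def __init__(self):
        self.strokes = []                # each stroke is a list of (x, y) points
        self.completion_enabled = False  # completion button state

    def on_pen_down(self, x, y):
        # start a new point/line at the detected touch position
        self.strokes.append([(x, y)])
        # at least part of a point or line now exists, so enable the button
        self.completion_enabled = True

    def on_pen_move(self, x, y):
        if self.strokes:
            self.strokes[-1].append((x, y))  # extend the current line

    def character_image(self):
        # the "character image information": the raw drawing, with no recognition
        return list(self.strokes)

area = HandwritingArea()
area.on_pen_down(10, 20)
area.on_pen_move(12, 22)
print(area.completion_enabled, area.character_image())
```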
  • In step S306, the processor 101 detects and recognizes, in other words determines, whether the sentence the user is inputting in the area of the handwriting input screen has been completed.
  • The processor 101 regards the sentence as completed when the completion button in the handwriting input screen is pressed by a touch operation (screen G5 in FIG. 7, described later).
  • In step S307, in response to the completion of the sentence, the processor 101 displays the letter containing the completed sentence on a sentence completion effect screen (screen G6 in FIG. 7, described later).
  • In step S308, at a predetermined trigger, the processor 101 causes the display panel 4 to transition from the sentence completion effect screen to a letter transmission screen (screen G7 in FIG. 8, described later).
  • This letter transmission screen shows the letter being sent from the user to the character.
  • In step S309, at a predetermined trigger, the processor 101 causes the display panel 4 to transition from the letter transmission screen to a character reply screen (screen G8 in FIG. 8, described later).
  • This reply screen shows the character receiving the letter from the user and replying to the text of the letter.
  • On this screen, the processor 101 outputs an effect determined according to the text of the letter.
  • The effect includes a reply sentence and the image and voice of the character.
  • Finally, at a predetermined trigger, the processor 101 causes the display panel 4 to transition from the reply screen to a common success screen (not shown). This ends the flow, which is summarized in the sketch below.
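  • The S301 to S309 flow can be read as straight-line pseudocode. The following is a hedged, abridged sketch; the function name, the stand-in effect table, and the hard-coded "user actions" are illustrative only.

```python
import random

# Abridged stand-in for effect data 53; contents are illustrative.
EFFECTS = {"How are you?": ["Yes, how are you?", "I'm very fine!"]}

def letter_app():
    print("S301: opening screen G1")
    options = list(EFFECTS)                     # S302: question selection screen G2
    chosen = options[0]                         # S303: the user selects one sentence
    print(f"S304: handwriting screen G3, model text '{chosen}'")
    strokes = [[(0, 0), (1, 1)]]                # S305: handwriting input (screen G4)
    if strokes:                                 # S306: completion button pressed
        print("S307: sentence completion effect screen G6")
        print("S308: letter transmission screen G7")
        reply = random.choice(EFFECTS[chosen])  # S309: reply chosen at random
        print(f"S309: character A replies: {reply}")

letter_app()
```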
  • FIG. 4 shows a configuration example of the sentence and effect data defined and set in advance. Such data is stored beforehand as the effect data 53 in the memory 102 of FIG. 2.
  • The data example of FIG. 4 corresponds to the part of the letter application for "writing a letter (sentence) to character A".
  • In the left column, this data has, as predetermined sentences 401, a plurality of candidate sentences that can become the selected sentence of the letter. In this example there are five, sentences A1 to A5.
  • Sentence A1 is "Good morning".
  • Sentence A2 is "Good night".
  • Sentence A3 is "How are you?".
  • Sentence A4 is "Do your best".
  • Sentence A5 is "Good job".
  • Each predetermined sentence is a relatively short sentence of several characters, but sentences can be longer and more complicated depending on the target age of the child.
  • The central column of this data holds character A's reply sentences 402 associated with each predetermined sentence 401.
  • In this example, two reply sentences are associated with each predetermined sentence 401.
  • For sentence A1, sentence B11 ("Good morning! I hope you have a good time today!") and sentence B12 ("Good morning!") are prepared.
  • For sentence A2, sentence B21 ("Yes, good night! Let's do our best tomorrow!") and sentence B22 are prepared.
  • For sentence A3, sentence B31 ("I'm fine! How are you?") and sentence B32 ("I'm very fine!") are prepared.
  • Sentences B41 and B42 are prepared for sentence A4.
  • Sentences B51 and B52 are prepared for sentence A5.
  • The correspondence between the predetermined sentences 401 and the reply sentences 402 is not limited to the above example; one sentence 401 may be associated with one or more reply sentences 402, and a different number of reply sentences 402 may be prepared for each predetermined sentence 401.
  • Each of character A's reply sentences 402 is set in association with character A's image and voice data 403.
  • For sentence B11, an image g11 and a voice s11 are prepared.
  • For sentence B12, an image g12 and a voice s12 are prepared.
  • The image g11 is an image expressing a greeting matching sentence B11.
  • The voice s11 is the voice that utters sentence B11.
  • An image and voice of character A are likewise prepared for each of the other reply sentences 402: sentences B21, B22, B31, B32, B41, B42, B51, and B52.
  • For example, suppose the user selects sentence A3, "How are you?".
  • The resulting effect includes an effect for completing the letter containing the handwritten sentence corresponding to the selected sentence, an effect for the transmission of the letter, and the output of the reply sentence, image, and voice of character A, who received the letter.
  • As character A's reply sentence 402, one sentence is selected from the plurality of reply sentence candidates in the effect data of FIG. 4.
  • For example, sentence B31 ("Yes, how are you?") is selected at random from sentences B31 and B32.
  • The image and voice 403 associated with the selected reply sentence are also selected, as in the sketch below.
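  • The following is a guess at how the effect data 53 of FIG. 4 could be laid out: each predetermined sentence 401 maps to reply-sentence candidates 402, each bundled with image/voice data 403. The file names and dictionary structure are illustrative assumptions, not from the patent.

```python
import random

# Sketch of effect data 53 (FIG. 4): sentence 401 -> reply candidates 402,
# each with its associated image/voice data 403.
EFFECT_DATA = {
    "Good morning": [
        {"reply": "Good morning! I hope you have a good time today!",
         "image": "g11.png", "voice": "s11.wav"},
        {"reply": "Good morning!", "image": "g12.png", "voice": "s12.wav"},
    ],
    "How are you?": [
        {"reply": "Yes, how are you?", "image": "g31.png", "voice": "s31.wav"},
        {"reply": "I'm very fine!", "image": "g32.png", "voice": "s32.wav"},
    ],
}

def pick_effect(selected_sentence):
    # one reply (with its associated image and voice) is chosen at random
    return random.choice(EFFECT_DATA[selected_sentence])

print(pick_effect("How are you?"))
```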
  • FIGS. 5 to 8 show examples of the various display screens and transitions in the letter application. The screen transitions are described below in order.
  • Screen G1, the upper screen in FIG. 5, shows an example of the opening screen (in other words, the guide screen) of the letter application.
  • A character X who guides the letter application appears as an image against the background, and guides the user through the content of the letter application (that is, practice writing sentences) with sentences, images, and voice.
  • On screen G1, for example, lines 501 of character X such as "Let's write a letter to character A!" are displayed, and the corresponding voice is output.
  • Screen G1 also displays, against the background, an example of a predetermined sentence ("Good morning" or the like), described later.
  • When the dialogue 501 spans a plurality of pages, transitions between pages are performed by, for example, a touch operation.
  • The background of the various screens may be a predetermined wallpaper, or an image of a fictional scene or the like.
  • Screen G1 transitions to the next screen G2 at a predetermined trigger.
  • This trigger may be a touch operation on screen G1 or the elapse of a predetermined time.
  • Transitions between the various screens are accompanied by predetermined screen effects (in other words, visual effects).
  • For example, a sliding screen effect may be used in which the first screen moves out of the display panel 4 while the second screen moves into it.
  • A fade-out/fade-in screen effect may also be used.
  • For example, a screen effect may be used in which the first screen gradually becomes lighter and disappears while the second screen gradually becomes darker and appears; a toy sketch of such a cross-fade follows.
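  • As a toy illustration of the cross-fade effect just described, the step count and alpha-blending scheme below are assumptions, not from the patent.

```python
# Cross-fade sketch: the first screen fades out while the second fades in.
def crossfade(first_screen, second_screen, steps=4):
    frames = []
    for i in range(steps + 1):
        a = i / steps
        frames.append({
            "screen_out": first_screen, "alpha_out": round(1.0 - a, 2),  # fades and disappears
            "screen_in": second_screen, "alpha_in": round(a, 2),         # appears gradually darker
        })
    return frames

for frame in crossfade("G1", "G2"):
    print(frame)
```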
  • The lower screen G2 in FIG. 5 shows an example of the question selection screen (in other words, a sentence presentation screen).
  • On screen G2, a plurality of predetermined sentences 502, candidates for the user to write in the letter, are presented (in other words, displayed) as options.
  • A guide sentence such as "Choose the words you want to write" is displayed, and the corresponding voice is output.
  • Other images, such as the guide character X, other characters, and scenery, may also be displayed.
  • On screen G2, the user selects one sentence from the plurality of sentences 502 by touching it with the touch pen 3.
  • Screen G2 transitions to the next screen (FIG. 6) when one sentence is selected.
  • The sentence that has been selected is also referred to below as the selection sentence.
  • An example of a selection sentence is "How are you?".
  • Screen G3, the upper screen in FIG. 6, shows the model display state as an example of the handwriting input screen.
  • In a region 601 (in other words, a writing area), model text 602 for tracing is displayed according to the sentence selected on the previous screen G2.
  • In this example, the model text 602 for tracing is displayed in light gray, with character frames, as the characters of "How are you?".
  • A guide sentence such as "Trace the model" is displayed, and the corresponding voice is output.
  • Screen G3 also has buttons 603, such as a pen tool.
  • The buttons 603 include a pen tool button, an eraser button, and an "erase all" button.
  • Initially, the pen tool button is automatically selected as the active state.
  • With the pen tool active, the user can draw points and lines within the predetermined region 601 by touch operations (that is, handwriting input) with the touch pen 3.
  • When the eraser tool is activated by a selection operation, the user can erase drawn points and lines in the region 601 by touch operations with the touch pen 3.
  • When the "erase all" button is selected, all the points and lines in the region 601 can be erased, returning it to a blank page.
  • The user can handwrite the selection sentence "How are you?", a simple sentence, with the touch pen 3 along the model text 602.
  • The user writes each character by tracing the model text 602 with the touch pen 3 in the region 601 of screen G3. Whether the handwritten characters deviate from the characters of the model text 602 is not detected, so even misaligned input is permissible. In the example of screen G3, handwriting input has not yet started, and no points or lines are drawn in the region 601.
  • On screen G3, a completion button 604 is displayed, for example at the bottom.
  • In the state before handwriting input, as on this screen G3, the processor of the educational toy 1 places the completion button 604 in a disabled state (a state in which it cannot be operated by touch) and displays it semi-transparently or in a light color.
  • Screen G4, the lower screen in FIG. 6, shows an example of the state in which the user has started handwriting input of the sentence in the region 601 of screen G3 and is partway through.
  • The processor of the educational toy 1 detects touch input to the region 601 via the functions of the display device 103, including the display panel 4, and, based on the detected information, draws points and lines corresponding to the touch position coordinates in the region 601 (forming the characters 605).
  • The characters 605 are an example of points and lines drawn by handwriting input. In this example, the lines and points of the characters 605 are thick and black, and the first several characters of the sentence "How are you?" have been drawn. The color, thickness, and so on of the characters 605 may be variably configurable.
  • When points or lines begin to be drawn in the region 601, that is, when at least part of a point or line has been drawn, the processor enables the completion button 604 (a state in which it can be pressed by touch) and displays it in a non-translucent normal state or in a dark color.
  • Screen G5, the upper screen in FIG. 7, shows an example of the screen when the sentence has been completed by handwriting input in the region 601 of screen G4.
  • In the region 601, a handwritten sentence 606, "How are you?", has been drawn.
  • The figure shows the user pressing the completion button 604 with the touch pen 3.
  • The processor detects the press of the completion button 604 via the functions of the display device 103, including the display panel 4.
  • When the processor detects and recognizes that the completion button 604 has been pressed, it regards this as completion of the sentence and acquires the data of the sentence 606 drawn in the region 601 at that moment (the corresponding character image information).
  • At the time the completion button 604 is pressed, it is not necessary that all the characters of the selection sentence have actually been drawn by handwriting; the sentence may be incomplete.
  • The processor determines that the sentence is complete even if it is incomplete, that is, whenever at least some lines or points are drawn in the region 601 and the completion button 604 is pressed. This is because, when a child uses the toy, it is expected that the child may only be able to input incomplete handwritten characters.
  • In Embodiment 1, pressing the completion button 604 is what completes the handwriting input, but modifications are possible.
  • As a modification, the completion condition may be that a predetermined amount or more of characters has been written in the region 601.
  • As another modification, the processor 101 may regard the sentence as completed when a fixed time has elapsed since the handwriting input screen G3 started.
  • Alternatively, the processor 101 may regard the sentence as completed when a fixed time has elapsed since input detection (in other words, touch detection) in the region 601.
  • Alternatively, the processor 101 may regard the sentence as completed when a no-input state in the region 601 has continued for a fixed time.
  • The lower screen G6 in FIG. 7 is the sentence completion effect screen, in other words, a letter completion effect screen.
  • The processor displays an image of letter paper in an area 701 that occupies most of screen G6 and, superimposed on the letter paper image, displays a sentence image 702 corresponding to the sentence 606 acquired at the completion of the previous screen G5.
  • On screen G6, the processor no longer displays elements such as the model text 602 (frame lines and characters) of the previous screen G5.
  • As a screen effect when displaying screen G6, the processor may, for example, leave the sentence 606 of the previous screen G5 displayed as-is while other displayed objects, such as the model text 602, gradually disappear.
  • The processor gradually displays the letter paper image in the area 701, displays a predetermined effect image (for example, a twinkling star effect), and controls output of the corresponding sounds, such as sound effects. From the user's point of view, the background appears to have changed into a letter (letter paper). The type of letter paper image and the type of effect may be randomly determined from a plurality of candidates, or may be variably set.
  • The processor continues to display this sentence completion effect screen G6 for at least a predetermined time.
  • The user can enjoy viewing the completed letter on screen G6.
  • The processor ends screen G6 at a predetermined trigger and transitions to the next screen (FIG. 8).
  • This trigger is, for example, a touch operation on screen G6 after a predetermined minimum display time has passed, or the elapse of a further predetermined time.
  • Screen G7, the upper screen in FIG. 8, shows an example of the letter transmission screen.
  • Screen G7 expresses, as part of the effect, how the letter 801 containing the sentence 702 created up to the previous screen G6 is sent from the user to a predetermined character (here, character A).
  • Part of screen G7 displays the letter 801 created up to the previous screen (that is, the letter image including the letter paper and the sentence).
  • The processor may, for example, control the display so that the letter 801 gradually emerges from the background.
  • On screen G7, an image 802 of a predetermined character Y and its lines 803 are displayed, and the corresponding voice is output.
  • Character Y is, for example, a character who delivers or receives letters. These elements express that the letter 801 from the user has reached character A. For example, a sentence such as "Mr. …" is displayed as character Y's dialogue.
  • The processor ends screen G7 at a predetermined trigger and transitions to the next screen G8.
  • This trigger is a touch operation on screen G7, or the elapse of a further predetermined time, after the predetermined minimum display time has passed.
  • During the transition, the processor controls the display of, for example, elements leaving the screen and elements newly appearing on it.
  • For example, the area with character Y's image 802 and dialogue 803 on screen G7 is display-controlled so as to move from its fixed position within the screen to outside the screen.
  • Likewise, the image and lines of character A on the next screen G8 are display-controlled so as to move from outside the screen to fixed positions within it.
  • The lower screen G8 in FIG. 8 is the screen of character A's reply effect (in other words, receipt of the letter).
  • Screen G8 expresses how character A receives the letter 801 from the user and replies to the text of that letter.
  • On screen G8, the letter 801 remains displayed; an image 804 of character A and a reply 805 (speech) are displayed in a predetermined area, and a voice corresponding to the reply 805 is output.
  • The user can feel that character A has replied to the letter 801 that he or she created, which can instill a strong desire to learn.
  • The text of the reply 805 is a reply sentence selected according to the text of the letter 801. As for the details of this decision, one reply is randomly selected from a plurality of candidate reply sentences based on predetermined data (the effect data 53 of FIG. 4); a plurality of reply sentence patterns are prepared in advance for each sentence the user can select. In this example, the reply 805 selected for the letter text "How are you?" is displayed as "Yes, how are you?".
  • The processor ends screen G8 at a predetermined trigger and transitions to the next screen, the common success screen.
  • This trigger is a touch operation within the screen after display for a predetermined minimum time.
  • The common success screen shows the end of the letter application and has content common to the various applications. After the common success screen, the display returns to the menu screen.
  • As described above, in Embodiment 1, a letter 801 containing a sentence written by the user is output together with a reply from character A, an image, and the like.
  • The child who is the user can obtain a reaction, including a reply from character A, based on the sentence he or she selected and wrote, which increases interest.
  • The child who is the user can obtain an effect including a different reply depending on the sentence he or she selects and writes, which further increases interest.
  • In Embodiment 1 the format is a letter, but the format is not limited to this; the invention is also applicable to exchanges of sentences in the form of a conversation between the user and the character. For example, in the conversation form, the character returns a first response sentence in reply to the user's input of a first selection sentence, the user then inputs a second selection sentence in reply to that, and the character returns a second response, as in the sketch below.
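  • The following is a hedged sketch of the conversation-form modification, with the user's selected sentences and the character's responses alternating turn by turn; the dialogue table and function name are illustrative assumptions, not from the patent.

```python
# Conversation-form variant: each user turn is a handwritten selection
# sentence, and the character responds before the next turn.
RESPONSES = {
    "How are you?": "I'm fine! How are you?",
    "Do your best": "Thanks, I'll do my best!",
}

def converse(selected_sentences):
    for sentence in selected_sentences:   # each sentence is handwritten by the user
        print(f"user:      {sentence}")
        print(f"character: {RESPONSES.get(sentence, '...')}")

converse(["How are you?", "Do your best"])
```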
  • In Embodiment 1, the model text 602 is displayed in the handwriting input region 601 of screen G3 in FIG. 6, and the handwritten input characters 605 are superimposed on it.
  • The invention is not limited to this; the display of the model text corresponding to the selection sentence and the display of the handwriting input may be provided side by side in separate areas within the screen.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Toys (AREA)

Abstract

[Problem] To provide a technology which can stimulate interest and thereby sufficiently motivate children to learn. [Solution] An educational toy 1 presents a predetermined sentence (steps S302, S303); receives input of characters handwritten by the user so as to correspond to the predetermined sentence (steps S304, S305); stores, as an effect corresponding to the predetermined sentence, effect data including at least one of a sentence, an image, and a voice; and outputs the effect corresponding to the predetermined sentence (steps S307, S308, S309).

Description

Educational toy and program
The present invention relates to educational toy technology.
Regarding educational toys for children, there are techniques devised so that children can enjoy practicing and learning to write characters. As an example of prior art, Japanese Patent Application Laid-Open No. 2001-194986 (Patent Document 1) can be cited. Patent Document 1 states that its educational toy lets a child learn not only how to write characters but also how to read them, allows writing practice in a game-like manner, and aims to be an educational toy that does not become boring.
Japanese Patent Application Laid-Open No. 2001-194986
Patent Document 1 states that the educational toy outputs the sound of a character based on character recognition of the character entered in its handwritten character input unit. In prior art such as Patent Document 1, handwritten input is limited to a single character, and only the sound of that one character is output. Such prior art is therefore not interesting enough for children practicing and learning to write characters, and it is difficult to give children sufficient motivation to learn.
The purpose of the present invention is to provide, for educational toys that enable children to practice and learn writing characters, a technology that heightens interest and gives children sufficient motivation to learn.
A representative embodiment of the present invention has the configuration shown below. The educational toy of the embodiment includes: a presentation unit that presents a predetermined sentence; an input unit through which the user inputs characters by handwriting in correspondence with the predetermined sentence; a storage unit that stores, as an effect corresponding to the predetermined sentence, effect data including at least one of a sentence, an image, and a sound; an output unit that outputs the effect corresponding to the predetermined sentence; and a control unit. The control unit presents the predetermined sentence via the presentation unit, detects input to the input unit, reads the effect data corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the effect.
According to the representative embodiment of the present invention, for educational toys that enable children to practice and learn writing characters, interest can be heightened and children can be given sufficient motivation to learn. Problems, configurations, effects, and the like other than those described above are explained in the description of the embodiments.
FIG. 1 shows the configuration of the educational toy of Embodiment 1 of the present invention. FIG. 2 shows an example functional block configuration of the educational toy of Embodiment 1. FIG. 3 shows a processing flow of the educational toy of Embodiment 1. FIG. 4 shows a configuration example of sentence and effect data in Embodiment 1. FIG. 5 shows screen examples 1 and 2 in Embodiment 1. FIG. 6 shows screen examples 3 and 4 in Embodiment 1. FIG. 7 shows screen examples 5 and 6 in Embodiment 1. FIG. 8 shows screen examples 7 and 8 in Embodiment 1.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, the same parts are in principle given the same reference numerals, and repeated explanations are omitted. In the drawings, the representation of each component may not reflect the actual position, size, shape, range, and so on, in order to make the invention easier to understand. In the explanations, processing by a program may be described with the program, function, or processing unit as the subject, but the hardware subject of these is a processor, or a controller, device, computer, or system composed of such a processor. A computer executes processing according to a program read into memory, with the processor appropriately using resources such as memory and communication interfaces; predetermined functions, processing units, and the like are thereby realized. The processor is composed of, for example, semiconductor devices such as a CPU or GPU. The processor is composed of devices and circuits capable of performing predetermined operations. Processing can be implemented not only as software program processing but also with dedicated circuits; FPGAs, ASICs, CPLDs, and the like are applicable as dedicated circuits. The program may be installed in advance as data on the target computer, or may be distributed as data from a program source to the target computer and installed there. The program source may be a program distribution server on a communication network, a non-transitory computer-readable storage medium (for example, a memory card), or the like. A program may consist of a plurality of modules. Various data and information are represented by structures such as tables and lists, but are not limited to these. Expressions such as identification information, identifier, ID, name, and number are interchangeable.
<Embodiment 1>
An educational toy according to Embodiment 1 of the present invention will be described with reference to FIGS. 1 to 8. The educational toy of Embodiment 1 has a function that allows a child, the user, to enjoy practicing and learning to write a simple sentence (composed of a plurality of characters), and it outputs an effect according to the sentence that the user selects on the screen and inputs by handwriting. In Embodiment 1, the input of a sentence and the output of an effect take the form of the user sending a letter (containing a sentence) to a fictional character and an effect that includes the character's reply to that letter.
[Educational toy]
FIG. 1 shows the configuration of the educational toy 1 of Embodiment 1. The educational toy 1 is a pad-type (in other words, generally flat-plate-shaped) electronic device. The educational toy 1 has a pad-type housing 2 and an attached touch pen 3. The touch pen 3 can be attached to and detached from the housing 2. A computer is built into the housing 2. The screen of the display panel 4 is arranged on the main surface of the housing 2. The display panel 4 is, in this example, a liquid crystal touch panel module and serves as both display means and input means. The screen of the display panel 4 receives input operations by the user with the touch pen 3 (or fingers). The screen of the display panel 4 accepts input with the touch pen 3 especially for handwritten character input, described later. Input with the touch pen 3 is also referred to as handwriting input.
The display panel 4 has a mechanism such as a touch sensor that detects touch input, and can detect the position coordinates of the point within the screen that the tip of the touch pen 3 approaches or contacts. In Embodiment 1, input on the screen of the display panel 4 is recommended to be based on the dedicated touch pen 3, but it is not limited to this; direct input with fingers is also possible.
Various buttons 5 are provided on the housing 2, including a power button, a volume button, and a home button. The housing 2 is also provided with a speaker 6 capable of outputting sound.
 In FIG. 1, a menu screen is displayed on the screen of the display panel 4. A plurality of icons 7 are displayed on the menu screen. The icons 7 represent selectable functions (and their corresponding applications), such as "programming learning", "arithmetic", and "English". One of the icons 7 is for selecting the application for practicing and learning to write sentences in the form of a letter, which is the feature of Embodiment 1. For the sake of explanation, this application is also referred to as the "letter application". The name of the letter application would be, for example, "Let's send a letter to ○○" or "Let's write a sentence to ○○" ("○○" being the name of the character). Note that the menu may have a hierarchical structure; for example, a "letter application" icon may be located in the lower level reached by selecting a "Japanese language" icon.
 Basically, all user operations while the "letter application" described later is running are realized by touch input operations on the screen of the display panel 4, so the housing 2 need not be equipped with dedicated hardware buttons for user operations during this time. As a modification, the housing 2 may be provided with dedicated hardware buttons for operating the application. For example, the completion button described later may be provided as a hardware button outside the screen instead of a software button (in other words, an image) within the screen.
 [Computer system]
 FIG. 2 shows an example of the functional block configuration of the educational toy 1 as a computer system. The educational toy 1 includes a processor 101, a memory 102, a display device 103 (including the display panel 4), a speaker 104, an operation input unit 105 (buttons and the like), an interface device 106, a battery 107, and so on, which are interconnected via a bus or the like.
 The processor 101 is composed of a CPU, ROM, RAM, and the like, and constitutes a controller that controls the educational toy 1 as a whole and each of its parts. The processor 101 realizes each functional unit through processing based on a program 51. The educational toy 1 has, as these units, a control unit 11, a presentation unit 12, an input unit 13, a storage unit 14, an output unit 15, a determination unit 16, and an operation input unit 17.
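 As a minimal sketch of how these functional units might be wired together in software — the class and method names below are hypothetical illustrations for explanation, not the disclosed implementation:

from dataclasses import dataclass, field


@dataclass
class PresentationUnit:
    def show(self, sentence: str) -> None:
        print(f"[screen] model sentence: {sentence}")


@dataclass
class InputUnit:
    strokes: list = field(default_factory=list)  # captured pen strokes (character image info)

    def has_input(self) -> bool:
        return bool(self.strokes)


@dataclass
class OutputUnit:
    def play(self, effect: dict) -> None:
        print(f"[screen] reply: {effect['reply']}, image: {effect['image']}")
        print(f"[speaker] audio: {effect['audio']}")


@dataclass
class ControlUnit:
    presentation: PresentationUnit
    input_unit: InputUnit
    storage: dict  # stands in for storage unit 14: sentence -> effect data
    output: OutputUnit

    def run(self, sentence: str) -> None:
        self.presentation.show(sentence)   # present the predetermined sentence
        # ... the user handwrites; input_unit.strokes fills up ...
        if self.input_unit.has_input():    # determination: some input exists
            self.output.play(self.storage[sentence])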
 The memory 102 is composed of a nonvolatile storage device or the like and stores various data and information handled by the processor 101 and other components. The memory 102 stores, for example, a program 51, setting information 52, effect data 53, and display data 54. The program 51 is a group of programs comprising the program of Embodiment 1 (that is, the program that realizes the letter application) as well as programs corresponding to the OS, middleware, and various other applications. The setting information 52 includes setting information used by the program 51 and user setting information. The user setting information is setting information for the case where the letter application allows variable settings by the user.
 The effect data 53 is data on sentences and effects, defined and set in advance, that is used by the functions of the letter application based on the program of Embodiment 1. The effect data includes image and audio data. A configuration example of the effect data 53 will be described later (FIG. 4). The display data 54 is data to be displayed on the screen by the functions of the letter application, and includes the character image information detected from handwriting input.
 The display device 103 is a device including the display panel 4 of FIG. 1, a display drive circuit, and the like; it is a liquid crystal touch panel display device with a built-in touch sensor. The speaker 104 is an audio output device corresponding to the speaker 6 of FIG. 1. The operation input unit 105 is the part including the buttons 5 of FIG. 1 and the like, and is a device for inputting basic operations by the user. The interface device 106, while not essential, is a device such as an input/output interface or a communication interface to which a mouse, keyboard, microphone, memory card, or other sensors and devices can be connected. The battery 107 supplies power to each unit.
 The control unit 11 controls the presentation unit 12 through the output unit 15. The control unit 11 presents a predetermined sentence via the presentation unit 12, detects input to the input unit 13, reads the effect data corresponding to the predetermined sentence from the storage unit 14, and causes the output unit 15 to output the effect. When outputting at least part of the effect, the control unit 11 displays the character image information detected by the input unit 13 (the letter containing the sentence, described later).
 The presentation unit 12 presents a predetermined sentence and the like on the screen of the display panel 4. The input unit 13 is the part where the user inputs characters by handwriting, through touch operations with the touch pen 3, in correspondence with the predetermined sentence. The input unit 13 displays a model sentence for tracing that corresponds to the predetermined sentence. While detecting the character image information of the characters input by the user, the input unit 13 displays that character image information superimposed on the model sentence for tracing.
 The storage unit 14 stores, as the effect data 53, data on effects each including at least one of a sentence, an image, and a sound, as effects corresponding to predetermined sentences. The storage unit 14 keeps the effect data 53 in the memory 102. The output unit 15 outputs the effect corresponding to the predetermined sentence. Outputting an effect includes displaying images on the screen of the display panel 4 and outputting sound from the speaker 6.
 In Embodiment 1, the presentation unit 12 presents a plurality of sentences as selectable options for the predetermined sentence. The input unit 13 receives characters input in correspondence with the sentence that the user has selected from the plurality of sentences. The storage unit 14 stores effect data corresponding to each of the plurality of sentences. The control unit 11 reads the effect data corresponding to the sentence selected by the user from the storage unit 14 and causes the output unit 15 to output the effect. The storage unit 14 also stores data on a plurality of effects as the effects corresponding to a given predetermined sentence. The control unit 11 reads from the storage unit 14 the data of an effect selected from the plurality of effects according to the predetermined sentence and causes the output unit 15 to output it.
 When outputting at least part of the effect, the control unit 11 hides, or gradually erases, the model sentence for tracing in the input unit 13 while displaying, or gradually revealing, the character image information detected by the input unit 13 (the letter containing the sentence, described later).
 The determination unit 16 is the part that determines that the user has completed input to the input unit 13 (input of the handwritten sentence). The determination unit 16 includes, for example, an operation input unit 17 (the completion button described later) with which the user inputs an operation indicating that input to the input unit 13 is complete. In Embodiment 1, the determination unit 16 determines that input is complete when the operation of the operation input unit 17 (completion button) is input. Triggered by the determination of the determination unit 16 (in other words, the completion of input), the control unit 11 controls output of the effect. The operation input unit 17 (completion button) enters a state of accepting the operation input as valid on the condition that input to the input unit 13 has been detected.
 The determination by the determination unit 16 is not limited to using the operation input unit 17 (completion button). In a modification, the determination unit 16 may use a time condition such as the elapse of a predetermined time; in that case, the control unit 11 causes the effect to be output when a predetermined time has elapsed with respect to input to the input unit 13. As for the details of measuring and judging time, the condition may be, for example, a fixed time from the start of the screen, a fixed time from the detection of input, or a no-input state continuing for a fixed time.
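 The three time conditions above could be checked roughly as in the following sketch; the function name and the threshold values are assumptions for illustration, as the disclosure does not specify them:

import time

# Hypothetical thresholds (seconds); the disclosure does not give values.
SCREEN_TIMEOUT = 60.0   # fixed time from the start of the input screen
INPUT_TIMEOUT = 30.0    # fixed time from the first detected input
IDLE_TIMEOUT = 10.0     # no-input state continuing for a fixed time


def input_complete(screen_start: float, first_input: float | None,
                   last_input: float | None) -> bool:
    """Return True if any of the three time conditions deems input complete."""
    now = time.monotonic()
    if now - screen_start >= SCREEN_TIMEOUT:
        return True
    if first_input is not None and now - first_input >= INPUT_TIMEOUT:
        return True
    if last_input is not None and now - last_input >= IDLE_TIMEOUT:
        return True
    return False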
 In Embodiment 1, the operation input unit 17 (completion button) is configured as a software button (in other words, an image) within the screen of the display panel 4, but it is not limited to this. In a modification, a dedicated hardware button with the same function as the completion button may be provided on the housing 2 of the educational toy 1, outside the screen of the display panel 4.
 In the educational toy 1 of Embodiment 1, the control unit 11 does not perform character recognition processing on the character image information of the characters input to the input unit 13; whatever the content of that character image information, the effect is output as long as at least some character image information exists.
 In Embodiment 1, the predetermined sentence is a sentence of a letter or conversation from the user to a character. The effect includes at least one of a letter or conversation sentence, a character image, and character audio in the character's response to the user. In Embodiment 1 in particular, the predetermined sentence is the sentence of a letter, and the effect includes an image of the letter's stationery, an effect image, and sound effects. The character image information detected by the input unit 13 (that is, the handwritten sentence) is displayed on the image of the letter's stationery.
 [Processing flow]
 FIG. 3 shows the main processing flow of the educational toy 1 of Embodiment 1, comprising steps S300 to S310. The processor 101 of FIG. 2 (in particular the control unit 11) of the educational toy 1 performs this processing while reading and writing data in the memory 102.
 In step S300, in response to activation of the educational toy 1 (for example, turning on the power button), the processor 101 displays a menu screen like that of FIG. 1 on the screen of the display panel 4. On the menu screen, the processor 101 accepts the selection of an icon 7 (and its corresponding application) by the user's touch operation with the touch pen 3. When the letter application is selected, the processor 101 performs the following processing.
 In step S301, the processor 101 displays an opening screen (screen G1 of FIG. 5, described later) on the screen of the display panel 4. This opening screen is a guide screen that explains the content of the letter application to the user.
 In step S302, at a predetermined trigger, the processor 101 causes the screen of the display panel 4 to transition from the opening screen to a question selection screen (screen G2 of FIG. 5, described later). This question selection screen presents the user with a plurality of predetermined sentences that are candidates for writing in the letter.
 In step S303, on the question selection screen, the processor 101 accepts the selection of one sentence from the plurality of sentences by the user's touch operation with the touch pen 3.
 In step S304, in response to the selection of the one sentence, the processor 101 displays a handwriting input screen (screen G3 of FIG. 6, described later) on the screen of the display panel 4. In a predetermined area of the handwriting input screen, the processor 101 displays the model sentence corresponding to the selected sentence in a light color. At this point, before handwriting input, the processor 101 keeps the completion button, described later, in a disabled state.
 In step S305, the processor 101 accepts handwriting input by the user's touch operation with the touch pen 3 in the predetermined area of the handwriting input screen. The display device 103 detects the touch position coordinates and the like corresponding to the handwriting input in the area, and based on these, the processor 101 acquires image data of the sentence (character image information) corresponding to the handwriting input. Based on the acquired data, the processor 101 draws the image of the sentence corresponding to the handwriting input (points, lines, etc.) on top of the model sentence in the area. When there is handwriting input, the processor 101 enables the completion button; in other words, in response to the presence of handwriting input, the processor 101 puts the determination unit into a state where it can determine that input is complete.
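 A sketch of this stroke capture and button gating, assuming a simple touch-event loop; the event shape and class names are hypothetical stand-ins, not the toy's actual firmware interface:

from dataclasses import dataclass


@dataclass
class TouchEvent:
    x: int
    y: int


class HandwritingArea:
    def __init__(self) -> None:
        self.strokes: list[list[tuple[int, int]]] = []  # character image info
        self.completion_enabled = False                 # button starts disabled

    def on_pen_down(self, ev: TouchEvent) -> None:
        self.strokes.append([(ev.x, ev.y)])
        self.completion_enabled = True  # any ink enables the completion button

    def on_pen_move(self, ev: TouchEvent) -> None:
        if self.strokes:
            self.strokes[-1].append((ev.x, ev.y))
            # the point would also be drawn over the model sentence here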
 In step S306, the processor 101 detects and recognizes — in other words, determines — whether the user has completed inputting the sentence in the area of the handwriting input screen. In Embodiment 1, the processor 101 regards the sentence as completed when the completion button in the handwriting input screen is pressed by a touch operation (screen G5 of FIG. 7, described later).
 In step S307, in response to the completion of the sentence, the processor 101 displays the letter containing the completed sentence on a sentence completion effect screen (screen G6 of FIG. 7, described later).
 In step S308, at a predetermined trigger, the processor 101 causes the screen of the display panel 4 to transition from the sentence completion effect screen to a letter transmission screen (screen G7 of FIG. 8, described later). This letter transmission screen depicts the letter being sent from the user to the character.
 In step S309, at a predetermined trigger, the processor 101 causes the screen of the display panel 4 to transition from the letter transmission screen to a screen of the character's reply (screen G8 of FIG. 8, described later). This reply screen depicts the character receiving the letter from the user and replying to its sentence. On this screen, the processor 101 outputs the effect determined according to the sentence of the letter. The effect includes the reply sentence and the character's image and voice.
 In step S310, at a predetermined trigger, the processor 101 causes the screen of the display panel 4 to transition from the reply screen to a common success screen (not shown). The flow then ends.
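 The overall progression of steps S300 to S310 could be summarized as the following sketch; the step names are illustrative labels for the screens described above, not identifiers from the disclosure:

from enum import Enum, auto


class Step(Enum):
    MENU = auto()             # S300: menu screen, letter application selected
    OPENING = auto()          # S301: guide screen G1
    QUESTION_SELECT = auto()  # S302-S303: present and select a sentence (G2)
    HANDWRITING = auto()      # S304-S305: model sentence shown, ink captured (G3-G4)
    COMPLETION = auto()       # S306-S307: completion button, letter effect (G5-G6)
    TRANSMISSION = auto()     # S308: letter sent to the character (G7)
    REPLY = auto()            # S309: reply effect chosen per sentence (G8)
    SUCCESS = auto()          # S310: common success screen, back to the menu


# The screens advance strictly in this order, each at its trigger.
FLOW = list(Step)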
 [Sentence and effect data]
 FIG. 4 shows a configuration example of the predefined sentence and effect data. Such data is stored in advance as the effect data 53 in the memory 102 of FIG. 2. The data example of FIG. 4 corresponds to the "Let's write a letter (sentence) to character A" part of the letter application. As illustrated, the left column of this data holds a plurality of candidate sentences, as predetermined sentences 401, from which the sentence of the letter is selected. For example, there are five sentences, sentence A1 to sentence A5. In this example, sentence A1 is "おはよう" ("Good morning"), sentence A2 is "おやすみ" ("Good night"), sentence A3 is "げんきですか" ("How are you?"), sentence A4 is "がんばってね" ("Do your best"), and sentence A5 is "おつかれさま" ("Good work today"). In Embodiment 1, the predetermined sentences are relatively short sentences of a few characters like these, but longer and more complex sentences are also possible depending on the target age of the child and other factors.
 The middle column of this data holds the data of character A's reply sentences 402, each associated with a predetermined sentence 401. In this example, two reply sentences are set in association with each sentence 401. For example, for sentence A1, sentence B11 "おはよう! きょうも たのしく すごせると いいね!" ("Good morning! I hope you have a fun day today!") and sentence B12 "おはよう! はやおき すると きぶんが いいね!" ("Good morning! Getting up early feels great!") are prepared. For sentence A2, sentence B21 "うん おやすみ! あしたも がんばろう!" ("Yes, good night! Let's do our best tomorrow too!") and sentence B22 "おやすみ! いいゆめ みられると いいね!" ("Good night! I hope you have sweet dreams!") are prepared. For sentence A3, sentence B31 "うん げんきだよ! きみは どうだい?" ("Yes, I'm fine! How about you?") and sentence B32 "ぼくは すごく げんきだよ!" ("I'm doing great!") are prepared. Similarly, sentences B41 and B42 are prepared for sentence A4, and sentences B51 and B52 for sentence A5.
 The correspondence between predetermined sentences 401 and reply sentences 402 is not limited to the above example; one or more reply sentences 402 may be associated with each sentence 401, and a different number of reply sentences 402 may be prepared for each predetermined sentence 401.
 As shown in the right column of this data, image and audio 403 data of character A is set in association with each of character A's reply sentences 402. For example, an image g11 and audio s11 are prepared for sentence B11, and an image g12 and audio s12 for sentence B12. The image g11 is an image depicting character A giving a greeting like sentence B11, and the audio s11 is the voice uttering sentence B11. Similarly, an image and audio of character A are prepared for each of the reply sentences 402, namely sentences B21, B22, B31, B32, B41, B42, B51, and B52.
 For example, when the user selects sentence A3 "げんきですか" ("How are you?") as the predetermined sentence 401, an example of the effect corresponding to this selected sentence is as follows. As described later, this effect includes an effect of completing the letter containing the handwritten sentence corresponding to the selected sentence, an effect of sending the letter, and the output of the reply sentence, image, and audio of character A, who receives the letter. As the reply sentence 402 of character A, the control unit 11 selects one from the plurality of reply sentence candidates in the effect data of FIG. 4. As an example, for sentence A3 "げんきですか", sentence B31 "うん げんきだよ! きみは どうだい?" is selected as one chosen at random from sentences B31 and B32. Along with the selected reply sentence, the image and audio 403 associated with it are also selected.
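 A sketch of how the effect data 53 of FIG. 4 and the random reply selection might look in code; the table contents follow FIG. 4, while the data structure and the asset identifiers beyond g11/s11 and g12/s12 are illustrative assumptions:

import random

# Effect data 53 in the spirit of FIG. 4: each predetermined sentence maps to
# candidate replies, each with an associated image and audio asset.
EFFECT_DATA = {
    "おはよう": [   # sentence A1
        {"reply": "おはよう! きょうも たのしく すごせると いいね!", "image": "g11", "audio": "s11"},
        {"reply": "おはよう! はやおき すると きぶんが いいね!", "image": "g12", "audio": "s12"},
    ],
    "げんきですか": [  # sentence A3; g31/s31 etc. are hypothetical asset names
        {"reply": "うん げんきだよ! きみは どうだい?", "image": "g31", "audio": "s31"},
        {"reply": "ぼくは すごく げんきだよ!", "image": "g32", "audio": "s32"},
    ],
    # ... sentences A2, A4, and A5 would follow the same pattern.
}


def choose_reply(selected_sentence: str) -> dict:
    """Pick one reply (with its image and audio) at random for the sentence."""
    return random.choice(EFFECT_DATA[selected_sentence])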
 [Screen display]
 FIGS. 5 to 8 show examples of the various display screens and transitions in the letter application. They are described below in the order of the screen transitions.
 [Screen (1)]
 In FIG. 5, screen G1 shows an example of the letter application's opening screen (in other words, the guide screen). First, on screen G1, a character X who guides the letter application, for example, appears as an image over the background and guides the user through the content of the letter application (that is, sentence-writing practice) with sentences, images, and audio. On screen G1, for example, lines 501 of character X are displayed, such as "Let's write a letter to character A (〇〇-kun)!" (first page) and "Character Y will deliver the letter for you" (second page), and the corresponding audio is output. In addition, examples of the predetermined sentences described later ("おはよう" and so on) are displayed over the background of screen G1. When the lines 501 span a plurality of pages, the pages are advanced by, for example, a touch operation. The background of the various screens may be predetermined wallpaper, or may be an image of a fictional scene or the like.
 After the guide's lines 501 end, screen G1 transitions to the next screen G2 at a predetermined trigger. This trigger is a touch operation on screen G1, but it may alternatively be the elapse of a predetermined time or the like. Transitions between the various screens are accompanied by predetermined screen effects (in other words, visual effects) or presentations. For example, when transitioning from a first screen to a second screen, the screen effect may be one in which the first screen slides out of the display panel 4 while the second screen slides in. Alternatively, it may be a screen effect in which the first screen gradually fades out and disappears while the second screen gradually fades in and appears.
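 The fade variant could be realized roughly as in the following sketch of alpha interpolation over a fixed duration; the frame count and the commented-out draw calls are generic assumptions, not the toy's actual rendering code:

FRAMES = 30  # e.g. half a second at 60 fps; the value is an assumption


def crossfade_frames(frames: int = FRAMES):
    """Yield (alpha_old, alpha_new) pairs: old screen fades out, new fades in."""
    for i in range(frames + 1):
        t = i / frames
        yield (1.0 - t, t)


for alpha_old, alpha_new in crossfade_frames():
    # draw(first_screen, alpha=alpha_old)   # gradually disappearing
    # draw(second_screen, alpha=alpha_new)  # gradually appearing
    pass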
 [Screen (2)]
 The lower screen G2 of FIG. 5 shows an example of the question selection screen (in other words, the sentence presentation screen). On screen G2, a plurality of predetermined sentences 502 that are candidates for the user to write in the letter are presented (in other words, displayed) as options. Screen G2 also displays, as a guide, a sentence such as "Choose the words (sentence) you want to write", and the corresponding audio is output. Images of the guide character X, other characters, scenes, and the like may also be displayed on screen G2.
 On this screen G2, the user selects one sentence from the plurality of sentences 502 by a touch operation with the touch pen 3. Screen G2 transitions to the next screen (FIG. 6) when one sentence has been selected. The selected sentence is also referred to as the selection sentence; as an example, let the selection sentence be "げんきですか" ("How are you?").
 [Screen (3)]
 In FIG. 6, screen G3 shows the model display state as an example of the handwriting input screen. On this screen G3, a model sentence 602 for tracing, corresponding to the sentence selected on the previous screen G2, is displayed in an area 601 (in other words, the writing area). In this example, the characters "げんきですか" are displayed in light gray, together with character frames, in the area 601 as the model sentence 602 for tracing. Screen G3 also displays, as a guide, a sentence such as "Trace over the model", and the corresponding audio is output.
 Screen G3 also has buttons 603 for a pen tool and the like. In this example, the buttons 603 include a pen tool button, an eraser button, and an "erase all" ("ぜんぶけす") button. Initially, the pen tool button is automatically selected as the active tool. While the pen tool is active, the user can draw points and lines in the predetermined area 601 by touch operations with the touch pen 3 (that is, handwriting input). When the eraser tool has been selected and is active, the user can erase drawn points and lines in the predetermined area 601 by touch operations with the touch pen 3. When the "erase all" button is operated, all the points and lines in the predetermined area 601 are erased, returning it to a blank state.
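 A minimal sketch of this tool handling, assuming the ink is kept as a list of points; the mode names and the hit-test radius are illustrative assumptions:

ERASE_RADIUS = 12  # pixels; a hypothetical eraser hit-test radius


class WritingArea:
    def __init__(self) -> None:
        self.points: list[tuple[int, int]] = []
        self.tool = "pen"  # the pen tool is active initially

    def on_touch(self, x: int, y: int) -> None:
        if self.tool == "pen":
            self.points.append((x, y))      # draw a point
        elif self.tool == "eraser":
            # remove points near the touch position
            self.points = [
                (px, py) for (px, py) in self.points
                if (px - x) ** 2 + (py - y) ** 2 > ERASE_RADIUS ** 2
            ]

    def erase_all(self) -> None:
        self.points.clear()  # back to a blank page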
 On this screen G3, the user can write the selection sentence — the simple sentence "げんきですか" — by hand with the touch pen 3, following the model sentence 602. In the area 601 of screen G3, the user writes each character by tracing the model sentence 602 with touch operations of the touch pen 3. No detection is performed as to whether the handwritten characters deviate from the characters of the model sentence 602, so such deviation is tolerated. In the example of screen G3, handwriting input has not yet begun, and no points or lines are drawn in the area 601.
 A completion button 604 is also displayed on screen G3, for example at the bottom. In the state before handwriting input, as on screen G3, the processor of the educational toy 1 keeps the completion button 604 disabled (in a state where it cannot be pressed by touch) and displays it in a manner different from the enabled state, for example semi-transparently or in a light color.
 [Screen (4)]
 The lower screen G4 of FIG. 6 shows an example of the state in which the user has begun handwriting a sentence in the area 601 of screen G3 and is partway through. Based on the functions of the display device 103 including the display panel 4, the processor of the educational toy 1 detects touch input to the area 601 and, based on the detection information, draws in the area 601 the points and lines corresponding to the touch input position coordinates (that is, the corresponding characters 605). The characters 605 are an example of points and lines drawn by handwriting input. In this example, the lines and points of the characters 605 are rendered in thick black. In the state of this example, the characters up to "げんき" of the sentence "げんきですか" have been drawn. The color, thickness, and the like of the drawn characters 605 may be made variably settable.
 When points or lines have begun to be drawn in the area 601, that is, when at least some points or lines have been drawn, the processor enables the completion button 604 (a state in which it can be pressed by touch), displaying it, for example, in its normal non-transparent state or in a dark color.
 [Screen (5)]
 In FIG. 7, screen G5 shows an example of the screen when the sentence has been completed by handwriting input in the area 601 of screen G4. In the example state of screen G5, a handwritten sentence 606, "げんきですか", has been drawn in the area 601, and the user has pressed the completion button 604 with the touch pen 3. The processor detects the press of the completion button 604 through the functions of the display device 103 including the display panel 4.
 When the processor detects and recognizes the press of the completion button 604, it regards this as completion of the sentence and acquires the data (the corresponding character image information) of the sentence 606 drawn in the area 601 at that time. Here, when the completion button 604 is pressed, it is not necessary that all characters of the selected sentence have actually been drawn by handwriting; the sentence may be incomplete. Even if the sentence is incomplete — that is, as long as at least some lines or points have been drawn in the area 601 and the completion button 604 is pressed — the processor judges it to be complete. When a child uses the toy, it is conceivable that the child can only produce handwriting that would otherwise be judged incomplete. By letting the child, as the user, press the completion button 604 at whatever moment satisfies them, completing the handwriting input and proceeding to the subsequent effects, a decline in the child's motivation to learn is suppressed, and the child is encouraged to repeat the practice and learning of writing characters again and again. This is not restrictive; as a modification, the completion condition may be that at least a predetermined amount of writing exists in the area 601. Triggered by the press of the completion button 604, the processor transitions to the next screen G6.
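 The completion judgment, including the modification requiring a minimum amount of ink, might reduce to a check like the following sketch; the threshold parameter is hypothetical, and in Embodiment 1 itself it would effectively be 1:

def sentence_complete(points: list[tuple[int, int]], min_points: int = 1) -> bool:
    # In Embodiment 1 any ink at all counts as complete; min_points models
    # the modification requiring a predetermined amount of writing.
    return len(points) >= min_points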
 As the modification mentioned earlier, when a time condition is used instead of the completion button 604 to judge sentence completion (in other words, the end of input), this can be realized, for example, as follows. The processor 101 regards the sentence as completed when a fixed time has elapsed from the start of the handwriting input screen G3; or when a fixed time has elapsed from input detection (in other words, touch detection) in the area 601; or when a no-input state, with no touches in the area 601, has continued for a fixed time.
 [Screen (6)]
 The lower screen G6 of FIG. 7 is the sentence completion effect screen, in other words the letter completion effect screen. On this screen G6, in an area 701 occupying most of the screen, the processor displays an image of the letter's stationery (in other words, the paper) and, superimposed on that stationery image, displays a sentence image 702 corresponding to the sentence 606 acquired at completion on the previous screen G5. At this time, the processor stops displaying elements such as the model sentence 602 (the frames and characters) of the previous screen G5. As a screen effect or presentation when displaying screen G6, the processor performs predetermined display control such that, for example, the display of the sentence 606 from the previous screen G5 remains as it is while the other displayed items, such as the model sentence 602, gradually disappear. At the same time, as part of the sentence completion effect, the processor gradually reveals the stationery image in the area 701, displays a predetermined effect image (for example, a twinkling-star effect), and controls output of the corresponding sound effects and other audio. From the user's point of view, the background appears to change into a letter (stationery). The type of stationery image and the type of effect may be determined at random from a plurality of candidates, or may be made variably settable.
 The processor keeps this sentence completion effect screen G6 displayed for at least a predetermined time. On this screen G6, the user can enjoy looking at the letter with the completed sentence. The processor ends screen G6 at a predetermined trigger and transitions to the next screen (FIG. 8). This trigger is, for example, a touch operation within screen G6 after a predetermined minimum display time has elapsed, or the elapse of a further predetermined time.
 [Screen (7)]
 In FIG. 8, screen G7 shows an example of the letter transmission screen. Screen G7 expresses, as part of the effect, the letter 801 containing the sentence 702 created on the previous screen G6 being sent from the user to a predetermined character (here, character A). In one area of screen G7, the letter 801 created up to the previous screens (that is, the letter image containing the stationery and the sentence) is displayed. At that time, the processor may, for example, control the display so that the letter 801 gradually rises into view over the background. In another area of screen G7, for example, an image 802 and lines 803 of a predetermined character Y are displayed, and the corresponding audio is output. Character Y is, for example, a character who delivers letters or first receives them. Together, these express that the letter 801 from the user has reached character A. For example, a line such as "〇〇-kun (= character A), a letter has arrived for you!" is displayed as character Y's dialogue.
 The processor ends screen G7 at a predetermined trigger and transitions to the next screen G8. This trigger is a touch operation within screen G7 after a predetermined minimum display time has elapsed, or the elapse of a further predetermined time. When transitioning from screen G7 to screen G8, the processor controls the display of, for example, the elements leaving the screen and the elements newly entering it. For example, the display is controlled so that the area of character Y's image 802 and lines 803 on screen G7 moves from its fixed position on the screen to off-screen, while character A's image and lines on the next screen G8 move from off-screen into their fixed positions on the screen.
 [Screen (8)]
 The lower screen G8 of FIG. 8 is the screen of character A's reply effect (in other words, letter reception). Screen G8 expresses, as an effect, character A receiving the letter 801 from the user and replying to the sentence of the user's letter 801. On screen G8, the earlier letter 801 is displayed in the same way, and in a predetermined area, an image 804 of character A and the sentence of a reply 805 (dialogue) are displayed, with the audio corresponding to the reply 805 being output. Because the letter 801 handwritten by the user is displayed on the same screen as the image 804 and the sentence of the reply 805 (dialogue), the user can genuinely feel that character A has replied to the letter 801 they created, which stimulates the motivation to learn.
 The sentence of the reply 805 is a reply sentence selected and determined according to the sentence of the letter 801. As a detail of how the reply 805 is determined, one sentence is selected at random from a plurality of candidate reply sentences, for example based on the predefined data (the effect data 53 of FIG. 4). A plurality of patterns of reply sentences are prepared in advance for each sentence the user can select. In this example, the sentence of the reply 805 selected in response to the sentence "げんきですか" of the letter 801 is displayed as "うん げんきだよ! きみは どうだい?" ("Yes, I'm fine! How about you?").
 The processor ends screen G8 at a predetermined trigger and transitions to the next screen, the common success screen. This trigger is a touch operation within the screen after display for a predetermined minimum time. Although not shown, the common success screen is a screen expressing the end of the letter application and has content common to all the applications. After the common success screen, the display returns to the menu screen.
 [Effects, etc.]
 As described above, according to the educational toy 1 of Embodiment 1, in the form of the child, as the user, sending a letter to a character, the child inputs a sentence by handwriting, tracing the presented and selected predetermined sentence; then, in response to detection and recognition of sentence completion (for example, pressing the completion button), an effect corresponding to the selected sentence, such as the character's reply, is output. This makes practicing and learning to write characters more engaging for the child and can further stimulate the child's motivation to learn. In particular, in the examples of effects and screens described above, as on screens G7 and G8 of FIG. 8, character A's reply, images, and the like are output together with the letter 801 containing the sentence written by the user. Since the child, as the user, obtains a reaction including a reply from character A to the sentence they chose and wrote themselves, the experience becomes more engaging. Moreover, since the child may obtain effects including different replies depending on which sentence they choose and write, the experience becomes more engaging still.
 [Modifications]
 The following modifications of Embodiment 1 are also possible. Embodiment 1 uses the form of a letter, but this is not restrictive; the invention is equally applicable to exchanges containing sentences between the user and a character in the form of a conversation or the like. For example, in a conversation format, the character may return a first response sentence to the user's input of a first selected sentence, to which the user inputs a second selected sentence, to which the character returns a second response sentence, and so on.
 As another modification, a form in which sentences of letters, conversations, or the like are exchanged between fictional characters is also possible. For example, when character A writes and sends a letter to character B, the user writes and inputs the sentence of character A's letter on character A's behalf.
 In Embodiment 1, regarding the presentation unit 12 and the input unit 13, the model sentence 602 is displayed in the handwriting input area 601 of screen G3 of FIG. 6, and the handwritten sentence 605 is displayed superimposed on it. This is not restrictive: the display of the model sentence corresponding to the selected sentence and the display of the handwriting input may be provided side by side in separate areas of the screen.
 While the present invention has been specifically described above based on the embodiments, the present invention is not limited to the embodiments described above and can be variously modified without departing from its gist.
 1…educational toy, 2…housing, 3…pen (touch pen), 4…display panel, 5…button, 6…speaker, 7…icon, 11…control unit, 12…presentation unit, 13…input unit, 14…storage unit, 15…output unit, 16…determination unit, 17…operation input unit, 101…processor, 102…memory.

Claims (13)

  1.  An educational toy comprising:
     a presentation unit that presents a predetermined sentence;
     an input unit with which a user inputs characters by handwriting in correspondence with the predetermined sentence;
     a storage unit that stores effect data including at least one of a sentence, an image, and a sound as an effect corresponding to the predetermined sentence;
     an output unit that outputs the effect corresponding to the predetermined sentence; and
     a control unit,
     wherein the control unit:
     presents the predetermined sentence via the presentation unit,
     detects input to the input unit, and
     reads the effect data corresponding to the predetermined sentence from the storage unit and causes the output unit to output the effect.
  2.  The educational toy according to claim 1,
     wherein the input unit displays a sentence for tracing that corresponds to the predetermined sentence, and
     the input unit, while detecting character image information of the input characters, displays the character image information superimposed on the sentence for tracing.
  3.  The educational toy according to claim 1,
     wherein the control unit displays the character image information detected by the input unit when outputting at least part of the effect.
  4.  The educational toy according to claim 1, further comprising a determination unit that determines that the user has completed input to the input unit,
     wherein the control unit outputs the effect in response to the determination by the determination unit.
  5.  The educational toy according to claim 4,
     wherein the determination unit includes an operation input unit with which the user inputs an operation indicating that input to the input unit is complete, and
     the determination unit determines that input is complete in response to the input of the operation on the operation input unit.
  6.  The educational toy according to claim 5,
     wherein the operation input unit enters a state of accepting the input of the operation as valid on the condition that input to the input unit has been detected.
  7.  The educational toy according to claim 1,
     wherein the control unit does not perform character recognition processing on the character image information of the characters input to the input unit, and outputs the effect whenever at least some character image information exists, whatever the content of the character image information input to the input unit.
  8.  The educational toy according to claim 1,
     wherein the presentation unit presents a plurality of sentences as options for the predetermined sentence,
     the input unit receives characters input in correspondence with the sentence selected by the user from the plurality of sentences,
     the storage unit stores effect data corresponding to each of the plurality of sentences, and
     the control unit reads the effect data corresponding to the sentence selected by the user from the storage unit and causes the output unit to output the effect.
  9.  The educational toy according to claim 1,
     wherein the storage unit stores data on a plurality of effects as the effect corresponding to the predetermined sentence, and
     the control unit reads, from the storage unit, the data of an effect selected from the plurality of effects according to the predetermined sentence and causes the output unit to output the effect.
  10.  The educational toy according to claim 2,
     wherein, when outputting at least part of the effect, the control unit displays, or gradually reveals, the character image information detected by the input unit while hiding, or gradually erasing, the sentence for tracing.
  11.  The educational toy according to claim 1,
     wherein the predetermined sentence is a sentence of a letter or conversation from the user to a character, and
     the effect includes at least one of a letter or conversation sentence, a character image, and character audio in a response from the character to the user.
  12.  The educational toy according to claim 1,
     wherein the predetermined sentence is a sentence of a letter,
     the effect includes an image of the letter's stationery, an effect image, and a sound effect, and
     the character image information detected by the input unit is displayed on the image of the letter's stationery.
  13.  A program for causing an educational toy to execute information processing,
     the educational toy comprising:
     a presentation unit that presents a predetermined sentence;
     an input unit with which a user inputs characters by handwriting in correspondence with the predetermined sentence;
     a storage unit that stores effect data including at least one of a sentence, an image, and a sound as an effect corresponding to the predetermined sentence;
     an output unit that outputs the effect corresponding to the predetermined sentence; and
     a control unit,
     the program causing the control unit to present the predetermined sentence via the presentation unit, detect input to the input unit, and read the effect data corresponding to the predetermined sentence from the storage unit and cause the output unit to output the effect.
PCT/JP2022/023208 2021-06-10 2022-06-09 Educational toy and program WO2022260111A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021097628A JP7030231B1 (en) 2021-06-10 2021-06-10 Educational toys and programs
JP2021-097628 2021-06-10

Publications (1)

Publication Number Publication Date
WO2022260111A1 true WO2022260111A1 (en) 2022-12-15

Family

ID=81215054

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/023208 WO2022260111A1 (en) 2021-06-10 2022-06-09 Educational toy and program

Country Status (3)

Country Link
JP (2) JP7030231B1 (en)
CN (2) CN118059510A (en)
WO (1) WO2022260111A1 (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001194986A (en) * 2000-01-12 2001-07-19 Sente Creations:Kk Intellectual education toy
JP2002278691A (en) * 2001-03-19 2002-09-27 Sega Toys:Kk Game machine
JP2012238295A (en) * 2011-04-27 2012-12-06 Panasonic Corp Handwritten character input device and handwritten character input method
CN102419692A (en) * 2011-12-15 2012-04-18 无敌科技(西安)有限公司 Input system and method for Chinese learning
JP5551205B2 (en) * 2012-04-26 2014-07-16 株式会社バンダイ Portable terminal device, terminal program, augmented reality system, and toy
CN107222384A (en) * 2016-03-22 2017-09-29 深圳新创客电子科技有限公司 Electronic equipment and its intelligent answer method, electronic equipment, server and system
CN107783683A * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 Novel handwriting touch-screen writing practice machine for children
CN114446102B (en) * 2016-11-18 2024-03-05 株式会社和冠 Digital input device
JP6174774B1 (en) * 2016-12-02 2017-08-02 秀幸 松井 Learning support system, method and program
US11175754B2 (en) * 2017-03-13 2021-11-16 Keiji Tatani Electronic device and information processing method
CN111569443A * 2020-04-21 2020-08-25 长沙师范学院 Intelligent children's toy with writing scroll and method of use

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004020061A1 (en) * 2002-08-28 2004-03-11 Sega Toys, Ltd. Game apparatus
JP2005049387A (en) * 2003-07-29 2005-02-24 Taito Corp Game machine with character learning function
WO2005057524A1 (en) * 2003-11-28 2005-06-23 Kotobanomori Inc. Composition evaluation device
JP2014029560A (en) * 2013-10-28 2014-02-13 Fujitsu Ltd Teaching material creation device, teaching material creation method and computer program
CN108171226A * 2018-03-19 2018-06-15 陶忠道 Prompting device capable of preventing clerical errors
US20200302824A1 (en) * 2019-03-19 2020-09-24 Young Suk Lee Seven steps learning study book of ruah education and ruah learning method using thereof

Also Published As

Publication number Publication date
CN115253318B (en) 2024-03-26
CN118059510A (en) 2024-05-24
JP2022189194A (en) 2022-12-22
JP7030231B1 (en) 2022-03-04
CN115253318A (en) 2022-11-01
JP2022189711A (en) 2022-12-22

Similar Documents

Publication Publication Date Title
US5596698A (en) Method and apparatus for recognizing handwritten inputs in a computerized teaching system
US20090253107A1 (en) Multi-Modal Learning System
Liao et al. Pen-top feedback for paper-based interfaces
CN102141887A (en) Brush, carbon-copy, and fill gestures
CN102169407A (en) Contextual multiplexing gestures
CN102141888A (en) Stamp gestures
JP6278641B2 (en) Writing practice system and program
US20090248960A1 (en) Methods and systems for creating and using virtual flash cards
US11322036B2 (en) Method for displaying learning content of terminal and application program therefor
CN104657054A (en) Clicking-reader-based learning method and device
KR20180028186A (en) Method for Learning Management for Handwriting with Immediate Feedback using Multi-sensory and the Recorded Media of the Handwriting Learning Management Program read by Computer
Mak et al. Design considerations for educational mobile apps for young children
KR100838343B1 (en) Dynamic avatar guided multimode story talker on touch screen
WO2022260111A1 (en) Educational toy and program
KR20110094981A Apparatus and method for outputting information based on dot-code using gesture recognition
Poll Visualising graphical user interfaces for blind users
JP2003084656A Device for learning the writing order of characters and the like with the feel of writing directly on blank paper
KR20000036398A Character writing apparatus and method of using the apparatus as a computer interface
Grussenmeyer Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired
KR101994453B1 (en) Apparatus for learning English composition and method thereof
US20240143160A1 (en) Electronic whiteboard system and operation method thereof
JP2023062632A (en) Learning support program, learning support device and learning support method
JP3176673U (en) Interactive multimedia educational device
CN100440189C (en) Language studying system combining graphic depiction and its operational method
Fard et al. Braille-based Text Input for Multi-touch Screen Mobile Phones

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22820290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE