CN115253318A - Intellectual development toy, control method for intellectual development toy, program, and storage medium - Google Patents


Info

Publication number
CN115253318A
CN115253318A (application No. CN202210597881.4A)
Authority
CN
China
Prior art keywords
sentence
unit
input
performance
predetermined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210597881.4A
Other languages
Chinese (zh)
Other versions
CN115253318B
Inventor
小泽绯奈子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bandai Co Ltd
Original Assignee
Bandai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bandai Co Ltd filed Critical Bandai Co Ltd
Publication of CN115253318A publication Critical patent/CN115253318A/en
Application granted granted Critical
Publication of CN115253318B publication Critical patent/CN115253318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 Other toys
    • A63H33/22 Optical, colour, or shadow toys
    • A63H5/00 Musical or noise-producing devices for additional toy effects other than acoustical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F3/0354 Pointing devices with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 Pens or stylus
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 GUI interaction based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 GUI interaction using icons
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0487 GUI interaction using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 GUI interaction using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 GUI interaction using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B11/00 Teaching hand-writing, shorthand, drawing, or painting
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Abstract

The invention provides an intellectual training toy, a control method for the intellectual training toy, a program, and a storage medium that can heighten a child's interest and instill sufficient motivation to learn. An intellectual training toy (1) presents a predetermined sentence (steps S302, S303); a user inputs characters corresponding to the predetermined sentence by handwriting (steps S304, S305); data of a performance including at least one of a sentence, an image, and a sound is stored for the performance corresponding to the predetermined sentence; and the performance corresponding to the predetermined sentence is output (steps S307, S308, S309).

Description

Intellectual training toy, control method for intellectual training toy, program, and storage medium
Technical Field
The present invention relates to technology for an intellectual training toy.
Background
For child-oriented intellectual training toys, there are techniques that aim to let a child practice and learn writing in an enjoyable way. An example of the prior art is Japanese Patent Laid-Open No. 2001-194986 (Patent Document 1). Patent Document 1 proposes an intellectual training toy with which a child can learn not merely by writing characters but also in combination with how the characters are read, and can practice writing with a game-like feel without growing tired of it.
Documents of the prior art
Patent document
Patent Document 1: Japanese Patent Laid-Open No. 2001-194986
Disclosure of Invention
Problems to be solved by the invention
Patent Document 1 describes an intellectual training toy that performs character recognition on a character input to a handwritten-character input section and outputs the sound of that character. In conventional examples such as Patent Document 1, handwriting input is limited to a single character, and only the sound of that one character is output. Consequently, such examples cannot sufficiently interest a child in practicing and learning to write characters, and it is difficult to instill sufficient motivation to learn.
An object of the present invention is to provide, for an intellectual training toy that lets children practice and learn writing characters, a technique that can heighten the child's interest and arouse sufficient motivation to learn.
Means for solving the problems
Representative embodiments of the present invention have the following configuration. The intellectual training toy of the embodiment comprises: a presentation unit that presents a predetermined sentence; an input unit with which a user inputs characters corresponding to the predetermined sentence by handwriting; a storage unit that stores, for a performance corresponding to the predetermined sentence, data of the performance including at least one of a sentence, an image, and a sound; an output unit that outputs the performance corresponding to the predetermined sentence; and a control unit that causes the presentation unit to present the predetermined sentence, detects input to the input unit, reads the data of the performance corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the performance.
Advantageous Effects of Invention
According to the representative embodiment of the present invention, in the art of an intellectual training toy that lets children practice and learn writing characters, interest can be heightened and sufficient motivation to learn can be instilled in the child. Problems, configurations, effects, and the like other than those described above will be made clear in the following description of the embodiments of the present invention.
Drawings
Fig. 1 shows the structure of an intellectual training toy according to embodiment 1 of the present invention.
Fig. 2 shows an example of a functional block structure of the intellectual training toy according to embodiment 1.
Fig. 3 shows a process flow of the intellectual training toy according to embodiment 1.
Fig. 4 shows an example of the structure of data of sentences and performances in embodiment 1.
Fig. 5 shows screen examples 1 and 2 in embodiment 1.
Fig. 6 shows screen examples 3 and 4 in embodiment 1.
Fig. 7 shows screen examples 5 and 6 in embodiment 1.
Fig. 8 shows screen examples 7 and 8 in embodiment 1.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the drawings. In the drawings, the same parts are in principle denoted by the same reference numerals, and redundant description is omitted. For ease of understanding, the drawings may not represent components at their actual positions, sizes, shapes, ranges, and the like.

For convenience, processing based on a program may be explained with the program, a function, or a processing unit as the subject, but the hardware entity behind these is a processor, or a controller, device, computer, system, or the like configured with the processor. The computer executes processing in accordance with a program read out into memory, with the processor making appropriate use of resources such as the memory and a communication interface; predetermined functions, processing units, and the like are thereby realized. The processor is composed of semiconductor devices such as a CPU or GPU, that is, devices and circuits capable of predetermined computation. The processing is not limited to software program processing and can also be realized by dedicated circuits, to which an FPGA, ASIC, CPLD, or the like can be applied.

The program may be installed in advance as data in the target computer, or may be distributed as data from a program source to the target computer and then installed. The program source may be a program distribution server on a communication network or a non-transitory computer-readable storage medium (for example, a memory card), and the program may consist of a plurality of modules. Various data and information are represented by, for example, tables or lists, but are not limited to these. Expressions such as identification information, identifier, ID, name, and serial number can be interchanged with one another.
< embodiment 1>
An intellectual training toy according to embodiment 1 of the present invention will be described with reference to figs. 1 to 8. The intellectual training toy according to embodiment 1 has a function that lets a child, as the user, enjoy practicing and learning to write a simple sentence (composed of a plurality of characters), and it outputs a performance corresponding to the sentence the user selects and handwrites on the screen. In embodiment 1, the input of the sentence and the output of the performance take the form of the user sending a letter (containing the sentence) to a virtual character, followed by a performance in which the character replies to that letter.
[ intellectual education toy ]
Fig. 1 shows the structure of the intellectual training toy 1 according to embodiment 1. The intellectual training toy 1 is a plate-shaped (in other words, substantially plate-shaped) electronic device. It has a plate-shaped housing 2 and an accompanying stylus 3, which can be attached to and detached from the housing 2. A computer is built into the housing 2. The screen of a display panel 4 is disposed on the main surface of the housing 2. In this example, the display panel 4 is a liquid crystal touch panel module and serves as both the display unit and the input unit. The user performs input operations on the screen of the display panel 4 with the stylus 3 (or a finger). In particular, when handwritten characters are input as described later, the screen of the display panel 4 receives input from the stylus 3. Input with the stylus 3 is also referred to as handwriting input.
The display panel 4 includes a mechanism such as a touch sensor for detecting touch input, and can detect the position coordinates at which the tip of the stylus 3 approaches or contacts the display screen. In embodiment 1, input to the screen of the display panel 4 with the dedicated stylus 3 is basically recommended, but input is not limited to this; direct input with a finger is also possible.
Various buttons 5 are provided on the housing 2. The buttons 5 include a power button, a volume button, a home button, and the like. A speaker 6 capable of outputting sound is also provided in the housing 2.
A menu screen is displayed on the screen of the display panel 4 in fig. 1. A plurality of icons 7 are displayed on the menu screen. The icons 7 represent selectable functions (corresponding applications) such as, for example, "programming learning", "arithmetic", and "English". One of the icons 7 selects the application, a feature of embodiment 1, for practicing and learning to write a sentence in the form of a letter. For convenience of description, this application is also called the "letter application". The letter application is given a name such as, for example, "Letter to ○○" or "Write a letter to ○○" ("○○" being a character's name). The menu may have a hierarchical structure; for example, a "letter application" icon may appear at a lower level when a "national language" icon is selected.
The user basically performs all operations of the "letter application" described later by touch input on the screen of the display panel 4, so the housing 2 need not be provided with dedicated hardware buttons for operating the application. As a modification, dedicated hardware buttons for operating the application may be provided on the housing 2. For example, the completion button described later may be provided not as a software button (in other words, an image) inside the screen but as a hardware button outside the screen.
[ computer System ]
Fig. 2 shows an example of the functional block configuration of the intellectual training toy 1 as a computer system. The intellectual training toy 1 includes a processor 101, a memory 102, a display device 103 (including the display panel 4), a speaker 104, an operation input unit 105 (buttons and the like), an interface device 106, a battery 107, and the like, which are connected to one another via a bus or the like.
The processor 101 is composed of a CPU, ROM, RAM, and the like, and constitutes a controller that controls the whole of the intellectual training toy 1 and each of its parts. The processor 101 realizes each section through processing based on the program 51. The intellectual training toy 1 includes, as these sections, a control section 11, a presentation section 12, an input section 13, a storage section 14, an output section 15, a determination section 16, and an operation input section 17.
The memory 102 is configured with a nonvolatile storage device or the like, and stores various data and information processed by the processor 101 and so on. The memory 102 stores, for example, the program 51, the setting information 52, the performance data 53, and the display data 54. The program 51 comprises, in addition to the program of embodiment 1 (i.e., the program realizing the letter application), a group of programs corresponding to an OS, middleware, and various other applications. The setting information 52 consists of setting information for the program 51 and user setting information; the latter applies where the user can make variable settings for the letter application.
The performance data 53 is data of the predetermined sentences and performances used by the letter application functions based on the program of embodiment 1. The performance data also includes image and sound data. An example of the structure of the performance data 53 will be described later (fig. 4). The display data 54 is data for display on the screen within the letter application functions, and includes the character image information detected from handwriting input.
The display device 103 includes the display panel 4 of fig. 1 and its display drive circuit, and is a liquid crystal touch panel display device with a built-in touch sensor. The speaker 104 is the audio output device corresponding to the speaker 6 of fig. 1. The operation input unit 105 includes the buttons 5 and the like of fig. 1 and is the device for the user's basic operation input. The interface device 106 is optional, and may be an input/output or communication interface to which a mouse, keyboard, microphone, memory card, other sensors, or other devices can be connected. The battery 107 supplies electric power to each unit.
The control unit 11 controls the presentation unit 12 through the output unit 15. The control unit 11 causes the presentation unit 12 to present a predetermined sentence, detects input to the input unit 13, reads data of the performance corresponding to the predetermined sentence from the storage unit 14, and causes the output unit 15 to output the performance. When outputting at least a part of the performance, the control unit 11 displays the character image information detected by the input unit 13 (the letter containing the sentence, described later).
The presentation unit 12 presents a predetermined sentence and the like on the screen of the display panel 4. The input unit 13 is the part with which the user inputs characters corresponding to the predetermined sentence by handwriting, through touch operations with the stylus 3. The input unit 13 displays a copybook (model) sentence corresponding to the predetermined sentence for tracing, detects the character image information of the characters input by the user, and displays that information superimposed on the copybook sentence.
The storage unit 14 stores in the memory 102, as the performance data 53, data of the performance corresponding to a predetermined sentence, including at least one of a sentence, an image, and a sound. The output unit 15 outputs the performance corresponding to the predetermined sentence; this output includes displaying an image on the screen of the display panel 4 and outputting sound from the speaker 6.
In embodiment 1, the presentation unit 12 presents a plurality of sentences as options for the predetermined sentence. The input unit 13 receives input of characters corresponding to the sentence the user selects from among the plurality of sentences. The storage unit 14 stores data of the performance corresponding to each of the plurality of sentences. The control unit 11 reads the data of the performance corresponding to the user-selected sentence from the storage unit 14 and causes the output unit 15 to output it. The storage unit 14 may also store data of a plurality of performances for one predetermined sentence; in that case, the control unit 11 reads from the storage unit 14 the data of a performance selected from the plurality according to the predetermined sentence, and causes the output unit 15 to output it.
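The division of labor among the units just described (present a sentence, collect handwriting, read the stored performance, output it) can be sketched as a minimal controller. All class, method, and data names below are illustrative assumptions, not taken from the patent:

```python
class LetterToyController:
    """Minimal sketch of the control unit's flow among the
    presentation, input, storage, and output units described
    in the text. All names are illustrative, not from the patent.
    """

    def __init__(self, storage):
        # storage unit: maps a predetermined sentence to its performance data
        self.storage = storage

    def run(self, sentence, handwriting):
        # The presentation unit has shown `sentence`; `handwriting`
        # is the stroke data collected by the input unit.
        if not handwriting:            # no input detected yet
            return None
        # Read the performance for this sentence (storage unit)
        # and hand it to the output unit (returned here).
        return self.storage[sentence]


# Hypothetical performance data: text, image, and sound components.
storage = {"Good morning": {"text": "Good morning!",
                            "image": "char_a.png",
                            "sound": "reply.wav"}}
ctrl = LetterToyController(storage)
assert ctrl.run("Good morning", handwriting=[]) is None
perf = ctrl.run("Good morning", handwriting=[[(0, 0), (1, 1)]])
assert perf["sound"] == "reply.wav"
```

The key design point mirrored here is that the performance is keyed by the selected sentence, not by the content of the handwriting.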
When outputting at least a part of the performance, the control unit 11 displays (or gradually reveals) the character image information detected by the input unit 13 (the letter containing the sentence, described later), while hiding (or gradually fading out) the copybook sentence that the input unit 13 displayed for tracing.
The determination unit 16 is the part that determines that the user has completed input (handwriting a sentence) to the input unit 13. The determination unit 16 includes, for example, the operation input unit 17 (the completion button described later), which is used to input an operation indicating that the user's input to the input unit 13 is complete. In embodiment 1, the determination unit 16 determines that input is complete when an operation of the operation input unit 17 (completion button) is performed. The control unit 11 controls the output of the performance in response to the determination by the determination unit 16 (in other words, completion of input). The operation input unit 17 (completion button) is enabled, and thus able to accept an operation, on condition that input to the input unit 13 has been detected.
The determination by the determination unit 16 is not limited to use of the operation input unit 17 (completion button). In a modification, the determination may use a time condition such as the elapse of a predetermined time, with the control unit 11 outputting the performance when the predetermined time has elapsed with respect to input to the input unit 13. For measuring and judging the time, a fixed time from display of the screen, a fixed time from detection of input, a fixed duration of the no-input state, or the like may be used.
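The two completion strategies above (a button enabled only once handwriting exists, or a timeout after the last stroke) can be sketched as follows. The class, its timeout value, and its method names are assumptions made for illustration:

```python
class CompletionDetector:
    """Decides when the user has finished handwriting a sentence.

    Mirrors the two strategies in the description: an explicit
    completion button (enabled only after some ink exists) or a
    fixed no-input duration. All names are illustrative.
    """

    def __init__(self, idle_timeout_s=5.0):
        self.idle_timeout_s = idle_timeout_s
        self.has_ink = False          # any handwriting detected yet?
        self.last_stroke_time = None  # time of the most recent stroke

    def on_stroke(self, now):
        # Called whenever the touch panel reports stylus input.
        self.has_ink = True
        self.last_stroke_time = now

    def button_enabled(self):
        # The completion button accepts presses only once input exists.
        return self.has_ink

    def on_button_press(self):
        # True means this press counts as "input completed".
        return self.has_ink

    def timed_out(self, now):
        # Modification: treat a sustained no-input period as completion.
        return (self.has_ink
                and now - self.last_stroke_time >= self.idle_timeout_s)


detector = CompletionDetector(idle_timeout_s=5.0)
assert not detector.button_enabled()   # button starts disabled
detector.on_stroke(now=10.0)
assert detector.button_enabled()
assert detector.on_button_press()
assert not detector.timed_out(now=12.0)   # only 2 s of silence
assert detector.timed_out(now=15.0)       # 5 s of silence elapsed
```

Gating the button on `has_ink` reproduces the condition that the completion button only becomes operable after input to the input unit is detected.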
In embodiment 1, the operation input unit 17 (completion button) is configured as a software button (in other words, an image) on the screen of the display panel 4, but the present invention is not limited to this. In a modification, a dedicated hardware button with the same function as the completion button may be provided on the housing 2 of the intellectual training toy 1, outside the screen of the display panel 4.
In the intellectual training toy 1 according to embodiment 1, the control unit 11 performs no character recognition processing on the character image information of the characters input to the input unit 13; it causes the performance to be output whenever at least some character image information exists, regardless of the content of that information.
In embodiment 1, the predetermined sentence is a letter from the user to a character, or the user's part of a conversation with the character. The performance includes at least one of a sentence, a character image, and a character voice in the reply, by letter or conversation, from the character to the user. Specifically, in embodiment 1 the predetermined sentence is the sentence of a letter, and the performance consists of an image of the letter containing the sentence, a special-effect image, and a sound effect. The character image information detected by the input unit 13 (i.e., the handwritten sentence) is displayed on the image of the letter.
[ Process flow ]
Fig. 3 shows the main process flow of the intellectual training toy 1 according to embodiment 1, comprising steps S300 to S310. The processor 101 of fig. 2 (particularly the control unit 11) performs this processing while reading data from the memory 102.
In step S300, the processor 101 displays a menu screen such as that shown in fig. 1 on the screen of the display panel 4 in response to activation of the intellectual training toy 1 (e.g., the power button being turned on). On the menu screen, the processor 101 accepts selection of an icon 7 (and its corresponding application) by the user's touch operation with the stylus 3. When the letter application is selected, the processor 101 performs the subsequent processing.
In step S301, the processor 101 displays a title screen (screen G1 of fig. 5, described later) on the screen of the display panel 4. The title screen is a guidance screen that explains the content of the letter application to the user.
In step S302, the processor 101 transitions the display on the screen of the display panel 4 from the title screen to the topic selection screen (screen G2 of fig. 5, described later) at a predetermined timing. The topic selection screen presents the user with a plurality of predetermined sentences as candidates to write in the letter.
In step S303, the processor 101 accepts, on the topic selection screen, the user's selection of one sentence from among the plurality of sentences by a touch operation with the stylus 3.
In step S304, the processor 101 displays a handwriting input screen (screen G3 of fig. 6, described later) on the screen of the display panel 4 in response to the selection of that sentence. The processor 101 displays the copybook sentence corresponding to the selected sentence in a light color in a predetermined area of the handwriting input screen. At this point, before any handwriting input, the processor 101 sets the completion button, described later, to a disabled state.
In step S305, the processor 101 receives handwriting input by the user through touch operations with the stylus 3 in the predetermined area of the handwriting input screen. The display device 103 detects the touch position coordinates and the like corresponding to the handwriting input in that area, and from them the processor 101 acquires data (character image information) of the image of the handwritten sentence. Based on the acquired data, the processor 101 draws the image of the sentence (dots, lines, and so on) over the copybook sentence in the area. Once handwriting input exists, the processor 101 sets the completion button to an enabled state; in other words, the presence of handwriting input puts the determination unit into a state in which it can determine that input is complete.
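Step S305's conversion of touch coordinates into stroke data drawn over the faint copybook sentence could look like the following sketch; the class, its event names, and the stroke representation are illustrative assumptions:

```python
class HandwritingCanvas:
    """Collects stylus touch coordinates into strokes (cf. step S305).

    Each stroke is a list of (x, y) points; the control unit would
    render these as lines over the light-colored copybook sentence.
    The completion button becomes enabled once the first stroke lands.
    All names are illustrative, not from the patent.
    """

    def __init__(self):
        self.strokes = []                     # finished strokes
        self.current = None                   # stroke in progress
        self.complete_button_enabled = False  # step S304 starts disabled

    def pen_down(self, x, y):
        # Stylus tip contacts the panel: begin a new stroke.
        self.current = [(x, y)]

    def pen_move(self, x, y):
        # Stylus drags across the panel: extend the current stroke.
        if self.current is not None:
            self.current.append((x, y))

    def pen_up(self):
        # Stylus lifts: commit the stroke and enable the button.
        if self.current:
            self.strokes.append(self.current)
            self.complete_button_enabled = True  # ink now exists
        self.current = None


canvas = HandwritingCanvas()
assert not canvas.complete_button_enabled
canvas.pen_down(10, 20)
canvas.pen_move(12, 22)
canvas.pen_up()
assert canvas.complete_button_enabled
assert len(canvas.strokes) == 1
```

The accumulated `strokes` list corresponds to the character image information that is later displayed on the letter image without any recognition step.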
In step S306, the processor 101 determines whether the user has finished inputting the sentence in the area of the handwriting input screen. In embodiment 1, when the completion button on the handwriting input screen is pressed by a touch operation (screen G5 of fig. 7, described later), the processor 101 regards the sentence as completed.
In step S307, in response to completion of the sentence, the processor 101 displays a letter containing the completed sentence on the sentence-completion performance screen (screen G6 of fig. 7, described later).
In step S308, the processor 101 transitions the display on the screen of the display panel 4 from the sentence-completion performance screen to the letter transmission screen (screen G7 of fig. 8, described later) at a predetermined timing. The letter transmission screen depicts the user sending the letter to the character.
In step S309, the processor 101 transitions the display on the screen of the display panel 4 from the letter transmission screen to the character's reply screen (screen G8 of fig. 8, described later) at a predetermined timing. The reply screen depicts the character receiving the letter from the user and replying to its sentence. The processor 101 outputs on the screen the performance determined according to the sentence of the letter; the performance includes the reply sentence, an image of the character, and sound.
In step S310, the processor 101 shifts the display of the screen of the display panel 4 from the above-described reply screen to the display of the common success screen (not shown) at a predetermined trigger. The flow then ends.
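The input handling of steps S305 and S306 can be sketched as follows. This is a minimal illustration under assumed names (the patent does not specify an implementation): strokes are collected, and the mere presence of any input activates the completion button.

```python
class HandwritingArea:
    """Collects stylus strokes in the input area and gates the completion
    button on the presence of any input (steps S305 and S306)."""

    def __init__(self):
        self.strokes = []          # each stroke: list of (x, y) touch coordinates
        self.done_enabled = False  # completion button starts inactive

    def add_stroke(self, points):
        """Draw a dot or line; any input at all activates the completion button."""
        if points:
            self.strokes.append(list(points))
            self.done_enabled = True

    def press_done(self):
        """A press counts as sentence completion only while the button is active.
        No character recognition is performed; any stroke suffices."""
        return self.done_enabled
```

Note that, consistent with the description, `press_done` never inspects the stroke contents; whether the drawn characters match the copybook is irrelevant.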
[ data of sentences and performances ]
Fig. 4 shows an example of the structure of data of predetermined sentences and performances. Such data is stored in advance as the performance data 53 in the memory 102 of fig. 2. The data example of fig. 4 corresponds to the "letter (sentence) to be written to character A" portion of the letter application. As shown in the figure, this data has, as the predetermined sentences 401 in the left column, a plurality of candidate sentences that can be selected as the sentence of the letter. For example, there are five sentences, A1 to A5. In this example, sentence A1 is "Good morning", sentence A2 is "Good evening", sentence A3 is "How are you?", sentence A4 is "Do your best!", and sentence A5 is "Good work!". In embodiment 1, each predetermined sentence is a relatively short sentence made up of several characters as described above, but longer and more complicated sentences can be used depending on the age of the child or the like.
The central column of this data has data of the sentences 402 of the replies of character A associated with the predetermined sentences 401. In this example, two reply sentences are associated with each sentence 401. For example, for sentence A1, sentence B11 "Good morning! Let's make today a great day!" and sentence B12 "Good morning! A morning greeting feels wonderful!" are prepared. For sentence A2, sentence B21 "Hey, good evening! Do your best tomorrow too!" and sentence B22 "Good night! Sweet dreams!" are prepared. For sentence A3, sentence B31 "Yes, I'm doing great! How about you?" and sentence B32 "I'm doing very well!" are prepared. Similarly, sentences B41 and B42 are prepared for sentence A4, and sentences B51 and B52 for sentence A5.
The correspondence between the predetermined sentences 401 and the reply sentences 402 is not limited to the above example; one or more reply sentences 402 may be associated with each sentence 401, and a different number of reply sentences 402 may be prepared for each predetermined sentence 401.
As shown in the right column of the data, data of an image and a sound 403 of character A are set in association with each reply sentence 402 of character A. For example, for sentence B11, image g11 and sound s11 are prepared; for sentence B12, image g12 and sound s12 are prepared. Image g11 is an image showing character A greeting as in sentence B11, and sound s11 is a sound of sentence B11 being read aloud. Similarly, an image and a sound of character A are prepared for each reply sentence 402, such as sentences B21, B22, B31, B32, B41, B42, B51, and B52.
For example, when the user selects sentence A3 "How are you?", an example of the performance corresponding to the selected sentence is as follows. The performance includes the presentation of the letter including the handwritten input sentence corresponding to the selected sentence, the presentation of the letter being transmitted, and the output of the sentence, image, and sound of the reply by character A who has received the letter; details are described later. The control unit 11 selects one reply sentence 402 of character A from the plurality of reply sentence candidates in the performance data of fig. 4. As an example, for sentence A3 "How are you?", sentence B31 "Yes, I'm doing great! How about you?" is selected as one chosen at random from sentences B31 and B32. In addition to the selected reply sentence, the image and sound 403 associated with that reply sentence are also selected.
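The reply selection described above can be illustrated with a small sketch. The identifiers (A1, B11, g11, s11, and so on) follow fig. 4, but this particular encoding of the performance data is an assumption for illustration only.

```python
import random

# Hypothetical encoding of the fig. 4 performance data: each predetermined
# sentence id maps to candidate replies, each bundled with an image id and
# a sound id. Identifiers for A3's image/sound follow the g/s naming pattern.
PERFORMANCE_DATA = {
    "A1": [("B11", "g11", "s11"), ("B12", "g12", "s12")],
    "A3": [("B31", "g31", "s31"), ("B32", "g32", "s32")],
}

def select_performance(sentence_id, rng=random):
    """Randomly pick one reply for the selected sentence, together with the
    image and sound associated with that reply, as the control unit does."""
    reply, image, sound = rng.choice(PERFORMANCE_DATA[sentence_id])
    return {"reply": reply, "image": image, "sound": sound}
```

For sentence A3, the call returns either the B31 or the B32 bundle with equal probability, mirroring the random choice between the two prepared replies.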
[ Screen display ]
Figs. 5 to 8 show examples of various screens and transitions in the letter application. The following description proceeds in the order of the screen transitions.
[ Screen (1) ]
In fig. 5, screen G1 shows an example of the title screen (in other words, guidance screen) of the letter application. First, on screen G1, character X, which guides the letter application, appears over the background, and the content of the letter application (i.e., practice in writing a sentence) is explained to the user by sentences, images, and voice. On screen G1, for example, a line 501 of character X such as "Let's try writing a letter to character A!" (first page) and "Character Y will deliver it" (second page) is displayed, and the corresponding sound is output. At the same time, an example of a predetermined sentence described later (such as "Good morning") is displayed on the background of screen G1. When the line 501 extends over a plurality of pages, the display moves between the pages by, for example, a touch operation. The background of each screen may be predetermined wallpaper, or may be an image such as a virtual scene.
After the guidance line 501 ends, screen G1 transitions to the next screen G2 at a predetermined trigger. The trigger is a touch operation on screen G1, or may be the elapse of a predetermined time or the like. Transitions between the various screens are accompanied by predetermined screen effects (in other words, visual effects) or presentations. For example, when transitioning from a first screen to a second screen, a screen effect may be used in which the first screen moves out of the screen of the display panel 4 while the second screen moves in. Alternatively, the screen effect may be such that the first screen gradually fades out while the second screen gradually fades in.
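The fade-out/fade-in effect can be expressed as a simple alpha interpolation. This is an illustrative sketch only; the patent does not specify how the transitions are implemented.

```python
def crossfade_alpha(t, duration):
    """Alpha values for a fade transition: the first screen fades from
    opaque to transparent while the second screen fades in.
    t is the elapsed time; progress is clamped to [0, 1]."""
    p = max(0.0, min(1.0, t / duration))
    return 1.0 - p, p  # (first screen alpha, second screen alpha)
```

A renderer would call this each frame during the transition and draw both screens with the returned opacities; the same progress value could equally drive a slide-out/slide-in offset.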
[ Screen (2) ]
The lower screen G2 of fig. 5 shows an example of the question selection screen (in other words, the sentence presentation screen). On screen G2, a plurality of predetermined sentences 502, which are candidates for the user to write in the letter, are presented (in other words, displayed) as options. On screen G2, a sentence such as "Please select the words (sentence) you want to write" is displayed as guidance, and the corresponding sound is output. In addition to the above, images of the guiding character X, other characters, scenery, and the like may be displayed on screen G2.
On this screen G2, the user selects one sentence from the plurality of sentences 502 by a touch operation with the stylus 3. When one sentence is selected, screen G2 transitions to the next screen (fig. 6). The selected sentence is also referred to as the selection sentence. As an example, the selection sentence is assumed to be "How are you?".
[ Screen (3) ]
In fig. 6, screen G3 shows a copybook display state as an example of the handwriting input screen. On this screen G3, a copybook sentence 602 for copying, corresponding to the sentence selected on the previous screen G2, is displayed in an area 601 (in other words, a writing area). In this example, the characters of the sentence "How are you?" are displayed in the area 601 in a light color as the copybook sentence 602. On screen G3, a sentence such as "Copy the copybook" is displayed as guidance, and the corresponding sound is output.
Further, buttons 603 such as tool buttons are provided on screen G3. In this example, the buttons 603 are a pen tool button, an eraser button, and an "all erase" button. Initially, the pen tool button is automatically selected and active. While the pen tool is active, the user can draw dots and lines in the predetermined area 601 by touch operations (i.e., handwriting input) with the stylus 3. When the eraser button is selected and becomes active, the user can erase the drawn dots and lines by touch operations with the stylus 3 in the predetermined area 601. When the "all erase" button is selected, all the dots and lines in the predetermined area 601 are erased, returning the area to blank paper.
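The pen, eraser, and "all erase" behavior above can be sketched as a small tool state machine. The class and attribute names are assumptions for illustration.

```python
class DrawingTools:
    """Models the buttons 603 on screen G3: the pen draws dots and lines,
    the eraser removes them, and "all erase" returns the area to blank."""

    def __init__(self):
        self.tool = "pen"     # the pen tool is active initially
        self.points = set()   # coordinates of drawn dot/line points

    def touch(self, x, y):
        """Apply the active tool at a touch position in the area 601."""
        if self.tool == "pen":
            self.points.add((x, y))
        elif self.tool == "eraser":
            self.points.discard((x, y))

    def erase_all(self):
        """Clear everything, returning the area to blank paper."""
        self.points.clear()
```

A real implementation would erase within a radius around the touch point rather than exact coordinates; the per-point version keeps the sketch minimal.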
The user can write the selected sentence "How are you?" by handwriting in the area 601 of screen G3. The user writes each character by tracing the copybook sentence 602 with touch operations of the stylus 3 in the area 601. Whether or not the handwritten characters deviate from the characters of the copybook sentence 602 is not detected, so deviation is permitted. In the example of screen G3, no dots or lines have yet been drawn in the area 601; it is in the state before handwriting input.
A completion button 604 is displayed on screen G3, for example in the lower portion. In the state before handwriting input, as on screen G3, the processor of the intellectual development toy 1 sets the completion button 604 to the inactive state (a state in which it cannot be pressed by touch), and displays the completion button 604 in a mode different from the active state, for example semi-transparently or in a light color.
[ Screen (4) ]
The lower screen G4 of fig. 6 shows an example of a state partway through handwriting input, in which the user has begun handwriting the sentence in the area 601 of screen G3. The processor of the intellectual development toy 1 detects touch input to the area 601 through the function of the display device 103 including the display panel 4, and, based on the detection information, draws dots and lines (the corresponding characters 605) at the coordinates of the touch input positions in the area 601. The characters 605 are an example of dots and lines drawn by handwriting input. In this example, the lines and dots of the characters 605 are thick and black, and the characters up to partway through the sentence have been input. The color, thickness, and the like of the drawing of the characters 605 can be variably set.
When drawing of dots or lines in the area 601 has started, that is, when at least part of a dot or line exists, the processor sets the completion button 604 to the active state (a state in which it can be pressed by touch), and displays the completion button 604, for example, in a normal non-transparent mode or in a dark color.
[ Screen (5) ]
In fig. 7, screen G5 shows an example of the screen at the time the sentence input by handwriting in the area 601 of screen G4 is completed. In the state of screen G5, the sentence 606 "How are you?" has been written by handwriting in the area 601. The user then presses the completion button 604 with the stylus 3. The processor detects the pressing of the completion button 604 through the function of the display device 103 including the display panel 4.
When detecting/recognizing the pressing of the completion button 604, the processor regards the pressing as completion of the sentence, and acquires the data (corresponding character image information) of the sentence 606 drawn in the area 601 at that time. Here, when the completion button 604 is pressed, it is not necessary for all the characters of the selected sentence to have been drawn by handwriting input; the sentence may be incomplete. Even if the sentence is not complete, the processor determines that it is completed as long as at least part of a line or dot has been drawn in the area 601 and the completion button 604 is pressed. When the toy is used by a child, it is expected that the handwritten characters may remain in an unfinished state. By letting the child press the completion button 604 at a timing of their own satisfaction, finish the handwriting input, and proceed to the subsequent performance, a decline in the child's motivation to learn can be suppressed, and the child can be encouraged to practice and learn to write characters repeatedly. Without being limited to this, as a modification, a condition that a predetermined amount or more of characters have been written in the area 601 may be set as the completion condition. When the completion button 604 is pressed, the processor transitions to the next screen G6.
In addition, as another modification, when a time condition is used instead of the completion button 604 in the determination of sentence completion (in other words, input completion), the following implementations are possible, for example. The processor 101 may regard the sentence as completed when a certain time has elapsed since screen G3 for handwriting input was displayed. Alternatively, the processor 101 may regard the sentence as completed when a certain time has elapsed since input detection (in other words, touch detection) in the area 601. Alternatively, the processor 101 may regard the sentence as completed when a state with no touch input in the area 601 has continued for a certain time.
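The completion conditions above (button press with at least some input, elapsed time, or idle time) can be combined as in the following sketch. The function name and the threshold values are illustrative assumptions, not taken from the patent.

```python
def input_completed(done_pressed, has_strokes, now, screen_opened_at,
                    last_touch_at, timeout=30.0, idle_limit=5.0):
    """Return True under any of the described completion conditions:
    - the completion button is pressed and at least part of a stroke exists,
    - a fixed time has elapsed since the handwriting screen was displayed,
    - touches have been idle for a while after input started.
    Times are in seconds; last_touch_at is None before any touch."""
    if done_pressed and has_strokes:
        return True
    if now - screen_opened_at >= timeout:
        return True
    if last_touch_at is not None and now - last_touch_at >= idle_limit:
        return True
    return False
```

Note that no character recognition enters the decision anywhere: completion depends only on the presence of input and on timing.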
[ Screen (6) ]
The lower screen G6 of fig. 7 is the sentence completion presentation screen, in other words, the letter completion presentation screen. On this screen G6, the processor displays an image of the stationery (in other words, notepaper) of the letter in an area 701 occupying most of the screen, and displays a sentence image 702, corresponding to the sentence 606 acquired at completion on the previous screen G5, superimposed on the stationery image. At this time, the processor does not display elements such as the copybook sentence 602 (frame lines and characters) of the previous screen G5. As the screen effect or presentation when displaying screen G6, the processor performs predetermined display control such that, for example, the display of the sentence 606 from the previous screen G5 is kept as it is while the other displayed objects such as the copybook sentence 602 gradually disappear. At the same time, as part of the sentence completion presentation, the processor performs control so that the stationery image gradually appears in the area 701, a predetermined special-effect image (for example, a twinkling-star effect) is displayed, and a corresponding sound such as a sound effect is output. From the user's perspective, the background appears to change into stationery. The type of the stationery image and the type of the special effect may be determined randomly from a plurality of candidates, or may be variably set.
The processor causes screen G6, on which the presentation has completed, to be displayed for at least a predetermined time. The user can view the letter composed of the completed sentence on screen G6. The processor terminates screen G6 at a predetermined trigger and transitions to the next screen (fig. 8). The trigger is, for example, a touch operation on screen G6 performed after the predetermined minimum display time has elapsed, or the elapse of a further predetermined time.
[ Screen (7) ]
In fig. 8, screen G7 shows an example of the letter transmission screen. Screen G7 is a screen showing part of the performance in which the letter 801 including the sentence 702 created up to the previous screen G6 is transmitted from the user to the predetermined character (here, character A). On screen G7, the letter 801 (i.e., the letter image including the stationery and sentence) created up to the previous screen is displayed in a partial area. At this time, the processor may also perform display control so that, for example, the letter 801 gradually emerges over the background. In addition, for example, an image 802 and a line 803 of a predetermined character Y are displayed in another area of screen G7, and the corresponding sound is output. Character Y is, for example, a character that delivers letters or that first receives them. Thereby, a performance is given in which character A receives the letter 801 from the user. For example, as the line of character Y, a sentence such as "A letter has arrived for you (= character A)!" is displayed.
The processor terminates screen G7 at a predetermined trigger and transitions to the next screen G8. The trigger is a touch operation on screen G7 performed after the predetermined minimum display time has elapsed, or the elapse of a further predetermined time. When transitioning from screen G7 to screen G8, the processor controls the display of, for example, elements exiting the screen and elements entering the screen. For example, display control is performed such that the areas of the image 802 and line 803 of character Y on screen G7 move from their predetermined positions within the screen to outside the screen, while the image and line of character A on the next screen G8 move from outside the screen to predetermined positions within the screen.
[ Screen (8) ]
The lower screen G8 of fig. 8 is the screen of the reply performance of character A (in other words, letter reception). On this screen G8, character A receiving the letter 801 from the user and replying to the sentence of the user's letter 801 is shown as a performance. On screen G8, the previous letter 801 continues to be displayed, the image 804 of character A and the sentence of the reply 805 (line) are displayed in predetermined areas, and the sound corresponding to the reply 805 is output. Since the letter 801 handwritten by the user is displayed on the same screen as the image 804 and the sentence of the reply 805, the user can feel the reality of character A replying to the letter 801 the user created, which heightens motivation to learn.
The sentence of the reply 805 is the reply sentence determined from the selected sentence of the letter 801. As for the details of determining the sentence of the reply 805, for example, one sentence is randomly selected from a plurality of candidate reply sentences based on predetermined data (the performance data 53 of fig. 4); reply sentences in a plurality of patterns are prepared in advance for each of the user's selection sentences. In this example, "Yes, I'm doing great! How about you?" is selected as the sentence of the reply 805 to the selection sentence "How are you?".
The processor terminates screen G8 at a predetermined trigger and transitions to the common success screen as the next screen. The trigger is a touch operation within the screen performed after a predetermined minimum time has elapsed. Although not shown, the common success screen is a screen indicating that the letter application has ended, with content common to the respective applications. After the common success screen, the display returns to the menu screen.
[ Effect and the like ]
As described above, according to the intellectual development toy 1 of embodiment 1, in the form of sending a letter from the child as the user to a character, after the child handwrites a sentence by copying a presented and selected predetermined sentence, a performance such as a reply from the character is output as the performance corresponding to the selected sentence, based on detection/recognition of completion of the sentence (for example, pressing of the completion button). Therefore, when children practice and learn to write characters, the fun is increased, and the child's motivation to learn can be better aroused. In particular, in the above example of the performances and screens, as on screens G7 and G8 of fig. 8, the reply, image, and the like of character A are output together with the letter 801 including the sentence written by the user. This allows the child, as the user, to obtain a reaction including a reply from character A for the sentence the child selected and wrote, increasing the fun. In addition, since the child obtains a performance including a different reply according to the sentence the child selected and wrote, the fun is further increased.
[ modified examples ]
As modifications of embodiment 1, the following are also possible. Embodiment 1 uses the form of a letter, but the present invention is not limited to this; it can similarly be applied to communication including sentences between the user and a character in the form of a conversation or the like. For example, in the form of a conversation, the character may reply with a first response sentence according to the user's input of a first selection sentence, the user may input a second selection sentence according to the first response sentence, and the character may reply with a second response sentence according to the input of the second selection sentence.
As another modification, a form of communication such as letters or conversation between virtual characters may be used. For example, in a form in which a sentence of a letter is written and transmitted from character A to character B, the user writes and inputs the sentence of character A's letter on character A's behalf.
In embodiment 1, the presentation unit 12 and the input unit 13 display the copybook sentence 602 in the handwriting input area 601 of screen G3 of fig. 6, and the handwritten input sentence 605 is displayed superimposed on the copybook sentence 602. Alternatively, the display of the copybook sentence corresponding to the selected sentence and the display of the handwritten input may be arranged in different areas of the screen.
The present invention has been specifically described above based on the embodiments, but the present invention is not limited to the above-described embodiments, and various modifications can be made without departing from the scope of the invention.
Description of the reference numerals
1: intellectual education toy; 2: housing; 3: pen (stylus); 4: display panel; 5: button; 6: speaker; 7: icon; 11: control unit; 12: presentation unit; 13: input unit; 14: storage unit; 15: output unit; 16: determination unit; 17: operation input unit; 101: processor; 102: memory.

Claims (15)

1. An intellectual education toy, comprising:
a presentation unit that presents a predetermined sentence;
an input unit for the user to input characters corresponding to the predetermined sentence by handwriting;
a storage unit that stores, for a performance corresponding to the predetermined sentence, data of the performance including at least one of a sentence, an image, and a sound;
an output unit that outputs the performance corresponding to the predetermined sentence; and
a control unit,
wherein the control unit causes the presentation unit to present the predetermined sentence, detects an input to the input unit, reads data of the performance corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the performance.
2. The intellectual education toy according to claim 1, wherein,
the input unit displays a sentence for copying corresponding to the predetermined sentence, and
the input unit detects character image information of the input characters and displays the character image information superimposed on the sentence for copying.
3. The intellectual education toy according to claim 1,
the control unit displays the character image information detected by the input unit when causing at least a part of the performance to be output.
4. The intellectual education toy according to claim 1,
further comprising a determination unit that determines that the input to the input unit by the user is completed,
the control unit causes the performance to be output in response to the determination by the determination unit.
5. The intellectual education toy according to claim 4,
the determination unit includes an operation input unit for inputting an operation indicating that the input to the input unit by the user is completed,
the determination unit determines that the input is completed, triggered by the input of the operation through the operation input unit.
6. The intellectual education toy according to claim 5,
the operation input unit is set to a state in which the input of the operation is enabled and the input of the operation is accepted, on condition that an input to the input unit is detected.
7. The intellectual education toy according to claim 1,
the control unit performs no character recognition processing on the character image information of the characters input to the input unit, and causes the performance to be output when at least a part of the character image information exists, regardless of the content of the character image information of the characters input to the input unit.
8. The intellectual education toy according to claim 1,
the presentation unit presents, as options, a plurality of sentences as the predetermined sentence,
the input unit is configured to input characters corresponding to the sentence selected by the user from the plurality of sentences,
the storage unit stores data of performances corresponding to each of the plurality of sentences,
the control unit reads out data of a performance corresponding to the sentence selected by the user from the storage unit, and causes the output unit to output the performance.
9. The intellectual education toy according to claim 1,
the storage unit stores data of a plurality of performances for the performances corresponding to the predetermined sentence,
the control unit reads data of a performance selected from the plurality of performances from the storage unit according to the predetermined sentence, and causes the output unit to output the performance.
10. The intellectual education toy according to claim 2,
the control unit, when causing at least a part of the performance to be output, causes the character image information detected by the input unit to be displayed or to gradually appear, while causing the sentence for copying to be hidden or to gradually disappear.
11. The intellectual education toy according to claim 1,
the predetermined sentence is a sentence of a letter from the user to a character or of a conversation of the user with the character, and
the performance includes at least one of a sentence, a character image, and a character sound of the letter or conversation in which the character replies to the user.
12. The intellectual education toy according to claim 1,
the predetermined sentence is a sentence of a letter,
the performance includes an image of the stationery of the letter, a special-effect image, and a sound effect, and
the character image information detected by the input unit is displayed on the image of the stationery.
13. A program for causing an intellectual education toy to perform information processing,
the intellectual education toy comprising:
a presentation unit that presents a predetermined sentence;
an input unit for the user to input characters corresponding to the predetermined sentence by handwriting;
a storage unit that stores, for a performance corresponding to the predetermined sentence, data of the performance including at least one of a sentence, an image, and a sound;
an output unit that outputs the performance corresponding to the predetermined sentence; and
a control unit,
wherein the program causes the control section to perform: the presentation unit presents the predetermined sentence, detects an input to the input unit, reads data of the performance corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the performance.
14. A method for controlling an intellectual education toy, the intellectual education toy comprising:
a presentation unit that presents a predetermined sentence;
an input unit for the user to input characters corresponding to the predetermined sentence by handwriting;
a storage unit that stores, for a performance corresponding to the predetermined sentence, data of the performance including at least one of a sentence, an image, and a sound;
an output unit that outputs the performance corresponding to the predetermined sentence; and
a control unit,
wherein, in the method for controlling the intellectual education toy, the control unit causes the presentation unit to present the predetermined sentence, detects an input to the input unit, reads out data of the performance corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the performance.
15. A storage medium storing a program for causing an intellectual education toy to perform information processing, the intellectual education toy comprising:
a presentation unit that presents a predetermined sentence;
an input unit for the user to input characters corresponding to the predetermined sentence by handwriting;
a storage unit that stores, for a performance corresponding to the predetermined sentence, data of the performance including at least one of a sentence, an image, and a sound;
an output unit that outputs the performance corresponding to the predetermined sentence; and
a control unit,
wherein the program, when executed, causes the control section to perform: the presentation unit presents the predetermined sentence, detects an input to the input unit, reads data of the performance corresponding to the predetermined sentence from the storage unit, and causes the output unit to output the performance.
CN202210597881.4A 2021-06-10 2022-05-30 Intellectual education toy, method for controlling intellectual education toy, and storage medium Active CN115253318B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021097628A JP7030231B1 (en) 2021-06-10 2021-06-10 Educational toys and programs
JP2021-097628 2021-06-10

Publications (2)

Publication Number Publication Date
CN115253318A true CN115253318A (en) 2022-11-01
CN115253318B (en) 2024-03-26

Family

ID=81215054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210597881.4A Active CN115253318B (en) 2021-06-10 2022-05-30 Intellectual education toy, method for controlling intellectual education toy, and storage medium

Country Status (3)

Country Link
JP (2) JP7030231B1 (en)
CN (1) CN115253318B (en)
WO (1) WO2022260111A1 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001194986A (en) * 2000-01-12 2001-07-19 Sente Creations:Kk Intellectual education toy
JP2002278691A (en) * 2001-03-19 2002-09-27 Sega Toys:Kk Game machine
CN102419692A (en) * 2011-12-15 2012-04-18 无敌科技(西安)有限公司 Input system and method for Chinese learning
TW201305925A (en) * 2011-04-27 2013-02-01 Panasonic Corp Handwritten character input device and handwritten character input method
CN103218733A (en) * 2012-04-26 2013-07-24 株式会社万代 Portable terminal device, toy, augmented reality system and method
CN107222384A (en) * 2016-03-22 2017-09-29 深圳新创客电子科技有限公司 Electronic equipment and its intelligent answer method, electronic equipment, server and system
CN107783683A (en) * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 A kind of hand-written touch screen machine of practising handwriting of novel children
CN108171226A (en) * 2018-03-19 2018-06-15 陶忠道 It can prevent the suggestion device of clerical error
CN109155111A (en) * 2016-12-02 2019-01-04 记忆支持合同会社 learning support system, method and program
CN109964266A (en) * 2016-11-18 2019-07-02 株式会社和冠 Digital input unit, digital additions and deletions device and communication educational system
CN110462710A (en) * 2017-03-13 2019-11-15 田谷圭司 Electronic equipment and information processing method
CN111569443A (en) * 2020-04-21 2020-08-25 长沙师范学院 Intelligent toy with writing scroll for children and use method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2004020061A1 (en) * 2002-08-28 2005-12-15 株式会社セガ トイズ Game device
JP2005049387A (en) * 2003-07-29 2005-02-24 Taito Corp Game machine with character learning function
CN1886767A (en) * 2003-11-28 2006-12-27 语言的森林有限公司 Composition evaluation device
JP6129055B2 (en) * 2013-10-28 2017-05-17 富士通株式会社 Teaching material creation device, teaching material creation method and computer program
KR102034158B1 * 2019-03-19 2019-10-18 이영숙 Seven-step learning workbook for Ruah education and Ruah education method using the same

Also Published As

Publication number Publication date
JP2022189711A (en) 2022-12-22
JP7030231B1 (en) 2022-03-04
JP2022189194A (en) 2022-12-22
WO2022260111A1 (en) 2022-12-15
CN115253318B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US6882975B2 (en) Method, storage medium, apparatus, server and program for providing an electronic chat
US6515690B1 (en) Systems and methods providing an interface for navigating dynamic text
US20060033725A1 (en) User created interactive interface
KR101971161B1 (en) Decoupled applications for printed materials
US20090248960A1 (en) Methods and systems for creating and using virtual flash cards
KR101789057B1 (en) Automatic audio book system for blind people and operation method thereof
CN104657054A (en) Clicking-reader-based learning method and device
KR102260222B1 (en) Apparatus and method for supporting to write reading review
CN115253318B (en) Intellectual education toy, method for controlling intellectual education toy, and storage medium
WO2013018166A1 (en) Text writing device
JPH0883092A (en) Information inputting device and method therefor
JP4971528B1 (en) Handwritten text creation method and program
KR20100052023A (en) Educational apparatus or tablet
KR100387033B1 (en) Apparatus and method for inputting special characters easily in a telephone
JP5427331B1 (en) Typing training system, typing training method, and typing training program
KR101421554B1 (en) Apparatus and Method for Inputting Hand Writing on Touch Screen
KR20000036398A (en) The character writing apparatus and the utilizing method of the apparatus as an interface of a computer
JP2008015997A (en) Character display unit and character display program
JPH08137385A (en) Conversation device
CA2527240A1 (en) User created interactive interface
JPH10254484A (en) Presentation support device
Fard et al. Braille-based Text Input for Multi-touch Screen Mobile Phones
Masaki et al. Prototype development of interactive tactile graphics editor with latex and participant's experience in using the editor
JP6450127B2 (en) Language training device
JP6215024B2 (en) Typing training system, typing training method, and typing training program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant