WO2011102600A2 - Learning terminal, learning content generation method, learning method, and associated recording medium - Google Patents

Learning terminal, learning content generation method, learning method, and associated recording medium

Info

Publication number
WO2011102600A2
Authority
WO
WIPO (PCT)
Prior art keywords
learning
image
input
area
terminal
Prior art date
Application number
PCT/KR2010/009220
Other languages
English (en)
Korean (ko)
Other versions
WO2011102600A3 (fr)
Inventor
김도영
Original Assignee
(주)시누스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020100015298A (KR101000463B1)
Priority claimed from KR1020100095897A (KR101060281B1)
Application filed by (주)시누스
Publication of WO2011102600A2
Publication of WO2011102600A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Definitions

  • The present invention relates to a learning terminal, a learning content generation method, a learning method, and a recording medium therefor, and more particularly to a learning content generation method for a learning terminal that outputs learning material corresponding to an item selected from among a plurality of items arranged on a single image.
  • The present invention further relates to a learning content generation method that lets a user easily generate the content he or she wants, thereby broadening the learning range and inspiring learning motivation, and to a terminal providing the same.
  • Conventionally, learning boards and picture boards have been used as a means for teaching infants numbers, letters, words, and various objects. These are printed images on which many letters, numbers, or pictures are arranged; because they are learning tools that rely only on the visual channel, they are of no help for pronunciation learning and have difficulty holding the attention of children with short attention spans.
  • Accordingly, learning-board-type electronic terminals providing a more interactive learning method have been introduced.
  • The present invention has been made to solve the conventional problems described above, and an object of the present invention is to provide a learning terminal that enhances the learning effect by outputting a voice related to the letter, number, or picture arranged in a selected area of an image on which a plurality of letters, numbers, pictures, and the like are arranged.
  • Another object of the present invention is to provide a learning content generation method that enables users to easily create the learning content provided to them by a learning terminal.
  • Still another object of the present invention is to provide a learning content generation method that enables learning content to be created from an image without being limited by the size or shape of the image on which the letters, numbers, pictures, and the like are arranged.
  • Step (d) comprises: (d1) assigning one piece of identification information to the image material; (d2) generating and storing a name for the learning content generated in step (c); and (d3) storing the correspondence between the identification information allocated in step (d1) and the name generated in step (d2).
  • The learning terminal comprises: an image display unit for displaying a digital or analog image on which a plurality of items are arranged;
  • a touch sensing unit for sensing a user's touch input applied to the image displayed on the image display unit;
  • a screen output unit for outputting text, image, or video data;
  • a voice output unit for outputting voice data; and
  • a controller configured to calculate the position of the touch input sensed by the touch sensing unit and to output the learning material associated with the item, among the plurality of items arranged on the image, that corresponds to the calculated position.
  • The terminal may receive a selection of one digital or analog image from the user, set a plurality of different areas in the selected image, receive learning material corresponding to each set area, and thereby generate learning content corresponding to the selected image.
  • Also provided is a learning file generation method for a learning terminal in which learning material corresponding to a selected item among a plurality of items arranged on one image is output.
  • The method comprises: (a) receiving, for each item in image material consisting of one image on which a plurality of items are arranged, a selection of one or more corresponding input areas from among a plurality of preset input areas; (b) receiving a designation of one or more learning material files corresponding to the one or more input areas selected for each item in step (a); and (c) generating the correspondence information between each input area and its learning material files as one learning file.
  • The learning method is a learning method using a learning terminal that outputs learning material corresponding to a selected item among a plurality of items arranged on one image, and comprises: (A) executing a selected learning file; (B) detecting a user input in any one of a plurality of preset input areas; (C) retrieving from the learning file one or more designated learning material files corresponding to the input area where the user input was detected; and (D) outputting one or more learning materials corresponding to the retrieved learning material files.
  • A computer-readable recording medium is installed in a learning terminal that outputs learning material corresponding to a selected item among a plurality of items arranged on one image, and causes the terminal to perform: (1) receiving a selection of a learning file; (2) interpreting the selected learning file and reading the learning material name corresponding to each preset input area in the learning file; (3) when a user input is detected, searching for the learning material name corresponding to the input area where the user input was detected; (4) searching the data stored in the terminal for the learning material file with the found name; and (5) playing the found learning material file.
  • As described in detail above, the following effects can be expected from the learning terminal, the learning content generation method, the learning method, and the recording medium according to the present invention.
  • First, the learning effect can be enhanced by providing a learning terminal that outputs voices related to the letters, numbers, and pictures corresponding to the selected area.
  • Second, with the learning terminal, the learning content generation method, and the recording medium according to the present invention, the learning content provided to the user through the learning terminal can be easily created by the user, enabling the user to perform a wider range of learning.
  • Third, since learning content using an image can be created without being limited by the size or shape of the image on which the letters, numbers, pictures, and the like are arranged, a wide variety of content can be utilized.
  • FIGS. 1A to 1C are perspective views illustrating a learning terminal according to an embodiment of the present invention.
  • FIG. 1D is a perspective view showing a learning terminal according to another embodiment of the present invention.
  • FIG. 2A is a block diagram schematically showing the configuration of a learning terminal according to an embodiment of the present invention.
  • FIG. 2B is a block diagram showing the configuration of a computer for generating learning content for the learning terminal according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method of generating learning content according to an exemplary embodiment of the present invention.
  • FIG. 4A is a flowchart illustrating a first embodiment of step S100 of the learning content generation method shown in FIG. 3.
  • FIG. 4B is a flowchart illustrating the first embodiment of step S100 for an analog image in more detail.
  • FIGS. 4C and 4D illustrate screens in which an area is set according to the first embodiment of step S100 for an analog image.
  • FIG. 5A is a flowchart illustrating a second embodiment of step S100 of the learning content generation method shown in FIG. 3.
  • FIGS. 5B and 5C are flowcharts illustrating the second embodiment of step S100 in more detail for digital and analog images, respectively.
  • FIG. 5D is an example screen in which an area is set according to the second embodiment of step S100 shown in FIG. 5A.
  • FIG. 6A is a flowchart illustrating a first embodiment of step S200 of the learning content generation method shown in FIG. 3.
  • FIG. 6B is an example screen in which learning material is allocated according to the first embodiment of step S200 shown in FIG. 6A.
  • FIG. 7A is a flowchart illustrating a second embodiment of step S200 of the learning content generation method shown in FIG. 3.
  • FIG. 7B is an example screen in which learning material is allocated according to the second embodiment of step S200 shown in FIG. 7A.
  • FIG. 8 is a flowchart illustrating a more specific embodiment of step S400 of the learning content generation method shown in FIG. 3.
  • FIG. 9A is a flowchart illustrating a method of generating image material in the learning content generation method according to an embodiment of the present invention.
  • FIG. 9B is an example screen in which image material and learning content are generated by the image material generation method shown in FIG. 9A.
  • FIG. 10 is a flowchart illustrating, step by step, a learning method using learning content generated by an embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating, step by step, a learning file generation method according to another embodiment of the present invention.
  • FIG. 12 is a diagram illustrating an embodiment of a learning file generated by the learning file generation method shown in FIG. 11.
  • FIG. 13 is a flowchart illustrating, step by step, a learning method according to another embodiment of the present invention.
  • 11: Voice output unit; 15: Controller
  • 220: Control unit; 230: Display unit
  • 260: TTS converter; 270: Storage unit
  • FIGS. 1A to 1C are perspective views illustrating a learning terminal according to an embodiment of the present invention.
  • The learning terminal 100 is provided with a screen output unit 10.
  • The screen output unit 10 is configured with display means such as an LCD (liquid crystal display) and outputs a screen corresponding to the operation of the terminal.
  • the learning terminal 100 is provided with a voice output unit 11.
  • the voice output unit 11 is a speaker means for outputting the stored audio data according to the operation of the terminal.
  • the audio output unit 11 may output audio data related to the image data displayed on the screen output unit 10.
  • the learning terminal 100 is provided with a touch screen 12 for receiving a command or data from the user.
  • The touch screen 12 is formed by providing a touch panel on the front of display means such as an LCD.
  • Various screens may be displayed on the touch screen 12, and the user may input data or commands by touching one side of the screen with the stylus pen 13 or the like.
  • the touch screen 12 may display an image in which a plurality of letters, numbers, or pictures are arranged.
  • an image in which alphabets are arranged may be displayed as shown in the figure.
  • The touch screen 12 is partitioned into a plurality of preset input areas, each of which has a predetermined size and position.
  • the learning terminal 100 displays image data I on which the plurality of letters, numbers, or pictures are arranged for the user's learning on the touch screen 12.
  • each item of the image material I is arranged at a position corresponding to at least one of the plurality of input areas of the touch screen 12.
  • the image data I displayed through the touch screen 12 corresponds to a digital image stored in the learning terminal 100.
  • When the user selects an item, the learning materials related to the selected letter, number, or picture are output through the screen output unit 10 or the voice output unit 11.
  • For example, when the letter A is selected, the voice output unit 11 may output 'ei', the pronunciation of the letter A, or the pronunciation of an English word starting with the letter A.
  • the screen output unit 10 may also output words or pictures related to the pronunciation output from the voice output unit 11.
  • the user can learn words, letters, numbers, pictures, multiplication tables, etc. arranged in the image material I through the learning terminal 100.
  • The learning terminal 100 may be implemented in various embodiments other than the one shown in FIG. 1A. As shown in FIG. 1B, another embodiment of the learning terminal 100 also includes the screen output unit 10 and the voice output unit 11, and the image material I is displayed at the bottom; in this embodiment, the image material I is provided as an analog image. That is, the image material I may be a printed matter or an image drawn by the user. Therefore, the learning terminal 100 may further include mechanical means for fixing or supporting such an analog image.
  • the learning terminal 100 illustrated in FIG. 1B includes a wireless signal transmitting means 23 as a means for detecting a user's touch input on a specific region of the image material I composed of an analog image.
  • the wireless signal transmission means 23 is provided with a contact recognition unit for sensing pressure at one end thereof, and a wireless signal transmission unit for transmitting infrared signals and ultrasonic signals when pressure is detected at the contact recognition unit. Accordingly, when the user touches a specific portion on the image material I using the wireless signal transmitting means 23, the infrared signal and the ultrasonic signal are transmitted by the wireless signal transmitting means 23.
  • the two ultrasonic receiving sensors S1 and the infrared receiving sensor S2 arranged at predetermined intervals on the learning terminal 100 detect the signal transmitted from the wireless signal transmitting means 23.
  • The learning terminal 100 uses the time elapsed between the reception of the infrared signal at the infrared receiving sensor S2 and the reception of the ultrasonic signal at each of the two ultrasonic receiving sensors S1 to calculate the respective distances between the position from which the wireless signal transmitting means 23 sent the signals and the two ultrasonic receiving sensors S1.
  • That is, the distance between the wireless signal transmitting means 23 and each ultrasonic receiving sensor S1 is calculated from the time taken for the ultrasonic wave to reach the two ultrasonic receiving sensors S1. Using the two calculated distance values and the known distance between the two ultrasonic receiving sensors S1, the coordinates of the point where the wireless signal transmitting means 23 touched the image material I can be calculated. Accordingly, the learning terminal 100 knows the coordinates of the position the user touched on the image material I, and can output voice data, text, images, and the like corresponding to the item, such as a letter, number, or picture, arranged at those coordinates through the voice output unit 11 and the screen output unit 10.
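As a rough sketch of this distance-and-coordinate calculation (the variable names, sensor geometry, and speed-of-sound constant are illustrative assumptions, not taken from the patent), the two sensor-to-pen distances can be intersected as two circles:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (approximate)

def touch_position(dt1, dt2, sensor_gap):
    """Estimate the pen position from the ultrasonic arrival delays.

    dt1, dt2   -- seconds between the (near-instantaneous) infrared signal
                  and the ultrasonic signal arriving at each sensor S1
    sensor_gap -- known distance between the two ultrasonic sensors, in m
    """
    r1 = SPEED_OF_SOUND * dt1  # distance from pen to first sensor
    r2 = SPEED_OF_SOUND * dt2  # distance from pen to second sensor
    # Sensors at (0, 0) and (sensor_gap, 0); intersect the circles
    # x^2 + y^2 = r1^2 and (x - sensor_gap)^2 + y^2 = r2^2.
    x = (r1 ** 2 - r2 ** 2 + sensor_gap ** 2) / (2 * sensor_gap)
    y_squared = r1 ** 2 - x ** 2
    if y_squared < 0:
        raise ValueError("inconsistent distance measurements")
    return x, math.sqrt(y_squared)  # pen assumed on the image side (y > 0)

# e.g. delays of 1.0 ms and 1.2 ms with the sensors 30 cm apart
print(touch_position(0.001, 0.0012, 0.30))
```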
  • In this case, since the size and shape of the image material I on which the letters, numbers, and pictures are arranged are less constrained, the user may draw the image material directly, or use printed matter he or she already has to create new image material.
  • the learning terminal 100 may be implemented in an embodiment as shown in FIG. 1C.
  • the learning terminal 100 includes a screen output unit 10 and a voice output unit 11.
  • the learning terminal 100 is provided with a touch panel 32 on the front to receive a touch input by a user's hand 33 or a stylus pen.
  • the touch panel 32 is formed to be transparent so that the image material I to be installed on the rear of the touch panel 32 can be seen from the front of the touch panel 32.
  • An insertion slot 35 may be provided to enable an analog image such as a printed matter to be displayed on the learning terminal 100 as image material I.
  • When the image material I is installed in the learning terminal 100 through the insertion slot 35, the image material I is positioned on the rear surface of the touch panel 32.
  • The configuration of the insertion slot 35 is only one embodiment; the learning terminal 100 may have any structure by which the image material I can be fixed to the rear surface of the touch panel 32.
  • In the embodiment of FIG. 1D as well, the learning terminal 100 is provided with a screen output unit 10 and a voice output unit 11.
  • Here, the learning terminal 100 includes a panel in which a plurality of switches S, which are energized when pressure is applied by the user's hand or other means, are arranged in matrix form. The image material I can be fixed to the front of this panel.
  • The plurality of switches S may comprise, for example, two film layers coated with a conductive metal, or two conductive metal films, and may further include an electronic board (PCB) in the form of a key matrix. In this case, when pressure is applied to a specific region of the image material I on the panel, the two film layers make contact at that point and current flows, which is how the press is detected.
  • Each switch S corresponds to one of the plurality of predetermined input regions on the two film layers coated with the conductive metal.
  • the learning materials corresponding to the input area corresponding to the portion to which the pressure is applied are output from the screen output unit 10 or the voice output unit 11.
  • For example, when the input area corresponding to the letter A is pressed, a word starting with the letter A and the pronunciation of that word may be output from the screen output unit 10 and the voice output unit 11.
  • the image material I composed of a digital image or an analog image may be directly made by a user.
  • For example, a digital image may be produced by compositing a plurality of image files, and an analog image may be freely created by drawing it oneself or by copying and pasting other images.
  • In other words, the image material I used with the learning terminal 100 may be freely made by the user; regardless of the size of the image material I, its contents, or the positions of the items included in it, appropriate corresponding learning materials may be provided in the learning terminal 100.
  • FIG. 2A is a block diagram schematically illustrating a configuration of a learning terminal according to an embodiment of the present invention.
  • The learning terminal 100 includes a controller 15 for overall control of the terminal and, as shown in FIGS. 1A to 1D, the screen output unit 10 and the voice output unit 11. Through the screen output unit 10 and/or the voice output unit 11, the learning materials to be studied by the user are output as images, text, and audio.
  • the learning terminal 100 includes an image display unit 16.
  • the image display unit 16 is a means for displaying analog or digital image data in which a plurality of letters, numbers, pictures, and the like are arranged on the learning terminal 100.
  • The image display unit 16 may be an electronic means such as the touch screen of FIG. 1A, or may be configured simply as a means for fixing or supporting an analog image, as in the embodiments described with reference to FIGS. 1B and 1C.
  • Alternatively, as in FIG. 1D, it may be a means that fixes the analog image and at the same time senses user input through matrix-type input areas.
  • the learning terminal 100 includes a touch sensing unit 17.
  • The touch sensing unit 17 is a means for detecting the user's selection on the image material I, and may be provided by various means as described with reference to FIGS. 1A to 1D. That is, it may be a touch screen that outputs a digital image as shown in FIG. 1A, or it may include wireless signal transmitting means for transmitting radio signals on an analog image as shown in FIG. 1B together with sensor means for receiving the signals transmitted by the wireless signal transmitting means. Alternatively, a touch panel may be provided over the analog image as shown in FIG. 1C, or switches using conductive metal films provided on the rear surface of the analog image may be used, as shown in FIG. 1D.
  • the controller 15 is set to divide the entire area in which the touch sensing unit 17 can detect a user input into a plurality of input areas in a predetermined matrix form. Each of the plurality of input areas is preset in size and position. Each input area is assigned a unique identifier to distinguish it, so that the controller 15 can distinguish each input area.
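For illustration only (the five-by-four grid and pixel dimensions are assumptions; the patent leaves the matrix layout open), a controller could map a raw coordinate to a preset input-area identifier like this:

```python
ROWS, COLS = 5, 4             # preset matrix of input areas (assumed)
WIDTH, HEIGHT = 480.0, 600.0  # extent of the touch-sensable area (assumed)

def input_area_id(x, y):
    """Return the unique identifier of the preset input area containing (x, y)."""
    col = min(int(x / (WIDTH / COLS)), COLS - 1)
    row = min(int(y / (HEIGHT / ROWS)), ROWS - 1)
    return row * COLS + col  # identifiers 0..19, row-major

print(input_area_id(100, 50))  # -> 0, the top-left input area
```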
  • the input area is designated as a default when the learning terminal 100 is manufactured, and the user may reset the input area by changing a position or size as necessary.
  • the controller 15 may sense the touch input using the touch sensing unit 17.
  • The controller 15 then determines which of the plurality of input areas the position of the touch input sensed through the touch sensing unit 17 falls within.
  • the learning terminal 100 is provided with a storage unit 18.
  • The storage unit 18 stores learning materials. That is, when one of the plurality of items such as letters, numbers, and pictures arranged on the image material I displayed on the image display unit 16 is selected, and the controller 15 detects the selection through the touch sensing unit 17, the controller 15 outputs the learning material corresponding to the selected item, from among the learning materials stored in the storage unit 18, through the screen output unit 10 or the voice output unit 11. The storage unit 18 therefore stores the learning materials, such as audio, video, text, and images, to be output to the screen output unit 10 or the voice output unit 11.
  • the learning material may be an image, video, audio, text file, etc. for the items arranged in the image material (I).
  • various learning materials may be provided to be downloaded by using the website, and the learning materials generated by the users are uploaded to the website. This can facilitate the sharing of learning materials among users.
  • the learning terminal 100 may be connected to a network directly or through a device such as a computer to receive learning materials. Users can upload their own audio, image, video files, etc. to the website via the network.
  • a predetermined fee may be paid to the user who uploaded the learning materials.
  • the learning terminal 100 may be provided with a connection unit 19.
  • the connection unit 19 is a communication interface means for allowing the learning terminal 100 to be connected to a computer or another device. This may be provided for downloading data such as learning material from a computer or other device.
  • connection unit 19 may be connected to an external storage medium that can be selectively connected to the learning terminal 100.
  • The user may connect an external storage medium storing a desired learning material file or a learning file (described below) to the learning terminal 100, so that the learning material file or learning file stored in the external storage medium can be used in the learning terminal 100.
  • FIG. 2B is a block diagram showing the configuration of a computer for generating learning content for the learning terminal according to an embodiment of the present invention.
  • the learning terminal 100 is connected to the interface unit 210 of the computer 200 through the connection unit 19.
  • the computer 200 is an information processing device such as a general personal computer.
  • The computer 200 includes a control unit 220 for overall control of the computer 200 according to an operating system, a display unit 230 for outputting the results of the computer's operation, and an input unit 240 through which the user inputs commands or data to the computer 200.
  • the display unit 230 may be an LCD monitor or the like, and the input unit 240 may be a conventional input means such as a keyboard or a mouse.
  • the voice input unit 250 may be provided in the computer 200.
  • the voice input unit 250 is a conventional microphone means, and detects a sound to be input by the user and converts it into a digital audio signal.
  • the computer 200 may be provided with a text-to-speech (TTS) converter 260. When the user inputs text through the input unit 240, the TTS converter 260 converts the text into an audio signal corresponding to the pronunciation of the text.
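As a sketch of this step only (the patent does not name a TTS engine; the third-party pyttsx3 package is used here purely as a stand-in):

```python
import pyttsx3  # third-party offline TTS package, used as a stand-in

def text_to_audio_file(text, path):
    """Convert input text to a speech audio file."""
    engine = pyttsx3.init()
    engine.save_to_file(text, path)  # queue the text-to-speech conversion
    engine.runAndWait()              # process the queued command

text_to_audio_file("Apple", "apple.wav")
```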
  • the computer 200 is provided with a storage unit 270.
  • the storage unit 270 stores the audio signal generated by the voice input unit 250 or the TTS converter 260, and stores data generated by a method of generating learning content, which will be described later.
  • Using the computer 200, a user may thus generate learning content corresponding to image material I on which a plurality of items are arranged, using learning materials including audio, text, or video. The generated content may be downloaded to the learning terminal 100 through the interface unit 210.
  • a method of generating learning content for enabling a user to perform learning based on image data (I) having a plurality of items arranged using the learning terminal 100 as described above will be described. That is, a method of generating learning content provided through the learning terminal 100 will be described.
  • Such learning content may be generated directly on the learning terminal 100, or generated using the computer 200 and then downloaded to the learning terminal 100.
  • Referring to FIG. 3, the method starts from step S100 of setting, on the image material I, an area corresponding to each of the plurality of items arranged on a specific image material I.
  • Step S100 is performed by setting the area where each item on the image material I is arranged as a separate area; the two-dimensional coordinate area corresponding to the position where each item is arranged on the image material I is stored as the area corresponding to that item.
  • For example, for image material I with 100 x 100 coordinates, the coordinate region from (0, 0) to (50, 100) is set as the area corresponding to one item and the region from (50, 0) to (100, 100) as the area corresponding to the other item, so that a coordinate area is set for each item.
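A minimal sketch of this coordinate-region lookup (the region table and item names are illustrative, not the patent's storage format):

```python
# (x_min, y_min, x_max, y_max) per item, matching the 100 x 100 example above
regions = {
    "item_1": (0, 0, 50, 100),
    "item_2": (50, 0, 100, 100),
}

def item_at(x, y):
    """Return the item whose coordinate region contains the point (x, y)."""
    for item, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return item
    return None

print(item_at(25, 40))  # -> "item_1"
```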
  • Step S100 may be performed differently depending on whether the image material I is an analog image or a digital image, and may also be performed differently depending on how the items are arranged on the image material I. This will be described in more detail later with reference to the other drawings.
  • the step (S200) of mapping the learning material related to the item corresponding to each set area to each area is performed.
  • Step S200 is performed by designating, for each area, the learning material related to the item corresponding to that area.
  • Here, the learning material may be audio material such as voice or music and/or visual material such as text or images.
  • For example, as shown in FIG. 1A, a coordinate region is set for each of the plurality of alphabet items arranged on the image material I, and learning material is designated to be output when the user selects a coordinate region; as shown in the drawing, the English word 'Apple', which starts with the letter A, may be assigned as the learning material for the coordinate region corresponding to the letter A.
  • In the same manner, learning materials associated with each letter are assigned to each of the coordinate regions corresponding to the remaining letters.
  • The audio materials may be materials stored in advance in the learning terminal 100 or the computer 200, or may be newly created by the user through recording or TTS conversion. As illustrated in FIG. 2B, the computer 200 may be provided with a voice input unit 250 and/or a TTS converter 260 to generate new audio material. If the learning terminal 100 also includes means such as a voice input unit or a TTS converter, audio material may be allocated to the area corresponding to each item directly on the learning terminal 100.
  • the video learning material may also be materials stored in advance in the learning terminal 100 or the computer 200, or may be a digital image newly created by the user or downloaded through a network such as the Internet.
  • text directly input by a user may be assigned to each area as a learning material.
  • The learning content is thus a collection of data that includes the ranges of the coordinate areas corresponding to the items arranged on the image material I, the audio and/or visual learning materials corresponding to each area, and the correspondence between each area and its learning material.
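For concreteness, a sketch of the data such learning content bundles together (the field names and the alphabet example are assumptions; the patent does not prescribe a file format):

```python
learning_content = {
    "image_id": "alphabet_board",  # identifies the image material I
    "regions": {                   # coordinate-area range per item
        "A": (0, 0, 50, 100),
        "B": (50, 0, 100, 100),
    },
    "materials": {                 # learning material per item
        "A": {"audio": "apple.wav", "text": "Apple"},
        "B": {"audio": "banana.wav", "text": "Banana"},
    },
}
```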
  • The generated learning content is stored in the learning terminal 100 or the computer 200 (S400), and the image material I to which the stored learning content corresponds is recorded with it.
  • When the image material I is a digital image, the image material I itself may be stored within the learning content; when the image material I is an analog image, an indication of which image material I the stored learning content corresponds to is recorded.
  • Step S400 will also be described in more detail later.
  • FIG. 4A is a flowchart illustrating a first embodiment of step S100 of the learning content generation method illustrated in FIG. 3, and FIG. 4B illustrates the first embodiment of step S100 for an analog image in more detail.
  • FIGS. 4C and 4D are diagrams illustrating screens when an area is set according to the first embodiment of step S100 for an analog image.
  • FIG. 5A is a flowchart illustrating a second embodiment of step S100 of the learning content generation method illustrated in FIG. 3, FIGS. 5B and 5C are flowcharts illustrating the second embodiment of step S100 in more detail for digital and analog images, respectively, and FIG. 5D is an example screen in which an area is set according to the second embodiment of step S100 shown in FIG. 5A.
  • step S100 of FIG. 3 may be performed in various embodiments.
  • the first embodiment of the method starts from the step of providing a plurality of predetermined region division schemes as shown in FIG. 4A (S101). That is, the controller 15 of the learning terminal 100 or the controller 220 of the computer 200 may provide any one of a plurality of region division schemes set in advance.
  • The controller 15 or the control unit 220 provides the division schemes to the user, and the user selects one of them.
  • For example, if the image material I used to generate the learning content consists of items arranged in regions divided three across and three down, the user selects the scheme that divides the area into three sections horizontally and three sections vertically.
  • the region division scheme has been described with only two relatively simple schemes provided, but in practice, various region division schemes may be provided so that each region may be set in a wide variety of ways.
  • each divided area according to the selected method is set as an area corresponding to each item (S103).
  • For example, if a scheme dividing the image two across and two down is selected, the coordinates of the image material I are set to be divided into four groups.
  • step S100 described with reference to FIG. 4A may be implemented both when the image data I is a digital image and when it is an analog image.
  • the first embodiment is performed through more specific steps as shown in FIG. 4B.
  • a plurality of predetermined area division schemes are provided (S111), and the user selects one area division scheme corresponding to the arrangement of items of the analog image (S112).
  • For example, as shown in FIG. 4C, the user selects, from among the plurality of region division schemes provided in step S111, the scheme whose shape corresponds to the analog image, e.g., five horizontal divisions and five vertical divisions.
  • the user may input the horizontal and vertical lengths of the analog image to be used for generating the learning content (S113).
  • step S113 the user inputs the horizontal and vertical lengths of the actual analog image, so that the learning terminal 100 or the computer 200 can generate the entire coordinate region corresponding to the actual image.
  • Next, the area division scheme selected in operation S112 is applied to the entire area corresponding to the analog image size input by the user, and the size and position of each divided area within the entire area are calculated. For example, when learning content is generated using a 50 cm x 50 cm analog image as the image material I as shown in FIG. 4C, twenty-five areas of 10 cm by 10 cm are calculated as the areas corresponding to the items.
  • the divided regions are set as regions corresponding to each item arranged in the image data according to the size and position information of each region calculated as described above (S115).
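A sketch of this size-and-position calculation, following the 50 cm x 50 cm, five-by-five example above (function and variable names are illustrative):

```python
def divide_regions(width_cm, height_cm, cols, rows):
    """Return one (x0, y0, x1, y1) rectangle, in cm, per divided region."""
    cell_w, cell_h = width_cm / cols, height_cm / rows
    return [
        (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
        for r in range(rows)
        for c in range(cols)
    ]

regions = divide_regions(50, 50, 5, 5)
print(len(regions), regions[0])  # -> 25 (0.0, 0.0, 10.0, 10.0)
```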
  • Meanwhile, instead of providing a plurality of predetermined region division schemes and having the user select one of them, the user may directly input the numbers of horizontal and vertical divisions of the image material, as shown in FIG. 4D. The rectangular grid divided according to the numbers the user entered may then be displayed as shown in FIG. 4C.
  • Meanwhile, the second embodiment of step S100 starts at step S151, in which a digital sample image of the image material I corresponding to the learning content the user wants to create is displayed on the display unit 230 of the computer 200 or the screen output unit 10 of the learning terminal 100 (S151).
  • When the image material I is a digital image, the digital sample image may be that digital image itself, or a reduced version of it.
  • When the image material I is an analog image, the digital sample image refers to the analog image scanned or photographed and converted into digital data; that is, a digital image having the same appearance as the analog image to be used as the image material I serves as the digital sample image.
  • When the digital sample image is displayed in step S151, the user visually checks the plurality of items shown in the digital sample image and selects, by dragging, an area corresponding to each item according to the arrangement of the items (S152).
  • each area selected by the drag input is set as an area corresponding to each item (S153).
  • As described above, step S100 may be performed differently depending on whether the image material I is a digital image or an analog image. If the image material I is a digital image, step S161 of selecting one of the pre-stored digital images is performed.
  • When the learning terminal 100 provides the learning content generation method, one of the digital images stored in the learning terminal 100 may be selected and the corresponding learning content generated on the terminal; when the computer 200 provides the learning content generation method, one of the digital images stored in the computer 200 may be selected, the corresponding learning content generated, and the result downloaded to the learning terminal 100.
  • Next, a sample image of the selected digital image is displayed on the display unit 230 of the computer 200 or the screen output unit 10 of the learning terminal 100 (S162).
  • Here, the sample image may be the selected digital image as it is, or an enlarged or reduced version of it.
  • The user then selects a partial area of the displayed sample image with a drag input.
  • the user can appropriately select a region in which any one item among the plurality of items arranged on one digital image selected as the image material I is arranged (S163).
  • each area corresponding to each item can be freely selected.
  • The controller 15 of the learning terminal 100 or the control unit 220 of the computer 200 calculates and stores the size and position of the selected region (S164). Accordingly, each region selected by the user through the drag input is set as a region corresponding to an item (S165).
  • Meanwhile, when an analog image is used as the image material I, a digital image sample of the analog image is generated, and a step (S172) in which the learning terminal 100 or the computer 200 stores and displays it is performed.
  • The digital image sample for the analog image may be generated by photographing the analog image with a digital camera or scanning it, and cutting away any unnecessary portion of the resulting digital image that does not correspond to the analog image.
  • Next, the size of the analog image corresponding to the digital image sample displayed in step S172 is received from the user (S173). That is, the horizontal and vertical lengths of the analog image are input, so that the computer 200 or the learning terminal 100 knows the total area of the analog image.
  • A drag input on a specific area of the digital image sample displayed in step S172 is then received from the user, selecting a partial region of the digital image sample (S174).
  • The size and position information, on the actual analog image, of the partial region of the digital image sample selected in operation S174 is then calculated and stored. That is, the size and position of the selected area are calculated using the horizontal and vertical lengths of the actual analog image input in step S173. Since the width-to-height ratio of the digital image sample is the same as, or very close to, that of the actual analog image, selecting a region of the digital image sample allows the size and position of the corresponding region on the actual analog image to be calculated.
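A sketch of this proportional mapping (the names and the pixel sizes are illustrative assumptions):

```python
def to_analog_coords(sample_rect, sample_size_px, analog_size_cm):
    """Map an (x0, y0, x1, y1) pixel rectangle on the digital image sample
    to the corresponding rectangle, in cm, on the actual analog image."""
    sx = analog_size_cm[0] / sample_size_px[0]  # cm per pixel, horizontal
    sy = analog_size_cm[1] / sample_size_px[1]  # cm per pixel, vertical
    x0, y0, x1, y1 = sample_rect
    return (x0 * sx, y0 * sy, x1 * sx, y1 * sy)

# a 200 x 200 pixel sample of a 50 cm x 50 cm analog image
print(to_analog_coords((40, 40, 80, 80), (200, 200), (50.0, 50.0)))
# -> (10.0, 10.0, 20.0, 20.0)
```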
  • each stored area is set as an area corresponding to each item arranged in the analog image (S176).
  • (When the image material I is a digital image, step S173 of receiving the size of the image need not be performed.)
  • An example of performing step S100 according to the second embodiment described above will now be described with reference to FIG. 5D.
  • On the display unit 230 of the computer 200 or the screen output unit 10 of the learning terminal 100, a digital image sample of the image material I is displayed. The user then selects a specific area of the displayed digital image sample by dragging or touch-dragging. As shown in the drawing, even on a digital image sample in which several items are irregularly arranged, only the area corresponding to a specific item can be selectively set.
  • a user interface for inputting the horizontal and vertical lengths of the image may be provided together so that the user may input the size of the actual analog image.
  • The area selected by the user is set as an area corresponding to an item on the image material I, and learning material such as voice or text corresponding to it is input. That is, the user may repeatedly select a specific area and input the corresponding learning material.
  • FIGS. 6A and 7A are flowcharts illustrating, step by step, a first and a second embodiment, respectively, of step S200 of the learning content generation method illustrated in FIG. 3, and FIGS. 6B and 7B are example screens in which learning material is allocated according to the first and second embodiments of step S200 shown in FIGS. 6A and 7A, respectively.
  • Referring to FIG. 6A, the process of allocating learning material to each item corresponding to the plurality of areas set on the image material I includes selecting one area (S210) and then either inputting audio data associated with the item corresponding to the selected area or selecting one of the stored voice data files to correspond to the selected area (S220). In step S220, voice data may also be entered as text, but only when the learning terminal 100 or the computer 200 provides a TTS conversion function.
  • In step S230, text data related to the item corresponding to the selected area may be input, so that the related text can be displayed by the learning terminal 100 when the item is selected.
  • Both the step S220 and the step S230 may be performed, or only one of them may be selectively performed.
  • the learning material may be image data or video data, in addition to audio data or text data, and each of these data may be selectively input by the user.
  • As shown in FIG. 6B, the plurality of areas set on the image material I is displayed on the display unit 230 of the computer 200 or, when the learning content is generated on the learning terminal 100, on the screen output unit 10.
  • a plurality of regions divided according to a plurality of predetermined region division schemes are schematically displayed, or each region determined by a drag input by the user is displayed on the digital image sample.
  • a user interface for inputting or selecting learning materials related to an item corresponding to the corresponding area is provided.
  • Voice data corresponding to the selected area may be input by directly recording voice data, by selecting prestored voice data, or by entering text and generating voice data through TTS conversion.
  • text data related to an item corresponding to the selected area may be input.
  • For example, for the area corresponding to the Korean consonant 'ㅅ', voice data containing the pronunciation of 'ㅅ' or of a word such as 'lion' (사자), whose initial consonant is 'ㅅ', may be allocated to the area as voice learning material.
  • Likewise, 'ㅅ' or 'lion' may be assigned as text data, and an image related to a lion may be assigned to the area as learning material.
  • step S200 as shown in FIG. 7A also starts from step S250 in which one area of the set area is selected.
  • the user may input first learning data corresponding to the first input with respect to the area selected in operation S250.
  • the first input means any one of a plurality of input methods.
  • The learning terminal 100 detects a user's touch input and outputs the learning material related to the item corresponding to the area where the touch input was detected. Various touch inputs can be distinguished within a specific area: a single short touch, a single long touch held beyond a predetermined time, or two touches within a predetermined time. In addition, a touch input made in a specific direction within an area, or one tracing a specific shape, may be detected.
  • the controller 15 of the learning terminal 100 may recognize a pattern of a signal input from the touch sensing unit 17 and recognize a touch input as a different command according to the input pattern. Accordingly, the controller 15 may be preprogrammed to recognize the first input, the second input, and the like of the predetermined pattern as different commands.
  • the first input may be a touch input corresponding to one predetermined input pattern among various touch input patterns, and may be an operation of touching a specific region once within a predetermined time.
  • the learning data to be output when the first input is detected in the specific region selected in step S250 is input in step S260.
  • the learning material may be a voice material or may be a material such as an image, text, or video.
  • another learning material corresponding to the second input is input.
  • the second input may also be preset as a touch input corresponding to any one input pattern among various touch input patterns as one touch input having a pattern different from the first input.
  • the learning data input corresponding to the second input is another learning material which is distinguished from the first learning data input corresponding to the first input.
  • For example, if the first learning material is voice material, the second learning material may be text material. In this way, step S200 is performed by designating, for each area, the learning materials corresponding to both the first input and the second input.
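A sketch of this per-pattern assignment (the pattern names and table layout are assumptions; the patent only requires that distinct input patterns map to distinct materials):

```python
materials = {
    # (input area id, touch input pattern) -> learning material file
    (0, "short_touch"):  "siot_sound.wav",   # first input
    (0, "long_touch"):   "siot_text.txt",    # second input
    (0, "double_touch"): "lion_image.png",
}

def on_touch(area_id, pattern):
    """Look up and output the learning material for a detected touch."""
    material = materials.get((area_id, pattern))
    if material is not None:
        print("output:", material)  # stand-in for the screen/voice output units

on_touch(0, "short_touch")  # -> output: siot_sound.wav
```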
  • In this embodiment as well, the plurality of set areas is displayed as shown in FIG. 7B, and the user can select any one of them.
  • When the plurality of areas is displayed, only the relative positions and sizes of the areas may be shown schematically, as in FIG. 6B, or each divided area may be displayed over a digital image sample of the image material I, as shown in FIG. 7B.
  • When an area is selected, the learning material related to the item 'ㅅ' corresponding to the selected area is received.
  • At this time, learning materials are input separately according to the input method. That is, the voice data 'siot' and the text data 'siot' may be allocated as the learning material to be output when the area is touched briefly, and the learning material to be output when the area is given one long touch may be allocated separately.
  • Furthermore, the learning material to be output when the user touches the area twice in quick succession may be allocated separately. When learning materials have been allocated to all of the plurality of areas in this way, the learning content can be generated by storing them.
  • FIG. 8 is a flowchart illustrating a more specific embodiment of step S400 of the learning content generation method illustrated in FIG. 3. Referring to FIG. 8, the process of recording and storing the correspondence with the image material I in the learning content generation method illustrated in FIG. 3 will be described.
  • The learning content generated by the learning content generation method is based on the image material I. That is, to generate learning content that outputs the learning material corresponding to the item a user selects from among the plurality of items arranged on the image material I displayed on the terminal 100, a specific image material I must first be selected. Each piece of generated learning content therefore corresponds to one image material I, and for each piece of generated learning content, the correspondence with its image material I is recorded.
  • identification information is allocated to the image material I corresponding to the learning content to be generated. For example, when the image data I corresponds to a digital image, step S410 is performed by allocating a file name already assigned to the digital image as identification information or allocating a new file name.
  • identification information including at least one of a number, a letter, and a pattern is assigned to the analog image.
  • an identification number may be assigned to the analog image, or an identification pattern that may be displayed at a specific position of the analog image may be assigned.
  • In this case, pattern recognition means for recognizing the identification pattern is separately provided on the learning terminal 100, and when the analog image is installed in the learning terminal 100, the identification pattern lies at a position corresponding to that of the pattern recognition means, so that the pattern recognition means can read the identification pattern.
  • The identification pattern may be generated by coloring some of a plurality of squares drawn at predetermined positions black and the others white. With four squares, 2 to the power of 4, that is, 16 different cases can be represented, from all four squares white to all four squares black.
  • Alternatively, an image of the identification pattern may be output on the screen, so that the user can copy the same pattern as the one shown on the screen by hand into a specific area of the analog image.
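The four-square pattern is effectively a 4-bit code; a sketch follows (the bit ordering is an assumption, only the 2 to the power of 4, i.e. 16-case count comes from the text above):

```python
def pattern_from_id(n):
    """Encode an identifier 0..15 as four squares (1 = black, 0 = white)."""
    assert 0 <= n < 16
    return [(n >> i) & 1 for i in range(4)]

def id_from_pattern(squares):
    """Decode four black/white squares back into the identifier."""
    return sum(bit << i for i, bit in enumerate(squares))

pattern = pattern_from_id(11)
print(pattern, id_from_pattern(pattern))  # -> [1, 1, 0, 1] 11
```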
  • the learning content is generated and stored under the determined name (S430).
  • The name of the learning content is input by the user or determined automatically; for example, it may be a name associated with the identification information assigned to the image material I.
  • the learning content includes a location of each area set on the image material I, learning materials corresponding to each area, and a corresponding relationship between each area and the learning materials. That is, area information corresponding to each item arranged in the image data I, learning materials such as voice, text, image, and video, and their corresponding relations are stored together.
  • Then, the correspondence between the image material I and the learning content is stored (S440). That is, which image material I a given piece of learning content is based on is stored with the learning content. For this purpose, the correspondence between the identification information assigned to the image material I and the name of the learning content is stored.
  • the step S440 is not necessarily a step to be performed. For example, if the identification information allocated to the image material I is used as the name of the learning content corresponding thereto, there is no need to store a separate correspondence relationship.
  • As described above, when the image material I is a digital image, the image material I itself may be stored within the learning content, without separate identification information.
  • FIG. 9A is a flowchart illustrating a method of generating image data in a method of generating learning content according to an embodiment of the present invention.
  • FIG. 9B is an example screen in which image material and learning content are generated by the image material generation method shown in FIG. 9A.
  • The image material generation method shown in FIG. 9A applies only to the case where a digital image is used as the image material I. Moreover, it is only one embodiment of an image material generation method that may additionally be performed within an embodiment of the present invention; image material may also be generated by other methods.
  • an embodiment for generating image data starts with the step of providing a plurality of predetermined region division schemes (S500).
  • various region division schemes predetermined in the learning terminal 100 or the computer 200 are provided to the user, and any one of the provided schemes is selected by the user (S510).
  • Each divided area is displayed on the screen according to the selected method, and the user selects one area on the displayed screen to select an image of an item to be displayed on the corresponding area (S520).
  • For example, as shown in FIG. 9B, when the user selects a four-across by four-down division scheme from among the plurality of predetermined region division schemes, an image of the divided regions is shown on one side of the screen, and the user may select the sixteen divided areas one by one and choose the image of the item to be displayed in each.
  • In this case, each region where an item is arranged may be taken as the region that would otherwise be set by performing step S100 of FIG. 3.
  • Learning materials corresponding to each area can also be matched at the same time, so that the steps of image generation, area setting, and learning material allocation can be performed in parallel.
  • The embodiments of the learning content generation method according to the present invention described above may be performed on the learning terminal 100 or the computer 200; for this purpose, the learning terminal 100 or the computer 200 includes display means for screen display, input means for receiving commands or data from the user, and the like.
  • To this end, a learning content generation program is stored in the learning terminal 100 or the computer 200, and the controller 15 or the control unit 220 executes it.
  • When the learning content is generated on the computer 200 and the computer 200 is provided with a communication module, the user can access the Internet through it, with the added advantage that the various image and audio materials needed for generating the learning content can be obtained easily.
  • Furthermore, an Internet server for sharing the learning content used in the learning terminal 100 may be operated, on which multiple users upload their self-produced learning content and download one another's, so that each user can make use of a wide variety of learning content.
  • the internet server may provide various items for generating learning content.
  • FIG. 10 is a flowchart illustrating, in stages, a learning method using learning content generated by an embodiment of the present invention.
  • The learning method begins with a specific image material I being selected by the user and displayed (S900).
  • the user may select one of the one or more image materials I stored in the learning terminal 100, and the selected image material I may be displayed.
  • When the image material I is an analog image, the user may select a specific analog image and install it on the learning terminal 100.
  • In this case, since each item's set area stored in the learning terminal 100 must correspond to the actual area on the analog image, the analog image is installed according to a specific reference point of the learning terminal 100.
  • For example, the terminal shown in FIG. 1B may display a reference point between the two ultrasonic receiving sensors S1 so that the user can align the center of the upper edge of the analog image with it.
Then, the controller 15 of the learning terminal 100 searches the stored learning content for the learning content corresponding to the image material I (S910). When the image material is a digital image, the controller 15 may recognize the learning content corresponding to the selected digital image directly; alternatively, the corresponding learning content may be selected by the user, or identification information displayed in a specific area of the image may be read and the learning content corresponding to it searched for.
The terminal then waits for the user's touch input (S920). When a user's touch input to a specific area of the image material I is sensed (S930), the learning material related to the item corresponding to that area is searched for (S940). That is, since the learning content includes both the plurality of learning materials and the correspondence between each area and its learning material, the learning material corresponding to the area where the touch input was sensed can be found.
In this case, the controller 15 may calculate not only the position of the user's touch input but also its pattern, and output the learning material corresponding to that touch input pattern.
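A minimal sketch of this lookup, assuming a simple in-memory layout for the loaded learning content (the dictionary structure is illustrative, not the patent's storage format):

```python
# Sketch of steps S920-S940: wait for a touch, find the set area containing
# it, and look up that area's learning material in the loaded content.
# The in-memory layout of 'content' is an assumption for illustration.
content = {
    "areas": [((0, 0, 100, 100), "cat"), ((100, 0, 100, 100), "dog")],
    "materials": {"cat": "cat.mp3", "dog": "dog.mp3"},
}

def on_touch(x, y, content):
    """Return the learning material for the touched area, or None."""
    for (ax, ay, w, h), item in content["areas"]:
        if ax <= x < ax + w and ay <= y < ay + h:
            return content["materials"][item]
    return None

print(on_touch(120, 40, content))  # -> "dog.mp3"
```

In the same spirit, the sensed touch input pattern (for example, a single tap versus a double tap) could serve as an additional key when several materials are matched to one area; that extension is not shown here.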
FIG. 11 is a flowchart illustrating a method of generating a learning file according to another embodiment of the present invention, and FIG. 12 is a diagram illustrating an example of a learning file generated by the method of FIG. 11.
Here, the learning file is a data file containing, for each of a plurality of preset input areas among all the areas in which the touch sensing unit 17 can detect a user input, a mapping to the command to be executed when a user input is detected in that area. That is, the learning file includes data representing the correspondence between the commands allocated to the plurality of input areas and the learning materials to be output from the learning terminal 100 in response to those commands.
The method starts from step S1100, in which one or more commands are allocated so as to correspond to each of a plurality of preset input areas among all the areas in which the touch sensing unit 17 can detect a user input. Step S1100 is performed by allocating a command to each preset input area. For example, as shown in FIG. 1D, when 20 input areas of 5 rows and 4 columns are preset, a first command may be allocated to the four input areas of the first row and a second command to the four input areas of the second row. That is, it is not necessary to allocate a different command to each input area; the same command may be allocated to a plurality of input areas. Nor is it necessary to assign a command to every input area; commands may be assigned to only some of them. An allocation along these lines is sketched below.
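Here is a small sketch of such an allocation, under the 5-rows-by-4-columns example; the dictionary keyed by (row, column) is an assumed representation, not the patent's data format.

```python
# Sketch of step S1100 on the 5x4 example: the same command is shared by
# every area in a row, and later rows are deliberately left unassigned to
# show that not every input area needs a command.
ROWS, COLS = 5, 4
commands = {}
for row in range(2):              # allocate commands to rows 1 and 2 only
    for col in range(COLS):
        commands[(row, col)] = f"command_{row + 1}"

print(commands[(1, 3)])           # -> "command_2" (shared across row 2)
print((4, 0) in commands)         # -> False: no command assigned to row 5
```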
Next, the learning material to be output in correspondence with each command is designated. That is, for example, the audio, video, text, or image data to be output in response to each command is specified. For example, when 20 input areas of 5 rows and 4 columns are preset and the first command is assigned to the four input areas of the first row, the second command to those of the second row, and so on up to the fourth command for the fourth row and the fifth command for the fifth row, one or more learning material files may be designated to correspond to the first command. Then, when a user input is detected in any one of the four input areas of the first row, the designated learning material file is output.
In step S1120, one learning file is generated that includes the command allocated to each input area and the learning material file name corresponding to each command. That is, the learning file generated in step S1120 contains both the correspondence between each input area and its command and the correspondence between each command and its learning material file name.
For example, the learning file may include a matrix, such as an Excel file generated by an application program such as Microsoft Office Excel, in which each element represents a preset input area and holds the name of the command corresponding to that input area; alongside the matrix, the file contains a list of correspondences between each command name and the file to be output. In the example of FIG. 12, there are 80 preset input areas in total, in 10 rows and 8 columns, so the learning file includes a matrix of 10 rows and 8 columns, and the name of a command is written in each element. Here, the same command A is assigned to all four input areas in the upper left corner of the matrix, and command A is a command to output the audio file a.mp3; therefore, when a user input is detected in any one of those four input areas, the audio file a.mp3 is output in response to command A.
Alternatively, the learning material file name may be written directly in each matrix element. Also, while in FIG. 12 only one learning material file corresponds to each command, a plurality of learning material files may be designated for one command; for example, one voice file and one image file may be designated in correspondence with a single command.
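To make the file layout concrete, here is a sketch that models the FIG. 12 arrangement as CSV text, as one might export from an Excel file; the parsing code and all names are illustrative assumptions, not the patent's file format specification.

```python
# Sketch modelling the FIG. 12 layout: a 10x8 matrix of command names (as CSV
# text, such as an Excel export) plus a table mapping each command to the
# file it outputs.
import csv, io

matrix_csv = (
    "A,A,B,B,C,C,D,D\n"          # the four upper-left areas all hold "A"
    "A,A,B,B,C,C,D,D\n"
    + "E,E,E,E,F,F,F,F\n" * 8    # remaining 8 of the 10 rows
)
command_files = {"A": "a.mp3", "B": "b.mp3", "C": "c.mp3",
                 "D": "d.mp3", "E": "e.mp3", "F": "f.mp3"}

matrix = list(csv.reader(io.StringIO(matrix_csv)))

def file_for_area(row, col):
    """Learning material file for one preset input area (matrix element)."""
    return command_files.get(matrix[row][col])

# Any of the four upper-left areas triggers command A and outputs a.mp3:
print(file_for_area(0, 0), file_for_area(1, 1))  # -> a.mp3 a.mp3
```

Storing only command names in the matrix keeps the mapping indirect: the same command can cover several areas, and the output file can be changed by editing a single table entry rather than every matrix element.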
Such a learning file serves to determine the correspondence between each input area of the touch sensing unit 17 of the learning terminal 100 and the learning material files associated with each item arranged in the image material I. To this end, the controller 15 may run an application program that interprets the data included in the learning file shown in FIG. 12 and, when a user input is detected in an input area of the touch sensing unit 17, retrieves and outputs the learning material file corresponding to that area. The user executes, with this application program, the learning file corresponding to the image material I mounted on the learning terminal 100, so that the appropriate learning material related to the items arranged in the image material I can be output. When the learning file is generated, it is checked, with the image material I mounted on the learning terminal 100, which input area each item arranged in the image material I is positioned over, and a command for outputting the learning material related to that item is then allocated to the input area at the corresponding location.
The learning file may be generated on a computer by a document editing program such as an Excel program and downloaded to the learning terminal 100, may be generated by a separate dedicated program, or may be generated directly on the learning terminal 100. In the latter case, a list of the files stored in the storage unit 18 is displayed on the screen output unit 10, and by selecting one or more of these files the user may allocate a file to each input area. Alternatively, the user may directly select each input area and then record and store the voice file to be output when that area is selected, thereby creating the learning file.
Meanwhile, the application program that the controller 15 executes, which analyzes the data included in the learning file and, when a user input is detected in an input area of the touch sensing unit 17, searches for and outputs the learning material file corresponding to that area, may also operate in the reverse direction. That is, instead of searching for and outputting a learning material file after a user input is detected in a specific input area, the application may first output a learning material file, then monitor whether the user selects the corresponding input area, and output the result. In this way, a variety of learning methods can be provided using the plurality of input areas of the touch sensing unit 17 and their corresponding learning materials, such as having the user find and touch all the input areas set to output the same learning material, or having the user select input areas in sequence to spell out a specific word; one such reversed round is sketched below.
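The following is a minimal sketch of one reversed, quiz-like round, with the touch input simulated by a callback; all names, including the feedback files, are assumptions for illustration only.

```python
# Sketch of the reversed mode: output a learning material first, then check
# whether the user touches an input area whose command matches.
import random

def quiz_round(commands, command_files, play, get_touch):
    """Play one command's file, then grade the user's touched area."""
    target = random.choice(sorted(set(commands.values())))
    play(command_files[target])               # output the material first
    touched = get_touch()                     # wait for the user's selection
    correct = commands.get(touched) == target
    play("right.mp3" if correct else "wrong.mp3")  # output the result
    return correct

# Simulated round: the user touches area (0, 0), which holds command "A".
quiz_round({(0, 0): "A", (0, 1): "B"},
           {"A": "a.mp3", "B": "b.mp3"},
           play=lambda f: print("playing", f),
           get_touch=lambda: (0, 0))
```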
When a learning file is generated by the method shown in FIG. 11, the appropriate learning material can be allocated for each different image material I according to the position of each item arranged in it and the meaning of that item, so the range of use of the learning terminal 100 is widened. That is, even if the size of the image material I differs, or the items arranged in it differ from one another, the input areas and corresponding learning materials can be designated so that learning materials related to the items included in the image material are output, which is an advantage.
FIG. 13 is a flowchart illustrating, step by step, a learning method according to another embodiment of the present invention. The learning method according to this embodiment starts from step S1300, in which the image material I is mounted on the learning terminal 100. As described above, not only can the image material I be replaced, but the command to be performed for each input area of the touch sensing unit 17 can also be changed through the learning file, so various image materials I may be mounted in step S1300.
The image may be provided as a digital image or an analog image, depending on the method by which the learning terminal 100 senses user input. Then, step S1310 of executing the learning file corresponding to the displayed image material I is performed; in this step, the controller 15 analyzes the data included in the learning file and identifies the command corresponding to each input area and the learning material file corresponding to each command.
Subsequently, when a user's input is sensed by the touch sensing unit 17, the controller 15 executes the command corresponding to the input area where the input was sensed, based on the contents of the learning file currently being executed. That is, the learning material file whose file name is stored in the learning file in advance in correspondence with that input area is searched for in the storage unit 18 and output through the voice output unit 11.
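Putting the pieces together, this final sketch shows the assumed runtime dispatch: execute the learning file for the mounted image, then map each sensed touch to its command and output the associated file. The event list stands in for the touch sensing unit, and the learning-file structure is an assumed in-memory form.

```python
# Sketch of the FIG. 13 flow: for each touch sensed by the touch unit, look
# up the command for that input area in the executed learning file and
# output the file stored for that command (e.g. via the voice output unit).
def run_session(learning_file, touch_events, output):
    for area in touch_events:                     # touches sensed one by one
        command = learning_file["areas"].get(area)
        if command is None:
            continue                              # area has no command
        output(learning_file["files"][command])   # send to the output unit

run_session(
    {"areas": {(0, 0): "A", (0, 1): "B"},
     "files": {"A": "a.mp3", "B": "b.mp3"}},
    touch_events=[(0, 1), (2, 5), (0, 0)],        # (2, 5) is unassigned
    output=print,                                 # stand-in for voice output 11
)
```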


Abstract

The present invention relates to a learning content generation method and to a terminal providing the method. The present invention proposes a learning content generation method for a learning terminal in which the learning material corresponding to a selected item among a plurality of items arranged on an image is output, the method comprising: (a) setting regions corresponding to each item arranged on said image, with respect to the image data formed of said image on which the plurality of items are arranged; (b) matching each region set in step (a) with the learning material relating to the item corresponding to that region; (c) generating learning content using the learning material corresponding to each region in step (b); and (d) displaying and storing the correspondence relationship between the generated learning content and the image data. Thus, according to the present invention, the learning effect is increased and users are able to create learning content easily, so that they can learn over a more diverse range. Moreover, since learning content using images is generated without restriction on the size, shape, or the like of the images, a variety of contents can be used.
PCT/KR2010/009220 2010-02-19 2010-12-22 Terminal d'apprentissage, procédé de génération de contenu d'apprentissage, procédé d'apprentissage et support d'enregistrement associé WO2011102600A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020100015298A KR101000463B1 2010-02-19 2010-02-19 Learning terminal for infants and method for generating learning content for infants
KR10-2010-0015298 2010-02-19
KR1020100095897A KR101060281B1 2010-10-01 2010-10-01 Learning terminal for infants, method for generating learning materials using the same, learning method, and storage medium storing a program
KR10-2010-0095897 2010-10-01

Publications (2)

Publication Number Publication Date
WO2011102600A2 true WO2011102600A2 (fr) 2011-08-25
WO2011102600A3 WO2011102600A3 (fr) 2011-11-03

Family

ID=44483434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/009220 WO2011102600A2 (fr) 2010-02-19 2010-12-22 Terminal d'apprentissage, procédé de génération de contenu d'apprentissage, procédé d'apprentissage et support d'enregistrement associé

Country Status (1)

Country Link
WO (1) WO2011102600A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016089130A1 * 2014-12-04 2016-06-09 주식회사 와이비엠솔루션 Content reproduction system and reproduction method using an electronic pen

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR960018667U * 1994-11-02 1996-06-19 조영하 Intelligence learning apparatus for infants
KR200202226Y1 * 2000-05-30 2000-11-15 박종국 Electronic learning device
KR20010078529A * 2000-01-13 2001-08-21 김옥평 Electronic learning device for developing children's intelligence
KR20040083562A * 2003-03-22 2004-10-06 이상협 Interactive teaching aid using non-contact capacitive sensing
KR20060009476A * 2004-07-23 2006-02-01 황명삼 Learning aid for infants
KR20060065958A * 2004-12-11 2006-06-15 주식회사 팬택앤큐리텔 Mobile communication terminal with a writing-learning function and method therefor
KR100631158B1 * 2006-04-25 2006-10-04 자프린트닷컴 주식회사 Method for printing digital codes encoding character position information on printed matter, printed matter produced by the method, and recognition device for recognizing the printed matter
KR20090097484A * 2008-03-11 2009-09-16 배강원 Language learning system and method using multi-representation sentence images, and recording medium and language learning textbook therefor
KR20100012789A * 2009-01-06 2010-02-08 (주)시누스 Universal learning device using wireless signals
KR20100014018A * 2008-08-01 2010-02-10 주식회사대교 E-learning terminal device and e-learning method using the same


Also Published As

Publication number Publication date
WO2011102600A3 (fr) 2011-11-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10846238

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: COMMUNICATION NOT DELIVERED. NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.10.2012)

122 Ep: pct application non-entry in european phase

Ref document number: 10846238

Country of ref document: EP

Kind code of ref document: A2