KR101855419B1 - Apparatus for learning language using augmented reality and language learning method using thereof - Google Patents

Apparatus for learning language using augmented reality and language learning method using thereof

Info

Publication number
KR101855419B1
Authority
KR
South Korea
Prior art keywords
character
image
character information
block
character block
Prior art date
Application number
KR1020160009830A
Other languages
Korean (ko)
Other versions
KR20170089513A (en)
Inventor
진경희
임은진
김호권
Original Assignee
(주)몬도미오
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)몬도미오 filed Critical (주)몬도미오
Priority to KR1020160009830A priority Critical patent/KR101855419B1/en
Publication of KR20170089513A publication Critical patent/KR20170089513A/en
Application granted granted Critical
Publication of KR101855419B1 publication Critical patent/KR101855419B1/en

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass
    • G09B19/06 - Foreign languages
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G06K2209/01

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The language learning apparatus using augmented reality according to the present invention includes: a plurality of character blocks on whose outer surfaces different characters are recorded and in which character information corresponding to each character is stored; a main body having an accommodation space in which the plurality of character blocks are accommodated and an input space in which a character block selected by the user is placed; a sensor unit installed in the input space to recognize the character information of the placed character block; and a communication module installed in the main body to transmit the character information recognized by the sensor unit to an image processing apparatus. The image processing apparatus receives the character information from the communication module, generates a word corresponding to the character information, and displays, on its image display unit, an image in which a three-dimensional virtual object corresponding to the word is augmented. Each character block has a protrusion of 3 rows and 2 columns formed on its lower surface, the arrangement of which differs according to the character, and the sensor unit is provided in a shape corresponding to the protrusion so as to recognize the character information of the character block.

Description

[0001] The present invention relates to a language learning apparatus using augmented reality and a language learning method using the same.

The present invention relates to a language learning apparatus using augmented reality and a language learning method using the same. More specifically, when a user forms a word by combining the character blocks of the language learning apparatus, an image in which the real scene and a virtual object are combined is displayed to the user, so that the user can learn the language by himself or herself.

Augmented Reality (AR) is a field of Virtual Reality (VR); it is a computer graphics technique that combines virtual objects or information with a real environment so that they appear to be objects of the original environment. Unlike virtual reality, augmented reality synthesizes virtual objects on the basis of the real world. Whereas virtual reality is mainly used for purposes such as games and education, augmented reality can be applied to a wide range of real environments and is particularly well suited to realizing ubiquitous computing.

One familiar example of augmented reality appears in science-fiction films: when a character such as the Terminator looks at a specific object or person, information about that object or person is overlaid on the display of its field of view.

One example of introducing such augmented reality into education is Korean Patent Publication No. 2010-0020051 ("Educational Board Game System Using Augmented Reality Technology," Feb. 22, 2010; hereinafter, Prior Art 1). In Prior Art 1, a specific pattern is attached to an object, the position, direction, and identification number of the object are acquired from information on the position and size of the pattern, the acquired information is input to a control unit, and the board game is played by manipulating the virtual object shown on the screen. However, although Prior Art 1 performs an educational game using augmented reality, the position and size information of the attached pattern must be collected through a camera connected to the control unit, which requires a complex structure; it is therefore not well suited to a language learning method in which a user learns by himself or herself by matching words and objects.

Korean Patent Publication No. 2010-0020051 ("Educational Board Game System Using Augmented Reality Technology," 2010.02.22.)

SUMMARY OF THE INVENTION It is an object of the present invention to provide a language learning apparatus using augmented reality with which a user can learn a language by himself or herself, and a language learning method using the same.

To this end, the language learning apparatus of the present invention includes: a plurality of character blocks on whose outer surfaces different characters are recorded and in which character information corresponding to each character is stored; a main body having an accommodation space in which the plurality of character blocks are accommodated and an input space in which a character block selected by the user is placed; a sensor unit installed in the input space to recognize the character information of the character block when the selected character block is placed; and a communication module installed in the main body to transmit the character information recognized by the sensor unit to an image processing apparatus. The image processing apparatus includes a control unit that receives the character information from the communication module, generates a word corresponding to the character information, and displays an image in which a three-dimensional virtual object is augmented, and an image display unit on which the augmented image is displayed. Each character block has a protrusion of 3 rows and 2 columns formed on its lower surface, the arrangement of which differs according to the character information, and the sensor unit is provided in a shape corresponding to the protrusion so as to recognize the character information of the character block.
The main body may further include an extra accommodation space in which an additional set of character blocks is accommodated.
The communication module uses at least one communication scheme selected from the Internet, PSTN, WCDMA, CDMA, GSM, 4G networks, Bluetooth, and ZigBee.
The language learning method using the language learning apparatus of the present invention includes: a character input step in which the character block selected by the user is placed in the input space; a character information recognition step in which the sensor unit recognizes character information from the protruding portion of the character block; a character information transmission step in which the communication module transmits the character information to the image processing apparatus; a word generation step in which the control unit judges the meaning of the character information received from the communication module and generates a word; an image search step in which an image corresponding to the generated word is searched in a database; an augmentation processing step in which the control unit processes the image by augmenting a three-dimensional virtual object onto it; and a display step in which the image display unit displays the image onto which the three-dimensional virtual object has been augmented.
The database is either built into the image processing apparatus or provided on an external server accessed through wired or wireless communication.

The language learning apparatus using augmented reality and the language learning method using the same according to the present invention let the user directly form words by combining character blocks and see the result displayed as augmented reality; they therefore provide a tangible language learning apparatus and method that satisfies the user's curiosity and continuously maintains interest.

FIG. 1 is a perspective view showing a schematic configuration of a language learning apparatus using augmented reality according to the present invention.
FIG. 2 is another perspective view showing the accommodation space of the language learning apparatus using augmented reality of the present invention.
FIG. 3 is a side view of the language learning apparatus using augmented reality of the present invention.
FIG. 4 is a perspective view showing the image processing apparatus of the language learning apparatus using augmented reality according to the present invention.
FIG. 5 is a flowchart showing the flow of a language learning method using the language learning apparatus of FIG. 1.

Hereinafter, a language learning apparatus using an augmented reality according to the present invention will be described in detail with reference to the accompanying drawings.

The accompanying drawings, which are included to provide a further understanding of the technical concept of the present invention, are incorporated in and constitute a part of this specification; the scope of the present invention should not be construed as being limited to them.

FIG. 1 is a perspective view of a language learning apparatus using augmented reality according to the present invention.

As shown in FIG. 1, an embodiment of the language learning apparatus using augmented reality according to the present invention includes a character block 100, a main body 200, a sensor unit, and a communication module, and operates together with an image processing apparatus 10.

As shown in FIG. 1, the character block 100 is provided as a plurality of blocks; different characters are written on their outer surfaces, and each block contains a chip in which the character information corresponding to its character is stored.

The shape of the character block 100 may vary, but it is basically formed so as to fit into the accommodation space 210 of the main body 200, which will be described later. In the present invention, the block is provided as a cube so that blocks can easily be arranged side by side, and a character is recorded on one face.

The characters written on one face of the character block 100 may be Korean, English, Japanese, Chinese, German, or characters of other languages. In the embodiment of the present invention shown in FIGS. 1 to 4, the characters recorded on the character blocks 100 are English letters and numerals. The character blocks 100 are provided as a plurality of blocks, as described above, and their number corresponds to the number of characters of the language the user wishes to learn. That is, the character blocks 100 provided according to the type of language to be learned are defined as one set, for convenience of explanation. The user learns words with the character blocks 100 constructed in this way. In some cases the same character may be needed more than once because of the nature of a word: for example, the word 'apple' contains two 'p' characters, so two character blocks carrying 'p' are needed. For such cases the character blocks 100 may be provided as a plurality of sets, and a separate space to accommodate the extra set may be provided in the main body 200, which will be described later.

Character information corresponding to the character recorded on the outer surface of the character block 100 is stored in a chip (not shown) built into the character block 100. The character information stored in the chip can be recognized by wireless communication by the sensor unit provided in the main body 200, which will be described later.

Depending on the design conditions, no chip need be embedded in the character block 100; instead, a protruding portion may be formed on the outer surface of the character block 100, and by pressing this protrusion against the surface of the main body 200 it is possible to determine which character is recorded on the character block 100. This method will be described later.

FIG. 2 is another perspective view showing an accommodation space of a language learning apparatus using an augmented reality of the present invention, and FIG. 3 is a side view of a language learning apparatus using an augmented reality of the present invention.

As shown in FIGS. 1 to 3, the main body 200 includes an accommodation space 210, an input space 220, and an extra accommodation space 230.

As shown in FIG. 1, the accommodation space 210 is the space in which the character blocks 100 making up a set are accommodated. The accommodation space 210 may be a portion formed in the main body 200 to a predetermined depth so as to receive the character blocks 100. Although not shown in FIG. 1, depending on the design conditions, protrusions may be formed in the accommodation space 210 so as to hold the plurality of character blocks 100 in place, one position per character block, so that a character block 100 does not move once it is placed inside the accommodation space 210. For example, in the embodiment of the present invention the character block 100 is a cube, so protrusions matching the cube may be formed to protrude upward from the bottom surface to a predetermined height so as to hold the character block 100 in place.

As shown in FIG. 1, the input space 220 is the space in which the character block selected by the user from among the plurality of character blocks 100 in the accommodation space 210 is placed. In FIG. 1 the face on which the character is recorded faces upward; however, the character block 100 may also be placed facing another direction, and a character may be written on every face of the character block 100.

The sensor unit (not shown) is installed in the input space 220 and recognizes the character information stored in the chip when the character block 100 selected by the user is placed there. More specifically, the sensor unit is preferably installed below the input space 220. As a means of recognizing the character information stored in the chip, the sensor unit may be a wireless communication device installed in the input space 220. Various schemes can be used for this, but the simplest is NFC (Near Field Communication), a short-range wireless communication technique. NFC performs wireless communication over a very short distance at a transmission speed of roughly 400 kilobits per second and is mainly used for transmitting simple information. Since each character block 100 carries only one character, the data amounts to about 1 byte, so this transmission speed is more than sufficient and NFC is well suited to the sensor unit.
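
For illustration only, the following minimal Python sketch models chip-based recognition under the assumption that each block's chip stores a single character; the NFC transport itself is abstracted away, since the patent does not specify a protocol or library.

```python
# Minimal sketch of chip-based recognition: each block's embedded chip stores
# one character (about 1 byte, as the text notes). The NFC transport itself is
# abstracted away; real tag access would go through an NFC stack that the
# patent does not specify.

from dataclasses import dataclass

@dataclass
class CharacterBlock:
    character: str  # character recorded on the outer surface and stored in the chip

class SensorUnit:
    """Recognizes the character information of blocks placed in the input space."""

    def __init__(self):
        self.recognized = []

    def read_block(self, block):
        # In the real device this would be an NFC read of the embedded chip;
        # here we simply return the stored character.
        self.recognized.append(block.character)
        return block.character

# Example: placing the blocks 'c', 'a', 't' in the input space one by one.
sensor = SensorUnit()
for b in (CharacterBlock("c"), CharacterBlock("a"), CharacterBlock("t")):
    sensor.read_block(b)
print(sensor.recognized)  # ['c', 'a', 't']
```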

The sensor unit may also be configured in another form, namely the braille scheme mentioned above. English braille uses a 3-row, 2-column cell and represents different characters by the arrangement of raised dots. Accordingly, braille-like protrusions are formed on one face of the character block 100, and 3-row, 2-column depressions are formed in the upper surface of the input space 220 so that the protrusions fit into them wherever the character block 100 is placed; this fit is used as the input means. In this case the response is faster than when NFC is used.
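
The 3-row, 2-column encoding can be illustrated with a small lookup table. The dot patterns below follow standard English braille for a few letters; the full mapping used by the device is not given in the patent, so this table is an assumption.

```python
# Sketch of the 3-row, 2-column protrusion scheme: each character maps to a
# unique dot pattern, and the sensor identifies the character from which
# depressions are filled. The patterns follow standard English braille for
# 'a', 'p', 'l', 'e'; the full table is an assumption, not taken from the patent.

# A pattern is a frozenset of (row, column) positions that carry a protrusion.
BRAILLE_PATTERNS = {
    frozenset({(0, 0)}): "a",                          # dot 1
    frozenset({(0, 0), (1, 0), (2, 0), (0, 1)}): "p",  # dots 1, 2, 3, 4
    frozenset({(0, 0), (1, 0), (2, 0)}): "l",          # dots 1, 2, 3
    frozenset({(0, 0), (1, 1)}): "e",                  # dots 1, 5
}

def recognize(pressed_positions):
    """Return the character whose protrusion arrangement matches, or None."""
    return BRAILLE_PATTERNS.get(frozenset(pressed_positions))

print(recognize({(0, 0), (1, 0), (2, 0), (0, 1)}))  # 'p'
```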

Since the user generally selects several character blocks 100 and places them in the input space 220 in order to form a word, the character information recognized by the sensor unit is sequential character information, that is, a character string.

The communication module (not shown) is installed in the main body 200 and transmits the character information recognized by the sensor unit to the image processing apparatus 10. The communication module may use a wired network such as the Internet or PSTN, a mobile communication network such as WCDMA, CDMA, GSM, or a 4G network, or a short-range network such as Bluetooth or ZigBee. One embodiment of the present invention uses Bluetooth, because the image processing apparatus 10 is usually a mobile communication terminal such as a smartphone and most recent mobile terminals support Bluetooth communication. The character information transmitted by the communication module is the character string recognized by the sensor unit, and this character string is transmitted to the image processing apparatus 10. The language learning apparatus using augmented reality according to an embodiment of the present invention can also be turned on and off over Bluetooth. To this end, the main body 200 may contain a PCB board with an insertion space for a battery, power supply circuitry for the battery, and a circuit configured to be switched on and off by Bluetooth. The language learning apparatus using augmented reality is thus powered by a battery, which can be recharged with a charger or replaced with a spare; depending on the design conditions, an additional battery may also be kept in the main body 200 to supply power. In summary, the user can, whenever desired, send a signal from the Bluetooth module of the terminal to the main body 200 and switch on the Bluetooth module built into the main body 200. Then, when a character block 100 is placed in the input space 220, the sensor unit and the communication module operate and language learning using augmented reality can proceed. Conversely, when use is to be ended, the power is switched off in the same manner.
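
As a rough sketch of the main-body side, the characters recognized from left to right can be joined into a character string and handed to the communication module; the Bluetooth (or other) transport is reduced to a stub here, because the patent lists possible communication schemes without defining a protocol.

```python
# Sketch of the main-body side: characters recognized left to right are joined
# into one string and handed to the communication module. The transport is a
# stub; a real module would push the string over Bluetooth, Wi-Fi, etc.

class CommunicationModule:
    def __init__(self, transport="bluetooth"):
        self.transport = transport

    def send(self, character_string):
        # A real module would transmit this to the image processing apparatus;
        # here we just log it.
        print(f"[{self.transport}] sending character string: {character_string!r}")

def transmit_recognized_characters(recognized, module):
    """Join the sequentially recognized characters and transmit them as one string."""
    module.send("".join(recognized))

transmit_recognized_characters(["a", "p", "p", "l", "e"], CommunicationModule())
```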

As described above, the main body 200 may further include an extra accommodation space 230 for accommodating an additional set of character blocks 100. According to the present invention, the character blocks 100 can be provided as a plurality of sets as necessary, and the extra accommodation space 230 is formed as a drawer in a certain region of the main body 200 so that the extra set can be stored there.

FIG. 4 is a perspective view illustrating the image processing apparatus of the language learning apparatus using augmented reality of the present invention.

As shown in FIG. 4, the image processing apparatus 10 may be not only a mobile communication terminal such as a smartphone but also any one selected from a computer, a tablet PC, a portable learning device, and a PMP, or any other terminal having a video input device, such as a television.

The image processing apparatus 10 includes a control unit (not shown) that receives the character information from the communication module, generates a word corresponding to the character information, and displays an image in which a three-dimensional virtual object is augmented. That is, when the character string information is received from the communication module, the control unit generates the word corresponding to the character string, and if an object corresponding to the word exists, the object is augmented onto the image and displayed. The process by which the image processing apparatus 10 outputs an object is described below.

Hereinafter, a language learning method using a language learning apparatus using an augmented reality according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 5 is a flowchart of a language learning method using the language learning apparatus of FIG. 1.

As shown in FIG. 5, the language learning method using the language learning apparatus using augmented reality includes a character input step S10, a character information recognition step S20, a character information transmission step S30, a word generation step S40, an image search step S50, an augmentation processing step S60, and a display step S70.
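
The S10-S70 flow on the image-processing side might be organized roughly as below; the function names, the word-to-asset table, and the string-based "augmentation" are placeholders, not part of the patent.

```python
# Condensed sketch of the S10-S70 flow on the image-processing side. The patent
# defines the steps but not an API, so everything here is illustrative.

LOCAL_DATABASE = {"apple": "models/apple.glb"}   # hypothetical word -> 3D asset table

def generate_word(character_string):             # S40: judge meaning, form a word
    return character_string.lower()

def search_image(word):                          # S50: look the word up in the database
    return LOCAL_DATABASE.get(word)

def augment(camera_frame, asset_path):           # S60: combine frame and virtual object
    return f"{camera_frame} + overlay({asset_path})"

def display(image):                              # S70: show the augmented image
    print("displaying:", image)

def on_character_string_received(character_string):  # entry point after S30
    word = generate_word(character_string)
    asset = search_image(word)
    if asset is None:
        print(f"no image found for {word!r}")
    else:
        display(augment("camera_frame", asset))

on_character_string_received("apple")
```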

The character input step S10 is the step in which the character block 100 selected by the user is placed in the input space 220. That is, the character input step S10 is the step in which the user selects the characters corresponding to the word to be learned.

In the character information recognition step S20, the sensor unit recognizes the character information from the chip built into the character block 100. The character blocks 100 are recognized by the sensor unit in order from left to right, and in the character information transmission step S30 the sequentially recognized character information is transmitted to the image processing apparatus 10 by the communication module.

The word generation step S40 is the step in which the control unit judges the meaning of the character information received from the communication module and generates a word. For example, if the user combines character blocks 100 and places 'a', 'p', 'p', 'l', and 'e' in the input space 220 as described above, the control unit recognizes the character blocks 100 arranged in that order and generates the word 'apple'.
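
A minimal sketch of this step, assuming the control unit validates the joined character string against a word list; the tiny list here is a stand-in for a real dictionary of the target language.

```python
# Illustrative word-generation step (S40): the left-to-right character sequence
# is joined and checked against a word list.

KNOWN_WORDS = {"apple", "cat", "dog"}  # stand-in for a real dictionary

def generate_word(characters):
    """Join the recognized characters; return the word if known, otherwise None."""
    candidate = "".join(characters).lower()
    return candidate if candidate in KNOWN_WORDS else None

print(generate_word(["a", "p", "p", "l", "e"]))  # 'apple'
print(generate_word(["p", "l", "p", "a", "e"]))  # None -> no image will be searched
```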

The image search step S50 is the step in which an image corresponding to the word generated by the control unit is searched for in the database. That is, the image search step S50 determines whether an image corresponding to the generated word exists. If the user fails to correctly combine the word he or she wants to learn, no image may be found, or a different image may be found instead.

The database used in the image search step S50 may be built into the image processing apparatus 10 or provided on an external server accessed through wired or wireless communication equipment. The image shown to the user is a three-dimensional image prepared separately in the database; if the image is not found in the database, an image is provided through the Internet.
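
The lookup-with-fallback behaviour described above could look roughly like this; both the local database contents and the Internet lookup are placeholders.

```python
# Illustrative image search step (S50): look the word up in a local database
# first, then fall back to an Internet lookup. Storage and protocols are not
# specified by the patent, so both parts are stubs.

LOCAL_DATABASE = {"apple": "assets/apple_3d.glb"}

def fetch_from_internet(word):
    # Placeholder for an online lookup (e.g. querying an external server).
    print(f"searching the Internet for a 3D model of {word!r} ...")
    return None  # pretend nothing was found online either

def search_image(word):
    asset = LOCAL_DATABASE.get(word)
    return asset if asset is not None else fetch_from_internet(word)

print(search_image("apple"))  # 'assets/apple_3d.glb'
print(search_image("zebra"))  # triggers the Internet fallback
```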

The augmentation processing step S60 is the step in which the control unit processes the image by augmenting a three-dimensional virtual object onto the searched image. That is, it is the step of combining the retrieved three-dimensional virtual object with the actual image displayed by the image processing apparatus 10.

The display step S70 is the step in which the image onto which the three-dimensional virtual object has been augmented is displayed to the user by the control unit. The three-dimensional image displayed in the display step S70 is positioned above the main body 200 so that it does not overlap the main body 200 when viewed from the side.
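
The "positioned above the main body" rule can be illustrated as a simple geometric offset; positions are plain (x, y, z) tuples, and real AR tracking and rendering are outside this sketch.

```python
# Illustrative placement rule for the display step (S70): anchor the virtual
# object a fixed clearance above the detected main body so that, viewed from
# the side, it does not overlap the body. Units are metres; values hypothetical.

def place_above_main_body(main_body_position, clearance=0.05):
    """Return the anchor position of the 3D object, offset upward by `clearance`."""
    x, y, z = main_body_position
    return (x, y, z + clearance)

detected_main_body = (0.10, 0.25, 0.00)            # hypothetical position in camera space
print(place_above_main_body(detected_main_body))   # (0.1, 0.25, 0.05)
```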

Depending on the design conditions, when the word formed by the character blocks 100 is a homonym, a mark indicating this, for example a button or an arrow, can be output on the touch screen displayed by the image processing apparatus 10. If the user wants to see an image for a different meaning, touching the mark outputs another image.
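
Homonym handling might be sketched as cycling through a list of meanings on each touch; the meanings listed for "bat" are an illustrative assumption.

```python
# Illustrative homonym handling: touching the on-screen mark cycles to the
# image for the next meaning of the word.

from itertools import cycle

HOMONYMS = {"bat": ["assets/bat_animal.glb", "assets/bat_baseball.glb"]}

def meaning_cycler(word):
    """Return an iterator yielding a different asset each time the mark is touched."""
    return cycle(HOMONYMS.get(word, ["assets/missing.glb"]))

cycler = meaning_cycler("bat")
print(next(cycler))  # image for the first meaning
print(next(cycler))  # image for the next meaning after a touch
```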

Through all the steps described above, the three-dimensional image is output with the character blocks 100 on the main body 200 serving as a kind of input means; when the character blocks 100 are removed from the input space 220, the output image disappears.

In summary, the user can form a word by himself or herself by combining the character blocks 100 and can visually confirm the shape (image) of the generated word through the language learning apparatus using augmented reality according to the present invention, which makes language learning more effective.

It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

10: Image processing device
100: character block
200: main body
210: accommodation space
220: input space
230: extra accommodation space
S10: Character input step
S20: Character information recognition step
S30: Character information transmission step
S40: Word generation step
S50: image retrieval step
S60: Augmentation processing step
S70: Display step

Claims (6)

A plurality of character blocks (100) in which different characters are recorded on the outer surface and character information corresponding to each character is stored;
A main body 200 having an accommodation space 210 in which a plurality of the character blocks 100 are accommodated, an input space 220 in which the character block selected by the user is placed, and an image display unit;
A sensor unit installed in the input space 220 and recognizing the character information corresponding to the character block 100 when the character block 100 selected by the user is placed;
A communication module installed in the main body 200 to transmit the character information recognized by the sensor unit to the image processing apparatus 10; and
wherein, in the character block (100), a protruding portion of 3 rows and 2 columns is formed on the lower surface of the character block 100 corresponding to the character, so that the protrusion arrangement differs according to the character information corresponding to the character,
the sensor unit is provided in a shape corresponding to the protrusion of the character block 100 and recognizes the character information of the character block 100,
the image processing apparatus (10) includes a control unit that receives the character information from the communication module and generates a word corresponding to the character information so as to display an image in which a three-dimensional virtual object is augmented,
wherein the image display unit of the image processing apparatus 10 displays the actual image captured by the image processing apparatus 10 together with the three-dimensional virtual object generated from the word corresponding to the character information, and
the three-dimensional virtual object generated from the word corresponding to the character information is positioned on the image display unit above the main body 200, so that the three-dimensional virtual object generated from the word corresponding to the character information is displayed as if it were placed on the actual main body 200.
delete
2. The apparatus of claim 1, wherein the main body (200)
further comprises an extra accommodation space (230) in which the additional character block is accommodated.
The apparatus according to claim 1, wherein the communication module
uses at least one selected from the group consisting of the Internet, PSTN, WCDMA, CDMA, GSM, 4G networks, Bluetooth, and ZigBee.
A method for learning a language using the language learning apparatus using augmented reality according to claim 1, the method comprising:
A character input step (S10) in which the character block (100) selected by the user is placed in the input space (220);
A character information recognition step (S20) in which the sensor unit recognizes the character information from the protruding portion of the character block (100);
A character information transmitting step (S30) of transmitting the character information to the image processing apparatus (10) by the communication module;
A word generation step (S40) in which the control unit judges the meaning of the character information transmitted from the communication module and generates a word;
An image retrieving step (S50) in which an image corresponding to the word generated by the control unit is retrieved from a database;
An augmentation processing step (S60) in which the control unit processes the image by augmenting a three-dimensional virtual object onto the image; and
A display step (S70) in which the control unit displays, on the image display section, the image captured by the image processing apparatus (10) onto which the three-dimensional virtual object has been augmented.
6. The method according to claim 5,
Wherein the database is built into the image processing apparatus or is provided in an external server accessed through wired / wireless communication.
KR1020160009830A 2016-01-27 2016-01-27 Apparatus for learning language using augmented reality and language learning method using thereof KR101855419B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160009830A KR101855419B1 (en) 2016-01-27 2016-01-27 Apparatus for learning language using augmented reality and language learning method using thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160009830A KR101855419B1 (en) 2016-01-27 2016-01-27 Apparatus for learning language using augmented reality and language learning method using thereof

Publications (2)

Publication Number Publication Date
KR20170089513A KR20170089513A (en) 2017-08-04
KR101855419B1 true KR101855419B1 (en) 2018-05-10

Family

ID=59654263

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160009830A KR101855419B1 (en) 2016-01-27 2016-01-27 Apparatus for learning language using augmented reality and language learning method using thereof

Country Status (1)

Country Link
KR (1) KR101855419B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101430573B1 (en) * 2013-03-28 2014-08-18 한남대학교 산학협력단 A Braille Point Marker for Visual Disturbance Person and A Marker Recognition Method Thereof

Also Published As

Publication number Publication date
KR20170089513A (en) 2017-08-04

Similar Documents

Publication Publication Date Title
Papagiannakis et al. Mixed Reality, Gamified Presence, and Storytelling for Virtual Museums.
CN110110145B (en) Descriptive text generation method and device
CN106254848B (en) A kind of learning method and terminal based on augmented reality
JP2022515620A (en) Image area recognition method by artificial intelligence, model training method, image processing equipment, terminal equipment, server, computer equipment and computer program
Shi et al. Markit and Talkit: a low-barrier toolkit to augment 3D printed models with audio annotations
US20130156266A1 (en) Function extension device, function extension method, computer-readable recording medium, and integrated circuit
CN102142151A (en) Terminal and method for providing augmented reality
US11915606B2 (en) Tactile and visual display with a paired, active stylus
CN109426343B (en) Collaborative training method and system based on virtual reality
EP2597623A2 (en) Apparatus and method for providing augmented reality service for mobile terminal
US8553938B2 (en) Information processing program, information processing system, information processing apparatus, and information processing method, utilizing augmented reality technique
KR20220146366A (en) Non-face-to-face real-time education method that uses 360-degree images and HMD, and is conducted within the metaverse space
KR101983233B1 (en) Augmented reality image display system and method using depth map
CN105511620A (en) Chinese three-dimensional input device, head-wearing device and Chinese three-dimensional input method
KR20160139786A (en) System and method for solving learnig problems using augmented reality
JP4790080B1 (en) Information processing apparatus, information display method, information display program, and recording medium
KR101855419B1 (en) Apparatus for learning language using augmented reality and language learning method using thereof
KR20170039953A (en) Learning apparatus using augmented reality
KR101964192B1 (en) Smart table apparatus for simulation
CN116580211A (en) Key point detection method, device, computer equipment and storage medium
US20120327118A1 (en) Display control apparatus, display control method and program
US10515566B2 (en) Electronic system and method for martial arts movement-based language character symbolization and education
KR100973680B1 (en) Learning system that learning execution used the mat shown by the game form and methodthat this was used
CN112558759B (en) VR interaction method based on education, interaction development platform and storage medium
JP3164748U (en) Information processing device

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right