KR100943477B1 - Method System of Speaking Ability test - Google Patents
- Publication number
- KR100943477B1 (application KR1020070069065A)
- Authority
- KR
- South Korea
- Prior art keywords
- data
- evaluation
- raw
- unit
- test
- Prior art date
Landscapes
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Engineering & Computer Science (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- Theoretical Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Economics (AREA)
- Marketing (AREA)
- Entrepreneurship & Innovation (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
Abstract
The speaking ability notarization system includes an examination system, an evaluation system, a management system, and an external access system. The examination system generates examinee data including text data, audio data, and video data. The evaluation system includes a reproducing unit for reproducing examinee data to which an authentication number has been assigned, and an input unit for generating raw evaluation data corresponding to the authentication number. The management system includes an evaluation server that receives the examinee data and the raw evaluation data from the examination system and the evaluation system, respectively, assigns the authentication number, and generates evaluation data corresponding to the authentication number from the raw evaluation data, and a memory unit that stores the examinee data, the raw evaluation data, and the evaluation data.
Description
The present invention relates to a speaking ability notarization system and method, and more particularly, to a speaking ability notarization system and method capable of customized evaluation.
Language skills include writing, reading, listening, and speaking. Of these, writing, reading, and listening can be processed into various forms of evaluation data, so a relatively objective evaluation of a large number of examinees is possible.
Speaking, however, is difficult to process into evaluation data because of temporal and spatial constraints.
In addition, speaking involves a complex process of receiving and producing language, so it is poorly suited to scoring methods that look only at a final product. For example, a speaking evaluation may include the process in which an examinee encounters a task, the process of devising a speaking strategy for that task, and the process of expressing the devised strategy through speech. If only the final result is scored, the communication process that matters most in speaking does not appear in the evaluation result, making it difficult to evaluate the examinee's speaking ability fairly.
In addition, when an external institution interviews a large number of applicants in order to select talent, the time and cost of interviewing each individual become excessive and the objectivity of the interview decreases.
Accordingly, the present invention has been made in view of these problems, and provides a speaking ability notarization system capable of customized evaluation.
The present invention also provides a speaking ability notarization method capable of customized evaluation.
A speaking ability notarization system according to an aspect of the present invention includes an examination system, an evaluation system, a management system, and an external access system. The examination system generates examinee data including text data, audio data, and video data. The evaluation system includes a reproducing unit for reproducing examinee data to which an authentication number has been assigned, and an input unit for generating raw evaluation data corresponding to the authentication number. The management system includes an evaluation server that receives the examinee data and the raw evaluation data from the examination system and the evaluation system, respectively, assigns the authentication number, and generates evaluation data corresponding to the authentication number from the raw evaluation data, and a memory unit that stores the examinee data, the raw evaluation data, and the evaluation data. The external access system receives the raw evaluation data and the evaluation data from the management system.
The examination system may include a first recording unit for recording the voice elements of the speech and converting them into the audio data, and a second recording unit for capturing the non-verbal elements of the speech and converting them into the video data. The examination system may further include a voice recognition unit that generates the text data from the audio data. The examination system may further include a scan unit for scanning an answer sheet produced during the speaking test and a character recognition unit for converting the scanned image into the text data.
The reproducing unit may include a video reproducing unit for reproducing the video data and an audio reproducing unit for reproducing the audio data.
In the speaking ability notarization method according to another aspect of the present invention, the speaking test process is first converted into examinee data including text data, audio data, and video data. An authentication number is then assigned to the examinee data. Audio and video are reproduced from the examinee data to which the authentication number has been assigned. A plurality of raw evaluation data are generated using the reproduced audio and video. Evaluation data corresponding to the authentication number are generated from the raw evaluation data. The examinee data, the raw evaluation data, and the evaluation data to which the authentication number has been assigned are then stored in a memory unit. Finally, the raw evaluation data and the evaluation data are output.
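For readers more comfortable with code, the data flow described above can be sketched roughly as follows. This is an illustrative outline only; the type names, fields, and the plain averaging used for the evaluation data are assumptions, not structures defined in the patent.

```python
from dataclasses import dataclass

@dataclass
class ExamineeData:
    text: str                 # recognized/scanned answer sheets
    audio_path: str           # recorded speech (voice elements)
    video_path: str           # recorded video (non-verbal elements)
    auth_number: str = ""     # assigned by the management system

@dataclass
class RawEvaluation:
    auth_number: str          # links the evaluation back to the examinee data
    evaluator_id: int
    scores: dict              # per-item scores entered through the input unit

def build_evaluation_data(raw_evals: list[RawEvaluation]) -> dict[str, float]:
    """Aggregate raw evaluation data into evaluation data per test item.

    The patent aggregates scores with a cutoff average; plain averaging is
    used here only to keep the sketch short.
    """
    per_item: dict[str, list[float]] = {}
    for ev in raw_evals:
        for item, score in ev.scores.items():
            per_item.setdefault(item, []).append(score)
    return {item: sum(s) / len(s) for item, s in per_item.items()}
```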
The raw evaluation data may be generated by evaluating linguistic expression from the reproduced audio and non-verbal expression from the reproduced video.
The raw evaluation data and the evaluation data may be provided to an external organization with the examinee's permission.
According to the speaking ability notarization system and method, the evaluation of speaking ability can be notarized and feedback on speaking behavior is possible. In addition, not only the evaluation result but also information about the situation in which the speaking evaluation was performed can be accessed, so that various customized evaluations of the same test result are possible.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram illustrating a speaking ability notarization system according to an embodiment of the present invention.
Referring to FIG. 1, the speaking ability notarization system includes an examination system 100, an evaluation system 200, a management system 300, and an external access system 400.
The
The
The
The
In this embodiment, the
The
The reproducing
The
The
In this embodiment, the
Based on the reproduced text, video, and audio, the evaluator inputs raw evaluation data corresponding to the authentication number through the input unit 225.
In the present embodiment, the speaking ability notarization system includes a plurality of
FIG. 2 is a block diagram illustrating the management system shown in FIG. 1.
1 and 2, the
The
The
The
The
The
The examinee
The
The text data of the examinee data is stored in the
Among the examinee data, the voice data is stored in the
The video data among the test data is stored in the
The evaluation
The raw evaluation data to which the authentication number is assigned are stored in the
The evaluation data generated by the
In this embodiment, the
The terminal
In this case, the terminal
In this embodiment, the
The
Referring back to FIG. 1, the
The
The
In the present embodiment, the
For example, the permission data is applied to the
When the
FIG. 3 is a block diagram illustrating a method of notarizing speaking ability using the management system shown in FIG. 1.
Referring to FIGS. 1 to 3, a test management organization first recruits examinees using the management system 300 (step S10). In this embodiment, the recruitment is carried out through the web server (310 of FIG. 2) of the management system 300.
Then, the test application process is performed by the examinee (step S15). In the test application process, the examinee enters personal information such as a name, resident registration number, and address through a personal computer (PC), and the entered personal information is transmitted to the management system 300.
The personal information applied to the management system 300 is stored in the examinee database 345.
Thereafter, the examinee pays the test fee using the financial settlement system 500.
Subsequently, the terminal
Subsequently, the speaking test is performed using the examination system 100, and examinee data including text data, audio data, and video data are generated.
FIG. 4 is a block diagram illustrating the speech evaluation and test data generation step shown in FIG. 3 in more detail.
Referring to FIGS. 1, 2, and 4, the speaking test includes an explanation test, a persuasion test, a presentation test, and a reading test. In this embodiment, these tests are performed sequentially, although the items, their number, and their order may be changed.
The
If the test item is not the reading test, the
Then, the selected test question is presented to the examinee (step S330).
The examinee then prepares an answer sheet based on the test question. For example, if the test item is the explanation test, the test question includes a memorandum; the examinee reads it, grasps its gist, and prepares the answer sheet. This answer sheet gives the evaluator information about the examinee's ability to organize and plan what to say. When the test item is the persuasion test, the test question includes a situation description sheet; the examinee reads it and organizes his or her position to prepare the answer sheet. When the test item is the presentation test, the test question includes data such as statistics and graphs; the examinee reads and interprets the data to prepare the answer sheet. The step of preparing the answer sheet may be omitted.
Thereafter, the examinee speaks, and the speaking test process is recorded to generate the audio data and the video data (step S340). For example, when the test item is the explanation test, the examinee explains the gist of the memorandum; when the test item is the persuasion test, the examinee argues the position he or she has organized; and when the test item is the presentation test, the examinee presents the interpreted data.
In addition, the prepared answer sheet is scanned using the scanner 115, and the scanned image may be converted into the text data by the character recognition unit.
In this embodiment, the
Subsequently, it is determined whether the reading test, the explanation test, the persuasion test, and the presentation test have all been completed (step S370). If not all the test items have been completed, the process is repeated from the reading test step (step S310).
When all the test items are completed, the examinee data are applied to the management system 300.
The
Subsequently, the terminal
Thereafter, the
FIG. 5 is a block diagram illustrating in detail the step of generating a plurality of raw evaluation data shown in FIG. 3.
Referring to FIGS. 1, 2, and 5, the plurality of examinee data are applied to the plurality of evaluation systems 200.
Then, the reading evaluation score is determined based on the reading voice data (step S420). In this embodiment, the reading evaluation score is calculated from the standard pronunciation compliance rate, which is the ratio of correctly pronounced vocabulary words to the total number of vocabulary words. For example, if the total vocabulary is 312 words and 276 words are pronounced correctly, the compliance rate is 0.88 and the reading score is 88 points. The determination of the reading evaluation score may further include feedback from an expert reviewer such as an announcer.
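As a minimal sketch of the arithmetic above (assuming the reading score is simply the compliance rate expressed as a rounded 0-100 score):

```python
def reading_score(total_words: int, correct_words: int) -> int:
    """Standard pronunciation compliance rate expressed as a 0-100 reading score."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    rate = correct_words / total_words      # e.g. 276 / 312, about 0.88
    return round(rate * 100)

print(reading_score(312, 276))  # 88 points, matching the example above
```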
Thereafter, the text data is reproduced through the
Table 1 shows a content composition evaluation table for determining the content composition score. In the present embodiment, the content composition score measures the ability to creatively generate ideas for the content of the speech and to organize them appropriately for the purpose, audience, and situation, and it is entered by the evaluator with reference to Table 1 through the input unit 225.
TABLE 1
Referring to [Table 1], the evaluator reads the reproduced text data and the scanned image and determines scores for the evaluation items presented in the content composition evaluation table. The scores of the evaluation items are summed to determine the content composition score of each test item.
Subsequently, the video data and the audio data of each test item are reproduced using the reproducing unit 220.
Table 2 shows the delivery ability evaluation table for determining the delivery ability score. In this embodiment, the delivery ability score measures the ability to express the content of the speech accurately and effectively in accordance with the characteristics of spoken language, and it covers not only linguistic elements such as pronunciation, tone, and speed but also non-verbal elements such as body movement and gaze.
TABLE 2
Referring to [Table 2], the delivery ability score is determined using the video and the audio reproduced by the reproducing unit 220.
For example, the
Table 3 shows the raw evaluation data including the itemized scores of the speaking test according to an embodiment of the present invention. In this example, seven different scores are generated for each area by seven evaluators.
TABLE 3
Subsequently, the plurality of raw evaluation data are generated using the reading evaluation scores collected from the
1 to 3 again, the
FIG. 6 is a block diagram illustrating in detail the step of generating the evaluation data shown in FIG. 3.
1, 2, 3, and 6, the
In the present embodiment, the cutoff average is obtained by averaging the remaining scores after excluding the highest and lowest of the content composition scores or delivery ability scores for each test item.
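A short sketch of this cutoff average (a trimmed mean over the seven evaluators' scores), together with the variance and standard deviation reported in Table 4. Whether the patent computes the spread over all seven scores or only the trimmed five is not stated, so the choice below is an assumption, and the example scores are invented.

```python
import statistics

def cutoff_average(scores: list[float]) -> float:
    """Average of the scores after discarding one highest and one lowest value."""
    if len(scores) < 3:
        raise ValueError("need at least three scores to trim")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical scores from seven evaluators for one test item.
scores = [45.0, 50.0, 52.0, 55.0, 56.0, 60.0, 72.0]
print(cutoff_average(scores))          # trimmed mean of the middle five values
print(statistics.pvariance(scores))    # variance over all seven scores (assumed)
print(statistics.pstdev(scores))       # standard deviation over all seven scores
```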
Table 4 shows the cutoff average, variance, and standard deviation for each test item of the raw evaluation data shown in Table 3.
TABLE 4
FIG. 7 is a graph illustrating individual evaluation data of content composition scores and delivery ability scores in the explanation evaluation items, persuasion evaluation items, and presentation evaluation items shown in FIG. 6.
Referring to [Table 4] and FIG. 7, the cutoff averages of the content composition score and the delivery ability score for the explanation evaluation item were 53.8 and 67.8, respectively, and the average score for the explanation item was 60.8. The examinee can see that, for the explanation item, content composition is somewhat weak but delivery ability is excellent.
The cutoff averages of the content composition score and the delivery ability score for the persuasion evaluation item were 60.4 and 52.4, respectively, and the average score for the persuasion item was 56.4. Although the cutoff average of the delivery ability scores for the persuasion item was 52.4, their variance and standard deviation were 420.67 and 20.51, respectively, indicating that the evaluators' scores for this item varied widely.
The cutoff averages of the content composition score and the delivery ability score for the presentation evaluation item were 55.4 and 67.8, respectively, and the average score for the presentation item was 61.6. The examinee can see that, for the presentation item, content composition is somewhat weak but delivery ability is excellent.
Referring to FIGS. 2 and 6 again, comprehensive evaluation data are generated based on the average scores of the respective test items (step S520). The comprehensive evaluation data are stored in the memory unit 360.
FIG. 8 is a graph showing comprehensive evaluation data shown in FIG. 6.
Referring to FIG. 8, scores corresponding to the explanatory evaluation item, the persuasion evaluation item, the presentation evaluation item, and the reading evaluation item are shown on a graph having four axes.
In the present embodiment, the examinee shows an average level in the explanation and presentation items, is slightly weak in the persuasion item, and shows superior ability in the reading evaluation item.
Referring to FIGS. 1, 2, and 6 again, the average of the content composition scores and the average of the delivery ability scores over the explanation, persuasion, and presentation evaluation items are obtained. Next, diagnosis and prescription data are selected and read from the evaluation database 353 (step S540). In this embodiment, the average of the content composition scores is 56.5 and the average of the delivery ability scores is 62.7.
FIG. 9 is a graph showing the grading criteria for the speaking ability of FIG. 6. In the graph, the horizontal axis represents the content composition score and the vertical axis represents the delivery ability score.
Referring to FIG. 9, when the content composition and delivery ability averages are 56.5 and 62.7, respectively, the examinee's speaking ability corresponds to grade 4.
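The grade boundaries themselves are defined only by the graph of FIG. 9, which is not reproduced here. The sketch below therefore uses purely hypothetical bands; it only illustrates the kind of two-dimensional lookup implied by the figure.

```python
def speaking_grade(content_avg: float, delivery_avg: float) -> int:
    """Map (content composition, delivery ability) averages to a speaking grade.

    The band thresholds are placeholders; the real criteria are given by the
    regions of the graph in FIG. 9, not by this code.
    """
    combined = (content_avg + delivery_avg) / 2
    for threshold, grade in [(90, 1), (80, 2), (70, 3), (55, 4), (40, 5)]:
        if combined >= threshold:
            return grade
    return 6

print(speaking_grade(56.5, 62.7))  # -> 4 under these placeholder bands
```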
Table 5 shows data for diagnosing speaking ability based on the content composition average, the delivery ability average, and the speaking grade, together with a general comment for each speaking grade. In this embodiment, the diagnostic data and the general comments for each speaking grade are stored in the evaluation database 353.
TABLE 5
Referring to [Table 5], for example, the diagnostic data for the examinee whose content composition and delivery ability averages are 56.5 and 62.7, respectively, indicate that the ability to compose the content of the speech is average but the ability to express it is excellent, and they are provided together with the general comments corresponding to grade 4 of the speaking level.
Table 6 shows the prescription data fed back to the examinee based on the content composition average, the delivery ability average, and the speaking grade.
TABLE 6
Referring to FIG. 2 and [Table 6], the prescription data are stored in the evaluation database 353.
Referring back to FIGS. 1 to 3, the raw evaluation data and the evaluation data to which the authentication number is assigned are then stored in the memory unit 360 through the evaluation database management system 350.
Thereafter, the raw evaluation data and the evaluation data are transmitted to the examinee's computer 410 of the external access system 400.
Subsequently, the examinee notifies the management system whether the raw evaluation data and the evaluation data may be disclosed (step S62). In the present embodiment, the examinee may permit the raw evaluation data and the evaluation data to be disclosed to an external organization using the examinee's computer 410.
When the examinee permits the disclosure of the raw evaluation data and the evaluation data, permission data corresponding to the examinee's authentication number are generated and stored in the management system 300.
When the external institution server 420 requests to view the evaluation information about the examinee, the management system checks whether permission data corresponding to the examinee's authentication number exist.
If permission data corresponding to the examinee's authentication number exist, the raw evaluation data and the evaluation data stored in the memory unit 360 are provided to the external institution server 420.
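The disclosure step reduces to a permission lookup keyed by the authentication number. A hedged sketch, with hypothetical names in place of the servers and databases of FIG. 2:

```python
def disclose_to_external_org(auth_number: str,
                             permissions: set[str],
                             raw_evaluations: dict[str, dict],
                             evaluations: dict[str, dict]) -> dict | None:
    """Return stored evaluation records only if the examinee granted permission."""
    if auth_number not in permissions:
        return None  # no permission data for this authentication number
    return {
        "raw_evaluation": raw_evaluations.get(auth_number),
        "evaluation": evaluations.get(auth_number),
    }

# Example: an examinee who allowed disclosure (authentication number is invented).
permissions = {"A-2007-0412"}
print(disclose_to_external_org("A-2007-0412", permissions,
                               {"A-2007-0412": {"reading": 88}},
                               {"A-2007-0412": {"grade": 4}}))
```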
The
For example, the speech evaluation item of the
Referring back to FIG. 3, an external organization may request an evaluation tailored to the characteristics of a position to be filled (step S65), and a customized test and evaluation may be performed for the examinees using customized test and evaluation tools developed in response to that request.
For example, if a teacher recruitment agency wishes to evaluate a prospective teacher's classroom teaching skills, the management system may be used to administer a test on practice-teaching items corresponding to the classroom teaching situation, and the resulting raw evaluation data may be converted into evaluation data so that customized raw evaluation data and evaluation data are transmitted to the teacher recruitment agency.
When the raw evaluation data and the evaluation data are output, the raw evaluation data need not be transformed into separate converted scores or reduced to a single average per scorer; all data related to the assessment, such as the examinee's recorded speech evaluation scenes and each scorer's scoring record, may be disclosed as they are.
According to the present invention as described above, establishing a reliable evaluation system for job-related speaking skills can meet the needs of examinees and of companies that want the evaluation notarized. In addition, a video database of the speaking test process can be built, and the examinee data and evaluation data can be used by external organizations for their own assessments.
Although the invention has been described above with reference to exemplary embodiments, those skilled in the art will understand that the invention can be variously modified and changed without departing from the spirit and scope of the invention described in the claims below.
FIG. 1 is a block diagram illustrating a speaking ability notarization system according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating the management system shown in FIG. 1.
FIG. 3 is a block diagram illustrating a method of notarizing speaking ability using the management system shown in FIG. 1.
FIG. 4 is a block diagram illustrating the speech evaluation and test data generation step shown in FIG. 3 in more detail.
FIG. 5 is a block diagram illustrating in detail the step of generating a plurality of raw evaluation data shown in FIG. 3.
FIG. 6 is a block diagram illustrating in detail the step of generating the evaluation data shown in FIG. 3.
FIG. 7 is a graph showing individual evaluation data of content composition scores and delivery ability scores in the explanation evaluation items, persuasion evaluation items, and presentation evaluation items shown in FIG. 6.
FIG. 8 is a graph showing comprehensive evaluation data shown in FIG. 6.
FIG. 9 is a graph illustrating a criterion for classifying the grades of the speaking ability of FIG. 6.
<Description of the symbols for the main parts of the drawings>
100: examination system 110: test-site terminal
111: first recording unit 113: second recording unit
115: scanner 200: evaluation system
210: evaluation terminal 220: reproducing unit
221: video reproducing unit 223: audio reproducing unit
225: input unit 300: management system
310: web server 320: authentication server
322: authentication database 330: evaluation server
340: Examination database management system
341
343: image database 345: examinee database
350: Evaluation Database Management System
351: raw evaluation database 353: evaluation database
360: memory unit 370: terminal program providing server
372: program database 380: financial settlement server
400: external access system 410: examinee computer
420: external institution server 500: financial settlement system
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020070069065A KR100943477B1 (en) | 2007-07-10 | 2007-07-10 | Method System of Speaking Ability test |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020070069065A KR100943477B1 (en) | 2007-07-10 | 2007-07-10 | Method System of Speaking Ability test |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20090005766A KR20090005766A (en) | 2009-01-14 |
KR100943477B1 true KR100943477B1 (en) | 2010-02-22 |
Family
ID=40487303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020070069065A KR100943477B1 (en) | 2007-07-10 | 2007-07-10 | Method System of Speaking Ability test |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR100943477B1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190041773A (en) | 2017-10-13 | 2019-04-23 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on silence interval using Fourier transform |
KR20190041770A (en) | 2017-10-13 | 2019-04-23 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on silence interval for difficulty control |
KR20190041772A (en) | 2017-10-13 | 2019-04-23 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on silence interval using comparison with other users |
KR20190041771A (en) | 2017-10-13 | 2019-04-23 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on silence interval for providing reference content |
KR20190041769A (en) | 2017-10-13 | 2019-04-23 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on pitch and interval of silence |
KR101959080B1 (en) | 2017-10-13 | 2019-07-04 | 주식회사 하얀마인드 | Apparatus and method for evaluating linguistic performance based on silence interval |
- 2007-07-10: KR application KR1020070069065A, patent KR100943477B1 (active, IP Right Grant)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5634086A (en) | 1993-03-12 | 1997-05-27 | Sri International | Method and apparatus for voice-interactive language instruction |
KR20010019038A (en) * | 1999-08-24 | 2001-03-15 | 김혜정 | Computer based language test system and methode |
KR20010044657A (en) * | 2001-03-14 | 2001-06-05 | 김선래 | System for speaking proficiency tests |
KR20060087821A (en) * | 2005-01-31 | 2006-08-03 | 김영운 | System and its method for rating language ability in language learning stage based on l1 acquisition |
Also Published As
Publication number | Publication date |
---|---|
KR20090005766A (en) | 2009-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chapelle et al. | 20 years of technology and language assessment | |
Henderson | Asking the lost question: what is the purpose of law school | |
US7966265B2 (en) | Multi-modal automation for human interactive skill assessment | |
KR100943477B1 (en) | Method System of Speaking Ability test | |
Joe et al. | A prototype public speaking skills assessment: An evaluation of human‐scoring quality | |
CN117745494A (en) | Multi-terminal-fusion 3D video digital OSCE examination station system | |
Agricola et al. | Teachers’ diagnosis of students’ research skills during the mentoring of the undergraduate thesis | |
Schachter et al. | Bridging the public and private in the study of teaching: Revisiting the research argument | |
Wray | Electronic portfolios in a teacher education program | |
Setyaningrahayu et al. | The use of video-based reflection to facilitate pre-service English teachers’ self-reflection | |
KR100385892B1 (en) | Foreign Language Speaking Assessment System | |
Greenblatt | The consequences of the state implementation of a nationally standardized teacher performance assessment as a certification requirement: A mixed methods study | |
Meyer | Adding Legal Research to the Bar Exam: What Would the Exercise Look Like? | |
CN116745853A (en) | Psychological examination system and method using open psychological scale expert platform | |
Du et al. | Observations of supervisors and an actuarial research student on the qualitative research process | |
JP7416390B1 (en) | mentoring system | |
KR100968929B1 (en) | Meta-Learning System and Method Of Teaching Writing Skill Using The Same | |
Van der Haar | Mentoring Practices in a Teaching School | |
Meetze-Hall | Educating educative mentors: Video as instructional tool | |
Zenisek | A Qualitative Study of Leadership in Implementing Standards-Based Grading in the Middle School Classroom | |
Van Houten et al. | Supporting Integrated English Learner Student Instruction: A Guide to Assess Professional Learning Needs Based on the" Teaching Academic Content and Literacy to English Learners in Elementary and Middle School Practice Guide." REL 2022-122. | |
Plotkin | Employer Perceptions regarding the Use of Portfolios for Employing College Graduates in the Crime Scene Technology Program: A Qualitative Descriptive Study | |
Morley | Exploring Band Students' Motivations Regarding Instrument Selection | |
JP2022188920A (en) | Determination device, determination method, and determination program | |
Hulla | Building the Capacity of Principals to Coach Teachers of Multilingual Language Learners |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E90F | Notification of reason for final refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
E801 | Decision on dismissal of amendment | ||
J201 | Request for trial against refusal decision | ||
AMND | Amendment | ||
B701 | Decision to grant | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment | Payment date: 20130109; Year of fee payment: 4 |
FPAY | Annual fee payment | Payment date: 20140113; Year of fee payment: 5 |
FPAY | Annual fee payment | Payment date: 20150211; Year of fee payment: 6 |
FPAY | Annual fee payment | Payment date: 20160414; Year of fee payment: 7 |
FPAY | Annual fee payment | Payment date: 20170210; Year of fee payment: 8 |
FPAY | Annual fee payment | Payment date: 20180213; Year of fee payment: 9 |
FPAY | Annual fee payment | Payment date: 20190213; Year of fee payment: 10 |