WO2017065324A1 - Sign language education system, method and program - Google Patents

Sign language education system, method and program

Info

Publication number
WO2017065324A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
word
sign language
sign
input
Prior art date
Application number
PCT/KR2015/010744
Other languages
French (fr)
Korean (ko)
Inventor
반호영
최용근
이수빈
양동석
Original Assignee
주식회사 네오펙트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 네오펙트 filed Critical 주식회사 네오펙트
Priority to PCT/KR2015/010744 priority Critical patent/WO2017065324A1/en
Priority to KR1020157029063A priority patent/KR101793607B1/en
Publication of WO2017065324A1 publication Critical patent/WO2017065324A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00: Teaching, or communicating with, the blind, deaf or mute

Definitions

  • the present invention relates to a sign language education system, method, and program, and more particularly, to a system, method, and program for evaluating and teaching the signing that a user performs in response to an item, using body movement measuring devices.
  • visually impaired people can receive information only through hearing.
  • from a video medium such as TV, the visually impaired can obtain information only through dialogue and sound effects, because visual information such as movement and behavior is unavailable to them.
  • to address this, broadcasters may provide services such as screen-commentary (audio description) broadcasting.
  • hearing-impaired people communicate through visually perceived sign language, or through text on a computer screen or on paper.
  • however, text-based communication often fails to convey the full meaning because the grammar and expression systems of sign language and written language differ, causing problems such as information distortion and loss.
  • communication between signers and non-signers therefore relies mainly on sign language interpreters or on text.
  • interpretation is limited by cost in practice, so simple text is the main medium when communicating with hearing-impaired people. Because of the differences between the grammar and expression systems of sign language and Korean, signers often cannot fully convey their meaning in text.
  • sign language also differs in many ways (for example, in word order) from the grammar of spoken languages (i.e., non-sign conversational languages), so people have difficulty learning it.
  • moreover, a sign may fail to communicate if the gesture is inaccurate, and even an accurate gesture may fail to communicate if it does not follow the sign word order.
  • a sign language education method according to an embodiment comprises: extracting, by the terminal, specific item data and presenting it; extracting reference sign data corresponding to the item data; receiving sensing data from at least one body movement measuring device; generating input sign data by combining one or more pieces of the sensing data; calculating a comparison result by comparing the input sign data with the reference sign data; and evaluating the item data based on the comparison result.
  • the extracting of the reference sign data may include extracting reference sign data stored in a database in the terminal in association with the item data.
  • when the item data corresponds to a sentence, the extracting of the reference sign data may include: changing the word order of the item data to sign order; and searching for and matching the sign data corresponding to each word of the item data.
  • when the body movement measuring device includes a glove-type measuring device and attached measuring sensor devices, the sensing data receiving step may include: receiving sensing data of finger movement and wrist movement from the glove-type measuring device; and receiving sensing data from the measuring sensor device attached to each body unit.
  • the sensing data receiving step may further include: requesting the user to perform a specific reference posture or reference movement; and determining the initial position of each body movement measuring device according to the reference posture or reference movement.
  • the input sign data generating step may calculate the positional relationship between the left hand and the right hand by tracking the movement of body units relative to the initial position.
  • when the body movement measuring device includes a vision sensor device, the method may further include: receiving an image obtained by the vision sensor device, the image including both of the user's hands; and recognizing the positional relationship between the left hand and the right hand in the image.
  • when the item data is a word, the evaluating step may include calculating the match rate between the reference sign data and the input sign data for the word corresponding to the item data.
  • when the item data is a sentence, the evaluating step may include: comparing, by the terminal, each of one or more reference word data of the reference sign data with each of one or more input word data of the input sign data to calculate match rates; matching each input word data with the reference word data having the highest match rate; calculating a word-match result based on the match rate for each input word data; calculating a word-order result by accumulating the distance difference between each input word data and its matched reference word data, the distance difference being the number of word positions the input word data must move to occupy the same sentence position as the reference word data with the highest match rate; and calculating an evaluation score reflecting the word-match result and the word-order result.
  • the method may further include generating feedback data for the user based on the word-match result and the word-order result.
  • the method may further include determining a difficulty level of the next item data by reflecting the evaluation score.
  • a sign language education program according to another embodiment is combined with a terminal, which is hardware, to execute the sign language education method, and is stored in a medium.
  • users can have their actual signing assessed for accuracy, which helps improve their sign language skills.
  • users can be evaluated not only on the accuracy of each word's gesture but also on the accuracy of the sign word order of a sentence, so they can learn a sign word order that differs from Korean word order.
  • if a sign database is added for each language, users can learn the sign languages of various countries through a single terminal.
  • the positional relationship between the user's left and right hands can be obtained using a vision sensor or by tracking the movement of the body movement measuring devices from their initial positions, so the invention can evaluate whether the positional relationship between the user's hands is correct when performing a particular sign.
  • FIG. 1 is a block diagram of a sign language education system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a sign language education method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a process of evaluating question data, which is a sentence, according to an embodiment of the present invention.
  • FIG. 4 is an exemplary diagram of calculating a word-order result based on the match rates of words, according to an embodiment of the present invention.
  • the body movement measuring device is a device that senses the user's body movement and transmits movement data to the terminal through wired or wireless communication.
  • the body movement measuring device may be in the form of a wearable device, or may be a sensor device that can be attached to various positions of the body.
  • the body movement measuring device may be a device of various types that can measure the body movement of the user.
  • for example, when the body movement to be measured is hand movement, the body movement measuring apparatus may be implemented as a hand-worn wearable type.
  • when the body movement to be measured is the movement of both arms, the body movement measuring device may be a sensor patch or the like attachable to each body region.
  • when the movements of several body parts are measured together, the body movement measuring device may comprise several measuring devices, such as a hand-worn (for example, glove-type) measuring device and body-attached measuring sensor devices.
  • FIG. 1 is a block diagram of a sign language education system according to an embodiment of the present invention.
  • the sign language education system according to an embodiment is implemented by the internal configuration of the terminal 100.
  • the terminal 100 may be divided into a mobile terminal 100 and a fixed terminal 100 according to whether or not it is movable.
  • the terminal 100 may include all types of terminals 100 including the above configuration.
  • examples of the mobile terminal 100 include a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) terminal, a palm PC, a personal digital assistant (PDA), a smartphone, a WAP (Wireless Application Protocol) phone, a mobile game machine, a tablet PC, a netbook, and a notebook; examples of the fixed terminal 100 include a desktop PC and a television.
  • the terminal 100 includes all or some of a controller 110, a communication unit 130, and an output unit 130.
  • the terminal 100 is not limited to the components described above, and may further include additional components.
  • the controller 110 typically controls the overall operation of the terminal 100. For example, it performs control and processing related to data communication, image processing for display, and body movement evaluation (for example, evaluating input sign data according to the user's body movement). The various functions performed by the controller 110 are described later.
  • the communication unit 130 receives sensing data from the body movement measuring apparatus 200 and transfers the received sensing data to the controller 110. The communication unit 130 may also transmit output based on the evaluation result calculated from the sensing data to the body movement measuring apparatus 200.
  • the communication unit 130 may include a wired communication unit that is connected to the body movement measuring apparatus 200 by wire to receive data, or a wireless communication unit that receives the movement data from the body movement measuring apparatus 200 wirelessly.
  • the wireless communication unit may include a wireless internet module or a short-range communication module.
  • the wireless internet module refers to a module for wireless internet access and may be embedded or external to the terminal 100.
  • wireless internet technologies such as wireless LAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE (Long Term Evolution), and LTE-A (Long Term Evolution-Advanced) may be used.
  • the short range communication module refers to a module for short range communication.
  • short-range communication technologies such as Bluetooth, Bluetooth Low Energy (BLE), Beacon, Radio Frequency Identification (RFID), Near Field Communication (NFC), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and NRF may be used.
  • the output unit 130 performs a function of outputting information to be provided to the user.
  • the output unit 130 may include a display unit, a sound output unit, and the like.
  • the display unit displays (outputs) information processed by the terminal 100.
  • the display unit may be implemented as a touch screen by being combined with a touch sensor.
  • the display unit may receive an input operation from the user through a touch operation.
  • the terminal 100 may select the item to be evaluated by receiving the user's touch at the point corresponding to specific item data in the item list displayed on the screen.
  • Various embodiments described herein may be implemented in a recording medium readable by a computer or similar device using, for example, software, hardware or a combination thereof.
  • the terminal 100 may further include a memory.
  • the memory may store a program for the operation of the controller 110 and may store input/output data or data generated during body movement evaluation (for example, learning data obtained by receiving and storing movement data).
  • the memory may be included in the controller 110.
  • the memory may include at least one type of storage medium among flash memory, hard disk, multimedia card micro, card-type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, and optical disk.
  • the mobile terminal 100 may operate in connection with a web storage that performs a storage function of the memory on the Internet.
  • the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions. In some cases, the embodiments described herein may be implemented by the controller 110 itself.
  • embodiments such as the procedures and functions described herein may be implemented as separate software modules.
  • Each of the software modules may perform one or more functions and operations described herein.
  • the software code may be implemented as a software application written in a suitable programming language.
  • the software code may be stored in a memory and executed by the controller 110.
  • FIG. 2 is a flowchart illustrating a sign language education method according to an embodiment of the present invention.
  • the sign language education method according to an embodiment includes: extracting, by the terminal 100, specific item data and presenting it (S100); extracting reference sign data corresponding to the item data (S200); receiving sensing data from at least one body movement measuring apparatus 200 (S300); generating input sign data by combining one or more pieces of the sensing data (S400); comparing the input sign data with the reference sign data and calculating a comparison result (S500); and evaluating the item data based on the comparison result (S600).
  • the sign language education method according to an embodiment of the present invention is described below step by step.
  • the terminal 100 extracts and presents specific item data (S100). That is, the terminal 100 extracts specific item data according to an item selection request received from the user. For example, when the body movement measuring apparatus 200 is a hand-worn device, the user may select item data by pointing the hand at the point on the screen where the item data is displayed and then performing a manipulation gesture (for example, closing the hand).
  • when the display unit is a touch screen, the terminal 100 may display a list of item data on the screen and receive a touch on specific item data from the user to select it. Thereafter, the terminal 100 may present the selected item data to the user through the display unit or the sound output unit.
  • the terminal 100 extracts reference sign data corresponding to the item data (S200).
  • the reference sign data is the data against which the user's sign gesture (i.e., the input sign data) acquired by the body movement measuring apparatus 200 is compared, and corresponds to a sign expression whose meaning matches the specific item data.
  • the terminal 100 may extract reference sign language data corresponding to the item data in various ways.
  • the reference sign data extraction method is not limited to the methods described below, and various methods may be applied.
  • the terminal 100 may extract reference sign data stored in an internal database in association with the specific item data. That is, the terminal 100 may store sign gesture data (i.e., reference sign data) corresponding to specific item data in the memory.
  • the terminal 100 may store a sign gesture expression for item data that is a specific word or a simple expression (for example, a greeting), and likewise for item data that is a specific sentence.
  • the terminal 100 may store in the database reference sign data generated in advance in sign word order.
  • the terminal 100 may match and store in the database the sign language of each country for the same item data.
  • the terminal 100 may extract and present, as the reference sign data, the sign expression of the specific country set by the user.
  • alternatively, the terminal 100 may directly generate reference sign data corresponding to the selected or extracted item data. That is, the reference sign data extracting step (S200) may include: changing the word order of the item data to sign order; and searching for and matching the sign data corresponding to each word of the item data.
  • the terminal 100 holds data on sign word-order rules and may convert the extracted item data (i.e., a text sentence) into sign order based on the word-order rule data.
  • to do so, the terminal 100 may first divide the sentence into word units and then rearrange them; division into words may be performed based on the spacing in the sentence. Thereafter, the terminal 100 may search a database for each word to load the corresponding sign expression, and may generate the reference sign data by connecting one or more sign expressions, as sketched below.
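  • as an illustration only, the following minimal Python sketch shows this S200 flow for a sentence item. The reordering rule and the contents of SIGN_DB are hypothetical stand-ins for the patent's word-order rule data and sign-expression database, not the actual implementation:

```python
# Hypothetical sign-expression database: word -> stored sign expression.
SIGN_DB = {"I": "sign:I", "school": "sign:school", "go": "sign:go"}

def reorder_to_sign_order(words):
    # Toy rule: turn subject-verb-object text into subject-object-verb,
    # a word-order pattern common in many sign languages. A real system
    # would consult the stored word-order rule data instead.
    if len(words) == 3:
        subject, verb, obj = words
        return [subject, obj, verb]
    return words

def extract_reference_sign_data(sentence):
    words = sentence.split()                # divide into words by spacing
    ordered = reorder_to_sign_order(words)  # text order -> sign order
    # Search each word and connect the matched sign expressions.
    return [SIGN_DB[w] for w in ordered if w in SIGN_DB]

print(extract_reference_sign_data("I go school"))
# -> ['sign:I', 'sign:school', 'sign:go']
```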
  • the terminal 100 receives sensing data from at least one body movement measuring apparatus 200 (S300).
  • the at least one body movement measuring device 200 may include a glove-type measuring device, a body-mounted measuring sensor device, a vision sensor device, and the like.
  • the glove-type measuring device can be worn on both hands of the user.
  • the attached measuring sensor devices may be attached to the lower arm (i.e., from elbow to wrist) or the upper arm (i.e., from shoulder to elbow), the body parts used in signing. The devices may be attached to the upper arm and the lower arm, respectively, to measure the bending state, movement state, and positional relationship of the arm.
  • the sensing data receiving step (S300) may include: receiving sensing data of finger movement and wrist movement from the hand-worn measuring device; and receiving sensing data from the measuring sensor device attached to each body unit.
  • the hand-worn measuring device may be worn on both of the user's hands and may measure the bending state or movement of each finger and the bending or movement of the wrist (for example, the palm direction according to wrist rotation). The hand-worn measuring device may include bending sensors, an IMU sensor, and the like to measure the state or movement of the fingers and wrist.
  • the attachable measuring sensor device may include an inertial sensor and may be attached to each body unit to calculate its movement. That is, the measuring sensor devices attached to the upper arm and the lower arm acquire sensing data through their inertial sensors and transmit it to the terminal 100, and the terminal 100 can calculate the positional relationship between the upper arm and the lower arm based on the sensing data.
  • the vision sensor device may obtain, as sensing data, image data from which the terminal 100 measures the movement of the hands and arms.
  • the hand-worn measuring device or the attached measuring sensor device may include an identification mark recognizable by the vision sensor device, and the vision sensor device may measure the movement of one or more identification marks and transmit it to the terminal 100.
  • in this case, the method may include: receiving an image obtained by the vision sensor device, the image including both of the user's hands; and recognizing the positional relationship between the left hand and the right hand in the image.
  • the sensing data obtained by the vision sensor device may be used by the terminal 100 to calculate the positional relationship of the user's arms and hands, as described below.
  • the vision sensor device may be formed as a necklace type, a deck type placed in front of the user, a type mounted inside the terminal 100, or the like.
  • the vision sensor device may acquire the movement of both hands and arms in front of the user's body as an image.
  • when the vision sensor is of the deck type or is attached to or embedded in the terminal 100, it is placed in front of the user while the sign expression for the item data is performed (i.e., while input sign data is entered), so a front image including the movement of the user's hands and arms can be obtained.
  • the terminal 100 combines one or more pieces of the sensing data to generate input sign data (S400). That is, the terminal 100 may combine the sensing data acquired by the one or more body movement measuring apparatuses 200 to generate, as input sign data, the movement of all body parts to which the apparatuses are attached.
  • for example, from the sensing data measured by the measuring sensor devices attached to both arms, the terminal 100 may calculate the positional relationship between the upper arm and the lower arm that corresponds to the shape of the arms (for example, the bent state of each arm, the arrangement of each arm, and the type of movement of each arm).
  • the terminal 100 may also calculate the shape of the user's hands at each time point from the sensing data received from the hand-worn measuring device (for example, the bending state of each finger, the bending state of the wrist, and the rotation state of the wrist). Thereafter, the terminal 100 may generate the entire sign expression performed by the user (i.e., the input sign data) by combining the arrangement state of each arm with the shape of each hand, as sketched below.
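  • a minimal sketch of this combination step (S400), assuming per-time-point samples from a glove and from arm-attached sensors; the field names (fingers, wrist, upper, lower) are illustrative assumptions, not the patent's data format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SignFrame:
    t: float              # time point of the sample
    fingers: List[float]  # bend value per finger (glove)
    wrist: Tuple          # wrist bend and rotation (glove)
    upper: Tuple          # upper-arm orientation (attached sensor)
    lower: Tuple          # lower-arm orientation (attached sensor)

def combine(glove_samples, arm_samples):
    """Pair glove and arm samples taken at the same time points; the
    frame sequence is the input sign data for the measured body parts."""
    return [SignFrame(t=g["t"], fingers=g["fingers"], wrist=g["wrist"],
                      upper=a["upper"], lower=a["lower"])
            for g, a in zip(glove_samples, arm_samples)]
```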
  • when the terminal 100 receives the sensing data (i.e., image data) acquired by the vision sensor device, it may analyze the sensed image to calculate the positional relationship of the hands or arms.
  • the terminal 100 may generate the entire sign expression performed by the user (i.e., the input sign data) by combining the positional relationship of the hands or arms calculated from the image with the arm shapes and hand shapes.
  • also, when the terminal 100 receives the result of a reference posture or reference movement from the user, it may track the relative position of each measurement point (for example, each body point to which an attached measuring sensor device is fixed) with respect to the initial position (or reference position) identified through that posture or movement. The terminal 100 may then calculate the positional relationship of the hands or arms from their relative positions with respect to the reference position at the same time point, and combine this with the arm shapes and hand shapes to generate the entire sign expression performed by the user (i.e., the input sign data).
  • the terminal 100 compares the input sign data with the reference sign data to calculate a comparison result (S500). That is, the terminal 100 may calculate the result by comparing the reference sign data corresponding to the item data (i.e., the sign gesture serving as the standard) with the generated input sign data (i.e., the sign gesture performed by the user).
  • the reference sign data may correspond to one or more pieces of data obtained by measuring a reference user's movement for the dictionary sign gesture of the corresponding word. It may include, for example, per-axis acceleration, gyro, and geomagnetic sensor values over time, and image information values (for example, the position, tilt, and direction of a specific body part in the image acquired through the vision sensor device).
  • alternatively, instead of one specific piece of sensing data, the terminal may accumulate the sign gestures received from users as input sign data and perform learning using a statistical classification method or a machine learning classification method, such as a support vector machine, a decision tree, or deep learning, to generate a specific value or range corresponding to the reference sign data.
  • the terminal may calculate and compare the difference or distance between the two data with respect to their change over time. For example, the terminal may obtain reference sign data from the gestures of a real person wearing the measurement sensor devices and store the sensing values of each device at each time point. Thereafter, the terminal may calculate the difference by comparing the sensing value of each measurement sensor device in the input sign data with the corresponding value in the reference sign data. The difference between the input sign data and the reference sign data may also be calculated after changing the analysis domain of the data (for example, transforming to the frequency domain).
  • the terminal may compare the reference sign data with the input sign data by applying a dynamic time warping (DTW) algorithm.
  • the method of comparing the reference sign data and the input sign data is not limited to these; various methods may be applied. One standard realization of the DTW comparison is sketched below.
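  • as one concrete realization of such a comparison, the sketch below computes a standard DTW distance between two sensor traces. Treating each sample as a scalar is a simplifying assumption; real sensing data would be multi-dimensional per-device vectors:

```python
def dtw_distance(ref, inp):
    """Dynamic time warping distance between a reference trace and an
    input trace; a lower value means the input follows the reference
    motion more closely, even if performed at a different speed."""
    n, m = len(ref), len(inp)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - inp[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a reference sample
                                 d[i][j - 1],      # skip an input sample
                                 d[i - 1][j - 1])  # align both samples
    return d[n][m]

# The slower input still aligns perfectly under warping (distance 0.0).
print(dtw_distance([0, 1, 2, 3, 2], [0, 1, 1, 2, 3, 2]))
```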
  • when the item data is a word, the terminal 100 may calculate the match rate between the reference sign data and the input sign data for the word corresponding to the item data. That is, since the item data corresponds to a single sign expression and word order is not involved, the match rate of the gesture can be calculated by comparing the reference sign data and the input sign data one to one.
  • when the item data is a sentence, the comparison may include comparing, by the terminal 100, each of the one or more reference word data of the reference sign data with each of the one or more input word data of the input sign data to calculate match rates.
  • the user may input sign gestures in an order different from the word order corresponding to the item data.
  • to evaluate word-order accuracy, the terminal 100 must identify which word in the input sign data corresponds to each word of the reference sign data. To this end, the terminal 100 divides the reference sign data and the input sign data into word units (i.e., generates one or more reference word data and one or more input word data) and cross-compares all reference word data with all input word data to calculate the match rates.
  • through the match rates calculated by this mutual comparison, the terminal 100 may determine, in the evaluation step (S600) described later, which input word data matches specific reference word data.
  • to divide the input sign data into word units, the terminal may receive from the user a signal generated by operating an electrical switch, mechanical switch, or pressure sensor provided on or attached to a body movement measuring device. For example, the user may signal the end of a word by operating a switch provided at a specific position on the glove-type measuring device (for example, a sensing area easily pressed with the thumb), and the terminal recognizes this signal as the end of the word.
  • the terminal may instead recognize that a specific word has ended when the gesture for that word is held for a predetermined time. The terminal may also recognize that one word has ended and the next is being performed when the time typically taken to perform each word is exceeded. One such hold-time heuristic is sketched below.
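  • a minimal sketch of the hold-time heuristic, assuming a stream of motion-magnitude samples; MOTION_EPS and HOLD_SECONDS are illustrative thresholds, not values from the patent:

```python
MOTION_EPS = 0.05   # below this magnitude the hands count as still
HOLD_SECONDS = 0.5  # how long a pose must be held to end a word

def segment_words(samples, dt):
    """samples: motion magnitude per tick; dt: seconds per tick.
    Returns (start, end) index ranges, one per detected word."""
    words, start, still = [], None, 0.0
    for i, mag in enumerate(samples):
        if mag >= MOTION_EPS:
            if start is None:
                start = i           # motion resumed: a new word begins
            still = 0.0
        elif start is not None:
            still += dt
            if still >= HOLD_SECONDS:   # pose held long enough: word over
                words.append((start, i))
                start, still = None, 0.0
    if start is not None:               # close a word cut off at stream end
        words.append((start, len(samples)))
    return words
```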
  • the terminal 100 evaluates the item data based on the comparison result (S600).
  • the terminal 100 may calculate a score corresponding to the comparison result (i.e., the match rate between the reference word data and the input word data).
  • when the item data is a sentence, the terminal 100 evaluates the degree of word-order match based on the match rates obtained by mutual comparison of the reference word data and the input word data, and also evaluates the degree of match between the matched words.
  • as shown in FIG. 3, the evaluation step (S600) may include: matching each input word data with the reference word data having the highest match rate (S610); calculating a word-order result by accumulating the distance difference between the matched input word data and reference word data, the distance difference being the number of word positions the input word data must move to occupy the same sentence position as the matched reference word data (S620); calculating a word-match result based on the match rate for each input word data (S630); and calculating an evaluation score based on the word-match result and the word-order result (S640).
  • first, the terminal 100 may match each input word data with the reference word data having the highest match rate, based on the comparison of the reference word data and the input word data (S610). For example, as shown in FIG. 4, match rates are calculated by comparing the word data of the input sentence corresponding to the input sign data with the reference sentence corresponding to the reference sign data, and each input word is matched to the reference word with the highest match rate (i.e., input word 1 to reference word 3, input word 2 to reference word 2, and input word 3 to reference word 1).
  • thereafter, the terminal 100 may calculate the word-order result by accumulating the distance differences between the matched input word data and reference word data (S620).
  • the distance difference may be the number of word positions the specific input word data must move to be placed at the same sentence position as its matched reference word data (i.e., the reference word data with the highest match rate). Since the input word data with the highest match rate to particular reference word data is most likely the expression the user intended for that reference word, the terminal 100 can calculate the degree of word-order error, or the word-order match rate, from the distance differences between them. For example, as shown in FIG. 4, input word 3 is moved by two positions to take the place of reference word 1, and input word 1 is moved by two positions to take the place of reference word 3, so the word-order result, the total number of shifts, corresponds to four.
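  • the following sketch reproduces the FIG. 4 example for steps S610 and S620; the match-rate matrix values are hypothetical, chosen only so that input words 1, 2, and 3 best match reference words 3, 2, and 1 respectively:

```python
match_rate = [            # rows: input words 1-3, columns: reference words 1-3
    [0.10, 0.20, 0.95],   # input word 1 best matches reference word 3
    [0.15, 0.90, 0.10],   # input word 2 best matches reference word 2
    [0.92, 0.05, 0.20],   # input word 3 best matches reference word 1
]

def word_order_result(match_rate):
    shifts = 0
    for i, rates in enumerate(match_rate):
        # Matched reference word = highest match rate in the row (S610).
        j = max(range(len(rates)), key=rates.__getitem__)
        # Positions the input word must move to reach that slot (S620).
        shifts += abs(i - j)
    return shifts

print(word_order_result(match_rate))  # 2 + 0 + 2 = 4 shifts, as in FIG. 4
```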
  • the terminal 100 may also calculate a word-match result based on the match rate for each input word data (S630). That is, the terminal 100 may calculate the word-match result by reflecting the match rate between each pair of input word data and reference word data determined to match each other; the terminal 100 may hold evaluation criteria according to the match rate and calculate the word-match result accordingly. Thereafter, the terminal 100 may calculate an evaluation score reflecting the word-match result and the word-order result (S640).
  • the method may further include generating feedback data for the user based on the word-match result and the word-order result. That is, the terminal 100 may provide the user with feedback (i.e., commentary on incorrect answers) on which parts of the input sign data were wrong. For example, if the word order of the input sign data is wrong compared with the reference sign data, the terminal 100 may provide the user with an explanation of the correct order. In addition, when the match rate between specific input word data and its matched reference word data is at or below a specific value, the terminal 100 may present the reference word data corresponding to the correct answer on the screen and indicate which part of the user's gesture was wrong.
  • the method may further include determining a difficulty level of the next item data by reflecting the evaluation score.
  • by this, the terminal 100 may present item data matched to the user's level in sequence, preventing the user from losing interest in sign language learning.
  • the terminal 100 may calculate the movement of the user's arms or the positional relationship of the hands through an initial-position setting method. That is, the initial position of each attached measuring sensor device or hand-worn measuring device can be set, and the relative position of each device with respect to its initial position can be measured at each time point to determine the movement of the user's arms or the positional relationship between the two hands.
  • the sensing data receiving step (S300) may include: requesting the user to perform a specific reference posture or reference movement; and determining the initial position of each body movement measuring apparatus 200 according to that posture or movement. That is, the terminal 100 may set the initial position (reference position) used to evaluate sign gestures after receiving a specific reference posture or movement from the user. For example, the terminal 100 may request an attention posture (standing straight with the arms at the sides) from the user and set the state at the moment that posture is performed as the initial state.
  • the input sign data generating step (S400) may calculate the positional relationship between the left hand and the right hand by tracking the movement of body units relative to the initial position.
  • when the attached measurement sensor device includes an inertial sensor, the state at a specific time point may be determined by accumulating the user's movement measured by the inertial sensor. That is, the position at a specific time point may be calculated relative to the initial position from the magnitude and direction of the acceleration and the tilt angle accumulated by the measuring sensor device attached to the specific body part.
  • the terminal 100 may calculate the relative position at a first time point (the first time a relative position is calculated after the initial position is set) from the sensing data measured at that time point, relative to the initial position, and may calculate the relative position at a second time point (a calculation point after the first) from the sensing data measured at the second time point, relative to the position at the first time point. A minimal sketch of this accumulation follows.
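  • a minimal sketch of this accumulation, assuming the sensing data reduces to per-tick acceleration vectors; real devices would also fuse gyro and geomagnetic readings and correct drift:

```python
def track_position(initial_pos, accel_samples, dt):
    """initial_pos: (x, y, z) at calibration; accel_samples: (ax, ay, az)
    per tick; dt: seconds per tick. Returns the estimated position at
    each time point, each computed relative to the previous one."""
    pos = list(initial_pos)
    vel = [0.0, 0.0, 0.0]
    trajectory = [tuple(pos)]
    for a in accel_samples:
        for k in range(3):
            vel[k] += a[k] * dt    # integrate acceleration into velocity
            pos[k] += vel[k] * dt  # integrate velocity into position
        trajectory.append(tuple(pos))
    return trajectory

# Subtracting the left- and right-hand trajectories at each time point
# yields the positional relationship between the two hands.
```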
  • the sign language education method according to an embodiment may be implemented as a program (or application) to be executed in combination with the terminal 100, which is hardware, and stored in a medium.
  • for the terminal 100 to read the program and execute the methods implemented as a program, the program may include code written in a computer language, such as C, C++, Java, or machine language, that the terminal's processor (CPU) can read through its device interface.
  • the code may include functional code associated with the functions that define what is needed to execute the methods, and control code related to the execution procedure needed for the terminal's processor to execute those functions in a predetermined order.
  • the code may further include memory-reference code indicating at which location (address) of the terminal's internal or external memory the additional information or media needed for the processor to execute the functions should be referenced.
  • in addition, when the terminal's processor needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate using the terminal's communication module and what information or media to exchange during communication.
  • the storage medium is not a medium that stores data for a brief moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device.
  • examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage devices. That is, the program may be stored in various recording media on servers accessible to the terminal or in various recording media on the user's terminal. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a sign language education system, method, and program. A sign language education method according to an embodiment of the present invention comprises: a step (S100) of extracting, by a terminal, particular question data and providing same; a step (S200) of extracting reference sign language data corresponding to the question data; a step (S300) of receiving sensing data items from one or more body movement measurement devices; a step (S400) of combining the one or more sensing data items and generating input sign language data; a step (S500) of comparing the input sign language data and the reference sign language data and calculating a comparison result; and a step (S600) of performing evaluation with respect to the question data on the basis of the comparison result. The present invention enables a user's sign language skill to be enhanced by means of evaluation of the accuracy of actual sign language gestures made by the user.

Description

Sign Language Education System, Method and Program

The present invention relates to a sign language education system, method, and program, and more particularly, to a system, method, and program for evaluating and teaching the signing that a user performs in response to an item, using body movement measuring devices.

Visually impaired people can receive information only through hearing. From a video medium such as TV, the visually impaired can obtain information only through dialogue and sound effects, because visual information such as movement and behavior is unavailable to them. To address this, broadcasters may provide services such as screen-commentary (audio description) broadcasting.

Hearing-impaired people communicate through visually perceived sign language, or through text on a computer screen or on paper. However, text-based communication often fails to convey the full meaning because the grammar and expression systems of sign language and written language differ, causing problems such as information distortion and loss.

Communication between signers and non-signers relies mainly on sign language interpreters or on text. However, interpretation is limited by cost in practice, so simple text is used as the main medium when communicating with hearing-impaired people. Because of the differences between the grammar and expression systems of sign language and Korean, signers often cannot fully convey their meaning in text.

People who need to communicate with those who must use sign language (for example, hearing-impaired people) attempt to learn it. However, sign language differs in many ways (for example, in word order) from the grammar of spoken languages (i.e., non-sign conversational languages), so learners have difficulty.

In addition, a sign may fail to communicate if the gesture is inaccurate, and even an accurate gesture may fail to communicate if it does not follow the sign word order.

Accordingly, the present invention seeks to provide a sign language education system, method, and program that acquire the sign data a user performs for a presented item and determine whether it matches the reference sign data for that item, so that the user can learn accurate sign vocabulary and word order.
A sign language education method according to an embodiment of the present invention comprises: extracting, by a terminal, specific item data and presenting it; extracting reference sign data corresponding to the item data; receiving sensing data from at least one body movement measuring device; generating input sign data by combining one or more pieces of the sensing data; calculating a comparison result by comparing the input sign data with the reference sign data; and evaluating the item data based on the comparison result.

The extracting of the reference sign data may include extracting reference sign data stored in a database in the terminal in association with the item data.

When the item data corresponds to a sentence, the extracting of the reference sign data may include: changing the word order of the item data to sign order; and searching for and matching the sign data corresponding to each word of the item data.

When the body movement measuring device includes a glove-type measuring device and attached measuring sensor devices, the sensing data receiving step may include: receiving sensing data of finger movement and wrist movement from the glove-type measuring device; and receiving sensing data from the measuring sensor device attached to each body unit.

The sensing data receiving step may further include: requesting the user to perform a specific reference posture or reference movement; and determining the initial position of each body movement measuring device according to the reference posture or reference movement.

The input sign data generating step may calculate the positional relationship between the left hand and the right hand by tracking the movement of body units relative to the initial position.

When the body movement measuring device includes a vision sensor device, the method may further include: receiving an image obtained by the vision sensor device, the image including both of the user's hands; and recognizing the positional relationship between the left hand and the right hand in the image.

When the item data is a word, the evaluating step may include calculating the match rate between the reference sign data and the input sign data for the word corresponding to the item data.

When the item data is a sentence, the evaluating step may include: comparing, by the terminal, each of one or more reference word data of the reference sign data with each of one or more input word data of the input sign data to calculate match rates; matching each input word data with the reference word data having the highest match rate; calculating a word-match result based on the match rate for each input word data; calculating a word-order result by accumulating the distance difference between each input word data and its matched reference word data, the distance difference being the number of word positions the input word data must move to occupy the same sentence position as the reference word data with the highest match rate; and calculating an evaluation score reflecting the word-match result and the word-order result.

The method may further include generating feedback data for the user based on the word-match result and the word-order result.

The method may further include determining the difficulty level of the next item data by reflecting the evaluation score.

A sign language education program according to another embodiment of the present invention is combined with a terminal, which is hardware, to execute the sign language education method, and is stored in a medium.
According to the present invention as described above, the following effects are obtained.

First, users can have their actual signing assessed for accuracy, which helps improve their sign language skills.

Second, users can be evaluated not only on the accuracy of each word's gesture but also on the accuracy of the sign word order of a sentence, so they can accurately learn a sign word order that differs from Korean word order.

Third, if a sign database is added for each language, users can learn the sign languages of various countries through a single terminal.

Fourth, the positional relationship between the user's left and right hands can be obtained using a vision sensor or by tracking the movement of the body movement measuring devices from their initial positions, so the invention can evaluate whether the positional relationship between the user's hands is correct when performing a particular sign.

Fifth, item data can be built in variety so that users learn the sign items their situations require. Various content can also be combined and provided so that users enjoy learning sign language. In addition, sign evaluation allows items suited to the user's learning level to be presented, providing level-appropriate learning.
FIG. 1 is a block diagram of a sign language education system according to an embodiment of the present invention.

FIG. 2 is a flowchart of a sign language education method according to an embodiment of the present invention.

FIG. 3 is a flowchart of the process of evaluating item data that is a sentence, according to an embodiment of the present invention.

FIG. 4 is an exemplary diagram of calculating a word-order result based on the match rates of words, according to an embodiment of the present invention.
Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings. The advantages and features of the present invention, and the methods of achieving them, will become apparent from the embodiments described below in detail together with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be implemented in various forms; these embodiments are provided only so that the disclosure is complete and fully conveys the scope of the invention to those of ordinary skill in the art to which the invention pertains, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

Unless otherwise defined, all terms used in this specification (including technical and scientific terms) have the meanings commonly understood by those of ordinary skill in the art to which the invention belongs. Terms defined in commonly used dictionaries are not interpreted ideally or excessively unless expressly and specifically defined otherwise.

The terminology used herein is for describing embodiments and is not intended to limit the invention. In this specification, singular forms include plural forms unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" do not exclude the presence or addition of one or more components other than those mentioned.
본 명세서에서 신체움직임측정장치는 사용자의 신체움직임을 센싱하여 유무선통신을 통해 상기 단말기로 움직임데이터를 전송하는 장치에 해당한다. 신체움직임측정장치는 웨어러블 디바이스의 형태가 될 수 있으며, 신체의 여러 위치에 부착할 수 있는 센서장치가 될 수도 있다. 이외에도 신체움직임측정장치는 사용자의 신체움직임을 측정할 수 있는 다양한 형태의 장치가 될 수도 있다. 예를 들어, 측정하고자 하는 신체움직임이 손 부위의 움직임인 경우, 신체움직임측정장치는 손에 착용하는 손착용형으로 구현될 수 있다. 또한, 예를 들어, 측정하고자 하는 신체움직임이 양 팔의 움직임인 경우, 신체움직임측정장치는 각각의 신체영역에 부착할 수 있는 센서 패치 등이 될 수 있다. 또한, 여러 신체부위의 움직임을 종합적으로 측정하는 경우, 신체움직임측정장치는 손착용형(예를 들어, 장갑형) 측정장치, 신체부착형 측정센서장치 등의 여러 측정장치를 포함할 수 있다.In the present specification, the body movement measuring device corresponds to a device for sensing movement of a user and transmitting movement data to the terminal through wired or wireless communication. The body movement measuring device may be in the form of a wearable device, or may be a sensor device that can be attached to various positions of the body. In addition, the body movement measuring device may be a device of various types that can measure the body movement of the user. For example, when the body movement to be measured is a movement of the hand region, the body movement measuring apparatus may be implemented as a wearable type worn on the hand. Also, for example, when the body movement to be measured is the movement of both arms, the body movement measuring device may be a sensor patch or the like that can be attached to each body region. In addition, in the case of comprehensively measuring the movement of various body parts, the body movement measuring device may include various measuring devices such as a wearable type (for example, a glove type) measuring device and a body type measuring sensor device.
Hereinafter, a sign language education system, method, and program according to embodiments of the present invention will be described with reference to the drawings.
Fig. 1 is a block diagram of a sign language education system according to an embodiment of the present invention.
The sign language education system according to an embodiment of the present invention is implemented by the internal configuration of the terminal 100. The terminal 100 may be divided into a mobile terminal 100 and a fixed terminal 100 depending on whether it is movable, and any type of terminal 100 including the configuration described herein may be used. For example, the terminal 100 may be a mobile terminal 100 such as a cellular phone, a PCS phone (Personal Communication Service phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a palm PC (Palm Personal Computer), a personal digital assistant (PDA), a smartphone, a WAP phone (Wireless Application Protocol phone), a mobile game console, a tablet PC, a netbook, or a notebook computer, or a fixed terminal 100 such as a desktop PC or a television.
The terminal 100 includes all or some of a control unit 110, a communication unit 130, and an output unit 130. The terminal 100 is not limited to the components described above and may further include additional components.
The controller 110 typically controls the overall operation of the terminal 100. For example, it performs the control and processing related to data communication, image processing for playback on the display unit, and body movement evaluation (for example, evaluation of input sign data according to the user's body movement). The various functions performed by the controller 110 are described later.
The communication unit 130 receives sensing data from the body movement measuring device 200 and transfers the received sensing data to the controller 110. The communication unit 130 may also transmit output according to the evaluation result calculated based on the sensing data to the body movement measuring device 200.
The communication unit 130 may include a wired communication unit that is connected to the body movement measuring device 200 by wire to receive data, or a wireless communication unit that receives the movement data from the body movement measuring device 200 through a wireless communication method. The wireless communication unit may include a wireless Internet module or a short-range communication module.
The wireless Internet module is a module for wireless Internet access and may be built into or external to the terminal 100. Wireless Internet technologies such as WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE (Long Term Evolution), and LTE-A (Long Term Evolution-Advanced) may be used.
The short-range communication module is a module for short-range communication. Short-range communication technologies such as Bluetooth, BLE (Bluetooth Low Energy), Beacon, RFID (Radio Frequency Identification), NFC (Near Field Communication), infrared communication (IrDA, Infrared Data Association), UWB (Ultra Wideband), ZigBee, and NRF may be used.
The output unit 130 outputs information to be provided to the user and may include a display unit, a sound output unit, and the like.
The display unit displays (outputs) information processed by the terminal 100. The display unit may be combined with a touch sensor and implemented as a touch screen, in which case it can receive input operations from the user through touch manipulation. For example, the terminal 100 may select an item to be evaluated by receiving the user's touch at the point corresponding to specific item data in the item list displayed on the screen.
The various embodiments described herein may be implemented in a recording medium readable by a computer or a similar device using, for example, software, hardware, or a combination thereof.
The terminal 100 may further include a memory. The memory may store a program for the operation of the controller 110, and may also store input/output data or data generated while body movement evaluation is performed (for example, learning data accumulated by receiving and storing movement data). The memory may be included in the controller 110.
The memory may include at least one type of storage medium among flash memory, hard disk, multimedia card micro, card-type memory (for example, SD or XD memory), RAM (random access memory), SRAM (static random access memory), ROM (read-only memory), EEPROM (electrically erasable programmable read-only memory), PROM (programmable read-only memory), magnetic memory, magnetic disk, and optical disk. The mobile terminal 100 may also operate in connection with web storage that performs the storage function of the memory on the Internet.
According to a hardware implementation, the embodiments described herein may be implemented using at least one of ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, micro-controllers, microprocessors, and other electrical units for performing functions. In some cases, the embodiments described herein may be implemented by the controller 110 itself.
According to a software implementation, embodiments such as the procedures and functions described herein may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein.
The software code may be implemented as a software application written in a suitable programming language. The software code may be stored in the memory and executed by the controller 110.
Fig. 2 is a flowchart of a sign language education method according to an embodiment of the present invention.
Referring to Fig. 2, a sign language education method according to an embodiment of the present invention includes: a step in which the terminal 100 extracts and provides specific item data (S100); a step of extracting reference sign data corresponding to the item data (S200); a step of receiving sensing data from one or more body movement measuring devices 200 (S300); a step of generating input sign data by combining one or more pieces of the sensing data (S400); a step of comparing the input sign data with the reference sign data and calculating a comparison result (S500); and a step of performing an evaluation of the item data based on the comparison result (S600). The sign language education method according to an embodiment of the present invention is described below in order.
The terminal 100 extracts and provides specific item data (S100). That is, the terminal 100 extracts specific item data according to an item selection request received from the user. For example, the user may select item data by performing a specific manipulation with the body movement measuring device 200 (for example, when the body movement measuring device 200 is a hand-worn device, a hand gesture pointing at the point on the screen where the desired item data is displayed, followed by a gesture of bringing the hands together). When the display unit is a touch screen, the terminal 100 may display a list of various item data on the screen and receive a touch on specific item data from the user to select it. The terminal 100 may then provide the selected item data to the user through the display unit or the sound output unit.
The terminal 100 extracts reference sign data corresponding to the item data (S200). The reference sign data is the data against which the user's signing motion acquired by the body movement measuring device 200 (that is, the input sign data) is compared, and corresponds to a sign language expression whose meaning matches the specific item data. The terminal 100 may extract the reference sign data corresponding to the item data in various ways. However, the extraction method is not limited to the methods described below, and various other methods may be applied.
In one embodiment of the reference sign data extraction method, the terminal 100 may extract reference sign data stored in an internal database in association with the specific item data. That is, the terminal 100 may store, in its memory, sign motion data corresponding to specific item data (that is, reference sign data). The terminal 100 may store sign motion expressions corresponding to item data that is a specific word or a simple expression (for example, a greeting), as well as sign motion expressions corresponding to item data that is a specific sentence. When the item data is a sentence, the terminal 100 may store, in the database, reference sign data generated in advance to match sign language word order.
In addition, since the sign language expression for item data with the same meaning may differ from country to country, the terminal 100 may match and store country-specific sign language expressions for the same item data in the database. The terminal 100 may then extract and present, as the reference sign data, the sign language expression of the specific country set by the user.
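For illustration only, this database matching can be pictured as a simple keyed lookup, with country and item text as the key; the structure and all entries below are hypothetical placeholders, not contents of an actual database.

```python
# Hypothetical in-memory stand-in for the terminal's database of
# reference sign data, keyed by (country, item text). Entries here are
# invented placeholders; real data would be recorded sensor sequences.
REFERENCE_DB = {
    ("KR", "hello"): "<recorded sign motion for 'hello', Korean Sign Language>",
    ("US", "hello"): "<recorded sign motion for 'hello', American Sign Language>",
    ("KR", "thank you"): "<recorded sign motion for 'thank you', KSL>",
}

def extract_reference(item_text, country="KR"):
    """Return the stored reference sign data for an item, if any."""
    return REFERENCE_DB.get((country, item_text))

print(extract_reference("hello"))        # KSL entry
print(extract_reference("hello", "US"))  # ASL entry
```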
In another embodiment of the reference sign data extraction method, when the item data is a sentence, the terminal 100 may generate the reference sign data corresponding to the selected or extracted item data directly. That is, the reference sign data extraction step (S200) may include: changing the word order of the item data to sign language word order; and searching for and matching sign data corresponding to each word of the item data. The terminal 100 contains data on sign language word order rules and can convert the extracted item data (that is, a text sentence) into sign language word order based on the word order rule data. To rearrange the words constituting the sentence into sign language word order, the terminal 100 may first split the sentence into word units and then rearrange them; the splitting may be performed based on the spacing in the sentence. The terminal 100 may then look up each word in the database to load the corresponding sign expression, and generate the reference sign data by concatenating one or more sign expressions.
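For illustration only, this sentence-based extraction can be sketched as follows: split on spacing, rearrange with a word order rule, and concatenate per-word sign expressions. The rule and the sign entries below are invented stand-ins; real word order rule data would be far richer.

```python
# A minimal sketch of step S200 for sentence items: split the sentence
# into words by spacing, rearrange them with an (invented) word order
# rule, then look up each word's sign expression and concatenate.

SIGN_DB = {  # hypothetical per-word sign expressions
    "I": "<sign:I>", "school": "<sign:school>", "go": "<sign:go>",
}

def to_sign_order(words):
    # Invented toy rule for the sketch: push verbs to the end.
    verbs = [w for w in words if w in ("go",)]
    rest = [w for w in words if w not in verbs]
    return rest + verbs

def build_reference(sentence):
    words = sentence.split()          # split on spacing
    ordered = to_sign_order(words)    # apply the word order rule data
    return [SIGN_DB[w] for w in ordered if w in SIGN_DB]

print(build_reference("I go school"))
# ['<sign:I>', '<sign:school>', '<sign:go>']
```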
The terminal 100 receives sensing data from one or more body movement measuring devices 200 (S300). The one or more body movement measuring devices 200 may include a glove-type measuring device, a body-attached measuring sensor device, a vision sensor device, and the like. The glove-type measuring device may be worn on both of the user's hands. The attached measuring sensor device may be attached to the body parts used in signing, namely the forearms (from elbow to wrist) or upper arms (from shoulder to elbow) on both sides. For example, a body-attached measuring sensor device may be attached to the upper arm and the forearm, respectively, to measure the bending state, movement state, and positional relationship of the arm.
When the body movement measuring device 200 includes a hand-worn measuring device and an attached measuring sensor device, the sensing data receiving step (S300) may include: receiving sensing data of finger movement and wrist movement from the hand-worn measuring device; and receiving sensing data from the attached measuring sensor device attached to each body unit. The hand-worn measuring device may be worn on both of the user's hands and may measure the bending state or movement of each finger and the bending or movement of the wrist (for example, the direction the palm faces according to wrist rotation). The hand-worn measuring device may include bending sensors, IMU sensors, and the like to measure the state or movement of the fingers and wrist.
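For illustration only, a single sensing sample from such a hand-worn device might be pictured as the small record below; the field layout is an assumption for the sketch, not the device's actual data format.

```python
# Hypothetical layout of one sensing sample from a hand-worn measuring
# device: five finger bend values plus a wrist orientation reading.
from dataclasses import dataclass
from typing import List

@dataclass
class HandSample:
    t: float                  # timestamp in seconds
    finger_bend: List[float]  # 0.0 = straight .. 1.0 = fully bent, thumb..pinky
    wrist_pitch: float        # degrees; bend of the wrist
    wrist_roll: float         # degrees; rotation (palm-facing direction)

sample = HandSample(t=0.02, finger_bend=[0.1, 0.9, 0.9, 0.8, 0.7],
                    wrist_pitch=10.0, wrist_roll=-80.0)
print(sample)
```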
The attached measuring sensor device may include an inertial sensor, so that it can be attached to each body unit to calculate movement. That is, the attached measuring sensor devices attached to the upper arm and the forearm acquire sensing data through their inertial sensors and transmit it to the terminal 100, and the terminal 100 may calculate the positional relationship between the upper arm and the forearm based on the sensing data.
When the body movement measuring device 200 includes a vision sensor device, the vision sensor device may acquire, as sensing data, image data from which the terminal 100 measures the movement of the hands and arms. The hand-worn measuring device or the attached measuring sensor device may also include identification markers recognizable by the vision sensor device, and the vision sensor device may measure the movement of one or more identification markers and transmit it to the terminal 100. To this end, an embodiment of the present invention may further include: an image receiving step of receiving an image acquired by the vision sensor device, the image including both of the user's hands; and a step of recognizing the positional relationship between the left hand and the right hand in the image. The sensing data acquired by the vision sensor device may be used by the terminal 100 to calculate the positional relationship of the user's arms and hands, as described below.
The vision sensor device may be a necklace type, a deck type placed in front of the user, a type mounted inside the terminal 100, or the like. When the vision sensor device is a necklace type, it may capture the movement of both hands and arms in front of the user's body as an image. When the vision sensor is a deck type or is attached to or included in the terminal 100, it is placed in front of the user when the sign language expression for the item data is input (that is, when the input sign data is entered) and may acquire a frontal image including the movement of the user's hands and arms.
The terminal 100 combines one or more pieces of the sensing data to generate input sign data (S400). That is, the terminal 100 may combine the sensing data acquired by one or more body movement measuring devices 200 to generate, as input sign data, the movement of all the body parts to which the body movement measuring devices 200 are attached. For example, the terminal 100 may calculate the shape of the arm at each time point (for example, the positional relationship between the upper arm and forearm corresponding to the bending state of the arm, the placement of each arm, and the movement pattern of each arm) from the sensing data measured by the attached measuring sensor devices on both arms. The terminal 100 may also calculate the shape of the user's hand at each time point from the sensing data received from the hand-worn measuring device (for example, the bending state of each finger, the wrist bending state, and the wrist rotation state at each time point). The terminal 100 may then combine the placement of each arm and the shape of each hand to generate the full sign language expression performed by the user (that is, the input sign data).
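For illustration only, this combination step can be sketched as merging time-aligned samples from each device into one feature vector per time point; the device names and sample values below are invented.

```python
# A sketch of step S400: merge time-aligned samples from several
# devices into one feature vector per time point. Real data would be
# sensor readings; the per-device streams below are invented.

def combine(streams):
    """streams: dict name -> list of per-time-point feature lists,
    all sampled on the same clock."""
    length = min(len(s) for s in streams.values())
    combined = []
    for i in range(length):
        frame = []
        for name in sorted(streams):   # fixed device order
            frame.extend(streams[name][i])
        combined.append(frame)
    return combined  # the input sign data: one vector per time point

streams = {
    "glove_left":  [[0.1, 0.2], [0.1, 0.3]],
    "glove_right": [[0.9, 0.8], [0.9, 0.7]],
    "arm_left":    [[15.0], [18.0]],
}
print(combine(streams))
# [[15.0, 0.1, 0.2, 0.9, 0.8], [18.0, 0.1, 0.3, 0.9, 0.7]]
```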
When the terminal 100 receives sensing data (that is, image data) acquired by the vision sensor device, the terminal 100 may analyze the sensed image acquired through the vision sensor to calculate the positional relationship of the hands or arms. The terminal 100 may combine the positional relationship of the hands or arms calculated from the image with the arm shape and hand shape to generate the full sign language expression performed by the user (that is, the input sign data).
As described below, when the terminal 100 receives the result of the user performing a reference posture or reference movement, the terminal 100 may calculate, based on the initial position (or reference position) identified through the reference posture or reference movement, the relative position of each measurement point (for example, the body point to which a body-attached measuring sensor device is attached) with respect to the initial position at each time point while the user performs the sign language expression. In this way, the terminal 100 may calculate the positional relationship of the hands or arms based on their relative positions with respect to the reference position at the same time point, and combine this with the arm shape and hand shape to generate the full sign language expression performed by the user (that is, the input sign data).
The terminal 100 compares the input sign data with the reference sign data and calculates a comparison result (S500). That is, the terminal 100 may compare the reference sign data corresponding to the item data (that is, the reference signing motion) with the generated input sign data (that is, the signing motion performed by the user) and calculate the result. The reference sign data may correspond to one or more sets of data obtained in advance by measuring a reference user's movement for the signing motion corresponding to the word, and may include per-axis accelerometer values over time, gyro sensor values, geomagnetic sensor values, image information values (for example, data values such as the position, inclination, and direction of a specific body part within the image acquired through the vision sensor device), and the like. The terminal may also, rather than relying on a single piece of sensing data, accumulate the signing motions received as input sign data from users and perform learning on them using statistical classification techniques or machine learning classification techniques such as support vector machines, decision trees, or deep learning, to generate a specific value or a specific range corresponding to the reference sign data.
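For illustration only, the learning-based variant can be sketched with a support vector machine, assuming scikit-learn is available; the feature vectors and labels below are invented summaries of accumulated attempts, not real measurements.

```python
# A sketch of the classification-based variant: accumulated signing
# attempts (here, invented 2-D feature vectors) train a support vector
# machine whose learned regions stand in for per-sign reference ranges.
from sklearn.svm import SVC

# invented feature vectors summarizing recorded attempts per sign
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["hello", "hello", "thanks", "thanks"]

clf = SVC(kernel="rbf")
clf.fit(X, y)
print(clf.predict([[0.15, 0.15], [0.85, 0.85]]))  # ['hello' 'thanks']
```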
In one embodiment of the method of comparing the reference sign data with the input sign data, the terminal may calculate and compare the difference or distance between the two data series as they change over time. For example, the terminal may receive reference sign data through the motion of an actual person wearing the measuring sensor devices and store the sensor readings of the measuring sensor devices at each time point. The terminal may then calculate the difference by comparing the sensor readings of each measuring sensor device according to the input sign data with those according to the reference sign data. The difference may also be calculated after transforming the input sign data and reference sign data into another domain for analysis (for example, into the frequency domain). The terminal may also apply a DTW (Dynamic Time Warping) algorithm to compare the reference sign data with the input sign data. However, the method of comparing the reference sign data with the input sign data is not limited to these, and various other methods may be applied.
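For illustration only, the DTW comparison mentioned above can be sketched in its textbook form over two sequences of feature vectors; this is a generic formulation, not the invention's specific implementation.

```python
# A minimal Dynamic Time Warping distance between two sequences of
# feature vectors (textbook formulation; a sketch only).
import math

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # per-frame distance
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

reference = [[0.0], [1.0], [2.0], [1.0]]
attempt   = [[0.0], [0.0], [1.0], [2.0], [1.0]]  # same shape, slower start
print(dtw(reference, attempt))  # small value: time-warped but similar
```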
In one embodiment of the comparison result calculation step (S500), when the item data is a word or a short expression (for example, an idiomatic expression such as a greeting), the terminal 100 may calculate the match rate between the reference sign data of the word corresponding to the item data and the input sign data. That is, since the item data corresponds to a single sign expression and no word order is involved, the reference sign data and the input sign data can be compared one to one to calculate the match rate of the motion.
In another embodiment of the comparison result calculation step (S500), when the item data is a sentence, the method may include a step in which the terminal 100 compares each of one or more reference word data of the reference sign data with each of one or more input word data of the input sign data to calculate match rates. The user may input the signing motions in an order different from the word order corresponding to the item data. For the terminal 100 to evaluate the accuracy of the word order, it must identify which word unit in the input sign data corresponds to each word unit of the reference sign data. To this end, the terminal 100 splits the reference sign data and the input sign data into word units (that is, generates one or more reference word data and one or more input word data) and cross-compares all reference word data with all input word data to calculate match rates. Through the match rates calculated by the pairwise comparison between word units, the terminal 100 may determine, in the evaluation step (S600) described later, which input word data matches each reference word data.
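For illustration only, this pairwise comparison can be pictured as a match-rate matrix between every reference word and every input word; the motions and the similarity measure below are toy stand-ins (a real system could, for example, use the DTW comparison above per word pair).

```python
# A sketch of the pairwise word-unit comparison: every reference word's
# motion is compared against every input word's motion, producing a
# match-rate matrix used later to pair words up.

def match_rate(ref_seq, inp_seq):
    # Toy per-frame comparison; 1.0 means identical.
    n = min(len(ref_seq), len(inp_seq))
    diff = sum(abs(r - i) for r, i in zip(ref_seq, inp_seq)) / max(n, 1)
    return 1.0 / (1.0 + diff)

ref_words = {"ref1": [0, 1, 2], "ref2": [5, 5, 5], "ref3": [9, 8, 9]}
inp_words = {"inp1": [9, 8, 8], "inp2": [5, 5, 4], "inp3": [0, 1, 2]}

matrix = {(r, i): match_rate(rs, es)
          for r, rs in ref_words.items()
          for i, es in inp_words.items()}

for r in ref_words:  # best input word for each reference word
    best = max(inp_words, key=lambda i: matrix[(r, i)])
    print(r, "->", best, round(matrix[(r, best)], 2))
# ref1 -> inp3 1.0, ref2 -> inp2 0.75, ref3 -> inp1 0.75
```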
In one embodiment of the method of splitting the input sign data into word units, the terminal may recognize the end of a particular word unit by receiving a signal generated when the user operates an electrical switch, mechanical switch, or pressure sensor provided on or attached to the body movement measuring device. For example, the user may input the end of a word unit to the terminal by operating a switch provided at a specific position on the glove-type measuring device (for example, on the index finger area of the glove-type measuring device, where it can easily be operated with the thumb).
In another embodiment of the method of splitting the input sign data into word units, the terminal may recognize the end of a word unit when the user makes a predefined specific gesture. In yet another embodiment, the terminal may recognize that a particular word unit has ended when the user holds the pose for that word unit for a certain period of time after performing it. In still another embodiment, the terminal may recognize that a particular word unit has ended and the next one has begun when the time range generally taken for each word unit (or word) is exceeded.
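For illustration only, the hold-based variant can be sketched as a pause detector over frame-to-frame motion; the threshold and hold length below are invented parameters.

```python
# A sketch of splitting input sign data into word units by detecting
# holds: when consecutive frames barely change for `hold_frames` in a
# row, the current word unit is considered finished.

def split_on_holds(frames, threshold=0.05, hold_frames=3):
    words, current, still, prev = [], [], 0, None
    for f in frames:
        if prev is not None and abs(f - prev) < threshold:
            still += 1          # another near-motionless frame
        else:
            still = 0
        current.append(f)
        if still >= hold_frames:
            words.append(current)   # hold detected: close the word unit
            current, still = [], 0
        prev = f
    if current:
        words.append(current)       # trailing partial word unit
    return words

motion = [0.0, 0.4, 0.8, 0.8, 0.8, 0.8, 0.2, 0.6, 0.6, 0.6, 0.6]
print(len(split_on_holds(motion)))  # 2 word units
```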
The terminal 100 performs an evaluation of the item data based on the comparison result (S600). When the item data is a word or a short expression (for example, an idiomatic expression such as a greeting), the terminal 100 may calculate a score corresponding to the comparison result (that is, the match rate between the reference word data and the input word data).
When the item data is a sentence, the terminal 100 may evaluate the degree of word order agreement and, for matched word units, the degree of per-word agreement, based on the match rates obtained from the pairwise comparison between the reference word data and the input word data. That is, as shown in Fig. 3, an embodiment of the evaluation step (S600) may include: matching each input word data with the reference word data having the highest match rate (S610); calculating a word order match result by accumulating the distance differences between the matched input word data and reference word data, where the distance difference is the number of word positions the input word data must be moved to be placed at the same position in the sentence as the reference word data with the highest match rate (that is, the matched reference word data) (S620); calculating a word match result based on the match rate of each input word data (S630); and calculating an evaluation score reflecting the word match result and the word order match result (S640).
First, the terminal 100 may match each input word data with the reference word data having the highest match rate, based on the comparison results between the reference word data and the input word data (S610). For example, as in Fig. 4, the match rates are calculated by cross-comparing the word data of the input sentence corresponding to the input sign data with the word data of the reference sentence corresponding to the reference sign data, and each input word is matched with the reference word having the highest match rate (that is, input word 1 with reference word 3, input word 2 with reference word 2, and input word 3 with reference word 1). The terminal 100 may then calculate the word order match result by accumulating the distance differences between the matched input word data and reference word data (S620). The distance difference may be the number of word positions a particular input word data must be moved to be placed at the same position in the sentence as the reference word data with the highest match rate (that is, the matched reference word data). Since the input word data with the highest match rate to a given reference word data is most likely the expression the user intended as that reference word data, the terminal 100 may calculate the distance difference between them to obtain the degree of word order error or the word order match rate. For example, as in Fig. 4, input word 3 is moved by two word positions to be placed at the position of reference word 1, and input word 1 is moved by two word positions to be placed at the position of reference word 3, so the word order match result, which is the total number of moves (shifts), is 4. The terminal 100 may then calculate the word match result based on the match rate of each input word data (S630). That is, the terminal 100 may calculate the word match result by reflecting the match rates of the input word data and reference word data determined to correspond to each other. The terminal 100 may include evaluation criteria according to match rate and calculate the word match result accordingly. The terminal 100 may then calculate an evaluation score reflecting the word match result and the word order match result (S640).
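For illustration only, the Fig. 4 computation can be reproduced in miniature: pair each input word with its best reference word, accumulate the shift distances into the word order match result, and blend it with the per-word match rates. The match rates and the weighting are invented; only the pairing and the shift count of 4 follow the example above.

```python
# A sketch of S610-S640 using the Fig. 4 pairing: input words 1/2/3
# (positions 0/1/2) match reference words 3/2/1 (positions 2/1/0).

matching = {0: 2, 1: 1, 2: 0}               # input pos -> matched reference pos
match_rates = {0: 0.92, 1: 0.88, 2: 0.95}   # invented per-pair match rates

# S620: total word positions moved (Fig. 4: 2 + 0 + 2 = 4 shifts)
order_result = sum(abs(inp_pos - ref_pos)
                   for inp_pos, ref_pos in matching.items())

# S630: word match result as the mean per-pair match rate
word_result = sum(match_rates.values()) / len(match_rates)

# S640: combined score; the 0.5 order weight and the n*n//2 worst-case
# normalizer (maximum total displacement for n words) are invented
n = len(matching)
score = 100 * word_result * (1 - 0.5 * order_result / (n * n // 2))
print(order_result, round(word_result, 2), round(score, 1))  # 4 0.92 45.8
```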
The method may further include generating feedback data for the user based on the word match result and the word order match result. That is, the terminal 100 may provide the user with feedback on which part of the entered input sign data was wrong (that is, an explanation of the incorrect answer). For example, when the word order of the input sign data is wrong compared to the reference sign data, the terminal 100 may provide the user with an explanation of the correct word order. When the match rate between a particular input word data and its matched reference word data is below a specific value, the terminal 100 may display the reference word data corresponding to the correct answer on the screen and indicate on screen which part (that is, which body movement) was wrong.
The method may further include determining the difficulty of the next item data by reflecting the evaluation score. In this way, the terminal 100 can provide item data matched to the user's level in sequence, preventing the user from losing interest because sign language learning feels too difficult.
Even when the body movement measuring device 200 does not include a vision sensor device, the terminal 100 may calculate the movement of the user's arms or the positional relationship of the hands through an initial position setting method. That is, the initial position of the attached measuring sensor device or the hand-worn measuring device is set, and the position of each device relative to its initial attachment position is measured at each time point to determine the movement of the user's arms or the positional relationship of the hands.
In an embodiment, to set the initial position, the sensing data receiving step (S300) may further include: requesting the user to perform a specific reference posture or reference movement; and determining the initial position of each body movement measuring device 200 according to the reference posture or reference movement. That is, the terminal 100 may receive a specific reference posture or reference movement from the user and set the initial position (reference position) used to judge subsequent signing motions. For example, the terminal 100 may ask the user to stand at attention and set the user's state while holding that posture as the initial state.
Also, in an embodiment, to calculate the position of each point relative to the initial position, the input sign data generation step (S400) may track the movement of each body unit with respect to the initial position and calculate the positional relationship between the left hand and the right hand. For example, when the attached measuring sensor device includes an inertial sensor, the state at a specific time point may be determined by accumulating the user's movement measured by the inertial sensor. That is, the position at a specific time point relative to the initial position may be calculated based on the accumulated acceleration magnitudes and directions, inclination angles, and so on measured by the measuring sensor device attached to the specific body part. Specifically, the terminal 100 may calculate the relative position at a first time point (the first time point at which a relative position is calculated after the initial position is obtained) with respect to the initial position based on the sensing data measured at the first time point, and calculate the relative position at a second time point (the next relative position calculation time point after the first) with respect to the relative position at the first time point based on the sensing data measured at the second time point.
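For illustration only, this time-point-by-time-point update can be sketched as one-dimensional dead reckoning; gravity compensation and drift correction, which a real inertial pipeline needs, are omitted here.

```python
# A 1-D dead-reckoning sketch of the relative position update: the
# position at each time point is computed from the previous time
# point's position plus integrated motion.

def track(accel_samples, dt=0.01, p0=0.0):
    """accel_samples: acceleration readings (m/s^2) at fixed dt."""
    positions, v, p = [], 0.0, p0    # p0: initial (reference) position
    for a in accel_samples:
        v += a * dt                  # integrate acceleration -> velocity
        p += v * dt                  # integrate velocity -> position
        positions.append(p)          # position relative to the start
    return positions

# accelerate forward, then brake: the sensor point moves and stops
samples = [2.0] * 50 + [-2.0] * 50
path = track(samples)
print(round(path[49], 3), round(path[-1], 3))  # 0.255 0.5
```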
The sign language education method according to an embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with the terminal 100, which is hardware, and stored in a medium.
For the terminal 100 to read the program and execute the methods implemented as a program, the program may include code, written in a computer language such as C, C++, JAVA, or machine language, that the processor (CPU) of the client can read through the device interface of the client. Such code may include functional code related to functions defining the operations necessary to execute the methods, and may include execution-procedure-related control code necessary for the processor of the client to execute those operations in a predetermined procedure. The code may further include memory-reference-related code indicating where (at which address) in the client's internal or external memory the additional information or media necessary for the client's processor to execute the operations should be referenced. When the client's processor needs to communicate with any other remote computer or server to execute the operations, the code may further include communication-related code indicating how to communicate with the other remote computer or server using the communication module of the client, and what information or media should be transmitted and received during communication.
The storage medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device. Specifically, examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage device. That is, the program may be stored in various recording media on various servers accessible by the client, or in various recording media on the user's client. The medium may also be distributed over network-connected computer systems so that computer-readable code is stored in a distributed manner.
The present invention as described above has the following various effects.
First, the user can have the signing motions they actually perform evaluated for accuracy, which helps improve their sign language skills.
Second, through the present invention, the user can be evaluated not only on the accuracy of the motion for each word, but also on the accuracy of the sign language word order of a particular sentence. The user can therefore accurately learn sign language word order, which differs from Korean word order.
Third, through the present invention, by merely adding a database for each language's sign language, the user can learn the sign languages of various countries through a single terminal.
Fourth, the positional relationship between the user's left hand and right hand can be obtained through the vision sensor or the initial measurement position, so the present invention can evaluate whether the positional relationship between the user's hands is correct.
Fifth, by building a variety of item data, the user can learn exactly the sign language items needed for their situation. The items can also be provided in combination with various content so that the user enjoys learning sign language. In addition, through sign language evaluation, items suited to the user's learning level can be presented, providing the user with level-appropriate learning.
While the embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention pertains will understand that the present invention can be practiced in other specific forms without changing its technical idea or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (12)

  1. A sign language education method comprising: extracting and providing, by a terminal, specific item data;
    extracting reference sign data corresponding to the item data;
    receiving sensing data from one or more body movement measuring devices;
    generating input sign data by combining one or more pieces of the sensing data;
    comparing the input sign data with the reference sign data and calculating a comparison result; and
    performing an evaluation of the item data based on the comparison result.
  2. The method of claim 1,
    wherein the reference sign data extraction step comprises
    extracting reference sign data stored in a database in the terminal in association with the item data.
  3. The method of claim 1,
    wherein, when the item data corresponds to a sentence,
    the reference sign data extraction step comprises:
    changing the word order of the item data to sign language word order; and
    searching for and matching sign data corresponding to each word of the item data.
  4. The method of claim 1,
    wherein, when the body movement measuring device includes a glove-type measuring device and an attached measuring sensor device,
    the sensing data receiving step comprises:
    receiving sensing data of finger movement and wrist movement from the glove-type measuring device; and
    receiving sensing data from the measuring sensor device attached to each body unit.
  5. The method of claim 4,
    wherein the sensing data receiving step further comprises:
    requesting the user to perform a specific reference posture or reference movement; and
    determining the initial position of each body movement measuring device according to the reference posture or reference movement.
  6. The method of claim 5,
    wherein the input sign data generation step
    tracks the movement of each body unit with respect to the initial position and calculates the positional relationship between the left hand and the right hand.
  7. The method of claim 4,
    wherein, when the body movement measuring device includes a vision sensor device, the method further comprises:
    an image receiving step of receiving an image acquired by the vision sensor device, the image including both of the user's hands; and
    recognizing the positional relationship between the left hand and the right hand in the image.
  8. The method of claim 1,
    wherein the evaluation step comprises,
    when the item data is a word,
    calculating the match rate between the reference sign data of the word corresponding to the item data and the input sign data.
  9. The method of claim 1,
    wherein the evaluation step comprises,
    when the item data is a sentence:
    comparing, by the terminal, each of one or more reference word data of the reference sign data with each of one or more input word data of the input sign data to calculate match rates;
    matching each input word data with the reference word data having the highest match rate;
    calculating a word order match result by accumulating the distance differences between the input word data and the reference word data, wherein the distance difference is the number of word positions a particular input word data is moved to be placed at the same position in the sentence as the reference word data with the highest match rate;
    calculating a word match result based on the match rate of the input word data matched with each reference word data; and
    calculating an evaluation score reflecting the word match result and the word order match result.
  10. The method of claim 9,
    further comprising generating feedback data for the user based on the word match result and the word order match result.
  11. The method of claim 9,
    further comprising determining the difficulty of the next item data by reflecting the evaluation score.
  12. A sign language education program combined with a terminal, which is hardware, and stored in a medium to execute the method of any one of claims 1 to 11.
PCT/KR2015/010744 2015-10-13 2015-10-13 Sign language education system, method and program WO2017065324A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/KR2015/010744 WO2017065324A1 (en) 2015-10-13 2015-10-13 Sign language education system, method and program
KR1020157029063A KR101793607B1 (en) 2015-10-13 2015-10-13 System, method and program for educating sign language

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2015/010744 WO2017065324A1 (en) 2015-10-13 2015-10-13 Sign language education system, method and program

Publications (1)

Publication Number Publication Date
WO2017065324A1 true WO2017065324A1 (en) 2017-04-20

Family

ID=58518341

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/010744 WO2017065324A1 (en) 2015-10-13 2015-10-13 Sign language education system, method and program

Country Status (2)

Country Link
KR (1) KR101793607B1 (en)
WO (1) WO2017065324A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200094570A (en) 2019-01-30 2020-08-07 한밭대학교 산학협력단 Sign Language Interpretation System Consisting Of Sign Language Gloves And Language Converting Glasses
KR102436239B1 (en) * 2020-11-28 2022-08-24 동서대학교 산학협력단 Hand shape matching VR sign language education system using gesture recognition technology
KR102576358B1 (en) * 2022-12-23 2023-09-11 주식회사 케이엘큐브 Learning data generating device for sign language translation and method of operation thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19990081035A (en) * 1998-04-24 1999-11-15 정선종 Systems and methods for communication of the blind and deaf
JP2000330467A (en) * 1999-05-18 2000-11-30 Hitachi Ltd Sign language teaching device, sign language teaching method and recording medium recorded with the method
US20030191779A1 (en) * 2002-04-05 2003-10-09 Hirohiko Sagawa Sign language education system and program therefor
KR20070081479A (en) * 2006-02-13 2007-08-17 구자효 Self learning device for talking with the hands and self learning method using the same
KR100953979B1 (en) * 2009-02-10 2010-04-21 김재현 Sign language learning system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457673A (en) * 2019-06-25 2019-11-15 北京奇艺世纪科技有限公司 A kind of natural language is converted to the method and device of sign language
CN110457673B (en) * 2019-06-25 2023-12-19 北京奇艺世纪科技有限公司 Method and device for converting natural language into sign language

Also Published As

Publication number Publication date
KR101793607B1 (en) 2017-11-20
KR20170054198A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
US10446059B2 (en) Hand motion interpretation and communication apparatus
US20160042228A1 (en) Systems and methods for recognition and translation of gestures
US10585488B2 (en) System, method, and apparatus for man-machine interaction
US7492367B2 (en) Apparatus, system and method for interpreting and reproducing physical motion
Bukhari et al. American sign language translation through sensory glove; signspeak
CN104850542B (en) Non-audible voice input correction
WO2017065324A1 (en) Sign language education system, method and program
CN109670174B (en) Training method and device of event recognition model
WO2020166896A1 (en) Electronic apparatus and controlling method thereof
CN110827826B (en) Method for converting words by voice and electronic equipment
KR20140143034A (en) Method for providing service based on a multimodal input and an electronic device thereof
WO2020262800A1 (en) System and method for automating natural language understanding (nlu) in skill development
WO2018080228A1 (en) Server for translation and translation method
Watanabe et al. Advantages and drawbacks of smartphones and tablets for visually impaired people——analysis of ICT user survey results——
WO2019190076A1 (en) Eye tracking method and terminal for performing same
Vaidya et al. Design and development of hand gesture based communication device for deaf and mute people
CN207718803U (en) Multiple source speech differentiation identifying system
WO2015037871A1 (en) System, server and terminal for providing voice playback service using text recognition
EP3467820A1 (en) Information processing device and information processing method
KR102009150B1 (en) Automatic Apparatus and Method for Converting Sign language or Finger Language
CN113409770A (en) Pronunciation feature processing method, pronunciation feature processing device, pronunciation feature processing server and pronunciation feature processing medium
AU2021101436A4 (en) Wearable sign language detection system
CN111027353A (en) Search content extraction method and electronic equipment
WO2021118184A1 (en) User terminal and control method therefor
Kala et al. Development of device for gesture to speech conversion for the mute community

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20157029063

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15906288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/08/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15906288

Country of ref document: EP

Kind code of ref document: A1