KR20160062913A - System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device - Google Patents
- Publication number
- KR20160062913A (Application No. KR1020140166165A)
- Authority
- KR
- South Korea
- Prior art keywords
- axis
- coordinate information
- position coordinate
- hand
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/04—Devices for conversing with the deaf-blind
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A sign language translation system improves the accuracy of sign language translation performed with a Leap Motion device by reflecting the mobility of the fingers when translating sign language gestures, so that a wide range of words can be translated effectively and accurately, enabling smooth communication with hearing-impaired people.
Description
The present invention relates to a sign language translation method, and more particularly to a sign language translation system and method that improve the accuracy of sign language translation performed with a Leap Motion device by reflecting the mobility of the fingers when translating sign language gestures.
Sign language is a means of communication for people with hearing impairments. It can be used between hearing-impaired people, or between hearing-impaired people and hearing people who know sign language.
However, hearing people who do not know sign language cannot communicate with deaf people who use it, and learning sign language requires considerable effort.
Thus, people with hearing impairments encounter many difficulties in communicating with the general public, and it is impractical to expect everyone to master sign language.
Conventional sign language translation algorithms recognize only certain types of motions and can translate only a limited set of gestures.
For example, one such algorithm translates the combination of a stationary right hand and a straight-line motion of the left hand into the word 'handsome'; only a handful of words can be recognized this way.
Likewise, the conventional sign language recognition method using a video device can translate only consonants, vowels, and numbers, so only limited translation is possible.
To solve this problem, an object of the present invention is to provide a sign language translation system and method that improve the accuracy of sign language translation performed with a Leap Motion device for a wide range of words by reflecting the mobility of the fingers when translating sign language gestures.
According to an aspect of the present invention, a sign language translation system for improving the accuracy of sign language translation with a Leap Motion device comprises:
a Leap Motion device that recognizes first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes, and transmits this information for each frame;
a coordinate information processor that stores the first hand position coordinate information and the first finger position coordinate information received from the Leap Motion device in a coordinate storage unit as a three-dimensional array space, per sign language gesture to be translated; varies the lengths of the first hand position coordinate information and the first finger position coordinate information according to the average sampling length of the gesture; and adds first hand direction information, indicating how the first hand position is shifted, to the three-dimensional array space and stores it in the coordinate storage unit; and
a control unit that retrieves from the coordinate storage unit the first hand position coordinate information, first finger position coordinate information, and first hand direction information of the sign language gesture to be translated; compares them for similarity with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of reference gestures; selects the most similar hand position coordinate information, finger position coordinate information, and hand direction information; and extracts the corresponding translation meaning.
According to another aspect of the present invention, a sign language translation method for improving the accuracy of sign language translation with a Leap Motion device comprises:
recognizing first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes;
storing the first hand position coordinate information and the first finger position coordinate information in a three-dimensional array space as a sign language gesture to be translated, and varying the lengths of the first hand position coordinate information and the first finger position coordinate information on the X, Y, and Z axes to match the average sampling length of the gesture;
adding first hand direction information, indicating how the first hand position is shifted, to the three-dimensional array space; and
comparing the first hand position coordinate information, first finger position coordinate information, and first hand direction information for similarity with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of reference gestures previously stored in a sign language database as three-dimensional array spaces; selecting the most similar hand position coordinate information, finger position coordinate information, and hand direction information of the gesture to be translated; and extracting the corresponding translation meaning.
With the above-described configuration, the present invention enables a wide range of words to be translated effectively and accurately when translating sign language gestures, thereby enabling smooth communication with hearing-impaired people.
By providing such a sign language translation tool, the present invention also has the effect of bridging the communication gap between hearing people and hearing-impaired people.
FIG. 1 is a diagram illustrating the configuration of a sign language translation system for improving the accuracy of sign language translation of a Leap Motion device according to an embodiment of the present invention.
FIG. 2 is a conceptual diagram for explaining a motion recognition radius and normalization according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating array compression for increasing the recognition rate of sign language recognition according to an embodiment of the present invention.
FIG. 4 is a view showing directionality in a three-dimensional space according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an algorithm for determining the directionality of a sign language gesture according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a sign language translation method for improving sign language translation accuracy according to an embodiment of the present invention.
The advantages and features of the present invention, and the manner of achieving them, will become apparent from the embodiments described in detail below with reference to the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be embodied in various forms; these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.
The terminology used herein is for the purpose of describing embodiments and is not intended to limit the present invention. In this specification, singular forms include plural forms unless otherwise specified. The terms "comprises" and/or "comprising", as used herein, do not exclude the presence or addition of one or more other elements, steps, or operations.
FIG. 1 is a diagram illustrating the configuration of a sign language translation system for improving the accuracy of sign language translation of a Leap Motion device according to an embodiment of the present invention; FIG. 2 is a conceptual diagram illustrating a motion recognition radius and normalization; FIG. 3 illustrates array compression for increasing the recognition rate; FIG. 4 shows directionality in three-dimensional space; and FIG. 5 shows an algorithm for determining the directionality of a sign language gesture, each according to an embodiment of the present invention.
A sign
The
The
The
The
The
As shown in FIG. 2, the coordinate system defines the position coordinates of the fingers and the hand in three directions with respect to the X, Y, and Z axes, with the hand positioned at the center.
One frame represents the position coordinate information of the hand, the direction information of the hand, and the position coordinate information of the fingers in the three directions of the X, Y, and Z axes.
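As an illustration only (the patent does not specify a data layout, and the field names below are assumptions), one frame of Leap-Motion-style tracking data could be modeled as:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # (x, y, z)

@dataclass
class Frame:
    """One captured frame: hand position, hand direction, finger positions."""
    hand_position: Vec3
    hand_direction: Vec3
    finger_positions: List[Vec3] = field(default_factory=list)

frame = Frame(
    hand_position=(10.0, 150.0, -20.0),
    hand_direction=(1.0, 0.0, 0.0),
    finger_positions=[(12.0, 160.0, -18.0), (14.0, 158.0, -19.0)],
)
print(len(frame.finger_positions))  # number of tracked fingers in this frame
```

A sequence of such frames, one per sampling tick, is what the translation module would consume.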
The
The separated hand and background regions may be binarized, representing the hand region as white and the background region as black.
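A minimal sketch of such binarization, assuming a simple fixed intensity threshold (the patent does not state how the threshold is chosen):

```python
def binarize(gray, threshold=128):
    """Binarize a grayscale image (2D list of 0-255 intensities):
    pixels at or above the threshold become 255 (hand region, white),
    the rest become 0 (background region, black)."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

gray = [
    [10, 200, 210],
    [12, 220, 15],
]
print(binarize(gray))  # [[0, 255, 255], [0, 255, 0]]
```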
The feature point
These feature points are extracted using a feature extraction algorithm based on SIFT (Scale-Invariant Feature Transform).
In other words, the minutia matching
To detect curvature, adjacent points at an arbitrary interval (n) along the outline are used; curvature data is calculated from the two points spaced n intervals away on either side of the k-th point.
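One plausible way to compute such curvature data (the exact formula is not given in this text, so the angle-based measure below is an assumption) is the turning angle between the segments joining point k to its neighbors n positions away:

```python
import math

def curvature(points, k, n):
    """Approximate curvature at contour point k using the two neighbors
    n positions away on either side: the absolute angle (radians) between
    the vectors (p[k-n] -> p[k]) and (p[k] -> p[k+n]).
    0 means the three points are collinear (a straight stretch)."""
    (x0, y0), (x1, y1), (x2, y2) = points[k - n], points[k], points[(k + n) % len(points)]
    a1 = math.atan2(y1 - y0, x1 - x0)
    a2 = math.atan2(y2 - y1, x2 - x1)
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)  # wrap the difference into [0, pi]

# straight horizontal run of contour points: curvature is 0
line = [(i, 0) for i in range(7)]
print(curvature(line, 3, 2))  # 0.0
```

High-curvature points would then be candidate feature points such as fingertips and finger valleys.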
The hand
The sign
The coordinate
2, the coordinate
As shown in FIG. 2, the coordinate
The coordinate
The
In existing sign language recognition algorithms, a straight-line movement to the right (->) and a straight-line movement to the left (<-) are two movements with different meanings, yet they trace the same positions and cannot be told apart without direction information.
The coordinate
As shown in FIGS. 4 and 5, the algorithm for determining the directionality of a sign language gesture distinguishes a total of 26 directions of movement in three-dimensional space: for example, moving into the (0,0,0) position from one adjacent position is assigned the directionality 'B', while moving from the (-1,-1,0) position to the (0,0,0) position is assigned a different directionality.
In this manner, the coordinate
As shown in FIG. 4(a), a straight-line operation (->) moving from the (-1,0,0) position to the (0,0,0) position has one directionality, while, as shown in FIG. 4(b), a movement from the (1,0,0) position to the (0,0,0) position has the directionality 'D'.
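The 26 directions correspond to the 26 non-zero sign triples of a 3D neighborhood (3 choices per axis, minus the no-movement case). A sketch of that quantization, with the caveat that the letter labels ('B', 'D', ...) in the figures are just names for these triples and the mapping below is illustrative:

```python
def direction_code(prev, cur):
    """Quantize the movement from `prev` to `cur` into one of 26 directions:
    each axis delta is mapped to -1, 0, or +1. (0, 0, 0) means no movement."""
    sign = lambda d: (d > 0) - (d < 0)
    return tuple(sign(c - p) for p, c in zip(prev, cur))

# straight-line movement to the right along the X axis:
print(direction_code((-1, 0, 0), (0, 0, 0)))  # (1, 0, 0)
# and the opposite, left-going movement gets a distinct code:
print(direction_code((1, 0, 0), (0, 0, 0)))   # (-1, 0, 0)
```

This is exactly what lets the system separate right-going and left-going straight-line gestures that occupy the same cells of the array space.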
The reason for normalizing the coordinate information is that the range and position of motion may vary from person to person when translating sign language gestures into a three-dimensional array space.
For example, when performing the sign for 'no', one person's gesture may be large and wide while another person's is small and narrow.
In this case, although the gestures have the same meaning, they may not overlap in the three-dimensional array space and would therefore be recognized as different gestures.
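A sketch of this normalization (the target ranges are taken from the claims: X and Z in [-1.0, 1.0], Y in [0, 1.0], quantized to indices 0-199; rescaling by the gesture's own bounding box is an assumption):

```python
def normalize_and_quantize(points):
    """Rescale a gesture's (x, y, z) samples by its own bounding box so that
    X and Z map to [-1.0, 1.0] and Y maps to [0.0, 1.0], then quantize each
    axis to an integer index in [0, 199]. Every gesture thus lands in the
    same [200][200][200] array space regardless of how large or small the
    signer's movements were."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    spans = [(mx - mn) or 1.0 for mn, mx in zip(mins, maxs)]  # avoid /0
    out = []
    for x, y, z in points:
        u = [(v - mn) / s for v, mn, s in zip((x, y, z), mins, spans)]  # in [0,1]
        nx, ny, nz = 2 * u[0] - 1, u[1], 2 * u[2] - 1  # X,Z -> [-1,1]; Y -> [0,1]
        out.append((int((nx + 1) / 2 * 199), int(ny * 199), int((nz + 1) / 2 * 199)))
    return out

big = [(0, 0, 0), (400, 200, 100)]   # large, wide gesture
small = [(0, 0, 0), (40, 20, 10)]    # same shape, small and narrow
print(normalize_and_quantize(big) == normalize_and_quantize(small))  # True
```

After normalization, the wide and the narrow version of the same sign occupy the same cells and are recognized as the same gesture.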
The coordinate
The coordinate
The
The coordinate
The coordinate
FIG. 6 is a diagram illustrating a sign language translation method for improving sign language translation accuracy according to an embodiment of the present invention.
The
The
The
The
The
If the highest finger similarity value is found by matching the first finger position coordinate information against the searched hand position coordinate information and hand direction information, the most similar hand position coordinate information, hand direction information, and finger position coordinate information are finally selected, and the corresponding translation meaning is extracted and returned (S106, S108).
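A hypothetical sketch of this coarse-to-fine matching, with the caveat that the similarity measure and database layout below are assumptions rather than the patent's actual definitions:

```python
def similarity(a, b):
    """Toy similarity: negative sum of absolute coordinate differences
    (higher means closer)."""
    return -sum(abs(x - y) for x, y in zip(a, b))

def translate(gesture, database):
    """Coarse-to-fine lookup: rank reference entries first by hand-position
    similarity, then by hand-direction similarity, then by finger-position
    similarity, and return the meaning of the best overall match."""
    best = max(
        database,
        key=lambda e: (
            similarity(gesture["hand"], e["hand"]),
            similarity(gesture["dir"], e["dir"]),
            similarity(gesture["finger"], e["finger"]),
        ),
    )
    return best["meaning"]

db = [
    {"hand": (0, 0, 0), "dir": (1, 0, 0), "finger": (5, 5, 5), "meaning": "hello"},
    {"hand": (0, 0, 0), "dir": (-1, 0, 0), "finger": (5, 5, 5), "meaning": "goodbye"},
]
query = {"hand": (0, 0, 1), "dir": (-1, 0, 0), "finger": (5, 5, 4)}
print(translate(query, db))  # 'goodbye'
```

Note how the two database entries share the same hand positions and differ only in hand direction: the direction comparison is what disambiguates them, mirroring the right-going vs. left-going example above.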
While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements. The above-described embodiments are therefore illustrative in all aspects and not restrictive.
100: Sign Language Translation System
110: Leap Motion device
111: Infrared camera
112: frame receiver
113: background processor
114: feature point region detection unit
115:
120: Sign language translation module
121: coordinate information processor
122: Coordinate storage unit
123:
124: sign language database
125:
Claims (14)
A coordinate information processing unit that stores first hand position coordinate information and first finger position coordinate information received from a Leap Motion device in a coordinate storage unit as a three-dimensional array space, per sign language gesture to be translated, and that adds first hand direction information, indicating how the first hand position is shifted, to the three-dimensional array space and stores it in the coordinate storage unit; and
A control unit that retrieves from the coordinate storage unit the first hand position coordinate information, first finger position coordinate information, and first hand direction information of the sign language gesture to be translated; compares them for similarity with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of the reference gestures; selects the most similar hand position coordinate information, finger position coordinate information, and hand direction information; and extracts the corresponding translation meaning.
The control unit compares the first hand position coordinate information with the second hand position coordinate information to find the hand position coordinate information with the highest similarity value, searching hand position coordinate information within a range of 5 around the found value;
compares the first hand direction information with the second hand direction information in the sign language database, based on the found hand position coordinate information, to find the hand direction information with the highest similarity value, searching hand direction information within a range of 5 around the found value; and,
when the highest finger similarity value is found by matching the first finger position coordinate information against the searched hand position coordinate information and hand direction information, finally selects the most similar hand position coordinate information, hand direction information, and finger position coordinate information and extracts the translation meaning.
The coordinate information processing unit varies the first hand position coordinate information and the first finger position coordinate information on the X, Y, and Z axes with reference to predetermined X-axis and Z-axis ranges, performing normalization so that the X-axis and Z-axis length ranges are expressed as -1.0 to 1.0 and the Y-axis length range as 0 to 1.0, and then converts these ranges into the range 0 to 199 so that the coordinate information is represented in a [200][200][200] three-dimensional array space.
The coordinate information processing unit compresses the [200][200][200] three-dimensional array space into [50][50][50] and stores the compressed three-dimensional array space in the coordinate storage unit.
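The [200][200][200] to [50][50][50] compression collapses each 4x4x4 block of cells into one. The text does not say how a block is reduced, so the OR-pooling rule below (a cell is occupied if any cell in its block was) is an assumption:

```python
def compress(space, factor=4):
    """Compress a cubic occupancy array by `factor` along each axis: each
    factor x factor x factor block collapses to a single cell that is 1 if
    any cell in the block was 1 (logical OR pooling). With factor=4 this
    turns a [200][200][200] space into [50][50][50]."""
    n = len(space)
    m = n // factor
    return [[[int(any(
        space[factor * i + di][factor * j + dj][factor * k + dk]
        for di in range(factor) for dj in range(factor) for dk in range(factor)))
        for k in range(m)] for j in range(m)] for i in range(m)]

# a small 8x8x8 space with a single occupied cell compresses to 2x2x2:
space = [[[0] * 8 for _ in range(8)] for _ in range(8)]
space[5][1][6] = 1
small = compress(space, factor=4)
print(len(small), small[1][0][1])  # 2 1
```

Pooling like this makes the lookup tolerant to small positional jitter between repetitions of the same gesture, which is presumably why it increases the recognition rate.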
The coordinate information processing unit sets the directionality of movement in the three-dimensional space along the X, Y, and Z axes as specific direction information, and adds the first hand direction information to the three-dimensional array space and stores it in the coordinate storage unit.
The coordinate information processing unit normalizes the second hand position coordinate information and the second finger position coordinate information of the reference sign language gesture, varying the coordinate information on the X, Y, and Z axes based on the predetermined X-axis and Z-axis length ranges so that the X-axis and Z-axis length ranges are expressed as -1.0 to 1.0 and the Y-axis length range as 0 to 1.0, and then stores the result in the sign language database.
When the second hand direction information indicates movement along the X, Y, and Z axes from one position to a different position in the three-dimensional space, the coordinate information processing unit adds the second hand direction information to the three-dimensional array space containing the second hand position coordinate information and the second finger position coordinate information, and stores it in the sign language database.
Storing first hand position coordinate information and first finger position coordinate information in a three-dimensional array space as a sign language gesture to be translated, and normalizing the first hand position coordinate information and the first finger position coordinate information by varying their lengths on the X, Y, and Z axes according to the average sampling length of the gesture;
adding first hand direction information, indicating how the first hand position is shifted, to the three-dimensional array space; and
comparing the first hand position coordinate information, first finger position coordinate information, and first hand direction information for similarity with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of the reference gestures previously stored in the sign language database as three-dimensional array spaces; selecting the most similar hand position coordinate information, finger position coordinate information, and hand direction information of the gesture to be translated; and extracting the corresponding translation meaning.
Wherein the step of extracting the translation meaning comprises:
comparing the first hand position coordinate information with the second hand position coordinate information in the sign language database to find the hand position coordinate information with the highest similarity value, searching hand position coordinate information within a range of 5 around the found value;
comparing the first hand direction information with the second hand direction information in the sign language database, based on the found hand position coordinate information, to find the hand direction information with the highest similarity value, searching hand direction information within a range of 5 around the found value; and,
when the highest finger similarity value is found by matching the first finger position coordinate information against the searched hand position coordinate information and hand direction information, finally selecting the most similar hand position coordinate information, hand direction information, and finger position coordinate information and extracting the translation meaning.
Wherein the normalizing process comprises:
varying the first hand position coordinate information and the first finger position coordinate information on the X, Y, and Z axes based on the predetermined X-axis and Z-axis ranges, so that the X-axis and Z-axis length ranges are expressed as -1.0 to 1.0 and the Y-axis length range as 0 to 1.0, and converting these ranges into the range 0 to 199 so that the coordinate information is represented in a [200][200][200] three-dimensional array space;
further comprising compressing the [200][200][200] three-dimensional array space into [50][50][50] and storing it.
Wherein the step of adding to the three-dimensional array space comprises:
when the directionality of movement in the three-dimensional space goes along the X, Y, and Z axes from one position to a different position, adding the first hand direction information to the three-dimensional array space and storing it.
Normalizing the second hand position coordinate information and the second finger position coordinate information of the reference sign language gesture by varying them on the X, Y, and Z axes so that the X-axis and Z-axis length ranges are expressed as -1.0 to 1.0 and the Y-axis length range as 0 to 1.0 in the three-dimensional array space, and storing the result in the sign language database.
When the second hand direction information of the reference sign language gesture indicates movement along the X, Y, and Z axes from one position to a different position in the three-dimensional space, adding the second hand direction information to the three-dimensional array space containing the second hand position coordinate information and the second finger position coordinate information, and storing it in the sign language database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140166165A KR20160062913A (en) | 2014-11-26 | 2014-11-26 | System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140166165A KR20160062913A (en) | 2014-11-26 | 2014-11-26 | System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160062913A (en) | 2016-06-03 |
Family
ID=56192206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140166165A KR20160062913A (en) | 2014-11-26 | 2014-11-26 | System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160062913A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101958201B1 (en) * | 2018-02-14 | 2019-03-14 | 안동과학대학교 산학협력단 | Apparatus and method for communicating through sigh language recognition |
KR20220042335A (en) * | 2018-03-15 | 2022-04-05 | 한국전자기술연구원 | Automatic Sign Language Recognition Method and System |
- 2014-11-26: KR application KR1020140166165A filed (published as KR20160062913A), status: active, Search and Examination requested
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Athira et al. | A signer independent sign language recognition with co-articulation elimination from live videos: an Indian scenario | |
US9286694B2 (en) | Apparatus and method for detecting multiple arms and hands by using three-dimensional image | |
JP4934220B2 (en) | Hand sign recognition using label assignment | |
KR102036963B1 (en) | Method and system for robust face dectection in wild environment based on cnn | |
Lahiani et al. | Real time hand gesture recognition system for android devices | |
Goyal et al. | Sign language recognition system for deaf and dumb people | |
KR101612605B1 (en) | Method for extracting face feature and apparatus for perforimg the method | |
Bhuyan et al. | Fingertip detection for hand pose recognition | |
Pan et al. | Real-time sign language recognition in complex background scene based on a hierarchical clustering classification method | |
EP2704056A2 (en) | Image processing apparatus, image processing method | |
KR101491461B1 (en) | Method for recognizing object using covariance descriptor and apparatus thereof | |
Agrawal et al. | A survey on manual and non-manual sign language recognition for isolated and continuous sign | |
CN104636725A (en) | Gesture recognition method based on depth image and gesture recognition system based on depth images | |
Qi et al. | Computer vision-based hand gesture recognition for human-robot interaction: a review | |
Bhuyan et al. | Hand pose recognition using geometric features | |
KR100862349B1 (en) | User interface system based on half-mirror using gesture recognition | |
CN111444764A (en) | Gesture recognition method based on depth residual error network | |
Itkarkar et al. | A survey of 2D and 3D imaging used in hand gesture recognition for human-computer interaction (HCI) | |
JP2016014954A (en) | Method for detecting finger shape, program thereof, storage medium of program thereof, and system for detecting finger shape | |
CN112749646A (en) | Interactive point-reading system based on gesture recognition | |
Aziz et al. | Bengali Sign Language Recognition using dynamic skin calibration and geometric hashing | |
CN111460858A (en) | Method and device for determining pointed point in image, storage medium and electronic equipment | |
Ming | Hand fine-motion recognition based on 3D Mesh MoSIFT feature descriptor | |
KR20160062913A (en) | System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device | |
CN106406507B (en) | Image processing method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment |