KR20160062913A - System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device - Google Patents

System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device Download PDF

Info

Publication number
KR20160062913A
Authority
KR
South Korea
Prior art keywords
axis
coordinate information
position coordinate
hand
information
Prior art date
Application number
KR1020140166165A
Other languages
Korean (ko)
Inventor
고석주
김지인
조재현
하대규
이동훈
이상은
서대화
Original Assignee
경북대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 경북대학교 산학협력단
Priority to KR1020140166165A
Publication of KR20160062913A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/04 Devices for conversing with the deaf-blind

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A sign language translation system for improving the accuracy of sign language translation with a Leap Motion device reflects the mobility of the fingers when translating a sign language gesture, so that a wide range of words can be translated effectively and accurately, enabling communication with hearing-impaired people.

Description

Technical Field [0001] The present invention relates to a sign language translation system and method for improving the accuracy of sign language translation of a Leap Motion device.

More particularly, the present invention relates to a sign language translation system and method that improve the accuracy of sign language translation of a Leap Motion device by reflecting the mobility of the fingers when translating a sign language gesture, so that a wide range of words can be translated effectively and accurately.

Sign language is a means of communication for people with hearing impairments. It can be used for communication among hearing-impaired people, or between hearing-impaired people and others who know sign language.

However, ordinary people who do not know sign language cannot communicate with deaf people who use it, and learning sign language requires considerable effort.

Thus, people with hearing impairments encounter many difficulties in communicating with the general public.

However, it is impractical for everyone to master sign language.

Conventional sign language translation algorithms recognize only certain predefined motions and can translate only a limited set of gestures.

For example, if the right hand is recognized as stationary and the left hand as a straight-line motion, the gesture is translated as the word 'handsome'; in this way, only a handful of words can be translated.

Conventional sign language recognition methods using video devices can translate only consonants, vowels, and numbers, so only limited translation is possible.

To solve this problem, an object of the present invention is to provide a sign language translation system and method that improve the accuracy of sign language translation of a Leap Motion device by reflecting the mobility of the fingers when translating a sign language gesture, so that a wide range of words can be translated accurately.

According to an aspect of the present invention, there is provided a sign language translation system for improving the accuracy of sign language translation of a Leap Motion device, comprising:

a Leap Motion device that recognizes first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes, and divides the recognized information frame by frame for each gesture;

a coordinate information processor that stores the first hand position coordinate information and the first finger position coordinate information received from the Leap Motion device in a coordinate storage unit, mapped into a three-dimensional array space, as the sign language gesture to be translated; varies the lengths of the first hand position coordinate information and the first finger position coordinate information according to the average sampling length of the gesture; and adds to the three-dimensional array space first hand direction information describing how the first hand position coordinate information moves, storing the result in the coordinate storage unit; and

a control unit that retrieves from the coordinate storage unit the first hand position coordinate information, the first finger position coordinate information, and the first hand direction information of the sign language gesture to be translated; compares them, through similarity comparison, with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of reference sign language gestures stored in a sign language database; selects the most similar hand position coordinate information, finger position coordinate information, and hand direction information; and extracts the corresponding translation meaning.

According to another aspect of the present invention, there is provided a sign language translation method for improving the accuracy of sign language translation of a Leap Motion device, comprising:

recognizing first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes, and dividing the recognized information frame by frame for each gesture;

storing the first hand position coordinate information and the first finger position coordinate information in a three-dimensional array space as the sign language gesture to be translated, and varying the lengths of the first hand position coordinate information and the first finger position coordinate information along the X, Y, and Z axes according to the average sampling length of the gesture;

adding to the three-dimensional array space first hand direction information describing how the first hand position coordinate information moves; and

selecting, through similarity comparison of the first hand position coordinate information, first finger position coordinate information, and first hand direction information in the three-dimensional array space with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of reference sign language gestures previously stored in a sign language database, the hand position coordinate information, finger position coordinate information, and hand direction information of the gesture to be translated, and extracting the corresponding translation meaning.

With the above-described configuration, the present invention enables a wide range of words to be translated effectively and accurately, thereby enabling smooth communication with hearing-impaired people.

The present invention also has the effect of bridging the communication gap between hearing-impaired people and the general public, which is difficult to address quickly in the smart era, through the development of a sign language translation tool.

FIG. 1 is a diagram illustrating the configuration of a sign language translation system for improving the accuracy of sign language translation of a Leap Motion device according to an embodiment of the present invention.
FIG. 2 is a conceptual diagram explaining the motion recognition radius and normalization according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating array compression for increasing the recognition rate of sign language recognition according to an embodiment of the present invention.
FIG. 4 is a view showing directionality in three-dimensional space according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the algorithm for determining the directionality of a sign language gesture according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a sign language translation method for improving sign language translation accuracy according to an embodiment of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described in detail below in conjunction with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be embodied in various forms; these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

The terminology used herein is for the purpose of describing embodiments and is not intended to limit the present invention. In this specification, the singular includes the plural unless specifically stated otherwise. The terms "comprises" and/or "comprising," as used herein, do not exclude the presence or addition of one or more other elements, steps, or operations.

FIG. 1 is a diagram illustrating the configuration of a sign language translation system for improving the accuracy of sign language translation of a Leap Motion device according to an embodiment of the present invention, FIG. 2 is a conceptual diagram explaining the motion recognition radius and normalization, FIG. 3 illustrates array compression for increasing the recognition rate, FIG. 4 shows directionality in three-dimensional space, and FIG. 5 shows the algorithm for determining the directionality of a sign language gesture.

A sign language translation system 100 using a Leap Motion device according to an embodiment of the present invention includes a Leap Motion device 110 and a sign language translation module 120. The sign language translation module 120 is applicable to various devices such as a PC or a smart TV.

The Leap Motion device 110 receives image information using two infrared cameras 111, determines the X, Y, and Z axes, structures a 3D shape based on the determined axes, extracts feature points, continuously tracks the trajectory along which each feature point moves, and recognizes motion by analyzing the tracked feature points.

The Leap Motion device 110 recognizes the hand motion trajectory within an ultra-wide 150-degree field of view, together with the depth (Z-axis) coordinate, and represents it in three dimensions.

The Leap Motion device 110 includes an infrared camera 111, a frame receiving unit 112, a background processing unit 113, a feature point region detection unit 114, and a hand movement detection unit 115.

The infrared camera 111 detects the infrared radiation energy emitted from an object with its detector, converts the object's radiation temperature into an electric signal, and expresses it as a two-dimensional visual image.

The frame receiving unit 112 receives frames of the user's hand captured by the two or more infrared cameras 111, recognizes hand and finger coordinate information in the three directions of the X, Y, and Z axes at 100 frames per second, and divides the frames by gesture.

As shown in FIG. 2, the coordinate system defines the position coordinates of the fingers and hand in the three directions of the X, Y, and Z axes, with the origin located where the hand is positioned at the center of the recognition range.

One frame represents the position coordinate information of the hand, the direction information of the hand, and the position coordinate information of the finger in three directions of the X axis, the Y axis and the Z axis.
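By way of illustration, one such frame could be modeled as the following structure. This is a minimal sketch with assumed field names; the patent does not prescribe any particular data layout.

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]  # an (x, y, z) coordinate

    @dataclass
    class Frame:
        """One recognized frame of the sign language gesture."""
        hand_position: Vec3            # palm position along the X, Y, and Z axes
        hand_direction: Vec3           # direction in which the hand is moving
        finger_positions: List[Vec3]   # one (x, y, z) position per detected fingertip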

The background processing unit 113 receives an image from the frame receiving unit 112 and detects only the hand region from the background of the image.

The hand region separated from the background may be binarized so that the hand region is represented in white and the background region in black.
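A minimal sketch of such a binarization step, using OpenCV for illustration; the use of Otsu thresholding and the assumption that the hand appears brighter than the background in the infrared image are ours, not the patent's:

    import cv2

    def binarize_hand_region(gray_frame):
        """Return a binary image: hand region in white (255), background in black (0)."""
        # Otsu's method selects the threshold automatically from the histogram.
        _, binary = cv2.threshold(gray_frame, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary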

The feature point region detection unit 114 extracts the position coordinate information of the hand and the position coordinates of the fingers from the outline of the hand as feature points, and represents the extracted feature points as feature vectors.

The feature points are extracted using a feature extraction algorithm based on the SIFT (Scale-Invariant Feature Transform).

In other words, the feature point region detection unit 114 detects curvature data along the outline of the hand, extracts the coordinate values of the five points with the largest curvature from the detected curvature data, maps them sequentially to the fingertip coordinate values of the respective fingers, and measures the position coordinates of each finger accordingly.

To detect the curvature, neighboring points at an arbitrary interval n along the outline are used: the curvature data for the k-th point is calculated from the two neighboring points spaced n positions away on either side of it.
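A rough sketch of this curvature-based fingertip search, assuming the outline arrives as an ordered list of (x, y) points; the angle-based curvature measure and all names are illustrative assumptions:

    import math

    def fingertip_candidates(outline, n=5, top_k=5):
        """Score each outline point by curvature (sharpness of the angle formed
        with its neighbors n positions away) and return the top_k sharpest points,
        which correspond to the fingertip candidates."""
        scores = []
        m = len(outline)
        for k in range(m):
            ax, ay = outline[(k - n) % m]   # neighbor n positions behind
            bx, by = outline[k]             # the k-th point itself
            cx, cy = outline[(k + n) % m]   # neighbor n positions ahead
            v1 = (ax - bx, ay - by)
            v2 = (cx - bx, cy - by)
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            norm = math.hypot(*v1) * math.hypot(*v2) or 1.0  # avoid division by zero
            angle = math.acos(max(-1.0, min(1.0, dot / norm)))
            scores.append((math.pi - angle, outline[k]))     # sharper angle = higher score
        scores.sort(key=lambda s: s[0], reverse=True)
        return [point for _, point in scores[:top_k]]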

The hand movement detection unit 115 calculates the movement of the extracted feature point vector information and detects the movement of the hand or fingers.

The sign language translation module 120 includes a coordinate information processing unit 121, a coordinate storage unit 122, a control unit 123, a sign language database 124, and a display unit 125.

The coordinate information processing unit 121 stores the hand position coordinate information and the finger position coordinate information received from the Leap Motion device 110 in a three-dimensional array space as the sign language gesture to be translated, and normalizes them by varying the length of the coordinate information.

As shown in FIG. 2, the coordinate information processing unit 121 receives the hand position coordinate information and the finger position coordinate information from the Leap Motion device 110 and, viewing them in the three-dimensional space, rescales the coordinate information of the hand and the fingers along the X, Y, and Z axes based on the predetermined length range of the X and Z axes.

As shown in FIG. 2, the coordinate information processing unit 121 uses an inverted-pyramid-shaped recognition range around the center to increase the recognition rate of the gesture.

The coordinate information processing unit 121 expresses the length range of the normalized X and Z axes as -1.0 to 1.0 and the length range of the Y axis as 0 to 1.0, converts these ranges to indices 0 to 199, and represents the coordinate information in a three-dimensional array space of [200][200][200], as shown in FIG. 3. The coordinate storage unit 122 compresses the generated three-dimensional array space into [50][50][50] and stores it.
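The normalization, quantization, and compression just described can be sketched as follows; the index mapping follows the stated ranges (X/Z in -1.0 to 1.0, Y in 0 to 1.0, indices 0 to 199), while the block-wise OR used for the [200][200][200] to [50][50][50] compression is an assumption, since the patent does not specify the compression rule:

    import numpy as np

    def to_array_index(x, y, z):
        """Map a normalized (x, y, z) coordinate to indices of a [200][200][200] space.
        x and z are assumed to lie in [-1.0, 1.0]; y is assumed to lie in [0.0, 1.0]."""
        ix = min(int((x + 1.0) / 2.0 * 200), 199)
        iy = min(int(y * 200), 199)
        iz = min(int((z + 1.0) / 2.0 * 200), 199)
        return ix, iy, iz

    def compress_space(space):
        """Compress a [200][200][200] occupancy array into [50][50][50]:
        a compressed cell is set if any of the 4x4x4 original cells it covers is set."""
        blocks = space.reshape(50, 4, 50, 4, 50, 4)
        return blocks.any(axis=(1, 3, 5)).astype(np.uint8)

    # Example: mark one point of a gesture trajectory, then compress the space.
    space = np.zeros((200, 200, 200), dtype=np.uint8)
    space[to_array_index(0.3, 0.5, -0.2)] = 1
    small = compress_space(space)  # shape (50, 50, 50)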

The coordinate storage unit 122 stores coordinate information of the hand and coordinate information of the finger formed in the three-dimensional array space along the X axis, the Y axis, and the Z axis.

Existing sign language recognition algorithms cannot distinguish a straight-line movement to the right (->) from a straight-line movement to the left (<-): two movements with different meanings are recognized as the same.

To address this, the coordinate information processing unit 121 adds direction information representing the mobility of the hand when storing the X-, Y-, and Z-axis hand position coordinate information and finger position coordinate information in the coordinate storage unit 122 as a three-dimensional array space.

As shown in FIGS. 4 and 5, the algorithm for determining the directionality of a sign language gesture distinguishes a total of 26 directions of movement in three-dimensional space; for example, moving from the (-1, -1, 0) position to the (0, 0, 0) position is assigned its own directionality label ('B').

In this manner, the coordinate information processing unit 121 newly adds and stores a total of 26 directionality labels in the three-dimensional array space, matching the number of letters of the alphabet.

As shown in FIG. 4(a), a straight-line movement (->) from the (-1, 0, 0) position to the (0, 0, 0) position is assigned one directionality, while, as shown in FIG. 4(b), the movement from the (1, 0, 0) position to the (0, 0, 0) position is assigned the directionality 'D', so the two opposite movements are distinguished.
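Since the 26 directionality labels correspond exactly to the 26 unit moves in a three-dimensional grid (3^3 - 1 combinations of -1/0/+1 steps per axis, excluding no movement), the labeling can be sketched as below. The A-Z ordering is an assumption; the patent's exact assignment of letters such as 'B' or 'D' to particular moves cannot be recovered from the text:

    import string
    from itertools import product

    # The 26 unit moves in a 3-D grid: every (dx, dy, dz) with components in
    # {-1, 0, 1} except (0, 0, 0). Their count matches the 26 letters A-Z.
    MOVES = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    DIRECTION_LABELS = dict(zip(MOVES, string.ascii_uppercase))

    def directionality(src, dst):
        """Return the letter labeling the movement from grid cell src to dst."""
        step = tuple((b > a) - (b < a)          # per-axis sign: -1, 0, or +1
                     for a, b in zip(src, dst))
        return DIRECTION_LABELS.get(step)       # None when src == dst

    # Example: a straight rightward movement along the X axis.
    print(directionality((-1, 0, 0), (0, 0, 0)))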

The reason for normalizing the coordinate information is that the motion range and position may vary from person to person when sign language gestures are translated into the three-dimensional array space.

For example, when performing the 'no' sign, one person may make a large, wide motion and another a small, narrow one.

In this case, although the two gestures have the same meaning, they would not overlap in the three-dimensional array space and would be recognized as different gestures.

The coordinate information processing unit 121 stores hand position coordinate information, hand direction information, and finger position coordinate information of the X-axis, Y-axis, and Z-axis in the coordinate storage unit 122 in the three-dimensional array space.

The coordinate storage unit 122 may represent each sign language gesture as an object and store the objects.

The sign language database 124 stores, for each learned reference sign language gesture, the hand vector information and the direction vector information representing the movement of the hand in the three-dimensional array space, together with the translation meaning corresponding to the reference gesture.

The coordinate information processing unit 121 stores the second hand position coordinate information and second finger position coordinate information of the reference gestures in the sign language database 124 after the same normalization process: the coordinates are rescaled along the X, Y, and Z axes based on the predetermined range of the X and Z axes, the length range of the X and Z axes in the three-dimensional array space is expressed as -1.0 to 1.0, and the length range of the Y axis as 0 to 1.0.

When the hand direction information of a reference gesture moves from one position to a different position along the X, Y, and Z axes in the three-dimensional space, the coordinate information processing unit 121 sets it to the corresponding specific direction information, adds the hand direction information of the reference gesture to the three-dimensional array space containing the hand position coordinate information and finger position coordinate information of the reference gesture, and stores the result in the sign language database 124.

FIG. 6 is a diagram illustrating a sign language translation method for improving sign language translation accuracy according to an embodiment of the present invention.

The control unit 123 reads from the coordinate storage unit 122 the X-, Y-, and Z-axis hand position coordinate information, hand direction information, and finger position coordinate information representing the sign language gesture to be translated, and compares them with the hand position coordinate information, hand direction information, and finger position coordinate information of the reference gestures stored in the sign language database 124 (S100).

The control unit 123 compares the hand position coordinate information of the three-dimensional array space representing the gesture to be translated, received from the coordinate storage unit 122, with the hand position coordinate information of the three-dimensional array space previously stored in the sign language database 124, and retrieves the hand position coordinate information with the highest similarity value (S102).

The control unit 123 then retrieves from the sign language database 124 all hand position coordinate information whose similarity value lies within 5 of the highest similarity value found.

Based on the retrieved hand position coordinate information, the control unit 123 compares the hand direction information of the three-dimensional array space representing the gesture to be translated with the hand direction information previously stored in the sign language database 124, and retrieves the hand direction information with the highest similarity value (S104).

The control unit 123 likewise retrieves the hand direction information whose similarity value lies within 5 of the highest similarity value found in the sign language database 124.

Among the retrieved candidates, the control unit 123 matches the finger position coordinate information to be translated against the stored finger position coordinate information; when the highest finger similarity value is found, the most similar hand position coordinate information, hand direction information, and finger position coordinate information are finally selected, and the corresponding translation meaning is extracted and returned (S106, S108). The control unit 123 outputs the extracted translation meaning through the display unit 125.
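Steps S100 to S108 amount to a staged nearest-match search: candidates are first narrowed by hand position similarity, then by hand direction similarity, and the final match is chosen by finger position similarity. A rough sketch of that control flow, in which the similarity score, the tolerance of 5 below the best value (one reading of the '-5 range' above), and all names are assumptions:

    from collections import namedtuple

    Gesture = namedtuple("Gesture", ["hand_pos", "hand_dir", "finger_pos"])

    def similarity(a, b):
        """Toy similarity score: negative sum of absolute differences."""
        return -sum(abs(x - y) for x, y in zip(a, b))

    def translate(query, database, tolerance=5):
        """database: list of (hand_pos, hand_dir, finger_pos, meaning) entries."""
        # S102: keep entries whose hand-position similarity is within
        # `tolerance` of the best hand-position similarity found.
        best_pos = max(similarity(query.hand_pos, e[0]) for e in database)
        cands = [e for e in database
                 if similarity(query.hand_pos, e[0]) >= best_pos - tolerance]
        # S104: among those, keep entries within `tolerance` of the best
        # hand-direction similarity.
        best_dir = max(similarity(query.hand_dir, e[1]) for e in cands)
        cands = [e for e in cands
                 if similarity(query.hand_dir, e[1]) >= best_dir - tolerance]
        # S106/S108: final selection by the highest finger-position similarity.
        best = max(cands, key=lambda e: similarity(query.finger_pos, e[2]))
        return best[3]  # the stored translation meaning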

While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, those skilled in the art will understand that the invention is not limited to the disclosed embodiments and may be embodied in other specific forms. The above-described embodiments are therefore to be understood as illustrative in all respects and not restrictive.

100: Sign language translation system
110: Leap Motion device
111: Infrared camera
112: Frame receiving unit
113: Background processing unit
114: Feature point region detection unit
115: Hand movement detection unit
120: Sign language translation module
121: Coordinate information processing unit
122: Coordinate storage unit
123: Control unit
124: Sign language database
125: Display unit

Claims (14)

A sign language translation system comprising: a Leap Motion device that recognizes first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes, and divides the recognized information frame by frame for each gesture;
a coordinate information processing unit that stores the first hand position coordinate information and the first finger position coordinate information received from the Leap Motion device in a coordinate storage unit, mapped into a three-dimensional array space, as the sign language gesture to be translated, varies their lengths according to the average sampling length of the gesture, and adds to the three-dimensional array space first hand direction information describing how the first hand position coordinate information moves, storing the result in the coordinate storage unit; and
a control unit that retrieves from the coordinate storage unit the first hand position coordinate information, the first finger position coordinate information, and the first hand direction information of the sign language gesture to be translated, compares them through similarity comparison with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of reference sign language gestures, selects the most similar hand position coordinate information, finger position coordinate information, and hand direction information, and extracts the corresponding translation meaning.
The system according to claim 1, wherein:
the control unit compares the first hand position coordinate information with the second hand position coordinate information to find the hand position coordinate information having the highest similarity value, and retrieves the hand position coordinate information whose similarity value lies within 5 of that highest value,
compares the first hand direction information with the second hand direction information in the sign language database, based on the retrieved hand position coordinate information, to find the hand direction information having the highest similarity value, and retrieves the hand direction information whose similarity value lies within 5 of that highest value, and
when the highest finger similarity value is found by matching the first finger position coordinate information against the retrieved hand position coordinate information and hand direction information, finally selects the most similar hand position coordinate information, hand direction information, and finger position coordinate information and extracts the translation meaning.
The system according to claim 1, wherein:
the coordinate information processing unit varies the first hand position coordinate information and the first finger position coordinate information along the X, Y, and Z axes with reference to a predetermined range of the X and Z axes, expresses the length range of the X and Z axes as -1.0 to 1.0 and the length range of the Y axis as 0 to 1.0, and converts these ranges to indices 0 to 199 so that the coordinate information is represented in a three-dimensional array space of [200][200][200].
The system according to claim 3, wherein:
the coordinate information processing unit compresses the three-dimensional array space of [200][200][200] into [50][50][50] and stores the compressed space in the coordinate storage unit.
The system according to claim 1, wherein:
the coordinate information processing unit, using the method of assigning specific direction information to each direction of movement in three-dimensional space, adds the first hand direction information to the three-dimensional array space and stores it in the coordinate storage unit when the first hand position coordinate information moves from one position to a different position along the X, Y, and Z axes.
The system according to claim 1, wherein:
the coordinate information processing unit stores the second hand position coordinate information and the second finger position coordinate information of the reference sign language gesture in the sign language database after a normalization process that varies the coordinate information along the X, Y, and Z axes based on the predetermined length range of the X and Z axes, expressing the length range of the X and Z axes as -1.0 to 1.0 and the length range of the Y axis as 0 to 1.0.
The system according to claim 1, wherein:
when the second hand direction information moves from one position to a different position along the X, Y, and Z axes in the three-dimensional space, the coordinate information processing unit adds the second hand direction information to the three-dimensional array space including the second hand position coordinate information and the second finger position coordinate information and stores the result in the sign language database.
A sign language translation method comprising: recognizing first hand position coordinate information and first finger position coordinate information as a plurality of frames with respect to the X, Y, and Z axes, and dividing the recognized information frame by frame for each gesture;
storing the first hand position coordinate information and the first finger position coordinate information in a three-dimensional array space as the sign language gesture to be translated, and varying, in a normalization process, the lengths of the first hand position coordinate information and the first finger position coordinate information along the X, Y, and Z axes according to the average sampling length of the gesture;
adding to the three-dimensional array space first hand direction information describing how the first hand position coordinate information moves; and
selecting, through similarity comparison of the first hand position coordinate information, the first finger position coordinate information, and the first hand direction information with the second hand position coordinate information, second finger position coordinate information, and second hand direction information of a reference sign language gesture previously stored in a sign language database in the three-dimensional array space, the hand position coordinate information, finger position coordinate information, and hand direction information of the gesture to be translated, and extracting the corresponding translation meaning.
The method according to claim 8,
Wherein the step of extracting the translation meaning comprises:
comparing the first hand position coordinate information with the second hand position coordinate information in the sign language database to find the hand position coordinate information having the highest similarity value, and retrieving the hand position coordinate information whose similarity value lies within 5 of that highest value;
comparing the first hand direction information with the second hand direction information in the sign language database, based on the retrieved hand position coordinate information, to find the hand direction information having the highest similarity value, and retrieving the hand direction information whose similarity value lies within 5 of that highest value; and
when the highest finger similarity value is found by matching the first finger position coordinate information against the retrieved hand position coordinate information and hand direction information, finally selecting the most similar hand position coordinate information, hand direction information, and finger position coordinate information and extracting the translation meaning.
The method according to claim 8,
Wherein the normalizing process comprises:
varying the first hand position coordinate information and the first finger position coordinate information along the X, Y, and Z axes based on the predetermined range of the X and Z axes, expressing the length range of the X and Z axes as -1.0 to 1.0 and the length range of the Y axis as 0 to 1.0, and converting these ranges to indices 0 to 199 so that the coordinate information is represented in a three-dimensional array space of [200][200][200].
The method according to claim 10,
Further comprising compressing the three-dimensional array space of [200][200][200] into [50][50][50] and storing it.
The method according to claim 8,
Wherein the step of adding to the three-dimensional array space comprises:
assigning specific direction information to the directionality of movement when the hand moves from one position to a different position along the X, Y, and Z axes in the three-dimensional space, and adding the first hand direction information to the three-dimensional array space and storing it.
The method according to claim 8,
Further comprising storing the second hand position coordinate information and the second finger position coordinate information of the reference sign language gesture in the sign language database after varying them along the X, Y, and Z axes based on the predetermined length range of the X and Z axes, with the length range of the X and Z axes expressed as -1.0 to 1.0 and the length range of the Y axis as 0 to 1.0 in the three-dimensional array space.
The method according to claim 8,
Further comprising, when the second hand direction information of the reference sign language gesture moves from one position to a different position along the X, Y, and Z axes in the three-dimensional space, adding the second hand direction information to the three-dimensional array space including the second hand position coordinate information and the second finger position coordinate information and storing the result in the sign language database.
KR1020140166165A 2014-11-26 2014-11-26 System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device KR20160062913A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140166165A KR20160062913A (en) 2014-11-26 2014-11-26 System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140166165A KR20160062913A (en) 2014-11-26 2014-11-26 System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device

Publications (1)

Publication Number Publication Date
KR20160062913A (en) 2016-06-03

Family

ID=56192206

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140166165A KR20160062913A (en) 2014-11-26 2014-11-26 System and Method for Translating Sign Language for Improving the Accuracy of Lip Motion Device

Country Status (1)

Country Link
KR (1) KR20160062913A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101958201B1 (en) * 2018-02-14 2019-03-14 안동과학대학교 산학협력단 Apparatus and method for communicating through sigh language recognition
KR20220042335A (en) * 2018-03-15 2022-04-05 한국전자기술연구원 Automatic Sign Language Recognition Method and System

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment