CN101527092A - Computer assisted hand language communication method under special session context - Google Patents
- Publication number
- CN101527092A, CN200910021906A
- Authority
- CN
- China
- Prior art keywords
- sign language
- language
- deaf
- mute
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a computer-assisted sign language communication method for special conversational contexts, comprising the following steps: building a virtual skeleton model and a vocabulary database for the special conversational context, setting the range of motion of each joint of the skeleton model, and establishing the correspondence between the joint motions of a virtual human and the special-purpose vocabulary; capturing the sign language made by a deaf-mute person and recognizing it through image processing and biometric feature recognition to obtain its meaning, then converting that sign language into text or speech to inform a normal person; conversely, after receiving text input from the normal person, or after obtaining the meaning of the recognized sign language made by the deaf-mute person, automatically matching a corresponding answer sentence, processing the text or answer sentence, and conveying it to the deaf-mute person by conversion into sign language. The method thus realizes conversation between a normal person and a deaf-mute person in specific settings with a limited vocabulary.
Description
Technical field
The invention belongs to the field of image-based biometric identification and relates to a computer-assisted sign language communication method for special conversational contexts.
Background technology
In everyday life, a normal person can use both visual and auditory information, whereas a deaf-mute person can rely only on visual information. Normal people and deaf-mute people usually communicate through sign language as a medium, but most normal people know little about sign language and cannot hold a conversation with a deaf-mute person at all, so a communication barrier that is difficult to overcome exists between them.
Summary of the invention
The purpose of the invention is to provide a computer-assisted sign language communication method for special conversational contexts, used to help a normal person converse with a deaf-mute person in special scenes where the vocabulary is limited.
The technical solution adopted by the invention is a computer-assisted sign language communication method for special conversational contexts. A correspondence is established between a vocabulary database and the sign language actions of a virtual human. The sign language made by the deaf-mute person is captured, processed by image processing and biometric feature recognition to obtain its meaning, and that meaning is conveyed to the normal person. In the other direction, text input by the normal person is obtained, or, after the meaning of the deaf-mute person's sign language has been recognized, a corresponding answer sentence is matched automatically in the vocabulary database; the text or answer sentence is then processed and conveyed to the deaf-mute person as sign language. Conversation between a normal person and a deaf-mute person in a specific setting is thus realized. The method proceeds according to the following steps:
Step 1: establish, in a computer, the vocabulary database used in the special conversational context;
Step 2: construct a three-dimensional animated virtual human using computer graphics methods, and build the virtual human's skeleton joint model;
Step 3: construct the correspondence between the vocabulary in the database established in Step 1 and the sign language actions performed by the three-dimensional animated virtual human constructed in Step 2;
Step 4: the normal person and the deaf-mute person converse using the virtual human established in Step 2 and the vocabulary database established in Step 1, on the basis of the correspondence established in Step 3.
The concrete steps for converting the sign language made by the deaf-mute person into text or speech are:
1) capture, with a camera, the sign language image information made by the deaf-mute person, and input this image information into the computer;
2) for the image information collected in step 1), obtain a target-free background through training and apply background subtraction to detect the moving target containing the hand region in front of this background; segment the hand region using the color and shape features that distinguish the hand from other body parts; extract key frames of the hand region with a digital-video key-frame extraction method; then identify the static gesture in each key frame from the features that differentiate gestures;
3) assemble the static gesture sequence identified in step 2) into a dynamic gesture, and recognize this dynamic gesture with a stochastic-process description combined with pattern recognition, including Hidden Markov Model (HMM) or neural network methods, to obtain the meaning the dynamic gesture conveys;
4) convert the meaning of the dynamic gesture identified in step 3) into text or speech that the normal person can understand.
The concrete steps for converting the text input by the normal person into sign language are as follows:
a) input text information through an input device, or, after the meaning of the recognized sign language of the deaf-mute person has been obtained, automatically match a corresponding answer sentence in the special-purpose vocabulary database, and pass this text or answer sentence to the computer;
b) process the information obtained in step a) into special-purpose vocabulary items of the database established in Step 1, and map these vocabulary items to the corresponding sign language actions through the correspondence built in Step 3;
c) display the sign language actions formed in step b) through the three-dimensional animated virtual human constructed in Step 2.
A further feature of the invention is that, in step 2), the key frames are extracted with an equal-interval key-frame extraction method in which the interval L is calculated by the following formula:
L = VideoNum / CodeNum    (1-1)
where VideoNum is the total number of frames of the video and CodeNum is the number of code words.
Further, in step 2), the recognition of different static gestures is realized with a Fourier descriptor method based on multi-level chord length functions.
In this Fourier descriptor method, the formula for computing the Fourier descriptor of a hand shape is:
f_i(u) = (1/N) * Σ_{t=0}^{N-1} L_i(x_t) * e^{-j2πut/N},  u = 0, 1, ..., N-1    (1-2)
where L_i(x_t) denotes the i-th level chord length function, with 8 levels in total, and N is the number of function sampling points, set to N = 8.
The beneficial effect of the method of the invention is that, through information acquisition and processing and the constructed three-dimensional animated virtual human, the normal person's speech or text is converted into sign language, or the deaf-mute person's sign language is converted into speech or text, enabling communication between a normal person and a deaf-mute person in specific settings.
Description of drawings
Fig. 1 shows the virtual human skeleton joint model established by the method of the invention.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
The language people use when communicating is very rich. To have an assistive method translate entirely correctly between the language spoken by a normal person and the sign language of a deaf-mute person would require solving many problems, such as semantic analysis, for which no good method yet exists. However, for certain public service industries, the conversational content of settings such as banks, post offices, hospitals, and supermarkets involves a limited vocabulary, and the method of the invention can translate such exchanges. The method, applied in a specific setting, comprises two parts. The first part converts the sign language made by the deaf-mute person into text or speech: the sign language is captured, processed by image processing and biometric feature recognition to obtain its meaning, and the meaning is conveyed to the normal person. The second part converts the speech or text the normal person wishes to express into sign language: the normal person inputs text, or, after obtaining the meaning of the recognized sign language, the computer system automatically matches a corresponding answer sentence in the special-purpose vocabulary database; the text or answer sentence is then processed and conveyed to the deaf-mute person as sign language. Conversation between a normal person and a deaf-mute person in a specific setting is thus realized. The method of the invention is described below with a concrete example.
Step 1: establish, in the computer, the special-purpose vocabulary database used in the special conversational context.
The method of the invention assists communication between a normal person and a deaf-mute person in a special conversational context, so the special-purpose vocabulary database used in that context must first be established in the computer. The database is divided by conversational scene and stores a certain amount of special-purpose vocabulary for each scene.
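As a minimal sketch of such a scene-specific vocabulary database, it can be modeled as a mapping from scene to a limited phrase list; the scene names and phrases below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the scene-specific vocabulary database (Step 1).
# Scene names and phrases are illustrative, not from the patent.
VOCAB_DB = {
    "bank": ["hello", "deposit", "withdraw", "password", "thank you"],
    "hospital": ["hello", "registration", "where does it hurt", "prescription"],
}

def in_vocabulary(scene, phrase):
    """Return True if the phrase belongs to the scene's limited vocabulary."""
    return phrase in VOCAB_DB.get(scene, [])

print(in_vocabulary("bank", "deposit"))  # True
print(in_vocabulary("bank", "quantum physics"))  # False
```

Restricting each scene to such a small closed vocabulary is what makes the later matching and translation steps tractable.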
Step 2: construct a three-dimensional animated virtual human using computer graphics methods, and build the virtual human's skeleton joint model.
The virtual human is constructed so that the normal person can conduct a limited-vocabulary conversation with the deaf-mute person through it. The virtual human uses a mathematical model built on the structure of the human skeleton and joints to realize the corresponding sign language actions. It is constructed with reference to the human skeletal joint structure, its bones are connected into a whole in the manner of an articulated chain, and the number of skeletal joints and the degrees of freedom of each joint are set. The skeleton joint model is built so that the virtual human's actions can be composed from combined rotations of different joints; it can reproduce the great majority of human arm movements without impairing the expression of sign language. The virtual human skeleton joint model established by the method of the invention according to kinesiological characteristics is shown in Fig. 1. The model has 43 joints with 47 degrees of freedom; the range of motion of each degree of freedom is limited, and the adjustable motion parameter ranges of each degree of freedom of the virtual human model are given in Table 1. Because sign language is expressed mainly with the upper limbs, the method assigns 23 degrees of freedom to each of the left and right arms of the virtual human model to realize the various sign language actions, and one degree of freedom at the model's center to control the viewing distance of the model for the convenience of the user.
Table 1. Motion parameter adjustment ranges for each joint degree of freedom of the virtual human model
The constructed virtual human can be rendered as a three-dimensional cartoon-style animation, giving it a degree of cartoonishness and approachability: cartoonishness so that arm movements can be exaggerated when performing sign language without appearing distorted, and approachability so that it is more readily accepted by the other party during communication. The details of the virtual human's arms are the main objects of description, the purpose being to complete comparatively complex sign language actions.
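Since each degree of freedom has a limited range of motion, any angle applied to the model must be clamped to its allowed interval. A minimal sketch, with joint names and range values chosen for illustration (the actual 47 ranges are those of Table 1):

```python
# Sketch of clamping a virtual-human joint rotation to its allowed range
# (Step 2). Joint names and limits here are illustrative assumptions;
# the method stores one such range per degree of freedom (47 in total).
JOINT_LIMITS = {
    "right_shoulder_vertical": (-90.0, 180.0),
    "right_elbow": (0.0, 145.0),
    "right_index_pip": (-102.0, 0.0),
}

def set_joint_angle(joint, angle):
    """Clamp the requested angle into the joint's permitted range."""
    lo, hi = JOINT_LIMITS[joint]
    return max(lo, min(hi, angle))

print(set_joint_angle("right_elbow", 200.0))  # 145.0
```

Clamping in this way guarantees that driving the model with arbitrary parameter data can never produce anatomically impossible poses.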
Step 3: construct the correspondence between the vocabulary in the special-purpose database established in Step 1 and the sign language actions performed by the three-dimensional animated virtual human constructed in Step 2.
The correspondence between a special-purpose vocabulary item in the computer database and a virtual-human sign language action is an action sequence of the virtual human's arms. Parameters describing the virtual human's arm-motion processes are set in advance, and these parameters control the motion of each skeletal joint; in this way a communication response relation is established between the joint motions of the virtual human and the special-purpose vocabulary of the special scene. When a vocabulary item is received, the action process corresponding to the word or short sentence to be expressed is retrieved through this response relation; the action-process parameters are then fetched and drive the virtual human to complete the sign language action according to the specified parameters. The method of the invention adopts sign-language action editing and expression based on joint-angle control to establish the response relation for the special conversational context: after the meaning of the recognized sign language is obtained, a corresponding answer sentence is matched automatically in the special-purpose vocabulary database, and the virtual human is driven to make the corresponding sign language reply. In addition, when the special-purpose vocabulary database lacks a suitable entry, a sentence can be edited as text and translated into sign language, and the sentence is simultaneously stored in the reply database.
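The retrieval just described can be sketched as two lookups: one from a recognized phrase to a canned reply, and one from the reply to its stored key-frame action sequence. The dictionaries and joint names below are illustrative (the angle values echo the style of Table 2), not the patent's actual data:

```python
# Hedged sketch of Step 3: map a recognized phrase to an automatic reply
# and fetch the reply's stored joint-angle key-frame sequence.
# All data below is illustrative, not from the patent's database.
ACTIONS = {
    "hello": [
        {"right_shoulder": 34, "right_elbow": -5, "right_index_mcp": -90},
        {"right_shoulder": 46, "right_elbow": -5, "right_index_mcp": -90},
    ],
}
REPLIES = {"hello": "hello"}  # illustrative reply database

def respond(recognized_phrase):
    """Return the matched reply and the key frames that express it."""
    reply = REPLIES.get(recognized_phrase, "")
    return reply, ACTIONS.get(reply, [])

reply, frames = respond("hello")
print(reply, len(frames))  # hello 2
```

In the actual system the key frames would then drive the virtual human's joints rather than be printed.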
The editing and expression of the sign language action for "hello" through joint-angle control of the three-dimensional animated virtual human is taken as an example. Within the data ranges given in Table 1, the virtual human's arm actions are changed by adjusting the angle data of each joint, including, for the left and right hands respectively: the shoulder, elbow, and wrist joints; the carpometacarpal, metacarpophalangeal, and interphalangeal joints of the thumb; and the metacarpophalangeal and proximal interphalangeal joints of the index, middle, ring, and little fingers. For each joint, the permissible motion angles are set about three directions: vertical, lateral, and rotation about the bone axis. The parameters controlling the motion angle of each joint are shown in Table 2.
Table 2. Joint motion angles of the virtual human for the "hello" sign language action

Action name | Right shoulder | Right elbow | Right wrist | Right thumb | Right index | Right middle | Right ring | Right little | Left shoulder | Left elbow | Left wrist | Left thumb | Left index | Left middle | ... |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Hello | 34 | -5 | 0 | 19 | -2 | -90 | 0 | -96 | 0 | 0 | 0 | 0 | 0 | 0 | ... |
Hello | 46 | -5 | 0 | -17 | -2 | -90 | 0 | -96 | 0 | 0 | 0 | 0 | 0 | 0 | ... |
Step 4: the normal person and the deaf-mute person converse using the virtual human established in Step 2 and the special-purpose vocabulary database established in Step 1, on the basis of the correspondence established in Step 3.
The concrete steps for converting the sign language made by the deaf-mute person into text or speech are:
1) capture, with a camera, the sign language video made by the deaf-mute person, and input this video information into the computer.
Because the video frames captured by the camera contain, in addition to the person's hands, other background parts such as the torso, the hand region in the captured frames must be segmented out before the hand motion can be analyzed. Therefore, proceed to the next step.
2) For the sign language video collected in step 1), obtain a target-free background through training and apply background subtraction to detect the moving target containing the hand region; segment the hand region using the color and shape features that distinguish the hand from other body parts; extract key frames of the hand region with a digital-video key-frame extraction method; then identify the static gesture in each key frame from the features that differentiate gestures.
To improve the correctness of the segmentation, the deaf-mute person may choose, according to the site environment, to wear red, green, or blue gloves to assist the segmentation of the hand region. For example, from a frame of sign language made while wearing red gloves, the background is removed and the hand-region image is obtained by segmentation.
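A minimal sketch of the glove-assisted segmentation, assuming red gloves: after background removal, keep the pixels whose red channel clearly dominates. The image representation (nested lists of RGB tuples) and the threshold `margin` are illustrative choices:

```python
# Sketch of glove-based hand segmentation (recognition step 2): keep
# pixels whose red channel dominates, assuming the signer wears red
# gloves. The margin threshold is an illustrative assumption.
def red_glove_mask(image, margin=60):
    """image: 2-D list of (r, g, b) tuples; returns a 2-D boolean mask."""
    return [[(r - max(g, b)) > margin for (r, g, b) in row] for row in image]

frame = [[(200, 30, 40), (90, 95, 100)],
         [(210, 20, 10), (50, 60, 200)]]
print(red_glove_mask(frame))  # [[True, False], [True, False]]
```

A production system would operate on camera frames (e.g. NumPy arrays) and combine this color cue with the background-subtraction result, but the dominance test is the same idea.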
A key frame is a frame of the sign language in which the gesture changes greatly; key frames can be obtained by analyzing the similarity of the gesture motion between different frames. To increase the speed of sign language recognition, this method uses equal-interval key-frame extraction, with the interval L calculated by the following formula:
L = VideoNum / CodeNum    (1-1)
where VideoNum is the total number of frames of the video and CodeNum is the number of code words.
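Assuming the interval is the total frame count divided by the code-word count, equal-interval key-frame extraction can be sketched as follows (function and variable names are illustrative):

```python
# Sketch of equal-interval key-frame extraction: pick every L-th frame,
# where L is assumed to be VideoNum / CodeNum (the patent's exact formula
# is not reproduced in the text).
def key_frames(frames, code_num):
    video_num = len(frames)
    interval = max(1, video_num // code_num)  # L = VideoNum / CodeNum
    return frames[::interval][:code_num]

print(key_frames(list(range(40)), 8))  # [0, 5, 10, 15, 20, 25, 30, 35]
```

Uniform sampling trades some precision for speed, which matches the stated motivation of accelerating recognition.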
Continuing the example above: first the key frames of the sign language "hello" made by the deaf-mute person are obtained. After the computer captures this sign language action and segments it, two key frames are obtained. The virtual human's action in each key frame is formed with reference to the Chinese national-standard sign language: by adjusting the angle-control data of each joint, the angular positions of the joints that express the different meanings are obtained, and the angle data of every joint of the virtual human in each frame are then saved in the database.
Editing of the sign language action "hello" is now complete. In use, only the sign language action "hello" need be input; the computer system matches it in the special-purpose vocabulary database, fetches the key-frame action data for "hello" in sequence, and the virtual human's gesture transitions from its current state to the new state. For example, the right index metacarpophalangeal joint changes from 0° to -90°, and the proximal interphalangeal joint changes from 0° to -102°; the distal interphalangeal joint is constrained by the proximal one and follows the constrained motion, completing the bending of the index finger. The other joints follow essentially the same principle. When all joints and all action key-frame data have been processed, the expression of the "hello" action is realized. To prevent abrupt changes of motion, the method performs linear interpolation between the corresponding joint data of two action key frames, preventing jumps between joint-angle values and making the virtual human's motion smoother.
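The linear interpolation between consecutive action key frames can be sketched as follows; the joint name and angle values are illustrative:

```python
# Sketch of the linear interpolation between two action key frames that
# smooths joint-angle transitions. frame_a / frame_b map joint names to
# angles; names and values are illustrative.
def interpolate(frame_a, frame_b, steps):
    """Yield `steps` intermediate joint-angle dicts between two key frames."""
    for k in range(1, steps + 1):
        t = k / (steps + 1)
        yield {j: frame_a[j] + t * (frame_b[j] - frame_a[j]) for j in frame_a}

a = {"right_index_mcp": 0.0}
b = {"right_index_mcp": -90.0}
print(list(interpolate(a, b, 2)))  # two frames, roughly -30° and -60°
```

Inserting even a couple of interpolated frames between key frames is enough to remove the visible jumps the text describes.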
A static gesture is an independent hand posture within a key frame; that is, a static gesture refers to each instantaneous gesture decomposed from the sign language, without considering what precedes or follows it.
The purpose of recognizing static gestures is mainly to obtain features that distinguish different hand actions. In image processing terms, a target's features can be spatial features, such as shape, size, posture, color, and texture, which can be extracted by methods such as edge-tracking chain codes, centroid, and area; frequency-domain features, such as the spectrum, which can be extracted by methods such as the Fourier transform, Fourier descriptors, and the wavelet transform; or color features, extracted by methods such as color clustering and color histograms. The method of the invention recognizes different static gestures using a Fourier descriptor method based on multi-level chord length functions.
The formula for computing the Fourier descriptor of a hand shape is:
f_i(u) = (1/N) * Σ_{t=0}^{N-1} L_i(x_t) * e^{-j2πut/N},  u = 0, 1, ..., N-1    (1-2)
where L_i(x_t) denotes the i-th level chord length function, divided into 8 levels in the method of the invention, and N is the number of function sampling points, set here to N = 8.
The values of the Fourier descriptors of the typical static gestures commonly used in Chinese sign language are computed according to formula (1-2) and stored, for later reference, in the database of the computer that assists the communication. How many typical static gestures are stored depends on the intended scope of application of the assistive communication computer system.
Afterwards, the value of the Fourier descriptor of the gesture to be detected is computed according to formula (1-2) and matched, on the principle of shortest Euclidean distance, against the stored descriptor values of the typical static gestures in the computer database, thereby determining the meaning of the static gesture under detection.
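A hedged sketch of this matching step: take the magnitude spectrum of a sampled chord-length function with a direct DFT (N = 8 sample points, as above) and classify a query gesture by shortest Euclidean distance to stored templates. The function names and sample data are illustrative assumptions:

```python
import cmath
import math

# Sketch of static-gesture matching: DFT magnitudes of a sampled
# chord-length function serve as the descriptor; classification is
# nearest-template by Euclidean distance. Sample data is made up.
def fourier_descriptor(samples):
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * math.pi * u * t / n)
                    for t, x in enumerate(samples)) / n)
            for u in range(n)]

def classify(query_fd, templates):
    """templates: gesture name -> stored descriptor; nearest wins."""
    return min(templates, key=lambda name: math.dist(query_fd, templates[name]))

fist = [1, 2, 3, 4, 4, 3, 2, 1]   # illustrative chord-length samples
palm = [4, 4, 4, 4, 0, 0, 0, 0]
templates = {"fist": fourier_descriptor(fist), "palm": fourier_descriptor(palm)}
print(classify(fourier_descriptor(fist), templates))  # fist
```

Using DFT magnitudes rather than raw samples gives the descriptor some invariance to where sampling starts along the contour, which is the usual reason Fourier descriptors are preferred for shape matching.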
3) The static gesture sequence identified in step 2) is assembled into a dynamic gesture, which is recognized with a stochastic-process description combined with pattern recognition, including Hidden Markov Model (HMM) or neural network methods, to obtain the meaning the dynamic gesture conveys.
A continuous sequence of static gestures constitutes a dynamic gesture; a dynamic gesture is formed from the different hand actions and the motion trajectory of the gesture. The gestures a person makes at different moments have a certain randomness, and different people express the same meaning in sign language with individual differences, so recognizing a dynamic gesture is in fact describing and recognizing the characteristics of a stochastic process.
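As an illustrative sketch only (the method would use trained HMMs over real gesture data): the most likely hidden state sequence behind a sequence of recognized static gestures can be decoded with the Viterbi algorithm. All states, observation symbols, and probabilities below are made-up toy values:

```python
# Toy Viterbi decoder for an HMM over static-gesture observations.
# States, symbols, and probabilities are illustrative assumptions.
def viterbi(obs, states, start_p, trans_p, emit_p):
    prob = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_prob, new_path = {}, {}
        for s in states:
            best, prev = max((prob[p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            new_prob[s], new_path[s] = best, path[prev] + [s]
        prob, path = new_prob, new_path
    return path[max(prob, key=prob.get)]

states = ["hand_raised", "hand_waving"]
start = {"hand_raised": 0.5, "hand_waving": 0.5}
trans = {"hand_raised": {"hand_raised": 0.5, "hand_waving": 0.5},
         "hand_waving": {"hand_raised": 0.5, "hand_waving": 0.5}}
emit = {"hand_raised": {"open_palm_up": 0.9, "palm_sideways": 0.1},
        "hand_waving": {"open_palm_up": 0.1, "palm_sideways": 0.9}}
print(viterbi(["open_palm_up", "palm_sideways"], states, start, trans, emit))
```

The decoded state path (here `["hand_raised", "hand_waving"]`) stands in for the dynamic-gesture meaning that a full recognizer would output.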
4) The meaning of the dynamic gesture identified in step 3) is converted into text or speech that the normal person can understand.
The concrete steps for converting the normal person's speech or text into sign language are as follows:
a) input text information through an input device, or, after the meaning of the recognized sign language of the deaf-mute person has been obtained, automatically match a corresponding answer sentence in the special-purpose vocabulary database, and pass this text or answer sentence to the computer;
b) process the information obtained in step a) into special-purpose vocabulary items of the database established in Step 1, and map these vocabulary items to the corresponding sign language actions through the correspondence built in Step 3;
c) display the sign language actions formed in step b) through the three-dimensional animated virtual human constructed in Step 2.
The method of the invention is used in certain special conversational contexts for limited-vocabulary communication between a normal person and a deaf-mute person. Through sign language recognition and display, it helps a normal person who does not know sign language to converse in sign language, and it translates the sign language of a hearing-impaired person into text or speech.
Claims (4)
1. A computer-assisted sign language communication method under a special conversational context, in which a correspondence is established between a vocabulary database and the sign language actions of a virtual human; the sign language made by the deaf-mute person is captured and processed by image processing and biometric feature recognition to obtain its meaning, and that meaning is conveyed to the normal person; conversely, text input by the normal person is obtained, or, after the meaning of the deaf-mute person's sign language has been recognized, a corresponding answer sentence is matched automatically in the vocabulary database, and the text or answer sentence is processed and conveyed to the deaf-mute person as sign language, realizing conversation between a normal person and a deaf-mute person in a specific setting; characterized in that the method proceeds according to the following steps:
Step 1: establish, in a computer, the vocabulary database used in the special conversational context;
Step 2: construct a three-dimensional animated virtual human using computer graphics methods, and build the virtual human's skeleton joint model;
Step 3: construct the correspondence between the vocabulary in the database established in Step 1 and the sign language actions performed by the three-dimensional animated virtual human constructed in Step 2;
Step 4: the normal person and the deaf-mute person converse using the virtual human established in Step 2 and the vocabulary database established in Step 1, on the basis of the correspondence established in Step 3;
wherein the concrete steps for converting the sign language made by the deaf-mute person into text or speech are:
1) capture, with a camera, the sign language image information made by the deaf-mute person, and input this image information into the computer;
2) for the image information collected in step 1), obtain a target-free background through training and apply background subtraction to detect the moving target containing the hand region in front of this background; segment the hand region using the color and shape features that distinguish the hand from other body parts; extract key frames of the hand region with a digital-video key-frame extraction method; then identify the static gesture in each key frame from the features that differentiate gestures;
3) assemble the static gesture sequence identified in step 2) into a dynamic gesture, and recognize this dynamic gesture with a stochastic-process description combined with pattern recognition, including Hidden Markov Model (HMM) or neural network methods, to obtain the meaning the dynamic gesture conveys;
4) convert the meaning of the dynamic gesture identified in step 3) into text or speech that the normal person can understand;
and the concrete steps for converting the text input by the normal person into sign language are as follows:
a) input text information through an input device, or, after the meaning of the recognized sign language of the deaf-mute person has been obtained, automatically match a corresponding answer sentence in the special-purpose vocabulary database, and pass this text or answer sentence to the computer;
b) process the information obtained in step a) into special-purpose vocabulary items of the database established in Step 1, and map these vocabulary items to the corresponding sign language actions through the correspondence built in Step 3;
c) display the sign language actions formed in step b) through the three-dimensional animated virtual human constructed in Step 2.
2. The computer-assisted sign language communication method according to claim 1, characterized in that in said step 2) the key frames are extracted with an equal-interval key-frame extraction method in which the interval L is calculated by the following formula:
L = VideoNum / CodeNum    (1-1)
where VideoNum is the total number of frames of the video and CodeNum is the number of code words.
3. The computer-assisted sign language communication method according to claim 1, characterized in that in said step 2) the recognition of different static gestures is realized with a Fourier descriptor method based on multi-level chord length functions.
4. The computer-assisted sign language communication method according to claim 3, characterized in that in said Fourier descriptor method the formula for computing the Fourier descriptor of a hand shape is:
f_i(u) = (1/N) * Σ_{t=0}^{N-1} L_i(x_t) * e^{-j2πut/N},  u = 0, 1, ..., N-1    (1-2)
where L_i(x_t) denotes the i-th level chord length function, with 8 levels; N is the number of function sampling points, set to N = 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910021906A CN101527092A (en) | 2009-04-08 | 2009-04-08 | Computer assisted hand language communication method under special session context |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910021906A CN101527092A (en) | 2009-04-08 | 2009-04-08 | Computer assisted hand language communication method under special session context |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101527092A true CN101527092A (en) | 2009-09-09 |
Family
ID=41094944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN200910021906A Pending CN101527092A (en) | 2009-04-08 | 2009-04-08 | Computer assisted hand language communication method under special session context |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101527092A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073878A (en) * | 2010-11-15 | 2011-05-25 | 上海大学 | Non-wearable finger pointing gesture visual identification method |
CN102104670A (en) * | 2009-12-17 | 2011-06-22 | 深圳富泰宏精密工业有限公司 | Sign language identification system and method |
CN102193633A (en) * | 2011-05-25 | 2011-09-21 | 广州畅途软件有限公司 | dynamic sign language recognition method for data glove |
CN102799317A (en) * | 2012-07-11 | 2012-11-28 | 联动天下科技(大连)有限公司 | Smart interactive projection system |
CN103136986A (en) * | 2011-12-02 | 2013-06-05 | 深圳泰山在线科技有限公司 | Sign language identification method and sign language identification system |
CN104064187A (en) * | 2014-07-09 | 2014-09-24 | 张江杰 | Sign language conversion voice system |
CN106254960A (en) * | 2016-08-30 | 2016-12-21 | 福州瑞芯微电子股份有限公司 | Video call method and system for people with communication disorders |
CN108055479A (en) * | 2017-12-28 | 2018-05-18 | 暨南大学 | Method for producing animal behavior videos |
CN108427910A (en) * | 2018-01-30 | 2018-08-21 | 浙江凡聚科技有限公司 | AR sign language translation learning method based on deep neural networks, client and server |
CN108766127A (en) * | 2018-05-31 | 2018-11-06 | 京东方科技集团股份有限公司 | Sign language interaction method, device and storage medium |
CN109063615A (en) * | 2018-07-20 | 2018-12-21 | 中国科学技术大学 | Sign language recognition method and system |
CN109409255A (en) * | 2018-10-10 | 2019-03-01 | 长沙千博信息技术有限公司 | Sign language scene generation method and device |
CN109460748A (en) * | 2018-12-10 | 2019-03-12 | 内蒙古科技大学 | Trinocular-vision sign language recognition device and multi-information-fusion sign language recognition method |
CN109543812A (en) * | 2017-09-22 | 2019-03-29 | 吴杰 | Rapid modeling method for specific real-person behavior |
CN110032740A (en) * | 2019-04-20 | 2019-07-19 | 卢劲松 | Customized personalized-semantics learning application method |
CN110070065A (en) * | 2019-04-30 | 2019-07-30 | 李冠津 | Sign language system and communication method based on visual and speech intelligence |
CN110322760A (en) * | 2019-07-08 | 2019-10-11 | 北京达佳互联信息技术有限公司 | Voice data generation method, device, terminal and storage medium |
CN110390239A (en) * | 2018-04-17 | 2019-10-29 | 现代自动车株式会社 | Vehicle including a communication system for people with disabilities, and control method of the communication system |
CN110413130A (en) * | 2019-08-15 | 2019-11-05 | 泉州师范学院 | Virtual reality sign language learning, testing and evaluation method based on motion capture |
CN110826441A (en) * | 2019-10-25 | 2020-02-21 | 深圳追一科技有限公司 | Interaction method, interaction device, terminal equipment and storage medium |
CN111354246A (en) * | 2020-01-16 | 2020-06-30 | 浙江工业大学 | System and method for helping deaf-mute people communicate |
CN112839184A (en) * | 2020-12-31 | 2021-05-25 | 深圳追一科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113689879A (en) * | 2020-05-18 | 2021-11-23 | 北京搜狗科技发展有限公司 | Method, device, electronic equipment and medium for driving virtual human in real time |
CN114120770A (en) * | 2021-03-24 | 2022-03-01 | 张银合 | Barrier-free communication method for hearing-impaired people |
CN113689879B (en) * | 2020-05-18 | 2024-05-14 | 北京搜狗科技发展有限公司 | Method, device, electronic equipment and medium for driving virtual person in real time |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101527092A (en) | Computer assisted hand language communication method under special session context | |
CN110135375B (en) | Multi-person pose estimation method based on global information integration | |
WO2020108362A1 (en) | Body posture detection method, apparatus and device, and storage medium | |
CN108983636B (en) | Man-machine intelligent symbiotic platform system | |
CN110598576B (en) | Sign language interaction method, device and computer medium | |
CN111223483A (en) | Lip language identification method based on multi-granularity knowledge distillation | |
CN106097835B (en) | Deaf-mute communication intelligent auxiliary system and communication method | |
CN103092329A (en) | Lip language input method based on lip-reading technology | |
CN108256458B (en) | Bidirectional real-time translation system and method for deaf natural sign language | |
WO2022127494A1 (en) | Pose recognition model training method and apparatus, pose recognition method, and terminal device | |
CN105373810B (en) | Method and system for establishing motion recognition model | |
CN107066635A (en) | Method and system for building information navigation based on image-comparison recognition | |
Alksasbeh et al. | Smart hand gestures recognition using K-NN based algorithm for video annotation purposes | |
CN107346207A (en) | Dynamic gesture segmentation and recognition method based on HMM | |
Singh et al. | A Review For Different Sign Language Recognition Systems | |
Sharma et al. | Real-time recognition of yoga poses using computer vision for smart health care | |
Siby et al. | Hand gesture recognition | |
KR20050065198A (en) | Three-dimensional motion command recognizer using motion of user | |
CN116363757A (en) | Skeleton and sensor bimodal human behavior recognition method based on self-attention intention convolution | |
Putra et al. | Designing translation tool: Between sign language to spoken text on kinect time series data using dynamic time warping | |
KR102377767B1 (en) | Handwriting and arm movement learning-based sign language translation system and method | |
CN113420783B (en) | Intelligent man-machine interaction method and device based on image-text matching | |
CN112487951B (en) | Sign language recognition and translation method | |
CN108537855A (en) | Method and device for generating ceramic marbled-paper patterns consistent with a sketch | |
CN107203268A (en) | Three-dimensional brushwork-style recognition method based on directional chain codes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Open date: 2009-09-09 |