CN109409255A - Sign language scene generation method and device - Google Patents

Sign language scene generation method and device

Info

Publication number
CN109409255A
CN109409255A (application CN201811177572.1A)
Authority
CN
China
Prior art keywords
sign language
vocabulary
lexical unit
input information
scene
Prior art date
Legal status
Pending
Application number
CN201811177572.1A
Other languages
Chinese (zh)
Inventor
高春鸣
郭松睿
Current Assignee
Changsha Qianbo Information Technology Co Ltd
Original Assignee
Changsha Qianbo Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Changsha Qianbo Information Technology Co Ltd
Priority to CN201811177572.1A
Publication of CN109409255A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/009 Teaching or communicating with deaf persons

Abstract

The present invention provides a sign language scene generation method and device. The method includes: obtaining input information; parsing at least one lexical unit contained in the input information, and establishing a correspondence between the lexical units and sign language vocabulary; extracting the scene-related lexical units from the parsed lexical units, and building a corresponding scene model; and, based on the established correspondence between lexical units and sign language vocabulary, generating a sign language animation corresponding to the input information, and generating a scene image corresponding to the input information in the built scene model. In this way, the input information is parsed into lexical units, and the sign language animation corresponding to the input information is generated from the correspondence between lexical units and sign language vocabulary, so the input information can be displayed in the form of a sign language animation. Hearing-impaired people can then understand the input information through the generated animation, which improves the ease of communication between hearing-impaired people and the general population.

Description

Sign language scene generation method and device
Technical field
The present invention relates to the technical field of sign language translation, and in particular to a sign language scene generation method and device.
Background art
To make it easier for hearing-impaired people and others like them to communicate, sign language has become a convenient choice. In the standard Chinese sign language currently being promoted, each morpheme or word corresponds to a gesture, and during communication each morpheme or word of the language is represented by its gesture.
For young hearing-impaired children, age and hearing impairment limit cognitive ability, so many concepts are difficult for them to understand from expression alone; moreover, the content provided by general textbooks or teaching aids is limited and inflexible in use.
Summary of the invention
Embodiments of the present invention provide a sign language scene generation method and device to improve the flexibility of sign language teaching.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a sign language scene generation method, comprising the following steps:
obtaining input information;
parsing at least one lexical unit contained in the input information, and establishing a correspondence between the lexical units and sign language vocabulary;
extracting the scene-related lexical units from the parsed lexical units, and building a corresponding scene model;
based on the established correspondence between lexical units and sign language vocabulary, generating a sign language motion sequence corresponding to the input information, and generating a scene animation corresponding to the input information in the built scene model.
Optionally, after parsing the at least one lexical unit contained in the input information, the method further includes:
establishing a correspondence between the lexical units and lip-reading vocabulary;
and, after extracting the scene-related lexical units from the parsed lexical units and building the corresponding scene model, the method further includes:
based on the established correspondence between lexical units and sign language vocabulary, generating a lip-reading motion sequence corresponding to the sign language motion sequence.
Optionally, extracting the scene-related lexical units from the parsed lexical units and building the corresponding scene model comprises:
expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, and converting it into an action-driven scene event stream;
calling preset models according to the scene event stream, and generating, through the called preset models, a model corresponding to the input information as the scene animation of the input information.
Optionally, after expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, the method further includes:
extracting the emotional semantics of the shallow semantic frame, and calling, according to a preset correspondence, background music matching the extracted emotional semantics.
Optionally, in the case where the input information contains at least two lexical units, generating the sign language motion sequence corresponding to the input information based on the established correspondence between lexical units and sign language vocabulary comprises:
obtaining, from the ordering of the lexical units contained in the input information, the ordering of the sign language vocabulary corresponding to those lexical units;
computing, according to the ordering of the sign language vocabulary, the connecting points between the sign language motion sequences of adjacent sign language words;
generating the sign language motion sequence corresponding to the input information according to the ordering of the sign language vocabulary, wherein the sign language motion sequences of adjacent sign language words are joined at the computed connecting points.
Optionally, computing the connecting points between the sign language motion sequences of adjacent sign language words according to the ordering of the sign language vocabulary comprises:
obtaining the motion data of the adjacent sign language words, and computing the linking interpolation of their motion data;
and generating the sign language animation corresponding to the input information according to the ordering of the sign language vocabulary comprises:
integrating the motion data of each sign language word according to the computed linking interpolation, and mapping the integrated motion data onto a demonstration model, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
Optionally, parsing the at least one lexical unit contained in the input information comprises:
segmenting the obtained input information according to syntactic structure to obtain at least one basic word;
taking each segmented basic word that has a corresponding sign language vocabulary entry as a lexical unit;
and, according to a preset approximate-fallback relationship, replacing each segmented basic word that has no corresponding sign language vocabulary entry with an approximate word that does, and taking the replacing approximate word as a lexical unit.
Optionally, after parsing the at least one lexical unit contained in the input information, the method further includes:
if the lexical units of the input information contain a directive verb, determining the direction of the directive verb according to its subject and object, and associating the corresponding lexical units according to the direction of the directive verb;
if the lexical units of the input information contain a pronoun, determining the referent of the pronoun, and replacing the pronoun with the lexical unit corresponding to that referent.
Optionally, generating the sign language animation corresponding to the input information based on the established correspondence between lexical units and sign language vocabulary comprises:
based on the established correspondence between lexical units and sign language vocabulary, obtaining the motion data of each sign language word, and mapping the obtained motion data onto a demonstration model according to that correspondence, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
In a second aspect, an embodiment of the present invention provides a sign language scene generation device, comprising:
an input module for obtaining input information;
a natural language processing module for parsing at least one lexical unit contained in the input information and establishing the correspondence between the lexical units and sign language vocabulary;
an animation generation module for extracting the scene-related lexical units from the parsed lexical units and building a corresponding scene model;
a demonstration module for generating, based on the established correspondence between lexical units and sign language vocabulary, the sign language animation corresponding to the input information.
By parsing the lexical units in the input information and generating the corresponding scene model and sign language animation, embodiments of the present invention can establish the correspondence between a sign language animation and the real situation it describes; applied to teaching scenarios, this improves the flexibility of sign language teaching.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a sign language scene generation method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another sign language scene generation method provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a sign language scene generation device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flowchart of a sign language scene generation method provided by an embodiment of the present invention, the method comprises the following steps:
Step 101: obtain input information.
In the technical solution of this embodiment, input information is obtained first. The input information can be provided in a variety of forms to suit different usage scenarios.
The technical solution can be applied in the classroom. In general, teachers at regular schools do not know sign language well, and hearing-impaired students cannot hear the teaching content. In this case, the content of textbooks or handouts can be taken as the input information and converted into sign language so that hearing-impaired students can recognize it.
The solution can also be applied to daily communication. For example, when a hearing-impaired user communicates with an ordinary user, the ordinary user's speech can be taken as the input signal and converted into the corresponding sign language animation, enabling the two to communicate.
Optionally, in a specific embodiment, step 101 includes:
obtaining a speech signal input by the user, and converting the speech signal into text information; or
recognizing the user's handwriting input and generating the corresponding text information; or
obtaining text information imported by the user.
In this specific embodiment, the input information can be a speech signal, such as the user's speaking voice, collected through a recording device such as a microphone.
The speech signal can also be the input signal of an electronic device, for example the audio played by a mobile phone, tablet computer, or television. In specific use, when audio is played by an electronic device, the device's audio signal can be intercepted directly as the input speech signal; concretely, this can be done by capturing the audio signal of the electronic device while it is in a recording state.
The input speech signal can be converted, by existing or future speech recognition technology, into text information recognizable by the electronic device implementing the sign language scene generation method; this is not further limited here.
The input information can also be text information, which can be produced by another user's handwriting input or imported directly from a text document.
The format of the text information can follow existing text document formats, such as doc, docx, txt, or pdf; obviously, it can also be any other format recognizable by the electronic device implementing the sign language scene generation method, which is not further limited here.
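As a rough illustration of the acquisition paths above (speech, handwriting, imported document), the following Python sketch normalizes each modality to plain text for the downstream parsing steps. The recognizer functions are hypothetical stubs; the embodiment does not name a specific speech or handwriting engine.

```python
from pathlib import Path

def speech_to_text(audio_source) -> str:
    # stub: plug in any speech recognition engine here
    raise NotImplementedError

def recognize_handwriting(strokes) -> str:
    # stub: plug in any handwriting recognizer here
    raise NotImplementedError

def acquire_input(source, kind: str) -> str:
    """Normalize speech, handwriting, or an imported document to text."""
    if kind == "speech":        # microphone input or intercepted device audio
        return speech_to_text(source)
    if kind == "handwriting":   # the user's handwritten strokes
        return recognize_handwriting(source)
    if kind == "document":      # imported .txt; doc/docx/pdf need a parser
        return Path(source).read_text(encoding="utf-8")
    raise ValueError(f"unsupported input kind: {kind}")
```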
Obtaining input information in different ways satisfies more usage scenarios.
For example, taking a speech signal as the input information enables daily communication between hearing-impaired people and the general population, lets hearing-impaired people watch video material, and even allows hearing-impaired children to study directly at a regular school.
Text information generated by handwriting input or imported as a document, taken as the input information, can serve sign language teaching, which is conducive to the promotion and popularization of special education.
Step 102: parse at least one lexical unit contained in the input information, and establish the correspondence between the lexical units and sign language vocabulary.
After the input information is obtained, at least one lexical unit contained in it is obtained by parsing the input information.
Lexical units can be determined by parsing according to syntactic structure.
For example, " thanks " this input information includes the lexical unit that " thanks " this verb is constituted, and " thanks You " it then include verb " thanks " and object " you " the two lexical units.So including at least in one section of complete input information One lexical unit.
Any correctly expressed sentence can be divided, according to grammar rules, into syntactic constituents such as subject, predicate, object, and attributive; each of these constituents can serve as a lexical unit.
Lexical units can also be determined by parsing according to morphemes.
Such as " milk " in " my drink milk " this word from syntactic structure for be object, but sign language express In, " milk " this vocabulary is composed by the corresponding gesture of " ox " and " milk " the two morphemes.So the word Money order member can also be divided according to morpheme.
Further, after the lexical units contained in the input information have been parsed, the correspondence between the lexical units and sign language vocabulary is established.
It should be understood that, to express the input information in the form of sign language, a motion database containing the various sign language motion sequences needs to be provided; the corresponding sign language animation is then demonstrated by calling the sign language motion sequences in that database. By establishing the correspondence between lexical units and sign language vocabulary, the sign language motion sequence of each sign language word can be called through that correspondence, thus obtaining the sign language animation corresponding to each lexical unit contained in the input information.
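Organizationally, the correspondence and the motion database described above might be sketched as follows; the frame data and field names are placeholders, not the patent's actual storage format.

```python
# sign word -> motion sequence (each frame: joint -> (x, y, z)); placeholder data
MOTION_DB = {
    "thanks": [{"right_hand": (0.1, 1.2, 0.3)}, {"right_hand": (0.2, 1.1, 0.3)}],
    "you":    [{"right_hand": (0.3, 1.0, 0.4)}],
}

def build_correspondence(units, sign_lexicon):
    """Lexical unit -> sign word, for every unit the lexicon covers."""
    return {u: sign_lexicon[u] for u in units if u in sign_lexicon}

def motion_for(unit, correspondence):
    """Call up the sign motion sequence for one lexical unit."""
    return MOTION_DB[correspondence[unit]]

corr = build_correspondence(["thanks", "you"], {"thanks": "thanks", "you": "you"})
frames = motion_for("thanks", corr)
```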
In this embodiment, when the parsed input information is text information, step 102 includes:
parsing at least one lexical unit contained in the text information.
That is, step 102 parses the text information of the optional specific embodiment of step 101 above.
Step 103: extract the scene-related lexical units from the parsed lexical units, and build the corresponding scene model.
The scene model is displayed in the form of an animation or a static image.
Further, the scene-related lexical units are extracted from the parsed lexical units, and the corresponding scene model is built from the extracted units; in this way, abstract input information is converted into a concrete scene model.
As an optional specific embodiment, step 103 includes:
expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, and converting it into an action-driven scene event stream;
calling preset models according to the scene event stream, and generating, through the called preset models, a model corresponding to the input information as the scene animation of the input information.
After step 102 above, the method may further include the step of:
establishing the correspondence between the lexical units and lip-reading vocabulary.
Further, after extracting the scene-related lexical units from the parsed lexical units and building the corresponding scene model, the method further includes:
based on the established correspondence between lexical units and sign language vocabulary, generating a lip-reading motion sequence corresponding to the sign language motion sequence.
Lip reading refers to the movements of the lips while speaking. When talking with hearing people, hearing-impaired people can also read the hearing person's lips to help understand the content of the conversation.
The handling of lip-reading vocabulary in this embodiment can follow the setup of sign language vocabulary; the main differences are the form of expression and the demonstration model being mapped to, which are not repeated here.
Step 104: based on the established correspondence between lexical units and sign language vocabulary, generate the sign language motion sequence corresponding to the input information, and generate the scene animation corresponding to the input information in the built scene model.
According to the correspondence between lexical units and sign language vocabulary, combining the motion sequences of the sign language words in the order of the lexical units yields the sign language animation corresponding to the input information. Providing this animation lets the user directly understand the sign language gestures corresponding to the input information.
The scene model, in turn, makes the scene corresponding to the input information concrete. For example, if the input information is "Xiao Ming goes to the supermarket to buy washing powder", a scene and image of a child buying washing powder in a supermarket can be generated. Some hearing-impaired children may not understand what a "supermarket" is because of their age and hearing limits, and a purely verbal explanation may not be especially precise; providing the scene and imagery inside a supermarket helps establish the correspondence between the abstract description and a concrete model image, improving understanding.
In this way, the input information is parsed into lexical units; the sign language motion sequence corresponding to the input information is then generated from the correspondence between lexical units and sign language vocabulary, and the corresponding sign language animation is demonstrated through a demonstration model. The input information can thus be displayed in the form of a sign language animation together with a scene model, enabling users to understand the sign language expression of the input information through the generated animation and to understand what it describes through the concrete scene model.
As an optional specific embodiment, in the case where the input information contains at least two lexical units, step 104 includes:
obtaining, from the ordering of the lexical units contained in the input information, the ordering of the sign language vocabulary corresponding to those lexical units;
computing, according to the ordering of the sign language vocabulary, the connecting points between the sign language motion sequences of adjacent sign language words;
generating the sign language motion sequence corresponding to the input information according to the ordering of the sign language vocabulary, wherein the sign language motion sequences of adjacent sign language words are joined at the computed connecting points.
When the input information contains a single lexical unit, the corresponding sign language motion sequence can be called directly.
Since a real user's hands move continuously, adjacent sign language motion sequences need a certain amount of linking.
Therefore, in this embodiment, when the input information contains at least two lexical units, the ordering of the sign language vocabulary is determined from the ordering of the lexical units.
It should be understood that the ordering of the lexical units is in fact the ordering of their corresponding sign language vocabulary.
After the ordering of the sign language vocabulary has been determined, the connecting point between each pair of adjacent sign language motion sequences is computed. The connecting point refers to the most convenient way of bridging two sign language motion sequences when they are expressed consecutively.
Such as " eating potato " includes " eating " and " potato " two lexical units.The sign language movement of " eating " is to lift hand, expression " eating ", then puts down hand.The sign language movement of " potato " is to lift hand, expresses " potato ", then puts down hand.If by the two hands Language movement is continuously got up, and is: being lifted hand, expression " eating ", is put down hand, lift hand, expression " potato ", put down hand.Obviously, this pole For inconvenience, being located at expression " eating " and expressing putting down hand and lifting hand between " potato " is extra movement.
Therefore, this embodiment further computes the connecting point between adjacent sign language motion sequences, for example expressing "eat potato" as: raise hand, express "eat", express "potato", lower hand.
Further, a smoothness calculation can also be performed between "eat" and "potato" to make the two motions more coherent.
For example, the sign motion of "eat" is to extend the index and middle fingers of one hand and flick them outward at the mouth, simulating the shape of eating with chopsticks; the "soil" (土) of "potato" (土豆) is expressed by twisting together the thumb, index, and middle fingers of one hand; and "bean" (豆) is expressed by forming a round shape, like a potato, with the thumbs and index fingers of both hands.
Then, by computing the linking positions of these three motions, after the sign for "eat" has been expressed, the same hand continues directly into expressing "soil", and while "soil" is being expressed, the other hand moves at the same time so as to cooperate with the first hand in expressing "bean".
In this way, the sign language motion sequence finally obtained for the input information is more coherent.
Further, in a specific embodiment, the connecting point of the sign language motion sequences is computed by interpolating the motion data.
Computing the connecting points between the sign language motion sequences of adjacent sign language words according to the ordering of the sign language vocabulary comprises:
obtaining the motion data of the adjacent sign language words, and computing the linking interpolation of their motion data.
The motion data of a sign language word is the motion data of the demonstration model when the animation of that word is demonstrated through the model; it is in fact the information obtained by digitizing the model's movements, and can concretely consist of motion parameters such as the movement coordinates and speeds of limb structures such as the fingers and elbows. By performing interpolation on the motion data of two adjacent sign language words, the linking interpolation of their motion data is obtained, serving as the best connecting point of the sign language motion sequences of the two adjacent sign language words.
The interpolation of motion data is illustrated below.
For two motion sequences A and B to be joined, suppose A has m frames. Extract its last k frames, A(m-k+1), ..., A(m), and extract the first k frames of B, B(1), ..., B(k). Interpolate the k overlapping frames of motion data to obtain the k new frames of motion data over which the two sequences overlap.
The spatial coordinate T(i) of each joint in the i-th frame of new motion data is:

T(i) = α(i)·P_A(m-k+i) + (1-α(i))·P_B(i),  i ∈ {1, 2, ..., k}

where α(i) is a preset coefficient function whose values lie between 0 and 1 inclusive; in general, α(i) decreases gradually as i increases. P_A(x) and P_B(x) are the motion functions of sequence A and sequence B, respectively.
For example, for the first synthesized frame, i.e. when i equals 1, the frame is synthesized from the first of the last k frames of sequence A and the first frame of sequence B, so its coordinates are:

T(1) = α(1)·P_A(m-k+1) + (1-α(1))·P_B(1).
The synthesized motion is a superposition of sequences A and B. Since α(i) decreases as i increases, the earlier a synthesized frame is (the smaller i), the larger α(i) is, so the coordinates of sequence A carry more weight and the frame leans toward sequence A; the later a frame is (the larger i), the more weight the coordinates of sequence B carry, so the frame leans toward sequence B. In this way, the synthesized motion obtained by this interpolation realizes a smooth transition from sequence A to sequence B.
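The interpolation above transcribes directly into code. The sketch below assumes the motion data of a sequence is an array of per-frame joint coordinates and picks a linearly decreasing α(i), which is just one admissible choice of the preset coefficient function.

```python
import numpy as np

def blend_transition(seq_a: np.ndarray, seq_b: np.ndarray, k: int) -> np.ndarray:
    """Join two motion sequences, blending the last k frames of seq_a with
    the first k frames of seq_b.  Shapes: (frames, joints, 3)."""
    m = seq_a.shape[0]
    i = np.arange(1, k + 1)
    alpha = (k - i + 1) / k                 # within [1/k, 1], decreasing in i
    w = alpha[:, None, None]
    # T(i) = alpha(i) * P_A(m - k + i) + (1 - alpha(i)) * P_B(i)
    overlap = w * seq_a[m - k:] + (1.0 - w) * seq_b[:k]
    return np.concatenate([seq_a[:m - k], overlap, seq_b[k:]])

a = np.random.rand(30, 20, 3)          # e.g. the sign for "eat": 30 frames, 20 joints
b = np.random.rand(25, 20, 3)          # the sign for "potato"
joined = blend_transition(a, b, k=5)   # 30 + 25 - 5 = 50 frames
```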
Further, generating the sign language animation corresponding to the input information according to the ordering of the sign language vocabulary comprises:
integrating the motion data of each sign language word according to the computed linking interpolation, and mapping the integrated motion data onto a demonstration model, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
By mapping the integrated motion data onto the demonstration model, the model can demonstrate the integrated sign language animation, improving the fluency of the transitions between the individual sign animations.
In this embodiment, optionally, generating the sign language animation corresponding to the input information comprises:
based on the established correspondence between lexical units and sign language vocabulary, obtaining the motion data of each sign language word, and mapping the obtained motion data onto a demonstration model according to that correspondence, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
The demonstration model in this embodiment is usually a humanoid model or a hand model; by mapping the sign language motion sequences onto the model, the sign language animation corresponding to the input information can be demonstrated through the model, so the corresponding sign motions can be understood more intuitively. Meanwhile, because the demonstrated sign language animation and the scene model are generated from the input information, different input information can be expressed without producing material in advance during teaching, and more information can be expressed, which improves the flexibility of sign language teaching.
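Putting the pieces together, the overall generation step might be sketched as below: look up each unit's motion sequence in lexical order, chain adjacent sequences at their computed connecting points (for instance with the blend_transition sketch above), and hand the result to the demonstration model. All names are illustrative.

```python
def assemble_animation(units, correspondence, motion_db, blend, k=5):
    """units: lexical units in order; correspondence: unit -> sign word;
    motion_db: sign word -> motion array; blend: e.g. blend_transition."""
    sequences = [motion_db[correspondence[u]] for u in units]
    animation = sequences[0]
    for nxt in sequences[1:]:
        animation = blend(animation, nxt, k)   # join at the blended overlap
    return animation                           # mapped onto the demonstration model
```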
Referring to Fig. 2, which is a flowchart of another sign language scene generation method provided by an embodiment of the present invention: the main difference from the embodiment shown in Fig. 1 is that this embodiment also processes the words in the input information, replacing words that have no corresponding sign language vocabulary entry with words that do.
As shown in Fig. 2, the method includes the following steps:
Step 201: obtain input information.
Step 202: segment the obtained input information according to syntactic structure to obtain at least one basic word.
In this embodiment, the input information is segmented according to syntactic structure to obtain at least one basic word.
Step 203: take each segmented basic word that has a corresponding sign language vocabulary entry as a lexical unit.
If a corresponding sign language vocabulary entry exists in the established database, the basic word can be used directly as a lexical unit.
For example, common words such as "you", "I", "he", "good", and "bad" are used very frequently in daily life, so in general the database stores the sign language vocabulary corresponding to these expressions.
Step 204: according to the preset approximate-fallback relationship, replace each segmented basic word that has no corresponding sign language vocabulary entry with an approximate word that does, and take the replacing approximate word as a lexical unit.
Obviously, some expressions may have no corresponding sign language vocabulary. For example, one word is the formal, scientific name of the potato, but in different dialects and speech habits the potato goes by several other names as well. All of these words actually refer to the potato, but under existing sign language usage "potato" (土豆) is composed of the two morphemes "soil" (土) and "bean" (豆), and its gesture is likewise composed of the gestures of these two morphemes, so after conversion to a sign animation the other variants might not be understood by other users. Therefore, this embodiment sets a unified expression convention in the sign language vocabulary database: the general expression for the potato is set to the most common word, and whenever a basic word contained in the input information is one of the other variants, that word is replaced with the general word, which is then taken as the corresponding lexical unit.
In addition, some expressions are essentially identical in common usage, such as the various words for "help". These words convey the same meaning, so they can be expressed with the same sign language animation.
In this way, if a basic word exists in the sign language vocabulary database, it is used directly as a lexical unit.
If a basic word does not exist in the sign language vocabulary database, it is replaced, according to the preset correspondence, with an approximate word that does exist in the database.
This helps unify the sign language used by people with different dialects or communication habits, making communication easier.
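A compact sketch of steps 203 and 204, assuming some word_similarity function is available; the embodiment computes lexical similarity but does not fix a particular measure.

```python
def to_lexical_units(basic_words, sign_lexicon, word_similarity):
    """Keep words covered by the sign lexicon; replace the rest with their
    most similar covered word, per the preset approximate-fallback relation."""
    units = []
    for word in basic_words:
        if word in sign_lexicon:
            units.append(word)
        else:
            units.append(max(sign_lexicon, key=lambda s: word_similarity(word, s)))
    return units

# e.g. a dialect name for the potato would be replaced by the stored sign
# word for it, so all variants map to one unified sign expression
```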
Step 205: extract the scene-related lexical units from the parsed lexical units, and build the corresponding scene model.
Step 206: establish the correspondence between the lexical units and sign language vocabulary.
After the lexical units have been determined, the correspondence between the lexical units and sign language vocabulary is established.
Optionally, this embodiment can also make certain adaptive adjustments according to the properties of the lexical units.
Specifically, the method further includes:
if the lexical units of the input information contain a directive verb, determining the direction of the directive verb according to its subject and object, and associating the corresponding lexical units according to the direction of the directive verb;
and, after parsing the at least one lexical unit contained in the input information:
if the lexical units of the input information contain a pronoun, determining the referent of the pronoun, and replacing the pronoun with the lexical unit corresponding to that referent.
That is, this embodiment can also adjust the directive verbs and pronouns among the lexical units.
For example, " help " this expression be applied to " me is needed to help you " and " you to help me can with " the two When in language environment, gesture is identical, but pointing direction is different, so, in the present embodiment further directed to Directive property verb in lexical unit according to its subject and is located at determining direction, and is associated with to corresponding lexical unit, so as to Enough when calling sign language motion sequence, it is directed toward and makes further adjustment.
As for pronouns, they may refer to different objects during expression, which also affects the gesture. When expressing "pick it up", a real person signing can point directly at the object to indicate "it", for example pointing at a mobile phone or teacup on the desk to express picking it up. When the method is implemented in an electronic device, however, the device can hardly indicate a pointing direction through an on-screen demonstration, so the real meaning would go beyond what is expressed. Therefore, this embodiment can also replace a pronoun with the lexical unit of its referent; for example, "pick it up" is replaced with "pick up the mobile phone". This makes the expression more accurate.
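Both adjustments might look like the following sketch, under an assumed parse representation in which each lexical unit carries a grammatical role tag; the direction of a directive verb is recorded from its subject toward its object, and pronouns are swapped for their resolved referents.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    text: str
    role: str = ""            # "subject", "verb", "object", ...
    directive: bool = False   # e.g. "help" points in sign language
    direction: tuple = ()     # (from, to), filled in for directive verbs

def adjust_units(units, referents):
    subject = next((u.text for u in units if u.role == "subject"), "")
    obj = next((u.text for u in units if u.role == "object"), "")
    for u in units:
        if u.directive:
            u.direction = (subject, obj)   # gesture points subject -> object
        if u.text in referents:
            u.text = referents[u.text]     # "pick it up" -> "pick up the mobile phone"
    return units

adjusted = adjust_units(
    [Unit("I", "subject"), Unit("help", "verb", directive=True), Unit("you", "object")],
    referents={"it": "mobile phone"},
)
```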
Step 207: based on the established correspondence between lexical units and sign language vocabulary, generate the sign language animation corresponding to the input information, and generate the scene animation corresponding to the input information in the built scene model.
Finally, the sign language animation and the scene model corresponding to the input information are generated according to the established correspondence between lexical units and sign language vocabulary.
In this way, by segmenting the input information and replacing the basic words absent from the sign language vocabulary database with approximate words that are present in it, expression can be unified, making it easier for users of different dialects and communication habits to communicate with each other.
Step 103 and step 205 above can specifically include:
expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, and converting it into an action-driven scene event stream;
calling preset models according to the scene event stream, and generating, through the called preset models, a model corresponding to the input information as the scene animation of the input information.
A shallow semantic frame mainly involves the constituents of a predicate frame and the definition and classification of semantic roles; the details follow the prior art and are not further limited here.
After the input has been recognized as a shallow semantic frame, the input information is further converted into an action-driven scene event stream, and preset models are called through the scene event stream to complete the modeling process and generate the scene model of the input information.
A specific example is explained and illustrated in detail below.
The scene-behavior description parser module 1-12 receives Json message packages from two processes, the scene-behavior synchronous description module 1-11 and the sign gesture motion planner module 1-17, and places them in the buffer queue of the character action generation engine (CAGE) module 1-13, where message packages are queued and processed first-in first-out. The parser interprets each behavior description language message package and decomposes it into scene tags, character tags, role tags, speech text, human motion descriptions, and descriptions of the accompanying non-sign behavior (facial expression, lip movement).

The character action generation engine loads the scene, character, and role model identifiers, places them according to the spatial layout planned by module 1-10, and binds the designated speaking character to its skeletal joint controllers. Following the gesture motion plan, the sign vocabulary gestures are smoothly joined by an insertion interpolation function; the sentence text spoken by the character enters a text-to-speech synthesizer for speech synthesis, and the voiced text is mapped to lip movements, so that while the voice sounds the lips perform movements consistent with Chinese pronunciation.

During animation generation, the role model matches the corresponding joint controllers according to the motion descriptions in the message package, and the motion data located in the media big data is fed to the controllers to generate the role's skeletal motion. When a role speaks, its sentence text enters the text-to-speech synthesizer for speech synthesis, the voiced text is mapped to lip movements, and the lips move consistently with Chinese pronunciation while the voice sounds. At the same time, for the non-linguistic behavior accompanying the speech in the message package (facial expression, gaze direction, eyebrow movement, head movement), the corresponding controllers are matched, the motion data located in the media big data is fed to them, and the facial expression, gaze direction, eyebrow movement, and head movement are computed and generated.

The character action generation engine module 1-13 refreshes character gestures, role behavior, and scene state frame by frame; in two predetermined regions of the screen space, it builds, in a time-shared manner, the virtual character's sign language and lip movements while speaking and the imagined scene with role skeletal animation, and it refreshes the scene and the role's skeletal joint states in a time-shared manner. The imagined scene and role skeletal animation, sign gestures, background music and sound effects, and virtual camera trajectory computed by the engine are mapped in real time to the Unity3D or Unreal4 digital engine module 1-14 for real-time rendering and display. This realizes the synchronized generation of sign gestures and lip reading with image and text, and the result can be automatically packaged into a video for streaming playback.
In one implementation process, the input information entered by the user is "Xiao Ming buys washing powder in the supermarket". The input information can be entered in the form of speech, or in forms such as text import or handwriting input.
After the input information is obtained, it is processed to parse the at least one lexical unit it contains. The specific processing includes, but is not limited to, word segmentation, part-of-speech tagging, lexical similarity calculation, verb type analysis, coreference resolution, and syntactic analysis; the usual sentence analysis yields the grammatical constituents of the sentence.
In this embodiment, the subject of the input information is "Xiao Ming", the predicate is "buy", the object is "washing powder", and the adverbial of place is "in the supermarket". Shallow semantic parsing yields the shallow semantic descriptions of the sentence: agent "Xiao Ming", predicate "buy", object "washing powder"; predicate "spatial position", object "supermarket"; predicate "orientation", object "".
For ease of description and concreteness, the shallow semantic analysis result of the text can be embodied in the form of a storyboard semantic script. Specifically, the storyboard semantic script is: scene: supermarket; role: Xiao Ming; event: buy; event agent: Xiao Ming; event object: washing powder.
Further, the shallow semantic frame is converted into the following action-driven scene event stream: scene: supermarket; role: Xiao Ming; event: buy; event agent: Xiao Ming; event object: washing powder.
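For concreteness, the action-driven scene event stream of this example might be encoded as a plain structure like the one below; the field names are assumptions (the embodiment mentions Json message packages elsewhere but publishes no schema), and the model lookup mirrors the similarity-based location described next.

```python
scene_event_stream = {
    "scene": "supermarket",
    "roles": ["Xiao Ming"],
    "events": [
        {"predicate": "buy", "agent": "Xiao Ming", "object": "washing powder"},
        {"predicate": "spatial_position", "object": "supermarket"},
        {"predicate": "orientation", "object": ""},  # object left unspecified in the source example
    ],
}

def locate_models(stream, model_db, similarity):
    """Map each entity to the preset model with the highest semantic similarity,
    e.g. 'supermarket' -> an environment model, 'Xiao Ming' -> a boy model."""
    entities = [stream["scene"], *stream["roles"]]
    return {e: max(model_db, key=lambda m: similarity(e, m)) for e in entities}
```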
Further, each entity in the scene event stream is located to the preset model with the highest semantic similarity: the environment model "supermarket" and the character model "boy". The models can be stored in a preset database, which can specifically contain the various models involved in the content of different teaching materials and the models corresponding to various common words.
Next, the role is parsed and separated from the environment model: the model of Xiao Ming is loaded into the environment model "supermarket", and the role action "buy" is then decomposed into "walking" from Xiao Ming's current position to the target position and "reaching out" toward the object, with the behavior sequence encoded in the behavior description language.
The event stream is parsed and separated according to the role Xiao Ming and the scene entity supermarket, extracting their spatial relationship, namely that Xiao Ming is in the supermarket. Combined with the tacit knowledge built from spatial geometry (Xiao Ming stands in front of a shelf in the supermarket) and the basic geometric constraints (Xiao Ming is separate from the shelf, and both stand on the floor), the spatial layout and placement of the role and the scene are computed, and the layout and placement structure is encoded in the scene description language.
Meanwhile processing is synchronized to the behavioral description language of role and scene description language, complete scene and personage's row For the synchronization of description.
In the sign language animation generation process, morphological and syntactic analysis under sign language grammar is first performed on the input information "Xiao Ming buys washing powder in the supermarket".
Morphological processing performs semantic query matching over the text word sequence based on sign vocabulary gesture big data analysis. Since there are fewer than 10,000 sign language words while common Chinese words number about 30,000, common Chinese words need to be mapped onto sign language words by similarity calculation. A similarity calculation is therefore performed between each word of the input text and the sign language vocabulary: if the current word exists directly in the sign language vocabulary table, no replacement is needed; if it does not exist in the sign language vocabulary data, the word in the sign language vocabulary data with the highest semantic similarity to the current word is found, and that sign language word replaces the current word. For example, one word for "buy" has sign language vocabulary data while a synonym of it does not; lexical similarity calculation shows the two words are semantically highly similar, so the synonym is added to the synonym set of the sign vocabulary entry for "buy".
After lexical similarity has been calculated, a directivity analysis is also needed, because expressions such as "help" are directional in sign language: the gesture direction of "help" differs between "I help you" and "you help me". The verbs in the whole sentence therefore need to be analyzed to determine which person the subject and object associated with each verb are, thereby deriving the verb's direction in the current sentence. Coreference resolution then helps the machine analyze pronouns and obtain the noun or appellation a pronoun refers to.
Word order reconstruction adjusts word orders that do not satisfy sign language grammar. Morphological analysis yields the word order [Xiao Ming - at - supermarket - buy - washing powder]; syntactic analysis under sign language grammar shows that this order already satisfies sign language grammar, so no word order reconstruction is needed.
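The word-order check can be sketched as a stable sort of role-tagged words into a target sign order; the order [subject, location, verb, object] is inferred from this example and is an assumption rather than a rule the patent states.

```python
SIGN_ORDER = {"subject": 0, "location": 1, "verb": 2, "object": 3}  # assumed order

def reorder_for_sign_grammar(tagged):
    """tagged: list of (word, role) pairs; returns words in sign word order."""
    return [w for w, r in sorted(tagged, key=lambda p: SIGN_ORDER.get(p[1], 99))]

words = reorder_for_sign_grammar([
    ("Xiao Ming", "subject"), ("supermarket", "location"),
    ("buy", "verb"), ("washing powder", "object"),
])
# already in sign order, so no reconstruction is needed for this sentence
```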
After the sign language linguistic processing is complete, the curated words are next mapped to, and synthesized into, sign motions. Taking [Xiao Ming - at - supermarket - buy - washing powder] as the gesture sequence, the start and end times of each gesture and the smooth transitions between gestures are planned, the gesture vocabulary symbols are bound to gesture motion data, and the continuous sign gestures are described in the behavior description language.
Further, after expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, the method further includes:
extracting the emotional semantics of the shallow semantic frame, and calling, according to a preset correspondence, background music matching the extracted emotional semantics.
It should be understood that hearing impairment does not necessarily mean hearing no sound at all, so expressing mood with matching music helps improve the understanding of emotional expression during teaching.
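The preset correspondence between emotional semantics and background music could be as small as a lookup table; the emotion labels and file names below are purely illustrative.

```python
EMOTION_MUSIC = {
    "happy": "bgm/cheerful.ogg",
    "sad": "bgm/gentle.ogg",
    "neutral": "bgm/calm.ogg",
}

def pick_background_music(emotion: str) -> str:
    """Call the track that matches the extracted emotional semantics."""
    return EMOTION_MUSIC.get(emotion, EMOTION_MUSIC["neutral"])
```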
It should be understood that, in the embodiments shown in Fig. 1 and Fig. 2, the optional steps are not mandatory; they can be applied individually or in combination in any embodiment shown in Fig. 1 or Fig. 2 and achieve the corresponding technical effects, which are not repeated here.
Referring to Fig. 3, which is a structural diagram of a sign language scene generation device 300 provided by an embodiment of the present invention; as shown in Fig. 3, the sign language scene generation device 300 includes:
an input module 301 for obtaining input information;
a natural language processing module 302 for parsing at least one lexical unit contained in the input information and establishing the correspondence between the lexical units and sign language vocabulary;
an animation generation module 303 for extracting the scene-related lexical units from the parsed lexical units and building a corresponding scene model;
a demonstration module 304 for generating, based on the established correspondence between lexical units and sign language vocabulary, the sign language animation corresponding to the input information, and generating the scene animation corresponding to the input information in the built scene model.
Optionally, the device further includes:
a lip-reading processing module for establishing the correspondence between the lexical units and lip-reading vocabulary;
a lip-reading generation module for generating, based on the established correspondence between lexical units and sign language vocabulary, the lip-reading motion sequence corresponding to the sign language motion sequence.
Optionally, the animation generation module 303 includes:
a conversion submodule for expressing the input information as a shallow semantic frame according to the extracted scene-related lexical units, and converting it into an action-driven scene event stream;
a scene model generation submodule for calling preset models according to the scene event stream, and generating, through the called preset models, an image corresponding to the input information as the scene model of the input information.
Optionally, the animation generation module 303 further includes:
a music submodule for extracting the emotional semantics of the shallow semantic frame, and calling, according to a preset correspondence, background music matching the extracted emotional semantics.
Optionally, in the case where the input information contains at least two lexical units, the demonstration module 304 includes:
an obtaining submodule for obtaining, from the ordering of the lexical units contained in the input information, the ordering of the sign language vocabulary corresponding to those lexical units;
a computing submodule for computing, according to the ordering of the sign language vocabulary, the connecting points between the sign language motion sequences of adjacent sign language words;
a generation submodule for generating the sign language motion sequence corresponding to the input information according to the ordering of the sign language vocabulary, wherein the sign language motion sequences of adjacent sign language words are joined at the computed connecting points.
Optionally, the computing submodule includes:
a linking interpolation calculation unit for obtaining the motion data of adjacent sign language words and computing the linking interpolation of their motion data;
and the demonstration module is specifically configured to integrate the motion data of each sign language word according to the computed linking interpolation and to map the integrated motion data onto a demonstration model, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
Optionally, the natural language processing module 302 includes:
a segmentation submodule for segmenting the obtained input information according to syntactic structure to obtain at least one basic word;
a lexical unit determination submodule for taking each segmented basic word that has a corresponding sign language vocabulary entry as a lexical unit;
the lexical unit determination submodule being further configured to replace, according to the preset approximate-fallback relationship, each segmented basic word that has no corresponding sign language vocabulary entry with an approximate word that does, and to take the replacing approximate word as a lexical unit.
Optionally, the device further includes:
a verb processing module for determining, if the lexical units of the input information contain a directive verb, the direction of the directive verb according to its subject and object, and associating the corresponding lexical units according to the direction of the directive verb;
and for determining, if the lexical units of the input information contain a pronoun, the referent of the pronoun, and replacing the pronoun with the lexical unit corresponding to that referent.
Optionally, the demonstration module 304 is specifically configured to obtain, based on the established correspondence between lexical units and sign language vocabulary, the motion data of each sign language word, and to map the obtained motion data onto a demonstration model according to that correspondence, so as to demonstrate the sign language animation corresponding to the input information through the demonstration model.
The sign language scene generating apparatus provided in this embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 2; to avoid repetition, details are not described herein again.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the above sign language scene generating method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above sign language scene generating method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
A person of ordinary skill in the art may be aware that the units and algorithm steps described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or by software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.

It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.

In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary. The division into units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.

The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention essentially, or the part thereof contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by a person familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A sign language scene generating method, characterized by comprising the following steps:
obtaining input information;
parsing at least one lexical unit included in the input information, and establishing a correspondence between the lexical unit and sign language vocabulary;
extracting a lexical unit related to a scene from the parsed lexical units, and establishing a corresponding scene model;
based on the established correspondence between the lexical unit and the sign language vocabulary, generating a sign language animation corresponding to the input information, and generating, in the established scene model, a scene animation corresponding to the input information.
2. The method according to claim 1, characterized in that after the parsing of the at least one lexical unit included in the input information, the method further comprises:
establishing a correspondence between the lexical unit and lip reading vocabulary;
and after the extracting of the lexical unit related to a scene from the parsed lexical units and the establishing of the corresponding scene model, the method further comprises:
generating, based on the established correspondence between the lexical unit and the lip reading vocabulary, a lip reading animation corresponding to the sign language animation.
3. The method according to claim 1, characterized in that the extracting of the lexical unit related to a scene from the parsed lexical units and the establishing of the corresponding scene model comprise:
expressing the input information as a shallow semantic frame according to the extracted lexical unit related to the scene, and converting the shallow semantic frame into an action-driven context event stream;
calling a preset model according to the context event stream, and generating, through the called preset model, a scene model corresponding to the input information as the scene animation of the input information.
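For illustration of this pipeline under assumed data shapes, the sketch below flattens a shallow semantic frame (a dict) into an ordered stream of action events and dispatches each event to a preset scene model registered by name; every identifier and field here is hypothetical rather than taken from the disclosure.

```python
PRESET_MODELS = {}

def preset_model(name: str):
    """Register a preset scene model under a scene name."""
    def register(fn):
        PRESET_MODELS[name] = fn
        return fn
    return register

@preset_model("classroom")
def classroom_scene(event: dict) -> str:
    return f"classroom frame: {event['action']} by {event['actor']}"

def frame_to_event_stream(frame: dict) -> list:
    """Convert a shallow semantic frame into an action-driven event stream."""
    return [{"scene": frame["scene"], "actor": frame["actor"], "action": act}
            for act in frame["actions"]]

def generate_scene_animation(frame: dict) -> list:
    """Call the preset model for each event to produce the scene animation."""
    return [PRESET_MODELS[evt["scene"]](evt) for evt in frame_to_event_stream(frame)]
```

For example, generate_scene_animation({"scene": "classroom", "actor": "teacher", "actions": ["enter", "write"]}) would produce one rendered event per action.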
4. The method according to claim 3, characterized in that after the expressing of the input information as a shallow semantic frame according to the extracted lexical unit related to the scene, the method further comprises:
extracting emotional semantics from the shallow semantic frame, and calling, according to a preset correspondence, background music matching the extracted emotional semantics.
5. The method according to claim 1, characterized in that, where the input information includes at least two lexical units, the generating, based on the established correspondence between the lexical units and the sign language vocabulary, of the sign language animation corresponding to the input information comprises:
obtaining, according to the order of the lexical units included in the input information, the order of the sign language vocabulary corresponding to the lexical units;
calculating, according to the order of the sign language vocabulary, the connection points between the sign language motion sequences of two adjacent sign language vocabulary items;
generating, according to the order of the sign language vocabulary, the sign language motion sequence corresponding to the input information, wherein the sign language motion sequences of the two adjacent sign language vocabulary items are joined at the calculated connection points.
6. The method according to claim 5, characterized in that the calculating, according to the order of the sign language vocabulary, of the connection points between the sign language motion sequences of two adjacent sign language vocabulary items comprises:
obtaining the motion data of the two adjacent sign language vocabulary items, and calculating the transition interpolation between their motion data;
and the generating, according to the order of the sign language vocabulary, of the sign language animation corresponding to the input information comprises:
integrating the motion data of each sign language vocabulary item according to the calculated transition interpolation, and mapping the integrated motion data onto a demonstration model, so that the demonstration model performs the sign language animation corresponding to the input information.
7. The method according to claim 1, characterized in that the parsing of the at least one lexical unit included in the input information comprises:
segmenting the acquired input information according to syntactic structure to obtain at least one basic vocabulary item;
taking, from the basic vocabulary obtained by segmentation, each basic vocabulary item that has a corresponding sign language vocabulary as a lexical unit;
replacing, according to a preset approximate-substitution relationship, any basic vocabulary item obtained by segmentation that has no corresponding sign language vocabulary with an approximate vocabulary item that does have a corresponding sign language vocabulary, and taking the resulting approximate vocabulary item as a lexical unit.
8. The method according to claim 1, characterized in that after the parsing of the at least one lexical unit included in the input information, the method further comprises:
if a directional verb exists among the lexical units of the input information, determining the direction of the directional verb according to its subject and predicate, and associating the corresponding lexical units according to the direction of the directional verb;
if a pronoun exists among the lexical units of the input information, determining the referent of the pronoun, and replacing the pronoun with the lexical unit corresponding to the referent.
9. The method according to any one of claims 1 to 8, characterized in that the generating, based on the established correspondence between the lexical unit and the sign language vocabulary, of the sign language animation corresponding to the input information comprises:
obtaining, based on the established correspondence between the lexical unit and the sign language vocabulary, the motion sequence of each sign language vocabulary item, and mapping the obtained motion sequences of the sign language vocabulary onto a demonstration model according to the correspondence between the lexical unit and the sign language vocabulary, so that the demonstration model performs the sign language animation corresponding to the input information.
10. A sign language scene generating apparatus, characterized by comprising:
an input module, configured to obtain input information;
a natural language processing module, configured to parse at least one lexical unit included in the input information and to establish a correspondence between the lexical unit and sign language vocabulary;
an animation generation module, configured to extract the lexical unit related to a scene from the parsed lexical units and to establish a corresponding scene model;
a demonstration module, configured to generate, based on the established correspondence between the lexical unit and the sign language vocabulary, the sign language animation corresponding to the input information.
CN201811177572.1A 2018-10-10 2018-10-10 A kind of sign language scene generating method and device Pending CN109409255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811177572.1A CN109409255A (en) 2018-10-10 2018-10-10 A kind of sign language scene generating method and device

Publications (1)

Publication Number Publication Date
CN109409255A (en) 2019-03-01

Family

ID=65467417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811177572.1A Pending CN109409255A (en) 2018-10-10 2018-10-10 A kind of sign language scene generating method and device

Country Status (1)

Country Link
CN (1) CN109409255A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079268A (en) * 2006-12-28 2007-11-28 腾讯科技(深圳)有限公司 System and method for sign language synthesis and display
CN101303773A (en) * 2008-06-10 2008-11-12 中国科学院计算技术研究所 Method and system for generating virtual scene
US20100199228A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding
CN101527092A (en) * 2009-04-08 2009-09-09 西安理工大学 Computer assisted hand language communication method under special session context
CN103956167A (en) * 2014-05-06 2014-07-30 北京邮电大学 Visual sign language interpretation method and device based on Web
CN106713974A (en) * 2015-11-12 2017-05-24 中兴通讯股份有限公司 Data conversion method and device
CN106294666A (en) * 2016-08-04 2017-01-04 上海汽笛生网络科技有限公司 A kind of method realizing text visualization Dynamic Display
CN106446406A (en) * 2016-09-23 2017-02-22 天津大学 Simulation system and simulation method for converting Chinese sentences into human mouth shapes
CN107193853A (en) * 2016-12-08 2017-09-22 孙瑞峰 A kind of social scenario building method and system based on linguistic context
CN107392163A (en) * 2017-07-28 2017-11-24 深圳市唯特视科技有限公司 A kind of human hand and its object interaction tracking based on the imaging of short Baseline Stereo

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
牟远明: "Application of Virtual Reality Technology in Distance Education for Disabled People", China Master's Theses Full-text Database, Information Science and Technology Series *
王卫民 et al.: "Design of Automatic Sign Language Animation Generation Software Based on Android", Electronic Design Engineering *
陈新伟: "Research on Generating Natural Sign Language Action Sequences Based on Surface Semantic Analysis", China Master's Theses Full-text Database, Philosophy and Humanities Series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110741385A (en) * 2019-06-26 2020-01-31 Oppo广东移动通信有限公司 Gesture recognition method and device and location tracking method and device
CN110741385B (en) * 2019-06-26 2023-11-07 Oppo广东移动通信有限公司 Gesture recognition method and device, and positioning tracking method and device
CN111369652A (en) * 2020-02-28 2020-07-03 长沙千博信息技术有限公司 Method for generating continuous sign language action based on multiple independent sign language actions
CN111369652B (en) * 2020-02-28 2024-04-05 长沙千博信息技术有限公司 Method for generating continuous sign language actions based on multiple independent sign language actions
CN111340920A (en) * 2020-03-02 2020-06-26 长沙千博信息技术有限公司 Semantic-driven two-dimensional animation automatic generation method
CN111340920B (en) * 2020-03-02 2024-04-09 长沙千博信息技术有限公司 Semantic-driven two-dimensional animation automatic generation method
CN114157920A (en) * 2021-12-10 2022-03-08 深圳Tcl新技术有限公司 Playing method and device for displaying sign language, smart television and storage medium
CN114157920B (en) * 2021-12-10 2023-07-25 深圳Tcl新技术有限公司 Method and device for playing sign language, intelligent television and storage medium
CN114179085A (en) * 2021-12-16 2022-03-15 上海景吾智能科技有限公司 Method and system for robot control, track connection and smoothing
CN114179085B (en) * 2021-12-16 2024-02-06 上海景吾智能科技有限公司 Robot control, track connection and smoothing method and system
CN115457981A (en) * 2022-09-05 2022-12-09 安徽康佳电子有限公司 Method for facilitating hearing-impaired person to watch video and television based on method

Similar Documents

Publication Publication Date Title
Oviatt et al. The paradigm shift to multimodality in contemporary computer interfaces
CN109409255A (en) A kind of sign language scene generating method and device
CN108877336A (en) Teaching method, cloud service platform and tutoring system based on augmented reality
Granström et al. Audiovisual representation of prosody in expressive speech communication
Cienki Ten lectures on spoken language and gesture from the perspective of cognitive linguistics: Issues of dynamicity and multimodality
CN109166409A (en) A kind of sign language conversion method and device
KR102174922B1 (en) Interactive sign language-voice translation apparatus and voice-sign language translation apparatus reflecting user emotion and intention
CN111414506B (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
Naert et al. A survey on the animation of signing avatars: From sign representation to utterance synthesis
Rojc et al. The TTS-driven affective embodied conversational agent EVA, based on a novel conversational-behavior generation algorithm
CN115953521B (en) Remote digital person rendering method, device and system
Nishida Conversational informatics: An engineering approach
Mittelberg et al. Gesture studies and semiotics
Dhanjal et al. An automatic conversion of Punjabi text to Indian sign language
Rastgoo et al. A survey on recent advances in Sign Language Production
Knight A multi-modal corpus approach to the analysis of backchanneling behaviour
Rastgoo et al. All You Need In Sign Language Production
López-Colino et al. Spanish sign language synthesis system
Huenerfauth Representing coordination and non-coordination in an american sign language animation
Solina et al. Multimedia dictionary and synthesis of sign language
Gjaci et al. Towards culture-aware co-speech gestures for social robots
Gibet et al. Signing Avatars-Multimodal Challenges for Text-to-sign Generation
CN109087644A (en) Electronic equipment and its exchange method of voice assistant, the device with store function
Carmigniani Augmented reality methods and algorithms for hearing augmentation
López-Colino et al. Hybrid paradigm for Spanish sign language synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190301