CN106126709A - Method and device for generating chat emoticons in real time - Google Patents

Method and device for generating chat emoticons in real time

Info

Publication number
CN106126709A
CN106126709A (application CN201610509363.7A; also published as CN 106126709 A)
Authority
CN
China
Prior art keywords
phrases
expression
words
classification
chat
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610509363.7A
Other languages
Chinese (zh)
Inventor
Ma Jian (马健)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd, Qizhi Software Beijing Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201610509363.7A
Publication of CN106126709A
Priority to PCT/CN2017/080821 (WO2018000910A1)
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/95 — Retrieval from the web
    • G06F16/951 — Indexing; Web crawling techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for generating chat emoticons in real time. The method includes: annotating one or more expressions in a base expression set, marking each expression with topic-related information; when a chat phrase is received from a user, matching the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase; taking the expression corresponding to the matched topic-related information as the expression to be composited, compositing it with the chat phrase to generate a chat emoticon, and providing the result to the user. Embodiments of the invention can build emoticons in real time according to the user's input, greatly extending the richness and coverage of available expressions.

Description

Method and device for generating chat emoticons in real time
Technical field
The present invention relates to the technical field of Internet applications, and in particular to a method and a device for generating chat emoticons in real time.
Background technology
Social chat software (such as WeChat and QQ) has become an indispensable part of daily life, and the emoticon, as an interactive medium, plays a very important role in chat systems. At present emoticons are all built manually; even the emoticons returned by search queries are constructed manually in advance. The drawbacks of this approach are that the number of emoticons is limited and their patterns are fixed: once built, an emoticon can only serve fixed wording. User input, however, is highly varied, so a fixed number of fixed-pattern emoticons clearly cannot meet users' demand for emoticon-based interaction.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a method for generating chat emoticons in real time, and a corresponding device, that overcome the problems above or at least partly solve them.
According to one aspect of the present invention, a method for generating chat emoticons in real time is provided, including:
annotating one or more expressions in a base expression set, marking each expression with topic-related information;
when a chat phrase is received from a user, matching the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase;
taking the expression corresponding to the matched topic-related information as the expression to be composited, compositing it with the chat phrase to generate a chat emoticon, and providing the result to the user.
Optionally, matching the chat phrase against the annotated topic-related information of each expression includes:
determining the category of the chat phrase;
looking up, in a pre-built mapping from categories to category attributes, the category attribute corresponding to the category of the chat phrase;
obtaining the category attribute value corresponding to that category attribute of the chat phrase;
matching the obtained category attribute value against the annotated topic-related information of each expression.
Optionally, determining the category of the chat phrase includes:
obtaining a pre-built phrase category dictionary, which stores the correspondence between phrases and categories;
looking up the category of the chat phrase in the phrase category dictionary.
Optionally, the phrase category dictionary is built by the following steps:
obtaining one or more categories to be built;
enumerating the phrases of each category;
building the phrase category dictionary from the categories and their phrases.
Optionally, the categories include at least one of the following:
idioms, verses, songs, films, books.
Optionally, matching the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase includes:
semantically matching the chat phrase against the annotated topic-related information of each expression to obtain topic-related information that is semantically identical or similar to the chat phrase.
Optionally, the method further includes:
when annotating the one or more expressions in the base expression set, marking a compositing region for each expression.
Optionally, compositing the expression to be composited with the chat phrase to generate a chat emoticon includes:
obtaining the compositing region marked for the expression to be composited;
merging the chat phrase into the compositing region to generate the chat emoticon.
Optionally, the method further includes:
when annotating the one or more expressions in the base expression set, marking a compositing type for each expression.
Optionally, compositing the expression to be composited with the chat phrase to generate a chat emoticon includes:
obtaining the compositing region and compositing type marked for the expression to be composited;
converting the chat phrase into the compositing type and merging it into the compositing region to generate the chat emoticon.
Optionally, if the compositing type is a picture, converting the chat phrase into the compositing type and merging it into the compositing region includes:
generating a picture containing the chat phrase;
scaling the picture according to the size of the compositing region to obtain a scaled picture;
merging the scaled picture into the compositing region.
According to another aspect of the present invention, a device for generating chat emoticons in real time is also provided, including:
an annotation module, adapted to annotate one or more expressions in a base expression set, marking each expression with topic-related information;
a matching module, adapted to, when a chat phrase is received from a user, match the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase;
a generation module, adapted to take the expression corresponding to the matched topic-related information as the expression to be composited, composite it with the chat phrase to generate a chat emoticon, and provide the result to the user.
Optionally, the matching module includes:
a determining unit, adapted to determine the category of the chat phrase;
a lookup unit, adapted to look up, in a pre-built mapping from categories to category attributes, the category attribute corresponding to the category of the chat phrase;
an obtaining unit, adapted to obtain the category attribute value corresponding to that category attribute of the chat phrase;
a matching unit, adapted to match the obtained category attribute value against the annotated topic-related information of each expression.
Optionally, the determining unit is further adapted to:
obtain a pre-built phrase category dictionary, which stores the correspondence between phrases and categories;
look up the category of the chat phrase in the phrase category dictionary.
Optionally, the device further includes:
a building module, adapted to obtain one or more categories to be built, enumerate the phrases of each category, and build the phrase category dictionary from the categories and their phrases.
Optionally, the categories include at least one of the following:
idioms, verses, songs, films, books.
Optionally, the matching module is further adapted to:
semantically match the chat phrase against the annotated topic-related information of each expression to obtain topic-related information that is semantically identical or similar to the chat phrase.
Optionally, the annotation module is further adapted to:
mark a compositing region for each expression when annotating the one or more expressions in the base expression set.
Optionally, the generation module is further adapted to:
obtain the compositing region marked for the expression to be composited;
merge the chat phrase into the compositing region to generate the chat emoticon.
Optionally, the annotation module is further adapted to:
mark a compositing type for each expression when annotating the one or more expressions in the base expression set.
Optionally, the generation module is further adapted to:
obtain the compositing region and compositing type marked for the expression to be composited;
convert the chat phrase into the compositing type and merge it into the compositing region to generate the chat emoticon.
Optionally, the generation module is further adapted to:
if the compositing type is a picture, generate a picture containing the chat phrase;
scale the picture according to the size of the compositing region to obtain a scaled picture;
merge the scaled picture into the compositing region.
Embodiments of the present invention propose a brand-new way of generating emoticons in real time. First, one or more expressions in a base expression set are annotated, marking each expression with topic-related information. When a chat phrase is received from a user, it is matched against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase. The expression corresponding to that information is then taken as the expression to be composited, composited with the chat phrase to generate a chat emoticon, and provided to the user. Embodiments of the invention can thus build emoticons in real time according to the user's input, greatly extending the richness and coverage of available expressions. Compared with today's limited, fixed-pattern emoticons, this is a substantial enrichment and innovation that goes far toward meeting users' demand for emoticon-based interaction.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and practiced according to the content of this description, and in order that the above and other objects, features, and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
From the following detailed description of specific embodiments in conjunction with the accompanying drawings, the above and other objects, advantages, and features of the present invention will become clearer to those skilled in the art.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art from the following detailed description of preferred embodiments. The drawings serve only to illustrate preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical parts are denoted by identical reference numerals. In the drawings:
Fig. 1 shows a flow chart of a method for generating chat emoticons in real time according to an embodiment of the present invention;
Fig. 2a shows a schematic diagram of an expression in a base expression set according to an embodiment of the present invention;
Fig. 2b shows a schematic diagram of the compositing region marked for the expression shown in Fig. 2a according to an embodiment of the present invention;
Fig. 2c shows a schematic diagram of a generated chat emoticon according to an embodiment of the present invention;
Fig. 3a shows a schematic diagram of an expression in a base expression set according to another embodiment of the present invention;
Fig. 3b shows a schematic diagram of the compositing region marked for the expression shown in Fig. 3a according to an embodiment of the present invention;
Fig. 3c shows a schematic diagram of a generated chat emoticon according to another embodiment of the present invention;
Fig. 4a shows a schematic diagram of a chat interface according to an embodiment of the present invention;
Fig. 4b shows a schematic diagram of a chat interface containing a chat emoticon according to an embodiment of the present invention;
Fig. 5 shows a structural diagram of a device for generating chat emoticons in real time according to an embodiment of the present invention; and
Fig. 6 shows a structural diagram of a device for generating chat emoticons in real time according to another embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and so that its scope can be conveyed completely to those skilled in the art.
To solve the technical problems above, an embodiment of the present invention provides a method for generating chat emoticons in real time. Fig. 1 shows a flow chart of the method according to an embodiment of the present invention. As shown in Fig. 1, the method includes at least the following steps S102 to S106:
Step S102: annotate one or more expressions in a base expression set, marking each expression with topic-related information;
Step S104: when a chat phrase is received from a user, match the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase;
Step S106: take the expression corresponding to the matched topic-related information as the expression to be composited, composite it with the chat phrase to generate a chat emoticon, and provide the result to the user.
Embodiments of the present invention propose a brand-new way of generating emoticons in real time. First, one or more expressions in a base expression set are annotated, marking each expression with topic-related information. When a chat phrase is received from a user, it is matched against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase. The expression corresponding to that information is then taken as the expression to be composited, composited with the chat phrase to generate a chat emoticon, and provided to the user. Embodiments of the invention can thus build emoticons in real time according to the user's input, greatly extending the richness and coverage of available expressions. Compared with today's limited, fixed-pattern emoticons, this is a substantial enrichment and innovation that goes far toward meeting users' demand for emoticon-based interaction.
The base expression set mentioned in step S102 may be a set of existing expression resources, for example the resources shown in Figs. 2a and 3a; the invention is not limited to this. In embodiments of the present invention, each expression in the base expression set can be marked with topic-related information. In Fig. 2a, for example, the expression could be annotated with the topic-related information "Du Fu; the country is broken, yet mountains and rivers remain; composing poetry"; in Fig. 3a, the expression could be annotated with "shy, blush". It should be noted that these are only illustrations and do not limit the invention.
In an optional embodiment of the present invention, each expression in the base expression set can also be marked with a compositing region. The region may be a box, a circle, or an irregularly shaped area. For example, a compositing region can be marked for the expression shown in Fig. 2a, such as the box shown in Fig. 2b, and for the expression shown in Fig. 3a, such as the box shown in Fig. 3b. Again, these are only illustrations and do not limit the invention.
In an optional embodiment of the present invention, each expression in the base expression set can also be marked with a compositing type. The type may include text, picture, and so on, and can be configured according to actual demand.
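Taken together, the annotations described so far (topic-related information, compositing region, compositing type) amount to a small record per expression. The Python sketch below shows one possible shape of such an annotated base expression set; every field name, file name, and coordinate is a hypothetical illustration, not something specified by the patent:

```python
# Hypothetical annotated base expression set; all names and values here
# are illustrative only.
BASE_EXPRESSIONS = [
    {
        "image": "du_fu.png",                  # expression resource (Fig. 2a style)
        "topic_info": ["Du Fu", "poetry", "the country is broken"],
        "region": (40, 10, 200, 60),           # compositing box: left, top, right, bottom
        "synth_type": "text",                  # compositing type
    },
    {
        "image": "blush.png",                  # expression resource (Fig. 3a style)
        "topic_info": ["shy", "blush"],
        "region": (20, 120, 180, 170),
        "synth_type": "picture",
    },
]

def topic_terms(expression):
    """Return the topic-related information annotated on one expression."""
    return expression["topic_info"]
```

A record like this is all the matching and compositing steps described later need to consult.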
In step S104, the chat phrase is matched against the annotated topic-related information of each expression. Either matching via category attributes or semantic matching may be used; both are described in detail below.
Mode one: matching via category attributes. First, the category of the chat phrase is determined, and the category attribute corresponding to that category is looked up in the pre-built mapping from categories to category attributes. Then the category attribute value corresponding to that attribute of the chat phrase is obtained and matched against the annotated topic-related information of each expression.
For example, when the chat phrase "all mountains look small at a glance" is received from a user, its category is determined to be "verse", and the category attributes found for "verse" in the pre-built mapping include "author". The author of "all mountains look small at a glance" is then obtained as Du Fu, and "Du Fu" is matched against the annotated topic-related information of each expression; according to the matching result, the expression shown in Fig. 2a or 2b can be taken as the expression to be composited.
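Mode one can be sketched as a chain of dictionary lookups. The miniature dictionaries below are invented stand-ins for the pre-built category mapping, attribute store, and annotated expression set; the verse string is an approximate English rendering of the Du Fu example:

```python
# Mode one sketch: category-attribute matching over toy stand-in data.
PHRASE_CATEGORY = {"all mountains look small at a glance": "verse"}
CATEGORY_ATTRIBUTES = {"verse": ["author"]}      # category -> attribute names
ATTRIBUTE_VALUES = {
    ("all mountains look small at a glance", "author"): "Du Fu",
}
EXPRESSION_TOPICS = {"du_fu.png": ["Du Fu", "poetry"]}   # hypothetical file name

def match_by_attributes(phrase):
    """Return expressions whose topic info contains an attribute value of the phrase."""
    category = PHRASE_CATEGORY.get(phrase)
    if category is None:
        return []
    matches = []
    for attr in CATEGORY_ATTRIBUTES.get(category, []):
        value = ATTRIBUTE_VALUES.get((phrase, attr))
        if value is None:
            continue
        for image, topics in EXPRESSION_TOPICS.items():
            if value in topics:
                matches.append(image)
    return matches
```

With these stand-ins, a phrase whose category is known yields the expressions annotated with the matching attribute value, while an unknown phrase yields no match.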
Further, for determining the category of the chat phrase, an embodiment of the present invention provides an optional scheme: a pre-built phrase category dictionary, which stores the correspondence between phrases and categories, is obtained, and the category of the chat phrase is looked up in it. The categories here may include idioms, verses, songs, films, books, and so on; the invention is not limited to these.
Since idioms, verses, songs, films, books, and the like are all enumerable, the phrase category dictionary can be built by obtaining one or more categories to be built, enumerating the phrases of each category, and building the dictionary from the categories and their phrases.
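Because the categories are enumerable, building the phrase category dictionary reduces to inverting a category-to-phrases listing into a phrase-to-category map. A minimal sketch, with invented example phrases:

```python
def build_phrase_dictionary(categories):
    """Build a phrase -> category dictionary.

    `categories` maps a category name (e.g. "idiom", "verse") to the
    enumerated phrases of that category.
    """
    dictionary = {}
    for category, phrases in categories.items():
        for phrase in phrases:
            dictionary[phrase] = category
    return dictionary

# Invented example phrases for illustration:
PHRASE_DICT = build_phrase_dictionary({
    "verse": ["all mountains look small at a glance"],
    "idiom": ["a drop in the bucket"],
})
```

Looking up a chat phrase in `PHRASE_DICT` then gives its category in a single dictionary access.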
Mode two: semantic matching. That is, the chat phrase is semantically matched against the annotated topic-related information of each expression to obtain topic-related information that is semantically identical or similar to the chat phrase.
For example, when the chat phrase "the picture is so beautiful I dare not look" is received from a user, it is semantically matched against the topic-related information of each expression. "Shy" can be matched as semantically identical or similar to the chat phrase, so the expression corresponding to "shy" — the expression shown in Fig. 3a or 3b — can be taken as the expression to be composited.
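As a rough illustration of mode two, the sketch below scores each expression by how many words its topic terms share with the chat phrase. This token overlap is only a placeholder for real semantic matching, which would typically rely on synonym lists or word embeddings; the phrase and file names are invented:

```python
def semantic_match(phrase, expression_topics, threshold=1):
    """Pick the expression whose topic terms share the most words with the phrase.

    Returns the best-matching expression key, or None if no expression
    reaches the overlap threshold.
    """
    phrase_words = set(phrase.lower().split())
    best, best_score = None, 0
    for image, topics in expression_topics.items():
        topic_words = {w for t in topics for w in t.lower().split()}
        score = len(phrase_words & topic_words)
        if score > best_score:
            best, best_score = image, score
    return best if best_score >= threshold else None
```

A phrase containing "shy" would thus land on an expression annotated with "shy, blush", while an unrelated phrase matches nothing.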
The above describes matching the chat phrase against the annotated topic-related information of each expression with mode one or mode two separately. In practical applications, mode one and mode two can also be combined: when a chat phrase is received from a user, its category is looked up in the pre-built phrase category dictionary; if found, matching continues with mode one to obtain the topic-related information that matches the chat phrase. Otherwise — that is, if the chat phrase is not in the phrase category dictionary — mode two can be used for semantic matching, obtaining topic-related information that is semantically identical or similar to the chat phrase.
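The combined strategy is a simple wrapper: consult the phrase category dictionary first, and fall back to semantic matching when the phrase is absent. The matcher callables below are toy stand-ins so the sketch stays self-contained:

```python
def match_expression(phrase, phrase_dict, match_by_attributes, semantic_match):
    """Use mode one when the phrase is in the category dictionary,
    otherwise fall back to mode two (semantic matching)."""
    if phrase in phrase_dict:
        return match_by_attributes(phrase)
    return semantic_match(phrase)

# Toy stand-in matchers for illustration:
result = match_expression(
    "hello there",                       # not in the dictionary
    {"a verse": "verse"},
    lambda p: "matched by attributes",
    lambda p: "matched semantically",
)
# result == "matched semantically"
```

Passing the two matchers in as callables keeps this wrapper independent of how each mode is implemented.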
After the topic-related information matching the chat phrase is obtained in step S104, step S106 takes the expression corresponding to that information as the expression to be composited and composites it with the chat phrase. Compositing can be based on the compositing region, or on the compositing region together with the compositing type.
First, compositing based on the region. As explained above, an embodiment of the present invention can mark a compositing region — a box, a circle, or an irregularly shaped area — for each expression in the base expression set. After the expression to be composited is obtained, its marked compositing region is retrieved and the chat phrase is merged into that region to generate the chat emoticon. For example, as described above, when the user's chat phrase is "all mountains look small at a glance", the expression shown in Fig. 2a or 2b can be taken as the expression to be composited, with the box in Fig. 2b being the compositing region marked for the expression in Fig. 2a. "The country is broken, yet mountains and rivers remain" in the box can then be replaced by the chat phrase "all mountains look small at a glance" to generate the chat emoticon shown in Fig. 2c. In Fig. 2c, "all mountains look small at a glance" sits inside the marked box; the font and layout can be configured according to actual requirements, and the invention is not limited in this regard.
Second, compositing based on the region and the type. As explained above, an embodiment of the present invention can also mark a compositing type — text, picture, and so on, configurable according to actual demand — for each expression in the base expression set. After the expression to be composited is obtained, its marked compositing region and compositing type are retrieved; the chat phrase is then converted into the compositing type and merged into the compositing region to generate the chat emoticon. In Fig. 2c, for example, the chat phrase "all mountains look small at a glance" is composited with the expression as text.
When the compositing type is a picture, a picture containing the chat phrase can be generated and scaled according to the size of the compositing region, and the scaled picture is merged into the region. For example, as described above, when the user's chat phrase is "the picture is so beautiful I dare not look", the expression shown in Fig. 3a or 3b can be taken as the expression to be composited, with the box in Fig. 3b being the compositing region marked for the expression in Fig. 3a. A picture containing the chat phrase "the picture is so beautiful I dare not look" can be generated, scaled according to the size of the compositing region, and merged into the region, as shown in Fig. 3c. In Fig. 3c, "The picture is so beautiful!!! I dare not look!!!" sits inside the box (the box itself is not displayed in Fig. 3c); the font and layout can be configured according to actual requirements, and the invention is not limited in this regard. In addition, when merging the scaled picture into the compositing region, matrix addition can be used: the scaled picture and the region are added as matrices.
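The scale-then-merge step, including the matrix addition mentioned above, can be sketched on plain grayscale matrices represented as nested lists; a real system would operate on actual image buffers, and an imaging library would handle rendering the text into a picture in the first place:

```python
def scale_nearest(img, new_h, new_w):
    """Nearest-neighbour scaling of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

def merge_into_region(base, overlay, top, left):
    """Merge an overlay into a region of the base image by matrix addition,
    clamping each pixel to the 8-bit maximum of 255."""
    out = [row[:] for row in base]       # leave the base expression untouched
    for r, row in enumerate(overlay):
        for c, v in enumerate(row):
            out[top + r][left + c] = min(255, out[top + r][left + c] + v)
    return out
```

Nearest-neighbour scaling is chosen only for brevity; the clamp keeps the matrix addition inside the valid pixel range.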
The implementation of the method for generating chat emoticons in real time of the present invention is described in detail below through a specific embodiment. In this embodiment, user B chats with friend A through social chat software and sees the chat interface shown in Fig. 4a. When user B receives friend A's chat message "Sharing with you :)" and types "the picture is so beautiful I dare not look" in the input box, a corresponding chat emoticon can be generated automatically on the right side of the input box, as shown in Fig. 4b. For the concrete way the chat emoticon is generated, refer to the description above; it is not repeated here.
It should be noted that in practical applications all of the optional embodiments above can be combined in any way to form further optional embodiments of the present invention, which are not repeated here one by one.
Based on the method for generating chat emoticons in real time provided by the embodiments above, and based on the same inventive concept, an embodiment of the present invention further provides a device for generating chat emoticons in real time. Fig. 5 shows a structural diagram of the device according to an embodiment of the present invention. As shown in Fig. 5, the device may include at least an annotation module 510, a matching module 520, and a generation module 530.
The functions of each component of the device for generating chat emoticons in real time of the embodiment of the present invention, and the connections between the parts, are now introduced:
The annotation module 510 is adapted to annotate one or more expressions in a base expression set, marking each expression with topic-related information.
The matching module 520 is coupled with the annotation module 510 and is adapted to, when a chat phrase is received from a user, match the chat phrase against the annotated topic-related information of each expression to obtain the topic-related information that matches the chat phrase.
The generation module 530 is coupled with the matching module 520 and is adapted to take the expression corresponding to the matched topic-related information as the expression to be composited, composite it with the chat phrase to generate a chat emoticon, and provide the result to the user.
In an embodiment of the present invention, as shown in Fig. 6, the matching module 520 may include:
a determining unit 521, adapted to determine the classification of the chat phrase;
a lookup unit 522, adapted to look up, in a pre-built correspondence between classifications and category attributes, the category attribute corresponding to the classification of the chat phrase;
an acquiring unit 523, adapted to acquire the category attribute value corresponding to the category attribute of the chat phrase;
a matching unit 524, adapted to match the acquired category attribute value against the annotated theme-related information of each expression.
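The four units' pipeline (determine classification, look up the category attribute, acquire the attribute value, match it against the themes) can be illustrated with made-up lookup tables; every table entry and name here is hypothetical:

```python
# Invented stand-ins for the pre-built correspondences of the embodiment.
WORD_TO_CLASS = {"yesterday once more": "song"}               # phrase -> classification
CLASS_TO_ATTR = {"song": "mood"}                              # classification -> category attribute
ATTR_VALUES = {("yesterday once more", "mood"): "nostalgic"}  # (phrase, attribute) -> value

def match_phrase(phrase, themes):
    cls = WORD_TO_CLASS[phrase]               # 1. determining unit: classification
    attr = CLASS_TO_ATTR[cls]                 # 2. lookup unit: category attribute
    value = ATTR_VALUES[(phrase, attr)]       # 3. acquiring unit: attribute value
    return [t for t in themes if value in t]  # 4. matching unit: value vs. themes

print(match_phrase("yesterday once more", ["nostalgic smile", "angry face"]))
# prints ['nostalgic smile']
```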
In an embodiment of the present invention, the determining unit 521 is further adapted to:
acquire a pre-built word-classification dictionary, in which correspondences between words and classifications are stored;
look up the classification of the chat phrase in the word-classification dictionary.
In an embodiment of the present invention, as shown in Fig. 6, the device shown in Fig. 5 may further include:
a construction module 540, coupled with the determining unit 521, adapted to acquire one or more classifications to be built, enumerate the words of each classification, and build the word-classification dictionary based on each classification and its words.
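A minimal sketch of how such a word-classification dictionary might be built and queried; the categories and entries are invented examples, not from the patent:

```python
def build_word_class_dict(categories):
    """Construction module sketch: enumerate the words of each classification
    to be built and store word -> classification for later lookup."""
    word_to_class = {}
    for category, words in categories.items():
        for w in words:
            word_to_class[w] = category
    return word_to_class

# Invented example entries for classifications of the kind named in the patent.
CATEGORIES = {
    "idiom": ["a picture is worth a thousand words"],
    "verse": ["shall I compare thee to a summer's day"],
    "song":  ["yesterday once more"],
}

d = build_word_class_dict(CATEGORIES)
print(d["yesterday once more"])  # prints song
```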
In an embodiment of the present invention, the classification includes at least one of the following:
idioms, verses, songs, films, books.
In an embodiment of the present invention, the matching module 520 is further adapted to:
semantically match the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information that is semantically identical or similar to the chat phrase.
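As a rough, dependency-free stand-in for the semantic matching described here, standard-library string similarity can rank the annotated themes; a production system would presumably use word embeddings or a semantic model instead. Themes and phrases are invented:

```python
from difflib import SequenceMatcher

def best_theme(chat_phrase, themes):
    # Rank annotated theme strings by surface similarity to the chat phrase
    # and return the closest one.
    return max(themes, key=lambda t: SequenceMatcher(None, chat_phrase, t).ratio())

themes = ["too beautiful to look at", "angry face", "crying with laughter"]
print(best_theme("so beautiful I dare not look", themes))
# prints too beautiful to look at
```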
In an embodiment of the present invention, the labeling module 510 is further adapted to:
when labeling the one or more expressions in the base expression set, annotate each expression with a synthesis region.
In an embodiment of the present invention, the generation module 530 is further adapted to:
acquire the synthesis region annotated for the expression to be synthesized;
synthesize the chat phrase into the synthesis region to generate the chat expression.
In an embodiment of the present invention, the labeling module 510 is further adapted to:
when labeling the one or more expressions in the base expression set, annotate each expression with a synthesis type.
In an embodiment of the present invention, the generation module 530 is further adapted to:
acquire the synthesis region and synthesis type annotated for the expression to be synthesized;
convert the chat phrase into the synthesis type and synthesize it into the synthesis region to generate the chat expression.
In an embodiment of the present invention, the generation module 530 is further adapted to:
if the synthesis type is a picture, generate a picture containing the chat phrase;
scale the picture according to the size of the synthesis region to obtain a scaled picture;
synthesize the scaled picture into the synthesis region.
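The scaling step can be sketched as plain arithmetic: fit the phrase-picture into the annotated synthesis region while preserving its aspect ratio. Pasting the result into the region would be done with an imaging library (e.g. Pillow); only the size computation is shown, and the function name and dimensions are illustrative:

```python
def fit_to_region(pic_w, pic_h, region_w, region_h):
    # Scale factor that fits the picture inside the region without
    # distorting its aspect ratio.
    scale = min(region_w / pic_w, region_h / pic_h)
    return round(pic_w * scale), round(pic_h * scale)

# A 400x100 phrase-picture scaled into a 200x80 synthesis region.
print(fit_to_region(400, 100, 200, 80))  # prints (200, 50)
```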
By adopting any one of the above preferred embodiments, or a combination of several of them, an embodiment of the present invention can achieve the following beneficial effects:
The embodiment of the present invention proposes a brand-new manner of generating expressions in real time. First, one or more expressions in a base expression set are labeled, annotating each expression with theme-related information. When a chat phrase is received from a user, the chat phrase is matched against the annotated theme-related information of each expression to obtain the theme-related information matching the chat phrase. Then, the expression corresponding to the matching theme-related information is taken as the expression to be synthesized, synthesized with the chat phrase to generate a chat expression, and provided to the user. It can thus be seen that the embodiment of the present invention can build expressions in real time according to the user's input needs, extending expressions well in both richness and breadth. Moreover, compared with the currently limited, fixed-pattern expressions, this is an enrichment and innovation that greatly satisfies users' demand for expressive interaction.
Numerous specific details are set forth in the description provided herein. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure an understanding of this description.
Similarly, it should be appreciated that, in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof, for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device for generating chat expressions in real time according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
While a number of exemplary embodiments of the present invention have been illustrated and described in detail herein, those skilled in the art will appreciate that, without departing from the spirit and scope of the invention, many other variations or modifications conforming to the principles of the invention can still be determined or derived directly from the disclosure of the present invention. Therefore, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.
In one aspect, embodiments of the present invention provide A1, a method for generating chat expressions in real time, including:
labeling one or more expressions in a base expression set, annotating each expression with theme-related information;
upon receiving a chat phrase from a user, matching the chat phrase against the annotated theme-related information of each expression to obtain the theme-related information matching the chat phrase;
taking the expression corresponding to the theme-related information matching the chat phrase as the expression to be synthesized, synthesizing it with the chat phrase to generate a chat expression, and providing the chat expression to the user.
A2, the method according to A1, wherein matching the chat phrase against the annotated theme-related information of each expression includes:
determining the classification of the chat phrase;
looking up, in a pre-built correspondence between classifications and category attributes, the category attribute corresponding to the classification of the chat phrase;
acquiring the category attribute value corresponding to the category attribute of the chat phrase;
matching the acquired category attribute value against the annotated theme-related information of each expression.
A3, the method according to A1 or A2, wherein determining the classification of the chat phrase includes:
acquiring a pre-built word-classification dictionary, in which correspondences between words and classifications are stored;
looking up the classification of the chat phrase in the word-classification dictionary.
A4, the method according to any one of A1-A3, wherein the word-classification dictionary is built by the following steps:
acquiring one or more classifications to be built;
enumerating the words of each classification;
building the word-classification dictionary based on each classification and its words.
A5, the method according to any one of A1-A4, wherein the classification includes at least one of the following:
idioms, verses, songs, films, books.
A6, the method according to any one of A1-A5, wherein matching the chat phrase against the annotated theme-related information of each expression to obtain the theme-related information matching the chat phrase includes:
semantically matching the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information that is semantically identical or similar to the chat phrase.
A7, the method according to any one of A1-A6, further including:
when labeling the one or more expressions in the base expression set, annotating each expression with a synthesis region.
A8, the method according to any one of A1-A7, wherein synthesizing the expression to be synthesized with the chat phrase to generate the chat expression includes:
acquiring the synthesis region annotated for the expression to be synthesized;
synthesizing the chat phrase into the synthesis region to generate the chat expression.
A9, the method according to any one of A1-A8, further including:
when labeling the one or more expressions in the base expression set, annotating each expression with a synthesis type.
A10, the method according to any one of A1-A9, wherein synthesizing the expression to be synthesized with the chat phrase to generate the chat expression includes:
acquiring the synthesis region and synthesis type annotated for the expression to be synthesized;
converting the chat phrase into the synthesis type and synthesizing it into the synthesis region to generate the chat expression.
A11, the method according to any one of A1-A10, wherein, if the synthesis type is a picture, converting the chat phrase into the synthesis type and synthesizing it into the synthesis region includes:
generating a picture containing the chat phrase;
scaling the picture according to the size of the synthesis region to obtain a scaled picture;
synthesizing the scaled picture into the synthesis region.
In another aspect, embodiments of the present invention further provide B12, a device for generating chat expressions in real time, including:
a labeling module, adapted to label one or more expressions in a base expression set, annotating each expression with theme-related information;
a matching module, adapted to match, upon receiving a chat phrase from a user, the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information matching the chat phrase;
a generation module, adapted to take the expression corresponding to the theme-related information matching the chat phrase as the expression to be synthesized, synthesize it with the chat phrase to generate a chat expression, and provide the chat expression to the user.
B13, the device according to B12, wherein the matching module includes:
a determining unit, adapted to determine the classification of the chat phrase;
a lookup unit, adapted to look up, in a pre-built correspondence between classifications and category attributes, the category attribute corresponding to the classification of the chat phrase;
an acquiring unit, adapted to acquire the category attribute value corresponding to the category attribute of the chat phrase;
a matching unit, adapted to match the acquired category attribute value against the annotated theme-related information of each expression.
B14, the device according to B12 or B13, wherein the determining unit is further adapted to:
acquire a pre-built word-classification dictionary, in which correspondences between words and classifications are stored;
look up the classification of the chat phrase in the word-classification dictionary.
B15, the device according to any one of B12-B14, further including:
a construction module, adapted to acquire one or more classifications to be built, enumerate the words of each classification, and build the word-classification dictionary based on each classification and its words.
B16, the device according to any one of B12-B15, wherein the classification includes at least one of the following:
idioms, verses, songs, films, books.
B17, the device according to any one of B12-B16, wherein the matching module is further adapted to:
semantically match the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information that is semantically identical or similar to the chat phrase.
B18, the device according to any one of B12-B17, wherein the labeling module is further adapted to:
when labeling the one or more expressions in the base expression set, annotate each expression with a synthesis region.
B19, the device according to any one of B12-B18, wherein the generation module is further adapted to:
acquire the synthesis region annotated for the expression to be synthesized;
synthesize the chat phrase into the synthesis region to generate the chat expression.
B20, the device according to any one of B12-B19, wherein the labeling module is further adapted to:
when labeling the one or more expressions in the base expression set, annotate each expression with a synthesis type.
B21, the device according to any one of B12-B20, wherein the generation module is further adapted to:
acquire the synthesis region and synthesis type annotated for the expression to be synthesized;
convert the chat phrase into the synthesis type and synthesize it into the synthesis region to generate the chat expression.
B22, the device according to any one of B12-B21, wherein the generation module is further adapted to:
if the synthesis type is a picture, generate a picture containing the chat phrase;
scale the picture according to the size of the synthesis region to obtain a scaled picture;
synthesize the scaled picture into the synthesis region.

Claims (10)

1. A method for generating chat expressions in real time, including:
labeling one or more expressions in a base expression set, annotating each expression with theme-related information;
upon receiving a chat phrase from a user, matching the chat phrase against the annotated theme-related information of each expression to obtain the theme-related information matching the chat phrase;
taking the expression corresponding to the theme-related information matching the chat phrase as the expression to be synthesized, synthesizing it with the chat phrase to generate a chat expression, and providing the chat expression to the user.
2. The method according to claim 1, wherein matching the chat phrase against the annotated theme-related information of each expression includes:
determining the classification of the chat phrase;
looking up, in a pre-built correspondence between classifications and category attributes, the category attribute corresponding to the classification of the chat phrase;
acquiring the category attribute value corresponding to the category attribute of the chat phrase;
matching the acquired category attribute value against the annotated theme-related information of each expression.
3. The method according to claim 1 or 2, wherein determining the classification of the chat phrase includes:
acquiring a pre-built word-classification dictionary, in which correspondences between words and classifications are stored;
looking up the classification of the chat phrase in the word-classification dictionary.
4. The method according to any one of claims 1-3, wherein the word-classification dictionary is built by the following steps:
acquiring one or more classifications to be built;
enumerating the words of each classification;
building the word-classification dictionary based on each classification and its words.
5. The method according to any one of claims 1-4, wherein the classification includes at least one of the following:
idioms, verses, songs, films, books.
6. The method according to any one of claims 1-5, wherein matching the chat phrase against the annotated theme-related information of each expression to obtain the theme-related information matching the chat phrase includes:
semantically matching the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information that is semantically identical or similar to the chat phrase.
7. The method according to any one of claims 1-6, further including:
when labeling the one or more expressions in the base expression set, annotating each expression with a synthesis region.
8. The method according to any one of claims 1-7, wherein synthesizing the expression to be synthesized with the chat phrase to generate the chat expression includes:
acquiring the synthesis region annotated for the expression to be synthesized;
synthesizing the chat phrase into the synthesis region to generate the chat expression.
9. The method according to any one of claims 1-8, further including:
when labeling the one or more expressions in the base expression set, annotating each expression with a synthesis type.
10. A device for generating chat expressions in real time, including:
a labeling module, adapted to label one or more expressions in a base expression set, annotating each expression with theme-related information;
a matching module, adapted to match, upon receiving a chat phrase from a user, the chat phrase against the annotated theme-related information of each expression, obtaining the theme-related information matching the chat phrase;
a generation module, adapted to take the expression corresponding to the theme-related information matching the chat phrase as the expression to be synthesized, synthesize it with the chat phrase to generate a chat expression, and provide the chat expression to the user.
CN201610509363.7A 2016-06-30 2016-06-30 Generate the method and device of chatting facial expression in real time Pending CN106126709A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610509363.7A CN106126709A (en) 2016-06-30 2016-06-30 Generate the method and device of chatting facial expression in real time
PCT/CN2017/080821 WO2018000910A1 (en) 2016-06-30 2017-04-17 Method and device for generating chat expressions in real time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610509363.7A CN106126709A (en) 2016-06-30 2016-06-30 Generate the method and device of chatting facial expression in real time

Publications (1)

Publication Number Publication Date
CN106126709A true CN106126709A (en) 2016-11-16

Family

ID=57468798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610509363.7A Pending CN106126709A (en) 2016-06-30 2016-06-30 Generate the method and device of chatting facial expression in real time

Country Status (2)

Country Link
CN (1) CN106126709A (en)
WO (1) WO2018000910A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018000910A1 (en) * 2016-06-30 2018-01-04 北京奇虎科技有限公司 Method and device for generating chat expressions in real time
CN107977928A (en) * 2017-12-21 2018-05-01 广东欧珀移动通信有限公司 Expression generation method, apparatus, terminal and storage medium
CN108322800A (en) * 2017-01-18 2018-07-24 阿里巴巴集团控股有限公司 Caption information processing method and processing device
CN108334388A (en) * 2017-07-28 2018-07-27 腾讯科技(深圳)有限公司 Session background process method, apparatus, computer equipment and storage medium
CN109088811A (en) * 2018-06-25 2018-12-25 维沃移动通信有限公司 A kind of method for sending information and mobile terminal
CN109710753A (en) * 2018-12-29 2019-05-03 北京金山安全软件有限公司 Method and device for generating shortcut information based on personalized theme and electronic equipment
CN109741423A (en) * 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Expression packet generation method and system
CN110221710A (en) * 2019-05-29 2019-09-10 北京金山安全软件有限公司 Keyboard input method and device, electronic equipment and storage medium
CN110221711A (en) * 2019-05-29 2019-09-10 北京金山安全软件有限公司 Keyboard input method and device, electronic equipment and storage medium
CN110442247A (en) * 2019-06-20 2019-11-12 上海萌家网络科技有限公司 A kind of input method and input system
CN110442248A (en) * 2019-06-20 2019-11-12 上海萌家网络科技有限公司 A kind of input method and input system
CN110837307A (en) * 2019-09-29 2020-02-25 上海萌家网络科技有限公司 Input method and system thereof
CN110908525A (en) * 2019-09-29 2020-03-24 上海萌家网络科技有限公司 Input method, client side thereof and method for providing candidate pictures/videos
CN110909194A (en) * 2019-09-29 2020-03-24 上海萌家网络科技有限公司 Input method and system thereof
CN110968204A (en) * 2019-09-29 2020-04-07 上海萌家网络科技有限公司 Input method and system thereof
WO2020221104A1 (en) * 2019-04-30 2020-11-05 上海连尚网络科技有限公司 Emoji packet presentation method and equipment
CN112799748A (en) * 2020-08-05 2021-05-14 腾讯科技(深圳)有限公司 Expression element display method, device and equipment and computer readable storage medium
CN113302659A (en) * 2019-01-18 2021-08-24 斯纳普公司 System and method for generating personalized video with customized text messages
CN114553810A (en) * 2022-02-22 2022-05-27 广州博冠信息科技有限公司 Expression picture synthesis method and device and electronic equipment
CN115190366A (en) * 2022-07-07 2022-10-14 北京字跳网络技术有限公司 Information display method and device, electronic equipment and computer readable medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920630B (en) * 2018-06-29 2023-03-21 联想(北京)有限公司 Data processing method and electronic equipment
CN114531406A (en) * 2021-12-30 2022-05-24 北京达佳互联信息技术有限公司 Interface display method and device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100138503A1 (en) * 2008-12-02 2010-06-03 International Business Machines Corporation Exchanging messages during a chat session
CN102662961A (en) * 2012-03-08 2012-09-12 北京百舜华年文化传播有限公司 Method, apparatus and terminal unit for matching semantics with image
US20130346430A1 (en) * 2012-06-21 2013-12-26 United Video Properties, Inc. Systems and methods for navigating to content without an advertisement
CN105574177A (en) * 2015-12-21 2016-05-11 北京奇虎科技有限公司 Method for presenting search result and display equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126709A (en) * 2016-06-30 2016-11-16 北京奇虎科技有限公司 Generate the method and device of chatting facial expression in real time


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018000910A1 (en) * 2016-06-30 2018-01-04 北京奇虎科技有限公司 Method and device for generating chat expressions in real time
CN108322800A (en) * 2017-01-18 2018-07-24 阿里巴巴集团控股有限公司 Caption information processing method and processing device
CN108334388A (en) * 2017-07-28 2018-07-27 腾讯科技(深圳)有限公司 Session background process method, apparatus, computer equipment and storage medium
CN107977928A (en) * 2017-12-21 2018-05-01 广东欧珀移动通信有限公司 Expression generation method, apparatus, terminal and storage medium
CN107977928B (en) * 2017-12-21 2022-04-19 Oppo广东移动通信有限公司 Expression generation method and device, terminal and storage medium
CN109088811A (en) * 2018-06-25 2018-12-25 维沃移动通信有限公司 A kind of method for sending information and mobile terminal
CN109741423A (en) * 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Expression packet generation method and system
CN109710753B (en) * 2018-12-29 2021-08-03 北京金山安全软件有限公司 Method and device for generating shortcut information based on personalized theme and electronic equipment
CN109710753A (en) * 2018-12-29 2019-05-03 北京金山安全软件有限公司 Method and device for generating shortcut information based on personalized theme and electronic equipment
CN113302659A (en) * 2019-01-18 2021-08-24 斯纳普公司 System and method for generating personalized video with customized text messages
WO2020221104A1 (en) * 2019-04-30 2020-11-05 上海连尚网络科技有限公司 Emoji packet presentation method and equipment
CN110221710A (en) * 2019-05-29 2019-09-10 北京金山安全软件有限公司 Keyboard input method and device, electronic equipment and storage medium
CN110221711A (en) * 2019-05-29 2019-09-10 北京金山安全软件有限公司 Keyboard input method and device, electronic equipment and storage medium
CN110442248A (en) * 2019-06-20 2019-11-12 上海萌家网络科技有限公司 Input method and input system
CN110442247A (en) * 2019-06-20 2019-11-12 上海萌家网络科技有限公司 Input method and input system
CN110909194A (en) * 2019-09-29 2020-03-24 上海萌家网络科技有限公司 Input method and system
CN110968204A (en) * 2019-09-29 2020-04-07 上海萌家网络科技有限公司 Input method and system
CN110908525A (en) * 2019-09-29 2020-03-24 上海萌家网络科技有限公司 Input method, client thereof, and method for providing candidate pictures/videos
CN110837307A (en) * 2019-09-29 2020-02-25 上海萌家网络科技有限公司 Input method and system
CN112799748A (en) * 2020-08-05 2021-05-14 腾讯科技(深圳)有限公司 Expression element display method, device and equipment and computer readable storage medium
CN112799748B (en) * 2020-08-05 2024-03-01 腾讯科技(深圳)有限公司 Expression element display method, device, equipment and computer readable storage medium
CN114553810A (en) * 2022-02-22 2022-05-27 广州博冠信息科技有限公司 Expression picture synthesis method, device, and electronic equipment
CN115190366A (en) * 2022-07-07 2022-10-14 北京字跳网络技术有限公司 Information display method and device, electronic equipment and computer readable medium
CN115190366B (en) * 2022-07-07 2024-03-29 北京字跳网络技术有限公司 Information display method, device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
WO2018000910A1 (en) 2018-01-04

Similar Documents

Publication Publication Date Title
CN106126709A (en) Method and device for generating chat expressions in real time
Narayan et al. Planning with learned entity prompts for abstractive summarization
Kumar et al. Data augmentation using pre-trained transformer models
Jauhiainen et al. Metaverse and sustainability: Systematic review of scientific publications until 2022 and beyond
CN102262632B (en) Method and system for processing text
Varitimiadis et al. Graph-based conversational AI: Towards a distributed and collaborative multi-chatbot approach for museums
Efthimiou et al. Dicta-sign–sign language recognition, generation and modelling: a research effort with applications in deaf communication
Alonso-Martín et al. Augmented robotics dialog system for enhancing human–robot interaction
CN110489087A (en) Method, apparatus, medium, and electronic device for generating a fractal structure
Almeida et al. The AM4I architecture and framework for multimodal interaction and its application to smart environments
Zhao et al. Enhancing the appreciation of traditional Chinese painting using interactive technology
CN110032736A (en) Text analysis method, apparatus, and storage medium
Lee et al. Gemini pro defeated by gpt-4v: Evidence from education
Vichyaloetsiri et al. Web service framework to translate text into sign language
Wolf A cognitive linguistic approach to the cultures of World Englishes: The emergence of a new model
Vázquez et al. Dialogue management and language generation for a robust conversational virtual coach: Validation and user study
Sawai et al. Sentence augmentation for language translation using gpt-2
Neves et al. HamNoSyS2SiGML: translating HamNoSys into SiGML
Das Chakladar et al. 3d avatar approach for continuous sign movement using speech/text
Alruily ArRASA: channel optimization for deep learning-based arabic nlu chatbot framework
Yipu The function development of network teaching system to English pronunciation and tone in the background of internet of things
Aysa et al. Language identification-based evaluation of single channel speech separation of overlapped speeches
Lohar et al. Building Neural Machine Translation Systems for Multilingual Participatory Spaces
Krapež et al. Synthesis of the sign language of the deaf from the sign video clips
Kang et al. The competitive advantage of the Indian and Korean film industries: An empirical analysis using natural language processing methods

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161116
