CN110209774A - Method, apparatus and terminal device for processing session information - Google Patents
Method, apparatus and terminal device for processing session information
- Publication number
- CN110209774A CN110209774A CN201810142498.3A CN201810142498A CN110209774A CN 110209774 A CN110209774 A CN 110209774A CN 201810142498 A CN201810142498 A CN 201810142498A CN 110209774 A CN110209774 A CN 110209774A
- Authority
- CN
- China
- Prior art keywords
- information
- user
- session
- target
- source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
Abstract
An embodiment of the invention provides a method, an apparatus and a terminal device for processing session information. The method for processing session information comprises: obtaining auxiliary information and source session information input by a first user; and, based on the auxiliary information and the source session information, generating target session information and outputting the target session information. With the technical solution of the embodiment of the invention, the generation of the target session information depends not only on the source session information input by the first user but also takes full account of the obtained auxiliary information, so that the generated target session information both satisfies users' diverse demands and ensures that the user can understand the nouns or things in the generated sentence, thereby improving the user experience.
Description
Technical field
The present invention relates to the technical field of session information processing, and in particular to a method, an apparatus and a terminal device for processing session information.
Background art
Natural language generation is a branch of artificial intelligence and computational linguistics. It is a computer model based on language information processing, and its working process is to generate text, starting from an abstract concept level, by selecting and executing certain semantic and syntactic rules.
Existing natural language generation technology is mainly applied in question answering systems, where the task scenario is: given a question, automatically generate the corresponding answer. For example, in the prior art, when someone asks "What are fun places in Beijing?", the question answering system automatically gives fun places in Beijing. Depending on the generation approach, existing natural language generation technology can be divided into the following two kinds: (1) template-based generation: through semantic analysis, the logical expression of the question is obtained and matched against templates pre-defined with logical expressions, and the corresponding answer is retrieved from a database; (2) deep-learning-based generation: in the training stage, a sequence-to-sequence generation model is trained on an existing question-answer corpus; in the prediction stage, given a question sentence, an answer sentence is generated by the trained generation model.
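The template-based approach (1) above can be sketched minimally as follows. The question pattern, logical-expression tuples, template table and database contents here are all invented for illustration and are not part of the patent:

```python
# Hypothetical sketch of template-based QA generation: map a question to a
# logical expression, match a pre-defined template, and look the answer up
# in a small in-memory "database". All names and data are illustrative.
import re

# Pre-defined templates: (predicate, topic) -> answer query key
TEMPLATES = {
    ("find_place", "fun"): "fun_places",
}

# Toy database of answers
DATABASE = {
    ("Beijing", "fun_places"): "Happy Valley and Shichahai are fun places in Beijing.",
}

def semantic_analysis(question: str):
    """Very crude 'semantic analysis': extract a (predicate, topic, city) triple."""
    m = re.match(r"Where are (\w+) places in (\w+)\?", question)
    if m:
        return ("find_place", m.group(1), m.group(2))
    return None

def answer(question: str) -> str:
    logical = semantic_analysis(question)
    if logical is None:
        return "Sorry, no matching template."
    predicate, topic, city = logical
    query_key = TEMPLATES.get((predicate, topic))
    if query_key is None:
        return "Sorry, no matching template."
    return DATABASE.get((city, query_key), "No answer found.")

print(answer("Where are fun places in Beijing?"))
```

A real system would replace the regular expression with genuine semantic parsing and the dictionaries with a template store and a database, but the control flow is the same.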
Although natural language generation in the prior art can meet people's basic needs to a certain extent, its interaction object is single, usually simple question answering between a machine and a user, which cannot satisfy users' diverse demands. Moreover, the generated sentence relies entirely on the currently given context, without considering whether the user understands the nouns or things in the generated sentence, resulting in a poor user experience.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical deficiencies, in particular the deficiency of being unable to satisfy users' diverse demands.
According to one aspect, an embodiment of the present invention provides a method for processing session information, comprising:
obtaining auxiliary information and source session information input by a first user; and
based on the auxiliary information and the source session information, generating target session information and outputting the target session information.
According to another aspect, an embodiment of the present invention further provides an apparatus for processing session information, comprising:
an information obtaining module, configured to obtain auxiliary information and source session information input by a first user; and
a session generation module, configured to generate target session information based on the auxiliary information and the source session information, and to output the target session information.
According to yet another aspect, an embodiment of the present invention further provides a terminal device, comprising a memory and a processor, the memory storing computer-executable instructions which, when executed by the processor, perform the above method for processing session information.
Compared with the prior art, the method for processing session information provided by the embodiments of the present invention obtains auxiliary information and source session information input by a first user, and then, based on the auxiliary information and the source session information, generates and outputs target session information. In the technical solution of the embodiments of the present invention, the generation of the target session information depends not only on the source session information input by the first user but also takes full account of the obtained auxiliary information, so that the generated target session information both satisfies users' diverse demands and ensures that the user can understand the nouns or things in the generated sentence, thereby improving the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become obvious from that description, or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become obvious and readily understood from the following description of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram of the method for processing session information of the embodiment of the present invention;
Fig. 2 is a generation flow chart of the session information of the embodiment of the present invention;
Fig. 3 is an overall operation structure diagram of the automatic sentence generation model of the embodiment of the present invention;
Fig. 4 is a work flow diagram of the automatic sentence generation system of the embodiment of the present invention;
Fig. 5 is a schematic diagram of the single-language automatic sentence generation system of the embodiment of the present invention;
Fig. 6 is a schematic diagram of the multilingual automatic sentence generation system of the embodiment of the present invention;
Fig. 7 is a schematic diagram of the automatic sentence generation system loaded with user information of the embodiment of the present invention;
Fig. 8 is a schematic diagram of the abbreviation-based automatic sentence generation system of the embodiment of the present invention;
Fig. 9 is a schematic diagram of the analogy translation system of the embodiment of the present invention in an entity-analogy application scenario;
Fig. 10 is a schematic diagram of the analogy translation system of the embodiment of the present invention in another entity-analogy application scenario;
Fig. 11 is a schematic diagram of an event-analogy application scenario of the analogy translation system of the embodiment of the present invention;
Fig. 12 is a schematic diagram of an entity-and-event-analogy application scenario of the analogy translation system of the embodiment of the present invention;
Fig. 13 is a signaling flow chart of the user context modification triggered by the centralized unit of the embodiment of the present invention;
Fig. 14 is an overall flow chart of the cross-language cross-domain translation system of the embodiment of the present invention;
Fig. 15 is a schematic diagram of the cross-language cross-domain translation system of the embodiment of the present invention, from a specialized-professional-field form of expression to a general form of expression;
Fig. 16 is a schematic diagram of the cross-language cross-domain translation system of the embodiment of the present invention, from a general form of expression to a specialized-professional-field form of expression;
Fig. 17 is an overall flow chart of the cross-language cross-cultural-background translation system of the embodiment of the present invention;
Fig. 18 is a schematic diagram of the cross-language cross-cultural-background translation system of the embodiment of the present invention, from a specialized-cultural-background form of expression to a general form of expression;
Fig. 19 is a schematic diagram of the cross-language cross-cultural-background translation system with picture output of the embodiment of the present invention, from a specialized-cultural-background form of expression to a general form of expression;
Fig. 20 is an overall flow chart of the picture recommender system of the embodiment of the present invention;
Fig. 21 is an application schematic diagram of the picture recommender system of the embodiment of the present invention;
Fig. 22 is a structural schematic diagram of the apparatus for processing session information of the embodiment of the present invention;
Fig. 23 is a block diagram of a computing system that can be used to implement the apparatus for processing session information disclosed in the embodiments of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the wording "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Furthermore, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The wording "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have a meaning consistent with their meaning in the context of the prior art, and will not be interpreted in an idealized or overly formal sense unless specifically so defined herein.
Those skilled in the art will appreciate that "terminal" and "terminal device" as used herein include both devices with a wireless signal receiver, which have only a wireless signal receiver without transmitting capability, and devices with receiving and transmitting hardware, which can carry out two-way communication over a bidirectional communication link. Such a device may include: a cellular or other communication device, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, fax and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver. "Terminal" and "terminal device" as used herein may be portable, transportable, installed in a vehicle (aviation, maritime and/or land), or suitable for and/or configured to run locally and/or to run in distributed form at any other location on the earth and/or in space. "Terminal" and "terminal device" as used herein may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playback function, or a device such as a smart television or a set-top box.
Existing natural language generation technology has the following disadvantages:
1) The interaction object is single, usually between a machine and a user, i.e., how to make the machine understand natural language (human language) and how to make people understand machine-coded language. However, for language generation and understanding, the real situation is much more complex, especially when people communicate with each other directly or indirectly through devices: a person may be unable to understand the other party, or to convey his or her own ideas, because of his or her individual situation or surrounding environment. For example, beginners in a foreign language, patients with speech disorders, and people in very busy and urgent circumstances all require very intelligent and humanized language generation technology to help them communicate quickly and easily. Existing technology does not consider the multi-party language environment among multiple users, or among multiple users and devices, and therefore cannot handle the complex situations described above.
For example, in the prior art, when the questioning party raises an open question such as "What shall we do on Saturday?", the existing question answering system cannot generate an answer from keywords provided by the answering party.
2) The generated sentence relies entirely on the currently given context, without considering whether the user understands the nouns or things in the generated sentence.
For example, suppose user C is completely unfamiliar with graphics cards. When user A and user B discuss a graphics-card topic, the prior art cannot, from the dialogue between user A and user B, generate a sentence that enables user C to understand their dialogue.
3) The different fields that user A and user B are good at are not considered when generating a sentence; the generated sentence should also change according to the requirements of the output field.
For example, suppose user A is a game player who loves using gaming jargon, while user B has barely touched games and knows almost nothing about gaming jargon. When user A says to user B "Eat chicken tonight? Fly with you!" (meaning "Shall we play games together tonight? I will win together with you!"), user B cannot understand user A's words, and the prior art cannot help user B understand them either.
4) The differences in the cultural backgrounds of different users are not considered.
For example, suppose user A knows American slang inside out and is familiar with American culture, while user B understands little of it. When user A says to user B "In the electronics industry, Samsung can be considered the 800-pound gorilla.", the original meaning is "Samsung possesses very powerful strength in the electronics industry", but user B can hardly understand this sentence, and the prior art cannot help user B understand it either.
5) Processing is limited to text information; multi-modal information such as images is not considered.
To remedy the above deficiencies in existing natural language generation technology, an embodiment of the present invention provides a method for processing session information. In this solution, when generating the target session information, the method depends not only on the source session information input by the first user but also takes full account of the obtained auxiliary information. On the one hand, this helps extend the interaction objects of language generation, for example to a user–terminal device–user interactive form, no longer limited to between a machine and a user. On the other hand, the generated natural language no longer depends only on the currently given context but also fully considers the auxiliary information, enabling users to understand the generated natural language sentence. This enhances the practicality of language generation, builds a bridge between people for whom normal communication is inconvenient, and converts language that is not readily understandable into straightforward natural language, thereby removing the user's obstacles to understanding during communication.
The method for processing session information implemented by the present invention, as shown in Fig. 1, comprises: step 110, obtaining auxiliary information and source session information input by a first user; and step 120, based on the auxiliary information and the source session information, generating target session information and outputting the target session information.
The method for processing session information provided by the embodiments of the present invention obtains auxiliary information and source session information input by a first user, and then, based on the auxiliary information and the source session information, generates and outputs target session information. In this technical solution, the generation of the target session information depends not only on the source session information input by the first user but also takes full account of the obtained auxiliary information, so that the generated target session information both satisfies users' diverse demands and ensures that the user can understand the nouns or things in the generated sentence, thereby improving the user experience.
Specifically, the auxiliary information in steps 110 and 120 includes at least one of: historical session information, user information of the first user, user information of a second user, and information domain, where the information domain includes the domain to which the source information belongs and the domain to which the target information belongs.
Further, the user information includes at least one of the following: user attribute information; user preference information; user calendar arrangement information; user position information; user behavior information; user device information.
Further, the information domain includes at least one of the following: language category information; professional field information; cultural background information.
Further, the domain of the source information may be obtained by detecting the source session information, or may be set; the domain of the target information may be obtained by detecting the historical session information, or may be set.
Specifically, the source session information in steps 110 and 120 includes at least one of the following: an abbreviation, an incomplete word, a natural language sentence, and picture selection information.
Specifically, generating target session information based on the auxiliary information and the source session information in step 120 specifically includes at least one of step 1201 (not shown) to step 1204 (not shown):
Step 1201: extracting session information keywords from the source session information and the historical session information between the first user and the second user, and generating the target session information according to the session information keywords.
Step 1202: extracting session information keywords from the source session information and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user, and generating the target session information according to the session information keywords and the user information keywords.
Step 1203: according to the information domain, translating the source session information in the domain of the source information into session information in the domain of the target information; extracting session information keywords from the session information in the domain of the target information and the historical session information between the first user and the second user; and generating the target session information according to the session information keywords.
Step 1204: according to the information domain, translating the source session information in the domain of the source information into session information in the domain of the target information; extracting session information keywords from the session information in the domain of the target information and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user; and generating the target session information according to the session information keywords and the user information keywords.
Further, the step of generating the target session information according to the session information keywords in steps 1201 and 1203 includes: according to the session information keywords, generating the target session information based on a pre-trained sentence generation model. The step of generating the target session information according to the session information keywords and the user information keywords in steps 1202 and 1204 includes: according to the session information keywords and the user information keywords, generating the target session information based on a pre-trained sentence generation model.
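The keyword-extraction-then-generation flow of step 1201 can be sketched as follows. A fixed template stands in for the pre-trained sentence generation model, and the stop-word list is an invented assumption, so this shows only the shape of the flow, not the patent's actual model:

```python
# Minimal sketch of step 1201: extract session-information keywords from the
# user's input (the history would be mined the same way), then generate the
# target sentence. A template is a stand-in for the trained generation model.
STOP_WORDS = {"we", "can", "go", "to", "then", "the", "a", "and", "play"}

def extract_keywords(sentences):
    """Crude keyword extraction: everything that is not a stop word."""
    keywords = []
    for s in sentences:
        cleaned = s.lower().replace(",", " ").replace("?", " ").replace(".", " ")
        for w in cleaned.split():
            if w not in STOP_WORDS and w not in keywords:
                keywords.append(w)
    return keywords

def generate_target(keywords):
    # Stand-in for the pre-trained sentence generation model.
    return "How about {}?".format(" and then ".join(keywords))

kws = extract_keywords(["basketball, movie"])  # source session information
print(generate_target(kws))
```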
Specifically, generating target session information based on the auxiliary information and the source session information in step 120 specifically includes at least one of step 1205 (not shown) to step 1207 (not shown):
Step 1205: according to the source session information, obtaining source object information, and classifying the source object information to obtain source category information; according to the user information of the first user, obtaining candidate target category information; according to the similarity between the source category information and the candidate target category information, obtaining target category information; obtaining candidate target object information according to the target category information; according to the similarity between the source object information and the candidate target object information, obtaining target object information; and generating the target session information according to the target object information.
Step 1206: according to the source session information and the historical session information between the first user and one or more second users, obtaining source object information; classifying the source object information to obtain source category information; according to the user information of the first user, obtaining candidate target category information; according to the similarity between the source category information and the candidate target category information, obtaining target category information; obtaining candidate target object information according to the target category information; according to the similarity between the source object information and the candidate target object information, obtaining target object information; and generating the target session information according to the target object information.
Step 1207: according to the information domain, translating the source session information in the domain of the source information into session information in the domain of the target information; according to the session information in the domain of the target information and the historical session information between the first user and one or more second users, obtaining source object information; classifying the source object information to obtain source category information; according to the user information of the first user, obtaining candidate target category information; according to the similarity between the source category information and the candidate target category information, obtaining target category information; obtaining candidate target object information according to the target category information; according to the similarity between the source object information and the candidate target object information, obtaining target object information; and generating the target session information according to the target object information.
Here, an object includes an entity and/or an event.
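The category-then-object similarity chain in steps 1205–1207 can be sketched as follows. Toy two-dimensional "embeddings" and cosine similarity stand in for the real semantic representations; every vector, category and object name is invented for illustration:

```python
# Hedged sketch of the object-analogy flow: classify the source object, pick
# the candidate category (from the user's profile) most similar to the source
# category, then pick the most similar object inside that category.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

EMBED = {
    # categories (toy 2-d semantic vectors)
    "gpu_brands": (0.8, 0.9),
    "cars": (0.6, 0.9),
    "kitchen_appliances": (0.9, 0.2),
    # objects
    "nvidia": (0.9, 0.9),
    "ferrari": (0.95, 0.85),
    "fiat": (0.2, 0.3),
}

CATEGORY_OBJECTS = {"cars": ["ferrari", "fiat"], "kitchen_appliances": []}

def analogy(source_object, source_category, candidate_categories):
    # target category = candidate most similar to the source category
    target_cat = max(candidate_categories,
                     key=lambda c: cosine(EMBED[c], EMBED[source_category]))
    # target object = object in that category most similar to the source object
    target_obj = max(CATEGORY_OBJECTS[target_cat],
                     key=lambda o: cosine(EMBED[o], EMBED[source_object]))
    return target_cat, target_obj

print(analogy("nvidia", "gpu_brands", ["cars", "kitchen_appliances"]))
```

With these toy vectors the analogy maps "nvidia" in "gpu_brands" onto "ferrari" in "cars", which is the kind of mapping the analogy translation system could then verbalize as a target sentence for a user who knows cars but not graphics cards.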
Specifically, generating target session information based on the auxiliary information and the source session information in step 120 specifically includes step 1208 (not shown): according to the information domain, translating the source session information in the domain of the source information into target session information in the domain of the target information.
Further, step 1208 specifically includes: according to the information domain, based on corresponding pre-trained translation models, performing translation processing on the source session information in sequential order for at least one of language category, professional field and cultural background, to obtain the target session information.
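The staged translation of step 1208 can be sketched as a pipeline of stages applied in order. Dictionary lookups stand in for the pre-trained language, professional-field and cultural-background translation models, and all entries are invented (the example reuses the gaming-jargon "eat chicken" phrase from the background section):

```python
# Sketch of step 1208: pass the source session information through language,
# professional-field and cultural-background stages in sequential order.
# Each table is a stand-in for the corresponding pre-trained translation model.
LANGUAGE_STAGE = {"今晚吃鸡": "winner winner chicken dinner tonight"}
DOMAIN_STAGE = {"winner winner chicken dinner tonight":
                "play the battle-royale game and win tonight"}
CULTURE_STAGE = {}  # no cultural rewriting needed in this example

def run_stage(table, text):
    return table.get(text, text)  # pass through when the stage has no rule

def translate(source, stages):
    for table in stages:
        source = run_stage(table, source)
    return source

print(translate("今晚吃鸡", [LANGUAGE_STAGE, DOMAIN_STAGE, CULTURE_STAGE]))
```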
Further, the method further includes step 130 (not shown): according to the semantic similarity between the source session information and/or the target session information and candidate pictures, obtaining a target picture corresponding to the source session information and/or the target session information, and outputting the target picture.
Specifically, generating target session information based on the auxiliary information and the source session information in step 120 specifically includes at least one of step 1209 (not shown) and step 1210 (not shown):
Step 1209: according to the picture selection information input by the first user and the historical session information between the first user and the second user, obtaining session inference information; according to the semantic similarity between the session inference information and candidate pictures, obtaining a target picture from the candidate pictures, and taking the target picture as the target session information.
Step 1210: according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user, obtaining session inference information; according to the semantic similarity between the session inference information and candidate pictures, obtaining a target picture from the candidate pictures, and taking the target picture as the target session information.
Further, obtaining the session inference information according to the picture selection information input by the first user and the historical session information between the first user and the second user in step 1209 specifically includes: according to the picture selection information input by the first user and the historical session information between the first user and the second user, obtaining the session inference information based on a pre-trained dialogue understanding model.
Further, obtaining the session inference information according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user in step 1210 specifically includes: according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user, obtaining the session inference information based on a pre-trained dialogue understanding model.
Here, the session inference information includes the session content intended by the first user and/or the session mood intended by the first user.
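The picture-recommendation flow of step 1209 can be sketched as follows. Word-overlap (Jaccard) similarity stands in for the semantic-similarity model, and a naive "echo the last turn" heuristic stands in for the pre-trained dialogue understanding model; the candidate pictures and their descriptions are invented:

```python
# Hedged sketch of step 1209: infer what the user intends to say from the
# conversation history, then pick the candidate picture whose description
# is most similar to the inferred session content.
def words(text):
    return set(text.lower().replace("?", "").split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def infer_session(history):
    # Stand-in for the pre-trained dialogue understanding model:
    # naively assume the intended content echoes the last turn.
    return history[-1]

CANDIDATE_PICTURES = {
    "basketball.png": "two friends playing basketball at a court",
    "movie.png": "people watching a movie in a cinema",
}

def recommend(history):
    guess = words(infer_session(history))
    return max(CANDIDATE_PICTURES,
               key=lambda p: jaccard(guess, words(CANDIDATE_PICTURES[p])))

print(recommend(["What shall we do on Saturday?", "Let's go play basketball"]))
```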
As can be seen from the foregoing description, the method for processing session information implemented by the present invention includes, but is not limited to, the content of the following five aspects: A, an automatic sentence generation system, based on keywords, for sentences in natural language expression form; B, an analogy translation system; C, a cross-language cross-domain translation system; D, a cross-language cross-cultural-background translation system; E, a picture recommender system. The generation flow chart of the session information covering, but not limited to, the above five aspects is shown in Fig. 2.
The content of the above five aspects is described in detail below with reference to specific embodiments. In the following description, the sentence in natural language expression form is the target session information, the source user is the above first user, the target user is the above second user, the source user information is the user information of the above first user, and the target user information is the user information of the above second user.
A. Keyword-based automatic generation system for natural-language sentences (hereinafter referred to as the sentence auto-generation system). In existing natural-language generation technology, sentence generation is confined to the session information (i.e., natural language) exchanged between a user and a device. The present embodiment proposes a method that acts on user-device-user communication and automatically generates natural-language sentences from externally provided context information (i.e., session information between users, including the historical session information and the source session information input by the first user). The method includes: extracting session information keywords from the context information, generating a natural-language sentence (i.e., the target session information) from the session information keywords, and feeding it back to the target user. Further, source user information and target user information may also be obtained, and the natural-language sentence may be generated from the context information together with the target user information and the source user information and fed back to the target user. Further, the session information keywords input by the source user may be translated into session information keywords in the language type of the target user, and the translated keywords in the target user's language type may then be used to generate the natural-language sentence fed back to the target user.
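The overall flow just described (extract keywords from the context information, optionally translate them into the target user's language type, generate the target session information, and feed it back) can be sketched roughly as follows. All function bodies are toy placeholders invented for illustration; the patent's actual system uses a pretrained neural sentence auto-generation model rather than string templates.

```python
# Illustrative sketch of the keyword -> sentence pipeline described above.
# Every component here is a simplified stand-in, not the patent's implementation.

def extract_keywords(context_info):
    """Pull session-information keywords out of the context information."""
    stopwords = {"we", "can", "go", "to", "play", "the", "a", "and", "then"}
    words = [w.strip(".,?!").lower() for w in context_info.split()]
    return [w for w in words if w and w not in stopwords]

def translate_keywords(keywords, lexicon):
    """Translate source-language keywords into the target user's language type."""
    return [lexicon.get(k, k) for k in keywords]

def generate_sentence(keywords):
    """Stand-in for the pretrained sentence auto-generation model."""
    return "We can go to " + " and ".join(keywords) + "."

# Source session info provided by the first user, fed back to the second user.
lexicon = {}  # identity translation here; a real system loads a translation model
keywords = extract_keywords("play basketball and then cinema")
sentence = generate_sentence(translate_keywords(keywords, lexicon))
```

The same three stages (extraction, optional domain/language unification, generation) recur in each of the embodiments below.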
When implementing the keyword-based sentence auto-generation system described in this embodiment of the present invention, the work mainly comprises two parts: the online sentence auto-generation system (i.e., the generation of natural-language sentences) and the offline training of the sentence auto-generation model. The offline training of the sentence auto-generation model pre-trains, from an existing corpus, a model that automatically generates natural-language sentences from session information keywords; this model may be denoted the pre-trained sentence auto-generation model. For example, the dialogue "- Saturday, what shall we do? - We can go play basketball and then see the movie 'Kung Fu Panda'." is one corpus item, in which the first sentence is annotated as a question, and "basketball" and "Kung Fu Panda" in the second sentence are annotated as session information keywords. In the specific training process, the first sentence together with the session information keywords of the second sentence serves as the input, and the full wording of the second sentence serves as the output. The online sentence auto-generation system then generates, from the context information and through the offline pre-trained sentence auto-generation model, a natural-language sentence suited to the current context. In addition, in order to cover complex language-communication environments as far as possible, the auxiliary information is considered both in the offline training process of the sentence auto-generation model and in the online sentence auto-generation system; that is, a natural-language sentence suited to the current context is generated from the context information and the auxiliary information through the offline pre-trained sentence auto-generation model, and automatic recognition and completion of incomplete words and/or abbreviated expressions may also be carried out.
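The annotated corpus example above can be turned into a training pair as follows: the preceding question plus the session information keywords of the reply form the model input, and the full wording of the reply is the target output. The dictionary layout used for the annotation is an assumption made for illustration, not a format specified by the patent.

```python
# Build (input, output) training pairs from an annotated corpus, following the
# example dialogue above: the preceding question plus the reply's
# session-information keywords are the input; the whole reply is the output.

def make_training_pair(question, reply, reply_keywords):
    model_input = {"question": question, "keywords": reply_keywords}
    model_output = reply  # the full wording of the reply sentence
    return model_input, model_output

q = "Saturday, what shall we do?"
r = "We can go play basketball and then see the movie Kung Fu Panda."
pair = make_training_pair(q, r, ["basketball", "Kung Fu Panda"])
```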
It should be noted that, in the offline training process of the sentence auto-generation model, a deep learning algorithm is used to train, as a pair, the simplification of sentences into session information keywords and the generation of sentences from session information keywords, so that the model can be learned and used without relying on templates or other forms of extensive manual intervention. Here, the user information includes the user information of the first user and the user information of the second user; the sentence auto-generation model is trained offline and used directly online. The deep learning algorithm may be implemented with an available framework by modifying the input parameters, output parameters and network parameters (such as network depth, node count, etc.).
Specifically, since the online sentence auto-generation system generates, from the context information and the auxiliary information and through the offline pre-trained sentence auto-generation model, a natural-language sentence suited to the current context, the online sentence auto-generation system may be summarized as the following two parts: the information extraction system and the pre-trained sentence auto-generation model. The information extraction system mainly converts the context information and the auxiliary information into vector representations, preprocessing the information before it is input into the pre-trained sentence auto-generation model. The pre-trained sentence auto-generation model is trained offline from an existing corpus to automatically generate sentences from information keywords; its input is the context information and auxiliary information represented as vectors, and its output is a sentence in natural-language form, i.e., the target session information. Here, the context information includes the historical session information and the source session information input by the first user, and the auxiliary information includes at least one of the user information of the first user, the user information of the second user, and the information domain. The information domain further includes the source information domain and the target information domain: the source information domain refers to the domain of the source session information, and the target information domain refers to the domain of the target session information. The user information further includes at least one of the following: user attribute information, user preference information, user schedule information, user position information, user behavior information, and user equipment information.
Further, the offline training process of the sentence auto-generation model specifically includes: Step 1: extract the keyword information from the training corpus. Step 2: convert the keyword information from Step 1 into vector representations, input them into the neural network model for sentence generation to generate a sentence, then calculate the difference between the sentence generated by the sentence auto-generation model and the original sentence, and propagate the difference into the network parameters through the network. Here, the difference refers to the difference between the vector of the sentence generated by the sentence auto-generation model and the vector of the original sentence; there are many ways of calculating the difference, the simplest being direct subtraction followed by taking the absolute value, or alternatively squaring. The network parameters are adjustable parameters, such as the weights on the edges connecting neurons in the neural network. Step 3: repeat Step 1 and Step 2 until the model converges. The overall operating structure of the sentence auto-generation model is shown in Figure 3.
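Steps 1 to 3 of the offline training process can be sketched with a toy one-parameter model: a keyword vector goes in, a sentence vector comes out, and the squared difference (one of the two options named above) is propagated back into the adjustable parameter until convergence. The dimensions, learning rate, and single-weight "network" are arbitrary illustrative choices standing in for a real neural network.

```python
# Toy version of the offline training loop: generate a sentence vector from a
# keyword vector, measure the difference to the original sentence vector
# (squared here; absolute value is the other option named above), and push the
# difference back into the adjustable network parameter until convergence.

def train(pairs, lr=0.1, epochs=200):
    w = 0.0  # single adjustable weight standing in for the network parameters
    for _ in range(epochs):
        for kw_vec, sent_vec in pairs:        # Step 1: keywords already extracted
            gen = [w * x for x in kw_vec]     # Step 2: generate a sentence vector
            # gradient of the squared difference between generated and original
            grad = sum(2 * (g - t) * x for g, t, x in zip(gen, sent_vec, kw_vec))
            w -= lr * grad / len(kw_vec)      # propagate into the parameter
    return w                                  # Step 3: repeat until converged

# Original sentence vectors here are exactly twice the keyword vectors,
# so training should converge to w close to 2.
pairs = [([1.0, 2.0], [2.0, 4.0]), ([3.0, 1.0], [6.0, 2.0])]
w = train(pairs)
```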
Further, the workflow of the sentence auto-generation system is shown in Figure 4 and specifically includes: Step 1: the information extraction system receives the user information and the context information. For the context information, session information keyword extraction is performed first; then abbreviation detection or restoration and target-language-type detection or translation are performed on the extracted session information keywords; finally, using parameters such as the word vectors trained by the sentence auto-generation model, the keywords are converted into the context information in the corresponding vector form, i.e., the context information vector in Figure 4. For the user information, user information keyword extraction is performed first; then, using parameters such as the word vectors trained by the sentence auto-generation model, the keywords are converted into the user information in the corresponding vector form, i.e., the user information vector in Figure 4. Step 2: the user information vector is merged with the context information vector and input into the sentence auto-generation model, yielding a sentence in natural-language form as the final target session information. Here, the above user information includes the user information of the first user and the user information of the second user.
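The two workflow steps above can be sketched as follows. The abbreviation table, the toy word vectors, and concatenation as the merge operation are all simplified assumptions; the real system would use trained word vectors and a learned merge inside the model.

```python
# Sketch of the online workflow: restore abbreviations in the context keywords,
# vectorize them, merge with the user-information vector, and hand the merged
# vector to the (stand-in) sentence auto-generation model.

ABBREVIATIONS = {"b-ball": "basketball"}          # toy abbreviation table
VOCAB_VECTORS = {"basketball": [1.0, 0.0], "movie": [0.0, 1.0]}

def vectorize(keywords):
    """Sum toy word vectors; stands in for trained word-vector parameters."""
    dims = len(next(iter(VOCAB_VECTORS.values())))
    vec = [0.0] * dims
    for k in keywords:
        for i, x in enumerate(VOCAB_VECTORS.get(k, [0.0] * dims)):
            vec[i] += x
    return vec

def workflow(context_keywords, user_info_vector):
    restored = [ABBREVIATIONS.get(k, k) for k in context_keywords]  # Step 1
    context_vector = vectorize(restored)
    merged = context_vector + user_info_vector   # Step 2: merge by concatenation
    return merged  # in the real system this feeds the pretrained model

merged = workflow(["b-ball", "movie"], [0.5])
```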
In the following, the sentence auto-generation described above is explained in detail through several embodiments:
Embodiment one: simple keyword input
In the simple keyword input of Embodiment one, a user who is not familiar with Chinese uses the sentence auto-generation system on a terminal device when communicating. When a user runs into a language-expression obstacle, for example a language beginner, or someone communicating with others in a language they barely know, the user can often only recall a few keywords of the content to be expressed and lacks the ability to organize complete sentences; in that case the sentence auto-generation system can provide great help to the user. As shown in Figure 5, a user who is unfamiliar with language expression completes an exchange with another person using the sentence auto-generation system.
In Figure 5, the second user "Lee" on the right is a proficient user of Chinese, and the first user "Zhang" on the left has difficulty using Chinese. In the exchange shown in Figure 5, the sentence auto-generation system can take the historical session information between the first user ("Zhang") and the second user ("Lee") (such as the session content input by "Lee") and the session information keywords of the source session information provided by the first user ("Zhang"), such as "basketball", "movie", "Kung Fu Panda", etc., organize them into a complete sentence (i.e., the target session information), and feed it back to the second user ("Lee"). That is, the sentence auto-generation system extracts the session information keywords of the source session information and of the historical session information between the first user and the second user, generates the target session information from the session information keywords, and displays the target session information. Here, the above first user "Zhang" is the user of the current device, and the second user "Lee" is the other party, i.e., the user conversing with the user of the current device.
Specifically, the sentence auto-generation system in this embodiment includes two parts, the information extraction system and the sentence auto-generation model, and the sentence generation process of the sentence auto-generation system can be divided into the following two steps:
Step 1: identify the context information
This embodiment generates sentences from the context information provided by the user and assists the user in organizing and expressing language. The context information here is the historical session information of the exchange between the first user and the second user cached in the terminal device or social software, together with the source session information input by the first user (such as the keywords "basketball", "movie", "Kung Fu Panda", etc.). During the sentence generation of this embodiment, the session information keywords of the context information must first be extracted. The methods for obtaining session information keywords include but are not limited to the following two: 1) direct acquisition: if text content input in a specific query format is detected, the system automatically performs word segmentation and keyword extraction on the text; 2) obtaining, from the user's previous historical session information, the context information from which rich sentences are generated, and extracting the session information keywords from that context information. Then, using the word vector parameters of the offline pre-trained sentence auto-generation model, the session information keywords are vectorized to obtain the vectors of the session information keywords of the context information.
Step 2: generate the natural-language sentence using the context information
In this step, the vectors of the session information keywords of the context information extracted in Step 1 are input into the offline-trained sentence auto-generation model, and the resulting natural-language sentence is the final target session information, which is output at the same time. The target session information may be displayed directly to the second user in written form, or played to the second user when the second user clicks the associated voice-playback button.
Here, the implementation scenario of the above embodiment is a social platform: the sentence auto-generation system can be embedded in an input method or in the social platform, so the user only needs to input source session information such as session information keywords, and the social platform provides historical session information such as the contextual information of the users' exchange, making the sentence auto-generation process easier to realize.
Embodiment two: sentence auto-generation system with multiple information domains
The application scenario of this embodiment is that a person traveling abroad, or a language beginner, wants to express an idea but can only recall a few keywords of their mother tongue, and therefore runs into obstacles when communicating. In this case, the sentence auto-generation system can detect the source language type used by the user and the intended target language type, translate the source-language keywords provided by the user into keywords of the target language, and, combining the contextual information, organize a suitable sentence and feed it back to the user. The exchange scene of Figure 6 is taken as an example:
In Figure 6, the second user "Lee" on the right is a proficient user of Chinese, and the first user "Tom" on the left is a native English speaker who is extremely unfamiliar with Chinese. Here, "Tom" uses the sentence auto-generation system to complete a dialogue with "Lee". "Lee" asks about Tom's plans for Saturday, and "Tom" can only come up with source session information in English such as "basketball" and "The Mummy". He then inputs this source session information (such as the keywords "basketball", "The Mummy", etc.) into the sentence auto-generation system, which combines the contextual information (i.e., the historical session information) with the source session information provided by "Tom" (such as the keywords "basketball", "The Mummy", etc.) to generate a suitable natural-language sentence, i.e., the target session information, and applies it to the two users' conversation, assisting "Tom" in communicating normally with "Lee". Here, the above first user "Tom" is the user of the current device, and the second user "Lee" is the other party, i.e., the user conversing with the user of the current device.
In this embodiment, the natural-language sentence needs to be generated from the source session information provided by the first user, the source information domain, the target information domain, and the historical session information between the first user and the second user. The specific processing is as follows:
Step 1: identify the context information and unify the information domains
The information domains include the source information domain and the target information domain, and an information domain includes at least one of language information, professional-field information and cultural-background information.
For example, when the language type of the source session information provided by the first user is inconsistent with the language type used by the interlocutor (i.e., the second user), the sentence auto-generation system can detect the language type of the source session information from the source session information input by the first user, detect the language type of the target session information from the historical session information between the first user and the second user, and confirm the language type of the source session information and the language type of the target session information. In a multilingual scenario, the sentence auto-generation system may also let the user freely set the language type of the source session information and the language type of the target session information. Then, the sentence auto-generation system loads the corresponding translation model and translates the source session information of every source language type into session information in the language type of the target session information.
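The language-type detection and keyword translation in this step can be sketched with toy lexicons as below. The wordlists and the majority-vote detector are illustrative assumptions; a real system would load trained language-detection and translation models as the text describes.

```python
# Sketch of unifying the language type: detect the language of the source
# keywords, detect the target language from the session history, and translate
# the source keywords when the two differ. Lexicons here are toy stand-ins.

EN_WORDS = {"basketball", "movie", "saturday"}
EN_TO_ZH = {"basketball": "篮球", "movie": "电影"}

def detect_language(words):
    """Majority vote against a small English wordlist: 'en' or 'zh'."""
    hits = sum(1 for w in words if w.lower() in EN_WORDS)
    return "en" if hits >= len(words) / 2 else "zh"

def unify_language(source_keywords, history_words):
    src_lang = detect_language(source_keywords)   # from the source session info
    tgt_lang = detect_language(history_words)     # from the historical session info
    if src_lang == "en" and tgt_lang == "zh":
        return [EN_TO_ZH.get(w.lower(), w) for w in source_keywords]
    return source_keywords

out = unify_language(["basketball", "movie"], ["周六", "我们"])
```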
For another example, when the professional field, cultural background, etc. of the source session information provided by the first user differ from the professional field or cultural background of the second user, the sentence auto-generation system can detect the professional field, cultural background, etc. of the source session information from the source session information input by the first user, detect the professional field, cultural background, etc. of the target session information from the historical session information between the first user and the second user, and confirm the professional field and cultural background of the source session information and of the target session information. In a scenario with multiple professional fields or multiple cultural backgrounds, the sentence auto-generation system may also let the user freely set the professional field and cultural background of the source session information and of the target session information. Then, the sentence auto-generation system loads the corresponding translation models, such as a professional-field translation model, a cultural translation model, etc., and translates the source session information of every source professional field and cultural background into session information matching the professional field and cultural background of the target session information.
As can be seen from the above description, the sentence auto-generation system here first determines, based on the source session information input by the first user, the information domains of the source session information, such as its language type, professional field and cultural background, and determines, based on the historical session information between the first user and the second user (such as the contextual information), the target information domains of the target session information, such as its language type, professional field and cultural background. It then translates the source session information, which the first user input in the source information domains, into session information in the target information domains; next, it extracts session information keywords from the session information in the target information domains and from the historical session information between the first user and the second user, and then vectorizes the session information keywords using the word vector parameters of the offline pre-trained sentence auto-generation model to obtain the corresponding session information keyword vectors.
Step 2: generate the sentence
The sentence auto-generation system identifies the context information and generates the sentence; Step 1 and Step 2 of this process are roughly the same as in Embodiment one above and are not repeated here.
Here, the implementation scenario of the above embodiment is a social platform: the sentence auto-generation system can be embedded in an input method or in the social platform, so the user only needs to input source session information such as keywords, and the social platform provides historical session information such as the contextual information of the users' exchange, making the sentence generation process easier to realize. The sentence auto-generation system may execute on the terminal side or on the server side; considering that the system's computation load may be large and may occupy considerable physical memory, it is generally placed on the server side for execution.
Embodiment three: sentence auto-generation system loaded with user information
The application scenario of this embodiment is similar to Embodiment one, but the sentence auto-generation system is loaded with richer user information, such as user schedule information, user position information, etc. The sentence auto-generation system can then comprehensively consider the source session information input by the user (such as the provided keywords) and the other user-information-related attribute keywords collected by the system, to generate a sentence better suited to the context at that moment and assist the user in expressing language.
Taking the exchange scene of Figure 7 as an example, the session scenario of Figure 7 is the same as in Embodiment one and is not repeated here. Unlike Embodiment one, the sentence auto-generation system here not only uses the source session information input by the first user "Zhang" (such as the provided keywords), but also automatically collects, through the terminal device or the network device, the user information of the first user "Zhang", such as the user's schedule information, the user's geographical location, etc. It then generates, from the source session information input by the first user "Zhang" (such as the provided keywords) and the automatically collected user information of the first user "Zhang", sentence content better suited to what the first user "Zhang" wants to state. The operating steps of the sentence auto-generation system here are as follows:
Step 1: identify the context information and the user information
The context information is the mutual exchange information between the users cached in the terminal device or social software, including the historical session information between the first user and the second user and the source session information input by the first user (such as keyword information). The user information is a description of the user's environment and of some of the user's personalized attributes; the personalized attribute information of the user includes but is not limited to information such as the user's schedule and geographical location. During the sentence generation of this embodiment, the session information keywords of the context information and the user information keywords of the first user's user information need to be extracted. The methods for obtaining session information keywords include but are not limited to the following two: 1) direct acquisition: if text content input in a specific query format is detected, the system automatically performs word segmentation and keyword extraction on the text; 2) obtaining, from the user's previous historical session information, the context information from which rich sentences are generated, and extracting the session information keywords from that context information. Meanwhile, the user information keywords need to be obtained from the acquired user information of the first user. Then, using the word vector parameters of the offline pre-trained sentence auto-generation model, the above session information keywords and user information keywords are vectorized to obtain the vectors of the session information keywords of the context information and the vectors of the user information keywords.
Step 2: the system generates the sentence
In this step, the vectors of the session information keywords of the context information provided in Step 1 are merged with the vectors of the user information keywords (including the user information keywords of the first user's user information), and then input into the offline-trained sentence auto-generation model to obtain a natural-language sentence as the final target session information, which is output at the same time. The target session information may be displayed directly to the second user ("Lee") in written form, or played to the second user ("Lee") when the second user ("Lee") clicks the associated voice-playback button.
It should be noted that, in the application scenario of Embodiment three, if the second user is willing to share their own user information, such as user attribute information, user preference information, user schedule information, user position information, etc., for information sharing between friends, then in Step 1's identification of the context information and the user information, not only are the user information keywords of the first user's user information extracted, but the user information keywords of the second user's user information may also be extracted; that is, Step 1 extracts the session information keywords of the source session information and of the historical session information between the first user and the second user, together with the user information keywords of the user information of the first user and of the second user. Step 2 still merges the vectors of the session information keywords of the context information provided in Step 1 with the vectors of the user information keywords (including the user information keywords of the user information of the first user and of the second user), then inputs them into the offline-trained sentence auto-generation model to obtain a natural-language sentence as the final target session information. In addition, if the second user is willing to share their own user information, such as user attribute information, user preference information, user schedule information, user position information, etc., for information sharing between friends, while the first user does not share their own user information, i.e., the first user's user information cannot be obtained, then in Step 1's identification of the context information and the user information, the user information keywords of the second user's user information can be extracted; that is, Step 1 extracts the session information keywords of the source session information and of the historical session information between the first user and the second user, together with the user information keywords of the second user's user information. Step 2 still merges the vectors of the session information keywords of the context information provided in Step 1 with the vectors of the user information keywords (including the user information keywords of the second user's user information), then inputs them into the offline-trained sentence auto-generation model to obtain a natural-language sentence as the final target session information.
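The sharing cases described above (first user only, second user only, both, neither) reduce to one rule: a user's user-information keywords are included only when that user has agreed to share them. The field names below are invented for illustration.

```python
# The sharing cases above collapse to a single rule: include a user's
# user-information keywords only if that user consents to sharing.

def collect_user_keywords(first_user, second_user):
    keywords = []
    for user in (first_user, second_user):
        if user.get("shares_info"):              # sharing consent
            keywords += user.get("info_keywords", [])
    return keywords

zhang = {"shares_info": False, "info_keywords": ["saturday-free"]}
lee = {"shares_info": True, "info_keywords": ["likes-basketball"]}
kws = collect_user_keywords(zhang, lee)
```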
Embodiment four: sentence auto-generation system with multiple information domains and loaded user information
The application scenario of this embodiment is the combination of Embodiment two and Embodiment three: the information domain of the source session information provided by the first user ("Zhang") is inconsistent with the domain of the session information of the second user ("Lee"), and the sentence auto-generation system is loaded with richer user information, such as user schedule information, user position information, etc. The operating steps of the sentence auto-generation system here are as follows:
Step 1: identify the context information and the user information, and unify the information domains
The sentence auto-generation system first determines, based on the source session information input by the first user, the information domains of the source session information, such as its language type, professional field and cultural background, and determines, based on the historical session information between the first user and the second user (such as the contextual information), the target information domains of the target session information, such as its language type, professional field and cultural background; it then translates the source session information, which the first user input in the source information domains, into session information in the target information domains. Next, session information keywords are extracted from the session information in the target information domains and from the historical session information between the first user and the second user; meanwhile, the user information keywords need to be obtained from the acquired user information of the first user. Then, using the word vector parameters of the offline pre-trained sentence auto-generation model, the session information in the target information domains, the historical session information between the first user and the second user, and the user information keywords obtained from the first user's user information are vectorized, sequentially producing the corresponding session information keyword vectors and user information keyword vectors.
Step 2: the system generates the sentence
In this step, the vectors of the session information keywords provided in Step 1 are merged with the vectors of the user information keywords (including the user information keywords of the first user's user information), and then input into the offline-trained sentence auto-generation model to obtain a natural-language sentence as the final target session information, which is output at the same time. The target session information may be displayed directly to the second user in written form, or played to the second user when the second user clicks the associated voice-playback button.
It should be noted that in the application scenarios of example IV, if second user is ready to share the user's letter of oneself
Breath, such as the user informations such as customer attribute information, user preference information, subscriber calendar arrangement information, customer position information, into
Information sharing between row friend is not only needed from the user information of the first user got then at this time in step 1
User information keyword is obtained, the user information of second user can also be obtained, and then obtain the user information of second user
User information keyword, namely in step 1, according to information fields, the source session information of source information fields is turned over
It is translated into the session information of target information fields, extracts session information and the first user and second of target information fields
The user of the session information keyword of historical session information between user and the user information of the first user and second user
The user information keyword of information;The vector sum that the session information keyword that will be provided in step 1 is still in step 2 is used
Family information key (the user of the user information of the user information keyword and second user of the user information including the first user
Information key) vector merge, be then input under line in trained sentence Auto-generation Model, obtain nature language
Say the sentence of expression-form, as final target session information.
In addition, if the second user is willing to share his or her own user information (such as user attribute information, user preference information, user calendar information and user location information) for sharing between friends, while the first user does not share his or her own user information (i.e., the user information of the first user cannot be obtained), then in step 1 only the user information of the second user is obtained and its user-information keywords extracted. That is, in step 1, according to the information domain, the source session information in the source information domain is translated into session information in the target information domain; session-information keywords are extracted from that session information and from the historical session information between the first user and the second user, together with the user-information keywords of the second user's information. In step 2, as before, the vectors of the session-information keywords provided in step 1 are merged with the vectors of the user-information keywords (here only those of the second user's information) and input into the sentence auto-generation model trained offline, yielding a sentence in natural-language form as the final target session information.
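The keyword-vector merging of step 2 can be sketched as follows. This is a minimal illustration under stated assumptions: the toy embedding table, the three-dimensional vectors and the keyword lists are hypothetical stand-ins; the real system would feed the merged sequence into the sentence auto-generation model trained offline.

```python
def embed(keyword, table):
    """Look up a keyword's vector; unknown keywords map to a zero vector."""
    return table.get(keyword, [0.0] * 3)

def merge_keyword_vectors(session_keywords, user_keywords, table):
    """Concatenate the session-information keyword vectors with the
    user-information keyword vectors (of one or both users) into a
    single input sequence for the sentence auto-generation model."""
    return [embed(k, table) for k in session_keywords + user_keywords]

# Hypothetical embeddings, for illustration only.
table = {"pizza": [1.0, 0.0, 0.0], "tonight": [0.0, 1.0, 0.0]}
vectors = merge_keyword_vectors(["pizza"], ["tonight"], table)
```

In a deployed system the embeddings would be learned jointly with the generation model rather than looked up from a fixed table.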
Embodiment five: automatic sentence generation from abbreviations and/or incomplete words
The application scenario of this embodiment is as follows: when a user is busy, or lacks the energy to concentrate on chatting with others, the user often only wants to give a brief reply. Such an answer, however, may seem inappropriate. In this case the sentence auto-generation system can, according to the source session information input by the user (such as a few provided keywords), generate a complete natural-language statement, helping the user express himself or herself properly. Fig. 8 gives an example.
In Fig. 8, the first user "Zhang" is busy and has no time to compose a reply to the inquiry from the second user "Li". Zhang's idea is to eat at "Pizza Hut" but to wait a moment; Zhang supplies the gist of this idea to the sentence auto-generation system, which, based on the historical session information (such as the dialogue context at that time), generates a complete sentence reply on Zhang's behalf. The operation steps of the sentence auto-generation system here are identical to those of embodiment one and are not repeated. The first user "Zhang" in this embodiment is the end user of the current device, and the second user "Li" is the counterpart user, i.e., the user conversing with the end user of the current device.
B. Analogy translation system
Since existing automatic sentence generation does not consider whether the user can really understand the terms or proper nouns in the generated natural-language sentence, the embodiments of the present invention propose, on the basis of the keyword-based system for generating sentences in natural-language form, an analogy translation system: the proper nouns and/or entities and/or events in the generated natural-language sentence are converted, by an analogy algorithm, into nouns and/or entities and/or events that the user can understand. The application scenario of the analogy translation system is as follows: when the user is puzzled in understanding some thing, the analogy translation system can, according to the question raised by the user and the user information, produce by analogy an explanation in terms of things the current user already knows.
The overall operation flow of the analogy translation system is shown in Fig. 9 and comprises:
Step 1: according to the source session information input by the first user (the expressed content in Fig. 9, which may be a question), identify source object information such as the key entities and/or events in the source session information (the source entity/event detection in Fig. 9), and classify the collected source object information, such as entities and/or events, by the predefined class labels through a classification model, obtaining the source category information of the source object information. The recognition method is to perform syntactic and semantic role labeling on the source session information input by the first user, extracting its syntactic-semantic features so as to identify the source object information, such as the key entities and/or events, that it describes.
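A toy sketch of step 1, under loudly stated assumptions: plain substring matching stands in for the syntactic and semantic-role-labeling recognition, and a lookup table stands in for the trained classification model; the entity names and class labels are invented for illustration.

```python
# Hypothetical predefined class labels for a few example entities.
ENTITY_LABELS = {"GPU": "computer_hardware", "Pizza Hut": "restaurant"}

def detect_source_objects(source_text, known_entities):
    """Naive stand-in for entity/event detection: return every known
    entity mentioned in the source session information."""
    return [e for e in known_entities if e in source_text]

def classify_source_objects(entities, labels):
    """Stand-in for the classification model: map each detected source
    object to its predefined class label (the source category information)."""
    return {e: labels.get(e, "unknown") for e in entities}

objects = detect_source_objects("What GPU should I buy?", ENTITY_LABELS)
categories = classify_source_objects(objects, ENTITY_LABELS)
```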
Step 2: represent each classified source category and each piece of source object information by a feature vector, i.e., the source category feature representation and the source entity/event feature representation in Fig. 9.
Step 3: collect the user information of the first user (the user log in Fig. 9), including user attribute information such as the user's personal profile, user device information such as the devices the user has recently used, and user behavior information such as the user's operation log and social log; user preference information such as interests and hobbies may also be included. Then, extract features of the user information and express them in vector form (the feature extraction in Fig. 9), and predict, from the extracted user-information features, the N candidate target categories most familiar to the user (the target-side categories in Fig. 9); these candidate target category information items are likewise expressed in vector form (the category feature representation in Fig. 9). The user information of the first user is collected on the terminal device after the user grants permission.
Step 4: compute the similarity between the source category information (the source category feature representation in Fig. 9) and each piece of candidate target category information (the category feature representation in Fig. 9), obtain the candidate target category information most similar to the source category information (the most similar target-side category in Fig. 9), and record it as the target category information. In other words, from the feature vectors of the source category (the source category information) and the target-side categories (the candidate target category information), compute inter-category similarities and obtain the target-side category most similar to each source category. Two specific implementation schemes are available. Scheme one computes similarity with a DNN (Deep Neural Networks) model: the source category paired with each target-side category in turn serves as the input to the DNN model, which computes a probability value representing the similarity of the two categories; after all probability values have been computed, they are sorted, and the target-side category with the maximum probability value is the one most similar to the source category, i.e., the target category information. Scheme two computes, through similarity calculation, a distance metric between the feature vectors of the source category and the target-side categories: in the space represented by the feature vectors, the smaller the spatial distance between two categories, the greater their similarity; the target-side category with the smallest distance to the source category in feature space is thereby found, namely the target-side category most similar to the source category, i.e., the target category information.
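Scheme two can be sketched with a cosine-style similarity over the category feature vectors. The text does not fix the distance metric, so cosine similarity is an assumption here, and the vectors and category names are invented; a deployed system would use learned feature representations.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar_category(source_vec, candidate_vecs):
    """Scheme two: return the candidate target-side category whose
    feature vector is closest to the source category vector."""
    return max(candidate_vecs,
               key=lambda name: cosine(source_vec, candidate_vecs[name]))

# Invented feature vectors: the kitchen-products category is nearest the source.
candidates = {"kitchen_products": [1.0, 0.0], "automobiles": [0.0, 1.0]}
target_category = most_similar_category([0.9, 0.1], candidates)
```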
Step 5: obtain candidate target object information from the target category information: first list the candidate entities in the target-side category (the candidate target-side entity/event feature representation in Fig. 9), then obtain the target object information from the similarity between the source object information and the candidate target object information. That is, by computing the similarity between the source entity and/or event (the source entity/event representation in Fig. 9) and the candidate entities and/or events on the target side (the candidate target-side entities/events in Fig. 9), the target-side entity most similar to the source entity and/or event is obtained, namely the target object information. The method for computing similarity between entities here is the same as the inter-category similarity computation of step 4.
Step 6: generate the target session information from the target object information. Specifically, using a predefined sentence template, replace the source entity (the source object information) with the most similar target-side entity computed in step 5 (the target object information), thereby producing an explanation in terms of the target-side entity, i.e., the entity/event analogy in Fig. 8.
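Step 6 amounts to filling a predefined sentence template. A minimal sketch, assuming a hypothetical template string, since the text does not specify the template wording:

```python
def explain_by_analogy(source_entity, target_entity,
                       template="{src} is, for you, roughly what {tgt} is."):
    """Replace the source entity with the most similar target-side
    entity from step 5 inside a predefined sentence template."""
    return template.format(src=source_entity, tgt=target_entity)

sentence = explain_by_analogy("GPU", "food processor")
```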
Specifically, the core idea of the analogy translation system is as follows: according to the source session information of the first user, collect the source object information, such as the key entities and/or events, of the source session information, and classify the collected source object information by the predefined class labels through a classification model to obtain the source category information. From the feature vectors of the source category information and the candidate target category information, compute inter-category similarities and determine the target category information most similar to the source category information. Then, by computing the similarity between the source object information and the candidate target object information, obtain the target object information most similar to the source object information. Finally, using a predefined sentence template, replace the source object information with the computed target object information, generating entities and/or events that the user can understand and thereby explaining the source object information in the source session information; in other words, the target session information is generated according to the target object information.
Further, when one user chats with more than one other user — for example, the first user D chats with more than one second user (such as user A, user B and user C) — the analogy translation system, in step 1 of this application scenario, identifies the source object information such as the key entities and/or events in the source session information according to the source session information input by the first user and the historical session information between the first user and the more than one second user, thereby obtaining the source object information. The other processing of the analogy translation system is identical to steps 2 to 6 above: classify the source object information to obtain source category information; obtain candidate target category information from the user information of the first user; obtain the target category information from the similarity between the source category information and the candidate target category information; obtain candidate target object information from the target category information; obtain the target object information from the similarity between the source object information and the candidate target object information; and finally generate the target session information from the target object information.
Further, when one user chats with more than one other user and the information domain (such as language type, professional domain or cultural background) of one user's session information differs from that of the other users' session information — for example, the first user D chats with more than one second user (such as user A, user B and user C), and the language type, professional domain or cultural background of user D's source session information differs from that of the session information of user A, user B or user C — the analogy translation system, in step 1 of this application scenario, first translates, according to the information domain, the source session information in the source information domain into session information in the target information domain, and then obtains the source object information from that session information and the historical session information between the first user and the more than one second user. The other processing of the analogy translation system is identical to steps 2 to 6 above: classify the source object information to obtain source category information; obtain candidate target category information from the user information of the first user; obtain the target category information from the similarity between the source category information and the candidate target category information; obtain candidate target object information from the target category information; obtain the target object information from the similarity between the source object information and the candidate target object information; and finally generate the target session information from the target object information.
In the following, the analogy translation system is described in detail through several embodiments:
Embodiment six: analogy translation of entities
In this embodiment, when a user wants to learn about goods in a field he or she is unfamiliar with, explanations given simply in terms of the goods' attributes, parameters and the like are hard for the user to understand clearly. Embodiment six of the present invention therefore analogizes such attribute and parameter information onto goods in a scenario the user knows, meeting the user's needs. Specific application scenarios are shown in Figs. 10 and 11.
In Fig. 10, a user wants to buy a television set and wants to understand the television market; through user information collection it is recognized that the user is a housewife. Describing only monitor resolution, screen quality, acoustic parameters and the like would leave a housewife unfamiliar with electronic equipment confused, not understanding how to assess the quality of a television set. The analogy translation system, using the collected user information of the housewife, analogizes information such as the brands and cost performance of television sets onto kitchen products the housewife knows, enabling her to form a more intuitive understanding of the television products of each brand in terms of price, quality, features and so on. The housewife in Fig. 10 is the first user, i.e., the end user of the current device; in Fig. 10, italics indicate the television products the user wants to learn about, and bold indicates the analogy explanations recommended by the analogy translation system.
Fig. 11 shows the analogy translation system solving a similar problem: when a user wants to buy a television set and user information collection recognizes that the user is an automobile enthusiast, the analogy translation system analogizes the television products onto automobile brands the user is more familiar with, allowing the user to recognize the price, quality, features and so on of each television product more clearly. The automobile enthusiast in Fig. 11 is the first user, i.e., the end user of the current device; in Fig. 11, italics indicate the television products the user wants to learn about, and bold indicates the analogy explanations recommended by the analogy translation system.
In the application scenarios shown in Figs. 10 and 11, the analogy translation system works from the source session information input by the first user (the housewife in Fig. 10, the automobile enthusiast in Fig. 11), e.g., "I want to buy a television set, but do not know which brand to buy". On the one hand, key source entities that can answer the source session information can be collected from the network: for instance, for the question of which television set to buy, the key entities may include Samsung TV, Hisense TV, Xiaomi TV and so on, where the entity information includes not only the entity names but also characteristic information such as prices and parameters of the entities. On the other hand, key entities can be collected from users' questions and answers in open communities; in yet another aspect, information such as prices and parameters can be collected from the official websites of these key entities. Then the collected entities are classified by the predefined class labels through the classification model and represented by feature vectors; the N target categories most familiar to the user are predicted, and the target category features are likewise expressed in vector form; next, inter-category similarities are computed from the feature vectors of the source and target-side categories to find the target-side category most similar to each source category; finally, the candidate entities in the target-side category are listed, and by computing the similarity between the source entities and the candidate entities on the target side, the target-side entity most similar to the source entity is obtained.
It should be noted that in the embodiments shown in Figs. 10 and 11, the execution flow of the analogy translation system is identical to the overall operation flow of the analogy translation system shown in Fig. 9, and is not repeated here.
Embodiment seven: analogy translation system for events
When facing chat content in an unfamiliar field, a user often feels puzzled. The analogy translation system can then explain the chat content by analogizing it to a field the user knows, helping the user understand the chat content more easily, as in the scenario shown in Fig. 12.
In Fig. 12, users A and B are discussing with user C a matter concerning the GPU (Graphics Processing Unit) of computer equipment; for user D, who does not know this equipment, their chat content is hard to understand. The analogy translation system can then, according to the chat content and the user information of user D, explain the chat content among users A, B and C using entities known to user D, helping user D join the exchange. Here user D is the first user, i.e., the end user of the current device, and users A, B and C are second users, i.e., counterpart users, namely the users conversing with the end user of the current device.
In this case, the analogy translation flow for events can be summarized in the following steps:
Step 1: the analogy translation system collects the chat content through the chat platform, extracts the key events of the chat content, and constructs source event descriptions relevant to the chat content.
Step 2: obtain the user information, including user behavior information such as the personal profile (user attribute information), social logs and operation logs, and the user device information of the equipment used by the user, and construct the corresponding target-side event descriptions.
Step 3: by computing the similarity between the source event descriptions and the target-side event descriptions, obtain target-side events known to the user that can embody the chat content.
Step 4: according to a predefined sentence pattern, explain the source events using the target-side events, so as to help the user understand the chat content.
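The four steps above can be sketched end to end. Word overlap is an invented stand-in for the event-description similarity of step 3, and the template string is hypothetical:

```python
def word_overlap(a, b):
    """Toy similarity: number of words shared by two event descriptions."""
    return len(set(a.split()) & set(b.split()))

def event_analogy(source_events, user_events, similarity, template):
    """Steps 1-4: for each source event extracted from the chat, pick
    the user-side event with the highest similarity score and render
    the predefined explanation template."""
    explanations = []
    for src in source_events:
        best = max(user_events, key=lambda tgt: similarity(src, tgt))
        explanations.append(template.format(src=src, tgt=best))
    return explanations

result = event_analogy(
    ["new gpu version released"],
    ["new phone version released", "old car repaired"],
    word_overlap,
    "'{src}' is like '{tgt}'.")
```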
Embodiment eight: enhanced analogy translation system
The application scenario of this embodiment considers entities and events together, analogizing the source entities and events into target-side entities and events known to the user. The enhanced analogy translation system makes comprehensive use of the analogy information of both entities and events, giving a more thorough explanation of the entities and events the user wants to understand; an application scenario is shown in Fig. 13.
In Fig. 13, the enhanced analogy translation system collects not only the logical content of the chat but also the entity content in the chat content, combines the event logic and entity content with those of a field the user is familiar with, and finds a reasonable analogy combination to explain the chat content. Here, entity content refers to the entities described by the users (such as NVIDIA, GPU and the like), and logical content refers to the logical ideas in the users' chat content (such as "xx has released the latest version of xx, which are upgraded versions of xxx and xxx respectively").
The execution flow of this embodiment is similar to embodiment seven, except that the present embodiment adds the collection of entity information in steps 1 and 2, and in step 3 considers the similarity of the entity content and the similarity of the event logic together, so as to select a suitable entity-event combination for the analogy explanation of the current chat content.
C. Cross-language, cross-domain translation system
Because the language types, professional domains and the like known to different users vary widely, the existing technology for generating natural language does not really take into account, when generating sentences, the communication barriers caused by users being versed in different fields. The embodiments of the present invention propose a cross-language, cross-domain translation system that, for the expression forms of different professional domains, provides transformation models from the general domain to a special domain and from a special domain to the general domain, so as to help users of different fields communicate.
The basic idea of the cross-language, cross-domain translation system of the embodiments of the present invention is: according to the information domain, the source session information input by the first user in the source information domain is translated into target session information in the target information domain. That is, the source session information input at the source end is received; its information domain, such as the professional domain it belongs to and the language type used, is obtained or detected; the corresponding domain translation model is then loaded to translate the source session information; and the translation result is presented to the target user, i.e., translated into target session information in the target information domain. It can also be described as follows: the cross-language, cross-domain translation system automatically detects, from the multimodal information input by the source user (i.e., the source session information input by the first user), the information domain it belongs to, such as the professional domain and language type, and, combining the special expression forms of that professional domain, translates the input source session information accordingly, achieving translation from the special expression form of a certain professional domain into the general expression form. Alternatively, taking information in general expression form input by the source user as the source session information, and given a specified target domain, the source session information in general expression form is translated into information in the special expression form of the specified target domain. In other words, the cross-language, cross-domain translation system, according to the information domain and based on correspondingly pre-trained translation models, performs the language-type and professional-domain translation processing on the source session information in sequence to obtain the target session information. The overall flow of the cross-language, cross-domain translation system is shown in Fig. 14.
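The two-stage sequence — professional-domain translation followed by language-type translation — can be sketched as below. The dictionary lookups are hypothetical stand-ins for the trained domain translation model and basic language translation model:

```python
def translate_session(source_text, domain_model, language_model):
    """Apply the two stages in order: first the professional-domain
    translation, then the language-type translation."""
    general_form = domain_model.get(source_text, source_text)
    return language_model.get(general_form, general_form)

# Toy stand-ins for the two trained translation models.
domain_model = {"kai hei tonight?": "play games together tonight?"}
language_model = {"play games together tonight?":
                  "Do you play games together tonight?"}
target = translate_session("kai hei tonight?", domain_model, language_model)
```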
Specifically, the core idea of the cross-language, cross-domain translation system of the embodiments of the present invention is to detect, from the multimodal information input at the source end, the professional domain it belongs to, and, combining the special expression forms of that professional domain, to translate the input information accordingly through domain translation models.
The above multimodal information includes but is not limited to at least one of: user text input, user voice input, user image input and the like. The above information domains include but are not limited to at least one of: the political field, military field, programming field, game field, physics field, chemistry field, mathematics field, animation field, architecture field, music field and the like.
In the following, the cross-language, cross-domain translation system is described in detail through several embodiments:
Embodiment nine: from a special professional-domain expression form to the general expression form
As shown in Fig. 15, the source session information input by the first user at the source end is in the expression style of a special professional domain (such as the game field), and the user wants it expressed in a general style at the output end. The cross-language, cross-domain translation system processes this as follows:
Step 1: receive the source session information input by the first user at the source end, and detect the information domain of the source session information, such as its professional domain and language type. In the following, the source session information input by the first user at the source end is referred to as the source input information, and the first user as the source user.
Specifically, the first user can input the source session information at the source end in many ways, including but not limited to text, voice and pictures. When the first user clicks the text box, text information can be input directly. When the camera button in the upper-left corner is clicked, a picture can be selected from the album of the terminal device, or a photo can be taken directly with the terminal device; the cross-language, cross-domain translation system can obtain the text information in the picture through optical character recognition and display it in the text box. When the voice button in the upper-right corner is clicked, a sound clip can be selected from the recordings of the terminal device, or voice can be input directly with the terminal device; the system can translate the voice input into the corresponding text information through speech recognition and display it in the text box.
Further, the cross-language, cross-domain translation system first classifies different input texts through a text-domain classifier, finds the special professional domain the input text most likely belongs to, and displays it in real time as feedback to the source user; the "domain" option in Fig. 15 is the special professional domain the input text information may belong to. Meanwhile, the source user is provided with a function for manually selecting the professional domain of the input information, to correct the small number of domain discrimination errors the system may make: when the source user considers the system's judgment inaccurate, clicking the "reselect" button beside "domain" pops up a domain list, from which the source user reselects the professional domain of the input text according to his or her own judgment. The system can likewise detect the language type of the source input information and feed it back to the source user through real-time display; like the domain information, the language type can also be reselected by the source user.
Further, the text-domain classifier is a classifier model trained in advance, including but not limited to: a logistic regression model, a support vector machine (SVM) model, a neural network model and the like. The text-domain classifier is generated as follows: the vector the text is converted into serves as the input, and the professional domain of the text as the output; by continually correcting output errors, the classifier parameters are adjusted, yielding offline text-domain classifier models for the different language types.
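A bag-of-words sketch of such a text-domain classifier. This is not the logistic regression, SVM or neural model named above — it is a deliberately simple word-count stand-in, with invented training samples, showing only the input/output shape:

```python
from collections import Counter

def train_domain_classifier(samples):
    """samples: list of (text, domain) pairs. Build per-domain word
    counts as a crude stand-in for a trained classifier model."""
    model = {}
    for text, domain in samples:
        model.setdefault(domain, Counter()).update(text.lower().split())
    return model

def predict_domain(model, text):
    """Return the domain whose training vocabulary best covers the text."""
    words = text.lower().split()
    return max(model, key=lambda d: sum(model[d][w] for w in words))

model = train_domain_classifier([
    ("carry rank win match", "game_field"),
    ("tensor gradient layer loss", "programming_field"),
])
domain = predict_domain(model, "can you carry me to a win")
```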
In Fig. 15, the source user's input — Chinese gaming slang literally rendered as "Open black tonight? Eat chicken with you!", i.e., roughly "Team up tonight? I'll carry you to a win!" — may be source input information obtained from text on a picture scanned by the source user, source input information entered by the user with voice, or source input information obtained by other possible input modes. The cross-language, cross-domain translation system first detects that the language type of the source input information is "Chinese", then, through the "Chinese" text-domain classifier, detects that the professional domain of the source input information is the "game field", and displays the detected domains, such as the language type and professional domain, in real time on the terminal device, as the "source language" and "domain" in Fig. 15.
Step 2: translate the source input information according to its information domain.
The cross-language, cross-domain translation system, according to the information domain of the source input information, such as its professional domain and language type, loads the corresponding domain translation model and translates the source input information.
Specifically, the domain translation model is a trained sequence-to-sequence (Sequence to Sequence) translation model. The method for constructing the domain translation model is as follows: corresponding sentences or phrases, in units of words or characters, serve as the input and output. Since sentences in a special professional domain are largely composed of common words or characters, in order to retain the professional-domain information, the domain label must also be used as an input or output label and trained together in the form of word vectors. By continually correcting output errors, the model parameters are adjusted until final convergence. When the expression style of a special professional domain is the input of the training model and the information expression style of the general domain is the output, the domain translation model is a "special → general" domain translation model; when the information expression style of the general domain is the input and that of a special professional domain is the output, it is a "general → special" domain translation model.
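The training-pair layout suggested above — a domain label carried alongside the word units — might look like the following sketch; the label token and the example pair are invented for illustration and are not the patent's actual training data:

```python
def make_training_pairs(parallel_pairs, domain_label):
    """Prefix each (special, general) sentence pair with the domain
    label so that the sequence-to-sequence model can retain the
    professional-domain information during training, as described above."""
    return [([domain_label] + special.split(), general.split())
            for special, general in parallel_pairs]

pairs = make_training_pairs([("gg ez win", "good game, an easy win")],
                            "<game_field>")
```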
Further, given a domain translation model obtained by the above training method, input information in the special expression form of a special professional domain, such as the gaming-slang input in Fig. 15, is translated into text information in the general-domain expression form with the same language type as the source input information, such as "Play games together tonight? I'll take you to win" in Fig. 15. That is, according to the source input information in the special professional domain, the system loads the "game-field expression → general-domain expression" domain translation model for "Chinese" and translates the gaming-slang source input information into "Play games together tonight? I'll take you to win!".
Further, in terms of language type, the cross-language cross-field translation system loads the corresponding basic language translation model, and the general-expression-form text information in the same language as the source input information, such as "Play games together tonight? Help you win!", is further translated into target session information of the target language type. In Figure 15, when the "target language" is set to "English", the Chinese-to-English basic language translation model is loaded into the cross-language cross-field translation system, and "Play games together tonight? Help you win!" is translated into "Do you play games together at night? Take you to win!".
The language type of the "source language" and that of the "target language" may be the same or different. Language types include, but are not limited to, the following examples: Chinese, English, Korean, and Japanese.
Further, the basic language translation model is also a sequence-to-sequence (Seq2Seq) translation model. One method of constructing the basic language translation model is as follows: information in one language type serves as the training input, and the result of translating that information into another language type serves as the training output; by continually adjusting the model parameters, mutual translation between different language types is achieved.
It should be noted that the above language types include, but are not limited to: Chinese, English, Korean, Japanese, etc.; and the above fields include, but are not limited to: the political field, military field, programming field, gaming field, physics field, chemistry field, mathematics field, animation field, architecture field, music field, etc. In addition, in the multi-field aspect, since the "target field" defaults to "None", the cross-language cross-field translation system skips this option, and the final translation result is therefore "Do you play games together at night? Take you to win!".
Step 3: the result is presented
Finally, the cross-language cross-field translation system presents the final translation result to the target user in text form on the terminal device; as shown in Figure 15, the translation result finally presented by the terminal device is "Do you play games together at night? Take you to win!".
Further, when the "read aloud automatically" button is activated, the terminal device can also convert the text to speech and play it to the user.
Further, as can be seen from the description of steps one to three above, the processing of the cross-language cross-field translation system can be summarized as follows: according to the field to which the information belongs, and based on the corresponding pre-trained translation models, the source session information undergoes, in sequence, professional-field translation and language-type translation to obtain the target session information.
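The two-stage pipeline summarized above (professional-field translation first, then language-type translation) can be sketched as follows. Dictionary lookups stand in for the trained Seq2Seq models, and all strings are illustrative assumptions.

```python
# Stage 1: field translation model, "gaming expression -> general expression".
FIELD_MODEL = {
    "Open black tonight? Eat chicken with you!":
        "Play games together tonight? Help you win!",
}
# Stage 2: basic language translation model, source language -> target language.
LANGUAGE_MODEL = {
    "Play games together tonight? Help you win!":
        "Do you play games together at night? Take you to win!",
}

def translate_session(source_info):
    general = FIELD_MODEL.get(source_info, source_info)  # professional-field stage
    return LANGUAGE_MODEL.get(general, general)          # language-type stage

result = translate_session("Open black tonight? Eat chicken with you!")
```

The ordering matters: the field model is trained on the source language, so it must run before the language model.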
It should be noted that in embodiment nine, the source user (i.e. the first user) is the user of the current device, and the target user (i.e. the second user) is the other party, i.e. the user conversing with the user of the current device.
Embodiment ten: general expression form to special professional field expression form
As shown in Figure 16, the source session information input by the first user at the source end is in general expression form, i.e. belongs to no special professional field, but the first user wants it expressed in a special professional field at the output end. In this case, the cross-language cross-field translation system proceeds as follows:
Step 1: receive the source session information input by the first user at the source end, and detect the fields to which the source session information belongs, such as its professional field, language type, etc. Here, the source session information input by the first user at the source end is the source input information below, and the first user is the source user below.
Specifically, the source input method and the information detection method are the same as in embodiment nine above and are not repeated here. In Figure 16, the source input information is "Let's play games together at night!". This source input information may be obtained by the source user scanning text in a picture, by voice input, or by other feasible input methods. The cross-language cross-field translation system first detects that the language type of the source input information is "English", then loads the English text-field classifier and detects that the expression of the source input information has no special professional-field background; therefore, the "field" at this point is "None". Likewise, the detected information can be displayed on the terminal device in real time, similarly to embodiment nine above.
The construction method of the text-field classifier is as described in step one of embodiment nine.
Step 2: translate the source input information according to the fields to which it belongs
In Figure 16, since the "field" option of the source input information is "None", the cross-language cross-field translation system does no processing on the source input information for this option, i.e. the translation result at this point is identical to the source input information, still "Let's play games together at night!".
Specifically, in terms of language type, since the language type of the source input information is "English" and the language type of the target end is "Chinese", i.e. the "target language" is set to "Chinese", the cross-language cross-field translation system loads the English-to-Chinese basic language translation model, and the current translation result "Let's play games together at night!" is translated into the Chinese sentence meaning "We'll play games together tonight!". That is, in this step, "Let's play games together at night!" is further translated into the Chinese sentence meaning "We'll play games together tonight!".
The language type of the "source language" and that of the "target language" may be the same or different. Language types include, but are not limited to, the following examples: Chinese, English, Korean, and Japanese.
In addition, the construction method of the basic language translation model is as described in step two of embodiment nine above.
Further, in the multi-field aspect, when the "target field" is set to "gaming field", the cross-language cross-field translation system selects the Chinese "general expression -> gaming expression" field translation model and, on the basis of the Chinese translation result meaning "We'll play games together tonight!", performs the corresponding professional-field translation again on that result, i.e. translates it into the Chinese gaming-field expression form meaning "Let's open black tonight!". That is, in this step, the Chinese sentence meaning "We'll play games together tonight!" is further translated into the gaming-field expression meaning "Let's open black tonight!".
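The ordering in this embodiment reverses that of embodiment nine: the basic language model runs first, and the "general -> special" field model is applied afterwards only when a target field is selected. A minimal sketch, again with lookup tables standing in for the trained Seq2Seq models; the strings, including the English renderings of the Chinese results, are illustrative placeholders.

```python
# Stage 1: basic language translation model (runs first here, since the
# source input belongs to no special field).
LANGUAGE_MODEL = {
    "Let's play games together at night!":
        "We'll play games together tonight!",
}
# Stage 2: field translation model, "general expression -> gaming expression",
# applied only when a target field is selected.
FIELD_MODEL = {
    "We'll play games together tonight!":
        "Let's open black tonight!",
}

def translate_session(source_info, target_field=None):
    text = LANGUAGE_MODEL.get(source_info, source_info)
    if target_field == "gaming":
        text = FIELD_MODEL.get(text, text)
    return text
```

When "target field" is "None", stage 2 is skipped and the language translation is returned unchanged, matching the behavior described for this embodiment.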
The construction method of the field translation model is as described in step two of embodiment nine above.
Step 3: the result is presented
Finally, the cross-language cross-field translation system presents the final translation result to the target user in text form on the terminal device; as shown in Figure 16, the translation result finally presented by the terminal device is the Chinese sentence meaning "Let's open black tonight!".
Further, when the "read aloud automatically" button is activated, the terminal device can also convert the text to speech and play it to the user.
It should be noted that in embodiment ten, the source user (i.e. the first user) is the user of the current device, and the target user (i.e. the second user) is the other party, i.e. the user conversing with the user of the current device.
D. Cross-language cross-cultural background translation system
Existing natural language generation technology does not take into account the differences in the cultural backgrounds of different users. The embodiments of the present invention provide a cross-language cross-cultural background translation system that can help the target user understand sentences carrying a cultural background.
The basic idea of the cross-language cross-cultural background translation system of the embodiment of the present invention is: according to the fields to which the information belongs, the source session information of the source information field input by the first user is translated into target session information of the target information field. That is, the source session information input at the source end is received; its cultural background, language type, and other information fields are obtained or detected; the corresponding culture-background translation model is then loaded to translate the source session information; and the translation result is presented to the target user, i.e. translated into target session information of the target information field. The core idea of the cross-language cross-cultural background translation system is: automatically detect the cultural background to which the source session information belongs and, in combination with certain special expression forms under that cultural background, translate the source session information accordingly, so that the special expression forms of certain cultural backgrounds are translated into general expression form.
Specifically, the cross-language cross-cultural background translation system automatically detects, from the multimodal information input by the source user (i.e. the source session information input by the first user), the information fields to which it belongs, such as its cultural background and language type, and, in combination with certain special expression forms of that cultural background, translates the input source session information accordingly, so as to translate from the special expression forms of certain cultural-background fields into general expression form. In other words, according to the fields to which the information belongs, and based on the corresponding pre-trained translation models, the cross-language cross-cultural background translation system performs, in sequence, language-type translation and cultural-background translation on the source session information to obtain the target session information.
The above multimodal information includes, but is not limited to, at least one of the following: user text input, voice input, image input, etc. The above cultural-background fields include, but are not limited to, at least one of the following: regional cultures of China, regional cultures of South Korea, regional cultures of the continental United States, regional cultures of Japan, etc.
In addition, the overall flow of the cross-language cross-cultural background translation system is shown in Figure 17. The culture translation system in the figure is an offline pre-trained culture translation model; once the culture translation model's training is complete, it can be used directly online. The upper half of Figure 17 shows the online process, and the lower half shows the offline training process of the culture translation system.
In the following, the cross-language cross-cultural background translation system is described in detail through several embodiments:
Embodiment eleven: special cultural-background field expression form to general expression form
As shown in Figure 18, the source session information input by the first user at the source end is expressed in a special cultural background's way (e.g. American slang), but the first user wants it expressed in a general way at the output end. In this case, the cross-language cross-cultural background translation system proceeds as follows:
Step 1: receive the source session information input by the first user at the source end, and detect the fields to which the source session information belongs, such as its cultural background, language type, etc. Here, the source session information input by the first user at the source end is the source input information below, and the first user is the source user below.
There are many source input methods, including but not limited to: text, voice, and picture. When the source user clicks the text box, text information can be input directly. When the source user clicks the camera button in the upper-left corner, a corresponding picture can be selected from the terminal device's photo album, or a photo can be taken directly with the terminal device; the cross-language cross-cultural background translation system can obtain the text information in the picture through optical character recognition and display it in the text box. When the source user clicks the voice button in the upper-right corner, a corresponding sound clip can be selected from the terminal device's recordings, or voice input can be performed directly with the terminal device; the cross-language cross-cultural background translation system can convert the voice input into the corresponding text information through speech recognition and display it in the text box.
The cross-language cross-cultural background translation system detects the source input information, loads the culture-background classifier for the corresponding language type, classifies the cultural background of the source input information, and feeds the result back to the source user through real-time display; the "cultural background" option in Figure 18 is the cultural background the source input information may have. At the same time, the source user is provided with a function for manually selecting the cultural background of the input text, so as to correct the small number of cultural-background classification errors the cross-language cross-cultural background translation system may make. That is, if the source user considers the system's judgment inaccurate, the "reselect" button beside "cultural background" can be clicked; a cultural-background list then pops up, and the source user reselects the cultural background of the input text according to his or her own judgment. The cross-language cross-cultural background translation system can also detect the language type of the source input information and feed it back to the source user through real-time display. Similarly to the cultural-background information, the language type can also be reselected by the source user.
The culture-background classifier is a pre-trained culture-background classifier model, including but not limited to: logistic regression models, support vector machine (SVM) models, neural network models, etc. The culture-background classifier is generated as follows: the vector into which the text information is converted serves as the input, and the field to which the text belongs serves as the output; by continually correcting output errors and adjusting the classifier parameters, offline culture-background classifier models for different languages are obtained.
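The classifier interface described above (text in, cultural-background label out) might be sketched as follows. A simple cue-word scorer stands in for the trained logistic-regression or SVM model, and the cue words and labels are illustrative assumptions, not the patent's actual feature set.

```python
# Cue words per cultural background (illustrative stand-in for a trained model).
CUE_WORDS = {
    "american_slang": {"gorilla", "bucks", "dude"},
}

def classify_culture_background(text):
    """Return the best-scoring background, or 'general' when no cue matches."""
    tokens = set(text.lower().replace(",", " ").replace(".", " ").split())
    scores = {bg: len(tokens & cues) for bg, cues in CUE_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"
```

A production classifier would replace the cue sets with learned weights over a text-vector representation, as the description indicates.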
In Figure 18, the source user inputs "In electronic industry, Samsung can be considered the 800 pound gorilla.". The cross-language cross-cultural background translation system first detects that the language type of the source input information is English, then loads the English culture-background classifier and detects that the cultural background of this expression is American slang culture; therefore, "source language" is set to "English" and "cultural background" is set to "American slang". At this point, the source user can also manually select the types of "source language" and "cultural background". It should be noted here that the "source language" option has a certain correlation with the "cultural background" option: when "cultural background" is "American slang", options such as "Chinese" or "Korean" that have no bearing on American slang cannot appear as the source language.
Step 2: translate the source input information according to the fields to which it belongs
The cross-language cross-cultural background translation system loads the corresponding culture translation model according to the cultural-background information and translates the source input information.
The culture translation model is a trained sequence-to-sequence (Seq2Seq) translation model, constructed as follows: corresponding sentences or phrases, tokenized into words or characters, serve as the input and output. Because sentences in special cultural-background expression forms largely consist of common words or characters, the cultural-background label must also be supplied as an input or output label, trained together in word-vector form, in order to preserve the field information. Output errors are corrected iteratively and the model parameters adjusted until the model finally converges. When information in a special cultural-background expression form serves as the training input and information in general expression form serves as the training output, the culture translation model is a "special cultural background -> general" culture translation model; when information in general expression form serves as the training input and information in a special cultural-background expression form serves as the training output, the culture translation model is a "general -> special cultural background" culture translation model.
Specifically, given a culture translation model trained by the above method, input information in a special cultural-background expression form, such as "In electronic industry, Samsung can be considered the 800 pound gorilla." in Figure 18, is translated into general-expression-form text information in the same language as the source input information, such as "In electronic industry, Samsung has quite powerful strength." in Figure 18. That is, the translation system first loads the "American slang -> general" culture translation model, and the source user's input "In electronic industry, Samsung can be considered the 800 pound gorilla." is translated into "In electronic industry, Samsung has quite powerful strength.".
Further, in Figure 18, in terms of language type, the cross-language cross-cultural background translation system loads the corresponding basic language translation model, and the above translation result "In electronic industry, Samsung has quite powerful strength." is further translated into the target language type. Since the "target language" is set to "Chinese", the cross-language cross-cultural background translation system loads the English-to-Chinese basic language translation model, and "In electronic industry, Samsung has quite powerful strength." is translated into the Chinese sentence meaning "In the electronics industry, Samsung possesses very powerful strength.", i.e. the target session information.
The basic language translation model is also a sequence-to-sequence (Seq2Seq) translation model, constructed as follows: information in one language type serves as the training input, and the result of translating that information into another language type serves as the training output; by continually adjusting the model parameters, mutual translation between different language types is achieved. In addition, the above language types include, but are not limited to: Chinese, English, Korean, Japanese, etc.
Step 3: the result is presented
Finally, the cross-language cross-cultural background translation system presents the final translation result to the target user in text form on the terminal device; as shown in Figure 18, the translation result finally presented by the terminal device is the Chinese sentence meaning "In the electronics industry, Samsung possesses very powerful strength.".
Further, when the "read aloud automatically" button is activated, the terminal device can also convert the text to speech and play it to the user.
It should be noted that in embodiment eleven, the source user (i.e. the first user) is the user of the current device, and the target user (i.e. the second user) is the other party, i.e. the user conversing with the user of the current device.
Embodiment twelve: special cultural-background field expression form to general expression form (picture plus text output)
When the cross-language cross-cultural background translation system detects that the translation result (i.e. the target session information) can be displayed more intuitively with a picture, it can output at the target end not only the text translation result, i.e. the target session information, but also a picture that vividly expresses the target session information, enabling the user to understand the output target session information more intuitively. In this case, the cross-language cross-cultural background translation system operates as follows:
Step 1: receive the source session information input by the first user at the source end, and detect the fields to which the source session information belongs, such as its cultural background, language type, etc. This step is identical to step one in embodiment eleven and is not repeated here.
Step 2: translate the source input information according to the fields to which it belongs
According to the cultural-background information of the source input information, the cross-language cross-cultural background translation system loads the corresponding culture translation model and translates the source input information: the source input information, in its special cultural-background expression form, is first translated into general-expression-form text information in the same language as the source input information; the corresponding basic language translation model is then loaded, and the above translation result is further translated into session information of the target language type.
The construction method of the culture translation model is identical to the description in step two of embodiment eleven and is not repeated here.
In Figure 19, the cross-language cross-cultural background translation system first loads the "American slang -> general" culture translation model, and the user's input "Hi, Zhang, give me five!" is translated into "Hi, Zhang, clap your hands!". Then, since the "target language" is "Chinese" and the "source language" is "English", the cross-language cross-cultural background translation system loads the English-to-Chinese basic language translation model, and the translation result "Hi, Zhang, clap your hands!" is further translated into the Chinese sentence meaning "Hi, Zhang, let's high-five!".
The construction method of the basic language translation model is as described in step two of embodiment eleven and is not repeated here.
Step 3: the result is presented
Finally, the cross-language cross-cultural background translation system presents the final translation result to the user in text form on the terminal device; as shown in Figure 19, the translation result finally presented by the terminal device is the target session information, the Chinese sentence meaning "Hi, Zhang, let's high-five!".
Further, when the "read aloud automatically" button is activated, the terminal device can also convert the text to speech and play it to the user.
Further, when the target session information can be displayed intuitively with a picture, the cross-language cross-cultural background translation system queries the picture database according to the target session information and, by calculating the semantic similarity between the target session information and the candidate pictures, finds the picture that most accurately expresses the meaning of the target session information and displays it on the terminal device as a supplement. The semantics of a candidate picture (i.e. its picture description information) may be shown on the picture, or may be generated from the picture.
It should be noted that the aforementioned keyword-based natural-language sentence generation system, analogy translation system, cross-language cross-field translation system, and the like can likewise query the picture database according to the target session information and, by calculating the semantic similarity between the target session information and the candidate pictures, find the picture that most accurately expresses the meaning of the target session information and display it on the terminal device as a supplement.
In addition, the aforementioned keyword-based natural-language sentence generation system, analogy translation system, cross-language cross-field translation system, cross-language cross-cultural background translation system, and the like can also query the picture database according to the source session information and, by calculating the semantic similarity between the source session information and the candidate pictures, find the picture that most accurately expresses the meaning of the source session information and display it on the terminal device as a supplement.
Similarly, the aforementioned keyword-based natural-language sentence generation system, analogy translation system, cross-language cross-field translation system, cross-language cross-cultural background translation system, and the like can also query the picture database according to both the source session information and the target session information and, by calculating the semantic similarity of the source session information and the target session information with the candidate pictures respectively, find the picture that most accurately expresses the meanings of the source session information and the target session information and display it on the terminal device as a supplement.
The picture database includes, but is not limited to, the following examples: the picture library on the user's terminal device, the user's network cloud-storage picture library, picture libraries that can be searched through network search engines, etc.
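The picture-selection step described in the preceding paragraphs, i.e. ranking candidate pictures by their semantic similarity to the session information, can be sketched as follows. Word-overlap (Jaccard) similarity stands in for a learned semantic measure, and the database entries are illustrative assumptions.

```python
def jaccard(a, b):
    """Word-overlap similarity: a simple stand-in for semantic similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_picture(session_info, picture_db):
    """picture_db maps picture names to their description information."""
    return max(picture_db, key=lambda name: jaccard(session_info, picture_db[name]))

picture_db = {
    "high_five.png": "two friends clap their hands together",
    "sleepy_cat.png": "a cat sleeping on a sofa",
}
```

In practice the description information might be embedded with the same model as the session text, and cosine similarity over those embeddings would replace `jaccard`.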
E. Picture recommender system
To enable users to communicate better through multimodal information, the embodiments of the present invention provide a picture recommender system; the overall flow of the picture recommender system is shown in Figure 20.
Based on the picture selection information input by the first user (e.g. the smiling face beside the "send" button in Figure 21, i.e. the "picture" button: when the user clicks this button, it indicates that the picture recommender system is activated) and the historical session information between the first user and the second user, the picture recommender system predicts the session inference information, such as the session content and/or session mood the first user intends to express, and, according to the semantic similarity between the session inference information and the candidate pictures, obtains a target picture from the candidate pictures and uses the target picture as the target session information. That is, according to the text information, picture information, previous historical session information, etc. input by the first user, the system predicts the first user's mood and recommends a suitable emoji or picture. The semantics of a candidate picture (i.e. its picture description information) may be shown on the picture or generated from the picture; the first user (i.e. the source user) is the user of the current device, and the second user is the other party, i.e. the user conversing with the user of the current device.
In addition, in order to recommend a more appropriate emoji or picture, the picture recommender system can also obtain the first user's user information, such as the user's age, gender, hobbies, occupation, etc., and further combine the obtained user information of the first user to predict the session inference information, such as the session content and/or session mood the first user intends to express. That is, based on the picture selection information input by the first user, the historical session information between the first user and the second user, and the first user's user information, the picture recommender system predicts the session inference information, such as the session content and/or session mood the first user intends to express, then obtains a target picture from the candidate pictures according to the semantic similarity between the session inference information and the candidate pictures, and uses the target picture as the target session information. The semantics of a candidate picture (i.e. its picture description) may be shown on the picture or generated from the picture.
Specifically, during implementation, the picture recommender system calls a pre-trained dialogue understanding model to infer, from the source session information and historical session information input by the first user, or from the source session information, historical session information, and the first user's user information, the content and emotional orientation the second user may want expressed; it then calls the picture database and loads the picture recommendation model to recommend pictures for the first user.
Here, the source user below is the above-mentioned first user, and the target user is the above-mentioned second user. The above picture recommender system can understand the implications of the conversation according to the source user's source session information, historical session information, user information, etc., without the target user needing to input keywords or other information; the picture the target user requires to express his or her mood can be inferred from the picture database and pushed to the target user, reducing the time the target user spends searching for a picture when the target user's picture library is large. Since the picture recommender system relies on user information, the user needs to grant the picture recommender system permission to obtain part of the user's information.
The picture database includes, but is not limited to: the picture library on the user's terminal device, the user's picture library in network cloud storage, picture libraries that can be searched through a network search engine, and so on.
In the following, the picture recommendation system is described in detail through the following embodiment.

Embodiment 13: picture recommendation system

As shown in Figure 21, part of the operation flow of the picture recommendation system on a user terminal device is illustrated. The user who inputs the source session information "Dear, let me tell you a joke~..." is the source user, and the other user is the target user. The operating steps of the picture recommendation system are as follows:
Step 1: understand the user session and infer the information the target user wants to send.

When the target user needs to send a picture to express his or her mood, the target user can click the smiling face beside the "Send" button, i.e., the "picture" button (the picture selection information), which activates the picture recommendation system. The picture recommendation system first calls the dialogue understanding model and, according to the user session information and the user information, fully understands the user session and infers the content and the sentiment orientation that the source user may wish to express, wherein the user session information includes the source session information input by the source user and the historical session information between the source user and the target user, and the user information refers to the user information of the source user.
The dialogue understanding model is a pre-trained sequence-to-sequence (Sequence to Sequence) model. The dialogue understanding model is built as follows: with dialogue turns as input and output, in units of words or characters, the output error is continuously corrected (at the input and output ends, words or characters are represented as vectors, and the output error is the error between the network output vector and the original result vector) and the model parameters (the weights between nodes, the error adjustment rate, etc.) are adjusted until the model finally converges (convergence is a mathematical notion meaning that the model parameters reach a local optimum). Suppose a(i) is the i-th utterance of source user A, b(i) is the i-th utterance of target user B, and source user A speaks first. Then the input and output of the 1st parameter update are a(1) and b(1) respectively, the input and output of the 2nd parameter update are b(1) and a(2) respectively, and so on: the input and output of the (2i-1)-th parameter update are a(i) and b(i) respectively, and the input and output of the (2i)-th parameter update are b(i) and a(i+1) respectively, until the dialogue ends, after which iteration moves on to updating the parameters with the next dialogue. All dialogues in the training set are iterated over until the parameters converge (all parameters in the network reach a local optimum). The dialogue understanding model is a new model that adds features such as user sentiment on top of an existing model.
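The alternating update schedule described above can be made concrete with a small helper that enumerates the (input, output) pairs for one dialogue. The function name and list representation are illustrative, not part of the disclosure:

```python
def make_training_pairs(a_turns, b_turns):
    """Enumerate (input, output) pairs for one dialogue.

    a_turns are the utterances a(1..n) of source user A (A speaks first),
    b_turns are the utterances b(1..m) of target user B. Parameter update
    2i-1 uses (a(i), b(i)); update 2i uses (b(i), a(i+1)).
    """
    pairs = []
    for i in range(len(b_turns)):
        pairs.append((a_turns[i], b_turns[i]))          # update 2i-1
        if i + 1 < len(a_turns):
            pairs.append((b_turns[i], a_turns[i + 1]))  # update 2i
    return pairs
```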
In Figure 21, through the pre-trained dialogue understanding model, the picture recommendation system infers that the source user may express session content and/or session moods such as "happy" and "laugh".
Step 2: recommend pictures.

After the picture recommendation system has inferred the session mood of the source user, it calls the picture database and loads the picture recommendation model to recommend, to the source user, pictures that match the session mood of the source user.
Specifically, what is stored in the picture database are pictures that have been converted so as to carry a textual description; this conversion is based on a picture translation model. The picture translation model is a trained encoder-decoder (Encoder-Decoder) model, built as follows: with the picture pixel matrix as input and the picture description as output, the output error is continuously corrected and the model parameters are adjusted until the model finally converges, thereby achieving the goal of converting pictures into text.
Further, the input of the picture recommendation model is a textual description. By loading a word embedding (Word2Vec) model, the textual description is transformed into a vector, and a similarity computation is performed against the textual descriptions of the pictures in the database; the user's usage frequency of each picture serves as a weight, the pictures are sorted by similarity, and the K pictures with the highest similarity are produced as output.
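A minimal sketch of this ranking step follows, using a plain cosine similarity over pre-computed description vectors and the usage frequency as a multiplicative weight. In the actual system the vectors would come from the loaded Word2Vec model, so the data layout here is an assumption:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k_pictures(query_vec, pictures, k):
    """Return the ids of the k best-scoring pictures.

    pictures is a list of (picture_id, description_vector, usage_frequency);
    the score is the cosine similarity weighted by the user's usage frequency.
    """
    scored = [(pid, cosine(query_vec, vec) * freq) for pid, vec, freq in pictures]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [pid for pid, _ in scored[:k]]
```

Note the design choice implied by the disclosure: frequency weighting biases the ranking toward pictures the user actually sends, not just toward semantic matches.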
The language category of the above source input information includes, but is not limited to: Chinese, English, Korean, Japanese, etc.
In addition, in Figure 20, the picture recommendation system calls the picture recommendation model with mood content such as "happy" and "laugh" as input. Since the recommendation quantity preset by the user is 4, the picture recommendation model selects the four pictures with the highest similarity that express "happy" or "laugh" as the recommended pictures, that is, as the target session information.
Step 3: present the results.

Finally, the picture recommendation system arranges the final recommended picture results through the terminal device and presents them to the target user. As shown in Figure 21, the terminal device finally displays four pictures expressing "happy" and "laugh". When the user clicks one of the pictures, the corresponding picture is sent as the user's reply.

Further, when the user considers the recommendation of the picture recommendation system inaccurate, the user can also click the "Search" button in the lower right corner to search manually for the desired picture.
From the descriptions of Embodiments 1 to 13 above, it can be seen that, when generating target session information, the method for processing session information provided by the embodiments of the present invention relies not only on the source session information input by the first user but also fully considers the acquired auxiliary information, so that the second user can understand the generated natural-language sentence. This enhances the practicality of language generation, erects a bridge of communication between people for whom normal communication is inconvenient, and converts hard-to-understand language into straightforward natural language, thereby removing the user's comprehension obstacles during communication.
In addition, the above analogy translation system can convert critical entities or events that the user finds hard to understand into ones the user can understand, enhancing the user's understanding of the generated content; the above cross-language, cross-domain translation system can satisfy the user's understanding of specialized expressions from an unfamiliar professional domain; the above cross-language, cross-cultural-background translation system can help the user understand specialized expressions from a different cultural background; and the above picture recommendation system makes it more convenient for users to communicate with multi-modal information.
According to another aspect, an embodiment of the present invention further provides a device for processing session information, comprising an information acquisition module 201 and a session generation module 211, wherein: the information acquisition module 201 is configured to acquire auxiliary information and source session information input by a first user; and the session generation module 211 is configured to generate target session information based on the auxiliary information and the source session information and to output the target session information.
Specifically, the auxiliary information includes at least one of historical session information, the user information of the first user, the user information of a second user, and an information field, wherein the information field includes a source information field and a target information field.
Further, the user information includes at least one of the following:
User attribute information;
User preference information;
User schedule information;
User location information;
User behavior information;
User device information.
Further, the information field includes at least one of the following:
Language category information;
Professional domain information;
Cultural background information.
Further, the source information field is obtained by detecting the source session information, or is settable; the target information field is obtained by detecting the historical session information, or is settable.
Further, the source session information includes at least one of the following: an abbreviation, an incomplete word, a natural-language sentence, picture selection information.
Further, the session generation module 211 is specifically configured to perform at least one of the following:

extracting session information keywords from the source session information and the historical session information between the first user and the second user, and generating the target session information according to the session information keywords;

extracting session information keywords from the source session information and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user, and generating the target session information according to the session information keywords and the user information keywords;

according to the information field, translating the source session information of the source information field into session information of the target information field, extracting session information keywords from the session information of the target information field and the historical session information between the first user and the second user, and generating the target session information according to the session information keywords;

according to the information field, translating the source session information of the source information field into session information of the target information field, extracting session information keywords from the session information of the target information field and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user, and generating the target session information according to the session information keywords and the user information keywords.
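As a rough illustration of the first of these paths (keywords extracted from the source session information and the history, then combined into target session information), the following sketch substitutes a stop-word filter and a trivial join for the pre-trained sentence generation model; all names are hypothetical:

```python
# Hypothetical stop-word list; a real system would use a full lexicon.
STOPWORDS = {"the", "a", "an", "to", "you", "i", "is", "of"}

def extract_keywords(*texts):
    """Collect non-stop-word tokens across texts, keeping first-seen order."""
    seen = []
    for text in texts:
        for word in text.lower().split():
            if word not in STOPWORDS and word not in seen:
                seen.append(word)
    return seen

def generate_target_session(source, history):
    """Stand-in for the sentence generation model: join the keywords."""
    keywords = extract_keywords(source, *history)
    return " ".join(keywords)
```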
Further, the session generation module 211 is specifically configured to generate the target session information according to the session information keywords, based on a pre-trained sentence generation model; and
generating the target session information according to the session information keywords and the user information keywords includes: generating the target session information according to the session information keywords and the user information keywords, based on a pre-trained sentence generation model.
Further, the session generation module 211 is specifically configured to perform at least one of the following:

obtaining source object information according to the source session information, and classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information;

obtaining source object information according to the source session information and the historical session information between the first user and more than one second user; classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information;

according to the information field, translating the source session information of the source information field into session information of the target information field; obtaining source object information according to the session information of the target information field and the historical session information between the first user and more than one second user; classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information.

An object includes an entity and/or an event.
Further, the session generation module 211 is specifically configured to translate, according to the information field, the source session information of the source information field into target session information of the target information field.
Further, the session generation module 211 is specifically configured to, according to the information field and based on corresponding pre-trained translation models, perform translation processing on the source session information in sequence for at least one of language category, professional domain, and cultural background, to obtain the target session information.
Further, the device further includes a picture acquisition module 212 (not shown in the figures). The picture acquisition module 212 is configured to obtain, according to the semantic similarity between the source session information and/or the target session information and candidate pictures, a target picture corresponding to the source session information and/or the target session information, and to output the target picture.
Further, the session generation module 211 is specifically configured to perform at least one of the following:

obtaining session supposition information according to the picture selection information input by the first user and the historical session information between the first user and the second user; obtaining a target picture from candidate pictures according to the semantic similarity between the session supposition information and the candidate pictures, and taking the target picture as the target session information;

obtaining session supposition information according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user; obtaining a target picture from candidate pictures according to the semantic similarity between the session supposition information and the candidate pictures, and taking the target picture as the target session information.
Further, the session generation module 211 is specifically configured to obtain the session supposition information, based on a pre-trained dialogue understanding model, according to the picture selection information input by the first user and the historical session information between the first user and the second user; or
to obtain the session supposition information, based on a pre-trained dialogue understanding model, according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user.
Further, the session supposition information includes: the session content that the second user intends to express and/or the session mood that the first user intends to express.
According to another aspect, an embodiment of the present invention further provides a terminal device, comprising: a processor; and a memory configured to store machine-readable instructions that, when executed by the processor, cause the processor to execute the method for processing session information described above.
Figure 23 schematically illustrates a block diagram of a computing system that can be used to implement a base station or user equipment of the embodiments of the present disclosure.

As shown in Figure 23, the computing system 2300 includes a processor 2310, a computer-readable storage medium 2320, an output interface 2330, and an input interface 2340. The computing system 2300 can execute the method described above with reference to Figure 1.
Specifically, the processor 2310 may include, for example, a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)), etc. The processor 2310 may also include on-board memory for caching purposes. The processor 2310 may be a single processing unit or multiple processing units for executing the different actions of the method flow described with reference to Figure 1.
The computer-readable storage medium 2320 may be, for example, any medium that can contain, store, communicate, propagate, or transmit instructions. For example, the readable storage medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: a magnetic storage device, such as a magnetic tape or a hard disk (HDD); an optical storage device, such as a compact disc (CD-ROM); a memory, such as a random access memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 2320 may contain a computer program, which may include code/computer-executable instructions that, when executed by the processor 2310, cause the processor 2310 to execute, for example, the method flow described above in conjunction with Figure 1 and any variant thereof.
The computer program may be configured to have, for example, computer program code comprising computer program modules. For example, in an exemplary embodiment, the code in the computer program may include one or more program modules, for example module 1, module 2, and so on. It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program module combinations are executed by the processor 2310, they cause the processor 2310 to execute, for example, the method flow described above in conjunction with Figure 1 and any variant thereof.
In accordance with the embodiments of the present disclosure, the processor 2310 may use the output interface 2330 and the input interface 2340 to execute the method flow described above in conjunction with Figure 1 and any variant thereof.
Those skilled in the art will appreciate that the present invention involves devices for performing one or more of the operations described in this application. These devices may be specially designed and manufactured for the required purposes, or may include known devices in general-purpose computers. These devices have computer programs stored in them that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable medium (for example, a computer-readable medium), or in any type of medium suitable for storing electronic instructions and coupled to a bus. The computer-readable medium includes, but is not limited to, any type of disk (including a floppy disk, hard disk, optical disc, CD-ROM, and magneto-optical disk), a ROM (Read-Only Memory), a RAM (Random Access Memory), an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, a magnetic card, or an optical card. That is, a readable medium includes any medium in which information is stored or transmitted in a form readable by a device (for example, a computer).
Those skilled in the art will appreciate that each block in these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing method for implementation, so that the schemes specified in one or more blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or the other programmable data processing method.
Those skilled in the art will appreciate that the various operations, methods, and the steps, measures, and schemes in the flows discussed in the present invention may be alternated, changed, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and flows discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, steps, measures, and schemes in the prior art corresponding to the various operations, methods, and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted.
The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (17)
1. A method for processing session information, characterized by comprising:
acquiring auxiliary information and source session information input by a first user; and
generating target session information based on the auxiliary information and the source session information, and outputting the target session information.
2. The method according to claim 1, characterized in that the auxiliary information includes at least one of historical session information, user information of the first user, user information of a second user, and an information field, wherein the information field includes a source information field and a target information field.
3. The method according to claim 2, characterized in that the user information includes at least one of the following:
User attribute information;
User preference information;
User schedule information;
User location information;
User behavior information;
User device information.
4. The method according to claim 2, wherein the information field includes at least one of the following:
Language category information;
Professional domain information;
Cultural background information.
5. The method according to claim 2, wherein the source information field is obtained by detecting the source session information or is settable, and the target information field is obtained by detecting the historical session information or is settable.
6. The method according to claim 1, characterized in that the source session information includes at least one of the following: an abbreviation, an incomplete word, a natural-language sentence, picture selection information.
7. The method according to any one of claims 1-6, wherein generating target session information based on the auxiliary information and the source session information includes at least one of the following:
extracting session information keywords from the source session information and the historical session information between the first user and the second user, and generating the target session information according to the session information keywords;
extracting session information keywords from the source session information and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user, and generating the target session information according to the session information keywords and the user information keywords;
according to the information field, translating the source session information of the source information field into session information of the target information field, extracting session information keywords from the session information of the target information field and the historical session information between the first user and the second user, and generating the target session information according to the session information keywords;
according to the information field, translating the source session information of the source information field into session information of the target information field, extracting session information keywords from the session information of the target information field and the historical session information between the first user and the second user, and user information keywords from the user information of the first user and/or the second user, and generating the target session information according to the session information keywords and the user information keywords.
8. The method according to claim 7, wherein generating the target session information according to the session information keywords includes: generating the target session information according to the session information keywords, based on a pre-trained sentence generation model; and
generating the target session information according to the session information keywords and the user information keywords includes: generating the target session information according to the session information keywords and the user information keywords, based on a pre-trained sentence generation model.
9. The method according to any one of claims 1-6, wherein generating target session information based on the auxiliary information and the source session information includes at least one of the following:
obtaining source object information according to the source session information, and classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information;
obtaining source object information according to the source session information and the historical session information between the first user and more than one second user; classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information;
according to the information field, translating the source session information of the source information field into session information of the target information field; obtaining source object information according to the session information of the target information field and the historical session information between the first user and more than one second user; classifying the source object information to obtain source category information; obtaining candidate target category information according to the user information of the first user; obtaining target category information according to the similarity between the source category information and the candidate target category information; obtaining candidate target object information according to the target category information; obtaining target object information according to the similarity between the source object information and the candidate target object information; and generating the target session information according to the target object information;
wherein the object includes an entity and/or an event.
10. The method according to any one of claims 1-6, wherein generating target session information based on the auxiliary information and the source session information includes:
according to the information field, translating the source session information of the source information field into target session information of the target information field.
11. The method according to claim 10, wherein translating, according to the information field, the source session information of the source information field into the target session information of the target information field includes:
according to the information field and based on corresponding pre-trained translation models, performing translation processing on the source session information in sequence for at least one of language category, professional domain, and cultural background, to obtain the target session information.
12. The method according to any one of claims 7-11, characterized by further comprising:
obtaining, according to the semantic similarity between the source session information and/or the target session information and candidate pictures, a target picture corresponding to the source session information and/or the target session information, and outputting the target picture.
13. The method according to claim 1, wherein generating target session information based on the auxiliary information and the source session information includes at least one of the following:
obtaining session supposition information according to picture selection information input by the first user and the historical session information between the first user and the second user; obtaining a target picture from candidate pictures according to the semantic similarity between the session supposition information and the candidate pictures, and taking the target picture as the target session information;
obtaining session supposition information according to picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user; obtaining a target picture from candidate pictures according to the semantic similarity between the session supposition information and the candidate pictures, and taking the target picture as the target session information.
14. The method according to claim 13, wherein the obtaining of session inference information according to the picture selection information input by the first user and the historical session information between the first user and the second user comprises: obtaining the session inference information, based on a pre-trained dialogue understanding model, according to the picture selection information input by the first user and the historical session information between the first user and the second user;
and the obtaining of session inference information according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user comprises: obtaining the session inference information, based on the pre-trained dialogue understanding model, according to the picture selection information input by the first user, the historical session information between the first user and the second user, and the user information of the first user.
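The data flow of claim 14 can be illustrated with a minimal interface sketch. The patent names a "pre-trained dialogue understanding model" but does not disclose its architecture, so the keyword rule below merely fakes one to show the inputs (picture selection, session history, optional user information) and the output (session inference information, i.e. intended content and mood per claim 15); every class and field name here is a hypothetical choice, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class SessionInference:
    """Claim-15-style inference: intended session content and/or session mood."""
    intended_content: str
    mood: str

@dataclass
class DialogueUnderstandingModel:
    """Stand-in for the pre-trained model; a real one would be learned, not rule-based."""
    happy_words: tuple = ("congratulations", "birthday", "great")

    def infer(self, picture_selection, history, user_info=None):
        # Look at the most recent turn of the historical session information.
        recent = history[-1] if history else ""
        mood = "happy" if any(w in recent.lower() for w in self.happy_words) else "neutral"
        # Optional user information (third input of claim 14) can refine the result.
        if user_info and user_info.get("tone") == "formal":
            mood = "polite-" + mood
        return SessionInference(intended_content=f"reply about {picture_selection}", mood=mood)

model = DialogueUnderstandingModel()
inf = model.infer("cake photo", ["Happy birthday!"], {"tone": "formal"})
print(inf.mood)  # polite-happy
```

The two branches of claim 14 then correspond simply to calling `infer` with or without the `user_info` argument.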
15. The method according to claim 13 or 14, characterized in that the session inference information comprises: session content intended by the first user and/or a session mood intended by the first user.
16. A device for processing session information, characterized by comprising:
an information obtaining module, configured to obtain auxiliary information and source session information input by a first user;
a session generation module, configured to generate target session information based on the auxiliary information and the source session information, and to output the target session information.
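The two-module structure of claim 16 can be sketched structurally as below. The patent specifies only the modules and their responsibilities, so the concrete generation rule (annotating the source message with the auxiliary hint) and all class names are assumptions for illustration.

```python
class InformationObtainingModule:
    """Obtains the auxiliary information and the first user's source session information."""
    def obtain(self, auxiliary_information, source_session_information):
        return {"aux": auxiliary_information, "source": source_session_information}

class SessionGenerationModule:
    """Generates target session information from the gathered inputs."""
    def generate(self, inputs):
        # Placeholder generation rule: annotate the source message with the aux hint.
        return f"{inputs['source']} ({inputs['aux']})"

class SessionInformationDevice:
    """Claim-16-style device wiring the two modules together."""
    def __init__(self):
        self.info_module = InformationObtainingModule()
        self.gen_module = SessionGenerationModule()

    def process(self, auxiliary_information, source_session_information):
        inputs = self.info_module.obtain(auxiliary_information, source_session_information)
        return self.gen_module.generate(inputs)

device = SessionInformationDevice()
print(device.process("translation hint: 'gateau' means cake", "I baked a gateau"))
```

This matches the abstract's intent: the output depends on both the source session information and the auxiliary information, so the recipient can still understand unfamiliar terms in the generated sentence.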
17. A terminal device, comprising a memory and a processor, wherein computer-executable instructions are stored on the memory, and when the instructions are executed by the processor, the method according to any one of claims 1-15 is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810142498.3A CN110209774A (en) | 2018-02-11 | 2018-02-11 | Handle the method, apparatus and terminal device of session information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110209774A true CN110209774A (en) | 2019-09-06 |
Family
ID=67778536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810142498.3A Pending CN110209774A (en) | 2018-02-11 | 2018-02-11 | Handle the method, apparatus and terminal device of session information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110209774A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101365012A (en) * | 2008-10-06 | 2009-02-11 | 深圳华为通信技术有限公司 | Abbreviation operating method and hand-hold communication terminal |
CN101542462A (en) * | 2007-05-16 | 2009-09-23 | 莫卡有限公司 | Establishing and translating within multilingual group messaging sessions using multiple messaging protocols |
CN102165437A (en) * | 2008-07-25 | 2011-08-24 | 夏普株式会社 | Information processing device and information processing method |
US20130297285A1 (en) * | 2012-05-02 | 2013-11-07 | Electronics And Telecommunications Research Institute | Apparatus and method for providing two-way automatic interpretation and translation service |
CN103488630A (en) * | 2013-09-29 | 2014-01-01 | 小米科技有限责任公司 | Method, device and terminal for processing picture |
CN104679733A (en) * | 2013-11-26 | 2015-06-03 | 中国移动通信集团公司 | Voice conversation translation method, device and system |
CN104731776A (en) * | 2015-03-27 | 2015-06-24 | 百度在线网络技术(北京)有限公司 | Providing method and system of translating information |
CN104794111A (en) * | 2015-04-15 | 2015-07-22 | 广州华多网络科技有限公司 | Interactive method and device |
CN104836720A (en) * | 2014-02-12 | 2015-08-12 | 北京三星通信技术研究有限公司 | Method for performing information recommendation in interactive communication, and device |
US20160147744A1 (en) * | 2013-12-25 | 2016-05-26 | Beijing Baidu Netcom Science And Technology Co., Ltd. | On-line voice translation method and device |
CN107168546A (en) * | 2017-03-27 | 2017-09-15 | 上海奔影网络科技有限公司 | Input reminding method and device |
CN107491435A (en) * | 2017-08-14 | 2017-12-19 | 深圳狗尾草智能科技有限公司 | Method and device based on Computer Automatic Recognition user feeling |
Non-Patent Citations (2)
Title |
---|
JAE-WOO CHUNG: "Topic Spotting Common Sense Translation Assistant", ACM, pages 1280 - 1283 *
ZHANG JINFENG: "Conversational Implicature in Communication", Science & Technology Information (Science Education Research), no. 11, pages 134 - 167 *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110674330A (en) * | 2019-09-30 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Expression management method and device, electronic equipment and storage medium |
CN110674330B (en) * | 2019-09-30 | 2024-01-09 | 北京达佳互联信息技术有限公司 | Expression management method and device, electronic equipment and storage medium |
TWI719699B (en) * | 2019-11-04 | 2021-02-21 | 建國科技大學 | Artificial intelligence-assisted ways to say good things |
CN111339790A (en) * | 2020-02-25 | 2020-06-26 | 北京字节跳动网络技术有限公司 | Text translation method, device, equipment and computer readable storage medium |
CN111797215A (en) * | 2020-06-24 | 2020-10-20 | 北京小米松果电子有限公司 | Dialogue method, dialogue device and storage medium |
CN111831816B (en) * | 2020-07-21 | 2023-06-27 | 虎博网络技术(上海)有限公司 | Core content processing method, device, electronic equipment and readable storage medium |
CN111831816A (en) * | 2020-07-21 | 2020-10-27 | 虎博网络技术(上海)有限公司 | Core content processing method and device, electronic equipment and readable storage medium |
CN111914077A (en) * | 2020-08-06 | 2020-11-10 | 平安科技(深圳)有限公司 | Customized speech recommendation method, device, computer equipment and storage medium |
CN112843724A (en) * | 2021-01-18 | 2021-05-28 | 浙江大学 | Game scenario display control method and device, electronic equipment and storage medium |
CN113676394A (en) * | 2021-08-19 | 2021-11-19 | 维沃移动通信(杭州)有限公司 | Information processing method and information processing apparatus |
CN114531408A (en) * | 2022-02-16 | 2022-05-24 | 维沃移动通信有限公司 | Method and device for annotating message, electronic equipment and storage medium |
CN115378890A (en) * | 2022-08-12 | 2022-11-22 | 腾讯科技(武汉)有限公司 | Information input method, information input device, storage medium and computer equipment |
CN115378890B (en) * | 2022-08-12 | 2023-08-18 | 腾讯科技(武汉)有限公司 | Information input method, device, storage medium and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110209774A (en) | Handle the method, apparatus and terminal device of session information | |
US11748555B2 (en) | Systems and methods for machine content generation | |
CN110717017B (en) | Method for processing corpus | |
CN107846350B (en) | Method, computer readable medium and system for context-aware network chat | |
CN109478205B (en) | Architecture and method for computer learning and understanding | |
JP6753707B2 (en) | Artificial intelligence system that supports communication | |
Liu | Semiotics in information systems engineering | |
CN111602147A (en) | Machine learning model based on non-local neural network | |
EP3623957A1 (en) | Generation of point of interest copy | |
Lin et al. | Design and application of augmented reality query-answering system in mobile phone information navigation | |
CN110209897A (en) | Intelligent dialogue method, apparatus, storage medium and equipment | |
US20230252224A1 (en) | Systems and methods for machine content generation | |
CN113239169B (en) | Answer generation method, device, equipment and storage medium based on artificial intelligence | |
KR102169382B1 (en) | Artificial Intelligence-Based Personalized Expert Cross Matching and Proposal System | |
Wilks et al. | A prototype for a conversational companion for reminiscing about images | |
Bell et al. | Microblogging as a mechanism for human–robot interaction | |
CN113704460A (en) | Text classification method and device, electronic equipment and storage medium | |
CN113761156A (en) | Data processing method, device and medium for man-machine interaction conversation and electronic equipment | |
Sharif et al. | Vision to language: Methods, metrics and datasets | |
Nagao | Artificial intelligence accelerates human learning: Discussion data analytics | |
CN114969282A (en) | Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model | |
Rastgoo et al. | A survey on recent advances in Sign Language Production | |
CN117437317A (en) | Image generation method, apparatus, electronic device, storage medium, and program product | |
CN117271745A (en) | Information processing method and device, computing equipment and storage medium | |
CN113763925B (en) | Speech recognition method, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||