CN106201424B - Information interaction method, device, and electronic equipment - Google Patents
Information interaction method, device, and electronic equipment
- Publication number
- CN106201424B CN106201424B CN201610538666.1A CN201610538666A CN106201424B CN 106201424 B CN106201424 B CN 106201424B CN 201610538666 A CN201610538666 A CN 201610538666A CN 106201424 B CN106201424 B CN 106201424B
- Authority
- CN
- China
- Prior art keywords
- content
- user
- interaction content
- information
- specific
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 57
- 230000003993 interaction Effects 0.000 claims abstract description 228
- 238000004891 communication Methods 0.000 claims abstract description 8
- 230000002452 interceptive effect Effects 0.000 claims description 75
- 230000001755 vocal effect Effects 0.000 claims description 52
- 238000001514 detection method Methods 0.000 claims description 8
- 238000013473 artificial intelligence Methods 0.000 abstract description 3
- 238000010586 diagram Methods 0.000 description 11
- 230000008569 process Effects 0.000 description 11
- 230000006870 function Effects 0.000 description 10
- 238000004458 analytical method Methods 0.000 description 8
- 238000012549 training Methods 0.000 description 7
- 238000013528 artificial neural network Methods 0.000 description 5
- 239000013598 vector Substances 0.000 description 5
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000000605 extraction Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000009471 action Effects 0.000 description 2
- 230000003542 behavioural effect Effects 0.000 description 2
- 230000009194 climbing Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000012067 mathematical method Methods 0.000 description 2
- 238000010295 mobile communication Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000013139 quantization Methods 0.000 description 2
- 238000013179 statistical model Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 239000003999 initiator Substances 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/335—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/142—Hidden Markov Models [HMMs]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Probability & Statistics with Applications (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of the present invention provide an information interaction method, an information interaction device, and an electronic device, relating to the fields of communication and artificial intelligence. The information interaction method comprises: obtaining the overall interaction content of multiple target users; determining the specific interaction content related to each target user; judging, based on the determined specific interaction content, whether the target user related to that specific interaction content needs an information prompt; and if so, searching for prompt information matching the specific interaction content, and presenting the prompt information to the related target user in a prompt scene matching that specific interaction content. With the scheme provided by the present application, the interaction content of multiple target users can be analyzed, and prompt information can be provided in a targeted manner.
Description
Technical field
The embodiments of the present invention relate to the fields of communication and artificial intelligence, and in particular to an information interaction method, an information interaction device, and an electronic device.
Background
With the development of semantic parsing and artificial intelligence technology, human-computer interaction (HCI) has found more and more applications in daily life. In human-computer interaction, a person and an electronic device (a mobile phone, a television, a computer, an intelligent robot, etc.) exchange information in a certain conversational language and with a certain interaction mode in order to complete a given task. As user demands vary, human-computer interaction is being widely applied in an increasing number of fields. A typical example application is the companion robot: a companion robot can receive user input in multiple modes (including but not limited to voice input and text input), analyze the input content, and provide a corresponding answer by searching a local or remote database. For example, if a user wants to know today's weather, the user only needs to say "today's weather in Beijing" to the companion robot; the robot receives the user's voice content, retrieves the associated weather database, and presents the weather information to the user by voice or text.
In the course of implementing the present invention, the inventor found that in the prior art the human-computer interaction process is typically initiated actively by a single user: the user must pose a targeted question to the electronic device with a human-computer interaction function before the device provides a corresponding answer. For the electronic device with a human-computer interaction function, this interaction mode is passive. In actual use, beyond having questions answered directly, users would rather have the electronic device with a human-computer interaction function obtain their demand information in a proactive manner and actively carry out human-computer interaction according to the obtained demand information.
Summary of the invention
Embodiments of the present invention provide an information interaction method, an information interaction device, and an electronic device, in order to overcome the shortcoming in the prior art that the interaction mode during human-computer interaction is too limited to satisfy the interaction demands of a complex interaction environment, and to realize intelligent human-computer interaction in such an environment.
In a first aspect, an embodiment of the present invention provides an information interaction method, comprising:
obtaining the overall interaction content of multiple target users;
determining the specific interaction content related to each target user;
judging, based on the determined specific interaction content, whether the target user related to the specific interaction content needs an information prompt;
and if so, searching for prompt information matching the specific interaction content, and presenting the prompt information to the target user related to the specific interaction content in a prompt scene matching the specific interaction content.
According to a specific implementation of the embodiment of the present invention, obtaining the overall interaction content of multiple target users comprises:
obtaining the voice interaction content of more than one user, and determining the overall interaction content based on the voice interaction content.
According to a specific implementation of the embodiment of the present invention, determining the overall interaction content based on the voice interaction content comprises:
obtaining context features corresponding to the voice interaction content;
obtaining a context feature database associated with the context features;
performing speech recognition on the voice interaction content, and matching the recognized content against the context feature patterns in the context feature database to obtain the overall interaction content.
According to a specific implementation of the embodiment of the present invention, determining the overall interaction content based on the voice interaction content further comprises:
extracting voiceprint feature information from the voice interaction content;
obtaining a user feature database associated with the voiceprint feature information;
performing speech recognition on the voice interaction content, and matching the recognized content against the content in the user feature database to obtain the overall interaction content.
According to a specific implementation of the embodiment of the present invention, determining the specific interaction content related to each target user comprises:
extracting voiceprint feature information from the voice interaction content;
determining each target user related to the voice interaction content according to the voiceprint feature information;
splitting the overall interaction content by target user to obtain the specific interaction content related to each target user.
According to a specific implementation of the embodiment of the present invention, splitting the overall interaction content by target user to obtain the specific interaction content related to each target user comprises:
splitting the overall interaction content based on the voiceprint feature information;
obtaining a user feature database corresponding to the voiceprint feature information;
matching the content obtained after splitting the overall interaction content against the user feature database to obtain the specific interaction content related to each target user.
According to a specific implementation of the embodiment of the present invention, the overall interaction content of the multiple target users comprises one or more of the following, carried out between the current user and one or more other users: voice call content, mail interaction content, short message interaction content, conversation interaction content, and instant-messaging text interaction content.
According to a specific implementation of the embodiment of the present invention, presenting the prompt information to the target user related to the specific interaction content in a prompt scene matching the specific interaction content comprises:
determining the prompt scene matching the interaction content;
judging the degree to which the current scene matches the prompt scene matching the interaction content;
and when the matching degree is greater than a preset threshold, presenting the prompt information to the user in a preset information prompt manner.
According to a specific implementation of the embodiment of the present invention, after presenting the prompt information to the target user related to the specific interaction content in a prompt scene matching the specific interaction content, the method further comprises:
receiving feedback information input by the target user related to the specific interaction content; and
determining, based on the feedback information, whether to continue the information prompt.
In a second aspect, an embodiment of the present invention further provides an information interaction device, comprising:
an obtaining module, configured to obtain the overall interaction content of multiple target users;
a determining module, configured to determine the specific interaction content related to each target user;
a judging module, configured to judge, based on the determined specific interaction content, whether the target user related to the specific interaction content needs an information prompt; and
a prompting module, configured to, when the target user related to the specific interaction content needs an information prompt, search for prompt information matching the specific interaction content and present the prompt information to the target user related to the specific interaction content in a prompt scene matching the specific interaction content.
According to a specific implementation of the embodiment of the present invention, the obtaining module is further configured to:
obtain the voice interaction content of more than one user, and determine the overall interaction content based on the voice interaction content.
According to a specific implementation of the embodiment of the present invention, determining the overall interaction content based on the voice interaction content comprises:
obtaining context features corresponding to the voice interaction content;
obtaining a context feature database associated with the context features;
performing speech recognition on the voice interaction content, and matching the recognized content against the context feature patterns in the context feature database to obtain the overall interaction content.
According to a specific implementation of the embodiment of the present invention, determining the overall interaction content based on the voice interaction content further comprises:
extracting voiceprint feature information from the voice interaction content;
obtaining a user feature database associated with the voiceprint feature information;
performing speech recognition on the voice interaction content, and matching the recognized content against the content in the user feature database to obtain the overall interaction content.
According to a specific implementation of the embodiment of the present invention, the determining module is further configured to:
extract voiceprint feature information from the voice interaction content;
determine each target user related to the voice interaction content according to the voiceprint feature information;
and split the overall interaction content by target user to obtain the specific interaction content related to each target user.
According to a specific implementation of the embodiment of the present invention, splitting the overall interaction content by target user to obtain the specific interaction content related to each target user comprises:
splitting the overall interaction content based on the voiceprint feature information;
obtaining a user feature database corresponding to the voiceprint feature information;
matching the content obtained after splitting the overall interaction content against the user feature database to obtain the specific interaction content related to each target user.
According to a specific implementation of the embodiment of the present invention, the overall interaction content of the multiple target users comprises one or more of the following, carried out between the current user and one or more other users: voice call content, mail interaction content, short message interaction content, conversation interaction content, and instant-messaging text interaction content.
According to a specific implementation of the embodiment of the present invention, the prompting module is further configured to:
determine the prompt scene matching the interaction content;
judge the degree to which the current scene matches the prompt scene matching the interaction content;
and when the matching degree is greater than a preset threshold, present the prompt information to the user in a preset information prompt manner.
According to a specific implementation of the embodiment of the present invention, the information interaction device further comprises a feedback module, configured to:
after the prompt information is presented to the target user related to the specific interaction content in a prompt scene matching the specific interaction content, receive feedback information input by that target user; and
determine, based on the feedback information, whether to continue the information prompt.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising: a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used to supply power to each circuit or component of the electronic device; the memory is used to store executable program code; and the processor, by reading the executable program code stored in the memory, runs the program corresponding to the executable program code so as to execute any of the aforementioned information interaction methods.
With the information interaction method, device, and electronic device provided by the embodiments of the present invention, by splitting the overall interaction content in a targeted manner and determining an effective interaction scene, information interaction can be carried out with a specific user in a specific scene in a targeted manner. This overcomes the disadvantage in the prior art that the interaction mode is too limited to adapt to a complex interaction environment, and improves the efficiency of interaction.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an information interaction method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a process for obtaining overall interaction content based on voice interaction, provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a voice interaction content recognition process based on a user's voiceprint features, provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of determining, based on a user's voiceprint feature information, the specific interaction content related to a target user, provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of another way of determining, based on a user's voiceprint feature information, the specific interaction content related to a target user, provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of presenting prompt information to a user based on a scene, provided by an embodiment of the present invention;
Fig. 7 is a schematic flowchart of performing an information prompt based on user feedback content, provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an information interaction device provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of another information interaction device provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of embodiments
In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The interaction mode in the prior art is usually fixed: the initiator of the interaction is typically the user, and the interactive device usually passively receives the user's input content and then answers it. As an example, a user can ask a mobile phone assistant (such as Siri built into the iPhone): "Call John", and the phone assistant will dial John's telephone number. This interaction mode is too limited; more often, the user wishes the interaction object (such as the phone assistant) to actively obtain the user's behavioral information and then actively provide the user with information prompts.
Fig. 1 is a schematic flowchart of the information interaction method in an embodiment of the present invention. The method comprises:
S101: Obtain the overall interaction content of multiple target users.
Before the users' overall interaction content is obtained, an overall-interaction-content acquisition program needs to be set up. Such a program can exist in various forms, for example: an information monitoring program installed on the user's computer (e.g., for received mail content); a monitoring program installed on a mobile phone for phone information (short messages, call records, chat records of mobile phone apps, etc.); or a separately designed robot device capable of monitoring and recording the chat content of more than one user.
The overall interaction content of target users can be obtained in several ways. In one way, the user actively turns on an acquisition switch, after which the acquisition program obtains the user's overall interaction content within a period of time (for example, one week). This interaction content includes the user's mailbox content over the period, the user's online search records, the user's shopping records, the user's call records, the chat records the user carried out through instant messaging tools, the content of the user's phone calls, and so on.
Besides the acquisition mode in which the user actively turns on the acquisition switch, the acquisition program can also obtain content proactively: after the user enables the overall-interaction-content acquisition function, the program continuously monitors the user's interaction content.
Specifically, the acquisition program can be built into multiple devices of the user in the form of software, and the acquisition programs on the user's multiple devices collect and aggregate information under the same user ID.
The overall interaction content comprises the interaction content between the current user and one or more other users, for example, the content the current user exchanges with one or more users by mail, the voice content of the current user sitting in a meeting room with colleagues, or the current user's chat records with multiple QQ friends in a QQ group.
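The aggregation under one user ID described above can be illustrated with a minimal Python sketch. All names here (`InteractionRecord`, `aggregate_by_user_id`, the `source` labels) are hypothetical illustrations, not part of the patent:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InteractionRecord:
    user_id: str  # the same ID is used across all of the user's devices
    source: str   # e.g. "mail", "sms", "im", "call"
    text: str

def aggregate_by_user_id(records: List[InteractionRecord]) -> Dict[str, List[InteractionRecord]]:
    """Merge records collected on multiple devices under one user ID."""
    merged: Dict[str, List[InteractionRecord]] = {}
    for rec in records:
        merged.setdefault(rec.user_id, []).append(rec)
    return merged

# Records gathered by acquisition programs on different devices
records = [
    InteractionRecord("john", "mail", "Work arrangement for next week"),
    InteractionRecord("john", "im", "Chat in the QQ group"),
    InteractionRecord("ada", "mail", "Reply with a different opinion"),
]
whole_content = aggregate_by_user_id(records)
```

A real acquisition program would of course pull from mailboxes, call logs, and messaging apps rather than an in-memory list; the point is only that all sources key into the same user ID.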
S102: Determine the specific interaction content related to each target user.
Since there are multiple interaction objects in the overall interaction content, the information in the overall interaction content needs to be split. For example, the current user John has sent a work arrangement by mail to his subordinates Ada and Bob; Ada and Bob each propose different opinions on the arrangement in their replies, and John modifies the plan based on the opinions of Ada and Bob. At this point it is necessary to determine what the plan content related to John is, and what the plan content related to Ada and to Bob is, respectively; these plan content changes related to each user constitute the specific interaction content related to each target user.
There are various ways to determine the specific interaction content related to each target user. For mailbox interaction content, the interaction content of each user can be judged by the sender and recipient labels. For voice chat records between the current user and one or more users, which voice chat object the content in a voice chat record belongs to can be determined by means of speech recognition, and the specific chat content of each voice chat object can be taken as the specific interaction content related to that object.
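Once each utterance has been attributed to a participant (by sender label or by voiceprint-based recognition), the splitting step itself reduces to grouping. A minimal sketch, with hypothetical names, assuming a transcript of (speaker, text) pairs:

```python
from collections import defaultdict

def split_by_participant(utterances):
    """Split a transcript of (speaker, text) pairs into the
    per-user specific interaction content described in S102."""
    per_user = defaultdict(list)
    for speaker, text in utterances:
        per_user[speaker].append(text)
    return dict(per_user)

# The John/Ada/Bob example from the description, as a flat transcript
transcript = [
    ("John", "Here is the work arrangement."),
    ("Ada", "I would schedule task A earlier."),
    ("Bob", "Task B needs more people."),
    ("John", "I will revise the plan accordingly."),
]
specific_content = split_by_participant(transcript)
```

The hard part in practice is producing the speaker labels in the first place (mail headers are trivial; voiceprint attribution is not), which is why the patent treats that as a separate step.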
S103: Judge, based on the determined specific interaction content, whether the target user related to the specific interaction content needs an information prompt.
By analyzing the information contained in the specific interaction content, information prompt elements can be obtained. A prompt element is a particular piece of content relevant to prompting the user, for example the user's schedule information. Unlike traditional schedule information arranged by the user, in this method semantic analysis is carried out on the content data to determine whether the user needs prompt information. For example, if the user James mentions in the specific interaction content, "The weather will be pretty good tomorrow, Steven, let's go mountain climbing together", it is determined that James may have an appointment with Steven tomorrow, so an information prompt for James is needed; an information prompt can further be sent to Steven as well.
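The patent leaves the semantic analysis method open (the description elsewhere mentions statistical models and neural networks). As a deliberately crude stand-in, the following sketch detects a prompt element with keyword patterns; the pattern list and function name are hypothetical, and a real system would use a trained model instead:

```python
import re

# Hypothetical trigger patterns standing in for semantic analysis
APPOINTMENT_PATTERNS = [
    r"\btomorrow\b",
    r"\blet'?s go\b",
    r"\bmeet (?:at|in)\b",
]

def needs_prompt(specific_content: str) -> bool:
    """Return True if the content appears to contain a prompt
    element such as an appointment (step S103)."""
    text = specific_content.lower()
    return any(re.search(pattern, text) for pattern in APPOINTMENT_PATTERNS)
```

On the James/Steven example above, the "tomorrow" and "let's go" triggers both fire, so the judgment step would request a prompt for James.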
S104: If so, search for prompt information matching the specific interaction content, and present the prompt information to the target user related to the specific interaction content in a prompt scene matching the specific interaction content.
After determining that a user needs an information prompt, the particular content of the user's prompt information needs to be determined, as well as the scene in which the information prompt is delivered to the user.
The particular content of the user's prompt information refers to the specific information content matched with the prompt element. For example, if the prompt element of user James is "go mountain climbing tomorrow", the information matching "go mountain climbing tomorrow" includes: asking the user for the specific departure time, recommending suitable climbing places, suggesting points for attention when climbing, information on feasible means of transport to the climbing spot, and so on.
Besides determining the particular content of the prompt information, the prompt scene also needs to be judged. The prompt scene comprises elements such as the most suitable time and position for sending prompt information to the target user. For example, user David has arranged to meet Mary at the Rose hotel at 20:00 tonight; the trip from David's departure place to the Rose hotel takes 30 minutes, and allowing 10 minutes to arrive at the Rose hotel in advance, a prompt-time scene is constituted at 19:20, in which the prompt information can be sent and presented to user David. Based on the same principle, the prompt information can also be sent and presented to Mary.
There are various optional ways of presenting the prompt information: it can be presented by mail, short message, calendar reminder, and the like, or through a specially arranged alert device (a display screen, a companion robot, etc.).
Through the scheme of this embodiment, the whole interaction content of the user can be actively extracted, specific prompt content is determined based on analysis of the whole interaction content, and prompt information is sent to the user in a suitable prompt scene. This solves the problem that the user has to set up each reminder item separately; at the same time, more comprehensive user interaction environment information can be obtained, providing a more comprehensive information reminder.
As another embodiment, Fig. 2 illustrates a scheme for obtaining the whole interaction content based on voice interaction; the scheme includes the following steps:
S201: the interactive voice content of more than one user is obtained.
The interactive voice content of the users can be obtained in various ways, for example by an interactive voice content recording program built into the user's mobile phone, which records the conversation between the current user and one or more other users; or in a combined software/hardware manner, for example by providing an intelligent electronic device with a voice recording function to record the users' interactive voice content.
Alternatively, the interactive voice content of the users can be stored locally in a specific audio format (MP3, WAV, etc.), or uploaded to a preset server.
S202: context characteristic corresponding with the interactive voice content is obtained.
During voice recording, the current context characteristics can be obtained. These context characteristics include, but are not limited to, the time information and location information of the voice interaction and the chat scenario determined according to the user's behavioral habits or the current chat content. The chat scenarios include: a work scenario, a home scenario, a dining scenario, a moving scenario, etc.
S203: context characteristic database associated with the context characteristic is obtained.
Context characteristics are stored in a context characteristic database, and different context characteristics constitute different context modes. The context characteristic database is stored locally on the electronic device or at a server end in communication connection with the electronic device. After step S202 has determined the current context characteristic, a database matching the context characteristic is searched for locally or at the server end, and is determined as the context characteristic database associated with the context characteristic.
When the context characteristic database is stored locally, it can be loaded directly; when the context characteristic database is located at the server end, the context characteristic database is downloaded locally and then loaded.
S204: performing speech recognition on the interactive voice content, matching the content after speech recognition against the context characteristic modes in the context characteristic database, and obtaining the whole interaction content.
Before speech recognition, the voice signal usually needs to be pre-processed. Pre-processing refers to processing the raw speech before feature extraction so as to partially remove noise and the influence of different speakers, making the processed signal better reflect the essential characteristics of the speech. For example, endpoint detection and speech enhancement can be used for pre-processing. Endpoint detection refers to distinguishing the speech and non-speech signal periods in the voice signal and accurately determining the starting point of the speech signal. After endpoint detection, subsequent processing can be performed on the speech signal only, which plays an important role in improving the accuracy of the model and the recognition correctness rate. Speech enhancement mainly eliminates the influence of ambient noise on the speech, for example by filtering the noise signal with Wiener filtering.
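The endpoint detection step can be sketched with a toy energy-based detector. The frame length and threshold below are illustrative assumptions; a real system would use more robust features than mean absolute amplitude:

```python
def detect_endpoints(samples, frame_len=4, threshold=0.5):
    """Mark each frame as speech when its mean absolute amplitude
    exceeds the threshold; return (start, end) frame indices of the
    speech region, or None if no frame qualifies."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    voiced = [sum(abs(x) for x in f) / len(f) > threshold for f in frames]
    if True not in voiced:
        return None
    start = voiced.index(True)
    end = len(voiced) - 1 - voiced[::-1].index(True)
    return start, end
```

Frames outside the returned range would be discarded before recognition, which is the role endpoint detection plays in the pre-processing described above.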
Speech recognition is a pattern recognition process: the recorded speech is compared one by one with preset reference patterns to obtain the speech recognition result. The speech recognition process can be completed with a variety of speech recognition algorithms, including: the dynamic time warping (DTW) algorithm, the vector quantization (VQ) method based on a nonparametric model, the hidden Markov model (HMM) method based on a parametric model, and speech recognition methods based on artificial neural networks (ANN), support vector machines, and the like.
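Of the algorithms listed, DTW is the simplest to sketch. The sequences and labels below are toy placeholders, not real acoustic feature sequences:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(sample, templates):
    """Return the label of the reference template nearest to the sample."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

This mirrors the "compare one by one with preset reference patterns" description: each template is a reference pattern, and the minimum-distance template gives the recognition result.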
During speech recognition, a language model is needed, i.e. a probabilistic model for calculating the occurrence probability of a sentence. It is mainly used to determine which word sequence is more likely, or to predict the next word given several preceding words. The language model is obtained by performing grammatical and semantic analysis on a training text database and training based on a statistical model. With different training text databases, the recognition results will also differ. The scheme in the embodiment of the present invention then performs a secondary match between the recognized language content and the content in the context characteristic database, finally obtaining a speech recognition result with better accuracy. During speech recognition, specific context characteristics need to be obtained from the context characteristic database, and the currently obtained interaction content is matched with the context characteristic modes to judge the current exchange scenario.

By matching the context characteristic database against the recognized content during speech recognition, the accuracy of speech recognition is improved.
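The secondary match against the context characteristic database can be viewed as re-ranking recognizer hypotheses. The additive scoring below is an assumed simplification added for illustration, not the embodiment's actual scoring formula:

```python
def secondary_match(hypotheses, context_db):
    """Re-rank recognizer hypotheses: score = acoustic score plus a
    bonus for each word also present in the active context
    characteristic database. Returns the best hypothesis text."""
    def rescored(hyp):
        text, acoustic = hyp
        bonus = sum(1.0 for w in text.split() if w in context_db)
        return acoustic + bonus
    return max(hypotheses, key=rescored)[0]
```

The effect is that a hypothesis consistent with the current scenario (e.g. a work scenario's vocabulary) can overtake an acoustically slightly better but contextually implausible one.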
Fig. 3 illustrates a scheme for recognizing interactive voice content based on the vocal print features of users. The scheme shown in Fig. 3 can be executed merged with the scheme shown in Fig. 2, or as an independent scheme executed separately. The scheme includes the following steps:
S301: the vocal print feature information in the interactive voice content is extracted.
The task of vocal print feature extraction is to extract and select acoustic or language features that have characteristics such as strong separability and high stability with respect to the speaker's vocal print. Vocal print features cover many aspects. From the angle of what can be modeled with mathematical methods, the features usable by an automatic vocal print recognition model include: (1) acoustic features (cepstrum); (2) lexical features (speaker-relevant word n-grams, phoneme n-grams); (3) prosodic features (pitch and energy "postures" described with n-grams); (4) language, dialect and accent information; (5) channel information (which kind of channel is used).
There are usually vocal print features of multiple users in the interactive voice content, so when performing vocal print feature extraction, the vocal print feature information of the multiple users needs to be extracted.
S302: user feature database associated with the vocal print feature information is obtained.
A user feature database is stored locally on the electronic device or at a server end in communication connection with the electronic device. After step S301 has determined the vocal print features of the current one or more users, a database matching the vocal print features is searched for locally or at the server end, and is determined as the user feature database associated with the vocal print features.
When the user feature database is stored locally, it can be loaded directly; when the user feature database is located at the server end, the user feature database is downloaded locally and then loaded.
S303: performing speech recognition on the interactive voice content, matching the content after speech recognition against the content in the user feature database, and obtaining the whole interaction content.
Before speech recognition, the voice signal usually needs to be pre-processed. Pre-processing refers to processing the raw speech before feature extraction so as to partially remove noise and the influence of different speakers, making the processed signal better reflect the essential characteristics of the speech. For example, endpoint detection and speech enhancement can be used for pre-processing. Endpoint detection refers to distinguishing the speech and non-speech signal periods in the voice signal and accurately determining the starting point of the speech signal. After endpoint detection, subsequent processing can be performed on the speech signal only, which plays an important role in improving the accuracy of the model and the recognition correctness rate. Speech enhancement mainly eliminates the influence of ambient noise on the speech, for example by filtering the noise signal with Wiener filtering.
Speech recognition is a pattern recognition process: the recorded speech is compared one by one with preset reference patterns to obtain the speech recognition result. The speech recognition process can be completed with a variety of speech recognition algorithms, including: the dynamic time warping (DTW) algorithm, the vector quantization (VQ) method based on a nonparametric model, the hidden Markov model (HMM) method based on a parametric model, and speech recognition methods based on artificial neural networks (ANN), support vector machines, and the like.
During speech recognition, a language model is needed, i.e. a probabilistic model for calculating the occurrence probability of a sentence. It is mainly used to determine which word sequence is more likely, or to predict the next word given several preceding words. The language model is obtained by performing grammatical and semantic analysis on a training text database and training based on a statistical model. With different training text databases, the recognition results will also differ. The scheme in the embodiment of the present invention then performs a secondary match between the recognized language content and the content in the user feature database, finally obtaining a speech recognition result with better accuracy.
By matching the user feature database against the recognized content during speech recognition, the accuracy of speech recognition is further improved.
Figs. 4-5 illustrate the process of determining specific interaction content relevant to the target users based on the vocal print feature information of the users, including:
S401: the vocal print feature information in the interactive voice content is extracted.
The task of vocal print feature extraction is to extract and select acoustic or language features that have characteristics such as strong separability and high stability with respect to the speaker's vocal print. Vocal print features cover many aspects. From the angle of what can be modeled with mathematical methods, the features usable by an automatic vocal print recognition model include: (1) acoustic features (cepstrum); (2) lexical features (speaker-relevant word n-grams, phoneme n-grams); (3) prosodic features (pitch and energy "postures" described with n-grams); (4) language, dialect and accent information; (5) channel information (which kind of channel is used). There are usually vocal print features of multiple users in the interactive voice content, so when performing vocal print feature extraction, the vocal print feature information of the multiple users needs to be extracted.
S402: determining each target user relevant to the interactive voice content according to the vocal print feature information.
By means of pattern recognition or clustering, the information with the same vocal print features is divided into one class, and the user corresponding to each class of vocal print features is determined. For example, the following recognition methods can be used:
(1) Template matching method: mainly uses dynamic time warping (DTW) to align the training and test feature sequences, and is used in fixed-phrase applications (usually text-dependent tasks);
(2) Nearest neighbor method: all feature vectors are retained during training; during recognition, the K nearest training vectors are found for each vector and recognition is performed accordingly; the amount of model storage and similarity calculation is usually very large;
(3) Neural network method: comes in many forms, such as multilayer perceptrons and radial basis functions (RBF), which can be trained explicitly to distinguish a speaker from background speakers;
(4) Hidden Markov model (HMM) method: usually uses single-state HMMs or Gaussian mixture models (GMM);
(5) VQ clustering method (such as LBG): this method works well, the algorithm complexity is not high, and combined with the HMM method it can achieve even better results.
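Method (2) above, nearest-neighbour identification, can be sketched with cosine similarity over enrolled vocal print vectors. The two-dimensional vectors and names below are toy placeholders for real vocal print features:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def identify_speaker(feature, enrolled):
    """Return the enrolled speaker whose stored vocal print vector is
    most similar to the observed feature vector."""
    return max(enrolled, key=lambda name: cosine(feature, enrolled[name]))
```

Each observed utterance's feature vector is assigned to the closest enrolled speaker, which yields the per-class grouping described in step S402.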
S403: splitting the whole interaction content according to each target user, and obtaining the specific interaction content relevant to each target user.
Based on the particular users identified in step S402 and the particular content associated with each particular user, the specific interaction content relevant to each target user is obtained.
Further, step S403, splitting the whole interaction content according to each target user and obtaining the specific interaction content relevant to each target user, further includes the following steps:
S4031: the whole interaction content is split based on the vocal print feature information.
S4032: user feature database corresponding with the vocal print feature information is obtained.
The user feature database stores at least two types of data: one is user type data, used to determine the specific user according to the user's characteristic information (such as vocal print feature information); the other type is the language characteristic database of the particular user. After the specific user type has been determined, the voice feature data corresponding to that user is loaded, and the chat content of the specific user is corrected accordingly.
S4033: matching the content obtained after splitting the whole interaction content against the user feature database, and obtaining the specific interaction content relevant to each target user.
Through the content of this embodiment, the whole interaction content can be split based on the vocal print feature information of the users, and each target user and the specific interaction content associated with that target user can be determined.
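Once each utterance carries a speaker label (e.g. from the vocal print identification of step S402), the split of steps S4031-S4033 reduces to grouping utterances per user. A minimal sketch, with hypothetical labels:

```python
def split_by_speaker(utterances):
    """Group (speaker, text) pairs into per-user specific interaction
    content: each target user maps to the list of things they said."""
    per_user = {}
    for speaker, text in utterances:
        per_user.setdefault(speaker, []).append(text)
    return per_user
```

The resulting per-user lists are what the later steps analyze for prompt elements.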
Fig. 6 illustrates a scheme for displaying prompt information to the user based on the scene; the scheme includes:
S601: determining a prompt scene matched with the interaction content.
In addition to determining the particular content of the prompt information, the prompt scene also needs to be determined. The prompt scene refers to elements such as the most suitable time and position for sending the prompt information to the target user. For example, user David has arranged to meet Mary at the Rose hotel at 20:00 tonight; the trip from David's departure place to the Rose hotel takes 30 minutes, and allowing 10 extra minutes to arrive at the Rose hotel early, a time prompt scene for the prompt information is constituted at 19:20, and in this scene the prompt information can be sent to and displayed for user David. Based on the same principle, the prompt information can also be sent to and displayed for Mary.
S602: judging the matching degree of the current scene with the prompt scene matched with the interaction content.
Not all scenes are suitable for sending prompt information to the user, so the matching degree between the current scene and the interaction scene needs to be determined. For example, user David plans to meet Mary at the Rose hotel at 20:00 tonight, but it is found through the current scene that David has already arrived at the Rose hotel; at this point there is no longer any need to remind David.
The specific scene matching degree can be realized with many algorithms, for example a matching method based on credibility: credibility is divided into reliability, availability and timeliness, and an integrated credibility assessment model is established, which is not described in detail here.
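A hypothetical weighted form of the credibility assessment is sketched below. The weights and threshold are assumptions for illustration only; the embodiment does not fix them:

```python
def scene_match_degree(reliability, availability, timeliness,
                       weights=(0.4, 0.3, 0.3)):
    """Weighted credibility score in [0, 1] combining the three
    factors named in the text (each factor is assumed in [0, 1])."""
    scores = (reliability, availability, timeliness)
    return sum(w * s for w, s in zip(weights, scores))

def should_prompt(reliability, availability, timeliness, threshold=0.6):
    """Step S603: prompt only when the matching degree exceeds the
    preset threshold."""
    return scene_match_degree(reliability, availability, timeliness) > threshold
```

In the David example above, a scene with high timeliness but near-zero availability (he is already at the hotel) would fall below the threshold and suppress the reminder.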
S603: when the matching degree is greater than a preset threshold, displaying the prompt information to the user in a preset information prompt manner.
The prompt information can be displayed in a plurality of optional ways, for example by mail, short message, or calendar reminder, or through a dedicated alert device (a display screen, a companion robot, etc.).
Fig. 7 gives a flow chart of performing information prompting based on user feedback content. In addition to the content of the embodiment corresponding to Fig. 1, the scheme further includes the following steps:
S105: receiving the feedback information input by the target user relevant to the specific interaction content.
After the prompt information has been displayed to the user, an interface for user feedback is also provided. For example, the user has mentioned in a conversation "I plan to buy a mobile phone recently"; when there is a mobile phone promotion, prompt information is sent to the user: "A model X mobile phone is currently on sale at a reduced price; do you need to buy one?". The user can then reply by voice or manual input "I don't like this mobile phone", which serves as the user's feedback information.
S106: determine whether to continue information alert based on the feedback information.
After receiving the user's feedback information, self-learning is performed on the user's input information. After understanding the user's input information, it can be judged whether the prompt information needs to be continued. For example, in S105, after the user indicates dislike of the recommended mobile phone, other kinds of mobile phones can continue to be shown to the user in the form of prompt information.
Through the content of this embodiment, relearning can be performed based on the user's feedback information, and more targeted information can be displayed to the user.
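The S105-S106 feedback loop can be sketched as follows. The phrase matching is a crude stand-in for the embodiment's self-learning on user input, and the function and list names are hypothetical:

```python
def update_recommendations(candidates, feedback):
    """If the feedback rejects the currently shown item, drop it and
    return the next candidate to prompt with; otherwise stop prompting."""
    if "don't like" in feedback or "dislike" in feedback:
        remaining = candidates[1:]          # discard the rejected item
        return remaining[0] if remaining else None
    return None  # user accepted, or no further prompt is needed
```

A real implementation would replace the substring test with semantic analysis of the voice or text feedback.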
Corresponding to the method of the embodiment of Fig. 1, the embodiment of the present invention also discloses an information interaction device. As shown in Fig. 8, the information interaction device includes:
An obtaining module 801, configured to obtain the whole interaction content of multiple target users.
Before obtaining the whole interaction content of the users, a whole interaction content obtaining program needs to be set up. The whole interaction content obtaining program can exist in a variety of forms, for example an information monitoring program installed in the user's computer (for example for mail content), a mobile phone information monitoring program installed in the mobile phone (short messages, call records, chat records of mobile phone apps, etc.), or a separately designed robot device that can monitor and record the chat content of more than one user.
The whole interaction content of the target users can be obtained in several ways. One way is that the user actively turns on a whole interaction content obtaining switch, whereupon the whole interaction content obtaining program obtains the user's whole interaction content within a period of time (for example one week). These interaction contents include the user's mailbox content over the period, the user's online search records, the user's shopping records, the user's call records, chat records made by the user through instant messaging tools, the content of the user's phone calls, etc.
In addition to the mode in which the user actively turns on the whole interaction content obtaining switch, the whole interaction content obtaining program can also obtain content actively; that is, after the user turns on the whole interaction content obtaining function, the whole interaction content obtaining program monitors the user's interaction content at all times.
Specifically, the whole interaction content obtaining program can be built into the user's multiple devices in the form of software; the whole interaction content obtaining programs in the user's multiple devices use the same user ID to obtain and aggregate the information.
The whole interaction content includes the whole interaction content of the current user and one or more other users, for example the content the current user communicates with one or more users by mail, the voice content of a meeting the current user sits in with colleagues in a meeting room, or the current user's chat records with multiple QQ friends in QQ groups.
A determining module 802, configured to determine the specific interaction content relevant to each target user.
Since there are multiple interaction objects in the whole interaction content, the information in the whole interaction content needs to be split. For example, current user John has sent a job arrangement by mail to his subordinates Ada and Bob respectively; Ada and Bob each propose different opinions on the job arrangement in their mails, and John modifies the scheme based on the opinions of Ada and Bob. At this point it is necessary to determine what the specific scheme content relevant to John is, and what the specific scheme contents relevant to Ada and to Bob are respectively; these specific scheme contents relevant to each user constitute the specific interaction content relevant to each target user.
There are many ways to determine the specific interaction content relevant to each target user. For example, for mailbox interaction content, the interaction content of each user can be judged by the sender and recipient labels; for voice chat records between the current user and one or more other users, it can be determined by means of speech recognition which voice chat object each part of the voice chat record belongs to, and the specific chat content of each voice chat object is taken as the specific interaction content relevant to that voice chat object.
A judgment module 803, configured to judge, based on the determined specific interaction content, whether the target user relevant to the specific interaction content needs an information prompt.
By analyzing the information contained in the specific interaction content, an information prompt element can be obtained. A user prompt element is a particular content relevant to prompting the user, for example the user's schedule information. Unlike traditional user-driven schedule arrangement information, in this method semantic analysis can be performed on the content data, and it is thereby determined whether the user needs prompt information. For example, user James mentions in the specific interaction content "The weather will be pretty good tomorrow, Steven, let's go mountain climbing together"; at this moment it is determined that James may have an appointment with Steven tomorrow, and it is determined that user James needs an information prompt; further, an information alert can also be sent to Steven.
A prompt module 804, configured to, when the target user relevant to the specific interaction content needs an information prompt, search for prompt information matching the specific interaction content, and display the prompt information to the target user relevant to the specific interaction content in a prompt scene that matches the specific interaction content.
After determining that the user needs an information prompt, it is necessary to determine the particular content of the prompt information for the user, as well as the scene in which the information prompt is to be delivered to the user.
The particular content of the user prompt information refers to specific information content matched with the user prompt element. For example, if the prompt element of user James is "go mountain climbing tomorrow", the information matched with "go mountain climbing tomorrow" may include: asking the user for a specific departure time, recommending a convenient climbing location, suggesting climbing precautions to the user, and informing the user of feasible means of transportation to the climbing location.
In addition to determining the particular content of the prompt information, the prompt scene also needs to be determined. The prompt scene refers to elements such as the most suitable time and position for sending the prompt information to the target user. For example, user David has arranged to meet Mary at the Rose hotel at 20:00 tonight; the trip from David's departure place to the Rose hotel takes 30 minutes, and allowing 10 extra minutes to arrive at the Rose hotel early, a time prompt scene for the prompt information is constituted at 19:20, and in this scene the prompt information can be sent to and displayed for user David. Based on the same principle, the prompt information can also be sent to and displayed for Mary.
The prompt information can be displayed in a plurality of optional ways, for example by mail, short message, or calendar reminder, or through a dedicated alert device (a display screen, a companion robot, etc.).
Through the scheme of this embodiment, the whole interaction content of the user can be actively extracted, specific prompt content is determined based on analysis of the whole interaction content, and prompt information is sent to the user in a suitable prompt scene. This solves the problem that the user has to set up each reminder item separately; at the same time, more comprehensive user interaction environment information can be obtained, providing a more comprehensive information reminder.
As another embodiment, referring to Fig. 9, the information interaction device further includes a feedback module 805, and the feedback module 805 is configured to:
after the prompt information has been displayed, in the prompt scene matched with the specific interaction content, to the target user relevant to the specific interaction content, receive the feedback information input by the target user relevant to the specific interaction content; and
Determine whether to continue information alert based on the feedback information.
Through the scheme of this embodiment, the whole interaction content of the user can be actively extracted, specific prompt content is determined based on analysis of the whole interaction content, and prompt information is sent to the user in a suitable prompt scene. This solves the problem that the user has to set up each reminder item separately; at the same time, more comprehensive user interaction environment information can be obtained, providing a more comprehensive information reminder.
The content performed by the modules in the embodiments of Figs. 8-9 corresponds to the content performed by the method steps in Figs. 1-7, and details are not described herein.
The embodiment of the present invention also provides an electronic device, the electronic device including the device described in any of the foregoing embodiments.
Fig. 10 is a structural schematic diagram of one embodiment of the electronic device of the present invention, which can implement the processes of the embodiments shown in Figs. 1-7 of the present invention. As shown in Fig. 10, the above electronic device may include: a housing 1001, a processor 1002, a memory 1003, a circuit board 1004 and a power circuit 1005, wherein the circuit board 1004 is placed inside the space enclosed by the housing 1001, and the processor 1002 and the memory 1003 are arranged on the circuit board 1004; the power circuit 1005 supplies power to each circuit or device of the above electronic device; the memory 1003 stores executable program code; and the processor 1002, by reading the executable program code stored in the memory 1003, runs a program corresponding to the executable program code for executing the information interaction method described in any of the foregoing embodiments.
For the specific execution process of the above steps by the processor 1002, and the further steps executed by the processor 1002 by running the executable program code, reference may be made to the description of the embodiments shown in Figs. 1-7 of the present invention; details are not described herein.
The electronic equipment exists in a variety of forms, including but not limited to:
(1) Mobile communication device: this kind of device is characterized by having a mobile communication function, with providing voice and data communication as the main goal. This type of terminal includes: smart phones (such as the iPhone), multimedia phones, functional phones, low-end phones, etc.
(2) Ultra-mobile personal computer device: this kind of device belongs to the scope of personal computers, has computing and processing functions, and generally also has mobile Internet access characteristics. This type of terminal includes: PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment device: this kind of device can display and play multimedia content. Such devices include: audio and video players (such as the iPod), handheld game consoles, e-books, intelligent toys, and portable vehicle-mounted navigation devices.
(4) Server: a device that provides computing services. The composition of a server includes a processor, hard disk, memory, system bus, etc.; a server is similar in architecture to a general-purpose computer, but because it needs to provide highly reliable services, the requirements on processing capability, stability, reliability, security, scalability, manageability, etc. are higher.
(5) other electronic equipments with data interaction function.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware. The software instructions and data are stored in a corresponding storage device, which takes the form of one or more computer-readable or computer-usable storage media. The storage media include memories of different forms, such as semiconductor memory devices, for example dynamic or static random access memories (DRAMs or SRAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs) and flash memories; magnetic disks, including fixed disks, floppy disks and removable hard disks; other magnetic media, including magnetic tape; and optical media, such as compact discs (CDs) or digital video discs (DVDs). It should be pointed out that the above software instructions can be provided by one computer-readable or computer-usable storage medium, or alternatively by multiple computer-readable or computer-usable storage media distributed in a large-scale system with multiple nodes. A computer-readable or computer-usable storage medium is considered a part of an article (or article of manufacture); an article or article of manufacture refers to any single or multiple manufactured components.
The above description is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can be easily thought of by those familiar with the art, within the technical scope disclosed by the present invention, shall be included within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. An information interaction method, characterized in that the method comprises:
obtaining, by an acquisition program built into a plurality of devices of a user, the overall interaction content in the plurality of devices, aggregating the information by the same user ID, and obtaining the overall interaction content of a plurality of target users, the overall interaction content comprising the voice interaction content of more than one user;
preprocessing the overall interaction content by means of endpoint detection and speech enhancement, and determining the starting point of the voice signal in the voice interaction content;
splitting the overall interaction content based on voiceprint feature information, and obtaining a user feature database corresponding to the voiceprint feature information, the user feature database storing at least two types of data: one type is user type data, used for determining a specific user according to the characteristic information of the user; the other type is a speech feature database of the specific user, such that after the specific user type has been determined, the speech feature data corresponding to that user is loaded and the chat content of the specific user is corrected;
matching the content obtained after splitting the overall interaction content against the user feature database, and determining the specific interaction content relevant to each target user;
based on prompt elements in the determined specific interaction content, judging whether the target user relevant to the specific interaction content needs an information prompt;
if so, looking up prompt information matched with the specific interaction content, and determining a prompt scene matched with the specific interaction content;
and, in the prompt scene matched with the specific interaction content, displaying the prompt information to one or more target users relevant to the specific interaction content;
thereafter, receiving feedback information input by the target user relevant to the specific interaction content; and
determining, based on the feedback information, whether to continue the information prompt.
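The aggregation and prompt-judgment steps of claim 1 can be illustrated with a minimal sketch. All names here are hypothetical, and the "prompt element" detection is simplified to keyword matching; the claim does not prescribe any particular implementation:

```python
# Hypothetical sketch of two steps from claim 1: aggregate interaction
# content by user ID, then judge from prompt elements (here, keywords)
# whether each target user needs an information prompt.
PROMPT_KEYWORDS = {"meeting", "deadline", "remind"}

def aggregate_by_user_id(records):
    """Merge interaction content from multiple devices sharing a user ID."""
    merged = {}
    for user_id, content in records:
        merged.setdefault(user_id, []).append(content)
    return merged

def needs_prompt(specific_content):
    """Judge whether the content contains a prompt element."""
    words = set(specific_content.lower().split())
    return bool(words & PROMPT_KEYWORDS)

records = [("u1", "schedule a meeting tomorrow"), ("u1", "hello"), ("u2", "nice day")]
merged = aggregate_by_user_id(records)
flags = {uid: any(needs_prompt(c) for c in texts) for uid, texts in merged.items()}
print(flags)  # {'u1': True, 'u2': False}
```

A real system would derive prompt elements from semantic analysis rather than a fixed keyword set, but the control flow (aggregate, split per user, judge per user) follows the claim.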
2. The method according to claim 1, characterized in that obtaining the overall interaction content of the plurality of target users comprises:
obtaining the voice interaction content of more than one user, and determining the overall interaction content based on the voice interaction content.
3. The method according to claim 2, characterized in that determining the overall interaction content based on the voice interaction content comprises:
obtaining context features corresponding to the voice interaction content;
obtaining a context feature database associated with the context features;
performing speech recognition on the voice interaction content, and matching the content after speech recognition against the context feature patterns in the context feature database, to obtain the overall interaction content.
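The context-matching step of claim 3 can be sketched as follows. The database contents and the homophone-correction rule are hypothetical simplifications, standing in for whatever context features a real recognizer would use:

```python
# Hypothetical sketch: match speech-recognition output against a context
# feature database so that ambiguous (e.g. homophone) words are resolved
# by the current context, as in claim 3's matching step.
CONTEXT_DB = {
    # context -> {correct word: [ambiguous recognitions it replaces]}
    "kitchen": {"flour": ["flower"]},
}

def apply_context(recognized_words, context):
    corrections = CONTEXT_DB.get(context, {})
    reverse = {alt: word for word, alts in corrections.items() for alt in alts}
    return [reverse.get(w, w) for w in recognized_words]

print(apply_context(["add", "flower", "now"], "kitchen"))  # ['add', 'flour', 'now']
```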
4. The method according to claim 3, characterized in that determining the overall interaction content based on the voice interaction content further comprises:
extracting the voiceprint feature information in the voice interaction content;
obtaining a user feature database associated with the voiceprint feature information;
performing speech recognition on the voice interaction content, and matching the content after speech recognition against the content in the user feature database, to obtain the overall interaction content.
5. The method according to claim 2, characterized in that determining the specific interaction content relevant to each target user comprises:
extracting the voiceprint feature information in the voice interaction content;
determining each target user relevant to the voice interaction content according to the voiceprint feature information;
splitting the overall interaction content by each target user, to obtain the specific interaction content relevant to each target user.
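The voiceprint-based splitting of claim 5 can be sketched as nearest-voiceprint assignment. The embedding vectors and enrolled users here are invented toy data; real speaker diarization involves far more than cosine similarity over fixed embeddings:

```python
import math

# Hypothetical sketch: each speech segment carries a voiceprint embedding,
# and is assigned to the enrolled target user whose voiceprint is closest
# (cosine similarity), splitting the overall content per target user.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def split_by_voiceprint(segments, enrolled):
    """segments: list of (embedding, text); enrolled: {user: embedding}."""
    per_user = {user: [] for user in enrolled}
    for emb, text in segments:
        best = max(enrolled, key=lambda u: cosine(emb, enrolled[u]))
        per_user[best].append(text)
    return per_user

enrolled = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
segments = [([0.9, 0.1], "buy milk"), ([0.2, 0.8], "call mom")]
print(split_by_voiceprint(segments, enrolled))
# {'alice': ['buy milk'], 'bob': ['call mom']}
```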
6. The method according to claim 1, characterized in that the overall interaction content of the plurality of target users comprises more than one of the following, carried out between the current user and more than one other users: voice call content, mail interaction content, conversation interaction content, and instant-messaging text interaction content.
7. The method according to claim 1, characterized in that displaying the prompt information, in the prompt scene matched with the specific interaction content, to the one or more target users relevant to the specific interaction content comprises:
determining the prompt scene matched with the specific interaction content;
determining the matching degree between the current scene and the prompt scene matched with the specific interaction content;
when the matching degree is greater than a preset threshold, displaying the prompt information to the user in a preset information prompt manner.
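The threshold check of claim 7 can be sketched as follows. Representing scenes as attribute sets and using Jaccard overlap as the matching degree is an assumption for illustration; the claim only requires some matching-degree measure and a preset threshold:

```python
# Hypothetical sketch of claim 7: scenes are attribute sets, the matching
# degree is their Jaccard overlap, and the prompt is displayed only when
# the degree exceeds a preset threshold.
THRESHOLD = 0.5

def matching_degree(current_scene, prompt_scene):
    union = len(current_scene | prompt_scene)
    return len(current_scene & prompt_scene) / union if union else 0.0

def maybe_show(prompt, current_scene, prompt_scene):
    if matching_degree(current_scene, prompt_scene) > THRESHOLD:
        return f"PROMPT: {prompt}"
    return None  # scene does not match; withhold the prompt

current = {"home", "evening", "idle"}
scene = {"home", "evening"}
print(maybe_show("water the roses", current, scene))  # PROMPT: water the roses
```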
8. An information interaction device, characterized in that the device comprises:
an obtaining module, configured to obtain, by an acquisition program built into a plurality of devices of a user, the overall interaction content in the plurality of devices, aggregate the information by the same user ID number, and obtain the overall interaction content of a plurality of target users, the overall interaction content comprising the voice interaction content of more than one user;
a determining module, configured to preprocess the overall interaction content by means of endpoint detection and speech enhancement, and determine the starting point of the voice signal in the voice interaction content;
split the overall interaction content based on voiceprint feature information, and obtain a user feature database corresponding to the voiceprint feature information, the user feature database storing at least two types of data: one type is user type data, used for determining a specific user according to the characteristic information of the user; the other type is a speech feature database of the specific user, such that after the specific user type has been determined, the speech feature data corresponding to that user is loaded and the chat content of the specific user is corrected;
and match the content obtained after splitting the overall interaction content against the user feature database, and determine the specific interaction content relevant to each target user;
a judgment module, configured to judge, based on prompt elements in the determined specific interaction content, whether the target user relevant to the specific interaction content needs an information prompt;
a prompt module, configured to, when the target user relevant to the specific interaction content needs an information prompt, look up prompt information matched with the specific interaction content, and determine a prompt scene matched with the specific interaction content;
and, in the prompt scene matched with the specific interaction content, display the prompt information to one or more target users relevant to the specific interaction content;
thereafter, receive feedback information input by the target user relevant to the specific interaction content; and
determine, based on the feedback information, whether to continue the information prompt.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610538666.1A CN106201424B (en) | 2016-07-08 | 2016-07-08 | A kind of information interacting method, device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106201424A CN106201424A (en) | 2016-12-07 |
CN106201424B true CN106201424B (en) | 2019-10-01 |
Family
ID=57473997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610538666.1A Active CN106201424B (en) | 2016-07-08 | 2016-07-08 | A kind of information interacting method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106201424B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537564A (en) * | 2017-03-02 | 2018-09-14 | 九阳股份有限公司 | A kind of dining information method for pushing and home-services robot |
CN107146614B (en) * | 2017-04-10 | 2020-11-06 | 北京猎户星空科技有限公司 | Voice signal processing method and device and electronic equipment |
CN107977072B (en) * | 2017-07-28 | 2021-06-08 | 北京物灵智能科技有限公司 | Formation method for robot, formation expert system and electronic equipment |
CN107583291B (en) * | 2017-09-29 | 2023-05-02 | 深圳希格玛和芯微电子有限公司 | Toy interaction method and device and toy |
CN109754816B (en) * | 2017-11-01 | 2021-04-16 | 北京搜狗科技发展有限公司 | Voice data processing method and device |
CN108399700A (en) * | 2018-01-31 | 2018-08-14 | 上海乐愚智能科技有限公司 | Theft preventing method and smart machine |
CN110111793B (en) * | 2018-02-01 | 2023-07-14 | 腾讯科技(深圳)有限公司 | Audio information processing method and device, storage medium and electronic device |
CN108831456B (en) * | 2018-05-25 | 2022-04-15 | 深圳警翼智能科技股份有限公司 | Method, device and system for marking video through voice recognition |
CN109325097B (en) * | 2018-07-13 | 2022-05-27 | 海信集团有限公司 | Voice guide method and device, electronic equipment and storage medium |
CN109767338A (en) * | 2018-11-30 | 2019-05-17 | 平安科技(深圳)有限公司 | Processing method, device, equipment and the readable storage medium storing program for executing of enterogastritis reimbursement process |
CN109524001A (en) * | 2018-12-28 | 2019-03-26 | 北京金山安全软件有限公司 | Information processing method and device and child wearable device |
CN111914070A (en) * | 2019-05-09 | 2020-11-10 | 上海触乐信息科技有限公司 | Intelligent information prompting assistant system, information prompting method and terminal equipment |
CN110265013A (en) * | 2019-06-20 | 2019-09-20 | 平安科技(深圳)有限公司 | The recognition methods of voice and device, computer equipment, storage medium |
CN110824940A (en) * | 2019-11-07 | 2020-02-21 | 深圳市欧瑞博科技有限公司 | Method and device for controlling intelligent household equipment, electronic equipment and storage medium |
CN110909142B (en) * | 2019-11-20 | 2023-03-31 | 腾讯科技(深圳)有限公司 | Question and sentence processing method and device of question-answer model, electronic equipment and storage medium |
CN111124121B (en) * | 2019-12-24 | 2021-05-28 | 腾讯科技(深圳)有限公司 | Voice interaction information processing method and device, storage medium and computer equipment |
CN111813281A (en) * | 2020-05-28 | 2020-10-23 | 维沃移动通信有限公司 | Information acquisition method, information output method, information acquisition device, information output device and electronic equipment |
TWI763207B (en) * | 2020-12-25 | 2022-05-01 | 宏碁股份有限公司 | Method and apparatus for audio signal processing evaluation |
CN112929502B (en) * | 2021-02-05 | 2023-03-28 | 国家电网有限公司客户服务中心 | Voice recognition method and system based on electric power customer service |
CN114661899A (en) * | 2022-02-15 | 2022-06-24 | 北京结慧科技有限公司 | Task creating method and device, computer equipment and storage medium |
CN114924666A (en) * | 2022-05-12 | 2022-08-19 | 上海云绅智能科技有限公司 | Interaction method and device for application scene, terminal equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933028A (en) * | 2015-06-23 | 2015-09-23 | 百度在线网络技术(北京)有限公司 | Information pushing method and information pushing device |
CN105389304A (en) * | 2015-10-27 | 2016-03-09 | 小米科技有限责任公司 | Event extraction method and apparatus |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MX2007009044A (en) * | 2005-01-28 | 2008-01-16 | Breakthrough Performance Techn | Systems and methods for computerized interactive training. |
- 2016-07-08: CN application CN201610538666.1A, patent CN106201424B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN106201424A (en) | 2016-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106201424B (en) | A kind of information interacting method, device and electronic equipment | |
US9905228B2 (en) | System and method of performing automatic speech recognition using local private data | |
US10832674B2 (en) | Voice data processing method and electronic device supporting the same | |
US9742912B2 (en) | Method and apparatus for predicting intent in IVR using natural language queries | |
US20170277993A1 (en) | Virtual assistant escalation | |
US9786281B1 (en) | Household agent learning | |
KR101891492B1 (en) | Method and computer device for providing contextual natural language conversation by modifying plain response, and computer readable recording medium | |
KR20180070684A (en) | Parameter collection and automatic dialog generation in dialog systems | |
WO2021066939A1 (en) | Automatically determining and presenting personalized action items from an event | |
CN112513833A (en) | Electronic device and method for providing artificial intelligence service based on presynthesized dialog | |
US10580407B1 (en) | State detection and responses for electronic devices | |
CN107004410A (en) | Voice and connecting platform | |
US11687526B1 (en) | Identifying user content | |
US9502029B1 (en) | Context-aware speech processing | |
CN103035240A (en) | Speech recognition repair using contextual information | |
CN107506166A (en) | Information cuing method and device, computer installation and readable storage medium storing program for executing | |
Neustein | Advances in speech recognition: mobile environments, call centers and clinics | |
KR101891498B1 (en) | Method, computer device and computer readable recording medium for multi domain service resolving the mixture of multi-domain intents in interactive ai agent system | |
KR102120751B1 (en) | Method and computer readable recording medium for providing answers based on hybrid hierarchical conversation flow model with conversation management model using machine learning | |
KR101945983B1 (en) | Method for determining a best dialogue pattern for achieving a goal, method for determining an estimated probability of achieving a goal at a point of a dialogue session associated with a conversational ai service system, and computer readable recording medium | |
KR101950387B1 (en) | Method, computer device and computer readable recording medium for building or updating knowledgebase models for interactive ai agent systen, by labeling identifiable but not-learnable data in training data set | |
CN112567718A (en) | Electronic device for performing tasks including calls in response to user speech and method of operation | |
US11381675B2 (en) | Command based interactive system and a method thereof | |
KR101932263B1 (en) | Method, computer device and computer readable recording medium for providing natural language conversation by timely providing a substantive response | |
CN105869631B (en) | The method and apparatus of voice prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |