CN105304082A - Voice output method and voice output device - Google Patents

Voice output method and voice output device

Info

Publication number
CN105304082A
CN105304082A
Authority
CN
China
Prior art keywords
voice input
input content
user
degree
cognition degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510568430.8A
Other languages
Chinese (zh)
Other versions
CN105304082B (en)
Inventor
王天一
刘升平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Beijing Yunzhisheng Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunzhisheng Information Technology Co Ltd filed Critical Beijing Yunzhisheng Information Technology Co Ltd
Priority to CN201510568430.8A priority Critical patent/CN105304082B/en
Publication of CN105304082A publication Critical patent/CN105304082A/en
Priority to CN201680002958.1A priority patent/CN107077845B/en
Priority to PCT/CN2016/082427 priority patent/WO2017041510A1/en
Application granted granted Critical
Publication of CN105304082B publication Critical patent/CN105304082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a voice output method and a voice output device. The method comprises the following steps: receiving voice input content from a user; determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, where the cognition degree is the user's level of professional knowledge of that category; and obtaining and outputting, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree. With this technical scheme, voice output content matched to the user's cognition degree of the category of the input can be selected and output for the user, so that the output meets the user's needs. This provides a personalized voice output function, improves the accuracy of voice output so that the user can obtain the maximum amount of information from the output, and improves the user experience.

Description

Voice output method and device
Technical field
The present invention relates to the field of information processing, and in particular to a voice output method and device.
Background technology
At present, with the development of electronic technology, voice input is increasingly popular. Voice input is an input mode that converts a person's speech into text by speech recognition. As smart terminals spread through daily life, more and more of them offer voice services: for example, a user can ask a question by voice, and voice software on the terminal analyzes the user's speech and answers the question in spoken form, thereby offering the user help. Although this brings great convenience, since the user no longer needs a tedious online search to obtain an answer, current voice service software has only one answering mode: different users who ask the same question (with the same main content) receive the same help information. Yet different users differ in technical level and professional ability, and may need different help information or different answering styles. The above method therefore cannot provide voice help that distinguishes between different technical needs, and lacks specificity.
Summary of the invention
Embodiments of the present invention provide a voice output method and device. The technical scheme is as follows:
A voice output method comprises the following steps:
receiving voice input content input by a user;
determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, where the cognition degree is the user's level of professional knowledge of that category;
obtaining and outputting, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree.
Some beneficial effects of the embodiments of the present invention may include:
With the above technical scheme, voice output content matched to the user's cognition degree of the category of the input can be selected and output for the user, so that the output better meets the user's needs. This provides a more personalized voice output function, improves the accuracy of voice output so that the user can obtain the maximum amount of information from the output, and improves the user experience.
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the user's voiceprint;
determining, according to the voiceprint, whether this is the first voice input content received from the user;
when it is the first voice input content received from the user, setting the user's cognition degree of the category to a preset minimum cognition degree.
In this embodiment, the voice output content is selected according to whether the voice input content is the first received from the user, so that the output better meets the user's needs. This provides a more personalized voice output function, improves the accuracy of voice output, lets the user obtain the maximum amount of information from the output, and improves the user experience.
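The first-input check described above can be sketched in Python as follows. All names here (`VoiceprintStore`, `PRESET_MIN_COGNITION`) are illustrative assumptions, not identifiers from the patent, and the voiceprint is abstracted to an opaque string rather than an actual acoustic fingerprint:

```python
# Illustrative sketch only; names are assumptions, not from the patent.
PRESET_MIN_COGNITION = 0.0

class VoiceprintStore:
    """Toy store of previously seen voiceprints."""
    def __init__(self):
        self._known = set()

    def is_first_input(self, voiceprint):
        # An unseen voiceprint means this is the user's first input.
        first = voiceprint not in self._known
        self._known.add(voiceprint)
        return first

def cognition_for_first_input(store, voiceprint):
    # First-time users get the preset minimum cognition degree; otherwise
    # return None and fall through to the other parameters described later.
    if store.is_first_input(voiceprint):
        return PRESET_MIN_COGNITION
    return None
```

A repeat input from the same voiceprint yields `None`, signalling that the cognition degree must be determined from the other voice input parameters instead.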
In one embodiment, the method further comprises:
recording the input time and the use duration of the voice input content, where the use duration is the time between receiving the voice input content and outputting the corresponding voice output content.
In this embodiment, recording the input time and use duration of the voice input content enriches the basis for determining the user's cognition degree the next time voice output content is produced for the user, so that the cognition degree can be determined more accurately and more accurate, personalized voice output content can be produced.
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the user's voiceprint;
judging, according to the voiceprint, whether two adjacent received voice input contents were input by the same user;
when they were input by the same user, calculating the time interval between the two adjacent voice input contents from their input times and use durations;
determining the user's cognition degree of the category according to the time interval, where a longer time interval means a lower cognition degree.
In this embodiment, calculating the time interval between two adjacent voice input contents from the same user enriches the basis for determining the cognition degree, so that it can be determined more accurately and more accurate, personalized voice output content can be produced for the user.
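The interval computation above can be sketched as follows. The interpretation that the interval runs from the end of the previous interaction (input time plus use duration) to the next input, the linear decay, and the ten-minute scale are all assumptions made for illustration:

```python
from datetime import datetime, timedelta

def time_interval(prev_input_time, prev_use_duration, next_input_time):
    # Interval from the end of the previous interaction (input time plus
    # use duration) to the next input; this interpretation is an assumption.
    return next_input_time - (prev_input_time + prev_use_duration)

def cognition_from_interval(interval, scale=timedelta(minutes=10)):
    # Longer interval -> lower cognition degree; linear decay, floored at 0.
    return max(0.0, 1.0 - interval / scale)
```

A follow-up question asked quickly after the previous answer thus maps to a high cognition degree, while a long pause maps to a low one.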
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the user's voiceprint;
obtaining, according to the voiceprint, the user's historical input record, which includes at least one of cumulative usage time, cumulative input count and input frequency;
determining the user's cognition degree of the category according to the historical input record, where a longer cumulative usage time, a larger cumulative input count or a higher input frequency each indicate a higher cognition degree.
In this embodiment, determining the cognition degree from the user's historical input record lets the terminal determine it more accurately and produce more accurate, personalized voice output content for the user.
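One way to combine the three historical signals — each monotonically raising the cognition degree, as the embodiment states — is a saturating weighted sum. The caps, weights and units below are arbitrary illustrative choices, not values from the patent:

```python
def cognition_from_history(cumulative_seconds, cumulative_inputs, inputs_per_week):
    # Each signal pushes the cognition degree up and saturates at its cap;
    # caps (10 h, 500 inputs, 20 per week) and weights are assumptions.
    def saturate(value, cap):
        return min(value / cap, 1.0)
    return (0.4 * saturate(cumulative_seconds, 10 * 3600)
            + 0.3 * saturate(cumulative_inputs, 500)
            + 0.3 * saturate(inputs_per_week, 20))
```

The monotonicity required by the embodiment holds: increasing any one signal while holding the others fixed never decreases the result.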
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
extracting the keywords in the voice input content;
determining the matching degree between the keywords in the voice input content and predetermined keywords;
determining the user's cognition degree of the category according to that matching degree, where a higher match with the professional keywords among the predetermined keywords indicates a higher cognition degree, and a higher match with the non-professional keywords indicates a lower cognition degree.
In this embodiment, the cognition degree is determined from the matching degree between the keywords in the voice input content and predetermined keywords, making its determination more accurate and personalized, so that more accurate, personalized voice output content can be produced for the user.
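The keyword-matching rule can be sketched as below. Defining the matching degree as the fraction of input keywords found in each vocabulary, and centering the result at 0.5, are illustrative assumptions:

```python
def keyword_match_ratio(input_keywords, vocabulary):
    # Fraction of the input's keywords found in a given keyword vocabulary.
    if not input_keywords:
        return 0.0
    hits = sum(1 for k in input_keywords if k in vocabulary)
    return hits / len(input_keywords)

def cognition_from_keywords(input_keywords, professional, non_professional):
    # A higher match with professional keywords raises the cognition degree;
    # a higher match with non-professional keywords lowers it.
    pro = keyword_match_ratio(input_keywords, professional)
    ama = keyword_match_ratio(input_keywords, non_professional)
    return max(0.0, min(1.0, 0.5 + 0.5 * (pro - ama)))
```

An input phrased entirely in professional terms then scores 1.0 and one phrased entirely in everyday terms scores 0.0, matching the two relationships the embodiment states.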
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
determining the sentence structure type of the voice input content, which is either a professional or a non-professional sentence structure type;
determining the user's cognition degree of the category according to the sentence structure type, where the cognition degree for an input with a professional sentence structure type is higher than that for an input with a non-professional sentence structure type.
In this embodiment, the cognition degree is determined from the sentence structure type of the voice input content, making its determination more accurate and personalized, so that more accurate, personalized voice output content can be produced for the user.
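A minimal sketch of the sentence-structure rule follows. The pattern-matching classifier and the two fixed cognition values are assumptions; the patent only requires that a professional structure map to a higher cognition degree than a non-professional one:

```python
PROFESSIONAL = "professional"
NON_PROFESSIONAL = "non-professional"

def sentence_structure_type(text, professional_patterns):
    # Hypothetical rule: phrasing that contains any of the professional
    # patterns (e.g. direct parameter-setting commands) counts as professional.
    if any(p in text for p in professional_patterns):
        return PROFESSIONAL
    return NON_PROFESSIONAL

def cognition_from_structure(structure_type):
    # A professional sentence structure maps to a higher cognition degree.
    return 0.8 if structure_type == PROFESSIONAL else 0.2
```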
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
when two adjacent received voice input contents are judged to have been input by the same user, determining the association degree between them according to their keywords;
determining the user's cognition degree of the category according to that association degree, where a higher association degree means a lower cognition degree.
In this embodiment, the cognition degree is determined from the association degree between two adjacent voice input contents from the same user, making its determination more accurate and personalized, so that more accurate, personalized voice output content can be produced for the user.
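The association degree between two consecutive inputs can be sketched as keyword overlap; using the Jaccard index and the simple complement `1 - association` are assumptions made for illustration:

```python
def association_degree(keywords_a, keywords_b):
    # Jaccard overlap between the keyword sets of two consecutive inputs.
    a, b = set(keywords_a), set(keywords_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def cognition_from_association(assoc):
    # A closely related follow-up suggests the first answer was insufficient,
    # so a higher association degree yields a lower cognition degree.
    return 1.0 - assoc
```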
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
determining, according to the voice input content, at least two voice input parameters of the voice input content, where the voice input parameters may include: the user's voiceprint, the time interval between two adjacent inputs by the same user, the user's historical input record, the matching degree between the keywords in the voice input content and predetermined keywords, the sentence structure type of the voice input content, and the association degree between two adjacent inputs by the same user;
calculating the user's cognition degree of the category according to a preset weight for each voice input parameter.
In this embodiment, the cognition degree of the user for the category is calculated from differently weighted voice input parameters, making its determination more accurate and personalized, so that more accurate, personalized voice output content can be produced for the user.
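The weighted combination can be sketched directly. The per-parameter scores are assumed to already be cognition estimates in [0, 1] (produced by the per-parameter rules of the earlier embodiments), and the weights are assumed to sum to 1:

```python
def weighted_cognition(parameter_scores, weights):
    # parameter_scores: per-parameter cognition estimates in [0, 1];
    # weights: the preset weight per parameter, assumed to sum to 1.
    return sum(weights[name] * score for name, score in parameter_scores.items())
```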
In one embodiment, determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
when the voice input parameters of the voice input content cannot be determined, setting the user's cognition degree of the category to a preset minimum cognition degree.
In this embodiment, for voice input content whose voice input parameters cannot be determined, the voice output content matched to that content is still output, providing a more accurate and personalized voice output function, letting the user obtain more useful information from the output, and improving the user experience.
In one embodiment, obtaining and outputting, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree comprises:
determining the cognitive grade corresponding to the cognition degree according to a correspondence between cognition degrees and cognitive grades;
obtaining the voice output content corresponding to that cognitive grade according to a correspondence between cognitive grades and voice output contents;
outputting the voice output content.
In this embodiment, the voice output content is selected for output according to the correspondence between cognitive grades and voice output contents, so that the output matches the user's cognition degree and better meets the user's needs, the accuracy of voice output is improved, the user obtains the maximum amount of information from the output, and the user experience is improved.
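The two correspondences above can be sketched as a threshold mapping followed by a dictionary lookup. The grade names, thresholds and example contents are all illustrative assumptions:

```python
def cognitive_grade(cognition, thresholds=(0.33, 0.66)):
    # Correspondence between cognition degrees and cognitive grades;
    # grade names and thresholds are illustrative.
    low, high = thresholds
    if cognition < low:
        return "beginner"
    if cognition < high:
        return "intermediate"
    return "expert"

def select_output(cognition, contents_by_grade):
    # Correspondence between cognitive grades and voice output contents.
    return contents_by_grade[cognitive_grade(cognition)]
```

A low cognition degree thus retrieves a plain-language answer and a high one a technical answer, which is exactly the personalization the scheme describes.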
In one embodiment, the method further comprises:
updating the historical input record according to the input time and use duration of the voice input content.
In this embodiment, updating the historical input record means that the next time voice output content is produced for the user, the cognition degree can be determined from an accurate historical input record, so that more accurate voice output content can be produced for the user.
In one embodiment, the method further comprises:
storing the user's cognition degree of the category to which the voice input content belongs;
and determining, according to the voice input content, the user's cognition degree of the category comprises:
identifying the user's voiceprint;
querying the user's cognition degree of the category according to the user's voiceprint.
In this embodiment, querying the stored cognition degree makes it possible to determine the user's cognition degree of the category quickly and conveniently, and thus to select and output matching voice output content more quickly and accurately.
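The store-then-query step amounts to a cache keyed by voiceprint and category; the class below is an illustrative sketch (the key structure and `None`-on-miss behavior are assumptions):

```python
class CognitionCache:
    """Cache of (voiceprint, category) -> cognition degree, so a stored
    value can be looked up quickly instead of being recomputed."""
    def __init__(self):
        self._table = {}

    def store(self, voiceprint, category, cognition):
        self._table[(voiceprint, category)] = cognition

    def query(self, voiceprint, category):
        # Returns None on a miss, signalling that the cognition degree
        # must be determined from the voice input parameters instead.
        return self._table.get((voiceprint, category))
```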
A voice output device comprises:
a receiving module for receiving voice input content input by a user;
a determination module for determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, where the cognition degree is the user's level of professional knowledge of that category;
an output module for obtaining and outputting, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree.
In one embodiment, the determination module comprises:
a first recognition submodule for identifying the user's voiceprint;
a first judgment submodule for determining, according to the voiceprint, whether this is the first voice input content received from the user;
a second determination submodule for setting the user's cognition degree of the category to a preset minimum cognition degree when it is the first voice input content received from the user.
In one embodiment, the device further comprises:
a recording module for recording the input time and use duration of the voice input content, where the use duration is the time between receiving the voice input content and outputting the corresponding voice output content.
In one embodiment, the determination module comprises:
a second recognition submodule for identifying the user's voiceprint;
a second judgment submodule for judging, according to the voiceprint, whether two adjacent received voice input contents were input by the same user;
a first calculation submodule for calculating, when the two adjacent voice input contents were input by the same user, the time interval between them from their input times and use durations;
a third determination submodule for determining the user's cognition degree of the category according to the time interval, where a longer time interval means a lower cognition degree.
In one embodiment, the determination module comprises:
a third recognition submodule for identifying the user's voiceprint;
a first obtaining submodule for obtaining, according to the voiceprint, the user's historical input record, which includes at least one of cumulative usage time, cumulative input count and input frequency;
a fourth determination submodule for determining the user's cognition degree of the category according to the historical input record, where a longer cumulative usage time, a larger cumulative input count or a higher input frequency each indicate a higher cognition degree.
In one embodiment, the determination module comprises:
an extraction submodule for extracting the keywords in the voice input content;
a fifth determination submodule for determining the matching degree between the keywords in the voice input content and predetermined keywords;
a sixth determination submodule for determining the user's cognition degree of the category according to that matching degree, where a higher match with the professional keywords among the predetermined keywords indicates a higher cognition degree, and a higher match with the non-professional keywords indicates a lower cognition degree.
In one embodiment, the determination module comprises:
a seventh determination submodule for determining the sentence structure type of the voice input content, which is either a professional or a non-professional sentence structure type;
an eighth determination submodule for determining the user's cognition degree of the category according to the sentence structure type, where the cognition degree for an input with a professional sentence structure type is higher than that for an input with a non-professional sentence structure type.
In one embodiment, the determination module comprises:
a ninth determination submodule for determining, when two adjacent received voice input contents are judged to have been input by the same user, the association degree between them according to their keywords;
a tenth determination submodule for determining the user's cognition degree of the category according to that association degree, where a higher association degree means a lower cognition degree.
In one embodiment, the determination module comprises:
an eleventh determination submodule for determining, according to the voice input content, at least two voice input parameters of the voice input content, where the voice input parameters may include: the user's voiceprint, the time interval between two adjacent inputs by the same user, the user's historical input record, the matching degree between the keywords in the voice input content and predetermined keywords, the sentence structure type of the voice input content, and the association degree between two adjacent inputs by the same user;
a calculation submodule for calculating the user's cognition degree of the category according to a preset weight for each voice input parameter.
In one embodiment, the determination module comprises:
a twelfth determination submodule for setting the user's cognition degree of the category to a preset minimum cognition degree when the voice input parameters of the voice input content cannot be determined.
In one embodiment, the output module comprises:
a thirteenth determination submodule for determining the cognitive grade corresponding to the cognition degree according to a correspondence between cognition degrees and cognitive grades;
a second obtaining submodule for obtaining the voice output content corresponding to that cognitive grade according to a correspondence between cognitive grades and voice output contents;
an output submodule for outputting the voice output content.
In one embodiment, the device further comprises:
an update module for updating the historical input record according to the input time and use duration of the voice input content.
In one embodiment, the device further comprises:
a storage module for storing the user's cognition degree of the category to which the voice input content belongs;
and the determination module comprises:
a fourth recognition submodule for identifying the user's voiceprint;
a query submodule for querying the user's cognition degree of the category according to the user's voiceprint.
Some beneficial effects of the embodiments of the present invention may include:
With the above device, voice output content matched to the user's cognition degree of the category of the input can be selected and output for the user, so that the output better meets the user's needs. This provides a more personalized voice output function, improves the accuracy of voice output so that the user can obtain the maximum amount of information from the output, and improves the user experience.
Other features and advantages of the present invention will be set forth in the following description, will in part become apparent from the description, or may be learned by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the invention and constitute a part of the description; together with the embodiments they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a voice output method in an embodiment of the present invention;
Fig. 2 is a flowchart of step S12 of a voice output method in an embodiment of the present invention;
Fig. 3 is a flowchart of step S12 of a voice output method in an embodiment of the present invention;
Fig. 4 is a flowchart of step S12 of a voice output method in an embodiment of the present invention;
Fig. 5 is a flowchart of step S12 of a voice output method in an embodiment of the present invention;
Fig. 6 is a flowchart of step S13 of a voice output method in an embodiment of the present invention;
Fig. 7 is a block diagram of a voice output device in an embodiment of the present invention;
Fig. 8 is a block diagram of the determination module of a voice output device in an embodiment of the present invention;
Fig. 9 is a block diagram of the determination module of a voice output device in an embodiment of the present invention;
Fig. 10 is a block diagram of the determination module of a voice output device in an embodiment of the present invention;
Fig. 11 is a block diagram of the determination module of a voice output device in an embodiment of the present invention;
Fig. 12 is a block diagram of the output module of a voice output device in an embodiment of the present invention;
Fig. 13 is a block diagram of a voice output device in an embodiment of the present invention.
Detailed description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here serve only to illustrate and explain the invention and are not intended to limit it.
Fig. 1 is a flowchart of a voice output method in an embodiment of the present invention. As shown in Fig. 1, the method is used in a terminal, which may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant and so on, and comprises the following steps S11-S13:
Step S11: receive voice input content input by a user.
In this step, the user can input voice input content by recording sound.
Step S12: determine, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs; this cognition degree is the user's level of professional knowledge of that category.
For example, if the user inputs the voice input content "how do I set the air-conditioner temperature", the user's cognition degree of the category of the input is the user's level of professional knowledge of air conditioning; if the user inputs "what medicine is aspirin", it is the user's level of professional knowledge of pharmaceuticals. The terminal determines the category to which the voice input content belongs by extracting the keywords in it.
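The keyword-to-category step in the example can be sketched as a best-overlap lookup. The category names and keyword vocabularies below are illustrative assumptions built around the two examples in the text:

```python
# Illustrative vocabularies for the two example categories in the text.
CATEGORY_KEYWORDS = {
    "air_conditioning": {"air-conditioner", "temperature", "cooling"},
    "medicine": {"aspirin", "medicine", "dose"},
}

def category_of(input_keywords):
    # Pick the category whose keyword vocabulary overlaps the input the most;
    # return None when no category keyword matches at all.
    best, best_hits = None, 0
    for category, vocab in CATEGORY_KEYWORDS.items():
        hits = len(set(input_keywords) & vocab)
        if hits > best_hits:
            best, best_hits = category, hits
    return best
```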
Step S13: obtain and output, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree.
With the technical scheme provided by the embodiments of the present invention, voice output content matched to the user's cognition degree of the category of the input can be selected and output for the user, so that the output better meets the user's needs. This provides a more personalized voice output function, improves the accuracy of voice output so that the user can obtain the maximum amount of information from the output, and improves the user experience.
In step S12, the user's cognition degree of the category to which the voice input content belongs can be determined in various ways. First, voice input parameters can be determined from the voice input content, and the cognition degree is then determined from those parameters. How the cognition degree is determined varies with the parameters, which may include the user's voiceprint, the time interval between two adjacent inputs by the same user, the user's historical input record, the matching degree between the keywords in the voice input content and predetermined keywords, the sentence structure type of the voice input content, the association degree between two adjacent inputs by the same user, and so on. Implementations of step S12 are described below through different embodiments.
In one embodiment, as shown in Figure 2, step S12 may be implemented as the following steps S21-S23:
Step S21: identify the user's voiceprint information.
Step S22: according to the voiceprint information, determine whether this is the first voice input content received from the user.
Step S23: when the voice input content is received from the user for the first time, determine the user's cognition degree of the category to which the voice input content belongs to be the preset minimum cognition degree.
In this embodiment, the terminal stores the voiceprint information corresponding to each different user. When a user inputs voice input content, if the terminal can find that user's voiceprint among the prestored voiceprint information, this is not the first voice input content received from that user; if the terminal fails to find it, the voice input content is being received from that user for the first time. When the voice input content is not received for the first time, the terminal goes on to determine other voice input parameters from the voice input content and performs step S12 according to those parameters. The terminal also prestores the correspondence between cognition degrees and voice output contents, including the voice output content corresponding to the preset minimum cognition degree.
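As a minimal sketch of this embodiment, the first-time check can be a lookup of the recognized voiceprint in the terminal's stored voiceprint set, falling back to the preset minimum cognition degree when no match is found. All names and the numeric scale here are hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch: detect a first-time user by looking up the recognized
# voiceprint among prestored voiceprints; an unknown voiceprint means this is
# the user's first voice input, so the preset minimum cognition degree applies.

PRESET_MIN_COGNITION = 0.0  # assumed representation of the "preset minimum cognition degree"

stored_voiceprints = {"vp_alice", "vp_bob"}  # voiceprints the terminal has seen before

def cognition_for_voiceprint(voiceprint, known_cognition):
    """Return (is_first_time, cognition_degree) for a recognized voiceprint."""
    if voiceprint not in stored_voiceprints:
        # First voice input from this user: store the voiceprint and use the minimum.
        stored_voiceprints.add(voiceprint)
        return True, PRESET_MIN_COGNITION
    # Known user: the degree would come from the other voice input parameters.
    return False, known_cognition.get(voiceprint, PRESET_MIN_COGNITION)

first, degree = cognition_for_voiceprint("vp_carol", {"vp_alice": 0.8})
```

A second call for a known voiceprint such as `"vp_alice"` would return `(False, 0.8)`, i.e. the check falls through to the stored cognition degree.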
In one embodiment, the above method further comprises: recording the input time and the use duration of the voice input content, the use duration being the duration between receiving the voice input content and outputting the voice output content. As shown in Figure 3, step S12 may then be implemented as the following steps S31-S34:
Step S31: identify the user's voiceprint information.
Step S32: according to the user's voiceprint information, judge whether the two adjacent voice input contents received were input by the same user.
Step S33: when the two adjacent voice input contents received were input by the same user, calculate the time interval between them according to their input times and use durations.
Step S34: determine the user's cognition degree of the category to which the voice input content belongs according to the time interval; the longer the time interval, the lower the cognition degree.
In this embodiment, when the two adjacent voice input contents received were input by the same user, the time interval between them reflects how long the user took to react to the previous voice output content of the terminal. That reaction time may also be characterized by the interval between the previous output of the voice output content and the receipt of the current voice input content. For example, the voice input content last received by the terminal is "how to set the air-conditioner temperature", and the corresponding voice output content the terminal outputs is "first enter temperature shaping mode, then change the temperature"; the voice input content currently received is "how to enter temperature shaping mode". When the terminal judges that the two adjacent voice input contents were input by the same user, the interval between receiving "how to set the air-conditioner temperature" and receiving "how to enter temperature shaping mode" can characterize the user's reaction time to the previous voice output content "first enter temperature shaping mode, then change the temperature", from which the user's cognition degree of the category to which the voice input content belongs is determined. Alternatively, the interval between outputting "first enter temperature shaping mode, then change the temperature" and receiving "how to enter temperature shaping mode" may be used instead. The longer the time interval, the longer the user's reaction to the previous voice output content, and hence the lower the cognition degree.
In addition, a preset time interval may also be configured. When the two adjacent voice input contents received were input by the same user and the time interval between them exceeds the preset time interval, the terminal may directly determine the user's cognition degree of the category to which the voice input content belongs to be the preset minimum cognition degree, and obtain and output the voice output content matching the preset minimum cognition degree.
In one embodiment, as shown in Figure 4, step S12 may be implemented as the following steps S41-S43:
Step S41: identify the user's voiceprint information.
Step S42: according to the user's voiceprint information, obtain the history input record information corresponding to the user; the history input record information comprises at least one of cumulative history use time, cumulative history input count, and history input frequency.
Step S43: determine the user's cognition degree of the category to which the voice input content belongs according to the history input record information; the longer the cumulative history use time, the higher the cognition degree; the larger the cumulative history input count, the higher the cognition degree; the higher the history input frequency, the higher the cognition degree.
In this embodiment, each time the terminal receives voice input content from the user, it records the input time and the use duration, the use duration being the duration between receiving the voice input content and outputting the voice output content. From the recorded input times and use durations, the terminal can compile the history input record information corresponding to the user, where the cumulative history use time is the sum of the recorded use durations. The above method further comprises: updating the history input record information according to the input time and use duration of the voice input content. In this way, when the terminal determines the cognition degree from the user's history input record information, the information relied upon is more accurate and complete, so more accurate and personalized voice output content can be selected for the user.
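One way to combine the three history statistics into a single degree, under assumed saturation points and equal weighting (the patent only requires that each statistic raise the degree monotonically), is:

```python
# Hypothetical sketch: fold cumulative use time, cumulative input count, and
# input frequency into a cognition degree in [0, 1]. Each statistic is
# normalized against an assumed saturation point; all three grow monotonically
# with the degree, as the embodiment requires.

def cognition_from_history(total_use_s, input_count, inputs_per_day):
    t = min(total_use_s / 3600.0, 1.0)   # saturates after one hour of cumulative use
    c = min(input_count / 100.0, 1.0)    # saturates after 100 cumulative inputs
    f = min(inputs_per_day / 10.0, 1.0)  # saturates at 10 inputs per day
    return (t + c + f) / 3.0

d_new = cognition_from_history(total_use_s=60, input_count=2, inputs_per_day=1)
d_old = cognition_from_history(total_use_s=7200, input_count=250, inputs_per_day=12)
```

A long-time heavy user (`d_old`) saturates at 1.0, while a nearly new user (`d_new`) stays close to the bottom of the scale.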
In one embodiment, as shown in Figure 5, step S12 may be implemented as the following steps S51-S53:
Step S51: extract the keywords in the voice input content.
Step S52: determine the matching degree between the keywords in the voice input content and predetermined keywords.
Step S53: determine the user's cognition degree of the category to which the voice input content belongs according to that matching degree; the higher the matching degree between the keywords in the voice input content and the professional keywords among the predetermined keywords, the higher the cognition degree; the higher the matching degree with the non-professional keywords among the predetermined keywords, the lower the cognition degree.
In the present embodiment, the predetermined keyword prestored in terminal comprises professional keyword and amateur keyword two types, when performing step S52, need to determine respectively the matching degree between keyword in phonetic entry content and professional keyword, and and amateur keyword between matching degree.Such as, specialty keyword comprises " arranging path ", amateur keyword comprises " how using ", if the phonetic entry content that terminal receives is " ... arrange path ", so can judge that the matching degree between keyword in phonetic entry content and professional keyword is higher, therefore the cognition degree of user to phonetic entry content generic is also higher; If the phonetic entry content that terminal receives is " ... how to use ", so can judge that the matching degree between keyword in phonetic entry content and amateur keyword is higher, therefore the cognition degree of user to phonetic entry content generic is also lower.
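A minimal sketch of this two-list matching follows. The keyword lists, base value, and step size are invented for illustration; the patent only fixes the directions (professional matches raise the degree, non-professional matches lower it):

```python
# Hypothetical sketch: score a voice input against stored professional and
# non-professional keyword lists. Professional matches raise the cognition
# degree; non-professional matches lower it. Matching here is plain substring
# containment for simplicity.

PROFESSIONAL = {"set the path", "temperature shaping mode"}
NON_PROFESSIONAL = {"how to use", "what is this"}

def cognition_from_keywords(text, base=0.5, step=0.25):
    degree = base
    for kw in PROFESSIONAL:
        if kw in text:
            degree += step   # professional keyword matched: degree rises
    for kw in NON_PROFESSIONAL:
        if kw in text:
            degree -= step   # non-professional keyword matched: degree falls
    return min(max(degree, 0.0), 1.0)

pro = cognition_from_keywords("please set the path for the device")
ama = cognition_from_keywords("how to use this thing")
```

The professional phrasing scores 0.75 and the non-professional phrasing 0.25, reproducing the high/low split described in the example.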
In one embodiment, step S12 may be implemented as the following steps A1-A2:
Step A1: determine the sentence structure type of the voice input content, the sentence structure type being either a professional sentence structure type or a non-professional sentence structure type.
Step A2: determine the user's cognition degree of the category to which the voice input content belongs according to the sentence structure type of the voice input content; the user's cognition degree for voice input content of the professional sentence structure type is higher than that for voice input content of the non-professional sentence structure type.
In this embodiment, sentence structure types are prestored in the terminal and may be represented by regular expressions. For example, the regular expression for a professional sentence structure type might be adjective + noun + verb, and that for a non-professional sentence structure type might be pronoun + verb. Note that the representation of sentence structure types is not limited to regular expressions; any other way of representing sentence structure may be used. As an example, the voice input content received by the terminal is "what the power-on steps are"; by analyzing it, the terminal determines its sentence structure type to be "adjective + noun + verb + pronoun", a professional sentence structure type, so the user's cognition degree of the category to which the voice input content belongs is high. As another example, the voice input content received is "how is this thing used"; the terminal determines its sentence structure type to be "pronoun + verb", a non-professional sentence structure type, so the cognition degree is low.
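The regular-expression representation above can be sketched by matching the part-of-speech tag sequence of the input against stored patterns. The tag names and patterns are hypothetical, and a real system would obtain the tags from an NLP toolkit rather than receive them directly:

```python
import re

# Hypothetical sketch: classify a voice input's sentence structure type by
# matching its part-of-speech tag sequence against prestored regular
# expressions, as the embodiment describes. The tag sequence is supplied
# directly here instead of being produced by a tagger.

PROFESSIONAL_PATTERN = re.compile(r"ADJ NOUN VERB( PRON)?")
NON_PROFESSIONAL_PATTERN = re.compile(r"PRON VERB")

def sentence_type(pos_tags):
    """pos_tags: list of POS tags for the voice input, e.g. ['PRON', 'VERB']."""
    seq = " ".join(pos_tags)
    if PROFESSIONAL_PATTERN.fullmatch(seq):
        return "professional"
    if NON_PROFESSIONAL_PATTERN.fullmatch(seq):
        return "non-professional"
    return "unknown"

t1 = sentence_type(["ADJ", "NOUN", "VERB", "PRON"])  # e.g. "what the power-on steps are"
t2 = sentence_type(["PRON", "VERB"])                 # e.g. "how is this used"
```

The higher-than relation of step A2 then follows by assigning the "professional" result a higher degree than the "non-professional" one.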
In one embodiment, step S12 may be implemented as the following steps B1-B2:
Step B1: when it is judged that the two adjacent voice input contents received were input by the same user, determine the degree of association between them according to the keywords in the two voice input contents.
Step B2: determine the user's cognition degree of the category to which the voice input content belongs according to the degree of association between the two adjacent voice input contents received; the higher the degree of association, the lower the cognition degree.
In this embodiment, when the two adjacent voice input contents received were input by the same user, the degree of association between them reflects how well the user understood the previous voice output content: the higher the association between the two inputs, the less the user understood the previous voice output content, and the lower the cognition degree of the category to which the voice input content belongs; the lower the association, the better the user understood it, and the higher the cognition degree. For example, the voice input content last received by the terminal is "how to set the air-conditioner temperature" and the one currently received is "how to enter temperature shaping mode". When the terminal judges that the two were input by the same user, it can extract the keywords in each, such as "air-conditioner temperature" and "temperature shaping mode", and determine the degree of association between the two voice input contents from the association between those keywords; since both keywords relate to temperature, the association between them is high. As another example, the last voice input content is "how to set the air-conditioner temperature" and the current one is "what the power-on steps are"; the keywords extracted are "air-conditioner temperature" and "power-on", two unrelated types of keyword, so the association between them is almost zero. The association between the two adjacent voice input contents is therefore very low, indicating that the user understood the previous voice output content well, and hence that the user's cognition degree of the category to which the voice input content belongs is high.
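One concrete (and entirely illustrative) choice for the association measure in steps B1-B2 is the Jaccard overlap of the two inputs' keyword sets, inverted to give the cognition degree:

```python
# Hypothetical sketch: take the degree of association between two adjacent
# voice inputs as the Jaccard overlap of their keyword sets. High association
# means the user is still asking about the same topic, so the cognition degree
# is taken to be correspondingly lower, as step B2 requires.

def association(keywords_a, keywords_b):
    a, b = set(keywords_a), set(keywords_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def cognition_from_association(keywords_a, keywords_b):
    # The higher the association between the two inputs, the lower the degree.
    return 1.0 - association(keywords_a, keywords_b)

related = cognition_from_association(
    ["air-conditioner", "temperature"], ["temperature", "shaping", "mode"])
unrelated = cognition_from_association(
    ["air-conditioner", "temperature"], ["power-on", "steps"])
```

The temperature-related pair shares a keyword and so yields a lower degree (0.75) than the unrelated pair (1.0), matching the two worked examples in the text.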
In one embodiment, the various ways of performing step S12 in the above embodiments may also be combined: several voice input parameters are determined, and the user's cognition degree of the category to which the voice input content belongs is calculated according to preset weights. Step S12 may thus be implemented as: determining, according to the voice input content, at least two voice input parameters of the voice input content, the voice input parameters comprising the user's voiceprint information, the time interval between two adjacent voice inputs by the same user, the history input record information corresponding to the user, the matching degree between keywords in the voice input content and predetermined keywords, the sentence structure type of the voice input content, and the degree of association between two adjacent voice inputs by the same user; and calculating the user's cognition degree of the category to which the voice input content belongs according to the preset weight of each voice input parameter.
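The weighted combination of this embodiment can be sketched as a weighted average of per-parameter degree estimates. The parameter names, estimates, and weights below are all hypothetical; the patent only specifies that preset weights combine at least two voice input parameters:

```python
# Hypothetical sketch: combine several per-parameter cognition-degree
# estimates into one degree using preset weights, as this embodiment
# describes. Each entry maps a voice input parameter to (estimate, weight).

def combined_cognition(estimates):
    """estimates: dict of parameter name -> (degree_estimate, preset_weight)."""
    total_weight = sum(w for _, w in estimates.values())
    if total_weight == 0:
        return 0.0
    return sum(d * w for d, w in estimates.values()) / total_weight

degree = combined_cognition({
    "time_interval":      (0.75, 0.3),  # from the interval between adjacent inputs
    "keyword_matching":   (0.50, 0.4),  # from professional/non-professional keywords
    "sentence_structure": (1.00, 0.3),  # professional sentence structure type
})
```

With these invented weights the combined degree is 0.725; normalizing by the weight sum keeps the result on the same scale as the individual estimates.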
In one embodiment, the above method further comprises: when the voice input parameter of the voice input content cannot be determined, determining the user's cognition degree of the category to which the voice input content belongs to be the preset minimum cognition degree. In this embodiment, on receiving voice input content from which no voice input parameter can be determined, the terminal directly takes the cognition degree to be the preset minimum cognition degree. Thus, even for such voice input content, the user still obtains matching voice output content, which improves the user experience.
In one embodiment, the above method further comprises: storing the user's cognition degree of the category to which the voice input content belongs. Step S12 may then be implemented as: identifying the user's voiceprint information; and querying, according to the user's voiceprint information, the user's cognition degree of the category to which the voice input content belongs. In this embodiment, querying the stored cognition degree makes it possible to determine the cognition degree conveniently and quickly, so that matching voice output content can be selected for the user more quickly and accurately.
In one embodiment, as shown in Figure 6, step S13 may be implemented as the following steps S61-S63:
Step S61: determine the cognitive grade corresponding to the cognition degree according to the correspondence between cognition degrees and cognitive grades.
Step S62: obtain the voice output content corresponding to the cognitive grade according to the correspondence between cognitive grades and voice output contents.
Step S63: output the voice output content.
In this embodiment, the terminal prestores the correspondence between cognition degrees and cognitive grades, and the correspondence between cognitive grades and voice output contents. For example, the cognitive grades may be divided as needed into three grades: low, middle, and high. A cognition degree between 0% and 30% corresponds to the low cognitive grade, between 31% and 70% to the middle cognitive grade, and between 71% and 100% to the high cognitive grade. The voice output content corresponding to the low cognitive grade is the detailed version, that corresponding to the middle cognitive grade is the standard version, and that corresponding to the high cognitive grade is the concise version; for each voice input content, the terminal stores all three versions. For example, for the voice input content "how to set the air-conditioner temperature", the corresponding voice output contents include: the detailed version, "click the mode button in the middle of the first row; click twice to enter temperature shaping mode; click the '+/-' button on the left of the second row to change the temperature, one degree per click"; the standard version, "click the mode button to enter temperature shaping mode, then click '+/-' to change the temperature"; and the concise version, "first enter temperature shaping mode, then change the temperature". In addition, the cognitive grade corresponding to the preset minimum cognition degree may be the low cognitive grade, so for voice input content whose voice input parameter cannot be determined, or which is received from the user for the first time, the terminal can directly output the detailed version. With the technical scheme of this embodiment, when outputting voice output content for the user, the terminal can analyze the user's current needs by determining the user's cognition degree of the category to which the voice input content belongs, and output voice output content matching those needs, so that the user obtains more, and more accurate, information from the voice output content.
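The grade bands and version selection of steps S61-S63 can be sketched directly from the example percentages in the text. The content strings are paraphrased English renderings of the example replies, and the data layout is an assumption:

```python
# Sketch following the example bands in the text: map a cognition degree
# (as a percentage) to a cognitive grade, then pick the stored detailed,
# standard, or concise version of the voice output content for that grade.

VERSIONS = {  # per-input stored variants, as the embodiment describes
    "low":  "Click the mode button in the middle of the first row; click twice "
            "to enter temperature shaping mode; click '+/-' to change the "
            "temperature, one degree per click.",
    "mid":  "Click the mode button to enter temperature shaping mode, then "
            "click '+/-' to change the temperature.",
    "high": "First enter temperature shaping mode, then change the temperature.",
}

def cognitive_grade(degree_percent):
    if degree_percent <= 30:
        return "low"    # 0%-30%: detailed version
    if degree_percent <= 70:
        return "mid"    # 31%-70%: standard version
    return "high"       # 71%-100%: concise version

def select_output(degree_percent):
    return VERSIONS[cognitive_grade(degree_percent)]

novice_reply = select_output(10)
expert_reply = select_output(90)
```

A novice (10%) receives the detailed version and an expert (90%) the concise version, which is exactly the step S61-S63 pipeline.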
Corresponding to the above voice output method, an embodiment of the present invention also provides a voice output device for performing the above method.
Fig. 7 is a block diagram of a voice output device in an embodiment of the present invention. As shown in Figure 7, the device comprises:
a receiving module 71, configured to receive voice input content input by a user;
a determination module 72, configured to determine, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, the cognition degree being the user's degree of awareness of professional knowledge about the category; and
an output module 73, configured to obtain, from at least one voice output content corresponding to the voice input content, the voice output content matching the cognition degree, and output it.
In one embodiment, as shown in Figure 8, the determination module 72 comprises:
a first identification submodule 721, configured to identify the user's voiceprint information;
a first judgment submodule 722, configured to determine, according to the voiceprint information, whether this is the first voice input content received from the user; and
a second determination submodule 723, configured to determine, when the voice input content is received from the user for the first time, the user's cognition degree of the category to which the voice input content belongs to be the preset minimum cognition degree.
In one embodiment, the above device further comprises:
a recording module, configured to record the input time and the use duration of the voice input content, the use duration being the duration between receiving the voice input content and outputting the voice output content.
In one embodiment, as shown in Figure 9, the determination module 72 comprises:
a second identification submodule 724, configured to identify the user's voiceprint information;
a second judgment submodule 725, configured to judge, according to the user's voiceprint information, whether the two adjacent voice input contents received were input by the same user;
a first calculation submodule 726, configured to calculate, when the two adjacent voice input contents received were input by the same user, the time interval between them according to their input times and use durations; and
a third determination submodule 727, configured to determine the user's cognition degree of the category to which the voice input content belongs according to the time interval; the longer the time interval, the lower the cognition degree.
In one embodiment, as shown in Figure 10, the determination module 72 comprises:
a third identification submodule 728, configured to identify the user's voiceprint information;
a first obtaining submodule 729, configured to obtain, according to the user's voiceprint information, the history input record information corresponding to the user, the history input record information comprising at least one of cumulative history use time, cumulative history input count, and history input frequency; and
a fourth determination submodule 7210, configured to determine the user's cognition degree of the category to which the voice input content belongs according to the history input record information; the longer the cumulative history use time, the higher the cognition degree; the larger the cumulative history input count, the higher the cognition degree; the higher the history input frequency, the higher the cognition degree.
In one embodiment, as shown in Figure 11, the determination module 72 comprises:
an extraction submodule 7211, configured to extract the keywords in the voice input content;
a fifth determination submodule 7212, configured to determine the matching degree between the keywords in the voice input content and predetermined keywords; and
a sixth determination submodule 7213, configured to determine the user's cognition degree of the category to which the voice input content belongs according to that matching degree; the higher the matching degree between the keywords in the voice input content and the professional keywords among the predetermined keywords, the higher the cognition degree; the higher the matching degree with the non-professional keywords among the predetermined keywords, the lower the cognition degree.
In one embodiment, the determination module 72 comprises:
a seventh determination submodule, configured to determine the sentence structure type of the voice input content, the sentence structure type being either a professional sentence structure type or a non-professional sentence structure type; and
an eighth determination submodule, configured to determine the user's cognition degree of the category to which the voice input content belongs according to the sentence structure type; the user's cognition degree for voice input content of the professional sentence structure type is higher than that for voice input content of the non-professional sentence structure type.
In one embodiment, the determination module 72 comprises:
a ninth determination submodule, configured to determine, when it is judged that the two adjacent voice input contents received were input by the same user, the degree of association between them according to the keywords in the two voice input contents; and
a tenth determination submodule, configured to determine the user's cognition degree of the category to which the voice input content belongs according to the degree of association between the two adjacent voice input contents received; the higher the degree of association, the lower the cognition degree.
In one embodiment, the determination module 72 comprises:
an eleventh determination submodule, configured to determine, according to the voice input content, at least two voice input parameters of the voice input content, the voice input parameters comprising the user's voiceprint information, the time interval between two adjacent voice inputs by the same user, the history input record information corresponding to the user, the matching degree between keywords in the voice input content and predetermined keywords, the sentence structure type of the voice input content, and the degree of association between two adjacent voice inputs by the same user; and
a calculation submodule, configured to calculate the user's cognition degree of the category to which the voice input content belongs according to the preset weight of each voice input parameter.
In one embodiment, the determination module 72 comprises:
a twelfth determination submodule, configured to determine, when the voice input parameter of the voice input content cannot be determined, the user's cognition degree of the category to which the voice input content belongs to be the preset minimum cognition degree.
In one embodiment, as shown in Figure 12, the output module 73 comprises:
a thirteenth determination submodule 731, configured to determine the cognitive grade corresponding to the cognition degree according to the correspondence between cognition degrees and cognitive grades;
a second obtaining submodule 732, configured to obtain the voice output content corresponding to the cognitive grade according to the correspondence between cognitive grades and voice output contents; and
an output submodule 733, configured to output the voice output content.
In one embodiment, as shown in Figure 13, the above device further comprises:
an update module 74, configured to update the history input record information according to the input time and use duration of the voice input content; and
a storage module 75, configured to store the user's cognition degree of the category to which the voice input content belongs.
In one embodiment, the determination module 72 comprises:
a fourth identification submodule, configured to identify the user's voiceprint information; and
a query submodule, configured to query, according to the user's voiceprint information, the user's cognition degree of the category to which the voice input content belongs.
With the device provided by this embodiment of the present invention, the voice output content matching the user's cognition degree of the category to which the input voice input content belongs can be selected and output for the user. The voice output content thus better meets the user's needs, providing a more personalized voice output function, improving the accuracy of the voice output, enabling the user to obtain the maximum amount of information from the voice output content, and improving the user experience.
Those skilled in the art should understand that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific way, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to encompass them.

Claims (26)

1. A voice output method, characterized by comprising:
receiving voice input content input by a user;
determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, the cognition degree being the user's degree of awareness of professional knowledge about the category; and
obtaining, from at least one voice output content corresponding to the voice input content, the voice output content matching the cognition degree, and outputting it.
2. The method according to claim 1, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the user's voiceprint information;
determining, according to the voiceprint information, whether this is the first voice input content received from the user; and
when the voice input content is received from the user for the first time, determining the user's cognition degree of the category to which the voice input content belongs to be a preset minimum cognition degree.
3. The method according to claim 1, characterized in that the method further comprises:
recording the input time and the use duration of the voice input content, the use duration being the duration between receiving the voice input content and outputting the voice output content.
4. The method according to claim 3, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the voiceprint of the user;
judging, according to the voiceprint of the user, whether two adjacently received voice input contents were entered by the same user;
when the two adjacently received voice input contents were entered by the same user, calculating the time interval between the two adjacently received voice input contents according to their input times and use durations;
determining, according to the time interval, the user's cognition degree of the category to which the voice input content belongs, wherein the longer the time interval, the lower the cognition degree.
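As a non-binding sketch of claim 4's arithmetic: the interval is the gap between the end of one exchange (input time plus use duration) and the next input, and a longer gap maps to a lower cognition degree. The linear mapping and the 300-second cap below are assumptions, not taken from the patent.

```python
# Sketch of claim 4 (illustrative only). Times are in seconds.

def time_interval(first_input_time, first_use_duration, second_input_time):
    """Gap between the end of one exchange and the next input from the same user."""
    return second_input_time - (first_input_time + first_use_duration)

def degree_from_interval(interval_s, max_interval_s=300.0):
    """Linear mapping: interval 0 -> degree 1.0; >= max_interval -> 0.0."""
    clamped = min(max(interval_s, 0.0), max_interval_s)
    return 1.0 - clamped / max_interval_s
```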
5. The method according to claim 3, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the voiceprint of the user;
obtaining, according to the voiceprint of the user, historical input record information corresponding to the user, the historical input record information comprising at least one of accumulated historical use duration, accumulated historical input count, and historical input frequency;
determining, according to the historical input record information, the user's cognition degree of the category to which the voice input content belongs, wherein the longer the accumulated historical use duration, the higher the cognition degree; the greater the accumulated historical input count, the higher the cognition degree; and the higher the historical input frequency, the higher the cognition degree.
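A toy illustration of claim 5 (not the patented method): the three historical quantities can each be normalised to [0, 1] and averaged, so that each one is monotonically increasing with the cognition degree as the claim requires. The saturation constants and equal weighting are invented for the example.

```python
# Sketch of claim 5 (illustrative only): combine accumulated use time,
# accumulated input count, and input frequency into one cognition degree.

def degree_from_history(total_seconds, total_inputs, inputs_per_day):
    def saturate(x, scale):
        return min(x / scale, 1.0)  # rises with usage, capped at 1.0
    features = [
        saturate(total_seconds, 36000.0),  # ~10 h of use -> saturated
        saturate(total_inputs, 200.0),     # 200 inputs   -> saturated
        saturate(inputs_per_day, 20.0),    # 20 inputs/day -> saturated
    ]
    return sum(features) / len(features)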
6. The method according to claim 1, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
extracting the keywords in the voice input content;
determining the matching degree between the keywords in the voice input content and preset keywords;
determining, according to the matching degree between the keywords in the voice input content and the preset keywords, the user's cognition degree of the category to which the voice input content belongs, wherein the higher the matching degree between the keywords in the voice input content and the professional keywords among the preset keywords, the higher the cognition degree; and the higher the matching degree between the keywords in the voice input content and the non-professional keywords among the preset keywords, the lower the cognition degree.
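One possible reading of claim 6, sketched with made-up keyword lists: professional matches push the degree up, non-professional matches push it down. The ratio formula and the 0.5 default for "no evidence" are assumptions of this example.

```python
# Sketch of claim 6 (illustrative only). Keyword lists are invented.

PROFESSIONAL = {"arrhythmia", "stent", "angiography"}
NON_PROFESSIONAL = {"heart", "chest", "pain"}

def degree_from_keywords(keywords):
    """Fraction of matched keywords that are professional, in [0, 1]."""
    pro = sum(1 for k in keywords if k in PROFESSIONAL)
    lay = sum(1 for k in keywords if k in NON_PROFESSIONAL)
    if pro + lay == 0:
        return 0.5  # no evidence either way
    return pro / (pro + lay)
```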
7. The method according to claim 1, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
determining the sentence structure type of the voice input content, the sentence structure type comprising a professional sentence structure type or a non-professional sentence structure type;
determining, according to the sentence structure type of the voice input content, the user's cognition degree of the category to which the voice input content belongs, wherein the user's cognition degree of the category to which voice input content of the professional sentence structure type belongs is higher than that of the category to which voice input content of the non-professional sentence structure type belongs.
8. The method according to claim 1, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
when it is judged that two adjacently received voice input contents were entered by the same user, determining the degree of association between the two adjacently received voice input contents according to the keywords in them;
determining, according to the degree of association between the two adjacently received voice input contents, the user's cognition degree of the category to which the voice input content belongs, wherein the higher the degree of association, the lower the cognition degree.
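An illustrative (non-authoritative) reading of claim 8: if two consecutive inputs share many keywords, the user is likely re-asking about the same topic, suggesting lower understanding. Jaccard overlap is one assumed way to compute the degree of association; the patent does not specify a formula.

```python
# Sketch of claim 8 (illustrative only): keyword overlap between two
# adjacent inputs; higher association -> lower cognition degree.

def association(keywords_a, keywords_b):
    """Jaccard overlap of the two keyword sets, in [0, 1]."""
    a, b = set(keywords_a), set(keywords_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def degree_from_association(assoc):
    return 1.0 - assoc  # the higher the association, the lower the degree
```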
9. The method according to any one of claims 1-8, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
determining, according to the voice input content, at least two voice input parameters of the voice input content, the voice input parameters comprising: the voiceprint of the user; the time interval between two voice input contents adjacently entered by the same user; the historical input record information corresponding to the user; the matching degree between the keywords in the voice input content and preset keywords; the sentence structure type of the voice input content; and the degree of association between two voice input contents adjacently entered by the same user;
calculating, according to a preset weight of each voice input parameter, the user's cognition degree of the category to which the voice input content belongs.
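The weighted combination in claim 9 might look like the sketch below: each determinable parameter contributes a score in [0, 1], and the result is the weight-normalised sum over the parameters that are present. The weight values themselves are hypothetical.

```python
# Sketch of claim 9 (illustrative only): weighted fusion of per-parameter
# scores. Weight values are invented; the claim only requires preset weights.

WEIGHTS = {
    "interval": 0.2, "history": 0.3, "keywords": 0.3,
    "sentence_type": 0.1, "association": 0.1,
}

def weighted_degree(scores):
    """scores: dict of parameter name -> score in [0, 1]; needs >= 2 entries."""
    if len(scores) < 2:
        raise ValueError("claim 9 requires at least two voice input parameters")
    total_w = sum(WEIGHTS[name] for name in scores)
    return sum(WEIGHTS[name] * s for name, s in scores.items()) / total_w
```

Normalising by the sum of the weights actually present keeps the result in [0, 1] even when some parameters (e.g. the time interval on a first input) cannot be determined.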
10. The method according to claim 9, characterized in that determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
when a voice input parameter of the voice input content cannot be determined, setting the user's cognition degree of the category to which the voice input content belongs to a preset minimum cognition degree.
11. The method according to claim 1, characterized in that obtaining, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree, and outputting it, comprises:
determining, according to the correspondence between cognition degrees and cognition grades, the cognition grade corresponding to the cognition degree;
obtaining, according to the correspondence between cognition grades and voice output contents, the voice output content corresponding to the cognition grade;
outputting the voice output content.
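The two lookups in claim 11 (degree to grade, grade to output) can be sketched as below; the thresholds, grade names, and reply wording are all hypothetical examples, not values from the patent.

```python
# Sketch of claim 11 (illustrative only): degree -> grade -> output content.

GRADE_THRESHOLDS = [(0.66, "expert"), (0.33, "intermediate"), (0.0, "novice")]

OUTPUTS = {
    "novice": "High blood pressure means your heart is working too hard.",
    "intermediate": "Hypertension is blood pressure sustained above 140/90 mmHg.",
    "expert": "Readings in this range are classed as stage 2 hypertension.",
}

def grade_for(degree):
    """Map a continuous degree in [0, 1] to a discrete cognition grade."""
    for threshold, grade in GRADE_THRESHOLDS:
        if degree >= threshold:
            return grade
    return "novice"

def output_for(degree):
    """Look up the voice output content worded for the user's grade."""
    return OUTPUTS[grade_for(degree)]
```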
12. The method according to claim 5, characterized in that the method further comprises:
updating the historical input record information according to the input time and the use duration of the voice input content.
13. The method according to claim 1, characterized in that the method further comprises:
storing the user's cognition degree of the category to which the voice input content belongs;
and determining, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs comprises:
identifying the voiceprint of the user;
querying, according to the voiceprint of the user, the user's cognition degree of the category to which the voice input content belongs.
14. A voice output device, characterized by comprising:
a receiving module, configured to receive voice input content entered by a user;
a determination module, configured to determine, according to the voice input content, the user's cognition degree of the category to which the voice input content belongs, the cognition degree being the user's degree of professional knowledge of the category;
an output module, configured to obtain, from at least one voice output content corresponding to the voice input content, the voice output content that matches the cognition degree, and to output it.
15. The device according to claim 14, characterized in that the determination module comprises:
a first identification submodule, configured to identify the voiceprint of the user;
a first judgment submodule, configured to determine, according to the voiceprint, whether the voice input content of the user is received for the first time;
a second determination submodule, configured to, when the voice input content of the user is received for the first time, set the user's cognition degree of the category to which the voice input content belongs to a preset minimum cognition degree.
16. The device according to claim 14, characterized in that the device further comprises:
a recording module, configured to record the input time and the use duration of the voice input content, the use duration being the duration between receiving the voice input content and outputting the voice output content.
17. The device according to claim 16, characterized in that the determination module comprises:
a second identification submodule, configured to identify the voiceprint of the user;
a second judgment submodule, configured to judge, according to the voiceprint of the user, whether two adjacently received voice input contents were entered by the same user;
a first calculation submodule, configured to, when the two adjacently received voice input contents were entered by the same user, calculate the time interval between the two adjacently received voice input contents according to their input times and use durations;
a third determination submodule, configured to determine, according to the time interval, the user's cognition degree of the category to which the voice input content belongs, wherein the longer the time interval, the lower the cognition degree.
18. The device according to claim 14, characterized in that the determination module comprises:
a third identification submodule, configured to identify the voiceprint of the user;
a first acquisition submodule, configured to obtain, according to the voiceprint of the user, historical input record information corresponding to the user, the historical input record information comprising at least one of accumulated historical use duration, accumulated historical input count, and historical input frequency;
a fourth determination submodule, configured to determine, according to the historical input record information, the user's cognition degree of the category to which the voice input content belongs, wherein the longer the accumulated historical use duration, the higher the cognition degree; the greater the accumulated historical input count, the higher the cognition degree; and the higher the historical input frequency, the higher the cognition degree.
19. The device according to claim 14, characterized in that the determination module comprises:
an extraction submodule, configured to extract the keywords in the voice input content;
a fifth determination submodule, configured to determine the matching degree between the keywords in the voice input content and preset keywords;
a sixth determination submodule, configured to determine, according to the matching degree between the keywords in the voice input content and the preset keywords, the user's cognition degree of the category to which the voice input content belongs, wherein the higher the matching degree between the keywords in the voice input content and the professional keywords among the preset keywords, the higher the cognition degree; and the higher the matching degree between the keywords in the voice input content and the non-professional keywords among the preset keywords, the lower the cognition degree.
20. The device according to claim 14, characterized in that the determination module comprises:
a seventh determination submodule, configured to determine the sentence structure type of the voice input content, the sentence structure type comprising a professional sentence structure type or a non-professional sentence structure type;
an eighth determination submodule, configured to determine, according to the sentence structure type of the voice input content, the user's cognition degree of the category to which the voice input content belongs, wherein the user's cognition degree of the category to which voice input content of the professional sentence structure type belongs is higher than that of the category to which voice input content of the non-professional sentence structure type belongs.
21. The device according to claim 14, characterized in that the determination module comprises:
a ninth determination submodule, configured to, when it is judged that two adjacently received voice input contents were entered by the same user, determine the degree of association between the two adjacently received voice input contents according to the keywords in them;
a tenth determination submodule, configured to determine, according to the degree of association between the two adjacently received voice input contents, the user's cognition degree of the category to which the voice input content belongs, wherein the higher the degree of association, the lower the cognition degree.
22. The device according to any one of claims 14-20, characterized in that the determination module comprises:
an eleventh determination submodule, configured to determine, according to the voice input content, at least two voice input parameters of the voice input content, the voice input parameters comprising: the voiceprint of the user; the time interval between two voice input contents adjacently entered by the same user; the historical input record information corresponding to the user; the matching degree between the keywords in the voice input content and preset keywords; the sentence structure type of the voice input content; and the degree of association between two voice input contents adjacently entered by the same user;
a calculation submodule, configured to calculate, according to a preset weight of each voice input parameter, the user's cognition degree of the category to which the voice input content belongs.
23. The device according to claim 22, characterized in that the determination module comprises:
a twelfth determination submodule, configured to, when a voice input parameter of the voice input content cannot be determined, set the user's cognition degree of the category to which the voice input content belongs to a preset minimum cognition degree.
24. The device according to claim 14, characterized in that the output module comprises:
a thirteenth determination submodule, configured to determine, according to the correspondence between cognition degrees and cognition grades, the cognition grade corresponding to the cognition degree;
a second acquisition submodule, configured to obtain, according to the correspondence between cognition grades and voice output contents, the voice output content corresponding to the cognition grade;
an output submodule, configured to output the voice output content.
25. The device according to claim 18, characterized in that the device further comprises:
an update module, configured to update the historical input record information according to the input time and the use duration of the voice input content.
26. The device according to claim 14, characterized in that the device further comprises:
a storage module, configured to store the user's cognition degree of the category to which the voice input content belongs;
the determination module comprises:
a fourth identification submodule, configured to identify the voiceprint of the user;
a query submodule, configured to query, according to the voiceprint of the user, the user's cognition degree of the category to which the voice input content belongs.
CN201510568430.8A 2015-09-08 2015-09-08 Voice output method and device Active CN105304082B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201510568430.8A CN105304082B (en) 2015-09-08 2015-09-08 Voice output method and device
CN201680002958.1A CN107077845B (en) 2015-09-08 2016-05-18 Voice output method and device
PCT/CN2016/082427 WO2017041510A1 (en) 2015-09-08 2016-05-18 Voice output method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510568430.8A CN105304082B (en) 2015-09-08 2015-09-08 Voice output method and device

Publications (2)

Publication Number Publication Date
CN105304082A true CN105304082A (en) 2016-02-03
CN105304082B CN105304082B (en) 2018-12-28

Family

ID=55201255

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201510568430.8A Active CN105304082B (en) Voice output method and device
CN201680002958.1A Active CN107077845B (en) 2015-09-08 2016-05-18 Voice output method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201680002958.1A Active CN107077845B (en) 2015-09-08 2016-05-18 Voice output method and device

Country Status (2)

Country Link
CN (2) CN105304082B (en)
WO (1) WO2017041510A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251862A (en) * 2016-07-19 2016-12-21 东莞市优陌儿智护电子科技有限公司 Implementation method and system for fully semantic intelligent interaction
WO2017041510A1 (en) * 2015-09-08 2017-03-16 北京云知声信息技术有限公司 Voice output method and device
CN106649698A (en) * 2016-12-19 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Information processing method and information processing device
CN107767869A (en) * 2017-09-26 2018-03-06 百度在线网络技术(北京)有限公司 Method and apparatus for providing voice service
CN107863108A (en) * 2017-11-16 2018-03-30 百度在线网络技术(北京)有限公司 Information output method and device
CN109036386A (en) * 2018-09-14 2018-12-18 北京网众共创科技有限公司 Speech processing method and device
CN109766411A (en) * 2019-01-14 2019-05-17 广东小天才科技有限公司 Method and system for parsing search questions
CN110619870A (en) * 2018-06-04 2019-12-27 佛山市顺德区美的电热电器制造有限公司 Man-machine conversation method and device, household appliance and computer storage medium
CN111782782A (en) * 2020-06-09 2020-10-16 苏宁金融科技(南京)有限公司 Consultation reply method and device for intelligent customer service, computer equipment and storage medium
CN114398514A (en) * 2021-12-24 2022-04-26 北京达佳互联信息技术有限公司 Video display method and device and electronic equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018843B (en) * 2018-01-09 2022-08-30 北京小度互娱科技有限公司 Method and device for testing application program operation strategy
CN109035896B (en) * 2018-08-13 2021-11-05 广东小天才科技有限公司 Oral training method and learning equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006208483A (en) * 2005-01-25 2006-08-10 Sony Corp Device, method, and program for assisting survey of interesting matter of listener, and recording medium
CN101304457A (en) * 2007-05-10 2008-11-12 许罗迈 Method and apparatus for implementing automatic spoken language training based on voice telephone
CN101616221A (en) * 2008-06-25 2009-12-30 富士通株式会社 Guidance information display device and guidance information display method
CN103594086A (en) * 2013-10-25 2014-02-19 鸿富锦精密工业(深圳)有限公司 Voice processing system, device and method
CN103680222A (en) * 2012-09-19 2014-03-26 镇江诺尼基智能技术有限公司 Question-answer interaction method for children stories
US20140257794A1 (en) * 2013-03-11 2014-09-11 Nuance Communications, Inc. Semantic Re-Ranking of NLU Results in Conversational Dialogue Applications

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1553381A (en) * 2003-05-26 2004-12-08 杨宏惠 Multi-language correspondent list style language database and synchronous computer inter-translation and communication
US7542971B2 (en) * 2004-02-02 2009-06-02 Fuji Xerox Co., Ltd. Systems and methods for collaborative note-taking
CN1838159B (en) * 2006-02-14 2010-08-11 北京未名博思生物智能科技开发有限公司 Cognition logic machine and its information processing method
US7725308B2 (en) * 2006-06-07 2010-05-25 Motorola, Inc. Interactive tool for semi-automatic generation of a natural language grammar from a device descriptor
US20090216757A1 (en) * 2008-02-27 2009-08-27 Robi Sen System and Method for Performing Frictionless Collaboration for Criteria Search
EP2707872A2 (en) * 2011-05-12 2014-03-19 Johnson Controls Technology Company Adaptive voice recognition systems and methods
KR101307578B1 (en) * 2012-07-18 2013-09-12 티더블유모바일 주식회사 System for supplying a representative phone number information with a search function
CN103578469A (en) * 2012-08-08 2014-02-12 百度在线网络技术(北京)有限公司 Method and device for showing voice recognition result
CN103000173B (en) * 2012-12-11 2015-06-17 优视科技有限公司 Voice interaction method and device
CN104637007A (en) * 2013-11-07 2015-05-20 大连东方之星信息技术有限公司 Statistical analysis system employing degree-of-cognition system
CN104408099B (en) * 2014-11-18 2019-03-12 百度在线网络技术(北京)有限公司 Searching method and device
CN104574251A (en) * 2015-01-06 2015-04-29 熊国顺 Intelligent public safety information system and application method
CN105304082B (en) * 2015-09-08 2018-12-28 Beijing Yunzhisheng Information Technology Co Ltd Voice output method and device



Also Published As

Publication number Publication date
WO2017041510A1 (en) 2017-03-16
CN107077845A (en) 2017-08-18
CN107077845B (en) 2020-07-17
CN105304082B (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN105304082A (en) Voice output method and voice output device
KR102649208B1 (en) Apparatus and method for qusetion-answering
KR101909807B1 (en) Method and apparatus for inputting information
CN106201424B (en) A kind of information interacting method, device and electronic equipment
US10043520B2 (en) Multilevel speech recognition for candidate application group using first and second speech commands
CN105391730A (en) Information feedback method, device and system
TW201935273A (en) A statement user intention identification method and device
CN103699530A (en) Method and equipment for inputting texts in target application according to voice input information
CN105320726A (en) Reducing the need for manual start/end-pointing and trigger phrases
CN105489221A (en) Voice recognition method and device
CN108519998B (en) Problem guiding method and device based on knowledge graph
CN109710739B (en) Information processing method and device and storage medium
CN109145213A (en) Inquiry recommended method and device based on historical information
US20190066669A1 (en) Graphical data selection and presentation of digital content
CN106205622A (en) Information processing method and electronic equipment
CN104992715A (en) Interface switching method and system of intelligent device
CN113868427A (en) Data processing method and device and electronic equipment
CN107809654A (en) System for TV set and TV set control method
CN109710732A (en) Information query method, device, storage medium and electronic equipment
CN108920543A (en) The method and device of inquiry and interaction, computer installation, storage medium
CN109165286A (en) Automatic question-answering method, device and computer readable storage medium
CN109324515A (en) A kind of method and controlling terminal controlling intelligent electric appliance
CN108053826A (en) For the method, apparatus of human-computer interaction, electronic equipment and storage medium
CN114357278B (en) Topic recommendation method, device and equipment
CN106372203A (en) Information response method and device for smart terminal and smart terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100191 Beijing, Huayuan Road, Haidian District No. 2 peony technology building, five floor, A503

Patentee after: Yunzhisheng Intelligent Technology Co., Ltd.

Address before: 100191 Beijing, Huayuan Road, Haidian District No. 2 peony technology building, five floor, A503

Patentee before: Beijing Yunzhisheng Information Technology Co., Ltd.

CP01 Change in the name or title of a patent holder