CN104866091A - Method and device for outputting audio-effect information in computer equipment - Google Patents


Info

Publication number
CN104866091A
CN104866091A (application CN201510134636.XA, granted as CN104866091B)
Authority
CN
China
Prior art keywords
information
input operation
audio
user
audio information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510134636.XA
Other languages
Chinese (zh)
Other versions
CN104866091B (en)
Inventor
姜建建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201510134636.XA priority Critical patent/CN104866091B/en
Publication of CN104866091A publication Critical patent/CN104866091A/en
Application granted granted Critical
Publication of CN104866091B publication Critical patent/CN104866091B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention provides a method for outputting audio-effect information in computer equipment. The method comprises the following steps: acquiring input operation information according to input operation performed by a user on an input method; determining matched audio-effect information according to the input operation information; outputting the audio-effect information. According to the method, on the premise of realizing basic input method functions, the matched audio-effect information can be output according to the input operation performed by the user on the input method, so that the output effect of the input method can be richer, fatigue feeling of the user during input can be reduced, and input errors can be reduced accordingly.

Description

Method and apparatus for outputting sound-effect information in a computer device
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for outputting sound-effect information in a computer device.
Background technology
In the prior art, when a user types with an input method, the input method merely provides candidate items according to the input sequence entered by the user, for the user to select from.
Summary of the invention
The object of the present invention is to provide a method and apparatus for outputting sound-effect information in a computer device.
According to one aspect of the present invention, a method for outputting sound-effect information in a computer device is provided, wherein the method comprises:
acquiring input operation information according to an input operation performed by a user on an input method;
determining matching sound-effect information according to the input operation information;
outputting the sound-effect information.
According to another aspect of the present invention, an apparatus for outputting sound-effect information in a computer device is also provided, wherein the apparatus comprises:
means for acquiring input operation information according to an input operation performed by a user on an input method;
means for determining matching sound-effect information according to the input operation information;
means for outputting the sound-effect information.
Compared with the prior art, the present invention has the following advantages: 1) on the premise of realizing the basic input-method functions, matching sound-effect information can be output according to the input operation the user performs on the input method (this input operation is not limited to the input sequence entered by the user), so that the output effect of the input method becomes richer, the user's fatigue during input is relieved, and the input errors caused by such fatigue are reduced accordingly; 2) the sound-effect information of the present invention comprises auditory sound-effect information and visual sound-effect information, which can bring the user a brand-new auditory and visual experience while the user performs input operations; 3) the present invention can determine the matching sound-effect information by determining the user's emotion information, thereby enhancing the user's sense of immersion and effectively regulating the user's current mood.
Description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments, made with reference to the following drawings:
Fig. 1 is a schematic flowchart of a method for outputting sound-effect information in a computer device according to a preferred embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an apparatus for outputting sound-effect information in a computer device according to a preferred embodiment of the present invention;
Fig. 3 is a schematic diagram of an example correspondence between emotion information and sound-effect information.
In the drawings, the same or similar reference numerals denote the same or similar components.
Detailed description of embodiments
Fig. 1 is a schematic flowchart of a method for outputting sound-effect information in a computer device according to a preferred embodiment of the present invention.
The method of this embodiment is mainly implemented by a computer device; the computer device comprises a network device and a user device. The network device includes, but is not limited to, a single network server, a server group consisting of multiple network servers, or a cloud consisting of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer consisting of a group of loosely coupled computers. The network in which the network device resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, etc. The user device includes, but is not limited to, a PC, a tablet computer, a smartphone, a PDA, an IPTV, etc.
It should be noted that the computer devices above are merely examples; other existing or future computer devices, if applicable to the present invention, shall also fall within the protection scope of the present invention and are incorporated herein by reference.
The method according to this embodiment comprises step S1, step S2 and step S3.
In step S1, the computer device acquires input operation information according to an input operation performed by the user on the input method.
The input operation comprises any operation the user can perform on the input method; preferably, the input operation includes, but is not limited to, an operation in which the user inputs letters or strokes through a keyboard presented by the input method, a selection operation the user performs on the candidate items the input method provides for the entered input sequence, etc.
The input operation information comprises any information related to the input operation performed by the user. Preferably, the input operation information includes, but is not limited to:
1) The input sequence entered by the user.
The input sequence comprises any symbols that can be entered in the input method; preferably, the input sequence includes, but is not limited to, letters, strokes, special symbols (such as @), etc.
2) A candidate item selected by the user and corresponding to the input sequence entered by the user.
Preferably, the input method provides at least one candidate item for the input sequence entered by the user.
For example, the input sequence entered by the user is "keyi", and the candidate items the input method provides for this input sequence comprise: "can", "deliberately", "suspicious", "debatable", "tech"; the input operation information comprises the candidate item "can" that the user selected among these candidate items.
3) Input speed information of the input operation.
The input speed information indicates the speed corresponding to the input operation. Preferably, the input speed information can take multiple forms of expression; for example, the input speed information is expressed as a speed class, such as fast, normal, slow, etc.; for another example, the input speed information is expressed as a concrete speed value, such as 60 WPM (words per minute).
It should be noted that the above input operation information is merely an example rather than a limitation of the present invention; those skilled in the art will understand that any information related to the input operation performed by the user shall be included in the scope of the input operation information of the present invention.
Specifically, the computer device can acquire the input operation information in various ways.
For example, the computer device determines the input operation information by detecting, in real time, the input operation performed by the user; e.g., the computer device counts the number of symbols entered by the user and the time corresponding to the input operation, and when the ratio of the symbol count to the time exceeds a predetermined speed, determines that the input speed information indicates the speed class "fast".
For another example, the computer device directly acquires input operation information provided by the input method; e.g., the computer device directly acquires the input speed information provided by the input method and the candidate item selected by the user.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of acquiring input operation information according to an input operation the user performs on the input method shall fall within the scope of the present invention.
In step S2, the computer device determines matching sound-effect information according to the input operation information.
The sound-effect information comprises any information related to a sound effect; preferably, the sound-effect information includes, but is not limited to:
1) Auditory sound-effect information.
The auditory sound-effect information comprises any information related to the auditory effect of the sound effect. Preferably, the auditory sound-effect information includes, but is not limited to:
A) Rhythm information of the sound effect.
The rhythm information comprises any information related to the rhythm of the sound effect; preferably, the rhythm information includes, but is not limited to, the rhythm corresponding to a single note, the rhythm corresponding to a combination of multiple notes, etc.
B) Mode information of the sound effect.
The mode information indicates the mode corresponding to the sound effect; preferably, the modes comprise a normal mode, a classical mode, a rock mode, a pop mode, a jazz mode, a voice mode, an ambient mode, etc.
Preferably, multiple predetermined modes exist in the input method; more preferably, each predetermined mode corresponds to at least one predetermined rhythm. For example, the predetermined modes comprise an ambient mode, and this predetermined mode corresponds to the following predetermined rhythms: a wind-sound rhythm, a water-sound rhythm, a birdsong rhythm, etc.
C) Volume information.
The volume information comprises any information related to the volume of the sound effect; preferably, the volume information includes, but is not limited to: the volume level, information related to volume enhancement, information related to volume equalization, etc.
2) Visual sound-effect information.
The visual sound-effect information comprises any information related to the visual effect of the sound effect. Preferably, the visual sound-effect information includes, but is not limited to: a presented note pattern, an animation corresponding to the sound effect, etc.
Preferably, the input method comprises multiple pieces of predetermined visual sound-effect information, such as predetermined note patterns, predetermined animations, etc.; more preferably, mappings between the multiple pieces of predetermined visual sound-effect information and auditory sound-effect information are predefined in the input method; for example, mappings between multiple predetermined note patterns and multiple predetermined rhythms are predefined in the input method.
It should be noted that the above sound-effect information is merely an example rather than a limitation of the present invention; those skilled in the art will understand that any information related to a sound effect shall be included in the scope of the sound-effect information of the present invention.
Specifically, implementations in which the computer device determines the matching sound-effect information according to the input operation information include, but are not limited to:
1) The input operation information comprises the input sequence entered by the user, or a candidate item selected by the user and corresponding to the entered input sequence, and the computer device determines the matching sound-effect information according to the input sequence or the candidate item.
Implementations in which the computer device determines the matching sound-effect information according to the input sequence or the candidate item include, but are not limited to:
A) The computer device determines, directly according to the input sequence or the candidate item, the sound-effect information matching the input sequence or the candidate item.
For example, the input sequence entered by the user is "k"; the computer device then directly determines, as the sound-effect information matching this input sequence, the rhythm information corresponding to the symbol "k".
For another example, the candidate item selected by the user and corresponding to the entered input sequence "gaoxing" is "happiness"; the computer device then determines the sound-effect information matching this candidate item according to the tones of the word, and this sound-effect information comprises the rhythm information corresponding to the combination of a level tone ("-") and a falling tone ("ˋ").
For another example, the candidate item selected by the user and corresponding to the entered input sequence "huanghe" is "the Yellow River"; the computer device then determines, as the sound-effect information matching this candidate item, the rhythm information corresponding to a water-sound rhythm.
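Implementation A) above amounts to a direct lookup from a symbol or a selected candidate to matching rhythm information, which can be sketched as below. The lookup tables are illustrative assumptions, not contents of the patent.

```python
# Hedged sketch of implementation A): direct lookup from an input symbol
# or selected candidate to its matching rhythm. Table entries echo the
# "k" and "Yellow River" examples in the text; everything else is assumed.
SYMBOL_RHYTHMS = {"k": "rhythm_k", "a": "rhythm_a", "i": "rhythm_i"}
CANDIDATE_RHYTHMS = {"Yellow River": "water_rhythm"}


def match_effect(symbol=None, candidate=None, default="rhythm_default"):
    """Return the rhythm matching the input sequence or the candidate item."""
    if candidate is not None and candidate in CANDIDATE_RHYTHMS:
        return CANDIDATE_RHYTHMS[candidate]
    if symbol is not None:
        return SYMBOL_RHYTHMS.get(symbol, default)
    return default
```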
B) In this implementation, the present embodiment further comprises step S4, and step S2 further comprises step S21.
In step S4, the computer device acquires context information of the input sequence or the candidate item.
The context information indicates information adjacent to the input sequence or the candidate item. Preferably, the context information includes, but is not limited to: the text before and after the insertion position of the candidate item in the text (e.g., one or more words before and after the insertion position), the input sequence before and after the input position of the current input sequence (e.g., one or more symbols before and after the input position), etc.
For example, the insertion position of the candidate item "happiness" in the text is at the end of the text, and the word before this insertion position is "very"; the context information of the candidate item "happiness" is then "very".
For another example, the previous input sequence in the input box of the input method is "gaxing", and the user enters the symbol "o" between the symbols "a" and "x" of this input sequence; the context information of the symbol "o" then comprises "ga" and "xing".
The computer device can acquire the context information of the input sequence or the candidate item in various ways.
For example, the computer device determines, according to the cursor position in the text (i.e., the insertion position of the candidate item in the text), the words before and after this position, and takes the determined words as the context information of the candidate item.
For another example, the computer device determines, according to the cursor position in the input box of the input method (i.e., the input position of the current input sequence), the symbols before and after this position, and takes the determined symbols as the context information of the input sequence.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of acquiring the context information of the input sequence or the candidate item shall fall within the scope of the present invention.
In step S21, the computer device determines the matching sound-effect information according to the input operation information and the context information.
For example, the candidate item selected by the user is "seat", and the context information of this candidate item is the word "Aries" located before the insertion position of this candidate item; the computer device then determines that the matching sound-effect information comprises the rhythm information and animation corresponding to "Aries".
As a preferred solution, step S21 further comprises step S21-1 and step S21-2.
In step S21-1, the computer device determines the user's emotion information according to the input operation information and the context information.
The emotion information comprises any information indicating the user's emotion. Preferably, the emotion information can take multiple forms of expression; for example, the emotion information indicates a specific emotion (such as happiness, sadness, anger, suspicion, etc.); for another example, the emotion information indicates an emotion level (such as high, low, etc.); for another example, the emotion information is a digitized emotion value.
Specifically, the computer device can determine the user's emotion information, based on semantic analysis technology, according to the input operation information and the context information.
For example, the candidate item selected by the user is "happiness", and the context information of this candidate item is the word "very" located before the insertion position of this candidate item; based on semantic analysis, the computer device then determines that the user's emotion level is high.
For another example, the candidate item selected by the user is "happiness", and the context information of this candidate item is "not very" located before the insertion position of this candidate item; based on semantic analysis, the computer device then determines that the user's emotion level is low.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of determining the user's emotion information according to the input operation information and the context information shall fall within the scope of the present invention.
In step S21-2, the computer device determines the matching sound-effect information according to the emotion information.
For example, when the emotion information indicates that the user's emotion is happy, the computer device determines that the auditory sound-effect information matching this emotion information is the rhythm information corresponding to a cheerful rhythm, and that the visual sound-effect information is a smiley-face animation.
Preferably, correspondences between emotion information and sound-effect information are predefined in the input method, and the computer device can determine the matching sound-effect information according to the emotion information and these correspondences.
For example, Fig. 3 is a schematic diagram of an example correspondence between emotion information and sound-effect information; when the emotion information indicates that the user's emotion is "happy", the computer device randomly selects the rhythm information rhythm2 and the note pattern style1 from the sound-effect information corresponding to "happy" as the matching sound-effect information.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of determining the matching sound-effect information according to the emotion information shall fall within the scope of the present invention.
2) The input operation information comprises the input speed information corresponding to the input operation, and the computer device determines the matching auditory sound-effect information according to the input speed information.
For example, when the speed indicated by the input speed information exceeds a threshold value1, the computer device determines that the mode of the sound effect is the rock mode, and randomly selects a predetermined rhythm from the multiple predetermined rhythms corresponding to the rock mode; when the speed indicated by the input speed information is below value2 (value1 > value2), the computer device determines that the mode of the sound effect is the classical mode, and randomly selects a predetermined rhythm from the multiple predetermined rhythms corresponding to the classical mode.
Preferably, the higher the speed indicated by the input speed information, the larger the volume corresponding to the auditory sound-effect information; the lower the speed indicated by the input speed information, the smaller the volume.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of determining the matching sound-effect information according to the input operation information shall fall within the scope of the present invention.
In step S3, the computer device outputs the determined matching sound-effect information.
Preferably, the sound-effect information comprises auditory sound-effect information and/or visual sound-effect information, and the computer device plays the auditory sound-effect information and/or displays the visual sound-effect information. Preferably, the computer device displays the visual sound-effect information in the input box of the input method.
For example, the sound-effect information the computer device determines in step S2 comprises the rhythm information and the note pattern of the sound effect; in step S3 the computer device then plays this rhythm information and displays this note pattern in the input box of the input method.
For another example, the sound-effect information the computer device determines in step S2 comprises the rhythm information of the sound effect and a volume level; in step S3 the computer device then plays this rhythm information at that volume level.
For another example, the sound-effect information the computer device determines in step S2 comprises the animation corresponding to the sound effect; in step S3 the computer device then displays this animation in the input box of the input method.
It should be noted that whenever the input sequence the user enters in the input method changes (e.g., a new symbol or stroke is entered), the computer device outputs the corresponding sound-effect information according to the current input sequence; cycling in this way, as the user keeps typing in the input method, the computer device keeps outputting the corresponding sound-effect information. For example, the user enters "k" in the input method, and the computer device outputs the rhythm information and note pattern corresponding to the letter "k"; the user then continues with "a", and the computer device outputs the rhythm information and note pattern corresponding to the letter "a"; the user then continues with "i", and the computer device outputs the rhythm information and note pattern corresponding to the letter "i"; and so on, until the input sequence the user has entered in the input method is "kaixin". When the user selects the candidate item "happy" corresponding to this input sequence, the computer device plays the rhythm information corresponding to "happy" and displays a smiley-face animation in the input box of the input method.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of outputting sound-effect information shall fall within the scope of the present invention.
In the prior art, when a user types with an input method, the input method merely provides candidate items according to the input sequence entered by the user, for the user to select from. The output effect of that scheme is very monotonous; especially for a user who performs input operations for a long time, it easily causes fatigue and boredom.
According to the solution of this embodiment, on the premise of realizing the basic input-method functions, matching sound-effect information can be output according to the input operation the user performs on the input method (this input operation is not limited to the input sequence entered by the user), so that the output effect of the input method becomes richer, the user's fatigue during input is relieved, and the enjoyment of using the input method is increased; in addition, the sound-effect information of the present invention comprises auditory sound-effect information and visual sound-effect information, which can bring the user a brand-new auditory and visual experience while the user performs input operations; further, the present invention can determine the matching sound-effect information by determining the user's emotion information, thereby enhancing the user's sense of immersion and effectively regulating the user's current mood.
Fig. 2 is a schematic structural diagram of an apparatus for outputting sound-effect information in a computer device according to a preferred embodiment of the present invention. This apparatus for outputting sound-effect information (hereinafter the "sound-effect output apparatus") comprises means for acquiring input operation information according to an input operation performed by the user on the input method (hereinafter the "first acquiring means 1"), means for determining matching sound-effect information according to the input operation information (hereinafter the "determining means 2"), and means for outputting the sound-effect information (hereinafter the "output means 3").
The first acquiring means 1 acquires input operation information according to an input operation performed by the user on the input method.
The input operation comprises any operation the user can perform on the input method; preferably, the input operation includes, but is not limited to, an operation in which the user inputs letters or strokes through a keyboard presented by the input method, a selection operation the user performs on the candidate items the input method provides for the entered input sequence, etc.
The input operation information comprises any information related to the input operation performed by the user. Preferably, the input operation information includes, but is not limited to:
1) The input sequence entered by the user.
The input sequence comprises any symbols that can be entered in the input method; preferably, the input sequence includes, but is not limited to, letters, strokes, special symbols (such as @), etc.
2) A candidate item selected by the user and corresponding to the input sequence entered by the user.
Preferably, the input method provides at least one candidate item for the input sequence entered by the user.
For example, the input sequence entered by the user is "keyi", and the candidate items the input method provides for this input sequence comprise: "can", "deliberately", "suspicious", "debatable", "tech"; the input operation information comprises the candidate item "can" that the user selected among these candidate items.
3) Input speed information of the input operation.
The input speed information indicates the speed corresponding to the input operation. Preferably, the input speed information can take multiple forms of expression; for example, the input speed information is expressed as a speed class, such as fast, normal, slow, etc.; for another example, the input speed information is expressed as a concrete speed value, such as 60 WPM (words per minute).
It should be noted that the above input operation information is merely an example rather than a limitation of the present invention; those skilled in the art will understand that any information related to the input operation performed by the user shall be included in the scope of the input operation information of the present invention.
Specifically, the first acquiring means 1 can acquire the input operation information in various ways.
For example, the first acquiring means 1 determines the input operation information by detecting, in real time, the input operation performed by the user; e.g., the first acquiring means 1 counts the number of symbols entered by the user and the time corresponding to the input operation, and when the ratio of the symbol count to the time exceeds a predetermined speed, determines that the input speed information indicates the speed class "fast".
For another example, the first acquiring means 1 directly acquires input operation information provided by the input method; e.g., the first acquiring means 1 directly acquires the input speed information provided by the input method and the candidate item selected by the user.
It should be noted that the above examples are merely intended to better illustrate the technical solution of the present invention rather than to limit it; those skilled in the art should understand that any implementation of acquiring input operation information according to an input operation the user performs on the input method shall fall within the scope of the present invention.
The determining device 2 determines, according to the input operation information, the matching sound effect information.
Here, the sound effect information comprises any information relevant to a sound effect; preferably, the sound effect information includes but is not limited to:
1) Auditory sound effect information.
Here, the auditory sound effect information comprises any information relevant to the auditory aspect of a sound effect. Preferably, the auditory sound effect information includes but is not limited to:
A) Rhythm information of the sound effect.
Here, the rhythm information comprises any information relevant to the rhythm of a sound effect; preferably, the rhythm information includes but is not limited to the rhythm corresponding to a single note or to a combination of multiple notes.
B) Style information of the sound effect.
Here, the style information indicates the style of the sound effect; preferably, the styles include common, classical, rock, pop, jazz, vocal, and ambient sound effects, among others.
Preferably, the input method has multiple predefined styles; more preferably, each predefined style corresponds to at least one predetermined rhythm. For example, the predefined styles include an ambient style, which corresponds to the following predetermined rhythms: a wind-sound rhythm, a water-sound rhythm, a birdsong rhythm, and so on.
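The style-to-rhythm correspondence could be kept as a simple table from which one rhythm is drawn at random; all style and rhythm names below are placeholders, not the patent's actual data:

```python
import random

# Hypothetical predefined styles, each mapped to its predetermined rhythms.
STYLE_RHYTHMS = {
    "ambient": ["wind_rhythm", "water_rhythm", "birdsong_rhythm"],
    "rock": ["rock_rhythm_1", "rock_rhythm_2"],
    "classical": ["classical_rhythm_1"],
}

def pick_rhythm(style, rng=random):
    """Select one predetermined rhythm for the given predefined style."""
    return rng.choice(STYLE_RHYTHMS[style])
```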
C) Volume information.
Here, the volume information comprises any information relevant to the volume of the sound effect; preferably, the volume information includes but is not limited to: the volume level, information relevant to volume boosting, information relevant to volume equalization, and so on.
2) Visual sound effect information.
Here, the visual sound effect information comprises any information relevant to the visual aspect of a sound effect. Preferably, the visual sound effect information includes but is not limited to: a note pattern to be presented, an animation corresponding to the sound effect, and so on.
Preferably, the input method comprises multiple pieces of predetermined visual sound effect information, such as predetermined note patterns and predetermined animations; more preferably, the input method predefines a mapping between these pieces of visual sound effect information and the auditory sound effect information, e.g., a mapping between multiple predetermined note patterns and multiple predetermined rhythms.
It should be noted that the above sound effect information is merely exemplary and does not limit the present invention; those skilled in the art will understand that any information relevant to a sound effect falls within the scope of the sound effect information of the present invention.
Specifically, the implementations by which the determining device 2 determines the matching sound effect information according to the input operation information include but are not limited to:
1) The input operation information comprises the input sequence entered by the user, or the candidate item, corresponding to that input sequence, that the user selects; the determining device 2 determines the matching sound effect information according to the input sequence or candidate item.
Here, the implementations by which the determining device 2 determines the matching sound effect information according to the input sequence or candidate item include but are not limited to:
A) The determining device 2 determines, directly from the input sequence or candidate item, the sound effect information that matches it.
For example, when the input sequence entered by the user is "k", the determining device 2 directly determines that the matching sound effect information is the rhythm information corresponding to the symbol "k".
As another example, the candidate item the user selects for the input sequence "gaoxing" is "happy"; the determining device 2 then determines the matching sound effect information from the tones of this word, namely the rhythm information corresponding to the combination of a high level tone ("-") and a falling tone ("ˋ").
As another example, the candidate item the user selects for the input sequence "huanghe" is "the Yellow River"; the determining device 2 then determines that the matching sound effect information is the rhythm information corresponding to a water-sound rhythm.
B) In this implementation, the sound effect output apparatus of the present embodiment further comprises a device for obtaining the context information of the input sequence or candidate item (hereinafter the "second acquisition device", not shown in the figures), and the determining device 2 further comprises a device for determining the matching sound effect information according to the input operation information and the context information (hereinafter the "first sub-determining device", not shown in the figures).
The second acquisition device obtains the context information of the input sequence or candidate item.
Here, the context information indicates information adjacent to the input sequence or candidate item. Preferably, the context information includes but is not limited to: the text before and after the position at which the candidate item is inserted into the text (e.g., one or more words before and after the insertion position), and the input sequence before and after the input position of the current sequence (e.g., one or more symbols before and after the input position).
For example, the candidate item "happy" is inserted at the end of the text, and the word before the insertion position is "very"; the context information of the candidate item "happy" is then "very".
As another example, the previous input sequence in the input box of the input method is "gaxing", and the user inputs the symbol "o" between the symbols "a" and "x" of this sequence; the context information of the symbol "o" then comprises "ga" and "xing".
Here, the second acquisition device can obtain the context information of the input sequence or candidate item in various ways.
For example, the second acquisition device determines, according to the position of the cursor in the text (i.e., the position at which the candidate item is inserted), the words before and after that position, and takes the determined words as the context information of the candidate item.
As another example, the second acquisition device determines, according to the position of the cursor in the input box of the input method (i.e., the input position of the current input sequence), the symbols before and after that position, and takes the determined symbols as the context information of the input sequence.
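A minimal sketch of the second example, extracting the symbols around a newly inserted symbol; the function name is ours, not the patent's:

```python
def new_symbol_context(sequence, insert_index):
    """Return the symbols before and after a symbol inserted at insert_index."""
    return sequence[:insert_index], sequence[insert_index + 1:]
```

For the "gaxing" example above, inserting "o" yields the sequence "gaoxing", and the context of "o" comes out as ("ga", "xing").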
It should be noted that the above examples serve only to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation that obtains the context information of the input sequence or candidate item falls within the scope of the present invention.
The first sub-determining device determines the matching sound effect information according to the input operation information and the context information.
For example, the candidate item the user selects is "seat", and its context information is the word "Aries" located before its insertion position; the first sub-determining device then determines that the matching sound effect information comprises the rhythm information and animation corresponding to "Aries".
As a preferred solution, the first sub-determining device further comprises a device for determining the emotion information of the user according to the input operation information and context information (hereinafter the "second sub-determining device", not shown in the figures) and a device for determining the matching sound effect information according to the emotion information (hereinafter the "third sub-determining device", not shown in the figures).
The second sub-determining device determines the emotion information of the user according to the input operation information and the context information.
Here, the emotion information comprises any information indicating the mood of the user. Preferably, the emotion information can take multiple forms: for example, it indicates a specific mood (such as happiness, sadness, anger, or suspicion); as another example, it indicates a mood grade (such as high or low); as yet another example, it is a digitized mood value.
Specifically, the second sub-determining device can determine the emotion information of the user from the input operation information and context information based on semantic analysis techniques.
For example, the candidate item the user selects is "happy", and its context information is the word "very" located before its insertion position; based on semantic analysis, the second sub-determining device determines that the mood grade of the user is high.
As another example, the candidate item the user selects is "happy", and its context information is "very not" (a negation) located before its insertion position; based on semantic analysis, the second sub-determining device determines that the mood grade of the user is low.
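A toy stand-in for the semantic analysis step described above — a real system would use proper sentiment analysis, and the word lists here are illustrative assumptions only:

```python
POSITIVE_WORDS = {"happy", "glad"}
NEGATIONS = {"not", "never"}

def mood_grade(candidate, context_words):
    """Infer a mood grade from a selected candidate and its context words."""
    if candidate in POSITIVE_WORDS:
        # A negation in the context flips a positive candidate to a low grade.
        negated = any(word in NEGATIONS for word in context_words)
        return "low" if negated else "high"
    return "neutral"
```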
It should be noted that the above examples serve only to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation that determines the emotion information of the user according to the input operation information and context information falls within the scope of the present invention.
The third sub-determining device determines the matching sound effect information according to the emotion information.
For example, when the emotion information indicates that the user's mood is happy, the third sub-determining device determines that the matching auditory sound effect information is the rhythm information corresponding to a cheerful rhythm, and that the visual sound effect information is a smiling-face animation.
Preferably, the input method predefines correspondences between emotion information and sound effect information, and the third sub-determining device can determine the matching sound effect information according to the emotion information and these correspondences.
For example, Fig. 3 is a schematic diagram of an example correspondence between emotion information and sound effect information; when the emotion information indicates that the user's mood is "happy", the third sub-determining device randomly selects, from the sound effect information corresponding to "happy", the rhythm information rhythm2 and the note pattern style1 as the matching sound effect information.
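The Fig. 3 correspondence might be stored as a table like the one below; the moods and effect names are placeholders, except rhythm2/style1, which echo the example:

```python
import random

# Hypothetical mood-to-effect correspondences in the spirit of Fig. 3.
MOOD_EFFECTS = {
    "happy": {"rhythms": ["rhythm1", "rhythm2"], "patterns": ["style1", "style2"]},
    "sad": {"rhythms": ["rhythm3"], "patterns": ["style3"]},
}

def match_effect(mood, rng=random):
    """Randomly pick one rhythm and one note pattern for the given mood."""
    entry = MOOD_EFFECTS[mood]
    return rng.choice(entry["rhythms"]), rng.choice(entry["patterns"])
```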
It should be noted that the above examples serve only to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation that determines the matching sound effect information according to the emotion information falls within the scope of the present invention.
2) The input operation information comprises the input speed information corresponding to the input operation; the determining device 2 further comprises a device for determining the matching auditory sound effect information according to the input speed information (hereinafter the "fourth sub-determining device", not shown in the figures).
For example, when the speed indicated by the input speed information exceeds a threshold value1, the fourth sub-determining device determines that the style of the sound effect is rock and randomly selects a predetermined rhythm from the multiple predetermined rhythms corresponding to the rock style; when the speed indicated by the input speed information is lower than value2 (value1 > value2), the fourth sub-determining device determines that the style of the sound effect is classical and randomly selects a predetermined rhythm from the multiple predetermined rhythms corresponding to the classical style.
Preferably, the greater the speed indicated by the input speed information, the greater the volume corresponding to the auditory sound effect information; the smaller the speed, the smaller the volume.
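Both rules — the two speed thresholds for the style and the speed-proportional volume — can be combined in one sketch; the concrete numbers are assumptions, not values from the patent:

```python
VALUE1 = 6.0  # symbols/s; above this the rock style is chosen (hypothetical)
VALUE2 = 2.0  # symbols/s; below this the classical style is chosen (hypothetical)
MAX_VOLUME = 100

def speed_to_effect(speed):
    """Map an input speed to a style and a volume that grows with the speed."""
    if speed > VALUE1:
        style = "rock"
    elif speed < VALUE2:
        style = "classical"
    else:
        style = "default"
    volume = min(MAX_VOLUME, int(speed * 10))  # faster typing, louder effect
    return style, volume
```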
It should be noted that the above examples serve only to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation that determines the matching sound effect information according to the input operation information falls within the scope of the present invention.
The output device 3 outputs the determined matching sound effect information.
Preferably, the sound effect information comprises auditory sound effect information and/or visual sound effect information, and the output device 3 further comprises a device for playing the auditory sound effect information and/or displaying the visual sound effect information (hereinafter the "sub-output device", not shown in the figures). Preferably, the sub-output device displays the visual sound effect information in the input box of the input method.
For example, the sound effect information determined by the determining device 2 comprises the rhythm information and a note pattern of the sound effect; the sub-output device plays the rhythm information and displays the note pattern in the input box of the input method.
As another example, the sound effect information determined by the determining device 2 comprises the rhythm information of the sound effect and a volume level; the sub-output device plays the rhythm information at that volume.
As another example, the sound effect information determined by the determining device 2 comprises an animation corresponding to the sound effect; the sub-output device displays the animation in the input box of the input method.
It should be noted that whenever the input sequence the user enters in the input method changes (e.g., a new symbol or stroke is input), the computer equipment outputs the sound effect information corresponding to the current input sequence; in this way, as the user keeps inputting, the computer equipment continually outputs the corresponding sound effect information. For example, the user inputs "k" in the input method, and the computer equipment outputs the rhythm information and note pattern corresponding to the letter "k"; the user then continues with "a", and the computer equipment outputs the rhythm information and note pattern corresponding to the letter "a"; the user then continues with "i", and the computer equipment outputs the rhythm information and note pattern corresponding to the letter "i"; and so on, until the input sequence the user has entered is "kaixin". When the user selects the candidate item "happy" corresponding to this input sequence, the computer equipment plays the rhythm information corresponding to "happy" and displays a smiling-face animation in the input box of the input method.
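The per-keystroke loop of the "kaixin" example reduces to: every change of the input sequence triggers one output. A sketch, where the effect callback is an assumption standing in for the determining and output devices:

```python
def stream_effects(keystrokes, effect_for_sequence):
    """Record the effect output each time the input sequence changes."""
    played = []
    sequence = ""
    for key in keystrokes:
        sequence += key  # the sequence changes with every new symbol
        played.append(effect_for_sequence(sequence))
    return played
```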
It should be noted that the above examples serve only to better illustrate the technical solution of the present invention and do not limit it; it should be appreciated by those skilled in the art that any implementation that outputs sound effect information falls within the scope of the present invention.
In the prior art, when the user inputs with an input method, the input method merely provides candidate items, according to the input sequence entered, for the user to select. The output effect of this scheme is monotonous and, especially for a user performing input operations for a long time, easily causes fatigue and tedium.
According to the scheme of this embodiment, on the premise of realizing basic input method functions, the matching sound effect information can be output according to the input operation the user performs on the input method (this input operation is not limited to the input sequence entered by the user), so that the output effect of the input method becomes richer, the user's fatigue during input is alleviated, and the enjoyment of using the input method is increased. In addition, the sound effect information of the present invention comprises auditory and visual sound effect information, bringing the user an entirely new auditory and visual experience while performing input operations. Further, by determining the user's emotion information, the present invention can determine the matching sound effect information, thereby strengthening the user's sense of immersion and effectively regulating the user's current mood.
It should be noted that the device for outputting sound effect information in computer equipment of the present invention may be pre-installed in the computer equipment by its manufacturer or by a sales or service provider, or may be loaded into the computer equipment by the user from another device (such as a server). Those skilled in the art will understand that any device that can be used to implement the functions of the present invention, whether or not loaded in the computer equipment, is included within the protection scope of the present invention.
It should be noted that the present invention may be implemented in software and/or a combination of software and hardware; for example, each device of the present invention may be realized with an application-specific integrated circuit (ASIC) or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, such as RAM, a magnetic or optical drive, or a floppy disk and similar devices. In addition, some steps or functions of the present invention may be realized with hardware, for example as circuitry that cooperates with a processor to perform each step or function.
To those skilled in the art, it is obvious that the present invention is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from the spirit or essential characteristics of the present invention. The embodiments should therefore be regarded in all respects as exemplary rather than restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present invention. No reference sign in a claim should be construed as limiting the claim involved. Moreover, the word "comprising" obviously does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a system claim may also be realized by a single unit or device through software or hardware. Words such as "first" and "second" denote names and do not denote any particular order.

Claims (16)

1. A method for outputting sound effect information in computer equipment, wherein the method comprises:
acquiring input operation information according to an input operation performed by a user on an input method;
determining matching sound effect information according to the input operation information;
outputting the sound effect information.
2. The method according to claim 1, wherein the sound effect information comprises at least one of the following:
- auditory sound effect information;
- visual sound effect information.
3. The method according to claim 2, wherein the step of outputting the sound effect information comprises:
playing the auditory sound effect information, and/or displaying the visual sound effect information.
4. The method according to any one of claims 1 to 3, wherein the input operation information comprises at least one of the following:
- an input sequence entered by the user;
- a candidate item, corresponding to the input sequence entered by the user, that the user selects;
- input speed information of the input operation.
5. The method according to claim 4, wherein the input operation information comprises the input sequence or the candidate item, and wherein the method further comprises:
acquiring context information of the input sequence or candidate item;
wherein the step of determining the matching sound effect information according to the input operation information comprises:
determining the matching sound effect information according to the input operation information and the context information.
6. The method according to claim 5, wherein the step of determining the matching sound effect information according to the input operation information and the context information comprises:
determining emotion information of the user according to the input operation information and the context information;
determining the matching sound effect information according to the emotion information.
7. The method according to claim 4, wherein the input operation information comprises the input speed information corresponding to the input operation, and the step of determining the matching sound effect information according to the input operation information comprises:
determining matching auditory sound effect information according to the input speed information.
8. The method according to claim 7, wherein the greater the speed indicated by the input speed information, the greater the volume corresponding to the auditory sound effect information; and the smaller the speed indicated by the input speed information, the smaller the volume corresponding to the auditory sound effect information.
9. A device for outputting sound effect information in computer equipment, wherein the device comprises:
a device for acquiring input operation information according to an input operation performed by a user on an input method;
a device for determining matching sound effect information according to the input operation information;
a device for outputting the sound effect information.
10. The device according to claim 9, wherein the sound effect information comprises at least one of the following:
- auditory sound effect information;
- visual sound effect information.
11. The device according to claim 10, wherein the device for outputting the sound effect information comprises:
a device for playing the auditory sound effect information, and/or displaying the visual sound effect information.
12. The device according to any one of claims 9 to 11, wherein the input operation information comprises at least one of the following:
- an input sequence entered by the user;
- a candidate item, corresponding to the input sequence entered by the user, that the user selects;
- input speed information of the input operation.
13. The device according to claim 12, wherein the input operation information comprises the input sequence or the candidate item, and wherein the device further comprises:
a device for acquiring context information of the input sequence or candidate item;
wherein the device for determining the matching sound effect information according to the input operation information comprises:
a device for determining the matching sound effect information according to the input operation information and the context information.
14. The device according to claim 13, wherein the device for determining the matching sound effect information according to the input operation information and the context information comprises:
a device for determining emotion information of the user according to the input operation information and the context information;
a device for determining the matching sound effect information according to the emotion information.
15. The device according to claim 12, wherein the input operation information comprises the input speed information corresponding to the input operation, and the device for determining the matching sound effect information according to the input operation information comprises:
a device for determining matching auditory sound effect information according to the input speed information.
16. The device according to claim 15, wherein the greater the speed indicated by the input speed information, the greater the volume corresponding to the auditory sound effect information; and the smaller the speed indicated by the input speed information, the smaller the volume corresponding to the auditory sound effect information.
CN201510134636.XA 2015-03-25 2015-03-25 A kind of method and apparatus for being used to export audio information in computer equipment Active CN104866091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510134636.XA CN104866091B (en) 2015-03-25 2015-03-25 A kind of method and apparatus for being used to export audio information in computer equipment


Publications (2)

Publication Number Publication Date
CN104866091A true CN104866091A (en) 2015-08-26
CN104866091B CN104866091B (en) 2018-02-23

Family

ID=53911973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510134636.XA Active CN104866091B (en) 2015-03-25 2015-03-25 A kind of method and apparatus for being used to export audio information in computer equipment

Country Status (1)

Country Link
CN (1) CN104866091B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001175260A (en) * 1999-12-20 2001-06-29 Sharp Corp Sound information input processor
CN1491021A (en) * 2002-08-30 2004-04-21 雅马哈株式会社 Portable terminal device
CN101761860A (en) * 2008-11-21 2010-06-30 南通芯迎设计服务有限公司 USB typing speed indication lamp
CN102323919A (en) * 2011-08-12 2012-01-18 百度在线网络技术(北京)有限公司 Method for displaying input information based on user mood indication information and equipment
CN102541408A (en) * 2010-12-26 2012-07-04 上海量明科技发展有限公司 Method and system for calling out matched music by using character input interface
CN103092969A (en) * 2013-01-22 2013-05-08 上海量明科技发展有限公司 Method, client side and system for conducting streaming media retrieval to input method candidate item


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110858099A (en) * 2018-08-20 2020-03-03 北京搜狗科技发展有限公司 Candidate word generation method and device
CN110858099B (en) * 2018-08-20 2024-04-12 北京搜狗科技发展有限公司 Candidate word generation method and device
CN111507143A (en) * 2019-01-31 2020-08-07 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
CN111507143B (en) * 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
CN110706040A (en) * 2019-10-16 2020-01-17 柯优兔区块链研究(广州)中心(有限合伙) Information acquisition method and device
CN110853606A (en) * 2019-11-26 2020-02-28 Oppo广东移动通信有限公司 Sound effect configuration method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN104866091B (en) 2018-02-23


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant